Deep learning MRI-based radiomic models for predicting recurrence in locally advanced nasopharyngeal carcinoma after neoadjuvant chemoradiotherapy: a multi-center study.

Hu C, Xu C, Chen J, Huang Y, Meng Q, Lin Z, Huang X, Chen L

PubMed · May 15 2025
Local recurrence and distant metastasis are common manifestations of locoregionally advanced nasopharyngeal carcinoma (LA-NPC) after neoadjuvant chemoradiotherapy (NACT). This study aimed to validate the clinical value of deep learning-based MRI radiomic models for predicting recurrence in LA-NPC patients. A total of 328 NPC patients from four hospitals were retrospectively included and randomly divided into training (n = 229) and validation (n = 99) cohorts. From the contrast-enhanced T1-weighted (T1WI + C) and T2-weighted (T2WI) sequences, 975 traditional radiomic features and 1000 deep radiomic features were extracted, respectively. The least absolute shrinkage and selection operator (LASSO) was applied for feature selection. Five machine learning classifiers were used to develop three models for recurrence prediction in the training cohort: Model I, traditional radiomic features; Model II, Model I combined with deep radiomic features; and Model III, Model II combined with clinical features. The predictive performance of these models was evaluated by receiver operating characteristic (ROC) curve analysis, area under the curve (AUC), accuracy, sensitivity, and specificity in both cohorts. Clinical characteristics did not differ significantly between the two cohorts. Fifteen traditional and 6 deep radiomic features were selected from T1WI + C, and 9 traditional and 6 deep radiomic features from T2WI. On T2WI, Model II based on random forest (RF) performed best among all models in the validation cohort (AUC = 0.87). A traditional radiomic model combined with deep radiomic features shows excellent predictive performance and could assist clinicians in predicting treatment response for LA-NPC patients after NACT.
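
As a rough illustration of the kind of pipeline described above, the sketch below runs LASSO feature selection followed by a random forest classifier on synthetic data standing in for the extracted radiomic features; all dimensions and hyperparameters are assumptions, not the authors' settings.

```python
# Hypothetical sketch: LASSO feature selection + random forest classification.
# Synthetic data stands in for the extracted radiomic feature matrices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=328, n_features=975, n_informative=15, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=99, random_state=0)

scaler = StandardScaler().fit(X_train)
X_tr, X_va = scaler.transform(X_train), scaler.transform(X_val)

# LASSO keeps features with non-zero coefficients
keep = np.flatnonzero(LassoCV(cv=5, random_state=0).fit(X_tr, y_train).coef_)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, keep], y_train)
print(f"validation AUC: {roc_auc_score(y_val, rf.predict_proba(X_va[:, keep])[:, 1]):.2f}")
```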

Interobserver agreement between artificial intelligence models in the thyroid imaging and reporting data system (TIRADS) assessment of thyroid nodules.

Leoncini A, Trimboli P

PubMed · May 15 2025
As ultrasound (US) is the most accurate tool for assessing the risk of malignancy (RoM) of thyroid nodules (TNs), international societies have published various Thyroid Imaging and Reporting Data Systems (TIRADSs). With the recent advent of artificial intelligence (AI), clinicians and researchers should ask how AI interprets TIRADS terminology and whether different AIs agree in their risk assessment of TNs. The study aim was to analyze the interobserver agreement (IOA) between AIs in assessing the RoM of TNs across TIRADS categories, using a case series created by combining TIRADS descriptors. ChatGPT, Google Gemini, and Claude were compared, and ACR-TIRADS, EU-TIRADS, and K-TIRADS were employed to evaluate the AI assessments. Multiple written scenarios were created for the three TIRADSs, the cases were evaluated by the three AIs, and their assessments were analyzed and compared. The IOA was estimated by comparing kappa (κ) values. Ninety scenarios were created. With ACR-TIRADS, the IOA analysis gave κ = 0.58 between ChatGPT and Gemini, 0.53 between ChatGPT and Claude, and 0.90 between Gemini and Claude. With EU-TIRADS, κ was 0.73 between ChatGPT and Gemini, 0.62 between ChatGPT and Claude, and 0.72 between Gemini and Claude. With K-TIRADS, κ was 0.88 between ChatGPT and Gemini, 0.70 between ChatGPT and Claude, and 0.61 between Gemini and Claude. This study found non-negligible variability between the three AIs. Clinicians and patients should be aware of these new findings.
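
The pairwise agreement analysis described above can be illustrated with a short sketch; the ratings below are synthetic stand-ins for the AIs' TIRADS category assignments, not the study's data.

```python
# Pairwise Cohen's kappa between three raters (synthetic TIRADS categories).
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = {
    "ChatGPT": [3, 4, 5, 2, 4, 3, 5, 4, 2, 3],
    "Gemini":  [3, 4, 4, 2, 4, 3, 5, 5, 2, 3],
    "Claude":  [3, 5, 4, 2, 4, 3, 5, 4, 2, 2],
}

for a, b in combinations(ratings, 2):
    k = cohen_kappa_score(ratings[a], ratings[b])
    print(f"{a} vs {b}: kappa = {k:.2f}")
```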

Predicting Immunotherapy Response in Unresectable Hepatocellular Carcinoma: A Comparative Study of Large Language Models and Human Experts.

Xu J, Wang J, Li J, Zhu Z, Fu X, Cai W, Song R, Wang T, Li H

PubMed · May 15 2025
Hepatocellular carcinoma (HCC) is an aggressive cancer with limited biomarkers for predicting immunotherapy response. Recent advancements in large language models (LLMs) like GPT-4, GPT-4o, and Gemini offer the potential for enhancing clinical decision-making through multimodal data analysis. However, their effectiveness in predicting immunotherapy response, especially compared to human experts, remains unclear. This study assessed the performance of GPT-4, GPT-4o, and Gemini in predicting immunotherapy response in unresectable HCC, compared to radiologists and oncologists of varying expertise. A retrospective analysis of 186 patients with unresectable HCC utilized multimodal data (clinical and CT images). LLMs were evaluated with zero-shot prompting and two strategies: the 'voting method' and the 'OR rule method' for improved sensitivity. Performance metrics included accuracy, sensitivity, area under the curve (AUC), and agreement across LLMs and physicians. GPT-4o, using the 'OR rule method,' achieved 65% accuracy and 47% sensitivity, comparable to intermediate physicians but lower than senior physicians (accuracy: 72%, p = 0.045; sensitivity: 70%, p < 0.0001). Gemini-GPT, an ensemble of GPT-4, GPT-4o, and Gemini, achieved an AUC of 0.69, similar to senior physicians (AUC: 0.72, p = 0.35), with 68% accuracy, outperforming junior and intermediate physicians while remaining comparable to senior physicians (p = 0.78). However, its sensitivity (58%) was lower than that of senior physicians (p = 0.0097). LLMs demonstrated higher inter-model agreement (κ = 0.59-0.70) than inter-physician agreement, especially among junior physicians (κ = 0.15). This study highlights the potential of LLMs, particularly Gemini-GPT, as valuable tools in predicting immunotherapy response for HCC.
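
A minimal sketch of the two ensembling strategies named in the abstract, applied to hypothetical binary responder calls (1 = predicted responder) from the three models; the example predictions are synthetic.

```python
# 'Voting method' (majority) vs 'OR rule method' over three models' calls.
import numpy as np

preds = {
    "GPT-4":  np.array([1, 0, 0, 1, 0]),
    "GPT-4o": np.array([1, 0, 1, 1, 0]),
    "Gemini": np.array([0, 0, 1, 1, 1]),
}
stacked = np.stack(list(preds.values()))

# Voting method: positive when at least two of three models agree
voting = (stacked.sum(axis=0) >= 2).astype(int)

# OR rule: positive if any model predicts response (raises sensitivity)
or_rule = stacked.any(axis=0).astype(int)

print("voting:", voting, "OR rule:", or_rule)
```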

Accuracy and Reliability of Multimodal Imaging in Diagnosing Knee Sports Injuries.

Zhu D, Zhang Z, Li W

PubMed · May 15 2025
Because of differences in doctors' subjective experience and expertise, as well as inconsistent diagnostic criteria, single-modality imaging diagnoses of knee joint injuries suffer from limited accuracy and reliability. To address these issues, this article combines magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) in an ensemble-learning framework with deep learning (DL) for automatic analysis. Image enhancement, noise elimination, and tissue segmentation improve the quality of the image data, after which convolutional neural networks (CNNs) automatically identify and classify injury types. The experimental results show that the DL model exhibits high sensitivity and specificity in diagnosing different types of injuries, such as anterior cruciate ligament (ACL) tears, meniscus injuries, cartilage injuries, and fractures. The diagnostic accuracy for ACL tears exceeds 90%, and the highest diagnostic accuracy for cartilage injuries reaches 95.80%. In addition, compared with traditional manual image interpretation, the DL model offers a significant reduction in average interpretation time per case. A diagnostic consistency experiment shows that the DL model agrees closely with doctors' diagnoses, with an overall error rate below 2%, and the model generalizes well across different types of joint injuries. These data indicate that combining multiple imaging technologies with a DL algorithm can effectively improve the accuracy and efficiency of diagnosing knee sports injuries.
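
As an illustration of the CNN classification stage described above, the sketch below trains a generic ResNet-18 head for four injury classes on a placeholder batch; the architecture, input shape, and hyperparameters are assumptions, not the study's configuration.

```python
# One training step of a four-class injury classifier (illustrative only).
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                 # backbone; pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 4)  # ACL tear, meniscus, cartilage, fracture

images = torch.randn(8, 3, 224, 224)           # placeholder batch of preprocessed images
labels = torch.randint(0, 4, (8,))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```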

Does Whole Brain Radiomics on Multimodal Neuroimaging Make Sense in Neuro-Oncology? A Proof of Concept Study.

Danilov G, Kalaeva D, Vikhrova N, Shugay S, Telysheva E, Goraynov S, Kosyrkova A, Pavlova G, Pronin I, Usachev D

PubMed · May 15 2025
Employing a whole-brain (WB) mask as the region of interest for extracting radiomic features is a feasible, albeit less common, approach in neuro-oncology research. This study evaluates the relationship between WB radiomic features, derived from various neuroimaging modalities in patients with gliomas, and key baseline characteristics of patients and tumors: sex, histological tumor type, WHO grade (2021), IDH1 mutation status, necrotic lesions, contrast enhancement, T/N peak value, and metabolic tumor volume. Forty-one patients (mean age 50 ± 15 years; 21 females, 20 males) with supratentorial glial tumors were enrolled, and a total of 38,720 radiomic features were extracted. Cluster analysis revealed that whole-brain images of biologically different tumors could be distinguished to some extent by their imaging biomarkers. The ability of machine learning to detect image properties such as contrast-enhanced or necrotic zones validated the radiomic features as objective descriptors of image semantics. Furthermore, the ability of the imaging biomarkers to predict tumor histology, grade, and mutation status underscores their diagnostic potential. Whole-brain radiomics using multimodal neuroimaging data thus appears informative in neuro-oncology, making further research in this area well justified.
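
A minimal sketch of whole-brain radiomic feature extraction with pyradiomics, using an Otsu threshold as a crude whole-brain mask; the file path and masking strategy are assumptions and may differ from the study's pipeline.

```python
# Whole-brain feature extraction: image -> crude brain mask -> radiomics.
import SimpleITK as sitk
from radiomics import featureextractor

image = sitk.ReadImage("t1_contrast.nii.gz")   # assumed input path
mask = sitk.OtsuThreshold(image, 0, 1, 200)    # brain voxels -> label 1

extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(image, mask)

# Keep the numeric radiomic values, dropping the diagnostic metadata
radiomics_only = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(radiomics_only), "features extracted")
```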

A Deep-Learning Framework for Ovarian Cancer Subtype Classification Using Whole Slide Images.

Wang C, Yi Q, Aflakian A, Ye J, Arvanitis T, Dearn KD, Hajiyavand A

PubMed · May 15 2025
Ovarian cancer, a leading cause of cancer-related deaths among women, comprises distinct subtypes, each requiring a different treatment approach. This paper presents a deep-learning framework for classifying ovarian cancer subtypes from Whole Slide Images (WSIs). Our method comprises three stages: image tiling, feature extraction, and multi-instance learning. The approach is trained and validated on a public dataset of 80 distinct patients, achieving up to 89.8% accuracy with a notable improvement in computational efficiency. The results demonstrate the potential of our framework to augment diagnostic precision in clinical settings, offering a scalable solution for the accurate classification of ovarian cancer subtypes.
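
The multi-instance learning stage can be sketched as attention-based pooling over pre-extracted tile features; the feature dimension, tile count, and five-subtype head below are assumptions, not the paper's exact design.

```python
# Attention-based MIL: one prediction per slide from a bag of tile features.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, n_classes=5):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, tiles):                             # tiles: (n_tiles, feat_dim)
        weights = torch.softmax(self.attn(tiles), dim=0)  # per-tile attention weights
        slide_vec = (weights * tiles).sum(dim=0)          # weighted pooling to one vector
        return self.head(slide_vec)

bag = torch.randn(300, 512)    # e.g. 300 tiles from one slide
logits = AttentionMIL()(bag)
print(logits.shape)            # one score per assumed subtype
```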

Leveraging Vision Transformers in Multimodal Models for Retinal OCT Analysis.

Feretzakis G, Karakosta C, Gkoulalas-Divanis A, Bisoukis A, Boufeas IZ, Bazakidou E, Sakagianni A, Kalles D, Verykios VS

PubMed · May 15 2025
Optical Coherence Tomography (OCT) has become an indispensable imaging modality in ophthalmology, providing high-resolution cross-sectional images of the retina. Accurate classification of OCT images is crucial for diagnosing retinal diseases such as Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). This study explores the efficacy of various deep learning models, including convolutional neural networks (CNNs) and Vision Transformers (ViTs), in classifying OCT images. We also investigate the impact of integrating metadata (patient age, sex, eye laterality, and year) into the classification process, even when a significant portion of the metadata is missing. Our results demonstrate that multimodal models leveraging both image and metadata inputs, such as the Multimodal ResNet18, can achieve competitive performance compared to image-only models, such as DenseNet121. Notably, DenseNet121 and Multimodal ResNet18 achieved the highest accuracy of 95.16%, with DenseNet121 showing a slightly higher F1-score of 0.9313. The multimodal ViT-based model also demonstrated promising results, achieving an accuracy of 93.22%, indicating the potential of ViTs in medical image analysis, especially for handling complex multimodal data.
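
A hedged sketch of the "Multimodal ResNet18" idea described above: an image embedding concatenated with an encoded metadata vector (age, sex, laterality, year). The layer sizes and four-class head are assumptions; missing metadata could, for instance, be zero-filled before encoding.

```python
# Late-fusion multimodal classifier: ResNet-18 embedding + metadata MLP.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalOCT(nn.Module):
    def __init__(self, n_meta=4, n_classes=4):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-d image embedding
        self.backbone = backbone
        self.meta = nn.Sequential(nn.Linear(n_meta, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, meta):
        z = torch.cat([self.backbone(image), self.meta(meta)], dim=1)
        return self.head(z)

model = MultimodalOCT()
out = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
print(out.shape)
```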

Energy-Efficient AI for Medical Diagnostics: Performance and Sustainability Analysis of ResNet and MobileNet.

Rehman ZU, Hassan U, Islam SU, Gallos P, Boudjadar J

PubMed · May 15 2025
Artificial intelligence (AI) has transformed medical diagnostics by enhancing the accuracy of disease detection, particularly through deep learning models that analyze medical imaging data. However, the energy demands of training such models, for example ResNet and MobileNet, are substantial and often overlooked, as researchers mainly focus on improving model accuracy. This study compares the energy use of these two models for classifying thoracic diseases using the well-known CheXpert dataset. We measure power and energy consumption during training using the EnergyEfficientAI library. The results demonstrate that MobileNet outperforms ResNet by consuming less power and completing training faster, resulting in lower overall energy costs. This study highlights the importance of prioritizing energy efficiency in AI model development, promoting sustainable, eco-friendly approaches to advancing medical diagnosis.
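
The abstract relies on the EnergyEfficientAI library, whose API is not shown here; as a library-agnostic sketch, training energy can be approximated by polling GPU power draw with nvidia-smi and integrating over wall-clock time.

```python
# Approximate GPU training energy: mean power draw x elapsed time.
import subprocess
import time

def gpu_power_watts():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"]
    )
    return float(out.decode().strip().splitlines()[0])

samples = []
start = time.time()
for _ in range(10):          # poll while a training job runs elsewhere
    samples.append(gpu_power_watts())
    time.sleep(1.0)
elapsed = time.time() - start

energy_wh = (sum(samples) / len(samples)) * elapsed / 3600
print(f"~{energy_wh:.2f} Wh over {elapsed:.0f} s")
```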

Comparison of lumbar disc degeneration grading between deep learning model SpineNet and radiologist: a longitudinal study with a 14-year follow-up.

Murto N, Lund T, Kautiainen H, Luoma K, Kerttula L

PubMed · May 15 2025
To assess the agreement between lumbar disc degeneration (DD) grading by the convolutional neural network model SpineNet and a radiologist's visual grading. In a 14-year follow-up MRI study involving 19 male volunteers, lumbar DD was assessed by SpineNet and two radiologists using the Pfirrmann classification at baseline (age 37) and after 14 years (age 51). Pfirrmann summary scores (PSS) were calculated by summing the individual disc grades. The agreement between the first radiologist and SpineNet was analyzed, with the second radiologist's grading used for inter-observer agreement. Significant differences were observed in the Pfirrmann grades and PSS assigned by the radiologist and SpineNet at both time points: SpineNet assigned Pfirrmann grade 1 to several discs and grade 5 to more discs than the radiologists did. The concordance correlation coefficient (CCC) of PSS between the radiologist and SpineNet was 0.54 (95% CI: 0.28 to 0.79) at baseline and 0.54 (0.27 to 0.80) at follow-up, and the average kappa (κ) values were 0.74 (0.68 to 0.81) at baseline and 0.68 (0.58 to 0.77) at follow-up. The CCC of PSS between the radiologists was 0.83 (0.69 to 0.97) at baseline and 0.78 (0.61 to 0.95) at follow-up, with κ values ranging from 0.73 to 0.96. We found fair to substantial agreement in DD grading between SpineNet and the radiologist, albeit with notable discrepancies. These findings indicate that AI-based systems like SpineNet hold promise as complementary tools in radiological evaluation, including in longitudinal studies, but they emphasize the need for ongoing refinement of AI algorithms.
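
The two agreement statistics used above can be sketched directly: Lin's concordance correlation coefficient for summary scores and Cohen's kappa for per-disc grades. The numbers below are synthetic, and the study may have used a weighted kappa variant.

```python
# Lin's CCC (computed from its definition) and Cohen's kappa on toy grades.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rad_pss = [12, 15, 11, 18, 14, 16, 13, 17]   # Pfirrmann summary scores (synthetic)
net_pss = [13, 14, 12, 19, 13, 17, 12, 18]
print(f"CCC = {ccc(rad_pss, net_pss):.2f}")

rad_grades = [2, 3, 3, 4, 2, 5, 3, 4]        # per-disc Pfirrmann grades (synthetic)
net_grades = [2, 3, 4, 4, 1, 5, 3, 5]
print(f"kappa = {cohen_kappa_score(rad_grades, net_grades):.2f}")
```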

Recent advancements in personalized management of prostate cancer biochemical recurrence after radical prostatectomy.

Falkenbach F, Ekrutt J, Maurer T

PubMed · May 15 2025
Biochemical recurrence (BCR) after radical prostatectomy has heterogeneous prognostic implications, and recent advancements in imaging and biomarkers offer high potential for personalizing care. Prostate-specific membrane antigen (PSMA) PET/CT has revolutionized BCR management in prostate cancer by detecting microscopic lesions earlier than conventional staging, leading to improved cancer-control outcomes and changes in treatment plans in approximately two-thirds of cases. Salvage radiotherapy, often combined with androgen deprivation therapy, remains the standard treatment for high-risk BCR after prostatectomy, with PSMA-PET/CT guiding treatment adjustments, such as the radiation field, and improving progression-free survival. Advancements in biomarkers, genomic classifiers, and artificial intelligence-based models have enhanced risk stratification and personalized treatment planning, enabling both treatment intensification and de-escalation. While conventional risk grouping based on Gleason score and PSA level and kinetics remains the foundation of BCR management, PSMA-PET/CT, novel biomarkers, and artificial intelligence may enable more personalized treatment strategies.