
Automated CAD-RADS scoring from multiplanar CCTA images using radiomics-driven machine learning.

Corti A, Ronchetti F, Lo Iacono F, Chiesa M, Colombo G, Annoni A, Baggiano A, Carerj ML, Del Torto A, Fazzari F, Formenti A, Junod D, Mancini ME, Maragna R, Marchetti F, Sbordone FP, Tassetti L, Volpe A, Mushtaq S, Corino VDA, Pontone G

pubmed logopapers · Jul 16 2025
Coronary Artery Disease-Reporting and Data System (CAD-RADS), a standardized reporting system for stenosis severity on coronary computed tomography angiography (CCTA), is scored manually by expert radiologists, a process that is time-consuming and prone to interobserver variability. While deep learning methods automating CAD-RADS scoring have been proposed, radiomics-based machine-learning approaches are lacking, despite their improved interpretability. This study introduces a novel radiomics-based machine-learning approach for automating CAD-RADS scoring from CCTA images with multiplanar reconstruction. This retrospective monocentric study included 251 patients (70% male; mean age 60.5 ± 12.7 years) who underwent CCTA in 2016-2018 for clinical evaluation of CAD. Images were automatically segmented, radiomic features were extracted, and clinical characteristics were collected. The image dataset was partitioned into training and test sets (90%-10%). The training phase encompassed feature scaling and selection, data balancing, and model training within a 5-fold cross-validation. A cascade pipeline was implemented for both 6-class CAD-RADS scoring and 4-class therapy-oriented classification (0-1, 2, 3-4, 5), through consecutive sub-tasks. For each classification task, the cascade pipeline was applied to develop clinical, radiomic, and combined models. The radiomic, combined, and clinical models yielded AUC = 0.88 [0.86-0.88], AUC = 0.90 [0.88-0.90], and AUC = 0.66 [0.66-0.67] for CAD-RADS scoring, and AUC = 0.93 [0.91-0.93], AUC = 0.97 [0.96-0.97], and AUC = 0.79 [0.78-0.79] for the therapy-oriented classification. The radiomic and combined models significantly outperformed (DeLong p-value < 0.05) the clinical model in classes 1 and 2 (CAD-RADS cascade) and class 2 (therapy-oriented cascade). This study presents the first radiomic model for CAD-RADS classification, offering higher explainability and a promising support system for coronary artery stenosis assessment.
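The cascade of consecutive binary sub-tasks described above can be sketched as a standard ordinal decomposition; the `OrdinalCascade` class, the synthetic features, and the three-grade labels below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

class OrdinalCascade:
    """Cascade of binary classifiers for ordinal classes 0..K-1.

    Stage k answers "is the label > k?"; the prediction is the count
    of stages answering yes (a common ordinal decomposition)."""
    def __init__(self, n_classes):
        self.stages = [
            make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
            for _ in range(n_classes - 1)
        ]

    def fit(self, X, y):
        for k, clf in enumerate(self.stages):
            clf.fit(X, (y > k).astype(int))
        return self

    def predict(self, X):
        # Sum of "greater than k" votes yields the ordinal class index.
        votes = np.stack([clf.predict(X) for clf in self.stages])
        return votes.sum(axis=0)

# Synthetic stand-in for radiomic feature vectors with 3 ordinal grades.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = np.clip((X[:, 0] + 0.1 * rng.normal(size=300) + 1.5).astype(int), 0, 2)
model = OrdinalCascade(n_classes=3).fit(X, y)
acc = (model.predict(X) == y).mean()
```

Each stage sees the full training set, so class imbalance handling (the data balancing the abstract mentions) would slot in per stage.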

SLOTMFound: Foundation-Based Diagnosis of Multiple Sclerosis Using Retinal SLO Imaging and OCT Thickness-maps

Esmailizadeh, R., Aghababaei, A., Mirzaei, S., Arian, R., Kafieh, R.

medrxiv logopreprint · Jul 15 2025
Multiple Sclerosis (MS) is a chronic autoimmune disorder of the central nervous system that can lead to significant neurological disability. Retinal imaging--particularly Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT)--provides valuable biomarkers for early MS diagnosis through non-invasive visualization of neurodegenerative changes. This study proposes a foundation-based bi-modal classification framework that integrates SLO images and OCT-derived retinal thickness maps for MS diagnosis. To facilitate this, we introduce two modality-specific foundation models--SLOFound and TMFound--fine-tuned from the RETFound-Fundus backbone using an independent dataset of 203 healthy eyes, acquired at Noor Ophthalmology Hospital with the Heidelberg Spectralis HRA+OCT system. This dataset, which contains only normal cases, was used exclusively for encoder adaptation and is entirely disjoint from the classification dataset. For the classification stage, we use a separate dataset comprising IR-SLO images from 32 MS patients and 70 healthy controls, collected at the Kashani Comprehensive MS Center in Isfahan, Iran. We first assess OCT-derived maps layer-wise and identify the Ganglion Cell-Inner Plexiform Layer (GCIPL) as the most informative for MS detection. All subsequent analyses utilize GCIPL thickness maps in conjunction with SLO images. Experimental evaluations on the MS classification dataset demonstrate that our foundation-based bi-modal model outperforms unimodal variants and a prior ResNet-based state-of-the-art model, achieving a classification accuracy of 97.37%, with perfect sensitivity (100%). These results highlight the effectiveness of leveraging pre-trained foundation models, even when fine-tuned on limited data, to build robust, efficient, and generalizable diagnostic tools for MS in medical imaging contexts where labeled datasets are often scarce.
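The bi-modal design (SLO images plus GCIPL thickness maps) amounts to fusing two per-eye feature vectors before classification. Below is a minimal late-fusion sketch assuming precomputed encoder embeddings; the `slo` and `thick` arrays are synthetic stand-ins for SLOFound/TMFound outputs, not real data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def late_fusion_features(slo_emb, thickness_emb):
    """Concatenate per-eye embeddings from the two modality encoders."""
    return np.concatenate([slo_emb, thickness_emb], axis=1)

rng = np.random.default_rng(1)
n = 102  # 32 MS + 70 controls, matching the classification dataset size
y = np.array([1] * 32 + [0] * 70)
# Synthetic stand-ins for SLOFound / TMFound embeddings with class signal.
slo = rng.normal(size=(n, 16)) + y[:, None] * 0.8
thick = rng.normal(size=(n, 16)) + y[:, None] * 0.8
X = late_fusion_features(slo, thick)
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
```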

Advancing Early Detection of Major Depressive Disorder Using Multisite Functional Magnetic Resonance Imaging Data: Comparative Analysis of AI Models.

Mansoor M, Ansari K

pubmed logopapers · Jul 15 2025
Major depressive disorder (MDD) is a highly prevalent mental health condition with significant public health implications. Early detection is crucial for timely intervention, but current diagnostic methods often rely on subjective clinical assessments, leading to delayed or inaccurate diagnoses. Advances in neuroimaging and machine learning (ML) offer the potential for objective and accurate early detection. This study aimed to develop and validate ML models using multisite functional magnetic resonance imaging data for the early detection of MDD, compare their performance, and evaluate their clinical applicability. We used functional magnetic resonance imaging data from 1200 participants (600 with early-stage MDD and 600 healthy controls) across 3 public datasets. In total, 4 ML models (support vector machine, random forest, gradient boosting machine, and deep neural network) were trained and evaluated using a 5-fold cross-validation framework. Models were assessed for accuracy, sensitivity, specificity, F1-score, and area under the receiver operating characteristic curve. Shapley additive explanations values and activation maximization techniques were applied to interpret model predictions. The deep neural network model demonstrated superior performance with an accuracy of 89% (95% CI 86%-92%) and an area under the receiver operating characteristic curve of 0.95 (95% CI 0.93-0.97), outperforming traditional diagnostic methods by 15% (P<.001). Key predictive features included altered functional connectivity between the dorsolateral prefrontal cortex, anterior cingulate cortex, and limbic regions. The model achieved 78% sensitivity (95% CI 71%-85%) in identifying individuals who developed MDD within a 2-year follow-up period, demonstrating good generalizability across datasets. Our findings highlight the potential of artificial intelligence-driven approaches for the early detection of MDD, with implications for improving early intervention strategies. While promising, these tools should complement rather than replace clinical expertise, with careful consideration of ethical implications such as patient privacy and model biases.
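Comparing several classifiers by 5-fold cross-validated AUC, as the study does, can be sketched as follows; the feature matrix is a synthetic stand-in for connectivity features, and the three models shown are a subset of the four evaluated.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for functional-connectivity feature vectors.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)
X[y == 1, :3] += 1.0  # inject signal in three "connectivity" features

models = {
    "svm": SVC(probability=True, random_state=0),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}
# Mean AUC over 5 stratified folds, one score per candidate model.
aucs = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        for name, m in models.items()}
best = max(aucs, key=aucs.get)
```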

Fully Automated Online Adaptive Radiation Therapy Decision-Making for Cervical Cancer Using Artificial Intelligence.

Sun S, Gong X, Cheng S, Cao R, He S, Liang Y, Yang B, Qiu J, Zhang F, Hu K

pubmed logopapers · Jul 15 2025
Interfraction variations during radiation therapy pose a challenge for patients with cervical cancer, highlighting the benefits of online adaptive radiation therapy (oART). However, adaptation decisions rely on subjective image reviews by physicians, leading to high interobserver variability and inefficiency. This study explores the feasibility of using artificial intelligence for decision-making in oART. A total of 24 patients with cervical cancer who underwent 671 fractions of daily fan-beam computed tomography (FBCT) guided oART were included in this study, with each fraction consisting of a daily FBCT image series and a pair of scheduled and adaptive plans. Dose deviations of scheduled plans exceeding predefined criteria were labeled as "trigger," otherwise as "nontrigger." A data set comprising 588 fractions from 21 patients was used for model development. For the machine learning model (ML), 101 morphologic, gray-level, and dosimetric features were extracted, with feature selection by the least absolute shrinkage and selection operator (LASSO) and classification by support vector machine (SVM). For deep learning, a Siamese network approach was used: the deep learning model of contour (DL_C) used only imaging data and contours, whereas a deep learning model of contour and dose (DL_D) also incorporated dosimetric data. A 5-fold cross-validation strategy was employed for model training and testing, and model performance was evaluated using the area under the curve (AUC), accuracy, precision, and recall. An independent data set comprising 83 fractions from 3 patients was used for model evaluation, with predictions compared against trigger labels assigned by 3 experienced radiation oncologists. Based on dosimetric labels, the 671 fractions were classified into 492 trigger and 179 nontrigger cases. The ML model selected 39 key features, primarily reflecting morphologic and gray-level changes in the clinical target volume (CTV) of the uterus (CTV_U), the CTV of the cervix, vagina, and parametrial tissues (CTV_C), and the small intestine. It achieved an AUC of 0.884, with accuracy, precision, and recall of 0.825, 0.824, and 0.827, respectively. The DL_C model demonstrated superior performance with an AUC of 0.917, accuracy of 0.869, precision of 0.860, and recall of 0.881. The DL_D model, which incorporated additional dosimetric data, exhibited a slight decline in performance compared with DL_C. Heatmap analyses indicated that for trigger fractions, the deep learning models focused on regions where the reference CT's CTV_U did not fully encompass the daily FBCT's CTV_U. Evaluation on an independent data set confirmed the robustness of all models. The weighted model's prediction accuracy significantly outperformed the physician consensus (0.855 vs 0.795), with comparable precision (0.917 vs 0.925) but substantially higher recall (0.887 vs 0.790). This study proposes machine learning and deep learning models to identify treatment fractions that may benefit from adaptive replanning in radical radiation therapy for cervical cancer, providing a promising decision-support tool to assist clinicians in determining when to trigger the oART workflow during treatment.
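The ML branch (LASSO feature selection followed by an SVM classifier) maps directly onto a scikit-learn pipeline. This is a hedged sketch on synthetic data, with 101 columns standing in for the morphologic, gray-level, and dosimetric features; the selection threshold and SVM settings are assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 101 features, only the first 5 informative
# for the binary trigger/nontrigger label.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 101))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=300) > 0).astype(int)

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),  # LASSO picks features
    SVC(probability=True, random_state=0),           # SVM classifies
)
pipe.fit(X, y)
n_selected = pipe.named_steps["selectfrommodel"].get_support().sum()
train_acc = pipe.score(X, y)
```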

Vision transformer and complex network analysis for autism spectrum disorder classification in T1 structural MRI.

Gao X, Xu Y

pubmed logopapers · Jul 15 2025
Autism spectrum disorder (ASD) affects social interaction, communication, and behavior. Early diagnosis is important as it enables timely intervention that can significantly improve long-term outcomes, but current diagnostic methods, which rely heavily on behavioral observations and clinical interviews, are often subjective and time-consuming. This study introduces an AI-based approach that uses T1-weighted structural MRI (sMRI) scans, network analysis, and vision transformers to automatically diagnose ASD. sMRI data from 79 ASD patients and 105 healthy controls were obtained from the Autism Brain Imaging Data Exchange (ABIDE) database. Complex network analysis (CNA) features and Vision Transformer (ViT) features were developed for predicting ASD. Five models were developed for each type of feature: logistic regression, support vector machine (SVM), gradient boosting (GB), K-nearest neighbors (KNN), and neural network (NN). A further 25 models were developed by federating the two sets of 5 models. Model performance was evaluated using accuracy, area under the receiver operating characteristic curve (AUC-ROC), sensitivity, and specificity via fivefold cross-validation. The federated model CNA(KNN)-ViT(NN) achieved the highest performance, with accuracy 0.951 ± 0.067, AUC-ROC 0.980 ± 0.020, sensitivity 0.963 ± 0.050, and specificity 0.943 ± 0.047. The ViT-based models exceeded the complex network-based models on 80% of the performance metrics, and by federating with CNA models, the ViT models achieved better performance still. This study demonstrates the feasibility of using CNA and ViT models for the automated diagnosis of ASD. The proposed CNA(KNN)-ViT(NN) model achieved better accuracy in ASD classification based solely on T1 sMRI images. The method's reliance on widely available T1 sMRI scans highlights its potential for integration into routine clinical examinations, facilitating more efficient and accessible ASD screening.
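One plausible reading of "federating" two models trained on different feature sets is averaging their predicted class probabilities. The sketch below pairs a KNN on CNA-style features with a small neural network on ViT-style features, all on synthetic data; the averaging rule is an assumption, not the authors' exact scheme.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def fused_proba(model_a, X_a, model_b, X_b):
    """Average class probabilities from two models trained on
    different feature sets (CNA features vs. ViT features)."""
    return 0.5 * (model_a.predict_proba(X_a) + model_b.predict_proba(X_b))

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=184)  # 184 subjects, as in the ABIDE sample
# Synthetic stand-ins for the two feature sets, with class signal.
X_cna = rng.normal(size=(184, 8)) + y[:, None]
X_vit = rng.normal(size=(184, 12)) + y[:, None]

knn = KNeighborsClassifier(n_neighbors=5).fit(X_cna, y)
mlp = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=500, random_state=0).fit(X_vit, y)
proba = fused_proba(knn, X_cna, mlp, X_vit)
acc = (proba.argmax(axis=1) == y).mean()
```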

Assessment of local recurrence risk in extremity high-grade osteosarcoma through multimodality radiomics integration.

Luo Z, Liu R, Li J, Ye Q, Zhou Z, Shen X

pubmed logopapers · Jul 15 2025
Background: A timely assessment of local recurrence (LoR) risk in extremity high-grade osteosarcoma is crucial for optimizing treatment strategies and improving patient outcomes. Purpose: To explore the potential of machine-learning algorithms in predicting LoR in patients with osteosarcoma. Material and Methods: Data from patients with high-grade osteosarcoma who underwent preoperative radiograph and multiparametric magnetic resonance imaging (MRI) were collected. Machine-learning models were developed and trained on this dataset to predict LoR. The study involved selecting relevant features, training the models, and evaluating their performance using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). DeLong's test was utilized for comparing the AUCs. Results: The performance (AUC, sensitivity, specificity, and accuracy) of four classifiers (random forest [RF], support vector machine, logistic regression, and extreme gradient boosting) using radiograph-MRI as image inputs was stable (all Hosmer-Lemeshow index >0.05) with fair to good prognostic efficacy. The RF classifier using radiograph-MRI features as training inputs exhibited better performance (AUC = 0.806, 0.868) than that using MRI only (AUC = 0.774, 0.771) and radiograph only (AUC = 0.613 and 0.627) in the training and testing sets (P < 0.05), while the other three classifiers showed no difference between MRI-only and radiograph-MRI models. Conclusion: This study provides valuable insights into the use of machine learning for predicting LoR in osteosarcoma patients. These findings emphasize the potential of integrating radiomics data with algorithms to improve prognostic assessments.
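DeLong's test compares two AUCs measured on the same cases. A paired bootstrap over case indices is a simpler stand-in that captures the same pairing idea; the sketch below is not the DeLong statistic itself, and the scores are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, scores_a, scores_b, n_boot=2000, seed=0):
    """Paired bootstrap p-value for the difference between two AUCs
    computed on the same test cases (a stand-in for DeLong's test)."""
    rng = np.random.default_rng(seed)
    y, scores_a, scores_b = map(np.asarray, (y, scores_a, scores_b))
    n = len(y)
    diffs = []
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, size=n)
        if len(np.unique(y[idx])) < 2:  # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y[idx], scores_a[idx])
                     - roc_auc_score(y[idx], scores_b[idx]))
    diffs = np.array(diffs)
    observed = roc_auc_score(y, scores_a) - roc_auc_score(y, scores_b)
    # Two-sided p-value: how often the bootstrap difference crosses zero.
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, min(p, 1.0)

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=120)
strong = y + 0.5 * rng.normal(size=120)  # informative model scores
weak = rng.normal(size=120)              # uninformative model scores
obs, p = bootstrap_auc_diff(y, strong, weak)
```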

A literature review of radio-genomics in breast cancer: Lessons and insights for low and middle-income countries.

Mooghal M, Shaikh K, Shaikh H, Khan W, Siddiqui MS, Jamil S, Vohra LM

pubmed logopapers · Jul 15 2025
To improve precision medicine in breast cancer (BC) decision-making, radio-genomics is an emerging branch of artificial intelligence (AI) that links cancer characteristics assessed radiologically with the histopathology and genomic properties of the tumour. By employing MRIs, mammograms, and ultrasounds to uncover distinctive radiomics traits that potentially predict genomic abnormalities, this review attempts to find literature that links AI-based models with the genetic mutations discovered in BC patients. The review's findings can be used to create AI-based population models for low and middle-income countries (LMIC) and evaluate how well they predict outcomes for our cohort. Magnetic resonance imaging (MRI) appears to be the modality employed most frequently to research radio-genomics in BC patients in our systematic analysis. According to the papers we analysed, genetic markers and mutations linked to imaging traits, such as tumour size, shape, and enhancement patterns, as well as clinical outcomes of treatment response, disease progression, and survival, can be identified by employing AI. The use of radio-genomics can help LMICs overcome some of the barriers that keep the general population from accessing high-quality cancer care, thereby improving health outcomes for BC patients in these regions. It is imperative to ensure that emerging technologies are used responsibly, in a way that is accessible to and affordable for all patients, regardless of their socio-economic condition.

Non-invasive liver fibrosis screening on CT images using radiomics.

Yoo JJ, Namdar K, Carey S, Fischer SE, McIntosh C, Khalvati F, Rogalla P

pubmed logopapers · Jul 15 2025
To develop a radiomics machine learning model for detecting liver fibrosis on CT images of the liver. With Ethics Board approval, 169 patients (68 women, 101 men; mean age, 51.2 years ± 14.7 [SD]) underwent an ultrasound-guided liver biopsy with simultaneous CT acquisitions without and following intravenous contrast material administration. Radiomic features were extracted from two regions of interest (ROIs) on the CT images, one placed at the biopsy site and another distant from the biopsy site. A development cohort, which was split further into training and validation cohorts across 100 trials, was used to determine the optimal combinations of contrast, normalization, machine learning model, and radiomic features for liver fibrosis detection based on their Area Under the Receiver Operating Characteristic curve (AUC) on the validation cohort. The optimal combinations were then used to develop one final liver fibrosis model, which was evaluated on a test cohort. When averaging the AUC across all combinations, non-contrast enhanced (NC) CT (AUC, 0.6100; 95% CI: 0.5897, 0.6303) outperformed contrast-enhanced CT (AUC, 0.5680; 95% CI: 0.5471, 0.5890). The most effective model was a logistic regression model with input features of maximum, energy, kurtosis, skewness, and small area high gray level emphasis extracted from NC CT normalized using Gamma correction with γ = 1.5 (AUC, 0.7833; 95% CI: 0.7821, 0.7845). The presented radiomics-based logistic regression model holds promise as a non-invasive detection tool for subclinical, asymptomatic liver fibrosis. The model may serve as an opportunistic liver fibrosis screening tool when operated in the background during routine CT examinations covering liver parenchyma. The final liver fibrosis detection model is made publicly available at: https://github.com/IMICSLab/RadiomicsLiverFibrosisDetection .
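The final model's preprocessing and inputs (gamma correction with γ = 1.5, then first-order features) can be sketched directly. The ROI below is synthetic, and small area high gray level emphasis is omitted because it requires a full gray-level size-zone matrix; the four features shown are the other model inputs.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def gamma_correct(image, gamma=1.5):
    """Min-max normalize to [0, 1], then apply gamma correction."""
    img = image.astype(float)
    img = (img - img.min()) / (img.max() - img.min())
    return img ** gamma

def first_order_features(roi):
    """Maximum, energy, kurtosis, and skewness over an ROI, mirroring
    four of the final model's five input features."""
    flat = roi.ravel()
    return {
        "maximum": float(flat.max()),
        "energy": float((flat ** 2).sum()),
        "kurtosis": float(kurtosis(flat)),
        "skewness": float(skew(flat)),
    }

rng = np.random.default_rng(6)
ct_roi = rng.normal(60, 15, size=(32, 32))  # HU-like synthetic liver ROI
feats = first_order_features(gamma_correct(ct_roi, gamma=1.5))
```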

An interpretable machine learning model for predicting bone marrow invasion in patients with lymphoma via ¹⁸F-FDG PET/CT: a multicenter study.

Zhu X, Lu D, Wu Y, Lu Y, He L, Deng Y, Mu X, Fu W

pubmed logopapers · Jul 15 2025
Accurate identification of bone marrow invasion (BMI) is critical for determining the prognosis of and treatment strategies for lymphoma. Although bone marrow biopsy (BMB) is the current gold standard, its invasive nature and sampling errors highlight the necessity for noninvasive alternatives. We aimed to develop and validate an interpretable machine learning model that integrates clinical data, ¹⁸F-fluorodeoxyglucose positron emission tomography/computed tomography (¹⁸F-FDG PET/CT) parameters, radiomic features, and deep learning features to predict BMI in lymphoma patients. We included 159 newly diagnosed lymphoma patients (118 from Center I and 41 from Center II), excluding those with prior treatments, incomplete data, or under 18 years of age. Data from Center I were randomly allocated to training (n = 94) and internal test (n = 24) sets; Center II served as an external validation set (n = 41). Clinical parameters, PET/CT features, radiomic characteristics, and deep learning features were comprehensively analyzed and integrated into machine learning models. Model interpretability was elucidated via Shapley Additive exPlanations (SHAP). Additionally, a comparative diagnostic study evaluated reader performance with and without model assistance. BMI was confirmed in 70 (44%) patients. The key clinical predictors included B symptoms and platelet count. Among the tested models, the ExtraTrees classifier achieved the best performance. For external validation, the combined model (clinical + PET/CT + radiomics + deep learning) achieved an area under the receiver operating characteristic curve (AUC) of 0.886, outperforming models that use only clinical (AUC 0.798), radiomic (AUC 0.708), or deep learning features (AUC 0.662). SHAP analysis revealed that PET radiomic features (especially PET_lbp_3D_m1_glcm_DependenceEntropy), platelet count, and B symptoms were significant predictors of BMI. Model assistance significantly enhanced junior reader performance (AUC improved from 0.663 to 0.818, p = 0.03) and improved senior reader accuracy, although not significantly (AUC 0.768 to 0.867, p = 0.10). Our interpretable machine learning model, which integrates clinical, imaging, radiomic, and deep learning features, demonstrated robust BMI prediction performance and notably enhanced physician diagnostic accuracy. These findings underscore the clinical potential of interpretable AI to complement medical expertise and potentially reduce the reliance on invasive BMB for lymphoma staging.
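SHAP attributes a model's output to its input features. As a lighter-weight stand-in with the same model-agnostic flavor, the sketch below ranks synthetic versions of the abstract's top predictors with permutation importance on held-out data; the data and effect sizes are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-ins echoing the abstract's top predictors: a PET
# texture feature, platelet count, and B symptoms, plus pure noise.
rng = np.random.default_rng(7)
n = 159  # cohort size reported in the study
pet_texture = rng.normal(size=n)
platelets = rng.normal(size=n)
b_symptoms = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([pet_texture, platelets, b_symptoms,
                     rng.normal(size=(n, 4))])
y = (pet_texture + 0.5 * platelets + b_symptoms
     + 0.3 * rng.normal(size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Permutation importance on held-out data plays the attribution role
# that SHAP plays in the paper.
imp = permutation_importance(clf, X_te, y_te, n_repeats=30, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
```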

An efficient deep learning based approach for automated identification of cervical vertebrae fracture as a clinical support aid.

Singh M, Tripathi U, Patel KK, Mohit K, Pathak S

pubmed logopapers · Jul 15 2025
Cervical vertebrae fractures pose a significant risk to a patient's health, and accurate diagnosis and prompt treatment are essential for a good outcome. Automated analysis of cervical vertebrae fractures is therefore of utmost importance, and deep learning models have been widely used and play a significant role in their identification and classification. In this paper, we propose a novel hybrid transfer learning approach for the identification and classification of fractures in axial CT scan slices of the cervical spine. We utilize the publicly available RSNA (Radiological Society of North America) dataset of annotated cervical vertebrae fractures for our experiments. The CT scan slices undergo preprocessing and analysis to extract features, employing four distinct pre-trained transfer learning models to detect abnormalities in the cervical vertebrae. The top-performing model, Inception-ResNet-v2, is combined with the upsampling component of U-Net to form a hybrid architecture. The hybrid model demonstrates superior performance over traditional deep learning models, achieving an overall accuracy of 98.44% on 2,984 test CT scan slices, which represents a 3.62% improvement over the 95% accuracy of predictions made by radiologists. This study advances clinical decision support systems, equipping medical professionals with a powerful tool for timely intervention and accurate diagnosis of cervical vertebrae fractures, thereby enhancing patient outcomes and healthcare efficiency.
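The "upsampling component of U-Net" grafted onto an encoder amounts to repeated upsample-and-concatenate decoder steps. Below is a NumPy sketch of one such step, with toy shapes standing in for Inception-ResNet-v2 feature maps; the shapes and the nearest-neighbour upsampling choice are assumptions for illustration.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map,
    as done along a U-Net decoder path."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def decoder_step(low_res, skip):
    """One decoder stage: upsample the coarse encoder output and
    concatenate the same-resolution encoder skip connection."""
    up = upsample2x(low_res)
    assert up.shape[1:] == skip.shape[1:], "spatial sizes must match"
    return np.concatenate([up, skip], axis=0)  # stack along channels

# Toy shapes standing in for encoder outputs at two depths.
rng = np.random.default_rng(8)
coarse = rng.normal(size=(64, 8, 8))   # deep, low-resolution features
skip = rng.normal(size=(32, 16, 16))   # earlier, higher-resolution features
fused = decoder_step(coarse, skip)
```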
