Page 144 of 2432424 results

Automated Scoliosis Cobb Angle Classification in Biplanar Radiograph Imaging With Explainable Machine Learning Models.

Yu J, Lahoti YS, McCandless KC, Namiri NK, Miyasaka MS, Ahmed H, Song J, Corvi JJ, Berman DC, Cho SK, Kim JS

PubMed · Jul 1, 2025
Retrospective cohort study. To quantify the pathology of the spine in patients with scoliosis through one-dimensional feature analysis. Biplanar radiograph (EOS) imaging is a low-dose technology offering high-resolution spinal curvature measurement, crucial for assessing scoliosis severity and guiding treatment decisions. Machine learning (ML) algorithms, utilizing one-dimensional image features, can enable automated Cobb angle classification, improving accuracy and efficiency in scoliosis evaluation while reducing the need for manual measurements, thus supporting clinical decision-making. This study used 816 annotated AP EOS spinal images with a spine segmentation mask and a 10th-degree polynomial to represent curvature. Engineered features included the first and second derivatives, Fourier transform, and curve energy, normalized for robustness. XGBoost selected the top 32 features. The models classified scoliosis into multiple groups based on curvature degree, measured through the Cobb angle. To address class imbalance, stratified sampling, undersampling, and oversampling techniques were used, with stratified 10-fold cross-validation for generalization. An automatic grid search was used for hyperparameter optimization, with K-fold cross-validation (K=3). The top-performing model was a random forest, achieving an ROC AUC of 91.8%, an accuracy of 86.1%, a precision of 86.0%, a recall of 86.0%, and an F1 score of 85.1%. Of the three techniques used to address class imbalance, stratified sampling produced the best out-of-sample results. SHAP values were generated for the top 20 features, including spine curve length and linear regression error, with the most predictive features ranked at the top, enhancing model explainability. Feature engineering with classical ML methods offers an effective approach for classifying scoliosis severity based on Cobb angle ranges.
The high interpretability of features in representing spinal pathology, along with the ease of use of classical ML techniques, makes this an attractive solution for developing automated tools to manage complex spinal measurements.
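The pipeline described above — ranking engineered curve features, keeping the top 32, and scoring a classifier with stratified 10-fold cross-validation on imbalanced Cobb-angle classes — can be sketched as follows. The synthetic data, the feature count, and the use of scikit-learn's RandomForestClassifier in place of XGBoost for the ranking step are illustrative assumptions, not the authors' implementation:

```python
# Sketch of the evaluation strategy: rank engineered curve features by
# importance, keep the top 32, and score a classifier with stratified
# 10-fold cross-validation (synthetic data; RandomForest stands in for
# the XGBoost ranker used in the study).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(816, 100))      # 816 images, 100 engineered features
y = rng.integers(0, 3, size=816)     # curvature class derived from Cobb angle

# Rank features with a tree ensemble and keep the 32 most informative.
ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top32 = np.argsort(ranker.feature_importances_)[::-1][:32]

# Stratified folds preserve the class distribution in each split,
# which matters when the Cobb-angle groups are imbalanced.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X[:, top32], y, cv=cv)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

With random features the cross-validated accuracy hovers near chance; the point of the sketch is the selection-then-stratified-CV structure, not the score.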

Prediction of adverse pathology in prostate cancer using a multimodal deep learning approach based on [<sup>18</sup>F]PSMA-1007 PET/CT and multiparametric MRI.

Lin H, Yao F, Yi X, Yuan Y, Xu J, Chen L, Wang H, Zhuang Y, Lin Q, Xue Y, Yang Y, Pan Z

PubMed · Jul 1, 2025
Accurate prediction of adverse pathology (AP) in prostate cancer (PCa) patients is crucial for formulating effective treatment strategies. This study aims to develop and evaluate a multimodal deep learning model based on [<sup>18</sup>F]PSMA-1007 PET/CT and multiparametric MRI (mpMRI) to predict the presence of AP, and to investigate whether the model that integrates [<sup>18</sup>F]PSMA-1007 PET/CT and mpMRI outperforms the individual PET/CT or mpMRI models in predicting AP. A total of 341 PCa patients who underwent radical prostatectomy (RP) with mpMRI and PET/CT scans were retrospectively analyzed. We generated a deep learning signature from mpMRI and PET/CT with a multimodal deep learning model (MPC) based on convolutional neural networks and a transformer, which was subsequently combined with clinical characteristics to construct an integrated model (MPCC). These models were compared with clinical models and single mpMRI or PET/CT models. The MPCC model showed the best performance in predicting AP (AUC, 0.955 [95% CI: 0.932-0.975]), higher than the MPC model (AUC, 0.930 [95% CI: 0.901-0.955]). The performance of the MPC model was better than that of the single PET/CT (AUC, 0.813 [95% CI: 0.780-0.845]) or mpMRI (AUC, 0.865 [95% CI: 0.829-0.901]) models. Additionally, the MPCC model was also effective in predicting single adverse pathological features. The deep learning model that integrates mpMRI and [<sup>18</sup>F]PSMA-1007 PET/CT enhances the predictive capabilities for the presence of AP in PCa patients. This improvement aids physicians in making informed preoperative decisions, ultimately enhancing patient prognosis.
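The multimodal integration described above can be illustrated with a minimal late-fusion sketch: separate signatures are learned from PET/CT and mpMRI, concatenated, and then augmented with clinical variables for the integrated classifier. PCA stands in for the CNN/transformer encoders, and all data, dimensions, and clinical variables below are synthetic assumptions:

```python
# Late-fusion sketch: per-modality signatures are concatenated ("MPC"-like),
# then clinical features are appended ("MPCC"-like). PCA is a stand-in for
# the paper's deep encoders; the data are random.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 341
pet = rng.normal(size=(n, 64))       # PET/CT-derived features (synthetic)
mri = rng.normal(size=(n, 64))       # mpMRI-derived features (synthetic)
clinical = rng.normal(size=(n, 5))   # clinical variables (illustrative)
y = rng.integers(0, 2, size=n)       # adverse pathology: yes/no

pet_sig = PCA(n_components=8, random_state=1).fit_transform(pet)
mri_sig = PCA(n_components=8, random_state=1).fit_transform(mri)

# Imaging-only signature vs. imaging + clinical integrated input.
mpc_X = np.hstack([pet_sig, mri_sig])
mpcc_X = np.hstack([mpc_X, clinical])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(mpcc_X, y)
print("integrated model input dim:", mpcc_X.shape[1])
```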

Habitat-Based Radiomics for Revealing Tumor Heterogeneity and Predicting Residual Cancer Burden Classification in Breast Cancer.

Li ZY, Wu SN, Lin P, Jiang MC, Chen C, Lin WJ, Xue ES, Liang RX, Lin ZH

PubMed · Jul 1, 2025
To investigate the feasibility of characterizing tumor heterogeneity in breast cancer ultrasound images using habitat analysis technology and to establish a radiomics machine learning model for predicting response to neoadjuvant chemotherapy (NAC). Ultrasound images from patients with pathologically confirmed breast cancer who underwent neoadjuvant therapy at our institution between July 2021 and December 2023 were retrospectively reviewed. Initially, the region of interest was delineated and segmented into multiple habitat areas using local feature delineation and cluster analysis techniques. Subsequently, radiomics features were extracted from each habitat area to construct 3 machine learning models. Finally, model efficacy was assessed through receiver operating characteristic (ROC) curve analysis, decision curve analysis (DCA), and calibration curve evaluation. A total of 945 patients were enrolled, with 333 demonstrating a favorable response to NAC and 612 exhibiting an unfavorable response. Through the application of habitat analysis techniques, 3 distinct habitat regions within the tumor were identified. Subsequently, a predictive model was developed by incorporating 19 radiomics features, and all 3 machine learning models demonstrated excellent performance in predicting treatment outcomes. Notably, extreme gradient boosting (XGBoost) exhibited superior performance, with an area under the curve (AUC) of 0.872 in the training cohort and 0.740 in the testing cohort. Additionally, DCA and calibration curves were employed for further evaluation. The habitat analysis technique effectively distinguishes distinct biological subregions of breast cancer, while the established radiomics machine learning model predicts NAC response by forecasting residual cancer burden (RCB) classification.
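The habitat step above — clustering pixels inside the tumor ROI into subregions by local image statistics — can be sketched as follows. The synthetic image, the 3×3 neighborhood features, and the choice of k-means are illustrative assumptions; only the number of habitats (3) comes from the abstract:

```python
# Habitat-analysis sketch: cluster ROI pixels into subregions ("habitats")
# by local intensity statistics. Image and features are synthetic; k=3
# matches the 3 habitat regions reported in the abstract.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
roi = rng.normal(size=(64, 64))  # grayscale ultrasound ROI (synthetic)

# Per-pixel features: raw intensity plus 3x3 neighborhood mean and std.
pad = np.pad(roi, 1, mode="edge")
wins = np.lib.stride_tricks.sliding_window_view(pad, (3, 3)).reshape(64, 64, 9)
feats = np.stack([roi, wins.mean(-1), wins.std(-1)], axis=-1).reshape(-1, 3)

habitats = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(feats)
habitat_map = habitats.reshape(64, 64)  # each pixel assigned to one habitat
print("habitat sizes:", np.bincount(habitats))
```

In the study, radiomics features would then be extracted from each habitat separately rather than from the whole ROI.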

CT-Based Machine Learning Radiomics Analysis to Diagnose Dysthyroid Optic Neuropathy.

Ma L, Jiang X, Yang X, Wang M, Hou Z, Zhang J, Li D

PubMed · Jul 1, 2025
To develop CT-based machine learning radiomics models for the diagnosis of dysthyroid optic neuropathy (DON). This retrospective study included 57 patients (114 orbits) diagnosed with thyroid-associated ophthalmopathy (TAO) at Beijing Tongren Hospital between December 2019 and June 2023. CT scans, medical history, examination results, and clinical data of the participants were collected. DON was diagnosed based on clinical manifestations and examinations. The DON orbits and non-DON orbits were then divided into a training set and a test set at a ratio of approximately 7:3. The 3D Slicer software was used to identify the volumes of interest (VOIs). Radiomics features were extracted using PyRadiomics and selected by t-test and the least absolute shrinkage and selection operator (LASSO) regression algorithm with 10-fold cross-validation. Machine learning models, including a random forest (RF) model, a support vector machine (SVM) model, and a logistic regression (LR) model, were built and validated using receiver operating characteristic (ROC) curves, areas under the curve (AUC), and confusion matrix-related data. The net benefit of the models is shown by decision curve analysis (DCA). We extracted 107 features from the imaging data, representing various image information of the optic nerve and surrounding orbital tissues. Using the LASSO method, we identified the five most informative features. The AUCs ranged from 0.77 to 0.80 in the training set, and the AUCs of the RF, SVM, and LR models were 0.86, 0.80, and 0.83 in the test set, respectively. The DeLong test showed no significant difference between the three models (RF model vs SVM model: <i>p</i> = .92; RF model vs LR model: <i>p</i> = .94; SVM model vs LR model: <i>p</i> = .98), and the models showed optimal clinical efficacy in DCA. CT-based machine learning radiomics analysis exhibited excellent ability to diagnose DON and may enhance diagnostic convenience.
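The two-stage selection described above — univariate t-test screening followed by LASSO with 10-fold cross-validation — can be sketched as follows. The data, the screening threshold, and the use of `LassoCV` on a binary label are illustrative assumptions; only the 114 orbits and 107 features mirror the abstract:

```python
# Feature-selection sketch: t-test screening, then LASSO with 10-fold CV
# shrinks uninformative coefficients to zero (synthetic radiomics data).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
X = rng.normal(size=(114, 107))   # 114 orbits, 107 radiomics features
y = rng.integers(0, 2, size=114)  # DON vs non-DON

# Step 1: keep features that differ between groups on a t-test.
_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
kept = np.flatnonzero(p < 0.5)    # deliberately permissive threshold for the demo

# Step 2: LASSO with 10-fold CV; surviving nonzero coefficients form the signature.
lasso = LassoCV(cv=10, random_state=3).fit(X[:, kept], y)
selected = kept[np.flatnonzero(lasso.coef_)]
print("selected features:", selected.size)
```

On random data LASSO may retain very few (or no) features; on real radiomics data the study kept five of 107.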

Redefining prostate cancer care: innovations and future directions in active surveillance.

Koett M, Melchior F, Artamonova N, Bektic J, Heidegger I

PubMed · Jul 1, 2025
This review provides a critical analysis of recent advancements in active surveillance (AS), emphasizing updates from major international guidelines and their implications for clinical practice. Recent revisions to international guidelines have broadened the eligibility criteria for AS to include selected patients with ISUP grade group 2 prostate cancer. This adjustment acknowledges that certain intermediate-risk cancers may be appropriate for AS, reflecting a heightened focus on achieving a balance between oncologic control and maintaining quality of life by minimizing the risk of overtreatment. This review explores key innovations in AS for prostate cancer, including multiparametric magnetic resonance imaging (mpMRI), genomic biomarkers, and risk calculators, which enhance patient selection and monitoring. While promising, their routine use remains debated due to guideline inconsistencies, cost, and accessibility. Special focus is given to biomarkers for identifying ISUP grade group 2 cancers suitable for AS. Additionally, the potential of artificial intelligence to improve diagnostic accuracy and risk stratification is examined. By integrating these advancements, this review provides a critical perspective on optimizing AS for more personalized and effective prostate cancer management.

The impact of updated imaging software on the performance of machine learning models for breast cancer diagnosis: a multi-center, retrospective study.

Cai L, Golatta M, Sidey-Gibbons C, Barr RG, Pfob A

PubMed · Jul 1, 2025
Artificial intelligence models based on medical (imaging) data are increasingly developed. However, the imaging software on which the original data are generated is frequently updated. The impact of updated imaging software on the performance of AI models is unclear. We aimed to develop machine learning models using shear wave elastography (SWE) data to identify malignant breast lesions and to test the models' generalizability by validating them on external data generated by both the original and updated software versions. We developed and validated different machine learning models (GLM, MARS, XGBoost, SVM) using multicenter, international SWE data (NCT02638935) and tenfold cross-validation. Findings were compared to the histopathologic evaluation of the biopsy specimen or 2-year follow-up. The outcome measure was the area under the receiver operating characteristic curve (AUROC). We included 1288 cases in the development set using the original imaging software and 385 cases in the validation set using both the original and updated software. In the external validation set, the GLM and XGBoost models showed better performance with the updated software data compared to the original software data (AUROC 0.941 vs. 0.902, p < 0.001 and 0.934 vs. 0.872, p < 0.001). The MARS model showed worse performance with the updated software data (0.847 vs. 0.894, p = 0.045). The SVM model was not calibrated. In this multicenter study using SWE data, some machine learning models demonstrated great potential to bridge the gap between original and updated software, whereas others exhibited weak generalizability.
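The comparisons above hinge on testing whether two AUROCs computed on the same validation cases differ significantly. A paired-bootstrap version of that test can be sketched as follows (the study does not state its test; DeLong's analytic test is the common alternative, and all scores below are synthetic):

```python
# Paired-bootstrap sketch for comparing two AUROCs on the same cases:
# resample cases with replacement and inspect the distribution of the
# AUC difference (synthetic labels and scores).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 385
y = rng.integers(0, 2, size=n)
scores_a = y + rng.normal(scale=1.0, size=n)  # e.g. model on original-software data
scores_b = y + rng.normal(scale=1.5, size=n)  # same model on updated-software data

diffs = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    if len(np.unique(y[idx])) < 2:
        continue                              # an AUC needs both classes present
    diffs.append(roc_auc_score(y[idx], scores_a[idx])
                 - roc_auc_score(y[idx], scores_b[idx]))
diffs = np.array(diffs)

# Two-sided bootstrap p-value for "no AUC difference".
p_boot = min(1.0, 2 * min((diffs <= 0).mean(), (diffs >= 0).mean()))
print(f"mean AUC difference: {diffs.mean():.3f}, p ~ {p_boot:.3f}")
```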

A novel deep learning framework for retinal disease detection leveraging contextual and local features cues from retinal images.

Khan SD, Basalamah S, Lbath A

PubMed · Jul 1, 2025
Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which heavily depends on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods face challenges in accurately detecting specific diseases within images due to inherent issues associated with fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification. This framework is designed to achieve high accuracy in identifying various retinal diseases while overcoming the inherent challenges associated with fundus images. The framework consists of three main modules. The first module is a Densely Connected Multidilated Convolution Neural Network (DCM-CNN) that extracts global contextual information by effectively integrating novel Causal Dilated Dense Convolutional Blocks (CDDCBs). The second module, a Local-Patch-based Convolution Neural Network (LP-CNN), utilizes the Class Activation Map (CAM) obtained from DCM-CNN to extract local and fine-grained information. The third module is a synergic network that takes the feature maps of both DCM-CNN and LP-CNN and connects them in a fully connected fashion to identify the correct class and minimize errors. 
The framework is evaluated through a comprehensive set of experiments, both quantitative and qualitative, using two publicly available benchmark datasets: RFMiD and ODIR-5K. The experimental results demonstrate the effectiveness of the proposed framework, which achieves higher performance on the RFMiD and ODIR-5K datasets than reference methods.
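The CAM-guided second stage above can be illustrated with a minimal cropping sketch: the activation map locates the most discriminative region, and a local patch centered there feeds the patch-level network. The CAM here is simulated rather than produced by DCM-CNN, and the image, map resolution, and patch size are all assumptions:

```python
# CAM-guided patch extraction sketch: upsample a coarse activation map,
# find its peak, and crop a fixed-size local patch around it (synthetic
# image and CAM; in the paper the CAM comes from DCM-CNN).
import numpy as np

rng = np.random.default_rng(7)
image = rng.uniform(size=(224, 224, 3))  # fundus image (synthetic)
cam = rng.uniform(size=(7, 7))           # coarse class activation map

# Upsample the CAM to image resolution by nearest-neighbor repetition.
cam_up = np.kron(cam, np.ones((32, 32)))

# Center a patch on the CAM maximum, clamped so it stays inside the image.
cy, cx = np.unravel_index(cam_up.argmax(), cam_up.shape)
half = 48
y0 = min(max(cy - half, 0), 224 - 2 * half)
x0 = min(max(cx - half, 0), 224 - 2 * half)
patch = image[y0:y0 + 2 * half, x0:x0 + 2 * half]
print("patch shape:", patch.shape)
```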

Identifying Primary Sites of Spinal Metastases: Expert-Derived Features vs. ResNet50 Model Using Nonenhanced MRI.

Liu K, Ning J, Qin S, Xu J, Hao D, Lang N

PubMed · Jul 1, 2025
The spinal column is a frequent site for metastases, affecting over 30% of solid tumor patients. Identifying the primary tumor is essential for guiding clinical decisions but often requires resource-intensive diagnostics. To develop and validate artificial intelligence (AI) models using noncontrast MRI to identify primary sites of spinal metastases, aiming to enhance diagnostic efficiency. Retrospective. A total of 514 patients with pathologically confirmed spinal metastases (mean age, 59.3 ± 11.2 years; 294 males) were included, split into a development set (360) and a test set (154). Noncontrast sagittal MRI sequences (T1-weighted, T2-weighted, and fat-suppressed T2) were acquired using 1.5 T and 3 T scanners. Two models were evaluated for identifying primary sites of spinal metastases: the expert-derived features (EDF) model using radiologist-identified imaging features and a ResNet50-based deep learning (DL) model trained on noncontrast MRI. Performance was assessed using accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve (ROC-AUC) for top-1, top-2, and top-3 indicators. Statistical analyses included Shapiro-Wilk tests, t tests, Mann-Whitney U tests, and chi-squared tests. ROC-AUCs were compared via DeLong tests, with 95% confidence intervals from 1000 bootstrap replications and significance at P < 0.05. The EDF model outperformed the DL model in top-3 accuracy (0.88 vs. 0.69) and AUC (0.80 vs. 0.71). Subgroup analysis showed superior EDF performance for common sites like lung and kidney (e.g., kidney F1: 0.94 vs. 0.76), while the DL model had higher recall for rare sites like thyroid (0.80 vs. 0.20). SHapley Additive exPlanations (SHAP) analysis identified sex (SHAP: -0.57 to 0.68), age (-0.48 to 0.98), T1WI signal intensity (-0.29 to 0.72), and pathological fractures (-0.76 to 0.25) as key features. AI techniques using noncontrast MRI improve diagnostic efficiency for spinal metastases. 
The EDF model outperformed the DL model, showing greater clinical potential. Spinal metastases, or cancer spreading to the spine, are common in patients with advanced cancer, often requiring extensive tests to determine the original tumor site. Our study explored whether artificial intelligence could make this process faster and more accurate using noncontrast MRI scans. We tested two methods: one based on radiologists' expertise in identifying imaging features and another using a deep learning model trained to analyze MRI images. The expert-based method was more reliable, correctly identifying the tumor site in 88% of cases when considering the top three likely diagnoses. This approach may help doctors reduce diagnostic time and improve patient care. Evidence Level: 3. Technical Efficacy: Stage 2.
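The top-1, top-2, and top-3 indicators above can be computed directly from predicted class probabilities. A minimal sketch, with a synthetic eight-class primary-site problem standing in for the study's label set:

```python
# Top-k accuracy sketch: a prediction counts as correct if the true primary
# site is among the k highest-scoring candidates (synthetic data; the
# eight-class label set is an assumption).
import numpy as np
from sklearn.metrics import top_k_accuracy_score

rng = np.random.default_rng(8)
n_classes = 8
y = rng.integers(0, n_classes, size=154)          # true primary site per patient
logits = rng.normal(size=(154, n_classes))
logits[np.arange(154), y] += 1.0                  # make the scores informative
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)

for k in (1, 2, 3):
    acc = top_k_accuracy_score(y, probs, k=k, labels=np.arange(n_classes))
    print(f"top-{k} accuracy: {acc:.2f}")
```

Top-k accuracy is non-decreasing in k, which is why the study's top-3 figure (0.88) is its most favorable headline number.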

Effect of artificial intelligence-aided differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management.

Grosu S, Fabritius MP, Winkelmann M, Puhr-Westerheide D, Ingenerf M, Maurus S, Graser A, Schulz C, Knösel T, Cyran CC, Ricke J, Kazmierczak PM, Ingrisch M, Wesp P

PubMed · Jul 1, 2025
Adenomatous colorectal polyps require endoscopic resection, as opposed to non-adenomatous hyperplastic colorectal polyps. This study aims to evaluate the effect of artificial intelligence (AI)-assisted differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management. Five board-certified radiologists retrospectively evaluated CT colonography images with colorectal polyps of all sizes and morphologies and decided whether the depicted polyps required endoscopic resection. After a primary unassisted reading based on current guidelines, a second reading was performed with access to the classification of a radiomics-based random-forest AI model labelling each polyp as "non-adenomatous" or "adenomatous". Performance was evaluated using polyp histopathology as the reference standard. 77 polyps in 59 patients, comprising 118 polyp image series (47% supine position, 53% prone position), were evaluated unassisted and AI-assisted by the five independent board-certified radiologists, resulting in a total of 1180 readings (subsequent polypectomy: yes or no). AI-assisted readings had higher accuracy (84% ± 1% vs. 76% ± 1% unassisted), sensitivity (85% ± 1% vs. 78% ± 6%), and specificity (82% ± 2% vs. 73% ± 8%) in selecting polyps eligible for polypectomy (p < 0.001). Inter-reader agreement was improved in the AI-assisted readings (Fleiss' kappa 0.92 vs. 0.69 unassisted). AI-based characterisation of colorectal polyps at CT colonography as a second reader might enable a more precise selection of polyps eligible for subsequent endoscopic resection. However, further studies are needed to confirm this finding, and histopathologic polyp evaluation is still mandatory. Question: This is the first study evaluating the impact of AI-based polyp classification in CT colonography on radiologists' therapy management. 
Findings: Compared with unassisted reading, AI-assisted reading had higher accuracy, sensitivity, and specificity in selecting polyps eligible for polypectomy. Clinical relevance: Integrating an AI tool for colorectal polyp classification in CT colonography could further improve radiologists' therapy recommendations.
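Inter-reader agreement above is reported as Fleiss' kappa, which extends Cohen's kappa to more than two raters. A from-scratch sketch of the standard formula, with five readers and a binary polypectomy decision on synthetic vote counts:

```python
# Fleiss' kappa sketch: chance-corrected agreement among multiple raters.
# counts[i, j] = number of raters assigning subject i to category j.
import numpy as np

def fleiss_kappa(counts):
    n_sub, _ = counts.shape
    n_rat = counts[0].sum()                               # raters per subject
    p_j = counts.sum(0) / (n_sub * n_rat)                 # category proportions
    p_i = ((counts ** 2).sum(1) - n_rat) / (n_rat * (n_rat - 1))
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()             # observed vs chance agreement
    return (p_bar - p_e) / (1 - p_e)

rng = np.random.default_rng(5)
yes = rng.integers(0, 6, size=118)                        # "resect" votes per polyp series
counts = np.stack([yes, 5 - yes], axis=1)                 # 5 readers, 2 categories
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```

Random votes yield a kappa near zero; the study's jump from 0.69 to 0.92 indicates substantially more consistent decisions with AI assistance.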

Machine-learning model based on ultrasomics for non-invasive evaluation of fibrosis in IgA nephropathy.

Huang Q, Huang F, Chen C, Xiao P, Liu J, Gao Y

PubMed · Jul 1, 2025
To develop and validate an ultrasomics-based machine-learning (ML) model for non-invasive assessment of interstitial fibrosis and tubular atrophy (IF/TA) in patients with IgA nephropathy (IgAN). In this multi-center retrospective study, 471 patients with primary IgA nephropathy from four institutions were included (training, n = 275; internal testing, n = 69; external testing, n = 127). Least absolute shrinkage and selection operator (LASSO) logistic regression with tenfold cross-validation was used to identify the most relevant features. The ML models were constructed based on ultrasomics. The Shapley Additive Explanations (SHAP) method was used to explore the interpretability of the models. Logistic regression analysis was employed to combine ultrasomics, clinical data, and ultrasound imaging characteristics, creating a comprehensive model. Receiver operating characteristic curves, calibration curves, decision curve analysis, and clinical impact curves were used to evaluate prediction performance. To differentiate between mild and moderate-to-severe IF/TA, three prediction models were developed: the Rad_SVM_Model, Clinic_LR_Model, and Rad_Clinic_Model. The areas under the curve of these three models were 0.861, 0.884, and 0.913 in the training cohort; 0.760, 0.860, and 0.894 in the internal validation cohort; and 0.794, 0.865, and 0.904 in the external validation cohort. SHAP identified the contribution of radiomics features. Difference analysis showed significant differences in radiomics features across fibrosis grades. The comprehensive model was superior to individual indicators and performed well. We developed and validated an ML model combining ultrasomics, clinical data, and clinical ultrasonic characteristics to assess the extent of fibrosis in IgAN. 
Question: There is currently no comprehensive ultrasomics-based machine-learning model for non-invasive assessment of the extent of immunoglobulin A nephropathy (IgAN) fibrosis. Findings: We developed and validated a robust and interpretable machine-learning model based on ultrasomics for assessing the degree of fibrosis in IgAN. Clinical relevance: The ultrasomics-based comprehensive model has potential for non-invasive assessment of fibrosis in IgAN, helping to evaluate disease progression.
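The decision curve analysis used to evaluate the comprehensive model above plots net benefit across decision thresholds, where NB(t) = TP/N − FP/N · t/(1 − t). A minimal sketch with synthetic predictions (the data, score distribution, and threshold grid are assumptions):

```python
# Decision-curve sketch: net benefit weighs true positives against false
# positives at each probability threshold t (synthetic labels and scores).
import numpy as np

def net_benefit(y, prob, t):
    pred = prob >= t
    tp = np.sum(pred & (y == 1))
    fp = np.sum(pred & (y == 0))
    n = len(y)
    return tp / n - fp / n * t / (1 - t)

rng = np.random.default_rng(6)
y = rng.integers(0, 2, size=471)
prob = np.clip(0.5 * y + rng.uniform(size=471) * 0.5, 0, 1)  # informative scores

thresholds = np.linspace(0.05, 0.95, 19)
curve = [net_benefit(y, prob, t) for t in thresholds]
# Reference strategy: treat everyone regardless of the model's output.
treat_all = [y.mean() - (1 - y.mean()) * t / (1 - t) for t in thresholds]
print(f"max model net benefit: {max(curve):.3f}")
```

A model is clinically useful over the threshold range where its curve sits above both the treat-all and treat-none (net benefit 0) reference lines.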