Page 49 of 1331325 results

Machine Learning Models for Predicting Mortality in Pneumonia Patients.

Pavlovic V, Haque MS, Grubor N, Pavlovic A, Stanisavljevic D, Milic N

PubMed | Jun 26 2025
Pneumonia remains a significant cause of hospital mortality, prompting the need for precise mortality prediction methods. This study conducted a systematic review to identify predictors of mortality using Machine Learning (ML) and applied these methods to hospitalized pneumonia patients at the University Clinical Centre Zvezdara. The systematic review identified 16 studies (313,572 patients), revealing common mortality predictors including age, oxygen levels, and albumin. A Random Forest (RF) model was developed using local data (n=343), achieving an accuracy of 99% and an AUC of 0.99. Key predictors identified were chest X-ray worsening, ventilator use, age, and oxygen support. ML demonstrated high potential for accurately predicting pneumonia mortality, surpassing traditional severity scores and showing practical clinical utility.
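A minimal sketch of how a Random Forest mortality classifier like the one described might be trained and evaluated with scikit-learn. The data, feature semantics, and hyperparameters below are entirely synthetic and illustrative, not the study's actual cohort or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 343  # cohort size quoted in the abstract; the data here are synthetic
# Stand-ins for predictors such as age, oxygen support, ventilator use,
# and chest X-ray worsening (random values, for illustration only)
X = rng.normal(size=(n, 4))
# Synthetic binary outcome loosely tied to the first two features
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, rf.predict(X_te))
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"accuracy={acc:.2f}, AUC={auc:.2f}")
```

On real clinical data, the reported accuracy and AUC would of course depend on the cohort and feature set, and out-of-sample validation matters far more than in-sample figures.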

Artificial Intelligence in Cognitive Decline Diagnosis: Evaluating Cutting-Edge Techniques and Modalities.

Gharehbaghi A, Babic A

PubMed | Jun 26 2025
This paper presents the results of a scoping review that examines the potential of Artificial Intelligence (AI) in the early diagnosis of Cognitive Decline (CD), which is regarded as a key issue in elderly health. The review encompasses peer-reviewed publications from 2020 to 2025, including scientific journals and conference proceedings. Over 70% of the studies rely on magnetic resonance imaging (MRI) as the input to the AI models, with a high diagnostic accuracy of 98%. Integrating relevant clinical data and electroencephalograms (EEG) with deep learning methods enhances diagnostic accuracy in clinical settings. Recent studies have also explored the use of natural language processing models for detecting CD at its early stages, with an accuracy of 75%, showing strong potential for use in appropriate pre-clinical environments.

Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

PubMed | Jun 26 2025
To develop and validate a computed tomography (CT)‑based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low‑risk types (A, AB, and B1) or high‑risk types (B2 and B3). Radiomics features and deep transfer learning features were extracted from venous‑phase contrast‑enhanced CT images using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator (LASSO) regression identified 20 key predictors. Six classifiers (decision tree, gradient boosting machine, k‑nearest neighbors, naïve Bayes, random forest (RF), and support vector machine) were trained on five feature sets: the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real‑time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC), at 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively. The corresponding AUC values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956, respectively. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real‑time individual prediction probabilities with interpretative outputs.
We developed a CT‑based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; this model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision‑making.
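The feature-reduction step described in this abstract, dimensionality reduction followed by a sparsity-inducing (LASSO-style) selector, can be sketched in scikit-learn. Everything below is synthetic and illustrative; the component counts, penalty strength, and data bear no relation to the study's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(173, 500))   # 173 training patients, synthetic high-dim features
y = rng.integers(0, 2, size=173)  # synthetic low- vs high-risk labels

# PCA to decorrelate and compress, then an L1-penalised logistic model
# whose sparse coefficients act as the final feature selector.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=50, random_state=1)),
    ("lasso", LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
]).fit(X, y)

coef = pipe.named_steps["lasso"].coef_.ravel()
n_selected = int(np.sum(coef != 0))
print(f"non-zero components retained: {n_selected}")
```

In practice the penalty strength `C` is tuned (e.g., by cross-validation) so that the number of retained predictors lands near a target such as the 20 reported here.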

Design and Optimization of an automatic deep learning-based cerebral reperfusion scoring (TICI) using thrombus localization.

Folcher A, Piters J, Wallach D, Guillard G, Ognard J, Gentric JC

PubMed | Jun 26 2025
The Thrombolysis in Cerebral Infarction (TICI) scale is widely used to assess angiographic outcomes of mechanical thrombectomy despite significant variability. Our objective was to create and optimize an artificial intelligence (AI)-based classification model for digital subtraction angiography (DSA) TICI scoring. Using a monocentric DSA dataset of thrombectomies and a platform for medical image analysis, independent readers labeled each series according to TICI score and marked each thrombus. A convolutional neural network (CNN) classification model was created to classify TICI scores into 2 groups (TICI 0, 1, or 2a versus TICI 2b, 2c, or 3) and 3 groups (TICI 0, 1, or 2a versus TICI 2b versus TICI 2c or 3). The algorithm was first tested alone; thrombus positions were then provided to the algorithm, first by manual placement and then via a thrombus detection module. A total of 422 patients were enrolled in the study, and 2492 thrombi were annotated on the TICI-labeled series. The model was trained on a total of 1609 DSA series. The two-class classification model had a specificity of 0.97 ±0.01 and a sensitivity of 0.86 ±0.01. The 3-class models showed insufficient performance, even when combined with the true thrombus positions, with F1 scores for TICI 2b classification of 0.50 and 0.55 ±0.07, respectively. The automatic thrombus detection module did not enhance the performance of the 3-class model, with an F1 score for the TICI 2b class measured at 0.50 ±0.07. The AI model provided a reproducible 2-class (TICI 0, 1, or 2a versus 2b, 2c, or 3) classification according to the TICI scale. Its performance in distinguishing three classes (TICI 0, 1, or 2a versus 2b versus 2c or 3) remains insufficient for clinical practice. Automatic thrombus detection did not improve the model's performance.
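The sensitivity and specificity figures quoted for the 2-class model come straight out of a confusion matrix. A small illustration with scikit-learn, using invented labels (0 = TICI 0/1/2a, 1 = TICI 2b/2c/3), not the study's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth and predicted 2-class TICI labels
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the reperfused (2b/2c/3) class
specificity = tn / (tn + fp)   # true-negative rate on the 0/1/2a class
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

The ±0.01 intervals in the abstract would then come from repeating this computation across folds or bootstrap resamples.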

Predicting brain metastases in EGFR-positive lung adenocarcinoma patients using pre-treatment CT lung imaging data.

He X, Guan C, Chen T, Wu H, Su L, Zhao M, Guo L

PubMed | Jun 26 2025
This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, using single-modality pre-treatment lung CT image data to achieve early warning of brain metastasis (BM) risk within 2 years in EGFR-positive lung adenocarcinoma. After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were ultimately enrolled in this study, including 93 patients with BM and 80 without BM. Radiomic features were extracted from manually segmented lung nodule regions, and a selection of features was used to develop radiomics models. For deep learning, ROI-level CT images were processed using several deep learning networks, including the novel vision mamba network, which was applied for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparisons of area under the curve (AUC) values using the DeLong test. Among the models evaluated, the fused vision mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, an F1-score of 0.70, and an accuracy of 0.76. This fusion model outperformed both the radiomics-only and deep-learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients. The fused vision mamba model, using single-modality CT imaging data, significantly enhances the prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
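Feature-level fusion, in its simplest form, concatenates the two feature views and trains a single classifier on the result. A hedged sketch with synthetic data; the feature dimensions, classifier, and labels are illustrative, not the study's actual architecture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 173  # enrolled cohort size from the abstract; features here are synthetic
radiomic = rng.normal(size=(n, 30))   # stand-in for hand-crafted radiomic features
deep = rng.normal(size=(n, 64))       # stand-in for deep (e.g., vision-mamba) embeddings
# Synthetic BM label weakly driven by one feature from each view
y = (radiomic[:, 0] + deep[:, 0] + rng.normal(scale=0.8, size=n) > 0).astype(int)

# Feature-level fusion: concatenate both views, then fit one classifier
fused = np.concatenate([radiomic, deep], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, y)
auc = roc_auc_score(y, clf.predict_proba(fused)[:, 1])
print(f"in-sample AUC of fused model: {auc:.2f}")
```

A real evaluation would use held-out data (as the study does via ROC, DCA, and the DeLong test) rather than the in-sample AUC printed here.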

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

PubMed | Jun 25 2025
<b>Aims/Background</b> Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. <b>Methods</b> A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into a training set, validation set, and test set, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. The receiver operating characteristic (ROC) curve was used to analyse the performance of the AI model, physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted Kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in BI-RADS classification results before and after AI model assistance for each physician. <b>Results</b> According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. 
In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (<i>p</i> < 0.001). <b>Conclusion</b> A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing missed diagnoses and misdiagnoses.
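The inter-reader consistency statistic used in this study, Cohen's weighted kappa, is available in scikit-learn. The reader labels below are invented for illustration, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS categories (3, 4, 5) assigned by two readers
reader_a = [3, 4, 4, 5, 3, 4, 5, 5, 3, 4]
reader_b = [3, 4, 5, 5, 3, 4, 4, 5, 3, 4]

# Quadratic weights penalise large category disagreements more heavily
# than adjacent-category disagreements, which suits ordinal BI-RADS scores.
kappa = cohen_kappa_score(reader_a, reader_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```

Comparing each physician's kappa against the AI-assisted reference before and after assistance is one way to quantify the consistency gain the abstract reports.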

Regional free-water diffusion is more strongly related to neuroinflammation than neurodegeneration.

Sumra V, Hadian M, Dilliott AA, Farhan SMK, Frank AR, Lang AE, Roberts AC, Troyer A, Arnott SR, Marras C, Tang-Wai DF, Finger E, Rogaeva E, Orange JB, Ramirez J, Zinman L, Binns M, Borrie M, Freedman M, Ozzoude M, Bartha R, Swartz RH, Munoz D, Masellis M, Black SE, Dixon RA, Dowlatshahi D, Grimes D, Hassan A, Hegele RA, Kumar S, Pasternak S, Pollock B, Rajji T, Sahlas D, Saposnik G, Tartaglia MC

PubMed | Jun 25 2025
Recent research has suggested that neuroinflammation may be important in the pathogenesis of neurodegenerative diseases. Free-water diffusion (FWD) has been proposed as a non-invasive neuroimaging-based biomarker for neuroinflammation. Free-water maps were generated using diffusion MRI data in 367 patients from the Ontario Neurodegenerative Disease Research Initiative (108 Alzheimer's Disease/Mild Cognitive Impairment, 42 Frontotemporal Dementia, 37 Amyotrophic Lateral Sclerosis, 123 Parkinson's Disease, and 58 vascular disease-related Cognitive Impairment). The ability of FWD to predict neuroinflammation and neurodegeneration from biofluids was estimated using plasma glial fibrillary acidic protein (GFAP) and neurofilament light chain (NfL), respectively. Recursive Feature Elimination (RFE) performed best of all the feature selection algorithms used and revealed regional specificity for the areas that are the most important features for predicting GFAP over NfL concentration. Deep learning models using the selected features and demographic information predicted GFAP better than NfL. Based on feature selection and deep learning methods, FWD was more strongly related to GFAP concentration (a measure of astrogliosis) than to NfL (a measure of neuro-axonal damage) in terms of predictive performance, across neurodegenerative disease groups. Non-invasive markers of neurodegeneration, such as MRI structural imaging, already exist, while non-invasive markers of neuroinflammation are not available. Our results support the use of FWD as a non-invasive neuroimaging-based biomarker for neuroinflammation.
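Recursive Feature Elimination, the selection method that performed best here, repeatedly fits a model and discards the weakest features until a target count remains. A minimal scikit-learn sketch on synthetic data (the sample and feature counts echo the abstract but the data are random, not the ONDRI cohort):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: 367 "patients", 20 regional free-water features,
# of which only a few truly drive the (GFAP-like) continuous target.
X, y = make_regression(n_samples=367, n_features=20, n_informative=5,
                       noise=5.0, random_state=3)

selector = RFE(LinearRegression(), n_features_to_select=5).fit(X, y)
selected = np.flatnonzero(selector.support_)
print("retained feature indices:", selected)
```

The regional specificity the study reports corresponds to which anatomical regions' free-water features survive this elimination for GFAP versus NfL.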

Comparative Analysis of Automated vs. Expert-Designed Machine Learning Models in Age-Related Macular Degeneration Detection and Classification.

Durmaz Engin C, Beşenk U, Özizmirliler D, Selver MA

PubMed | Jun 25 2025
To compare the effectiveness of expert-designed machine learning models and code-free automated machine learning (AutoML) models in classifying optical coherence tomography (OCT) images for detecting age-related macular degeneration (AMD) and distinguishing between its dry and wet forms. Custom models were developed by an artificial intelligence expert using the EfficientNet V2 architecture, while AutoML models were created by an ophthalmologist utilizing LobeAI with transfer learning via ResNet-50 V2. Both models were designed to differentiate normal OCT images from AMD and to also distinguish between dry and wet AMD. The models were trained and tested using an 80:20 split, with each diagnostic group containing 500 OCT images. Performance metrics, including sensitivity, specificity, accuracy, and F1 scores, were calculated and compared. The expert-designed model achieved an overall accuracy of 99.67% for classifying all images, with F1 scores of 0.99 or higher across all binary class comparisons. In contrast, the AutoML model achieved an overall accuracy of 89.00%, with F1 scores ranging from 0.86 to 0.90 in binary comparisons. Notably lower recall was observed for dry AMD vs. normal (0.85) in the AutoML model, indicating challenges in correctly identifying dry AMD. While the AutoML models demonstrated acceptable performance in identifying and classifying AMD cases, the expert-designed models significantly outperformed them. The use of advanced neural network architectures and rigorous optimization in the expert-developed models underscores the continued necessity of expert involvement in the development of high-precision diagnostic tools for medical image classification.
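The per-class F1 scores this comparison hinges on are straightforward to compute with scikit-learn. The 3-class labels below (0 = normal, 1 = dry AMD, 2 = wet AMD) are invented for illustration, not the study's test set:

```python
from sklearn.metrics import f1_score

# Hypothetical ground truth and predictions for the 3-class OCT task
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 0, 1]
y_pred = [0, 0, 1, 0, 1, 2, 2, 1, 0, 1]

# Per-class F1 (the abstract reports these for each binary comparison)
# and macro F1 as a single summary across classes
per_class = f1_score(y_true, y_pred, average=None)
macro = f1_score(y_true, y_pred, average="macro")
print("per-class F1:", per_class.round(2), "macro F1:", round(macro, 2))
```

Inspecting the per-class scores, as the authors do, is what exposes weaknesses such as the AutoML model's lower recall on dry AMD.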

Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images

Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre

arXiv preprint | Jun 25 2025
Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements depends on the operator and is time-consuming. Hence, automatic computer-assisted methods are demanded in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images. We extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available on GitHub at https://github.com/13204942/RadiomicsImageFusion_FetalUS.

Opportunistic Osteoporosis Diagnosis via Texture-Preserving Self-Supervision, Mixture of Experts and Multi-Task Integration

Jiaxing Huang, Heng Guo, Le Lu, Fan Yang, Minfeng Xu, Ge Yang, Wei Luo

arXiv preprint | Jun 25 2025
Osteoporosis, characterized by reduced bone mineral density (BMD) and compromised bone microstructure, increases fracture risk in aging populations. While dual-energy X-ray absorptiometry (DXA) is the clinical standard for BMD assessment, its limited accessibility hinders diagnosis in resource-limited regions. Opportunistic computed tomography (CT) analysis has emerged as a promising alternative for osteoporosis diagnosis using existing imaging data. Current approaches, however, face three limitations: (1) underutilization of unlabeled vertebral data, (2) systematic bias from device-specific DXA discrepancies, and (3) insufficient integration of clinical knowledge such as spatial BMD distribution patterns. To address these, we propose a unified deep learning framework with three innovations. First, a self-supervised learning method using radiomic representations to leverage unlabeled CT data and preserve bone texture. Second, a Mixture of Experts (MoE) architecture with learned gating mechanisms to enhance cross-device adaptability. Third, a multi-task learning framework integrating osteoporosis diagnosis, BMD regression, and vertebra location prediction. Validated across three clinical sites and an external hospital, our approach demonstrates superior generalizability and accuracy over existing methods for opportunistic osteoporosis screening and diagnosis.
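The Mixture of Experts idea in the second innovation, a learned gate that weights several expert heads per sample, can be sketched in plain NumPy. The dimensions, linear experts, and random weights below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax for the gating weights
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(4)
x = rng.normal(size=(8, 16))  # a batch of 8 vertebra feature vectors (synthetic)

n_experts, d_out = 3, 1
# Each "expert" here is just a linear head; in the paper they would be
# learned sub-networks, possibly specialised per acquisition device.
experts = [rng.normal(size=(16, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(16, n_experts))

gates = softmax(x @ gate_w)                      # (8, 3) mixture weights per sample
expert_out = np.stack([x @ w for w in experts])  # (3, 8, 1) stacked expert outputs
# Gated combination: weight each expert's output by its gate value
bmd_pred = np.einsum("bn,nbo->bo", gates, expert_out)
print("prediction shape:", bmd_pred.shape)
```

Letting the gate condition on input features is what gives the MoE its cross-device adaptability: samples from different scanners can be routed toward different experts.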
