Page 25 of 343 (3422 results)

Two-Step Semi-Automated Classification of Choroidal Metastases on MRI: Orbit Localization via Bounding Boxes Followed by Binary Classification via Evolutionary Strategies.

Shi JS, McRae-Posani B, Haque S, Holodny A, Shalu H, Stember J

PubMed · Sep 9, 2025
The choroid of the eye is a rare site for metastatic tumor spread, and as small lesions on the periphery of brain MRI studies, these choroidal metastases are often missed. To improve their detection, we aimed to use artificial intelligence to distinguish between brain MRI scans containing normal orbits and those containing choroidal metastases. We present a novel hierarchical deep learning framework for sequential cropping and classification of brain MRI images to detect choroidal metastases. The key innovation of this approach lies in training an orbit localization network based on a YOLOv5 architecture to focus on the orbits, isolating the structures of interest and eliminating irrelevant background information. The initial localization sub-task ensures that the input to the subsequent classification network is restricted to the precise anatomical region where choroidal metastases are likely to occur. In Step 1, we trained a localization network on 386 T2-weighted axial brain MRI slices from 97 patients. Using the localized orbit images from Step 1, in Step 2 we trained a binary classifier network with 33 normal and 33 choroidal metastasis-containing brain MRIs. To address the challenges posed by the small dataset, we employed a data-efficient evolutionary strategies approach, which has been shown to avoid both overfitting and underfitting in small training sets. Our orbit localization model identified globes with 100% accuracy and a mean Average Precision averaged over Intersection-over-Union thresholds of 0.5 to 0.95 (mAP(0.5:0.95)) of 0.47 on held-out testing data. Similarly, the model generalized well to our Step 2 dataset, which included orbits demonstrating pathologies, achieving 100% accuracy and an mAP(0.5:0.95) of 0.44. The mAP(0.5:0.95) appeared low because the model could not distinguish left from right orbits.
Using the cropped orbits as inputs, our evolutionary strategies-trained convolutional neural network achieved a testing set area under the curve (AUC) of 0.93 (95% CI [0.83, 1.03]), with 100% sensitivity and 87% specificity at the optimal Youden's index. The semi-automated pipeline from brain MRI slices to choroidal metastasis classification demonstrates the utility of a sequential localization and classification approach and its clinical relevance for identifying small, "corner-of-the-image", easily overlooked lesions. AI = artificial intelligence; AUC = area under the curve; CNN = convolutional neural network; DNE = deep neuroevolution; IoU = intersection over union; mAP = mean average precision; ROC = receiver operating characteristic.
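The operating point reported above (100% sensitivity, 87% specificity) is typically chosen by maximizing Youden's J = sensitivity + specificity - 1 over candidate thresholds on the classifier's scores. A minimal pure-Python sketch of that selection (illustrative only, not the authors' code):

```python
def youden_optimal(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.

    scores: predicted probabilities; labels: 1 = metastasis, 0 = normal.
    Returns (threshold, sensitivity, specificity) at the optimum.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best = (None, -1.0, 0.0, 0.0)  # (threshold, J, sensitivity, specificity)
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        sens, spec = tp / pos, tn / neg
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j, sens, spec)
    t, _, sens, spec = best
    return t, sens, spec
```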

Individual hearts: computational models for improved management of cardiovascular disease.

van Osta N, van Loon T, Lumens J

PubMed · Sep 9, 2025
Cardiovascular disease remains a leading cause of morbidity and mortality worldwide, with conventional management often applying standardised approaches that struggle to address individual variability in increasingly complex patient populations. Computational models, both knowledge-driven and data-driven, have the potential to reshape cardiovascular medicine by offering innovative tools that integrate patient-specific information with physiological understanding or statistical inference to generate insights beyond conventional diagnostics. This review traces how computational modelling has evolved from theoretical research tools into clinical decision support systems that enable personalised cardiovascular care. We examine this evolution across three key domains: enhancing diagnostic accuracy through improved measurement techniques, deepening mechanistic insights into cardiovascular pathophysiology and enabling precision medicine through patient-specific simulations. The review covers the complementary strengths of data-driven approaches, which identify patterns in large clinical datasets, and knowledge-driven models, which simulate cardiovascular processes based on established biophysical principles. Applications range from artificial intelligence-guided measurements and model-informed diagnostics to digital twins that enable in silico testing of therapeutic interventions in the digital replicas of individual hearts. This review outlines the main types of cardiovascular modelling, highlighting their strengths, limitations and complementary potential through current clinical and research applications. We also discuss future directions, emphasising the need for interdisciplinary collaboration, pragmatic model design and integration of hybrid approaches. While progress is promising, challenges remain in validation, regulatory approval and clinical workflow integration. 
With continued development and thoughtful implementation, computational models hold the potential to enable more informed decision-making and advance truly personalised cardiovascular care.

A comprehensive review of techniques, algorithms, advancements, challenges, and clinical applications of multi-modal medical image fusion for improved diagnosis.

Zubair M, Hussain M, Albashrawi MA, Bendechache M, Owais M

PubMed · Sep 9, 2025
Multi-modal medical image fusion (MMIF) is increasingly recognized as an essential technique for enhancing diagnostic precision and facilitating effective clinical decision-making within computer-aided diagnosis systems. MMIF combines data from X-ray, MRI, CT, PET, SPECT, and ultrasound to create detailed, clinically useful images of patient anatomy and pathology. These integrated representations significantly advance diagnostic accuracy, lesion detection, and segmentation. This comprehensive review meticulously surveys the evolution, methodologies, algorithms, current advancements, and clinical applications of MMIF. We survey traditional fusion approaches, including pixel-, feature-, and decision-level methods, and delve into recent advancements driven by deep learning, generative models, and transformer-based architectures. A critical comparative analysis is presented between these conventional methods and contemporary techniques, highlighting differences in robustness, computational efficiency, and interpretability. The article addresses extensive clinical applications across oncology, neurology, and cardiology, demonstrating MMIF's vital role in precision medicine through improved patient-specific therapeutic outcomes. Moreover, the review thoroughly investigates the persistent challenges affecting MMIF's broad adoption, including issues related to data privacy, heterogeneity, computational complexity, interpretability of AI-driven algorithms, and integration within clinical workflows. It also identifies significant future research avenues, such as the integration of explainable AI, adoption of privacy-preserving federated learning frameworks, development of real-time fusion systems, and standardization efforts for regulatory compliance.
This review organizes key knowledge, outlines challenges, and highlights opportunities, guiding researchers, clinicians, and developers in advancing MMIF for routine clinical use and promoting personalized healthcare. To support further research, we provide a shared GitHub repository that includes popular multi-modal medical imaging datasets along with recent models.
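Of the three fusion levels the review compares, pixel-level fusion is the simplest: co-registered images are combined intensity by intensity. A hypothetical weighted-average sketch (a toy illustration of the category, not any method from the review):

```python
def fuse_pixel_level(img_a, img_b, w=0.5):
    """Pixel-level fusion by weighted averaging of two co-registered
    grayscale images, represented as nested lists of equal shape.
    w weights img_a; (1 - w) weights img_b."""
    return [[w * a + (1 - w) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

Feature- and decision-level methods instead combine learned representations or per-modality classifier outputs, at higher computational cost but often with better robustness.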

Prognostic Utility of a Deep Learning Radiomics Nomogram Integrating Ultrasound and Multi-Sequence MRI in Triple-Negative Breast Cancer Treated with Neoadjuvant Chemotherapy.

Cheng C, Peng X, Sang K, Zhao H, Wu D, Li H, Wang Y, Wang W, Xu F, Zhao J

PubMed · Sep 8, 2025
The aim of this study is to evaluate the prognostic performance of a nomogram integrating clinical parameters with deep learning radiomics (DLRN) features derived from ultrasound and multi-sequence magnetic resonance imaging (MRI) for predicting survival, recurrence, and metastasis in patients diagnosed with triple-negative breast cancer (TNBC) undergoing neoadjuvant chemotherapy (NAC). This retrospective, multicenter study included 103 patients with histopathologically confirmed TNBC across four institutions. The training group comprised 72 cases from the First People's Hospital of Lianyungang, while the validation group included 31 cases from three external centers. Clinical and follow-up data were collected to assess prognostic outcomes. Radiomics features were extracted from two-dimensional ultrasound and three-dimensional MRI images following image segmentation. A DLRN model was developed, and its prognostic performance was evaluated using the concordance index (C-index) in comparison with alternative modeling approaches. Risk stratification for postoperative recurrence was subsequently performed, and recurrence and metastasis rates were compared between low- and high-risk groups. The DLRN model demonstrated strong predictive capability for disease-free survival (DFS) (C-index: 0.859-0.887) and moderate performance for overall survival (OS) (C-index: 0.800-0.811). For DFS prediction, the DLRN model outperformed other models, whereas its performance in predicting OS was slightly lower than that of the combined MRI + US radiomics model. The 3-year recurrence and metastasis rates were significantly lower in the low-risk group than in the high-risk group (21.43-35.71% vs 77.27-82.35%). The preoperative DLRN model, integrating ultrasound and multi-sequence MRI, shows promise as a prognostic tool for recurrence, metastasis, and survival outcomes in patients with TNBC undergoing NAC.
The derived risk score may facilitate individualized prognostic evaluation and aid in preoperative risk stratification within clinical settings.
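The C-index used to evaluate the DLRN model measures how often, among comparable patient pairs, the higher risk score goes with the shorter survival. A schematic pure-Python version of Harrell's C-index (illustrative, not the study's implementation):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index from parallel lists of survival time, event
    indicator (1 = event observed, 0 = censored), and risk score.
    A pair is comparable when the earlier time is an observed event;
    ties in risk count as half a concordant pair."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # comparable pair
                den += 1
                if risks[i] > risks[j]:
                    num += 1.0
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den
```

A C-index of 1.0 means risk scores perfectly order survival times; 0.5 is chance-level.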

AI-Driven Fetal Liver Echotexture Analysis: A New Frontier in Predicting Neonatal Insulin Imbalance.

Da Correggio KS, Santos LO, Muylaert Barroso FS, Galluzzo RN, Chaves TZL, Wangenheim AV, Onofre ASC

PubMed · Sep 8, 2025
To evaluate the performance of artificial intelligence (AI)-based models in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. This diagnostic accuracy study analyzed ultrasound images of fetal livers from pregnancies between 37 and 42 weeks, including cases with and without gestational diabetes mellitus (GDM). Images were stored in Digital Imaging and Communications in Medicine (DICOM) format, annotated by experts, and converted to segmented masks after quality checks. A balanced dataset was created by randomly excluding overrepresented categories. Artificial intelligence classification models developed using the FastAI library (ResNet-18, ResNet-34, ResNet-50, EfficientNet-B0, and EfficientNet-B7) were trained to detect elevated C-peptide levels (>75th percentile) in umbilical cord blood at birth, based on fetal hepatic ultrasonographic images. Out of 2339 ultrasound images, 606 were excluded due to poor quality, resulting in 1733 images analyzed. Elevated C-peptide levels were observed in 34.3% of neonates. Among the 5 convolutional neural network (CNN) models evaluated, EfficientNet-B0 demonstrated the highest overall performance, achieving a sensitivity of 86.5%, specificity of 82.1%, positive predictive value (PPV) of 83.0%, negative predictive value (NPV) of 85.7%, accuracy of 84.3%, and an area under the ROC curve (AUC) of 0.83 in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. AI-based analysis of fetal liver echotexture via ultrasound effectively predicted elevated neonatal C-peptide levels, offering a promising non-invasive method for detecting insulin imbalance in newborns.
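The five reported metrics all derive from one binary confusion matrix. A small sketch showing how sensitivity, specificity, PPV, NPV, and accuracy relate to the raw counts (illustrative, not the study's code):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary classifier,
    given true/false positive and true/false negative counts."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on the positive class
        "specificity": tn / (tn + fp),   # recall on the negative class
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```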

FetalMLOps: operationalizing machine learning models for standard fetal ultrasound plane classification.

Testi M, Fiorentino MC, Ballabio M, Visani G, Ciccozzi M, Frontoni E, Moccia S, Vessio G

PubMed · Sep 8, 2025
Fetal standard plane detection is essential in prenatal care, enabling accurate assessment of fetal development and early identification of potential anomalies. Despite significant advancements in machine learning (ML) in this domain, its integration into clinical workflows remains limited, primarily due to the lack of standardized, end-to-end operational frameworks. To address this gap, we introduce FetalMLOps, the first comprehensive MLOps framework specifically designed for fetal ultrasound imaging. Our approach adopts a ten-step MLOps methodology that covers the entire ML lifecycle, with each phase meticulously adapted to clinical needs. From defining the clinical objective to curating and annotating fetal ultrasound datasets, every step ensures alignment with real-world medical practice. ETL (extract, transform, load) processes are developed to standardize, anonymize, and harmonize inputs, enhancing data quality. Model development prioritizes architectures that balance accuracy and efficiency, using clinically relevant evaluation metrics to guide selection. The best-performing model is deployed via a RESTful API, following MLOps best practices for continuous integration, delivery, and performance monitoring. Crucially, the framework embeds principles of explainability and environmental sustainability, promoting ethical, transparent, and responsible AI. By operationalizing ML models within a clinically meaningful pipeline, FetalMLOps bridges the gap between algorithmic innovation and real-world application, setting a precedent for trustworthy and scalable AI adoption in prenatal care.
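The anonymization step in an ETL pipeline like the one described can be as simple as stripping protected identifiers from image metadata before training. A hypothetical sketch; the tag names below are illustrative DICOM-style fields, not the actual FetalMLOps schema:

```python
# Hypothetical identifier list; real pipelines follow a vetted
# de-identification profile, not this illustrative set.
PHI_FIELDS = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def anonymize(record: dict) -> dict:
    """Drop protected identifiers from a metadata record and keep
    every other field unchanged."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}
```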

Explainable Machine Learning for Estimating the Contrast Material Arrival Time in Computed Tomography Pulmonary Angiography.

Meng XP, Yu H, Pan C, Chen FM, Li X, Wang J, Hu C, Fang X

PubMed · Sep 8, 2025
To establish an explainable machine learning (ML) approach using patient-related and noncontrast chest CT-derived features to predict the contrast material arrival time (TARR) in CT pulmonary angiography (CTPA). This retrospective study included consecutive patients referred for CTPA between September 2023 and October 2024. Sixteen clinical and 17 chest CT-derived parameters were used as inputs for the ML approach, which employed recursive feature elimination for feature selection and XGBoost with SHapley Additive exPlanations (SHAP) for explainable modeling. The prediction target was abnormal TARR of the pulmonary artery (i.e., TARR <7 s or >10 s), determined by the time to peak enhancement in the test bolus, with 2 models distinguishing these cases. External validation was conducted. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 666 patients (mean age, 70 [IQR, 59.3 to 78.0]; 46.8% female participants) were split into training (n = 353), testing (n = 151), and external validation (n = 162) sets. Of these, 86 cases (12.9%) had TARR <7 s, and 138 cases (20.7%) had TARR >10 s. The ML models exhibited good performance in their respective testing and external validation sets (AUC: 0.911 and 0.878 for TARR <7 s; 0.834 and 0.897 for TARR >10 s). SHAP analysis identified the measurements of the vena cava and pulmonary artery as key features for distinguishing abnormal TARR. The explainable ML algorithm accurately identified normal and abnormal TARR of the pulmonary artery, facilitating personalized CTPA scans.
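Recursive feature elimination, as used here for feature selection, repeatedly refits a model and discards the least important feature until the target count remains. A schematic pure-Python version (the study paired RFE with XGBoost importances; here a generic scoring function stands in):

```python
def recursive_feature_elimination(features, score_fn, n_keep):
    """Schematic RFE: score_fn maps the current feature subset to one
    importance value per feature; the least important feature is
    dropped each round until n_keep features remain."""
    selected = list(features)
    while len(selected) > n_keep:
        importances = score_fn(selected)          # refit on current subset
        worst = min(range(len(selected)), key=lambda i: importances[i])
        selected.pop(worst)                       # discard weakest feature
    return selected
```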

Predicting Breath Hold Task Compliance From Head Motion.

Weng TB, Porwal G, Srinivasan D, Inglis B, Rodriguez S, Jacobs DR, Schreiner PJ, Sorond FA, Sidney S, Lewis C, Launer L, Erus G, Nasrallah IM, Bryan RN, Dula AN

PubMed · Sep 8, 2025
Cerebrovascular reactivity (CVR) reflects changes in cerebral blood flow in response to an acute stimulus and is reflective of the brain's ability to match blood flow to demand. Functional MRI with a breath-hold task can be used to elicit this vasoactive response, but data validity hinges on subject compliance. Determining breath-hold compliance often requires external monitoring equipment. To develop a non-invasive and data-driven quality filter for breath-hold compliance using only measurements of head motion during imaging. Prospective cohort. Longitudinal data from healthy middle-aged subjects enrolled in the Coronary Artery Risk Development in Young Adults Brain MRI Study, N = 1141, 47.1% female. 3.0 Tesla gradient-echo MRI. Manual labelling of respiratory belt-monitored data was used to determine breath-hold compliance during the MRI scan. A model to estimate the probability of non-compliance with the breath-hold task was developed using measures of head motion. The model's ability to identify scans in which the participant was not performing the breath hold was summarized using performance metrics including sensitivity, specificity, recall, and F1 score. The model was applied to additional unmarked data to assess effects on population measures of CVR. Sensitivity analysis revealed that exclusion of non-compliant scans using the developed model did not affect median cerebrovascular reactivity (median [q1, q3] = 1.32 [0.96, 1.71]) compared to using manual review of respiratory belt data (1.33 [1.02, 1.74]) while reducing the interquartile range. The final model, based on a multi-layer perceptron machine learning classifier, estimated non-compliance with an accuracy of 76.9% and an F1 score of 69.5%, indicating a moderate balance between precision and recall for the identification of scans in which the participant was not compliant.
The developed model provides the probability of non-compliance with a breath-hold task, which could later be used as a quality filter or included in statistical analyses. TECHNICAL EFFICACY: Stage 3.
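The reported accuracy and F1 score summarize different things: accuracy counts all correct labels, while F1 balances precision and recall on the non-compliant class. A small sketch of both computations (illustrative, not the study's code):

```python
def f1_and_accuracy(y_true, y_pred):
    """Accuracy and F1 (harmonic mean of precision and recall) for a
    binary label, here 1 = non-compliant with the breath-hold task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1
```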

Automatic bone age assessment: a Turkish population study.

Öztürk S, Yüce M, Pamuk GG, Varlık C, Cimilli AT, Atay M

PubMed · Sep 8, 2025
Established methods for bone age assessment (BAA), such as the Greulich and Pyle atlas, suffer from variability due to population differences and observer discrepancies. Although automated BAA offers speed and consistency, limited research exists on its performance across different populations using deep learning. This study examines deep learning algorithms on the Turkish population to enhance bone age models by understanding demographic influences. We analyzed reports from Bağcılar Hospital's Health Information Management System between April 2012 and September 2023 using "bone age" as a keyword. Patient images were re-evaluated by an experienced radiologist and anonymized. A total of 2,730 hand radiographs from Bağcılar Hospital (Turkish population), 12,572 from the Radiological Society of North America (RSNA), and 6,185 from the Radiological Hand Pose Estimation (RHPE) public datasets were collected, along with corresponding bone ages and gender information. A random set of 546 radiographs (273 from Bağcılar, 273 from the public datasets) was held out as an internal test set, stratified by bone age; the remaining data were used for training and validation. BAAs were generated using a modified InceptionV3 model on 500 × 500-pixel images, selecting the model with the lowest mean absolute error (MAE) on the validation set. Three models were trained and tested based on dataset origin: Bağcılar (Turkish), public (RSNA-RHPE), and a Combined model. Internal test set predictions of the Combined model estimated bone age within less than 6, 12, 18, and 24 months at rates of 44%, 73%, 87%, and 94%, respectively. The MAE was 9.2 months in the overall internal test set, 7 months on the public test set, and 11.5 months on the Bağcılar internal test data. The Bağcılar-only model had an MAE of 12.7 months on the Bağcılar internal test data.
Despite less training data, there was no significant difference between the Combined and Bağcılar models on the Bağcılar dataset (P > 0.05). The public model showed an MAE of 16.5 months on the Bağcılar dataset, significantly worse than the other models (P < 0.05). We developed an automatic BAA model including the Turkish population, one of the few such studies using deep learning. Despite challenges from population differences and data heterogeneity, these models can be effectively used in various clinical settings. Model accuracy can improve over time with cumulative data, and publicly available datasets may further refine them. Our approach enables more accurate and efficient BAAs, supporting healthcare professionals where traditional methods are time-consuming and variable. The developed automated BAA model for the Turkish population offers a reliable and efficient alternative to traditional methods. By utilizing deep learning with diverse datasets from Bağcılar Hospital and publicly available sources, the model minimizes assessment time and reduces variability. This advancement enhances clinical decision-making, supports standardized BAA practices, and improves patient care in various healthcare settings.
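The two headline numbers above, MAE and the fraction of predictions within 6/12/18/24 months, come straight from the per-image absolute errors. A minimal sketch of both (illustrative, not the authors' evaluation code):

```python
def bone_age_metrics(pred_months, true_months, tolerances=(6, 12, 18, 24)):
    """Mean absolute error plus the fraction of predictions falling
    strictly within each tolerance window, all in months."""
    errors = [abs(p - t) for p, t in zip(pred_months, true_months)]
    mae = sum(errors) / len(errors)
    within = {tol: sum(e < tol for e in errors) / len(errors) for tol in tolerances}
    return mae, within
```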

Evaluating artificial intelligence for a focal nodular hyperplasia diagnosis using magnetic resonance imaging: preliminary findings.

Kantarcı M, Kızılgöz V, Terzi R, Kılıç AE, Kabalcı H, Durmaz Ö, Tokgöz N, Harman M, Sağır Kahraman A, Avanaz A, Aydın S, Elpek GÖ, Yazol M, Aydınlı B

PubMed · Sep 8, 2025
This study aimed to evaluate the effectiveness of artificial intelligence (AI) in diagnosing focal nodular hyperplasia (FNH) of the liver using magnetic resonance imaging (MRI) and compare its performance with that of radiologists. In the first phase of the study, the MRIs of 60 patients (30 with FNH and 30 with no lesions or with lesions other than FNH) were processed using a segmentation program and introduced to an AI model. After the learning process, the MRIs of 42 different patients, with which the AI model had no prior experience, were introduced to the system. In addition, a radiology resident and a radiology specialist evaluated the same MR sequences for each patient. Sensitivity and specificity values were obtained from all three reviews. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the AI model were 0.769, 0.966, 0.909, and 0.903, respectively. Its sensitivity and specificity were higher than those of the radiology resident and lower than those of the radiology specialist. Comparison of the specialist and the AI model revealed a good level of agreement, with a kappa (κ) value of 0.777. For the diagnosis of FNH, the sensitivity, specificity, PPV, and NPV of the AI model were higher than those of the radiology resident and lower than those of the radiology specialist. With additional studies focused on different specific liver lesions, AI models are expected to diagnose each liver lesion with high accuracy in the future. AI is being studied to provide assisted or automated interpretation of radiological images with accurate and reproducible imaging diagnoses.
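The agreement statistic reported above, Cohen's kappa, corrects observed agreement between two raters for the agreement expected by chance from their label marginals. A schematic pure-Python version (illustrative, not the study's code):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same cases:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's label marginals."""
    n = len(ratings_a)
    p_o = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    labels = set(ratings_a) | set(ratings_b)
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

By a common reading, values around 0.6 to 0.8 (such as the 0.777 reported here) indicate substantial agreement.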