
Prognostic Utility of a Deep Learning Radiomics Nomogram Integrating Ultrasound and Multi-Sequence MRI in Triple-Negative Breast Cancer Treated with Neoadjuvant Chemotherapy.

Cheng C, Peng X, Sang K, Zhao H, Wu D, Li H, Wang Y, Wang W, Xu F, Zhao J

pubmed logopapers · Sep 8 2025
The aim of this study is to evaluate the prognostic performance of a nomogram integrating clinical parameters with deep learning radiomics (DLRN) features derived from ultrasound and multi-sequence magnetic resonance imaging (MRI) for predicting survival, recurrence, and metastasis in patients diagnosed with triple-negative breast cancer (TNBC) undergoing neoadjuvant chemotherapy (NAC). This retrospective, multicenter study included 103 patients with histopathologically confirmed TNBC across four institutions. The training group comprised 72 cases from the First People's Hospital of Lianyungang, while the validation group included 31 cases from three external centers. Clinical and follow-up data were collected to assess prognostic outcomes. Radiomics features were extracted from two-dimensional ultrasound and three-dimensional MRI images following image segmentation. A DLRN model was developed, and its prognostic performance was evaluated using the concordance index (C-index) in comparison with alternative modeling approaches. Risk stratification for postoperative recurrence was subsequently performed, and recurrence and metastasis rates were compared between low- and high-risk groups. The DLRN model demonstrated strong predictive capability for disease-free survival (DFS) (C-index: 0.859-0.887) and moderate performance for overall survival (OS) (C-index: 0.800-0.811). For DFS prediction, the DLRN model outperformed other models, whereas its performance in predicting OS was slightly lower than that of the combined MRI + US radiomics model. The 3-year recurrence and metastasis rates were significantly lower in the low-risk group than in the high-risk group (21.43-35.71% vs 77.27-82.35%). The preoperative DLRN model, integrating ultrasound and multi-sequence MRI, shows promise as a prognostic tool for recurrence, metastasis, and survival outcomes in patients with TNBC undergoing NAC. The derived risk score may facilitate individualized prognostic evaluation and aid in preoperative risk stratification within clinical settings.
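As an illustration of the survival-analysis workflow this abstract describes, the sketch below fits a Cox model on combined clinical and deep learning radiomics features, reports a concordance index (C-index) on a held-out set, and splits patients into low- and high-risk groups. It is a minimal sketch, not the authors' code: the file, column, and feature names are hypothetical, and lifelines is assumed as the survival library.

```python
# Minimal sketch (not the study's code): Cox model on clinical + DLR features,
# C-index on an external set, and median-split risk stratification.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.read_csv("tnbc_dlrn_features.csv")  # hypothetical per-patient table
covariates = ["age", "clinical_T_stage", "us_dlr_score", "mri_dlr_score"]  # assumed names

cph = CoxPHFitter()
cph.fit(df[covariates + ["dfs_months", "dfs_event"]],
        duration_col="dfs_months", event_col="dfs_event")

# C-index on a held-out validation set (higher = better discrimination)
val = pd.read_csv("tnbc_dlrn_features_external.csv")  # hypothetical external cohort
risk = cph.predict_partial_hazard(val[covariates])
c_index = concordance_index(val["dfs_months"], -risk, val["dfs_event"])
print(f"Validation C-index for DFS: {c_index:.3f}")

# Risk stratification: split patients at the median risk score into low/high groups
val["risk_group"] = (risk > risk.median()).map({True: "high", False: "low"})
print(val.groupby("risk_group")["dfs_event"].mean())
```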

AI-Driven Fetal Liver Echotexture Analysis: A New Frontier in Predicting Neonatal Insulin Imbalance.

Da Correggio KS, Santos LO, Muylaert Barroso FS, Galluzzo RN, Chaves TZL, Wangenheim AV, Onofre ASC

pubmed logopapers · Sep 8 2025
To evaluate the performance of artificial intelligence (AI)-based models in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. This diagnostic accuracy study analyzed ultrasound images of fetal livers from pregnancies between 37 and 42 weeks, including cases with and without gestational diabetes mellitus (GDM). Images were stored in Digital Imaging and Communications in Medicine (DICOM) format, annotated by experts, and converted to segmented masks after quality checks. A balanced dataset was created by randomly excluding overrepresented categories. Artificial intelligence classification models developed using the FastAI library (ResNet-18, ResNet-34, ResNet-50, EfficientNet-B0, and EfficientNet-B7) were trained to detect elevated C-peptide levels (>75th percentile) in umbilical cord blood at birth, based on fetal hepatic ultrasonographic images. Out of 2339 ultrasound images, 606 were excluded due to poor quality, resulting in 1733 images analyzed. Elevated C-peptide levels were observed in 34.3% of neonates. Among the 5 CNN models evaluated, EfficientNet-B0 demonstrated the highest overall performance, achieving a sensitivity of 86.5%, specificity of 82.1%, positive predictive value (PPV) of 83.0%, negative predictive value (NPV) of 85.7%, accuracy of 84.3%, and an area under the ROC curve (AUC) of 0.83 in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. AI-based analysis of fetal liver echotexture via ultrasound effectively predicted elevated neonatal C-peptide levels, offering a promising non-invasive method for detecting insulin imbalance in newborns.
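A minimal sketch of the training setup named in the abstract, using the FastAI library with one of the listed backbones as a binary "elevated C-peptide" classifier. The folder layout, label names, and hyperparameters are assumptions; the study's own preprocessing (DICOM conversion, segmentation masks, dataset balancing) is not reproduced here.

```python
# Minimal sketch: FastAI binary classifier on segmented fetal-liver images.
# Expects images under fetal_liver/elevated/ and fetal_liver/normal/ (hypothetical layout).
from fastai.vision.all import *

dls = ImageDataLoaders.from_folder(
    "fetal_liver", valid_pct=0.2, seed=42,
    item_tfms=Resize(224), bs=32)

learn = vision_learner(dls, resnet18, metrics=[accuracy, RocAucBinary()])
learn.fine_tune(5)

# For EfficientNet-B0 (via timm, if installed), the same call takes a model name string:
# learn = vision_learner(dls, "efficientnet_b0", metrics=[accuracy, RocAucBinary()])
```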

FetalMLOps: operationalizing machine learning models for standard fetal ultrasound plane classification.

Testi M, Fiorentino MC, Ballabio M, Visani G, Ciccozzi M, Frontoni E, Moccia S, Vessio G

pubmed logopapers · Sep 8 2025
Fetal standard plane detection is essential in prenatal care, enabling accurate assessment of fetal development and early identification of potential anomalies. Despite significant advancements in machine learning (ML) in this domain, its integration into clinical workflows remains limited, primarily due to the lack of standardized, end-to-end operational frameworks. To address this gap, we introduce FetalMLOps, the first comprehensive MLOps framework specifically designed for fetal ultrasound imaging. Our approach adopts a ten-step MLOps methodology that covers the entire ML lifecycle, with each phase meticulously adapted to clinical needs. From defining the clinical objective to curating and annotating fetal US datasets, every step ensures alignment with real-world medical practice. ETL (extract, transform, load) processes are developed to standardize, anonymize, and harmonize inputs, enhancing data quality. Model development prioritizes architectures that balance accuracy and efficiency, using clinically relevant evaluation metrics to guide selection. The best-performing model is deployed via a RESTful API, following MLOps best practices for continuous integration, delivery, and performance monitoring. Crucially, the framework embeds principles of explainability and environmental sustainability, promoting ethical, transparent, and responsible AI. By operationalizing ML models within a clinically meaningful pipeline, FetalMLOps bridges the gap between algorithmic innovation and real-world application, setting a precedent for trustworthy and scalable AI adoption in prenatal care.
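The deployment step described above (serving the best model via a RESTful API) can be sketched as follows. FastAPI, the TorchScript export, and the plane label set are assumptions for illustration, not the paper's stated stack.

```python
# Minimal sketch: serving a standard-plane classifier behind a REST endpoint.
import io

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision import transforms

app = FastAPI(title="FetalMLOps plane classifier (sketch)")

model = torch.jit.load("fetal_plane_classifier.pt").eval()  # hypothetical exported model
PLANES = ["abdomen", "brain", "femur", "thorax", "other"]   # hypothetical label set

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1).squeeze(0)
    top = int(probs.argmax())
    return {"plane": PLANES[top], "confidence": float(probs[top])}
```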

Explainable Machine Learning for Estimating the Contrast Material Arrival Time in Computed Tomography Pulmonary Angiography.

Meng XP, Yu H, Pan C, Chen FM, Li X, Wang J, Hu C, Fang X

pubmed logopapers · Sep 8 2025
To establish an explainable machine learning (ML) approach using patient-related and noncontrast chest CT-derived features to predict the contrast material arrival time (TARR) in CT pulmonary angiography (CTPA). This retrospective study included consecutive patients referred for CTPA between September 2023 and October 2024. Sixteen clinical and 17 chest CT-derived parameters were used as inputs for the ML approach, which employed recursive feature elimination for feature selection and XGBoost with SHapley Additive exPlanations (SHAP) for explainable modeling. The prediction target was abnormal TARR of the pulmonary artery (ie, TARR <7 seconds or >10 s), determined by the time to peak enhancement in the test bolus, with 2 models distinguishing these cases. External validation was conducted. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 666 patients (mean age, 70 [IQR, 59.3 to 78.0]; 46.8% female participants) were split into training (n = 353), testing (n = 151), and external validation (n = 162) sets. In total, 86 cases (12.9%) had TARR <7 seconds, and 138 cases (20.7%) had TARR >10 seconds. The ML models exhibited good performance in their respective testing and external validation sets (AUC: 0.911 and 0.878 for TARR <7 s; 0.834 and 0.897 for TARR >10 s). SHAP analysis identified the measurements of the vena cava and pulmonary artery as key features for distinguishing abnormal TARR. The explainable ML algorithm accurately identified normal and abnormal TARR of the pulmonary artery, facilitating personalized CTPA scans.
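A minimal sketch of the modeling approach described: recursive feature elimination over the tabular inputs, an XGBoost classifier for one of the two abnormal-TARR targets, and SHAP values for explanation. Column names and hyperparameters are hypothetical; only the libraries named in the abstract (XGBoost, SHAP) plus scikit-learn are assumed.

```python
# Minimal sketch (not the study's code) of RFE + XGBoost + SHAP for abnormal TARR.
import pandas as pd
import shap
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("ctpa_features.csv")                       # hypothetical table of the 33 inputs
X, y = df.drop(columns=["tarr_lt_7s"]), df["tarr_lt_7s"]    # one of the two binary targets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Recursive feature elimination keeps the most informative parameters
selector = RFE(XGBClassifier(n_estimators=200, eval_metric="logloss"), n_features_to_select=10)
selector.fit(X_tr, y_tr)
cols = X_tr.columns[selector.support_]

model = XGBClassifier(n_estimators=200, eval_metric="logloss")
model.fit(X_tr[cols], y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te[cols])[:, 1]))

# SHAP shows which features (e.g., vena cava / pulmonary artery measurements) drive predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te[cols])
shap.summary_plot(shap_values, X_te[cols])
```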

Predicting Breath Hold Task Compliance From Head Motion.

Weng TB, Porwal G, Srinivasan D, Inglis B, Rodriguez S, Jacobs DR, Schreiner PJ, Sorond FA, Sidney S, Lewis C, Launer L, Erus G, Nasrallah IM, Bryan RN, Dula AN

pubmed logopapers · Sep 8 2025
Cerebrovascular reactivity (CVR) reflects changes in cerebral blood flow in response to an acute stimulus and indicates the brain's ability to match blood flow to demand. Functional MRI with a breath-hold task can be used to elicit this vasoactive response, but data validity hinges on subject compliance. Determining breath-hold compliance often requires external monitoring equipment. To develop a non-invasive and data-driven quality filter for breath-hold compliance using only measurements of head motion during imaging. Prospective cohort. Longitudinal data from healthy middle-aged subjects enrolled in the Coronary Artery Risk Development in Young Adults Brain MRI Study, N = 1141, 47.1% female. 3.0 Tesla gradient-echo MRI. Manual labelling of respiratory-belt data was used to determine breath-hold compliance during the MRI scan. A model to estimate the probability of non-compliance with the breath-hold task was developed using measures of head motion. The model's ability to identify scans in which the participant was not performing the breath hold was summarized using performance metrics including sensitivity, specificity, recall, and F1 score. The model was applied to additional unmarked data to assess effects on population measures of CVR. Sensitivity analysis revealed that exclusion of non-compliant scans using the developed model did not affect median CVR (median [Q1, Q3] = 1.32 [0.96, 1.71]) compared to using manual review of respiratory-belt data (1.33 [1.02, 1.74]) while reducing the interquartile range. The final model, based on a multi-layer perceptron machine learning classifier, estimated non-compliance with an accuracy of 76.9% and an F1 score of 69.5%, indicating a moderate balance between precision and recall for the identification of scans in which the participant was not compliant. The developed model provides the probability of non-compliance with a breath-hold task, which could later be used as a quality filter or included in statistical analyses. TECHNICAL EFFICACY: Stage 3.
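A minimal sketch of the kind of classifier the abstract describes: a multi-layer perceptron mapping per-scan head-motion features to a probability of breath-hold non-compliance, evaluated with the reported metrics. The feature files and decision threshold are hypothetical, and scikit-learn stands in for whatever implementation the authors used.

```python
# Minimal sketch: MLP on head-motion summaries -> probability of non-compliance.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: per-scan head-motion features (e.g., mean/max framewise displacement per breath-hold block)
# y: 1 = non-compliant (from manual respiratory-belt review), 0 = compliant
X, y = np.load("motion_features.npy"), np.load("compliance_labels.npy")  # hypothetical files
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000))
clf.fit(X_tr, y_tr)

p_noncompliant = clf.predict_proba(X_te)[:, 1]   # probability usable as a quality filter
pred = (p_noncompliant >= 0.5).astype(int)       # assumed threshold

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp),
      "F1:", f1_score(y_te, pred))
```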

Evaluating artificial intelligence for a focal nodular hyperplasia diagnosis using magnetic resonance imaging: preliminary findings.

Kantarcı M, Kızılgöz V, Terzi R, Kılıç AE, Kabalcı H, Durmaz Ö, Tokgöz N, Harman M, Sağır Kahraman A, Avanaz A, Aydın S, Elpek GÖ, Yazol M, Aydınlı B

pubmed logopapers · Sep 8 2025
This study aimed to evaluate the effectiveness of artificial intelligence (AI) in diagnosing focal nodular hyperplasia (FNH) of the liver using magnetic resonance imaging (MRI) and compare its performance with that of radiologists. In the first phase of the study, the MRIs of 60 patients (30 patients with FNH and 30 patients with no lesions or lesions other than FNH) were processed using a segmentation program and introduced to an AI model. After the learning process, the MRIs of 42 different patients, which the AI model had not previously encountered, were introduced to the system. In addition, a radiology resident and a radiology specialist evaluated the same patients using the same MR sequences. Sensitivity and specificity values were obtained from all three reviews. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the AI model were 0.769, 0.966, 0.909, and 0.903, respectively. Its sensitivity and specificity values were higher than those of the radiology resident and lower than those of the radiology specialist. Agreement between the specialist and the AI model was good, with a kappa (κ) value of 0.777. For the diagnosis of FNH, the sensitivity, specificity, PPV, and NPV of the AI model were higher than those of the radiology resident and lower than those of the radiology specialist. With additional studies focused on other specific liver lesions, AI models are expected to be able to diagnose each liver lesion with high accuracy in the future. AI is being studied as a means of providing assisted or automated interpretation of radiological images with accurate and reproducible diagnoses.
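For reference, the reported statistics can be reproduced from binary reads as in the sketch below: sensitivity, specificity, PPV, and NPV from the AI model's confusion matrix, and Cohen's kappa for specialist-versus-AI agreement. The label arrays are placeholders, not study data.

```python
# Minimal sketch: diagnostic metrics and inter-reader agreement from binary reads.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Placeholder reads for illustration only (1 = FNH, 0 = not FNH)
y_true       = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
y_ai         = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
y_specialist = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_ai).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
print("NPV:", tn / (tn + fn))

# Agreement between the specialist and the AI model (the study reports kappa = 0.777)
print("kappa:", cohen_kappa_score(y_specialist, y_ai))
```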

Automated Radiographic Total Sharp Score (ARTSS) in Rheumatoid Arthritis: A Solution to Reduce Inter-Intra Reader Variation and Enhancing Clinical Practice

Hajar Moradmand, Lei Ren

arxiv logopreprint · Sep 8 2025
Assessing the severity of rheumatoid arthritis (RA) using the Total Sharp/Van Der Heijde Score (TSS) is crucial, but manual scoring is often time-consuming and subjective. This study introduces an Automated Radiographic Sharp Scoring (ARTSS) framework that leverages deep learning to analyze full-hand X-ray images, aiming to reduce inter- and intra-observer variability. The research uniquely accommodates patients with joint disappearance and variable-length image sequences. We developed ARTSS using data from 970 patients, structured into four stages: I) Image pre-processing and re-orientation using ResNet50, II) Hand segmentation using UNet.3, III) Joint identification using YOLOv7, and IV) TSS prediction using models such as VGG16, VGG19, ResNet50, DenseNet201, EfficientNetB0, and Vision Transformer (ViT). We evaluated model performance with Intersection over Union (IoU), Mean Average Precision (MAP), mean absolute error (MAE), Root Mean Squared Error (RMSE), and Huber loss. The average TSS from two radiologists was used as the ground truth. Model training employed 3-fold cross-validation, with each fold consisting of 452 training and 227 validation samples, and external testing included 291 unseen subjects. Our joint identification model achieved 99% accuracy. The best-performing model, ViT, achieved a notably low Huber loss of 0.87 for TSS prediction. Our results demonstrate the potential of deep learning to automate RA scoring, which can significantly enhance clinical practice. Our approach addresses the challenge of joint disappearance and variable joint numbers, offers timesaving benefits, reduces inter- and intra-reader variability, improves radiologist accuracy, and aids rheumatologists in making more informed decisions.
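A minimal sketch of stage IV above: a Vision Transformer regressing the Total Sharp Score from joint crops, trained with the Huber loss reported as the evaluation criterion. The timm model name, crop size, and dummy batch are assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): ViT regression head + Huber loss for TSS prediction.
import timm
import torch
from torch import nn, optim

# pretrained=False keeps the sketch self-contained; ImageNet weights would normally be used
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=1)
criterion = nn.HuberLoss()                      # the loss reported for TSS prediction
optimizer = optim.AdamW(model.parameters(), lr=1e-4)

# Dummy batch standing in for joint crops detected by YOLOv7 and their TSS labels
images = torch.randn(8, 3, 224, 224)
tss_targets = torch.tensor([0., 1., 0., 3., 2., 0., 4., 1.]).unsqueeze(1)

model.train()
optimizer.zero_grad()
loss = criterion(model(images), tss_targets)
loss.backward()
optimizer.step()
print("Huber loss on dummy batch:", float(loss))
```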

Curia: A Multi-Modal Foundation Model for Radiology

Corentin Dancette, Julien Khlaut, Antoine Saporta, Helene Philippe, Elodie Ferreres, Baptiste Callard, Théo Danielou, Léo Alberge, Léo Machado, Daniel Tordjman, Julie Dupuis, Korentin Le Floch, Jean Du Terrail, Mariam Moshiri, Laurent Dercle, Tom Boeken, Jules Gregory, Maxime Ronot, François Legou, Pascal Roux, Marc Sapoval, Pierre Manceron, Paul Hérent

arxiv logopreprint · Sep 8 2025
AI-assisted radiological interpretation is based on predominantly narrow, single-task models. This approach is impractical for covering the vast spectrum of imaging modalities, diseases, and radiological findings. Foundation models (FMs) hold the promise of broad generalization across modalities and in low-data settings. However, this potential has remained largely unrealized in radiology. We introduce Curia, a foundation model trained on the entire cross-sectional imaging output of a major hospital over several years, which to our knowledge is the largest such corpus of real-world data, encompassing 150,000 exams (130 TB). On a newly curated 19-task external validation benchmark, Curia accurately identifies organs, detects conditions like brain hemorrhages and myocardial infarctions, and predicts outcomes in tumor staging. Curia meets or surpasses the performance of radiologists and recent foundation models, and exhibits clinically significant emergent properties in cross-modality and low-data regimes. To accelerate progress, we release our base model's weights at https://huggingface.co/raidium/curia.
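The low-data adaptation the abstract highlights is typically evaluated with a linear probe on frozen embeddings; the sketch below shows that generic pattern. How embeddings are extracted from the released Curia weights is not specified here, so they are assumed to be precomputed and saved to disk, and all file names are hypothetical.

```python
# Minimal sketch: linear probe on frozen foundation-model embeddings (generic pattern,
# not the Curia benchmark code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

emb_train = np.load("curia_embeddings_train.npy")   # (n_exams, embed_dim), hypothetical
emb_test = np.load("curia_embeddings_test.npy")
y_train = np.load("labels_train.npy")                # e.g., 1 = brain hemorrhage present
y_test = np.load("labels_test.npy")

probe = LogisticRegression(max_iter=1000).fit(emb_train, y_train)
print("Linear-probe AUC:", roc_auc_score(y_test, probe.predict_proba(emb_test)[:, 1]))
```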

MRI-Based Brain Tumor Detection through an Explainable EfficientNetV2 and MLP-Mixer-Attention Architecture

Mustafa Yurdakul, Şakir Taşdemir

arxiv logopreprint · Sep 8 2025
Brain tumors are serious health problems that require early diagnosis due to their high mortality rates. Diagnosing tumors by examining Magnetic Resonance Imaging (MRI) images is a process that requires expertise and is prone to error. Therefore, the need for automated diagnosis systems is increasing day by day. In this context, a robust and explainable Deep Learning (DL) model for the classification of brain tumors is proposed. In this study, a publicly available Figshare dataset containing 3,064 T1-weighted contrast-enhanced brain MRI images of three tumor types was used. First, the classification performance of nine well-known CNN architectures was evaluated to determine the most effective backbone. Among these, EfficientNetV2 demonstrated the best performance and was selected as the backbone for further development. Subsequently, an attention-based MLP-Mixer architecture was integrated into EfficientNetV2 to enhance its classification capability. The performance of the final model was comprehensively compared with basic CNNs and the methods in the literature. Additionally, Grad-CAM visualization was used to interpret and validate the decision-making process of the proposed model. The proposed model's performance was evaluated using the five-fold cross-validation method. The proposed model demonstrated superior performance with 99.50% accuracy, 99.47% precision, 99.52% recall and 99.49% F1 score. The results obtained show that the model outperforms the studies in the literature. Moreover, Grad-CAM visualizations demonstrate that the model effectively focuses on relevant regions of MRI images, thus improving interpretability and clinical reliability. A robust deep learning model for clinical decision support systems has been obtained by combining EfficientNetV2 and attention-based MLP-Mixer, providing high accuracy and interpretability in brain tumor classification.
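One way to read the proposed architecture is sketched below: an EfficientNetV2 backbone whose spatial feature map is refined by a small MLP-Mixer-style block (token mixing plus channel mixing) before a three-class head. This is an illustration under assumptions (timm model name, layer sizes, 224 x 224 input), not the paper's exact design.

```python
# Minimal sketch: EfficientNetV2 features + one MLP-Mixer-style block + 3-class head.
import timm
import torch
from torch import nn


class MixerBlock(nn.Module):
    def __init__(self, num_tokens, dim, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(nn.Linear(num_tokens, hidden), nn.GELU(),
                                       nn.Linear(hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                         nn.Linear(hidden, dim))

    def forward(self, x):                      # x: (B, tokens, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))


class EffNetV2Mixer(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.backbone = timm.create_model("tf_efficientnetv2_s", pretrained=False,
                                          num_classes=0, global_pool="")
        dim = self.backbone.num_features                 # channels of the last feature map
        self.mixer = MixerBlock(num_tokens=7 * 7, dim=dim)  # assumes 224x224 input -> 7x7 map
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, C, 7, 7)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 49, C)
        return self.head(self.mixer(tokens).mean(dim=1))


model = EffNetV2Mixer()
print(model(torch.randn(2, 3, 224, 224)).shape)    # torch.Size([2, 3])
```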

MM-DINOv2: Adapting Foundation Models for Multi-Modal Medical Image Analysis

Daniel Scholz, Ayhan Can Erdur, Viktoria Ehm, Anke Meyer-Baese, Jan C. Peeken, Daniel Rueckert, Benedikt Wiestler

arxiv logopreprint · Sep 8 2025
Vision foundation models like DINOv2 demonstrate remarkable potential in medical imaging despite their origin in natural image domains. However, their design inherently works best for uni-modal image analysis, limiting their effectiveness for multi-modal imaging tasks that are common in many medical fields, such as neurology and oncology. While supervised models perform well in this setting, they fail to leverage unlabeled datasets and struggle with missing modalities, a frequent challenge in clinical settings. To bridge these gaps, we introduce MM-DINOv2, a novel and efficient framework that adapts the pre-trained vision foundation model DINOv2 for multi-modal medical imaging. Our approach incorporates multi-modal patch embeddings, enabling vision foundation models to effectively process multi-modal imaging data. To address missing modalities, we employ full-modality masking, which encourages the model to learn robust cross-modality relationships. Furthermore, we leverage semi-supervised learning to harness large unlabeled datasets, enhancing both the accuracy and reliability of medical predictions. Applied to glioma subtype classification from multi-sequence brain MRI, our method achieves a Matthews Correlation Coefficient (MCC) of 0.6 on an external test set, surpassing state-of-the-art supervised approaches by +11.1%. Our work establishes a scalable and robust solution for multi-modal medical imaging tasks, leveraging powerful vision foundation models pre-trained on natural images while addressing real-world clinical challenges such as missing data and limited annotations.
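A minimal sketch of the two mechanisms described above, under the assumption of four MRI sequences: per-modality patch embeddings that are summed into a single token sequence, and full-modality masking, in which an entire sequence is randomly dropped during training so the encoder must rely on cross-modality relationships. This is an interpretation of the abstract, not the authors' implementation.

```python
# Minimal sketch: multi-modal patch embeddings + full-modality masking for a DINOv2-style encoder.
import torch
from torch import nn

MODALITIES = ["t1", "t1ce", "t2", "flair"]  # assumed MRI sequences


class MultiModalPatchEmbed(nn.Module):
    def __init__(self, patch=16, dim=768):
        super().__init__()
        # One patch-embedding projection per modality (each sequence treated as a 1-channel image)
        self.proj = nn.ModuleDict(
            {m: nn.Conv2d(1, dim, kernel_size=patch, stride=patch) for m in MODALITIES})

    def forward(self, images, p_mask=0.25):
        # Full-modality masking: during training, drop each sequence with probability p_mask
        keep = [m for m in MODALITIES
                if not (self.training and torch.rand(1).item() < p_mask)]
        if not keep:                      # always keep at least one sequence
            keep = [MODALITIES[int(torch.randint(len(MODALITIES), (1,)))]]
        tokens = 0
        for m in keep:
            # (B, num_patches, dim) tokens for this sequence, summed across kept modalities
            tokens = tokens + self.proj[m](images[m]).flatten(2).transpose(1, 2)
        return tokens


embed = MultiModalPatchEmbed()
batch = {m: torch.randn(2, 1, 224, 224) for m in MODALITIES}
print(embed(batch).shape)  # torch.Size([2, 196, 768]); feeds the transformer encoder
```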