
Deep Learning MRI Models for the Differential Diagnosis of Tumefactive Demyelination versus IDH Wild-Type Glioblastoma.

Conte GM, Moassefi M, Decker PA, Kosel ML, McCarthy CB, Sagen JA, Nikanpour Y, Fereidan-Esfahani M, Ruff MW, Guido FS, Pump HK, Burns TC, Jenkins RB, Erickson BJ, Lachance DH, Tobin WO, Eckel-Passow JE

PubMed · Jun 26, 2025
Diagnosis of tumefactive demyelination can be challenging, and the diagnosis of indeterminate brain lesions on MRI often requires tissue confirmation via brain biopsy. Noninvasive methods for accurate diagnosis of tumor and nontumor etiologies allow for tailored therapy, optimal tumor control, and a reduced risk of iatrogenic morbidity and mortality. Tumefactive demyelination has imaging features that mimic isocitrate dehydrogenase wild-type glioblastoma (IDHwt GBM). We hypothesized that deep learning applied to postcontrast T1-weighted (T1C) and T2-weighted (T2) MRI can discriminate tumefactive demyelination from IDHwt GBM. Patients with tumefactive demyelination (n = 144) and IDHwt GBM (n = 455) were identified via clinical registries. A 3D DenseNet121 architecture was used to develop models differentiating tumefactive demyelination from IDHwt GBM using both T1C and T2 MRI, as well as T1C only and T2 only. A 3-stage design was used: 1) model development and internal validation via 5-fold cross-validation using a sex-, age-, and MRI technology-matched set of tumefactive demyelination and IDHwt GBM cases; 2) validation of model specificity on independent IDHwt GBM cases; and 3) prospective validation on tumefactive demyelination and IDHwt GBM cases. Stratified areas under the receiver operating characteristic curve (AUROCs) were used to evaluate model performance stratified by sex, age at diagnosis, MRI scanner field strength, and MRI acquisition. The deep learning model developed using both T1C and T2 images had a prospective validation AUROC of 0.88 (95% CI: 0.82-0.95). In the prospective validation stage, a model score threshold of 0.28 resulted in 91% sensitivity (correctly classifying tumefactive demyelination) and 80% specificity (correctly classifying IDHwt GBM). Stratified AUROCs suggested that model performance could be improved by choosing thresholds stratified by age and MRI acquisition. MRI can provide the basis for applying deep learning models to aid in the differential diagnosis of brain lesions. Further validation is needed to evaluate how well the model generalizes across institutions, patient populations, and technology, and to establish optimal classification thresholds. Next steps should also incorporate additional tumor etiologies such as CNS lymphoma and brain metastases.
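As a concrete illustration of the architecture named above, here is a minimal sketch (not the authors' code; the input shape, two-channel T1C/T2 stacking, and training details are assumptions) of a 3D DenseNet121 binary classifier in MONAI:

```python
# Minimal sketch: 3D DenseNet121 over stacked T1C + T2 volumes (MONAI).
# Input dimensions and label coding are illustrative assumptions.
import torch
from monai.networks.nets import DenseNet121

model = DenseNet121(
    spatial_dims=3,   # 3D convolutions for volumetric MRI
    in_channels=2,    # channel 0: T1C, channel 1: T2
    out_channels=1,   # single logit: tumefactive demyelination vs IDHwt GBM
)
loss_fn = torch.nn.BCEWithLogitsLoss()

# One illustrative forward/backward pass on a dummy batch
# of shape (batch, channels, depth, height, width).
x = torch.randn(2, 2, 96, 96, 96)
y = torch.tensor([[1.0], [0.0]])  # 1 = tumefactive demyelination (assumed coding)
logits = model(x)
loss_fn(logits, y).backward()

# At inference, a score threshold (the paper reports 0.28) converts the
# sigmoid probability into a class decision.
pred = (torch.sigmoid(logits) > 0.28).long()
```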

Towards automated multi-regional lung parcellation for 0.55-3T 3D T2w fetal MRI

Uus, A., Avena Zampieri, C., Downes, F., Egloff Collado, A., Hall, M., Davidson, J., Payette, K., Aviles Verdera, J., Grigorescu, I., Hajnal, J. V., Deprez, M., Aertsen, M., Hutter, J., Rutherford, M., Deprest, J., Story, L.

medRxiv preprint · Jun 26, 2025
Fetal MRI is increasingly employed in the diagnosis of fetal lung anomalies, and segmentation-derived total fetal lung volumes are used as one of the parameters for prediction of neonatal outcomes. However, in clinical practice, segmentation is performed manually in 2D motion-corrupted stacks with thick slices, which is time-consuming and can lead to variations in estimated volumes. Furthermore, there is a known lack of consensus regarding a universal lung parcellation protocol and expected normal total lung volume formulas, and the lungs are typically segmented as one label without parcellation into lobes. In terms of automation, to the best of our knowledge, no works on multi-lobe segmentation for fetal lung MRI have been reported. This work introduces the first automated deep learning segmentation pipeline for multi-regional lung segmentation of 3D motion-corrected T2w fetal body images, covering normal anatomy and congenital diaphragmatic hernia cases. The protocol for parcellation into 5 standard lobes was defined in a population-averaged 3D atlas. It was then used to generate a multi-label training dataset of 104 normal anatomy controls and 45 congenital diaphragmatic hernia cases from 0.55T, 1.5T, and 3T acquisition protocols. The performance of a 3D Attention UNet was evaluated on 18 cases and showed good results for normal lung anatomy, with expectedly lower Dice values for the ipsilateral lung in hernia cases. In addition, we produced normal lung volumetry growth charts from 290 controls acquired at 0.55T and 3T. This is the first step towards automated multi-regional fetal lung analysis for 3D fetal MRI.
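For readers unfamiliar with the evaluation, the sketch below (an illustration under assumed label conventions, not the study's pipeline) computes per-lobe Dice scores for a five-lobe parcellation:

```python
# Illustrative per-lobe Dice for a multi-label lung parcellation.
# Labels 1-5 standing for the five standard lobes is an assumption.
import numpy as np

def dice_per_label(pred: np.ndarray, gt: np.ndarray, labels=range(1, 6)):
    """Dice = 2|A∩B| / (|A| + |B|), computed independently per lobe label."""
    scores = {}
    for lab in labels:
        p, g = pred == lab, gt == lab
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores

# Example on random volumes; real use compares predicted vs manual parcellations.
rng = np.random.default_rng(0)
pred = rng.integers(0, 6, size=(64, 64, 64))
gt = rng.integers(0, 6, size=(64, 64, 64))
print(dice_per_label(pred, gt))
```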

Improving Clinical Utility of Fetal Cine CMR Using Deep Learning Super-Resolution.

Vollbrecht TM, Hart C, Katemann C, Isaak A, Voigt MB, Pieper CC, Kuetting D, Geipel A, Strizek B, Luetkens JA

PubMed · Jun 26, 2025
Fetal cardiovascular magnetic resonance is an emerging tool for prenatal congenital heart disease assessment, but long acquisition times and fetal movements limit its clinical use. This study evaluates the clinical utility of deep learning super-resolution reconstructions for rapidly acquired, low-resolution fetal cardiovascular magnetic resonance. This prospective study included participants with fetal congenital heart disease undergoing fetal cardiovascular magnetic resonance in the third trimester of pregnancy, with axial cine images acquired at normal resolution and low resolution. Low-resolution cine data were subsequently reconstructed using a deep learning super-resolution framework (cineDL). Acquisition times, apparent signal-to-noise ratio, contrast-to-noise ratio, and edge rise distance were assessed. Volumetry and functional analysis were performed. Qualitative image scores were rated on a 5-point Likert scale. Cardiovascular structures and pathological findings visible in cineDL images only were assessed. Statistical analysis included the Student paired t test and the Wilcoxon test. A total of 42 participants were included (median gestational age, 35.9 weeks [interquartile range (IQR), 35.1-36.4]). CineDL acquisition was faster than normal-resolution cine acquisition (134±9.6 s versus 252±8.8 s; P<0.001). Quantitative image-quality metrics and image-quality scores for cineDL were higher than or comparable with those of cine images acquired at normal resolution (eg, fetal motion, 4.0 [IQR, 4.0-5.0] versus 4.0 [IQR, 3.0-4.0]; P<0.001). Nonpatient-related artifacts (eg, backfolding) were more pronounced in cineDL than in cine images acquired at normal resolution (4.0 [IQR, 4.0-5.0] versus 5.0 [IQR, 3.0-4.0]; P<0.001). Volumetry and functional results were comparable. CineDL revealed additional structures in 10 of 42 fetuses (24%) and additional pathologies in 5 of 42 fetuses (12%), including partial anomalous pulmonary venous connection. Deep learning super-resolution reconstructions of low-resolution acquisitions shorten acquisition times and achieve diagnostic quality comparable with standard images, while being less sensitive to fetal bulk movements and yielding additional diagnostic findings. Deep learning super-resolution may therefore improve the clinical utility of fetal cardiovascular magnetic resonance for accurate prenatal assessment of congenital heart disease.
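The quantitative metrics named above are straightforward to compute from regions of interest; the following sketch uses assumed, textbook definitions of apparent SNR and CNR (ROI placement and the edge-rise-distance measurement are left out):

```python
# Sketch of apparent SNR and CNR from manually placed ROIs.
# The ROI choices (blood pool, myocardium, background air) are assumptions.
import numpy as np

def apparent_snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Apparent SNR: mean signal over the standard deviation of background."""
    return signal_roi.mean() / background_roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, background_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissues, normalized by noise."""
    return abs(roi_a.mean() - roi_b.mean()) / background_roi.std()

rng = np.random.default_rng(1)
blood = rng.normal(300, 20, 500)   # bright blood-pool voxels (synthetic)
myo = rng.normal(120, 20, 500)     # myocardium voxels (synthetic)
bg = rng.normal(0, 10, 500)        # background noise voxels (synthetic)
print(f"aSNR={apparent_snr(blood, bg):.1f}, CNR={cnr(blood, myo, bg):.1f}")
```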

Robust Deep Learning for Myocardial Scar Segmentation in Cardiac MRI with Noisy Labels

Aida Moafi, Danial Moafi, Evgeny M. Mirkes, Gerry P. McCann, Abbas S. Alatrany, Jayanth R. Arnold, Mostafa Mehdipour Ghazi

arXiv preprint · Jun 26, 2025
The accurate segmentation of myocardial scars from cardiac MRI is essential for clinical assessment and treatment planning. In this study, we propose a robust deep-learning pipeline for fully automated myocardial scar detection and segmentation by fine-tuning state-of-the-art models. The method explicitly addresses challenges of label noise from semi-automatic annotations, data heterogeneity, and class imbalance through the use of Kullback-Leibler loss and extensive data augmentation. We evaluate the model's performance on both acute and chronic cases and demonstrate its ability to produce accurate and smooth segmentations despite noisy labels. In particular, our approach outperforms state-of-the-art models like nnU-Net and shows strong generalizability in an out-of-distribution test set, highlighting its robustness across various imaging conditions and clinical tasks. These results establish a reliable foundation for automated myocardial scar quantification and support the broader clinical adoption of deep learning in cardiac imaging.
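The paper's central trick is training against noisy annotations with a Kullback-Leibler loss; one plausible form (an assumption, not the authors' exact formulation) combines label smoothing with a KL divergence between softened targets and predictions:

```python
# Assumed form of a KL-divergence segmentation loss for noisy labels:
# softened one-hot targets penalize overconfidence on mislabeled voxels.
import torch
import torch.nn.functional as F

def kl_segmentation_loss(logits: torch.Tensor, target: torch.Tensor,
                         num_classes: int = 2, smoothing: float = 0.1) -> torch.Tensor:
    """KL(soft target || prediction) over voxels.

    logits: (B, C, D, H, W) raw network outputs
    target: (B, D, H, W) integer labels from semi-automatic annotation
    """
    log_probs = F.log_softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    # Label smoothing softens hard (possibly noisy) annotations.
    soft = one_hot * (1 - smoothing) + smoothing / num_classes
    return F.kl_div(log_probs, soft, reduction="batchmean")

logits = torch.randn(1, 2, 8, 32, 32, requires_grad=True)
labels = torch.randint(0, 2, (1, 8, 32, 32))
kl_segmentation_loss(logits, labels).backward()
```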

Deep Learning Model for Automated Segmentation of Orbital Structures in MRI Images.

Bakhshaliyeva E, Reiner LN, Chelbi M, Nawabi J, Tietze A, Scheel M, Wattjes M, Dell'Orco A, Meddeb A

PubMed · Jun 26, 2025
Magnetic resonance imaging (MRI) is a crucial tool for visualizing orbital structures and detecting eye pathologies. However, manual segmentation of orbital anatomy is challenging due to the complexity and variability of the structures. Recent advancements in deep learning (DL), particularly convolutional neural networks (CNNs), offer promising solutions for automated segmentation in medical imaging. This study aimed to train and evaluate a U-Net-based model for the automated segmentation of key orbital structures. This retrospective study included 117 patients with various orbital pathologies who underwent orbital MRI. Manual segmentation was performed for four structures: the ocular bulb, ocular tumors, retinal detachment, and the optic nerve. Following automatic U-Net configuration by nnU-Net, we conducted a five-fold cross-validation and evaluated the model's performance using the Dice Similarity Coefficient (DSC) and Relative Absolute Volume Difference (RAVD) as metrics. nnU-Net achieved high segmentation performance for the ocular bulb (mean DSC: 0.931) and the optic nerve (mean DSC: 0.820). Segmentation of ocular tumors (mean DSC: 0.788) and retinal detachment (mean DSC: 0.550) showed greater variability, with performance declining in more challenging cases. Despite these challenges, the model achieved high detection rates, with ROC AUCs of 0.90 for ocular tumors and 0.78 for retinal detachment. This study demonstrates nnU-Net's capability for accurate segmentation of orbital structures, particularly the ocular bulb and optic nerve. However, challenges remain in the segmentation of tumors and retinal detachment due to variability and artifacts. Future improvements in deep learning models and broader, more diverse datasets may enhance segmentation performance, ultimately aiding in the diagnosis and treatment of orbital pathologies.
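Both reported metrics have standard definitions; a short illustrative sketch, with randomly generated masks standing in for real segmentations:

```python
# Standard definitions of the two reported metrics on binary masks:
# Dice Similarity Coefficient (DSC) and Relative Absolute Volume Difference (RAVD).
import numpy as np

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def ravd(pred: np.ndarray, gt: np.ndarray) -> float:
    """|V_pred - V_gt| / V_gt, with volumes as voxel counts."""
    return abs(pred.sum() - gt.sum()) / gt.sum()

rng = np.random.default_rng(2)
gt = rng.random((64, 64, 32)) > 0.5
pred = gt.copy()
pred[:4] = ~pred[:4]  # perturb a few slices to mimic segmentation error
print(f"DSC={dsc(pred, gt):.3f}, RAVD={ravd(pred, gt):.3f}")
```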

Self-supervised learning for MRI reconstruction: a review and new perspective.

Li X, Huang J, Sun G, Yang Z

PubMed · Jun 26, 2025
To review the latest developments in self-supervised deep learning (DL) techniques for magnetic resonance imaging (MRI) reconstruction, emphasizing their potential to overcome the limitations of supervised methods that depend on fully sampled k-space data. While DL has significantly advanced MRI, supervised approaches require large amounts of fully sampled k-space data for training, a major limitation given the impracticality and expense of acquiring such data clinically. Self-supervised learning has emerged as a promising alternative, enabling model training using only undersampled k-space data, thereby enhancing feasibility and driving research interest. We conducted a comprehensive literature review to synthesize recent progress in self-supervised DL for MRI reconstruction. The analysis focused on methods and architectures designed to improve image quality, reduce scanning time, and address data scarcity challenges, drawing from peer-reviewed publications and technical innovations in the field. Self-supervised DL holds transformative potential for MRI reconstruction, offering solutions to data limitations while maintaining image quality and accelerating scans. Key challenges include robustness across diverse anatomies, standardization of validation, and clinical integration. Future research should prioritize hybrid methodologies, domain-specific adaptations, and rigorous clinical validation. This review consolidates advancements and unresolved issues, providing a foundation for next-generation medical imaging technologies.
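One widely cited self-supervised formulation of this idea, SSDU-style k-space splitting, trains without any fully sampled reference; the sketch below is illustrative (the split ratio rho and the uniformly random masking are assumptions):

```python
# Illustrative SSDU-style split: the acquired undersampled k-space is
# partitioned into disjoint sets Theta (network/data-consistency input)
# and Lambda (loss target), so no fully sampled reference is needed.
import numpy as np

def split_kspace(kspace: np.ndarray, mask: np.ndarray, rho: float = 0.4,
                 seed: int = 0):
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(mask)
    held_out = rng.choice(acquired, size=int(rho * acquired.size), replace=False)
    mask_lambda = np.zeros_like(mask)
    mask_lambda.flat[held_out] = 1
    mask_theta = mask - mask_lambda  # disjoint by construction
    return kspace * mask_theta, kspace * mask_lambda, mask_theta, mask_lambda

# The network reconstructs from (kspace * mask_theta); the loss compares its
# re-projected k-space with (kspace * mask_lambda) on the held-out locations.
rng = np.random.default_rng(42)
kspace = rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))
mask = (rng.random((256, 256)) < 0.3).astype(float)
k_in, k_tgt, m_in, m_tgt = split_kspace(kspace, mask)
assert not np.any(m_in * m_tgt)  # the two subsets never overlap
```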

Recent Advances in Generative Models for Synthetic Brain MRI Image Generation.

Ding X, Bai L, Abbasi SF, Pournik O, Arvanitis T

PubMed · Jun 26, 2025
As artificial intelligence (AI) is increasingly used for image analysis of Magnetic Resonance Imaging (MRI), the lack of training data has become an issue. Realistic synthetic MRI images can serve as a solution, and generative models have been proposed for this purpose. This study investigates the most recent advances in synthetic brain MRI image generation with AI-based generative models. A search was conducted for relevant studies published within the last three years, followed by a narrative review of the identified articles. Popular models from the search results are discussed in this study, including Generative Adversarial Networks (GANs), diffusion models, Variational Autoencoders (VAEs), and transformers.
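Of the families listed, diffusion models are the most recent; a toy sketch of their forward (noising) process, which a synthetic-MRI generator learns to invert (the schedule and shapes are illustrative only):

```python
# Toy forward (noising) step of a diffusion model; a generator learns to
# reverse this, denoising from pure Gaussian noise back to an image.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear variance schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I)."""
    noise = torch.randn_like(x0)
    return alphas_bar[t].sqrt() * x0 + (1 - alphas_bar[t]).sqrt() * noise

x0 = torch.randn(1, 1, 128, 128)  # stand-in for a normalized brain MR slice
x_t = q_sample(x0, t=500)         # partially noised image at step 500
```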

Constructing high-quality enhanced 4D-MRI with personalized modeling for liver cancer radiotherapy.

Yao Y, Chen B, Wang K, Cao Y, Zuo L, Zhang K, Chen X, Kuo M, Dai J

PubMed · Jun 26, 2025
For magnetic resonance imaging (MRI), a short acquisition time and good image quality are incompatible, so reconstructing time-resolved volumetric MRI (4D-MRI) to delineate and monitor thoracic and upper abdominal tumor movements is a challenge, and existing MRI sequences have limited applicability to 4D-MRI. A method is proposed for reconstructing high-quality personalized enhanced 4D-MR images: low-quality 4D-MR images are scanned, followed by deep learning-based personalization to generate high-quality 4D-MR images. High-speed multiphase 3D fast spoiled gradient recalled echo (FSPGR) sequences were used to generate low-quality enhanced free-breathing 4D-MR images and paired low-/high-quality breath-holding 4D-MR images for 58 liver cancer patients. A personalized model guided by the paired breath-holding 4D-MR images was then developed for each patient to cope with patient heterogeneity. The 4D-MR images generated by the personalized model were of much higher quality than the low-quality 4D-MR images obtained by conventional scanning, as demonstrated by significant improvements in peak signal-to-noise ratio, structural similarity, normalized root mean square error, and cumulative probability of blur detection. The introduction of individualized information helped the personalized model achieve a statistically significant improvement over the general model (p < 0.001). The proposed method can be used to quickly reconstruct high-quality 4D-MR images and is potentially applicable to radiotherapy for liver cancer.
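The first three reported metrics are available in scikit-image; a sketch under the assumption that generated and reference volumes are co-registered and normalized to [0, 1] (cumulative probability of blur detection is omitted, as it needs a dedicated implementation):

```python
# Sketch of three reported image-quality metrics via scikit-image.
# The pairing and normalization of volumes are assumptions.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             normalized_root_mse)

rng = np.random.default_rng(3)
reference = rng.random((64, 128, 128))                      # high-quality phase
generated = reference + rng.normal(0, 0.05, reference.shape)  # model output stand-in

print("PSNR :", peak_signal_noise_ratio(reference, generated, data_range=1.0))
print("SSIM :", structural_similarity(reference, generated, data_range=1.0))
print("NRMSE:", normalized_root_mse(reference, generated))
```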

Artificial Intelligence in Cognitive Decline Diagnosis: Evaluating Cutting-Edge Techniques and Modalities.

Gharehbaghi A, Babic A

PubMed · Jun 26, 2025
This paper presents the results of a scoping review that examines the potential of Artificial Intelligence (AI) in the early diagnosis of Cognitive Decline (CD), which is regarded as a key issue in elderly health. The review encompasses peer-reviewed publications from 2020 to 2025, including scientific journals and conference proceedings. Over 70% of the studies rely on magnetic resonance imaging (MRI) as the input to the AI models, with a high diagnostic accuracy of 98%. Integration of relevant clinical data and electroencephalograms (EEG) with deep learning methods enhances diagnostic accuracy in clinical settings. Recent studies have also explored natural language processing models for detecting CD at its early stages, with an accuracy of 75%, showing strong potential for use in pre-clinical environments.

Deep learning-based contour propagation in magnetic resonance imaging-guided radiotherapy of lung cancer patients.

Wei C, Eze C, Klaar R, Thorwarth D, Warda C, Taugner J, Hörner-Rieber J, Regnery S, Jaekel O, Weykamp F, Palacios MA, Marschner S, Corradini S, Belka C, Kurz C, Landry G, Rabe M

PubMed · Jun 26, 2025
Fast and accurate organ-at-risk (OAR) and gross tumor volume (GTV) contour propagation methods are needed to improve the efficiency of magnetic resonance (MR) imaging-guided radiotherapy. We trained deformable image registration networks to accurately propagate contours from planning to fraction MR images.
Approach: Data from 140 stage 1-2 lung cancer patients treated at a 0.35T MR-Linac were split into 102/17/21 for training/validation/testing. Additionally, 18 central lung tumor patients, treated at a 0.35T MR-Linac externally, and 14 stage 3 lung cancer patients from a phase 1 clinical trial, treated at 0.35T or 1.5T MR-Linacs at three institutions, were used for external testing. Planning and fraction images were paired (490 pairs) for training. Two hybrid transformer-convolutional neural network TransMorph models, trained with mean squared error (MSE), Dice similarity coefficient (DSC), and regularization losses (TM_{MSE+Dice}) or with MSE and regularization losses (TM_{MSE}), were used to deformably register planning to fraction images. The TransMorph models predicted diffeomorphic dense displacement fields. Multi-label images including seven thoracic OARs and the GTV were propagated to generate fraction segmentations. Model predictions were compared with contours obtained through B-spline registration, vendor registration, and the auto-segmentation method nnU-Net. Evaluation metrics included the DSC and Hausdorff distance percentiles (50th and 95th) against clinical contours.
Main results: TM_{MSE+Dice} and TM_{MSE} achieved mean OAR/GTV DSCs of 0.90/0.82 and 0.90/0.79 on the internal test data and 0.84/0.77 and 0.85/0.76 on the central lung tumor external test data. On stage 3 data, TM_{MSE+Dice} achieved mean OAR/GTV DSCs of 0.87/0.79 and 0.83/0.78 for the 0.35T MR-Linac datasets, and 0.87/0.75 for the 1.5T MR-Linac dataset. TM_{MSE+Dice} and TM_{MSE} had significantly higher geometric accuracy than the other methods on external data. No significant difference between TM_{MSE+Dice} and TM_{MSE} was found.
Significance: TransMorph models achieved time-efficient segmentation of fraction MRIs with high geometric accuracy and accurately segmented images obtained at different field strengths.
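The final propagation step, warping a multi-label planning segmentation with the predicted displacement field, might look like the following sketch (nearest-neighbor sampling and the x/y/z channel ordering are assumptions, not details from the paper):

```python
# Sketch: warp an integer multi-label image with a dense displacement field.
# Nearest-neighbor sampling keeps the OAR/GTV label codes intact.
import torch
import torch.nn.functional as F

def warp_labels(labels: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """labels: (B, 1, D, H, W) integer map; disp: (B, 3, D, H, W) in voxels,
    channel order (x, y, z) assumed."""
    B, _, D, H, W = labels.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    base = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1),
                         size=(B, 1, D, H, W), align_corners=True)
    # Convert voxel displacements to normalized units per axis.
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)])
    flow = disp.permute(0, 2, 3, 4, 1) * scale
    return F.grid_sample(labels.float(), base + flow, mode="nearest",
                         align_corners=True).long()

labels = torch.randint(0, 8, (1, 1, 32, 64, 64))  # 7 OARs + GTV (assumed codes)
disp = torch.zeros(1, 3, 32, 64, 64)              # zero field = identity warp
assert torch.equal(warp_labels(labels, disp), labels)
```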