Page 42 of 1621616 results

Profiling disease experience in patients living with brain aneurysms by analyzing multimodal clinical data and quality of life measures.

Reder SR, Hardt J, Brockmann MA, Brockmann C, Kim S, Kawulycz M, Schulz M, Kantelhardt SR, Petrowski K, Fischbeck S

pubmed · Aug 20 2025
To explore the mental and physical health (MH, PH) of individuals living with brain aneurysms and to profile their differences in disease experience. In N = 111 patients, the Short Form 36 Health Survey (SF-36) was assessed via an online survey; supplementary data included angiography and magnetic resonance imaging (MRI) findings (including AI-based brain lesion volume analyses in ml, or LV). Correlation and regression analyses were conducted (including biological sex, age, overall brain LV, PH, MH). Disease profiles were determined using principal component analysis. Compared with the German normative cohort, patients exhibited overall lower SF-36 scores. In regression analyses, the DW was predictable by PH (β = 0.345) and MH (β = -0.646; R = 0.557; p < 0.001). Vasospasm severity correlated significantly with LV (r = 0.242, p = 0.043), MH (r = -0.321, p = 0.043), and PH (r = -0.372, p = 0.028). Higher LV was associated with poorer PH (r = -0.502, p = 0.001), but not with MH (p > 0.05). Four main disease profiles were identified: (1) individuals with increased LV post-rupture (high DW); (2) older individuals with stable aneurysms (low DW); (3) a sex disparity in QoL despite similar vasospasm severity; and (4) chronic pain and its impact on daily tasks. Two sub-profiles highlighted trauma-induced impairments, functional disabilities from LV, and persistent anxiety. Reduced thalamic and pallidal volumes were linked to low QoL following subarachnoid hemorrhage. MH has a greater impact on quality of life than physical disabilities, leading to prolonged DW. A singular physical impairment was rather atypical for a perceived worse outcome. Patient profiles revealed that clinical history, sex, psychological stress, and pain each contribute uniquely to QoL and work capacity. Prioritizing MH in assessing workability and rehabilitation is crucial for survivors' long-term outcomes.
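Several of the reported associations are Pearson correlation coefficients (e.g., r = -0.502 between lesion volume and PH). As a minimal illustration of how such a coefficient is computed from paired samples (a generic sketch, not the study's analysis code):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly inversely related toy data
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0
```

A value near -1 indicates a strong inverse association, as reported here between lesion volume and physical health.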

Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.

Zhang Z, Hides JA, De Martino E, Millner J, Tuxworth G

pubmed · Aug 20 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version; during production of the final copyedited article, errors may be discovered which could affect the content. Chronic low back pain is a global health issue with considerable socioeconomic burden and is associated with changes in the lumbar paraspinal muscles (LPM). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MRI. A total of 1,302 MRIs from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Model segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and intraclass correlation coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93 to 0.97 on the external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (P < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented the LPM and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MRIs. ©RSNA, 2025.
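The Dice similarity coefficient used to evaluate segmentation overlap can be sketched as follows for binary masks (a toy pure-Python version; real pipelines operate on full image arrays):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient for two binary masks (flat 0/1 lists)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks count as perfect agreement
    return 2.0 * intersection / total if total else 1.0

a = [1, 1, 1, 0, 0]
b = [1, 1, 0, 0, 0]
print(dice(a, b))  # 2*2 / (3+2) = 0.8
```

A DSC of 0.98, as reported on the internal test set, means near-total voxel overlap between automated and manual masks.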

ScarNet: A Novel Foundation Model for Automated Myocardial Scar Quantification from Late Gadolinium-Enhancement Images.

Tavakoli N, Rahsepar AA, Benefield BC, Shen D, López-Tapia S, Schiffers F, Goldberger JJ, Albert CM, Wu E, Katsaggelos AK, Lee DC, Kim D

pubmed · Aug 20 2025
Late gadolinium enhancement (LGE) imaging remains the gold standard for assessing myocardial fibrosis and scarring, with left ventricular (LV) LGE presence and extent serving as a predictor of major adverse cardiac events (MACE). Despite its clinical significance, LGE-based LV scar quantification is not used routinely due to labor-intensive manual segmentation and substantial inter-observer variability. We developed ScarNet, which synergistically combines a transformer-based encoder from the Medical Segment Anything Model (MedSAM), fine-tuned on our dataset, with a convolution-based U-Net decoder with tailored attention blocks, to automatically segment myocardial scar boundaries while maintaining anatomical context. The network was trained and fine-tuned on an existing database of 401 ischemic cardiomyopathy patients (4,137 2D LGE images) with expert segmentation of myocardial and scar boundaries, validated on 100 patients (1,034 2D LGE images) during training, and tested on an unseen set of 184 patients (1,895 2D LGE images). Ablation studies were conducted to validate each architectural component's contribution. In the 184 independent test patients, ScarNet achieved accurate scar boundary segmentation (median Dice = 0.912 [interquartile range (IQR): 0.863-0.944], concordance correlation coefficient [CCC] = 0.963), significantly outperforming both MedSAM (median Dice = 0.046 [IQR: 0.043-0.047], CCC = 0.018) and nnU-Net (median Dice = 0.638 [IQR: 0.604-0.661], CCC = 0.734). For scar volume quantification, ScarNet demonstrated excellent agreement with manual analysis (CCC = 0.995, percent bias = -0.63%, CoV = 4.3%) compared with MedSAM (CCC = 0.002, percent bias = -13.31%, CoV = 130.3%) and nnU-Net (CCC = 0.910, percent bias = -2.46%, CoV = 20.3%). Similar trends were observed in Monte Carlo simulations with noise perturbations.
The overall accuracy was highest for ScarNet (sensitivity = 95.3%; specificity = 92.3%), followed by nnU-Net (sensitivity = 74.9%; specificity = 69.2%) and MedSAM (sensitivity = 15.2%; specificity = 92.3%). ScarNet outperformed MedSAM and nnU-Net in delineating myocardial and scar boundaries in LGE images of patients with ischemic cardiomyopathy. Monte Carlo simulations demonstrated that ScarNet is less sensitive to noise perturbations than the other tested networks.
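The concordance correlation coefficient reported throughout this abstract measures agreement (not just correlation) between automated and manual values. Lin's formulation can be sketched as follows (a generic toy version, not the authors' code):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient (population variances)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Penalizes both scatter around the fit and shifts away from identity
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0 (perfect agreement)
```

Unlike Pearson's r, CCC drops below 1 when measurements are correlated but systematically biased, which is why it suits method-agreement studies like this one.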

Comparing Conditional Diffusion Models for Synthesizing Contrast-Enhanced Breast MRI from Pre-Contrast Images

Sebastian Ibarra, Javier del Riego, Alessandro Catanese, Julian Cuba, Julian Cardona, Nataly Leon, Jonathan Infante, Karim Lekadir, Oliver Diaz, Richard Osuala

arxiv preprint · Aug 19 2025
Dynamic contrast-enhanced (DCE) MRI is essential for breast cancer diagnosis and treatment. However, its reliance on contrast agents introduces safety concerns, contraindications, increased cost, and workflow complexity. To this end, we present pre-contrast conditioned denoising diffusion probabilistic models to synthesize DCE-MRI, introducing, evaluating, and comparing a total of 22 generative model variants in both single-breast and full-breast settings. To enhance lesion fidelity, we introduce both tumor-aware loss functions and explicit tumor segmentation mask conditioning. Using a public multicenter dataset and comparing to respective pre-contrast baselines, we observe that subtraction image-based models consistently outperform post-contrast-based models across five complementary evaluation metrics. Apart from assessing the entire image, we also separately evaluate the region of interest, where both tumor-aware losses and segmentation mask inputs improve evaluation metrics. The latter notably enhance qualitative results capturing contrast uptake, albeit assuming access to tumor localization inputs that are not guaranteed to be available in screening settings. A reader study involving 2 radiologists and 4 MRI technologists confirms the high realism of the synthetic images, indicating an emerging clinical potential of generative contrast enhancement. We share our codebase at https://github.com/sebastibar/conditional-diffusion-breast-MRI.

Latent Interpolation Learning Using Diffusion Models for Cardiac Volume Reconstruction

Niklas Bubeck, Suprosanna Shit, Chen Chen, Can Zhao, Pengfei Guo, Dong Yang, Georg Zitzlsberger, Daguang Xu, Bernhard Kainz, Daniel Rueckert, Jiazhen Pan

arxiv preprint · Aug 19 2025
Cardiac Magnetic Resonance (CMR) imaging is a critical tool for diagnosing and managing cardiovascular disease, yet its utility is often limited by the sparse acquisition of 2D short-axis slices, resulting in incomplete volumetric information. Accurate 3D reconstruction from these sparse slices is essential for comprehensive cardiac assessment, but existing methods face challenges, including reliance on predefined interpolation schemes (e.g., linear or spherical), computational inefficiency, and dependence on additional semantic inputs such as segmentation labels or motion data. To address these limitations, we propose a novel Cardiac Latent Interpolation Diffusion (CaLID) framework that introduces three key innovations. First, we present a data-driven interpolation scheme based on diffusion models, which can capture complex, non-linear relationships between sparse slices and improves reconstruction accuracy. Second, we design a computationally efficient method that operates in the latent space and speeds up 3D whole-heart upsampling time by a factor of 24, reducing computational overhead compared to previous methods. Third, with only sparse 2D CMR images as input, our method achieves SOTA performance against baseline methods, eliminating the need for auxiliary inputs such as morphological guidance, thus simplifying workflows. We further extend our method to 2D+T data, enabling the effective modeling of spatiotemporal dynamics and ensuring temporal coherence. Extensive volumetric evaluations and downstream segmentation tasks demonstrate that CaLID achieves superior reconstruction quality and efficiency. By addressing the fundamental limitations of existing approaches, our framework advances the state of the art for spatial and spatiotemporal whole-heart reconstruction, offering a robust and clinically practical solution for cardiovascular imaging.
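The "predefined interpolation schemes (e.g., linear or spherical)" that CaLID's learned interpolation replaces can be illustrated for two latent vectors (a generic sketch with hypothetical helper names, not the paper's code):

```python
import math

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors."""
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

def slerp(z0, z1, t):
    """Spherical linear interpolation (assumes nonzero, non-parallel vectors)."""
    dot = sum(a * b for a, b in zip(z0, z1))
    n0 = math.sqrt(sum(a * a for a in z0))
    n1 = math.sqrt(sum(b * b for b in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    s = math.sin(omega)
    w0, w1 = math.sin((1 - t) * omega) / s, math.sin(t * omega) / s
    return [w0 * a + w1 * b for a, b in zip(z0, z1)]

print(lerp([0.0, 0.0], [2.0, 4.0], 0.5))   # [1.0, 2.0]
print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))  # ~[0.7071, 0.7071]
```

Both schemes are fixed functions of their endpoints; the paper's argument is that a diffusion model can learn richer, data-dependent mappings between sparse slices.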

A fully automatic knee subregion segmentation network based on tissue segmentation and anatomical geometry.

Chen S, Zhong L, Zhang Z, Zhang X

pubmed · Aug 19 2025
To address the difficulty of knee MRI bone and cartilage subregion segmentation, which stems from the large number of subregions and their unclear boundaries, we propose a fully automatic knee subregion segmentation network based on tissue segmentation and anatomical geometry. Specifically, we first use a transformer-based multilevel region and edge aggregation network to achieve precise segmentation of bone and cartilage tissue edges in knee MRI. We then designed a fibula detection module, which determines the medial and lateral sides of the knee by detecting the position of the fibula. Finally, a subregion segmentation module based on boundary information divides bone and cartilage tissues into subregions by detecting their boundaries. In addition, to provide data support for the proposed model, a fibula classification dataset and a knee MRI bone and cartilage subregion dataset were established. On the fibula classification dataset, the proposed method achieved a detection accuracy of 1.000 in identifying the medial and lateral sides of the knee. On the knee MRI bone and cartilage subregion dataset, the proposed method attained an average Dice score of 0.953 for bone subregions and 0.831 for cartilage subregions, verifying its effectiveness.

A Systematic Study of Deep Learning Models and xAI Methods for Region-of-Interest Detection in MRI Scans

Justin Yiu, Kushank Arora, Daniel Steinberg, Rohit Ghiya

arxiv preprint · Aug 19 2025
Magnetic Resonance Imaging (MRI) is an essential diagnostic tool for assessing knee injuries. However, manual interpretation of MRI slices remains time-consuming and prone to inter-observer variability. This study presents a systematic evaluation of various deep learning architectures combined with explainable AI (xAI) techniques for automated region of interest (ROI) detection in knee MRI scans. We investigate both supervised and self-supervised approaches, including ResNet50, InceptionV3, Vision Transformers (ViT), and multiple U-Net variants augmented with multi-layer perceptron (MLP) classifiers. To enhance interpretability and clinical relevance, we integrate xAI methods such as Grad-CAM and Saliency Maps. Model performance is assessed using AUC for classification and PSNR/SSIM for reconstruction quality, along with qualitative ROI visualizations. Our results demonstrate that ResNet50 consistently excels in classification and ROI identification, outperforming transformer-based models under the constraints of the MRNet dataset. While hybrid U-Net + MLP approaches show potential for leveraging spatial features in reconstruction and interpretability, their classification performance remains lower. Grad-CAM consistently provided the most clinically meaningful explanations across architectures. Overall, CNN-based transfer learning emerges as the most effective approach for this dataset, while future work with larger-scale pretraining may better unlock the potential of transformer models.
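The AUC used above to assess classification has a useful probabilistic reading: it is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal rank-based sketch (generic illustration, not the study's evaluation code):

```python
def auc(scores_pos, scores_neg):
    """AUC via pairwise comparison of positive vs. negative scores (ties = 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Positives mostly score higher than negatives -> AUC near 1
print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # 8/9 ~ 0.889
```

An AUC of 0.5 corresponds to chance-level ranking, which is why it serves as the baseline when comparing architectures on the MRNet dataset.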

Emerging modalities for neuroprognostication in neonatal encephalopathy: harnessing the potential of artificial intelligence.

Chawla V, Cizmeci MN, Sullivan KM, Gritz EC, Q Cardona V, Menkiti O, Natarajan G, Rao R, McAdams RM, Dizon ML

pubmed · Aug 19 2025
Neonatal Encephalopathy (NE) from presumed hypoxic-ischemic encephalopathy (pHIE) is a leading cause of morbidity and mortality in infants worldwide. Recent advancements in HIE research have introduced promising tools for improved screening of high-risk infants, time to diagnosis, and accuracy of assessment of neurologic injury to guide management and predict outcomes, some of which integrate artificial intelligence (AI) and machine learning (ML). This review begins with an overview of AI/ML before examining emerging prognostic approaches for predicting outcomes in pHIE. It explores various modalities including placental and fetal biomarkers, gene expression, electroencephalography, brain magnetic resonance imaging and other advanced neuroimaging techniques, clinical video assessment tools, and transcranial magnetic stimulation paired with electromyography. Each of these approaches may come to play a crucial role in predicting outcomes in pHIE. We also discuss the application of AI/ML to enhance these emerging prognostic tools. While further validation is needed for widespread clinical adoption, these tools and their multimodal integration hold the potential to better leverage neuroplasticity windows of affected infants. IMPACT: This article provides an overview of placental pathology, biomarkers, gene expression, electroencephalography, motor assessments, brain imaging, and transcranial magnetic stimulation tools for long-term neurodevelopmental outcome prediction following neonatal encephalopathy, that lend themselves to augmentation by artificial intelligence/machine learning (AI/ML). Emerging AI/ML tools may create opportunities for enhanced prognostication through multimodal analyses.

Automated adaptive detection and reconstruction of quiescent cardiac phases in free-running whole-heart acquisitions using Synchronicity Maps from PHysiological mOtioN In Cine (SYMPHONIC) MRI.

Bongiolatti GMCR, Masala N, Bastiaansen JAM, Yerly J, Prša M, Rutz T, Tenisch E, Si-Mohamed S, Stuber M, Roy CW

pubmed · Aug 19 2025
To reconstruct whole-heart images from free-running acquisitions through automated selection of data acceptance windows (ES: end-systole, MD: mid-diastole, ED: end-diastole) that account for heart rate variability (HRV). SYMPHONIC was developed and validated in simulated (N = 1000) and volunteer (N = 14) data. To validate SYMPHONIC, the position of the detected acceptance windows, their total duration, and the resulting ventricular volume were compared to the simulated ground truth to establish metrics for temporal error, quiescent interval duration, and volumetric error, respectively. SYMPHONIC MD images and those using manually defined acceptance windows with fixed (MANUAL-FIXED) or adaptive (MANUAL-ADAPT) width were compared by measuring vessel sharpness (VS). The impact of HRV was assessed in patients (N = 6). Mean temporal error was larger for MD than for ES and ED in both simulations and volunteers. Mean volumetric errors were comparable. Interval duration differed for ES (p = 0.04) and ED (p < 0.001), but not for MD (p = 0.08). In simulations, SYMPHONIC and MANUAL-ADAPT provided consistent VS with increasing HRV, while VS decreased for MANUAL-FIXED. In volunteers, VS differed between MANUAL-ADAPT and MANUAL-FIXED (p < 0.01), but not between SYMPHONIC and MANUAL-ADAPT (p = 0.03) or MANUAL-FIXED (p = 0.42). SYMPHONIC accurately detected quiescent cardiac phases in free-running data and yielded high-quality whole-heart images despite the presence of HRV.
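As a conceptual toy only (not the SYMPHONIC algorithm, which builds synchronicity maps from physiological motion), selecting a quiescent acceptance window can be pictured as scanning a per-frame motion signal for the longest contiguous run below a motion threshold:

```python
def longest_quiescent_window(motion, threshold):
    """Return (start, end) frame indices of the longest run with motion < threshold."""
    best = (0, 0)
    start = None
    for i, m in enumerate(motion + [float("inf")]):  # sentinel closes last run
        if m < threshold and start is None:
            start = i
        elif m >= threshold and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best

motion = [5, 1, 1, 4, 1, 1, 1, 6]  # arbitrary per-frame motion amplitudes
print(longest_quiescent_window(motion, 2))  # (4, 7)
```

An adaptive method re-derives such windows per heartbeat, which is how it can tolerate heart rate variability where a fixed-width window cannot.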

Advanced liver fibrosis detection using a two-stage deep learning approach on standard T2-weighted MRI.

Gupta P, Singh S, Gulati A, Dutta N, Aggarwal Y, Kalra N, Premkumar M, Taneja S, Verma N, De A, Duseja A

pubmed · Aug 19 2025
To develop and validate a deep learning model for automated detection of advanced liver fibrosis using standard T2-weighted MRI. We utilized two datasets: the public CirrMRI600+ dataset (n = 374), containing T2-weighted MRI scans from patients with cirrhosis (n = 318) and healthy subjects (n = 56), and an in-house dataset of chronic liver disease patients (n = 187). A two-stage deep learning pipeline was developed: first, an automated liver segmentation model using the nnU-Net architecture was trained on CirrMRI600+ and then applied to segment livers in our in-house dataset; second, a Masked Attention ResNet classification model was trained. For classification model training, patients with liver stiffness measurement (LSM) > 12 kPa were classified as advanced fibrosis (n = 104), while healthy subjects from CirrMRI600+ and patients with LSM ≤ 12 kPa were classified as non-advanced fibrosis (n = 116). Model validation was performed exclusively on a separate test set of 23 patients with histopathological confirmation of the degree of fibrosis (METAVIR ≥ F3 indicating advanced fibrosis). We additionally compared our two-stage approach with direct classification without segmentation, and evaluated alternative architectures including DenseNet121 and SwinTransformer. The liver segmentation model performed excellently on the test set (mean Dice score: 0.960 ± 0.009; IoU: 0.923 ± 0.016). On the pathologically confirmed independent test set (n = 23), our two-stage model achieved strong diagnostic performance (sensitivity: 0.778, specificity: 0.800, AUC: 0.811, accuracy: 0.783), significantly outperforming direct classification without segmentation (AUC: 0.743). Classification performance was highly dependent on segmentation quality: cases with excellent segmentation (Score 1) showed higher accuracy (0.818) than those with poor segmentation (Score 3; accuracy: 0.625).
Alternative architectures with masked attention showed comparable but slightly lower performance (DenseNet121: AUC 0.795; SwinTransformer: AUC 0.782). Our fully automated deep learning pipeline effectively detects advanced liver fibrosis on standard non-contrast T2-weighted MRI, potentially offering a non-invasive alternative to current diagnostic approaches. The segmentation-first approach provides significant performance gains over direct classification.
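The sensitivity, specificity, and accuracy figures reported above follow directly from confusion-matrix counts; a minimal sketch with arbitrary toy counts (not the study's actual case breakdown):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),        # true-positive rate
        "specificity": tn / (tn + fp),        # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Toy counts for illustration
print(diagnostic_metrics(tp=7, fp=2, tn=8, fn=2))
```

On a small histopathologically confirmed test set (n = 23 here), each misclassified patient shifts these ratios noticeably, which is worth bearing in mind when comparing the architectures' AUCs.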
