
Generating Brain MRI with StyleGAN2-ADA: The Effect of the Training Set Size on the Quality of Synthetic Images.

Lai M, Mascalchi M, Tessa C, Diciotti S

PubMed · Sep 23 2025
The potential of deep learning for medical imaging is often constrained by limited data availability. Generative models can unlock this potential by generating synthetic data that reproduces the statistical properties of real data while being more accessible for sharing. In this study, we investigated the influence of training set size on the performance of a state-of-the-art generative adversarial network, the StyleGAN2-ADA, trained on a cohort of 3,227 subjects from the OpenBHB dataset to generate 2D slices of brain MR images from healthy subjects. The quality of the synthetic images was assessed through qualitative evaluations and state-of-the-art quantitative metrics, which are provided in a publicly accessible repository. Our results demonstrate that StyleGAN2-ADA generates realistic and high-quality images, deceiving even expert radiologists while preserving privacy, as it did not memorize training images. Notably, increasing the training set size led to slight improvements in fidelity metrics. However, training set size had no noticeable impact on diversity metrics, highlighting the persistent limitation of mode collapse. Furthermore, we observed that diversity metrics, such as coverage and β-recall, are highly sensitive to the number of synthetic images used in their computation, leading to inflated values when synthetic data significantly outnumber real ones. These findings underscore the need to carefully interpret diversity metrics and the importance of employing complementary evaluation strategies for robust assessment. Overall, while StyleGAN2-ADA shows promise as a tool for generating privacy-preserving synthetic medical images, overcoming diversity limitations will require exploring alternative generative architectures or incorporating additional regularization techniques.
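
For context on the sensitivity noted above, here is a minimal sketch of the coverage metric (Naeem et al., 2020), one of the diversity measures the abstract refers to; the random feature vectors stand in for real and synthetic image embeddings, and the snippet only illustrates how the score is computed and why it can inflate as the synthetic set grows. It is not the authors' evaluation code.

```python
# Coverage: fraction of real samples whose k-NN ball contains at least one synthetic sample.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def coverage(real_feats: np.ndarray, fake_feats: np.ndarray, k: int = 5) -> float:
    # Radius of each real sample = distance to its k-th nearest *real* neighbour.
    nn_real = NearestNeighbors(n_neighbors=k + 1).fit(real_feats)   # +1 because self is included
    radii = nn_real.kneighbors(real_feats)[0][:, -1]
    # Distance from each real sample to its nearest *synthetic* neighbour.
    nn_fake = NearestNeighbors(n_neighbors=1).fit(fake_feats)
    d_fake = nn_fake.kneighbors(real_feats)[0][:, 0]
    return float(np.mean(d_fake <= radii))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 64))                 # stand-in embeddings, not MRI data
for n_fake in (500, 5000, 50000):                 # growing synthetic set, same generator
    fake = rng.normal(size=(n_fake, 64))
    print(n_fake, round(coverage(real, fake), 3)) # coverage creeps upward with n_fake
```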

Early prediction of periventricular leukomalacia from MRI changes: a machine learning approach for risk stratification.

Lin J, Luo J, Luo Y, Zhuang Y, Mo T, Wen S, Chen T, Yun G, Zeng H

PubMed · Sep 23 2025
To develop an accessible model integrating clinical, MRI, and radiomic features to predict periventricular leukomalacia (PVL) in high-risk infants. Two hundred and seventeen infants (2015-2022) with suspected motor abnormalities were stratified into training (n = 124), internal validation (n = 31), and external validation (n = 62) cohorts by MRI scanner. Radiomic features were extracted from white matter regions on axial sequences. Feature selection employed T-tests, correlation filtering, Random Forest, and LASSO regression. Multivariate logistic models were evaluated by receiver operating characteristic (ROC) analysis, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, calibration, decision curve analysis (DCA), net reclassification index (NRI), and integrated discrimination improvement (IDI). Clinical predictors (gestational age, neonatal hypoglycemia, hypoxic-ischemic events, infection) and MRI features (dilated lateral ventricle, delayed myelination, and periventricular abnormal signal) were retained through univariate and multivariate screening. Five predictive models were developed and validated using internal testing, bootstrapping, and external cohorts: a clinical model (Model C), an MRI model (Model M), a clinical + MRI model (Model C + M), a radiomic model, and a clinical + MRI + radiomics model (Model C + M + R). Among them, Model C + M + R achieved the best overall performance, with an area under the curve (AUC) of 0.96 (95% CI: 0.90-1.00), accuracy of 0.87 (95% CI: 0.76-0.94), sensitivity of 0.88, specificity of 0.85, PPV of 0.96, and NPV of 0.65 in the external validation cohort. Compared with Model C + M, Model C + M + R demonstrated significant reclassification (NRI = 0.631, p < 0.001) and discrimination improvements (IDI = 0.037, p = 0.020). Conventional MRI-derived radiomics enhances PVL risk stratification, and this interpretable, accessible model provides a new tool for high-risk infant evaluation. Question: Periventricular leukomalacia requires early identification to optimize neurorehabilitation, yet early white matter injury in infants is challenging to identify through conventional MRI visual assessment. Findings: The clinical-MRI-radiomic model demonstrates the best performance for predicting PVL, with an AUC of 0.93 in the training and 0.96 in the external validation cohort. Clinical relevance: An accessible and interpretable predictive tool for PVL prediction has been developed and validated, which may enable earlier targeted interventions.
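
As a rough illustration of the reclassification statistics reported above, the sketch below computes the category-free NRI and the IDI from two models' predicted probabilities; the variable names (`p_old`, `p_new`, `y`) and the toy data are hypothetical, and this is not the study's code.

```python
import numpy as np

def continuous_nri(y, p_old, p_new):
    """Category-free NRI; ties (p_new == p_old) are counted as downward movement."""
    y = np.asarray(y, bool)
    up = np.asarray(p_new, float) > np.asarray(p_old, float)
    nri_events = np.mean(up[y]) - np.mean(~up[y])        # net upward movement among events
    nri_nonevents = np.mean(~up[~y]) - np.mean(up[~y])   # net downward movement among non-events
    return nri_events + nri_nonevents

def idi(y, p_old, p_new):
    """Difference in discrimination slopes between the new and old model."""
    y = np.asarray(y, bool)
    p_old, p_new = np.asarray(p_old, float), np.asarray(p_new, float)
    slope_new = np.mean(p_new[y]) - np.mean(p_new[~y])
    slope_old = np.mean(p_old[y]) - np.mean(p_old[~y])
    return slope_new - slope_old

y = np.array([1, 1, 1, 0, 0, 0, 0, 1])                          # toy outcomes (1 = PVL)
p_old = np.array([0.6, 0.4, 0.7, 0.3, 0.5, 0.2, 0.4, 0.5])       # e.g. Model C + M
p_new = np.array([0.7, 0.5, 0.8, 0.2, 0.4, 0.2, 0.3, 0.6])       # e.g. Model C + M + R
print(continuous_nri(y, p_old, p_new), idi(y, p_old, p_new))
```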

Enhancing AI-based decision support system with automatic brain tumor segmentation for EGFR mutation classification.

Gökmen N, Kocadağlı O, Cevik S, Aktan C, Eghbali R, Liu C

PubMed · Sep 23 2025
Glioblastoma (GBM) carries a poor prognosis; epidermal growth factor receptor (EGFR) mutations further shorten survival. We propose a fully automated MRI-based decision-support system (DSS) that segments GBM and classifies EGFR status, reducing reliance on invasive biopsy. The segmentation module (UNet SI) fuses multiresolution, entropy-ranked shearlet features with CNN features, preserving fine detail through identity long-skip connections, yielding a lightweight 1.9M-parameter network. Tumour masks are fed to an Inception ResNet-v2 classifier via a 512-D bottleneck. The pipeline was five-fold cross-validated on 98 contrast-enhanced T1-weighted scans (Memorial Hospital; Ethics 24.12.2021/008) and externally validated on BraTS 2019. On the Memorial cohort, UNet SI achieved Dice 0.873, Jaccard 0.853, SSIM 0.992, and HD95 24.19 mm. EGFR classification reached accuracy 0.960, precision 1.000, recall 0.871, and AUC 0.94, surpassing published state-of-the-art results. Inference time is ≤ 0.18 s per slice on a 4 GB GPU. By combining shearlet-enhanced segmentation with streamlined classification, the DSS delivers superior EGFR prediction and is suitable for integration into routine clinical workflows.
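
For readers unfamiliar with the segmentation metrics quoted (Dice, HD95), a small self-contained sketch is given below; it assumes binary masks on a regular grid with known voxel spacing and is not the paper's evaluation pipeline.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (HD95) in the units of `spacing`."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)               # boundary voxels of each mask
    surf_b = b ^ ndimage.binary_erosion(b)
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)   # distance to B surface
    dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)   # distance to A surface
    d_ab = dt_b[surf_a]                                   # A-surface -> B-surface distances
    d_ba = dt_a[surf_b]                                   # B-surface -> A-surface distances
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```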

Improved pharmacokinetic parameter estimation from DCE-MRI via spatial-temporal information-driven unsupervised learning.

He X, Wang L, Yang Q, Wang J, Xing Z, Cao D, Cai C, Cai S

PubMed · Sep 23 2025
Objective: Pharmacokinetic (PK) parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provide quantitative characterization of tissue perfusion and permeability. However, existing deep learning methods for PK parameter estimation rely on either temporal or spatial features alone, overlooking the integrated spatial-temporal characteristics of DCE-MRI data. This study aims to remove this barrier by fully leveraging the spatial and temporal information to improve parameter estimation.
Approach: A spatial-temporal information-driven unsupervised deep learning method (STUDE) was proposed. STUDE combines convolutional neural networks (CNNs) and a customized Vision Transformer (ViT) to separately capture spatial and temporal features, enabling comprehensive modelling of contrast agent dynamics and tissue heterogeneity. In addition, a spatial-temporal attention (STA) feature fusion module was proposed to enable adaptive focus on both dimensions for more effective feature fusion. Moreover, the extended Tofts model imposed physical constraints on PK parameter estimation, enabling unsupervised training of STUDE. The accuracy and diagnostic value of STUDE were compared with the orthodox non-linear least squares (NLLS) method and representative deep learning-based methods (i.e., GRU, CNN, U-Net, and VTDCE-Net) on a numerical brain phantom and 87 glioma patients, respectively.
Main results: On the numerical brain phantom, STUDE produced PK parameter maps with the lowest systematic and random errors even under low SNR conditions (SNR = 10 dB). On glioma data, STUDE generated parameter maps with reduced noise compared to NLLS and demonstrated superior structural clarity compared to other methods. Furthermore, STUDE outperformed all other methods in the identification of glioma isocitrate dehydrogenase (IDH) mutation status, achieving area under the curve (AUC) values of 0.840 and 0.908 for the receiver operating characteristic curves of K^trans and V_e, respectively. A combination of all PK parameters improved the AUC to 0.926.
Significance: STUDE advances spatial-temporal information-driven and physics-informed learning for precise PK parameter estimation, demonstrating its potential clinical significance.
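
The extended Tofts model that constrains STUDE's training has a simple closed form that is easy to sketch; the snippet below uses a made-up arterial input function and illustrative parameter values, not the study's data.

```python
# C_t(t) = v_p * C_p(t) + Ktrans * int_0^t C_p(tau) * exp(-(Ktrans/v_e) * (t - tau)) dtau
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration curve from an AIF cp(t) and PK parameters (Ktrans in 1/min)."""
    dt = t[1] - t[0]                                   # assumes a uniform time grid
    kernel = np.exp(-(ktrans / ve) * t)                # impulse response of the EES compartment
    conv = np.convolve(cp, kernel)[: len(t)] * dt      # causal discrete convolution
    return vp * cp + ktrans * conv

t = np.arange(0, 5, 1 / 60)                            # 5 minutes sampled every second
cp = 5.0 * t * np.exp(-t / 0.5)                        # toy gamma-variate AIF (illustrative)
ct = extended_tofts(t, cp, ktrans=0.25, ve=0.3, vp=0.05)
```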

Exploiting Cross-modal Collaboration and Discrepancy for Semi-supervised Ischemic Stroke Lesion Segmentation from Multi-sequence MRI Images.

Cao Y, Qin T, Liu Y

PubMed · Sep 23 2025
Accurate ischemic stroke lesion segmentation is useful to define the optimal reperfusion treatment and unveil the stroke etiology. Despite the importance of diffusion-weighted MRI (DWI) for stroke diagnosis, learning from multi-sequence MRI images such as apparent diffusion coefficient (ADC) maps can capitalize on the complementary nature of information from various modalities and shows strong potential to improve segmentation performance. However, existing deep learning-based methods require large amounts of well-annotated data from multiple modalities for training, and acquiring such datasets is often impractical. We explore semi-supervised stroke lesion segmentation from multi-sequence MRI images, utilizing unlabeled data to improve performance under limited annotation, and propose a novel framework that exploits cross-modality collaboration and discrepancy to efficiently use unlabeled data. Specifically, we adopt a cross-modal bidirectional copy-paste strategy to enable information collaboration between different modalities and a cross-modal discrepancy-informed correction strategy to efficiently learn from limited labeled multi-sequence MRI data and abundant unlabeled data. Extensive experiments on the ischemic stroke lesion segmentation (ISLES 22) dataset demonstrate that our method efficiently utilizes unlabeled data, achieving a 12.32% DSC improvement over a supervised baseline using 10% annotations and outperforming existing semi-supervised segmentation methods.
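
The exact copy-paste scheme is specific to the paper, but the general idea of a bidirectional, modality-consistent copy-paste between labeled and unlabeled volumes can be sketched as below; the array shapes, the cuboid-size range, and the helper name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def copy_paste(src_dwi, src_adc, dst_dwi, dst_adc, rng):
    """Paste one random cuboid from the source pair into the destination pair,
    using the same cuboid for both modalities so DWI and ADC stay aligned."""
    shape = np.array(src_dwi.shape)
    size = (shape * rng.uniform(0.25, 0.5, size=3)).astype(int)          # cuboid size
    lo = [rng.integers(0, s - z + 1) for s, z in zip(shape, size)]        # cuboid corner
    sl = tuple(slice(l, l + z) for l, z in zip(lo, size))
    out_dwi, out_adc = dst_dwi.copy(), dst_adc.copy()
    out_dwi[sl], out_adc[sl] = src_dwi[sl], src_adc[sl]
    return out_dwi, out_adc, sl      # sl also locates where labels / pseudo-labels mix

rng = np.random.default_rng(0)
lab = [rng.normal(size=(64, 64, 32)) for _ in range(2)]    # toy labeled DWI/ADC pair
unl = [rng.normal(size=(64, 64, 32)) for _ in range(2)]    # toy unlabeled DWI/ADC pair
into_unl = copy_paste(*lab, *unl, rng)                     # labeled patch -> unlabeled volume
into_lab = copy_paste(*unl, *lab, rng)                     # unlabeled patch -> labeled volume
```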

Graph-Radiomic Learning (GrRAiL) Descriptor to Characterize Imaging Heterogeneity in Confounding Tumor Pathologies

Dheerendranath Battalapalli, Apoorva Safai, Maria Jaramillo, Hyemin Um, Gustavo Adalfo Pineda Ortiz, Ulas Bagci, Manmeet Singh Ahluwalia, Marwa Ismail, Pallavi Tiwari

arXiv preprint · Sep 23 2025
A significant challenge in solid tumors is reliably distinguishing confounding pathologies from malignant neoplasms on routine imaging. While radiomics methods seek surrogate markers of lesion heterogeneity on CT/MRI, many aggregate features across the region of interest (ROI) and miss complex spatial relationships among varying intensity compositions. We present a new Graph-Radiomic Learning (GrRAiL) descriptor for characterizing intralesional heterogeneity (ILH) on clinical MRI scans. GrRAiL (1) identifies clusters of sub-regions using per-voxel radiomic measurements, then (2) computes graph-theoretic metrics to quantify spatial associations among clusters. The resulting weighted graphs encode higher-order spatial relationships within the ROI, aiming to reliably capture ILH and disambiguate confounding pathologies from malignancy. To assess efficacy and clinical feasibility, GrRAiL was evaluated in n=947 subjects spanning three use cases: differentiating tumor recurrence from radiation effects in glioblastoma (GBM; n=106) and brain metastasis (n=233), and stratifying pancreatic intraductal papillary mucinous neoplasms (IPMNs) into no+low vs high risk (n=608). In a multi-institutional setting, GrRAiL consistently outperformed state-of-the-art baselines - Graph Neural Networks (GNNs), textural radiomics, and intensity-graph analysis. In GBM, cross-validation (CV) and test accuracies for recurrence vs pseudo-progression were 89% and 78% with >10% test-accuracy gains over comparators. In brain metastasis, CV and test accuracies for recurrence vs radiation necrosis were 84% and 74% (>13% improvement). For IPMN risk stratification, CV and test accuracies were 84% and 75%, showing >10% improvement.
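
As a loose illustration of the descriptor's two stages (sub-region clustering, then graph summarization of spatial relations), here is a generic sketch using KMeans and networkx; the choice of clustering method, adjacency weighting, and graph measures is an assumption, not the authors' GrRAiL implementation.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def cluster_graph_descriptor(feats, roi_mask, n_clusters=4, seed=0):
    """feats: (X, Y, Z, F) per-voxel feature maps; roi_mask: boolean ROI of shape (X, Y, Z)."""
    labels = np.full(roi_mask.shape, -1, dtype=int)
    labels[roi_mask] = KMeans(n_clusters, random_state=seed, n_init=10).fit_predict(feats[roi_mask])
    g = nx.Graph()
    g.add_nodes_from(range(n_clusters))
    for axis in range(3):                                   # count touching voxel pairs along each axis
        a = np.take(labels, np.arange(labels.shape[axis] - 1), axis=axis)
        b = np.take(labels, np.arange(1, labels.shape[axis]), axis=axis)
        valid = (a >= 0) & (b >= 0) & (a != b)
        for u, v in zip(a[valid].ravel().tolist(), b[valid].ravel().tolist()):
            w = g.get_edge_data(u, v, default={"weight": 0})["weight"]
            g.add_edge(u, v, weight=w + 1)
    return {                                                # a few simple graph summaries
        "density": nx.density(g),
        "clustering": nx.average_clustering(g, weight="weight"),
        "mean_weighted_degree": float(np.mean([d for _, d in g.degree(weight="weight")])),
    }

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 32, 16, 5))                    # toy per-voxel radiomic features
roi = np.zeros((32, 32, 16), dtype=bool)
roi[8:24, 8:24, 4:12] = True
print(cluster_graph_descriptor(feats, roi))
```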

Dual-Feature Cross-Fusion Network for Precise Brain Tumor Classification: A Neurocomputational Approach.

M M, G S, Bendre M, Nirmal M

PubMed · Sep 23 2025
Brain tumors represent a significant neurological challenge, affecting individuals across all age groups. Accurate and timely diagnosis of tumor types is critical for effective treatment planning. Magnetic Resonance Imaging (MRI) remains a primary diagnostic modality due to its non-invasive nature and ability to provide detailed brain imaging. However, traditional tumor classification relies on expert interpretation, which is time-consuming and prone to subjectivity. This study proposes a novel deep learning architecture, the Dual-Feature Cross-Fusion Network (DF-CFN), for the automated classification of brain tumors using MRI data. The model integrates ConvNeXt for capturing global contextual features and a shallow CNN combined with Feature Channel Attention Network (FcaNet) for extracting local features. These are fused through a cross-feature fusion mechanism for improved classification. The model is trained and validated using a Kaggle dataset encompassing four tumor classes (glioma, meningioma, pituitary, and non-tumor), achieving an accuracy of 99.33%. Its generalizability is further confirmed using the Figshare dataset, yielding 99.22% accuracy. Comparative analyses with baseline and recent models validate the superiority of DF-CFN in terms of precision and robustness. This approach demonstrates strong potential for assisting clinicians in reliable brain tumor classification, thereby improving diagnostic efficiency and reducing the burden on healthcare professionals.
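
The abstract describes the cross-feature fusion mechanism only at a high level, so the following PyTorch sketch shows one common way to cross-fuse two token streams (mutual cross-attention followed by pooling); the module name, dimensions, and pooling choice are illustrative, not the DF-CFN design.

```python
import torch
import torch.nn as nn

class CrossFusionHead(nn.Module):
    """Toy fusion of a 'global' and a 'local' token stream for classification."""
    def __init__(self, dim=256, heads=4, n_classes=4):
        super().__init__()
        self.g2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.l2g = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, global_tokens, local_tokens):
        # Each branch queries the other branch's tokens.
        g, _ = self.g2l(global_tokens, local_tokens, local_tokens)
        l, _ = self.l2g(local_tokens, global_tokens, global_tokens)
        fused = torch.cat([g.mean(dim=1), l.mean(dim=1)], dim=-1)   # pooled cross-fused features
        return self.classifier(fused)

head = CrossFusionHead()
logits = head(torch.randn(2, 49, 256), torch.randn(2, 196, 256))   # toy token maps from two branches
```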

MRN: Harnessing 2D Vision Foundation Models for Diagnosing Parkinson's Disease with Limited 3D MR Data

Ding Shaodong, Liu Ziyang, Zhou Yijun, Liu Tao

arXiv preprint · Sep 22 2025
The automatic diagnosis of Parkinson's disease is in high clinical demand due to its prevalence and the importance of targeted treatment. Current clinical practice often relies on diagnostic biomarkers in QSM and NM-MRI images. However, the lack of large, high-quality datasets makes training diagnostic models from scratch prone to overfitting. Adapting pre-trained 3D medical models is also challenging, as the diversity of medical imaging leads to mismatches in voxel spacing and modality between pre-training and fine-tuning data. In this paper, we address these challenges by leveraging 2D vision foundation models (VFMs). Specifically, we crop multiple key ROIs from NM and QSM images, process each ROI through separate branches to compress the ROI into a token, and then combine these tokens into a unified patient representation for classification. Within each branch, we use 2D VFMs to encode axial slices of the 3D ROI volume and fuse them into the ROI token, guided by an auxiliary segmentation head that steers the feature extraction toward specific brain nuclei. Additionally, we introduce multi-ROI supervised contrastive learning, which improves diagnostic performance by pulling together representations of patients from the same class while pushing away those from different classes. Our approach achieved first place in the MICCAI 2025 PDCADxFoundation challenge, with an accuracy of 86.0% trained on a dataset of only 300 labeled QSM and NM-MRI scans, outperforming the second-place method by 5.5%. These results highlight the potential of 2D VFMs for clinical analysis of 3D MR images.
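
The multi-ROI objective builds on the standard supervised contrastive loss (Khosla et al., 2020), which can be sketched as follows; the embedding size, temperature, and toy batch are arbitrary, and this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def supcon_loss(feats: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Pull together embeddings with the same class label, push apart different classes."""
    z = F.normalize(feats, dim=1)                              # (N, D) unit-norm embeddings
    sim = z @ z.t() / tau                                      # pairwise similarity / temperature
    sim.fill_diagonal_(float("-inf"))                          # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = (labels[:, None] == labels[None, :]) & ~eye          # positive pairs share a class label
    n_pos = pos.sum(1).clamp(min=1)
    per_anchor = -torch.where(pos, log_prob, torch.zeros_like(log_prob)).sum(1) / n_pos
    return per_anchor[pos.any(1)].mean()                       # average over anchors with positives

loss = supcon_loss(torch.randn(8, 128), torch.tensor([0, 0, 1, 1, 0, 1, 0, 1]))
```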

Neural Network-Driven Direct CBCT-Based Dose Calculation for Head-and-Neck Proton Treatment Planning

Muheng Li, Evangelia Choulilitsa, Lisa Fankhauser, Francesca Albertini, Antony Lomax, Ye Zhang

arXiv preprint · Sep 22 2025
Accurate dose calculation on cone beam computed tomography (CBCT) images is essential for modern proton treatment planning workflows, particularly when accounting for inter-fractional anatomical changes in adaptive treatment scenarios. Traditional CBCT-based dose calculation suffers from image quality limitations, requiring complex correction workflows. This study develops and validates a deep learning approach for direct proton dose calculation from CBCT images using extended Long Short-Term Memory (xLSTM) neural networks. A retrospective dataset of 40 head-and-neck cancer patients with paired planning CT and treatment CBCT images was used to train an xLSTM-based neural network (CBCT-NN). The architecture incorporates energy token encoding and beam's-eye-view sequence modelling to capture spatial dependencies in proton dose deposition patterns. Training utilized 82,500 paired beam configurations with Monte Carlo-generated ground truth doses. Validation was performed on 5 independent patients using gamma analysis, mean percentage dose error assessment, and dose-volume histogram comparison. The CBCT-NN achieved gamma pass rates of 95.1 ± 2.7% using 2mm/2% criteria. Mean percentage dose errors were 2.6 ± 1.4% in high-dose regions (>90% of max dose) and 5.9 ± 1.9% globally. Dose-volume histogram analysis showed excellent preservation of target coverage metrics (Clinical Target Volume V95% difference: -0.6 ± 1.1%) and organ-at-risk constraints (parotid mean dose difference: -0.5 ± 1.5%). Computation time is under 3 minutes without sacrificing Monte Carlo-level accuracy. This study demonstrates the proof-of-principle of direct CBCT-based proton dose calculation using xLSTM neural networks. The approach eliminates traditional correction workflows while achieving comparable accuracy and computational efficiency suitable for adaptive protocols.
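
For reference, the gamma pass rate quoted above can be computed (brute-force, without interpolation) as in the sketch below; the grid size, spacing, and low-dose cutoff are illustrative, and clinical tools implement this far more carefully and efficiently.

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dta_mm=2.0, dose_frac=0.02, cutoff_frac=0.1):
    """Global gamma (distance-to-agreement dta_mm, dose criterion dose_frac of max reference dose)."""
    dose_crit = dose_frac * ref.max()
    coords = np.stack(np.meshgrid(*[np.arange(s) * sp for s, sp in zip(ref.shape, spacing_mm)],
                                  indexing="ij"), axis=-1).reshape(-1, ref.ndim)
    ref_flat, eval_flat = ref.ravel(), eval_.ravel()
    keep = ref_flat >= cutoff_frac * ref.max()              # ignore very-low-dose voxels
    passed = []
    for p, d_ref in zip(coords[keep], ref_flat[keep]):
        dist2 = np.sum((coords - p) ** 2, axis=1) / dta_mm**2
        dose2 = (eval_flat - d_ref) ** 2 / dose_crit**2
        passed.append(np.min(dist2 + dose2) <= 1.0)         # pass if min gamma^2 <= 1
    return float(np.mean(passed))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 60, size=(20, 20))                     # toy 2D dose planes in Gy
eval_ = ref + rng.normal(0, 0.5, size=ref.shape)
print(gamma_pass_rate(ref, eval_, spacing_mm=(2.0, 2.0)))
```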

Automated Labeling of Intracranial Arteries with Uncertainty Quantification Using Deep Learning

Javier Bisbal, Patrick Winter, Sebastian Jofre, Aaron Ponce, Sameer A. Ansari, Ramez Abdalla, Michael Markl, Oliver Welin Odeback, Sergio Uribe, Cristian Tejos, Julio Sotelo, Susanne Schnell, David Marlevi

arXiv preprint · Sep 22 2025
Accurate anatomical labeling of intracranial arteries is essential for cerebrovascular diagnosis and hemodynamic analysis but remains time-consuming and subject to interoperator variability. We present a deep learning-based framework for automated artery labeling from 3D Time-of-Flight Magnetic Resonance Angiography (3D ToF-MRA) segmentations (n=35), incorporating uncertainty quantification to enhance interpretability and reliability. We evaluated three convolutional neural network architectures: (1) a UNet with residual encoder blocks, reflecting commonly used baselines in vascular labeling; (2) CS-Net, an attention-augmented UNet incorporating channel and spatial attention mechanisms for enhanced curvilinear structure recognition; and (3) nnUNet, a self-configuring framework that automates preprocessing, training, and architectural adaptation based on dataset characteristics. Among these, nnUNet achieved the highest labeling performance (average Dice score: 0.922; average surface distance: 0.387 mm), with improved robustness in anatomically complex vessels. To assess predictive confidence, we implemented test-time augmentation (TTA) and introduced a novel coordinate-guided strategy to reduce interpolation errors during augmented inference. The resulting uncertainty maps reliably indicated regions of anatomical ambiguity, pathological variation, or manual labeling inconsistency. We further validated clinical utility by comparing flow velocities derived from automated and manual labels in co-registered 4D Flow MRI datasets, observing close agreement with no statistically significant differences. Our framework offers a scalable, accurate, and uncertainty-aware solution for automated cerebrovascular labeling, supporting downstream hemodynamic analysis and facilitating clinical integration.
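
A simplified version of flip-based test-time augmentation with an entropy uncertainty map looks like the sketch below; the flip-axis set and the assumption of a 5-D (batch, channel, depth, height, width) input are illustrative, and the paper's coordinate-guided variant is not reproduced here.

```python
import torch

@torch.no_grad()
def tta_predict(model, volume, flip_axes=((), (2,), (3,), (4,), (2, 3))):
    """volume: (1, C, D, H, W). Returns (mean softmax over augmentations, voxel-wise entropy)."""
    probs = []
    for axes in flip_axes:
        x = torch.flip(volume, axes) if axes else volume
        p = torch.softmax(model(x), dim=1)                    # per-voxel class probabilities
        probs.append(torch.flip(p, axes) if axes else p)      # undo the flip before averaging
    mean_p = torch.stack(probs).mean(0)
    entropy = -(mean_p * torch.log(mean_p.clamp_min(1e-8))).sum(dim=1)   # uncertainty map
    return mean_p, entropy
```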