Page 34 of 1621612 results

Federated Fine-tuning of SAM-Med3D for MRI-based Dementia Classification

Kaouther Mouheb, Marawan Elbatel, Janne Papma, Geert Jan Biessels, Jurgen Claassen, Huub Middelkoop, Barbara van Munster, Wiesje van der Flier, Inez Ramakers, Stefan Klein, Esther E. Bron

arXiv preprint · Aug 29 2025
While foundation models (FMs) offer strong potential for AI-based dementia diagnosis, their integration into federated learning (FL) systems remains underexplored. In this benchmarking study, we systematically evaluate the impact of key design choices (classification head architecture, fine-tuning strategy, and aggregation method) on the performance and efficiency of federated FM tuning using brain MRI data. Using a large multi-cohort dataset, we find that the architecture of the classification head substantially influences performance, that freezing the FM encoder achieves results comparable to full fine-tuning, and that advanced aggregation methods outperform standard federated averaging. Our results offer practical insights for deploying FMs in decentralized clinical settings and highlight trade-offs that should guide future method development.
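The baseline this study compares against, standard federated averaging (FedAvg), combines each client's locally trained parameters weighted by local dataset size. A minimal sketch of that baseline idea (the function name, flat parameter lists, and data are ours, not the paper's):

```python
def fedavg(client_weights, client_sizes):
    """Standard federated averaging: each client's parameter vector
    contributes in proportion to its local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two clients with 10 and 30 local samples; client 2 counts 3x as much.
global_params = fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
# -> [2.5, 3.5]
```

Advanced aggregation methods, as the abstract notes, replace this size-weighted mean with more robust update rules.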

Mapping heterogeneity in the neuroanatomical correlates of depression

Watts, D., Mallard, T. T., Dall' Aglio, L., Giangrande, E., Kennedy, C., Cai, N., Choi, K. W., Ge, T., Smoller, J.

medRxiv preprint · Aug 29 2025
Major depressive disorder (MDD) affects millions worldwide, yet its neurobiological underpinnings remain elusive. Neuroimaging studies have yielded inconsistent results, hindered by small sample sizes and heterogeneous depression definitions. We sought to address these limitations by leveraging the UK Biobank's extensive neuroimaging data (n=30,122) to investigate how depression phenotyping depth influences neuroanatomic profiles of MDD. We examined 256 brain structural features, obtained from T1- and diffusion-weighted brain imaging, and nine depression phenotypes, ranging from self-reported symptoms (shallow definitions) to clinical diagnoses (deep). Multivariable logistic regression, machine learning classifiers, and feature transfer approaches were used to explore correlational patterns, predictive accuracy, and the transferability of important features across depression definitions. For white matter microstructure, we observed widespread fractional anisotropy decreases and mean diffusivity increases. In contrast, cortical thickness and surface area were less consistently associated across depression definitions, and demonstrated weaker associations. Machine learning classifiers showed varying performance in distinguishing depression cases from controls, with shallow phenotypes achieving similar discriminative performance (AUC=0.807) and slightly higher positive predictive value (PPV=0.655) compared to deep phenotypes (AUC=0.831, PPV=0.456), when sensitivity was standardized at 80%. However, when shallow phenotypes were downsampled to match deep phenotype case/control ratios, performance degraded substantially (AUC=0.690). Together, these results suggest that while core white-matter alterations are shared across phenotyping strategies, shallow phenotypes require approximately twice the sample size of deep phenotypes to achieve comparable classification performance, underscoring the fundamental power-specificity tradeoff in psychiatric neuroimaging research.
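"Standardizing sensitivity at 80%" means choosing each classifier's score threshold so that 80% of cases are detected, then comparing PPV at that operating point. A toy sketch of that procedure (function name and scores are illustrative, not from the study):

```python
import math

def metrics_at_sensitivity(scores_pos, scores_neg, target_sens=0.80):
    """Pick the score threshold that achieves the target sensitivity on
    positive cases, then report sensitivity and PPV at that threshold."""
    ranked = sorted(scores_pos, reverse=True)
    k = math.ceil(target_sens * len(ranked))  # positives that must be detected
    thr = ranked[k - 1]                       # lowest score still called positive
    tp = sum(s >= thr for s in scores_pos)
    fp = sum(s >= thr for s in scores_neg)
    return tp / len(scores_pos), tp / (tp + fp)

# Five cases, four controls: threshold lands at 0.6, catching 4/5 cases.
sens, ppv = metrics_at_sensitivity([0.9, 0.8, 0.7, 0.6, 0.2],
                                   [0.75, 0.5, 0.3, 0.1])
```

Because PPV depends on the case/control ratio, the downsampling comparison in the abstract is what makes the shallow-vs-deep contrast fair.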

Automated DWI-FLAIR mismatch assessment in stroke using DWI only.

Benzakoun J, Scheldeman L, Wouters A, Cheng B, Ebinger M, Endres M, Fiebach JB, Fiehler J, Galinovic I, Muir KW, Nighoghossian N, Pedraza S, Puig J, Simonsen CZ, Thijs V, Thomalla G, Micard E, Chen B, Lapergue B, Boulouis G, Le Berre A, Baron JC, Turc G, Ben Hassen W, Naggara O, Oppenheim C, Lemmens R

PubMed paper · Aug 28 2025
In Acute Ischemic Stroke (AIS), mismatch between Diffusion-Weighted Imaging (DWI) and Fluid-Attenuated Inversion-Recovery (FLAIR) helps identify patients who can benefit from thrombolysis when stroke onset time is unknown (15% of AIS). However, visual assessment has suboptimal observer agreement. Our study aims to develop and validate a Deep-Learning model for predicting DWI-FLAIR mismatch using solely DWI data. This retrospective study included AIS patients from the ETIS registry (derivation cohort, 2018-2024) and the WAKE-UP trial (validation cohort, 2012-2017). DWI-FLAIR mismatch was rated visually. We trained a model to predict manually-labeled FLAIR visible areas (FVA) matching the DWI lesion on baseline and early follow-up MRIs, using only DWI as input. FVA-index was defined as the volume of predicted regions. Area under the ROC curve (AUC) and optimal FVA-index cutoff to predict DWI-FLAIR mismatch in the derivation cohort were computed. Validation was performed using baseline MRIs of the validation cohort. The derivation cohort included 3605 MRIs in 2922 patients, and the validation cohort included 844 MRIs in 844 patients. FVA-index demonstrated strong predictive value for DWI-FLAIR mismatch in baseline MRIs from the derivation (n = 2453, AUC = 0.85, 95%CI: 0.84-0.87) and validation cohort (n = 844, AUC = 0.86, 95%CI: 0.84-0.89). With an optimal FVA-index cutoff at 0.5, we obtained a kappa of 0.54 (95%CI: 0.48-0.59), 70% sensitivity (378/537, 95%CI: 66-74%) and 88% specificity (269/307, 95%CI: 83-91%) in the validation cohort. The model accurately predicts DWI-FLAIR mismatch in AIS patients with unknown stroke onset. It could aid readers when visual rating is challenging, or FLAIR unavailable.
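The validation metrics above follow directly from the raw counts the abstract reports; a quick check of that arithmetic:

```python
# Validation-cohort counts from the abstract: 378 of 537 mismatch cases
# detected (sensitivity), 269 of 307 non-mismatch cases correctly
# classified (specificity), at the FVA-index cutoff of 0.5.
sensitivity = 378 / 537   # true positives / all positives
specificity = 269 / 307   # true negatives / all negatives
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# prints "sensitivity = 70%, specificity = 88%"
```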

Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

PubMed paper · Aug 28 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework; hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without incorporating trainable quantum parameters. This limits their ability to capture non-linear quantum representations and separates the model from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that incorporates a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a certain degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a potentially promising direction for hybrid quantum learning in near-term applications.

High-Resolution 3T MRI of the Membranous Labyrinth Using Deep Learning Reconstruction.

Boubaker F, Lane JI, Puel U, Drouot G, Witte RJ, Ambarki K, Teixeira PAG, Blum A, Parietti-Winkler C, Vallee JN, Gillet R, Eliezer M

PubMed paper · Aug 28 2025
The labyrinth is a complex anatomical structure in the temporal bone. However, high-resolution imaging of its membranous portion is challenging due to its small size and the limitations of current MRI techniques. Deep Learning Reconstruction (DLR) represents a promising approach to advancing MRI image quality, enabling higher spatial resolution and reduced noise. This study aims to evaluate DLR-High-Resolution 3D-T2 MRI sequences for visualizing the labyrinthine structures, comparing them to conventional 3D-T2 sequences. The goal is to improve spatial resolution without prolonging acquisition times, allowing a more detailed view of the labyrinthine microanatomy. High-resolution heavy T2-weighted TSE SPACE images were acquired in patients using 3D-T2 and DLR-3D-T2. Two radiologists rated structure visibility on a four-point qualitative scale for the spiral lamina, scala tympani, scala vestibuli, scala media, utricle, saccule, utricular and saccular maculae, membranous semicircular ducts, and ampullary nerves. Ex vivo 9.4T MRI served as an anatomical reference. DLR-3D-T2 significantly improved the visibility of several inner ear structures. The utricle and utricular macula were systematically visualized, achieving grades ≥3 in 95% of cases (p < 0.001), while the saccule remained challenging to assess, with grades ≥3 in only 10% of cases. The cochlear spiral lamina and scala tympani were better delineated in the first two turns but remained poorly visible in the apical turn. Semicircular ducts were only partially visualized, with grades ≥3 in 12.5-20% of cases, likely due to resolution limitations relative to their diameter. Ampullary nerves were moderately improved, with grades ≥3 in 52.5-55% of cases, depending on the nerve. While DLR does not yet provide a complete anatomical assessment, it represents a significant step forward in the non-invasive evaluation of inner ear structures. 
Pending further technical refinements, this approach may help reduce reliance on delayed gadolinium-enhanced techniques for imaging membranous structures. 3D-T2 = three-dimensional T2-weighted turbo spin-echo; DLR-3D-T2 = three-dimensional T2-weighted turbo spin-echo sequence incorporating Deep Learning Reconstruction; DLR = Deep Learning Reconstruction.

Deep Learning-Based Generation of DSC MRI Parameter Maps Using Dynamic Contrast-Enhanced MRI Data.

Pei H, Lyu Y, Lambrecht S, Lin D, Feng L, Liu F, Nyquist P, van Zijl P, Knutsson L, Xu X

PubMed paper · Aug 28 2025
Perfusion and perfusion-related parameter maps obtained by using DSC MRI and dynamic contrast-enhanced (DCE) MRI are both useful for clinical diagnosis and research. However, using both DSC and DCE MRI in the same scan session requires 2 doses of gadolinium contrast agent. The objective was to develop deep learning-based methods to synthesize DSC-derived parameter maps from DCE MRI data. Independent analysis of data collected in previous studies was performed. The database contained 64 participants, including patients with and without brain tumors. The reference parameter maps were measured from DSC MRI performed after DCE MRI. A conditional generative adversarial network (cGAN) was designed and trained to generate synthetic DSC-derived maps from DCE MRI data. The median parameter values and distributions between synthetic and real maps were compared by using linear regression and Bland-Altman plots. Using cGAN, realistic DSC parameter maps could be synthesized from DCE MRI data. For controls without brain tumors, the synthesized parameters had distributions similar to the ground truth values. For patients with brain tumors, the synthesized parameters in the tumor region correlated linearly with the ground truth values. In addition, areas not visible due to susceptibility artifacts in real DSC maps could be visualized by using DCE-derived DSC maps. DSC-derived parameter maps could be synthesized by using DCE MRI data, including susceptibility-artifact-prone regions. This shows the potential to obtain both DSC and DCE parameter maps from DCE MRI by using a single dose of contrast agent.
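Bland-Altman analysis, used above to compare synthetic and real parameter maps, summarizes paired measurements by their mean difference (bias) and 95% limits of agreement. A minimal sketch of that computation (names and data are illustrative):

```python
from statistics import mean, stdev

def bland_altman(reference, synthetic):
    """Bland-Altman agreement summary for paired measurements:
    bias (mean difference) and 95% limits of agreement."""
    diffs = [s - r for r, s in zip(reference, synthetic)]
    bias = mean(diffs)
    loa = 1.96 * stdev(diffs)          # half-width of the 95% limits
    return bias, bias - loa, bias + loa

# Toy paired values, e.g. a perfusion parameter in four voxels.
bias, lo, hi = bland_altman([1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 3.1, 4.0])
```

Agreement is considered good when the bias is near zero and the limits of agreement are narrow relative to the measured range.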

CardioMorphNet: Cardiac Motion Prediction Using a Shape-Guided Bayesian Recurrent Deep Network

Reza Akbari Movahed, Abuzar Rezaee, Arezoo Zakeri, Colin Berry, Edmond S. L. Ho, Ali Gooya

arXiv preprint · Aug 28 2025
Accurate cardiac motion estimation from cine cardiac magnetic resonance (CMR) images is vital for assessing cardiac function and detecting its abnormalities. Existing methods often struggle to capture heart motion accurately because they rely on intensity-based image registration similarity losses that may overlook cardiac anatomical regions. To address this, we propose CardioMorphNet, a recurrent Bayesian deep learning framework for 3D cardiac shape-guided deformable registration using short-axis (SAX) CMR images. It employs a recurrent variational autoencoder to model spatio-temporal dependencies over the cardiac cycle and two posterior models for bi-ventricular segmentation and motion estimation. The derived loss function from the Bayesian formulation guides the framework to focus on anatomical regions by recursively registering segmentation maps without using intensity-based image registration similarity loss, while leveraging sequential SAX volumes and spatio-temporal features. The Bayesian modelling also enables computation of uncertainty maps for the estimated motion fields. Validated on the UK Biobank dataset by comparing warped mask shapes with ground truth masks, CardioMorphNet demonstrates superior performance in cardiac motion estimation, outperforming state-of-the-art methods. Uncertainty assessment shows that it also yields lower uncertainty values for estimated motion fields in the cardiac region compared with other probabilistic-based cardiac registration methods, indicating higher confidence in its predictions.

Mask-Guided Multi-Channel SwinUNETR Framework for Robust MRI Classification

Smriti Joshi, Lidia Garrucho, Richard Osuala, Oliver Diaz, Karim Lekadir

arXiv preprint · Aug 28 2025
Breast cancer is one of the leading causes of cancer-related mortality in women, and early detection is essential for improving outcomes. Magnetic resonance imaging (MRI) is a highly sensitive tool for breast cancer detection, particularly in women at high risk or with dense breast tissue, where mammography is less effective. The ODELIA consortium organized a multi-center challenge to foster AI-based solutions for breast cancer diagnosis and classification. The dataset included 511 studies from six European centers, acquired on scanners from multiple vendors at both 1.5 T and 3 T. Each study was labeled for the left and right breast as no lesion, benign lesion, or malignant lesion. We developed a SwinUNETR-based deep learning framework that incorporates breast region masking, extensive data augmentation, and ensemble learning to improve robustness and generalizability. Our method achieved second place on the challenge leaderboard, highlighting its potential to support clinical breast MRI interpretation. We publicly share our codebase at https://github.com/smriti-joshi/bcnaim-odelia-challenge.git.

GENRE-CMR: Generalizable Deep Learning for Diverse Multi-Domain Cardiac MRI Reconstruction

Kian Anvari Hamedani, Narges Razizadeh, Shahabedin Nabavi, Mohsen Ebrahimi Moghaddam

arXiv preprint · Aug 28 2025
Accelerated Cardiovascular Magnetic Resonance (CMR) image reconstruction remains a critical challenge due to the trade-off between scan time and image quality, particularly when generalizing across diverse acquisition settings. We propose GENRE-CMR, a generative adversarial network (GAN)-based architecture employing a residual deep unrolled reconstruction framework to enhance reconstruction fidelity and generalization. The architecture unrolls iterative optimization into a cascade of convolutional subnetworks, enriched with residual connections to enable progressive feature propagation from shallow to deeper stages. To further improve performance, we integrate two loss functions: (1) an Edge-Aware Region (EAR) loss, which guides the network to focus on structurally informative regions and helps prevent common reconstruction blurriness; and (2) a Statistical Distribution Alignment (SDA) loss, which regularizes the feature space across diverse data distributions via a symmetric KL divergence formulation. Extensive experiments confirm that GENRE-CMR surpasses state-of-the-art methods on training and unseen data, achieving 0.9552 SSIM and 38.90 dB PSNR on unseen distributions across various acceleration factors and sampling trajectories. Ablation studies confirm the contribution of each proposed component to reconstruction quality and generalization. Our framework presents a unified and robust solution for high-quality CMR reconstruction, paving the way for clinically adaptable deployment across heterogeneous acquisition protocols.
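The SDA loss described above regularizes features via a symmetric KL divergence, i.e. KL(p‖q) + KL(q‖p), which unlike plain KL treats both distributions equally. A toy discrete version of that quantity (the function and probability vectors are ours, not the paper's feature-space formulation):

```python
import math

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL divergence KL(p||q) + KL(q||p) between two
    discrete probability distributions; eps guards against log(0)."""
    def kl(a, b):
        return sum(ai * math.log((ai + eps) / (bi + eps))
                   for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

# Identical distributions give (near) zero; mismatched ones a positive value.
d_same = symmetric_kl([0.5, 0.5], [0.5, 0.5])
d_diff = symmetric_kl([0.9, 0.1], [0.1, 0.9])
```

Symmetry matters here because neither data distribution plays a privileged "reference" role when aligning features across acquisition domains.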

Enhancing Corpus Callosum Segmentation in Fetal MRI via Pathology-Informed Domain Randomization

Marina Grifell i Plana, Vladyslav Zalevskyi, Léa Schmidt, Yvan Gomez, Thomas Sanchez, Vincent Dunet, Mériam Koob, Vanessa Siffredi, Meritxell Bach Cuadra

arXiv preprint · Aug 28 2025
Accurate fetal brain segmentation is crucial for extracting biomarkers and assessing neurodevelopment, especially in conditions such as corpus callosum dysgenesis (CCD), which can induce drastic anatomical changes. However, the rarity of CCD severely limits annotated data, hindering the generalization of deep learning models. To address this, we propose a pathology-informed domain randomization strategy that embeds prior knowledge of CCD manifestations into a synthetic data generation pipeline. By simulating diverse brain alterations from healthy data alone, our approach enables robust segmentation without requiring pathological annotations. We validate our method on a cohort comprising 248 healthy fetuses, 26 with CCD, and 47 with other brain pathologies, achieving substantial improvements on CCD cases while maintaining performance on both healthy fetuses and those with other pathologies. From the predicted segmentations, we derive clinically relevant biomarkers, such as corpus callosum length (LCC) and volume, and show their utility in distinguishing CCD subtypes. Our pathology-informed augmentation reduces the LCC estimation error from 1.89 mm to 0.80 mm in healthy cases and from 10.9 mm to 0.7 mm in CCD cases. Beyond these quantitative gains, our approach yields segmentations with improved topological consistency relative to available ground truth, enabling more reliable shape-based analyses. Overall, this work demonstrates that incorporating domain-specific anatomical priors into synthetic data pipelines can effectively mitigate data scarcity and enhance analysis of rare but clinically significant malformations.
