
Brain Atrophy Does Not Predict Clinical Progression in Progressive Supranuclear Palsy.

Quattrone A, Franzmeier N, Huppertz HJ, Seneca N, Petzold GC, Spottke A, Levin J, Prudlo J, Düzel E, Höglinger GU

PubMed · Aug 30, 2025
Clinical progression rate is the typical primary endpoint measure in progressive supranuclear palsy (PSP) clinical trials. This longitudinal multicohort study investigated whether baseline clinical severity and regional brain atrophy could predict clinical progression in PSP-Richardson's syndrome (PSP-RS). PSP-RS patients (n = 309) from the placebo arms of clinical trials (NCT03068468, NCT01110720, NCT02985879, NCT01049399) and the DescribePSP cohort were included. We investigated associations of baseline clinical and volumetric magnetic resonance imaging (MRI) data with 1-year longitudinal PSP rating scale (PSPRS) change. Machine learning (ML) models were tested to predict individual clinical trajectories. PSP-RS patients showed a mean PSPRS score increase of 10.3 points/year. The frontal lobe volume showed the strongest association with subsequent clinical progression (β = -0.34, P < 0.001). However, ML models did not accurately predict individual progression rates (R² < 0.15). Baseline clinical severity and brain atrophy could not predict individual clinical progression, suggesting no need for MRI-based stratification of patients in future PSP trials.
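To make the modeling question in this abstract concrete (can baseline features predict 1-year PSPRS change?), here is a minimal sketch in Python: a cross-validated regressor on synthetic baseline-severity and frontal-volume features, reporting R². All variable names and data here are illustrative stand-ins, not the study's data or code.

```python
# Sketch: predict individual 1-year PSPRS change from baseline features and
# report cross-validated R^2. The synthetic data embed only a weak signal,
# loosely echoing the reported beta of -0.34 for frontal volume.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 309
baseline_psprs = rng.normal(38, 10, n)      # hypothetical baseline PSPRS scores
frontal_volume = rng.normal(150, 12, n)     # hypothetical frontal lobe volumes (mL)
delta_psprs = 10.3 - 0.34 * (frontal_volume - 150) / 12 + rng.normal(0, 5, n)

X = np.column_stack([baseline_psprs, frontal_volume])
model = GradientBoostingRegressor(random_state=0)
r2 = cross_val_score(model, X, delta_psprs, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 = {r2:.2f}")    # low R^2, as in the paper (<0.15)
```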

A network-assisted joint image and motion estimation approach for robust 3D MRI motion correction across severity levels.

Nghiem B, Wu Z, Kashyap S, Kasper L, Uludağ K

PubMed · Aug 29, 2025
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The novel method ("UNet+JE") combines an existing neural network ("UNet_mag") with a physics-informed algorithm for jointly estimating motion parameters and the motion-compensated image ("JE"). UNet_mag and UNet+JE were trained separately on two training datasets with different distributions of motion corruption severity and compared to JE as a benchmark. All five resulting methods were tested on T1-weighted 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet_mag (p < 10⁻² for all metrics on both simulated and in vivo data) under both training datasets. UNet_mag exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not significantly differ in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes relative to JE by median factors of 2.00 to 3.80 in the simulation study and 4.05 in vivo. UNet+JE benefited from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling the method to provide high-quality 3D image correction under a wide range of motion corruption within shorter runtimes.
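The alternating structure described for UNet+JE (a network pass followed by a physics-informed joint update of motion parameters and image) can be sketched as below. This is a toy NumPy stand-in under strong assumptions: a rigid in-plane shift as the motion model, a local search as the motion update, and a Gaussian filter in place of the trained U-Net.

```python
# Toy sketch of an alternating network + joint-estimation loop. Not the
# paper's implementation: the forward model, motion update, and "network"
# are deliberately simplified placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def forward(image, motion):
    # Toy motion model: rigid in-plane translation of the image.
    return shift(image, motion, order=1)

def joint_estimation_step(y, image, motion, step=0.5):
    # Crude motion update: local search over small translation perturbations.
    candidates = [motion + d for d in ([0.0, 0.0], [step, 0.0], [-step, 0.0],
                                       [0.0, step], [0.0, -step])]
    motion = min(candidates, key=lambda m: np.sum((forward(image, m) - y) ** 2))
    # Gradient step on the image for the data-consistency term.
    residual = forward(image, motion) - y
    image = image - 0.5 * shift(residual, [-m for m in motion], order=1)
    return image, motion

y = gaussian_filter(np.random.rand(64, 64), 2)   # toy "acquired" data
image, motion = np.zeros_like(y), np.array([0.0, 0.0])
for _ in range(10):
    image = gaussian_filter(image, 1)            # stand-in for the U-Net prior
    image, motion = joint_estimation_step(y, image, motion)
print("estimated shift:", motion)
```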

A Multi-Stage Fine-Tuning and Ensembling Strategy for Pancreatic Tumor Segmentation in Diagnostic and Therapeutic MRI

Omer Faruk Durugol, Maximilian Rokuss, Yannick Kirchhoff, Klaus H. Maier-Hein

arXiv preprint · Aug 29, 2025
Automated segmentation of Pancreatic Ductal Adenocarcinoma (PDAC) from MRI is critical for clinical workflows but is hindered by poor tumor-tissue contrast and a scarcity of annotated data. This paper details our submission to the PANTHER challenge, addressing both diagnostic T1-weighted (Task 1) and therapeutic T2-weighted (Task 2) segmentation. Our approach is built upon the nnU-Net framework and leverages a deep, multi-stage cascaded pre-training strategy, starting from a general anatomical foundation model and sequentially fine-tuning on CT pancreatic lesion datasets and the target MRI modalities. Through extensive five-fold cross-validation, we systematically evaluated data augmentation schemes and training schedules. Our analysis revealed a critical trade-off, where aggressive data augmentation produced the highest volumetric accuracy, while default augmentations yielded superior boundary precision (achieving a state-of-the-art MASD of 5.46 mm and HD95 of 17.33 mm for Task 1). For our final submission, we exploited this finding by constructing custom, heterogeneous ensembles of specialist models, essentially creating a mix of experts. This metric-aware ensembling strategy proved highly effective, achieving a top cross-validation Tumor Dice score of 0.661 for Task 1 and 0.523 for Task 2. Our work presents a robust methodology for developing specialized, high-performance models in the context of limited data and complex medical imaging tasks (Team MIC-DKFZ).
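The "metric-aware ensembling" idea (combining specialist models whose strengths target different metrics) reduces, at inference time, to fusing probability maps before thresholding. A minimal sketch follows; the model outputs are random placeholders, and the names and equal weights are assumptions rather than the authors' configuration.

```python
# Sketch: fuse probability maps from two specialist models, one assumed to
# favor volumetric overlap (Dice) and one boundary precision (MASD/HD95),
# then threshold the ensemble into a binary tumor mask.
import numpy as np

rng = np.random.default_rng(0)
shape = (32, 64, 64)                          # toy 3D probability maps
prob_dice_specialist = rng.random(shape)      # stand-in: aggressive-augmentation model
prob_boundary_specialist = rng.random(shape)  # stand-in: default-augmentation model

weights = np.array([0.5, 0.5])                # could be tuned per target metric
ensemble = (weights[0] * prob_dice_specialist
            + weights[1] * prob_boundary_specialist)
segmentation = ensemble > 0.5                 # final binary tumor mask
print("voxels segmented:", int(segmentation.sum()))
```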

Multimodal feature distinguishing and deep learning approach to detect lung disease from MRI images.

Alanazi TM

PubMed · Aug 29, 2025
Precise and early detection and diagnosis of lung diseases reduce life-threatening severity and the further spread of infection in patients. Computer-based image processing techniques use magnetic resonance imaging (MRI) as input for computing, detection, segmentation, and related processes to improve processing efficacy. This article introduces a Multimodal Feature Distinguishing Method (MFDM) for augmenting lung disease detection precision. The method distinguishes the extractable features of an MRI lung input using a homogeneity measure. Depending on the possible differentiations for heterogeneity feature detection, training with a transformer network is pursued. This network performs differentiation verification and training classification independently and integrates the two to identify heterogeneous features. The integrated classifications are used to detect the infected region based on feature precision. If differentiation fails, the transformer reinitiates its process from the last known homogeneity feature between successive segments. The distinguishing multimodal features between successive segments are therefore validated at different differentiation levels, augmenting accuracy. The introduced system achieves gains of 8.78% in sensitivity, 8.81% in precision, and 9.75% in differentiation time when analyzing various lung features. These results indicate that the MFDM model can be effectively applied in medical settings to improve the disease recognition rate.
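The abstract does not define its homogeneity measure. One plausible reading, sketched below purely as an assumption, uses GLCM homogeneity (via scikit-image) computed on successive image segments, flagging large changes between segments as candidate heterogeneous regions.

```python
# Sketch: GLCM homogeneity per segment of a (synthetic) lung MRI volume; a
# drop between successive segments would flag candidate heterogeneity in a
# scheme like MFDM. The measure choice is assumed, not taken from the paper.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
volume = (rng.random((4, 64, 64)) * 255).astype(np.uint8)  # toy MRI segments

def homogeneity(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity")[0, 0]

scores = [homogeneity(s) for s in volume]
diffs = np.abs(np.diff(scores))   # change between successive segments
print("per-segment homogeneity:", np.round(scores, 3))
print("changes:", np.round(diffs, 3))
```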

Federated Fine-tuning of SAM-Med3D for MRI-based Dementia Classification

Kaouther Mouheb, Marawan Elbatel, Janne Papma, Geert Jan Biessels, Jurgen Claassen, Huub Middelkoop, Barbara van Munster, Wiesje van der Flier, Inez Ramakers, Stefan Klein, Esther E. Bron

arXiv preprint · Aug 29, 2025
While foundation models (FMs) offer strong potential for AI-based dementia diagnosis, their integration into federated learning (FL) systems remains underexplored. In this benchmarking study, we systematically evaluate the impact of key design choices (classification head architecture, fine-tuning strategy, and aggregation method) on the performance and efficiency of federated FM tuning using brain MRI data. Using a large multi-cohort dataset, we find that the architecture of the classification head substantially influences performance, that freezing the FM encoder achieves results comparable to full fine-tuning, and that advanced aggregation methods outperform standard federated averaging. Our results offer practical insights for deploying FMs in decentralized clinical settings and highlight trade-offs that should guide future method development.
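The baseline aggregation the study compares against, standard federated averaging (FedAvg), is easy to sketch: with the FM encoder frozen, only the classification head's parameters are averaged across sites, weighted by site sample counts. The weights and cohort sizes below are toy values.

```python
# Sketch: FedAvg over classification-head parameters only (encoder frozen).
import numpy as np

def fedavg(head_weights, n_samples):
    """Sample-size-weighted average of per-site parameter vectors."""
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()
    return sum(wi * hw for wi, hw in zip(w, head_weights))

rng = np.random.default_rng(0)
sites = 5
head_weights = [rng.normal(size=128) for _ in range(sites)]  # toy head params
n_samples = [120, 80, 200, 60, 150]                          # toy cohort sizes
global_head = fedavg(head_weights, n_samples)
print(global_head[:4])
```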

Identifying key brain pathology in bipolar and unipolar depression using a region-specific brain aging trajectories approach: Insights from the Taiwan Aging and Mental Illness Cohort.

Zhu JD, Chi IJ, Hsu HY, Tsai SJ, Yang AC

PubMed · Aug 29, 2025
Identifying key areas of brain dysfunction in mental illness is critical for developing precision diagnosis and treatment. This study aimed to develop region-specific brain aging trajectory prediction models using multimodal magnetic resonance imaging (MRI) to identify similarities and differences in abnormal aging between bipolar disorder (BD) and major depressive disorder (MDD) and pinpoint key brain regions of structural and functional change specific to each disorder. Neuroimaging data from 340 healthy controls, 110 BD participants, and 68 MDD participants were included from the Taiwan Aging and Mental Illness cohort. We constructed 228 models using T1-weighted MRI, resting-state functional MRI, and diffusion tensor imaging data. Gaussian process regression was used to train models for estimating brain aging trajectories using structural and functional maps across various brain regions. Our models demonstrated robust performance, revealing accelerated aging in 66 gray matter regions in BD and 67 in MDD, with 13 regions common to both disorders. The BD group showed accelerated aging in 17 regions on functional maps, whereas no such regions were found in MDD. Fractional anisotropy analysis identified 43 aging white matter tracts in BD and 39 in MDD, with 16 tracts common to both disorders. Importantly, there were also unique brain regions with accelerated aging specific to each disorder. These findings highlight the potential of brain aging trajectories as biomarkers for BD and MDD, offering insights into distinct and overlapping neuroanatomical changes. Incorporating region-specific changes in brain structure and function over time could enhance the understanding and treatment of mental illness.
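The core modeling step named in this abstract, Gaussian process regression from a regional imaging feature to chronological age, can be sketched as follows: fit on healthy controls, then read a patient's "brain age gap" (predicted minus chronological age) off the model. The data, feature, and kernel here are illustrative assumptions; the study fits 228 such models across modalities and regions.

```python
# Sketch: region-specific brain-age model via Gaussian process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 340)                           # healthy controls
gm_volume = 6.0 - 0.03 * age + rng.normal(0, 0.3, 340)   # toy regional volume

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(gm_volume.reshape(-1, 1), age)

patient_volume = np.array([[3.6]])        # toy patient feature value
predicted_age = gpr.predict(patient_volume)[0]
chronological_age = 55.0
# A positive gap would indicate accelerated aging in this region.
print(f"brain age gap: {predicted_age - chronological_age:+.1f} years")
```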

Enhanced glioma semantic segmentation using U-net and pre-trained backbone U-net architectures.

Khorasani A

PubMed · Aug 29, 2025
Gliomas are known to have different sub-regions within the tumor, including edema, necrotic, and active tumor regions. Segmenting these regions is very important for glioma treatment decisions and management. This paper demonstrates the application of U-Net and pre-trained-backbone U-Net networks to glioma semantic segmentation using different magnetic resonance imaging (MRI) weightings. The data used for network training, validation, and testing come from the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge. We applied U-Net and several pre-trained-backbone U-Nets to the semantic segmentation of glioma regions. ResNet, Inception, and VGG networks, pre-trained on the ImageNet dataset, were used as backbones in the U-Net architecture. Accuracy (ACC) and Intersection over Union (IoU) were employed to assess network performance. The most prominent finding to emerge from this study is that a trained ResNet-U-Net with T1 post-contrast enhancement (T1Gd) images achieves the highest ACC and IoU for semantic segmentation of the necrotic and active tumor regions in glioma. A trained ResNet-U-Net with T2 Fluid-Attenuated Inversion Recovery (T2-FLAIR) images also proved a suitable combination for edema segmentation. Our study further validates that the proposed framework's architecture and modules are scientifically grounded and practical, enabling the extraction and aggregation of valuable semantic information to enhance glioma semantic segmentation, and demonstrates how useful ResNet-U-Net can be for physicians to extract glioma regions automatically.
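One common way to build a U-Net with an ImageNet-pretrained backbone is the segmentation_models_pytorch package; the paper does not name its implementation, so the setup below is illustrative rather than the authors' code. The encoder choice, channel count, and class count are assumptions matching the abstract's description.

```python
# Sketch: ResNet-backbone U-Net for multi-class glioma segmentation.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",      # ImageNet-pretrained ResNet encoder
    encoder_weights="imagenet",
    in_channels=1,                # single MRI weighting, e.g. T1Gd or T2-FLAIR
    classes=4,                    # background + edema + necrotic + active tumor
)

x = torch.randn(2, 1, 128, 128)   # toy batch of MRI slices
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # (2, 4, 128, 128): per-class score maps
```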

Mapping heterogeneity in the neuroanatomical correlates of depression

Watts, D., Mallard, T. T., Dall' Aglio, L., Giangrande, E., Kennedy, C., Cai, N., Choi, K. W., Ge, T., Smoller, J.

medRxiv preprint · Aug 29, 2025
Major depressive disorder (MDD) affects millions worldwide, yet its neurobiological underpinnings remain elusive. Neuroimaging studies have yielded inconsistent results, hindered by small sample sizes and heterogeneous depression definitions. We sought to address these limitations by leveraging the UK Biobank's extensive neuroimaging data (n = 30,122) to investigate how depression phenotyping depth influences neuroanatomic profiles of MDD. We examined 256 brain structural features, obtained from T1- and diffusion-weighted brain imaging, and nine depression phenotypes, ranging from self-reported symptoms (shallow definitions) to clinical diagnoses (deep). Multivariable logistic regression, machine learning classifiers, and feature transfer approaches were used to explore correlational patterns, predictive accuracy, and the transferability of important features across depression definitions. For white matter microstructure, we observed widespread fractional anisotropy decreases and mean diffusivity increases. In contrast, cortical thickness and surface area were less consistently associated across depression definitions and showed weaker associations. Machine learning classifiers varied in their ability to distinguish depression cases from controls: with sensitivity standardized at 80%, shallow phenotypes achieved similar discriminative performance (AUC = 0.807) and slightly higher positive predictive value (PPV = 0.655) compared to deep phenotypes (AUC = 0.831, PPV = 0.456). However, when shallow phenotypes were downsampled to match deep-phenotype case/control ratios, performance degraded substantially (AUC = 0.690). Together, these results suggest that while core white-matter alterations are shared across phenotyping strategies, shallow phenotypes require approximately twice the sample size of deep phenotypes to achieve comparable classification performance, underscoring the fundamental power-specificity tradeoff in psychiatric neuroimaging research.
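The evaluation described here, reporting PPV at a standardized 80% sensitivity, amounts to picking the decision threshold on the ROC curve closest to that sensitivity and computing precision there. A minimal sketch with synthetic classifier scores:

```python
# Sketch: choose the threshold giving ~80% sensitivity, then report PPV at
# that operating point. Labels and scores are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 2000)
scores = y_true * 0.8 + rng.normal(0, 0.6, 2000)   # toy classifier scores

fpr, tpr, thresholds = roc_curve(y_true, scores)
idx = np.argmin(np.abs(tpr - 0.80))                # operating point nearest 80% sensitivity
pred = scores >= thresholds[idx]
ppv = (pred & (y_true == 1)).sum() / pred.sum()
print(f"sensitivity={tpr[idx]:.2f}, PPV={ppv:.2f}")
```

Note that PPV depends on the case/control ratio, which is exactly why the downsampling comparison above changes the picture.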

Liver fat quantification at 0.55 T enabled by locally low-rank enforced deep learning reconstruction.

Helo M, Nickel D, Kannengiesser S, Kuestner T

PubMed · Aug 29, 2025
The emergence of new medications for fatty liver conditions has increased the need for reliable and widely available assessment of MRI proton density fat fraction (MRI-PDFF). Whereas low-field MRI presents a promising solution, its utilization is challenging due to the low SNR. This work aims to enhance SNR and enable precise PDFF quantification at low-field MRI using a novel locally low-rank deep learning-based (LLR-DL) reconstruction. LLR-DL alternates between regularized SENSE and a neural network (U-Net) throughout several iterations, operating on complex-valued data. The network processes the spectral projection onto singular value bases, which are computed on local patches across the echoes dimension. The output of the network is recast into the basis of the original echoes and used as a prior for the following iteration. The final echoes are processed by a multi-echo Dixon algorithm. Two different protocols were proposed for imaging at 0.55 T. An iron-and-fat phantom and 10 volunteers were scanned on both 0.55 and 1.5 T systems. Linear regression, t-statistics, and Bland-Altman analyses were conducted. LLR-DL achieved significantly improved image quality compared to the conventional reconstruction technique, with a 32.7% increase in peak SNR and a 25% improvement in structural similarity index. PDFF repeatability was 2.33% in phantoms (0% to 100%) and 0.79% in vivo (3% to 18%), with narrow cross-field strength limits of agreement below 1.67% in phantoms and 1.75% in vivo. An LLR-DL reconstruction was developed and investigated to enable precise PDFF quantification at 0.55 T and improve consistency with 1.5 T results.
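The locally low-rank step described above, projecting local patches across the echo dimension onto singular value bases, can be sketched with an SVD of the patch's Casorati matrix. The echo data below are synthetic and the retained rank is an assumption; in the actual method these spectral projections are what the U-Net processes between regularized SENSE iterations.

```python
# Sketch: locally low-rank projection of one complex-valued patch across echoes.
import numpy as np

rng = np.random.default_rng(0)
n_echoes, patch = 6, (8, 8)
x = (rng.normal(size=(n_echoes, *patch))
     + 1j * rng.normal(size=(n_echoes, *patch)))   # toy multi-echo patch

casorati = x.reshape(n_echoes, -1)                  # echoes x pixels matrix
U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
k = 2                                               # retained rank (assumed)
projection = U[:, :k].conj().T @ casorati           # spectral projection the network would see
low_rank_patch = (U[:, :k] @ projection).reshape(n_echoes, *patch)
print("retained energy:", float((s[:k] ** 2).sum() / (s ** 2).sum()))
```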

Fusion model integrating multi-sequence MRI radiomics and habitat imaging for predicting pathological complete response in breast cancer treated with neoadjuvant therapy.

Xu S, Ying Y, Hu Q, Li X, Li Y, Xiong H, Chen Y, Ye Q, Li X, Liu Y, Ai T, Du Y

PubMed · Aug 29, 2025
This study aimed to develop a predictive model integrating multi-sequence MRI radiomics, deep learning features, and habitat imaging to forecast pathological complete response (pCR) in breast cancer patients undergoing neoadjuvant therapy (NAT). A retrospective analysis included 203 breast cancer patients treated with NAT from May 2018 to January 2023. Patients were divided into training (n = 162) and test (n = 41) sets. Radiomics features were extracted from intratumoral and peritumoral regions on multi-sequence MRI (T2WI, DWI, and DCE-MRI). Habitat imaging was employed to analyze tumor subregions, characterizing heterogeneity within the tumor. We constructed and validated machine learning models, including a fusion model integrating all features, using receiver operating characteristic (ROC) and precision-recall (PR) curves, decision curve analysis (DCA), and confusion matrices. Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) analyses were performed for model interpretability. The fusion model achieved superior predictive performance compared to single-region models, with an AUC of 0.913 (95% CI: 0.770-1.000) in the test set. PR curve analysis showed an improved precision-recall balance, while DCA indicated higher clinical benefit. Confusion matrix analysis confirmed the model's classification accuracy. SHAP revealed DCE_LLL_DependenceUniformity as the most critical feature for predicting pCR and PC72 for non-pCR. LIME provided patient-specific insights into feature contributions. Integrating multi-dimensional MRI features with habitat imaging enhances pCR prediction in breast cancer. The fusion model offers a robust, non-invasive tool for guiding individualized treatment strategies while providing transparent interpretability through SHAP and LIME analyses.
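The fusion idea, concatenating radiomics, deep-learning, and habitat feature blocks, training one classifier for pCR, and inspecting feature contributions with SHAP, can be sketched as below. Features, labels, block sizes, and the classifier choice are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: feature-level fusion for pCR prediction with SHAP-based inspection.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 203
radiomics = rng.normal(size=(n, 20))     # intratumoral + peritumoral radiomics
deep_feats = rng.normal(size=(n, 10))    # deep-learning features
habitat = rng.normal(size=(n, 5))        # subregion (habitat) features
X = np.hstack([radiomics, deep_feats, habitat])
y = rng.integers(0, 2, n)                # pCR vs. non-pCR (toy labels)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)
top = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:5]
print("most influential feature indices:", top)
```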
