Page 79 of 1221217 results

Efficacy of a large language model in classifying branch-duct intraductal papillary mucinous neoplasms.

Sato M, Yasaka K, Abe S, Kurashima J, Asari Y, Kiryu S, Abe O

pubmed, Jun 11 2025
Appropriate categorization based on magnetic resonance imaging (MRI) findings is important for managing intraductal papillary mucinous neoplasms (IPMNs). In this study, a large language model (LLM) that classifies IPMNs based on MRI findings was developed, and its performance was compared with that of less experienced human readers. The medical image management and processing systems of our hospital were searched to identify MRI reports of branch-duct IPMNs (BD-IPMNs). These were assigned to the training, validation, and test datasets in chronological order. The model was trained on the training dataset, and the best-performing model on the validation dataset was evaluated on the test dataset. In addition, two radiology residents (Readers 1 and 2) and an intern (Reader 3) manually sorted the reports in the test dataset. Accuracy, sensitivity, and the time required for categorization were compared between the model and the readers. The accuracy of the fine-tuned LLM on the test dataset was 0.966, comparable to that of Readers 1 and 2 (0.931-0.972) and significantly better than that of Reader 3 (0.907). The fine-tuned LLM had an area under the receiver operating characteristic curve of 0.982 for classifying cyst diameter ≥ 10 mm, significantly superior to that of Reader 3 (0.944). Furthermore, the fine-tuned LLM completed the test dataset faster (25 s) than the readers (1,887-2,646 s). The fine-tuned LLM classified BD-IPMNs from MRI findings with performance comparable to that of radiology residents while significantly reducing the time required.
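The abstract compares the model and Reader 3 using the area under the receiver operating characteristic curve (AUC) for the ≥ 10 mm cyst-diameter classification. A minimal sketch of how that metric is computed, with purely illustrative labels and scores (not data from the study):

```python
from sklearn.metrics import roc_auc_score

# Illustrative only: 1 = cyst diameter >= 10 mm, 0 = smaller cyst
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical classifier scores for each report
y_score = [0.92, 0.10, 0.85, 0.40, 0.30, 0.05, 0.77, 0.55]

# AUC = fraction of (positive, negative) pairs ranked correctly by the score
auc = roc_auc_score(y_true, y_score)
```

Here the scores rank 15 of the 16 positive/negative pairs correctly, giving an AUC of 0.9375.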

Automated Segmentation of Thoracic Aortic Lumen and Vessel Wall on 3D Bright- and Black-Blood MRI using nnU-Net.

Cesario M, Littlewood SJ, Nadel J, Fletcher TJ, Fotaki A, Castillo-Passi C, Hajhosseiny R, Pouliopoulos J, Jabbour A, Olivero R, Rodríguez-Palomares J, Kooi ME, Prieto C, Botnar RM

pubmed, Jun 11 2025
Magnetic resonance angiography (MRA) is an important tool for aortic assessment in several cardiovascular diseases. Assessment of MRA images relies on manual segmentation, a time-intensive process that is subject to operator variability. We aimed to optimize and validate two deep-learning models for automatic segmentation of the aortic lumen and vessel wall in high-resolution, ECG-triggered, free-breathing, respiratory motion-corrected 3D bright- and black-blood MRA images. Manual segmentation, serving as the ground truth, was performed on 25 bright-blood and 15 black-blood 3D MRA image sets acquired with the iT2PrepIR-BOOST sequence (1.5T) in thoracic aortopathy patients. Training was performed with nnU-Net for bright-blood (lumen) and black-blood image sets (lumen and vessel wall), using a 70:20:10 training:validation:testing split. Inference was run on datasets (single vendor) from different centres (UK, Spain, and Australia), sequences (iT2PrepIR-BOOST, T2-prepared CMRA, and TWIST MRA), acquired resolutions (from 0.9 mm³ to 3 mm³), and field strengths (0.55T, 1.5T, and 3T). Predictive measurements comprised the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Postprocessing (3D Slicer) included centreline extraction, diameter measurement, and curved planar reformatting (CPR). The optimal configuration was the 3D U-Net. Bright-blood segmentation at 1.5T on iT2PrepIR-BOOST datasets (1.3 and 1.8 mm³) and 3D CMRA datasets (0.9 mm³) resulted in DSC ≥ 0.96 and IoU ≥ 0.92. For bright-blood segmentation on 3D CMRA at 0.55T, nnU-Net achieved DSC and IoU scores of 0.93 and 0.88 at 1.5 mm³, and 0.68 and 0.52 at 3.0 mm³, respectively. DSC and IoU scores of 0.89 and 0.82 were obtained for CMRA image sets (1 mm³) at 1.5T (Barcelona dataset). DSC and IoU scores of the BRnnUNet model were 0.90 and 0.82, respectively, for the contrast-enhanced dataset (TWIST MRA). Lumen segmentation on black-blood 1.5T iT2PrepIR-BOOST image sets achieved DSC ≥ 0.95 and IoU ≥ 0.90, and vessel wall segmentation resulted in DSC ≥ 0.80 and IoU ≥ 0.67. Automated centreline tracking, diameter measurement, and CPR were successfully implemented in all subjects. Automated aortic lumen and wall segmentation on 3D bright- and black-blood image sets demonstrated excellent agreement with the ground truth. This technique enables fast and comprehensive assessment of aortic morphology with great potential for future clinical application in various cardiovascular diseases.
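The segmentation results above are reported as Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). A minimal NumPy sketch of both overlap metrics on binary masks (the toy arrays are illustrative, not study data):

```python
import numpy as np

def dice_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Dice Similarity Coefficient and Intersection over Union for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return float(dice), float(iou)

# Toy 2D "masks": predicted lumen vs ground truth
pred = np.array([[1, 1, 0], [1, 0, 0]])
truth = np.array([[1, 1, 0], [0, 1, 0]])
dice, iou = dice_iou(pred, truth)  # inter=2, union=4 -> Dice = 4/6 ≈ 0.667, IoU = 0.5
```

DSC weights the overlap against the mean mask size, while IoU weights it against the union, so DSC is always at least as large as IoU, matching the paired scores reported above.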

Towards a general-purpose foundation model for fMRI analysis

Cheng Wang, Yu Jiang, Zhihao Peng, Chenxin Li, Changbae Bang, Lin Zhao, Jinglei Lv, Jorge Sepulcre, Carl Yang, Lifang He, Tianming Liu, Daniel Barron, Quanzheng Li, Randy Hirschtick, Byung-Hoon Kim, Xiang Li, Yixuan Yuan

arxiv preprint, Jun 11 2025
Functional Magnetic Resonance Imaging (fMRI) is essential for studying brain function and diagnosing neurological disorders, but current analysis methods face reproducibility and transferability issues due to complex pre-processing and task-specific models. We introduce NeuroSTORM (Neuroimaging Foundation Model with Spatial-Temporal Optimized Representation Modeling), a generalizable framework that directly learns from 4D fMRI volumes and enables efficient knowledge transfer across diverse applications. NeuroSTORM is pre-trained on 28.65 million fMRI frames (>9,000 hours) from over 50,000 subjects across multiple centers and ages 5 to 100. Using a Mamba backbone and a shifted scanning strategy, it efficiently processes full 4D volumes. We also propose a spatial-temporal optimized pre-training approach and task-specific prompt tuning to improve transferability. NeuroSTORM outperforms existing methods across five tasks: age/gender prediction, phenotype prediction, disease diagnosis, fMRI-to-image retrieval, and task-based fMRI classification. It demonstrates strong clinical utility on datasets from hospitals in the U.S., South Korea, and Australia, achieving top performance in disease diagnosis and cognitive phenotype prediction. NeuroSTORM provides a standardized, open-source foundation model to improve reproducibility and transferability in fMRI-based clinical research.

RCMIX model based on pre-treatment MRI imaging predicts T-downstage in MRI-cT4 stage rectal cancer.

Bai F, Liao L, Tang Y, Wu Y, Wang Z, Zhao H, Huang J, Wang X, Ding P, Wu X, Cai Z

pubmed, Jun 11 2025
Neoadjuvant therapy (NAT) is the standard treatment strategy for MRI-defined cT4 rectal cancer. Predicting tumor regression can guide the resection plane to some extent. Here, we collected pre-treatment MRI scans of 363 cT4 rectal cancer patients receiving NAT and radical surgery from three hospitals: Center 1 (n = 205), Center 2 (n = 109), and Center 3 (n = 52). We propose a machine learning model named RCMIX, which incorporates a multilayer perceptron algorithm based on 19 pre-treatment MRI radiomic features and 2 clinical features in cT4 rectal cancer patients receiving NAT. The model was trained on 205 cT4 rectal cancer patients, achieving an AUC of 0.903 (95% confidence interval, 0.861-0.944) in predicting T-downstage. It also achieved AUCs of 0.787 (0.699-0.874) and 0.773 (0.646-0.901) in two independent test cohorts, respectively. cT4 rectal cancer patients predicted as Well T-downstage by the RCMIX model had significantly better disease-free survival than those predicted as Poor T-downstage. Our study suggests that the RCMIX model performs satisfactorily in predicting T-downstage after NAT for cT4 rectal cancer patients, which may provide critical insights to improve surgical strategies.
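RCMIX is described only as a multilayer perceptron over 19 radiomic and 2 clinical features. A generic scikit-learn sketch of a model of that shape follows; the synthetic data and hyperparameters are assumptions, not the published configuration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, n_features = 205, 21            # 19 radiomic + 2 clinical features per patient
X = rng.normal(size=(n, n_features))
# Hypothetical binary T-downstage label driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),                                   # radiomic features need scaling
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X, y)
proba = model.predict_proba(X)[:, 1]  # predicted probability of T-downstage
```

The predicted probabilities are what an AUC such as the reported 0.903 would be computed from on a held-out cohort.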

Implementation of biomedical segmentation for brain tumor utilizing an adapted U-net model.

Alkhalid FF, Salih NZ

pubmed, Jun 11 2025
Magnetic resonance imaging (MRI) is a medical procedure that uses radio signals within a magnetic field to produce images providing more information than typical scans. Diagnosing brain tumors from MRI is difficult because of the wide range of tumor shapes, locations, and visual features, so a universal, automated system to handle this task is required. Among deep learning methods, the U-Net architecture is the most widely used for diagnostic medical images, and attention-based U-Net variants are among the most effective automated models for medical image segmentation across modalities. The self-attention structures used in the U-Net design allow for fast global processing and better feature visualization. This research studies the evolution of the U-Net design and shows how it improves the performance of brain tumor segmentation. We investigated three U-Net designs (standard U-Net, Attention U-Net, and self-attention U-Net), each trained for five epochs, to obtain the final segmentation. An MRI dataset of 3064 images from the Kaggle website is used to give a more comprehensive overview. We also offer a comparison with several studies based on U-Net structures to illustrate the evolution of this network from an accuracy standpoint. The self-attention U-Net demonstrated superior performance compared with other studies because self-attention can enhance segmentation quality, particularly for unclear structures, by concentrating on the most significant parts. Four main metrics are reported: a loss of 5.03%, a validation loss of 4.82%, a validation accuracy of 98.49%, and an accuracy of 98.45%.

Predicting pragmatic language abilities from brain structural MRI in preschool children with ASD by NBS-Predict.

Qian L, Ding N, Fang H, Xiao T, Sun B, Gao H, Ke X

pubmed, Jun 11 2025
Pragmatics plays a crucial role in effectively conveying messages across social communication contexts, an aspect frequently highlighted among the challenges experienced by children diagnosed with autism spectrum disorder (ASD). Notably, there remains a paucity of research investigating how the structural connectome (SC) predicts pragmatic language abilities in this population. Using diffusion tensor imaging (DTI) and deterministic tractography, we constructed the whole-brain white matter structural network (WMSN) in a cohort comprising 92 children with ASD and 52 typically developing (TD) preschoolers, matched for age and gender. We employed network-based statistic (NBS)-Predict, a novel methodology that integrates machine learning (ML) with NBS, first to identify dysconnected subnetworks associated with ASD, and then to predict pragmatic language abilities from the SC derived from the whole-brain WMSN in the ASD group. NBS-Predict identified a subnetwork characterized by 42 reduced connections across 37 brain regions (p = 0.01), achieving a peak classification accuracy of 79.4% (95% CI: 0.791-0.796). The dysconnected regions were predominantly localized within the brain's frontotemporal and subcortical areas, with the right superior medial frontal gyrus (SFGmed.R) emerging as the region exhibiting the most extensive disconnection. Moreover, NBS-Predict demonstrated an optimal correlation coefficient of 0.220 (95% CI: 0.174-0.265) between predicted and measured pragmatic language scores. This analysis revealed a significant association between the pragmatic language abilities of the ASD cohort and the white matter connections linking the SFGmed.R with the bilateral anterior cingulate gyrus (ACG). In summary, our findings suggest that the subnetworks displaying the most significant abnormal connections were concentrated in the frontotemporal and subcortical regions in the ASD group. Furthermore, the observed abnormalities in the white matter connection pathways between the SFGmed.R and the ACG may underlie the neurobiological basis for pragmatic language deficits in preschool children with ASD.

Cross-dataset Evaluation of Dementia Longitudinal Progression Prediction Models

Zhang, C., An, L., Wulan, N., Nguyen, K.-N., Orban, C., Chen, P., Chen, C., Zhou, J. H., Liu, K., Yeo, B. T. T., Alzheimer's Disease Neuroimaging Initiative, Australian Imaging Biomarkers and Lifestyle Study of Aging

medrxiv preprint, Jun 11 2025
Introduction: Accurately predicting Alzheimer's Disease (AD) progression is useful for clinical care. The 2019 TADPOLE (The Alzheimer's Disease Prediction Of Longitudinal Evolution) challenge evaluated 92 algorithms from 33 teams worldwide. Unlike typical clinical prediction studies, TADPOLE accommodates (1) a variable number of observed timepoints across patients, (2) missing data across modalities and visits, and (3) prediction over an open-ended time horizon, which better reflects real-world data. However, TADPOLE used only the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, so how well top algorithms generalize to other cohorts remains unclear. Methods: We tested five algorithms on three external datasets covering 2,312 participants and 13,200 timepoints. The algorithms included FROG, the overall TADPOLE winner, which utilized a unique Longitudinal-to-Cross-sectional (L2C) transformation to convert variable-length longitudinal histories into feature vectors of the same length across participants. We also considered two FROG variants. One variant unified all XGBoost models from the original FROG with a single feedforward neural network (FNN), which we refer to as L2C-FNN. We also included minimal recurrent neural networks (MinimalRNN), which was ranked second at publication time, as well as AD Course Map (AD-Map), which outperformed MinimalRNN at publication time. All five models (three FROG variants, MinimalRNN, and AD-Map) were trained on ADNI and tested on the external datasets. Results: L2C-FNN performed the best overall. For predicting cognition and ventricle volume, L2C-FNN and AD-Map were the best. For clinical diagnosis prediction, L2C-FNN was the best, while AD-Map was the worst. L2C-FNN also maintained its edge over the other models regardless of the number of observed timepoints and regardless of the prediction horizon, from 0 to 6 years into the future. Conclusions: L2C-FNN shows strong potential for both short-term and long-term dementia progression prediction. Pretrained ADNI models are available: https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/predict_phenotypes/Zhang2025_L2CFNN.
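The L2C transformation is described as converting variable-length longitudinal histories into same-length feature vectors. A toy sketch of that idea follows; the summary features used here (last value, mean, slope, visit count) are illustrative stand-ins, not FROG's actual feature set:

```python
import numpy as np

def l2c_features(visits: list[dict]) -> np.ndarray:
    """Toy Longitudinal-to-Cross-sectional transform: summarize a patient's
    variable-length visit history as a fixed-length feature vector.
    Each visit: {"months": time since baseline, "mmse": cognitive score}."""
    t = np.array([v["months"] for v in visits], dtype=float)
    y = np.array([v["mmse"] for v in visits], dtype=float)
    last = y[-1]                                            # most recent observation
    mean = y.mean()                                         # history average
    slope = 0.0 if len(y) < 2 else np.polyfit(t, y, 1)[0]   # rate of change per month
    n_visits = float(len(y))
    return np.array([last, mean, slope, n_visits])

# Patients with different numbers of visits map to same-length vectors,
# so a standard tabular model (XGBoost, FNN) can consume them directly
a = l2c_features([{"months": 0, "mmse": 29}, {"months": 12, "mmse": 27}])
b = l2c_features([{"months": 0, "mmse": 30}])
```

The point of the transform is exactly this shape-equalization: downstream models never see the raw, ragged visit sequences.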

Towards Practical Alzheimer's Disease Diagnosis: A Lightweight and Interpretable Spiking Neural Model

Changwei Wu, Yifei Chen, Yuxin Du, Jinying Zong, Jie Dong, Mingxuan Liu, Yong Peng, Jin Fan, Feiwei Qin, Changmiao Wang

arxiv preprint, Jun 11 2025
Early diagnosis of Alzheimer's Disease (AD), especially at the mild cognitive impairment (MCI) stage, is vital yet hindered by subjective assessments and the high cost of multimodal imaging modalities. Although deep learning methods offer automated alternatives, their energy inefficiency and computational demands limit real-world deployment, particularly in resource-constrained settings. As a brain-inspired paradigm, spiking neural networks (SNNs) are inherently well-suited for modeling the sparse, event-driven patterns of neural degeneration in AD, offering a promising foundation for interpretable and low-power medical diagnostics. However, existing SNNs often suffer from weak expressiveness and unstable training, which restrict their effectiveness in complex medical tasks. To address these limitations, we propose FasterSNN, a hybrid neural architecture that integrates biologically inspired LIF neurons with region-adaptive convolution and multi-scale spiking attention. This design enables sparse, efficient processing of 3D MRI while preserving diagnostic accuracy. Experiments on benchmark datasets demonstrate that FasterSNN achieves competitive performance with substantially improved efficiency and stability, supporting its potential for practical AD screening. Our source code is available at https://github.com/wuchangw/FasterSNN.
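FasterSNN builds on LIF (leaky integrate-and-fire) neurons. A minimal sketch of the standard LIF update (leak, integrate, fire on threshold, then reset), with illustrative constants rather than the paper's parameters:

```python
def lif_simulate(inputs, tau=0.9, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    factor tau, accumulates input current, emits a spike when it crosses
    the threshold v_th, and is then reset."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x          # leak, then integrate the input current
        if v >= v_th:
            spikes.append(1)     # fire
            v = v_reset          # hard reset after the spike
        else:
            spikes.append(0)
    return spikes

spikes = lif_simulate([0.5, 0.5, 0.5, 0.0, 0.9, 0.9])  # -> [0, 0, 1, 0, 0, 1]
```

The binary spike train is what makes SNNs sparse and event-driven: downstream layers only do work when a spike arrives.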

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning.

Ding Z, Morris S, Hu S, Su TY, Choi JY, Blümcke I, Wang X, Sakaie K, Murakami H, Alexopoulos AV, Jones SE, Najm IM, Ma D, Wang ZI

pubmed, Jun 10 2025
Focal cortical dysplasia (FCD) is a common pathology in pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection. We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age-matched and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed with a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 z-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on the MRF GM and WM maps. A no-new U-Net model was trained using various input combinations, with performance evaluated through leave-one-patient-out cross-validation. We compared model performance across input combinations from clinical MRI and MRF to assess the impact of different input types on model effectiveness. We included 40 patients with FCD (mean age 28.1 years; 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, and 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient. Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FP counts. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap. The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for a deep-learning tool capable of detecting subtle epileptic lesions.
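The z-score maps described above are simple voxel-wise arithmetic against the healthy-control distribution. A toy NumPy sketch follows; the array shapes and values are illustrative (real maps are 3D MNI-space volumes):

```python
import numpy as np

# Stack of T1 maps from healthy controls, registered to a common space
# (toy 2x2 "volume" from 4 controls; values loosely resemble T1 times in ms)
hc_t1 = np.array([
    [[900.0, 1000.0], [1100.0, 1200.0]],
    [[920.0,  980.0], [1080.0, 1220.0]],
    [[880.0, 1020.0], [1120.0, 1180.0]],
    [[900.0, 1000.0], [1100.0, 1200.0]],
])

mu = hc_t1.mean(axis=0)       # voxel-wise HC mean map
sigma = hc_t1.std(axis=0)     # voxel-wise HC SD map

# A patient's T1 map in the same space: subtract the mean, divide by the SD
patient_t1 = np.array([[950.0, 1000.0], [1100.0, 1150.0]])
z_map = (patient_t1 - mu) / sigma
```

Voxels where `z_map` is far from zero deviate from the healthy-control distribution, which is what makes these maps useful inputs for lesion detection.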

Multivariate brain morphological patterns across mood disorders: key roles of frontotemporal and cerebellar areas.

Kandilarova S, Maggioni E, Squarcina L, Najar D, Homadi M, Tassi E, Stoyanov D, Brambilla P

pubmed, Jun 10 2025
Differentiating major depressive disorder (MDD) from bipolar disorder (BD) remains a significant clinical challenge, as both disorders exhibit overlapping symptoms but require distinct treatment approaches. Advances in voxel-based and surface-based morphometry have facilitated the identification of structural brain abnormalities that may serve as diagnostic biomarkers. This study explored the relationships between brain morphological features, such as grey matter volume (GMV) and cortical thickness (CT), and demographic and clinical variables in patients with MDD and BD and healthy controls (HC) using multivariate analysis methods. A total of 263 participants, including 120 HC, 95 patients with MDD, and 48 patients with BD, underwent T1-weighted MRI. GMV and CT were computed for standardised brain regions, followed by multivariate partial least squares (PLS) regression to assess associations with demographic and diagnostic variables. Reductions in frontotemporal CT were observed in both MDD and BD compared with HC, but distinct trends between BD and MDD were also detected in the CT of selective temporal, frontal, and parietal regions. Differential patterns in cerebellar GMV were also identified, with lobule CI larger in MDD and lobule CII larger in BD. Additionally, BD showed ageing-like reductions in CT and in posterior cerebellar and striatal GMV. Depression severity showed a transdiagnostic link with reduced frontotemporal CT. This study highlights shared and distinct structural brain alterations in MDD and BD, emphasising the potential of neuroimaging biomarkers to enhance diagnostic accuracy. Accelerated cortical thinning and differential cerebellar changes in BD may serve as targets for future research and clinical interventions. Our findings underscore the value of objective neuroimaging markers in increasing the precision of mood disorder diagnoses and improving treatment outcomes.