Page 10 of 1601593 results

Learning neuroimaging models from health system-scale data

Yiwei Lyu, Samir Harake, Asadur Chowdury, Soumyanil Banerjee, Rachel Gologorsky, Shixuan Liu, Anna-Katharina Meissner, Akshay Rao, Chenhui Zhao, Akhil Kondepudi, Cheng Jiang, Xinhai Hou, Rushikesh S. Joshi, Volker Neuschmelting, Ashok Srinivasan, Dawn Kleindorfer, Brian Athey, Vikas Gulani, Aditya Pandey, Honglak Lee, Todd Hollon

arxiv logopreprintSep 23 2025
Neuroimaging is a ubiquitous tool for evaluating patients with neurological diseases. The global demand for magnetic resonance imaging (MRI) studies has risen steadily, placing significant strain on health systems, prolonging turnaround times, and intensifying physician burnout [Chen2017-bt, Rula2024-qp-1]. These challenges disproportionately impact patients in low-resource and rural settings. Here, we utilized a large academic health system as a data engine to develop Prima, the first vision language model (VLM) serving as an AI foundation for neuroimaging that supports real-world, clinical MRI studies as input. Trained on over 220,000 MRI studies, Prima uses a hierarchical vision architecture that provides general and transferable MRI features. Prima was tested in a 1-year health system-wide study that included 30K MRI studies. Across 52 radiologic diagnoses from the major neurologic disorders, including neoplastic, inflammatory, infectious, and developmental lesions, Prima achieved a mean diagnostic area under the ROC curve of 92.0, outperforming other state-of-the-art general and medical AI models. Prima offers explainable differential diagnoses, worklist priority for radiologists, and clinical referral recommendations across diverse patient demographics and MRI systems. Prima demonstrates algorithmic fairness across sensitive groups and can help mitigate health system biases, such as prolonged turnaround times for low-resource populations. These findings highlight the transformative potential of health system-scale VLMs and Prima's role in advancing AI-driven healthcare.
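Prima's headline metric is the mean diagnostic AUROC across 52 binary diagnosis tasks. As a minimal sketch of how such a figure is computed (the labels and scores below are toy stand-ins, not the paper's data), per-task AUROC can be obtained from the Mann-Whitney rank statistic and then averaged:

```python
def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting 0.5.
    Assumes both classes are present."""
    pos = [s for t, s in zip(y_true, y_score) if t]
    neg = [s for t, s in zip(y_true, y_score) if not t]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

# One binary task per diagnosis; the mean AUROC averages over tasks.
tasks = [
    ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),   # toy labels / model scores
    ([0, 1, 0, 1], [0.2, 0.9, 0.3, 0.7]),
]
mean_auc = sum(auroc(y, s) for y, s in tasks) / len(tasks)
```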

The presymptomatic and early manifestations of semantic dementia.

Whiteside DJ, Rouse MA, Jones PS, Coyle-Gilchrist I, Murley AG, Stockton K, Hughes LE, Bethlehem RAI, Warrier V, Lambon Ralph MA, Rittman T, Rowe JB

pubmed logopapersSep 23 2025
People with semantic dementia (SD) or semantic variant primary progressive aphasia typically present with marked atrophy of the anterior temporal lobe, and thereafter progress more slowly than other forms of frontotemporal dementia. This suggests a prolonged prodromal phase with accumulation of neuropathology and minimal symptoms, about which little is known. To study early and presymptomatic SD, we first examine a well-characterised cohort of people with SD recruited from the Cambridge Centre for Frontotemporal Dementia. Five people with early SD had coincidental MRI prior to the onset of symptoms, or were healthy volunteers in research with anterior temporal lobe atrophy as an incidental finding. We model longitudinal imaging changes in left- and right-lateralised SD to predict atrophy at symptom onset. We then assess 61,203 participants with structural brain MRI in the UK Biobank to find individuals with imaging changes in keeping with SD but with no neurodegenerative diagnosis. To identify these individuals in UK Biobank, we design an ensemble-based classifier, differentiating baseline structural MRI in SD from healthy controls and patients with other neurodegenerative diseases, including other causes of frontotemporal lobar degeneration. We train the classifier on a Cambridge-based cohort (SD n=47, other neurodegenerative diseases n=498, healthy controls n=88) and test it on a combined cohort from the Neuroimaging in Frontotemporal Dementia study and the Alzheimer's Disease Neuroimaging Initiative (SD n=42, other neurodegenerative n=449, healthy control n=127). From our case series, we find people with marked atrophy three to five years before recognition of symptom onset in left- or right-predominant SD. We present right-lateralised cases with subtle multimodal semantic impairment, found concurrently with only mild behavioural disturbance. 
We show that imaging measures can be used to reliably and accurately differentiate clinical SD from other neurodegenerative diseases (recall 0.88, precision 0.95, F1 score 0.91). We find individuals with no neurodegenerative diagnosis in the UK Biobank with striking left-lateralised (prevalence ages 45-85 4.8/100,000) or right-lateralised (5.9/100,000) anterior temporal lobe atrophy, with deficits on cognitive testing suggestive of semantic impairment. These individuals show progressive involvement of other cognitive domains in longitudinal follow-up. Together, our findings suggest that (i) there is a burden of incipient early anterior temporal lobe atrophy in older populations, with comparable prevalence of left- and right-sided cases from this prospective unbiased approach to identification, (ii) substantial atrophy is required for manifest symptoms, particularly in right-lateralised cases, and (iii) semantic deficits across multiple domains can be detected in the early symptomatic phase.
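The reported classifier metrics are mutually consistent: F1 is the harmonic mean of precision and recall, so precision 0.95 and recall 0.88 yield F1 0.91. A quick check (the confusion counts in the helper are illustrative, not from the study):

```python
def precision_recall_f1(tp, fp, fn):
    """Binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Verifying the abstract's numbers: F1 from precision 0.95 and recall 0.88.
p, r = 0.95, 0.88
f1 = 2 * p * r / (p + r)
```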

Graph-Radiomic Learning (GrRAiL) Descriptor to Characterize Imaging Heterogeneity in Confounding Tumor Pathologies

Dheerendranath Battalapalli, Apoorva Safai, Maria Jaramillo, Hyemin Um, Gustavo Adalfo Pineda Ortiz, Ulas Bagci, Manmeet Singh Ahluwalia, Marwa Ismail, Pallavi Tiwari

arxiv logopreprintSep 23 2025
A significant challenge in solid tumors is reliably distinguishing confounding pathologies from malignant neoplasms on routine imaging. While radiomics methods seek surrogate markers of lesion heterogeneity on CT/MRI, many aggregate features across the region of interest (ROI) and miss complex spatial relationships among varying intensity compositions. We present a new Graph-Radiomic Learning (GrRAiL) descriptor for characterizing intralesional heterogeneity (ILH) on clinical MRI scans. GrRAiL (1) identifies clusters of sub-regions using per-voxel radiomic measurements, then (2) computes graph-theoretic metrics to quantify spatial associations among clusters. The resulting weighted graphs encode higher-order spatial relationships within the ROI, aiming to reliably capture ILH and disambiguate confounding pathologies from malignancy. To assess efficacy and clinical feasibility, GrRAiL was evaluated in n=947 subjects spanning three use cases: differentiating tumor recurrence from radiation effects in glioblastoma (GBM; n=106) and brain metastasis (n=233), and stratifying pancreatic intraductal papillary mucinous neoplasms (IPMNs) into no+low vs high risk (n=608). In a multi-institutional setting, GrRAiL consistently outperformed state-of-the-art baselines - Graph Neural Networks (GNNs), textural radiomics, and intensity-graph analysis. In GBM, cross-validation (CV) and test accuracies for recurrence vs pseudo-progression were 89% and 78% with >10% test-accuracy gains over comparators. In brain metastasis, CV and test accuracies for recurrence vs radiation necrosis were 84% and 74% (>13% improvement). For IPMN risk stratification, CV and test accuracies were 84% and 75%, showing >10% improvement.
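The two GrRAiL steps can be sketched on toy data (the actual per-voxel radiomic features, cluster count, edge weighting, and graph metrics are assumptions here, not the paper's specification): cluster voxels into sub-regions, then form a weighted graph over clusters and read off graph-theoretic descriptors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: per-voxel radiomic feature vectors inside one ROI.
feats = rng.random((300, 4))

# Step 1: naive k-means groups voxels into k sub-region clusters.
k = 3
centers = feats[rng.choice(len(feats), k, replace=False)]
for _ in range(10):
    labels = np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([feats[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])

# Step 2: weighted cluster graph; edge weight decays with center distance.
dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
W = np.exp(-dist)
np.fill_diagonal(W, 0.0)

# One simple graph-theoretic descriptor: weighted node degree ("strength").
strength = W.sum(axis=1)
```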

MOIS-SAM2: Exemplar-based Segment Anything Model 2 for multilesion interactive segmentation of neurofibromas in whole-body MRI

Georgii Kolokolnikov, Marie-Lena Schmalhofer, Sophie Goetz, Lennart Well, Said Farschtschi, Victor-Felix Mautner, Inka Ristow, Rene Werner

arxiv logopreprintSep 23 2025
Background and Objectives: Neurofibromatosis type 1 is a genetic disorder characterized by the development of numerous neurofibromas (NFs) throughout the body. Whole-body MRI (WB-MRI) is the clinical standard for detection and longitudinal surveillance of NF tumor growth. Existing interactive segmentation methods fail to combine high lesion-wise precision with scalability to hundreds of lesions. This study proposes a novel interactive segmentation model tailored to this challenge. Methods: We introduce MOIS-SAM2, a multi-object interactive segmentation model that extends the state-of-the-art, transformer-based, promptable Segment Anything Model 2 (SAM2) with exemplar-based semantic propagation. MOIS-SAM2 was trained and evaluated on 119 WB-MRI scans from 84 NF1 patients acquired using T2-weighted fat-suppressed sequences. The dataset was split at the patient level into a training set and four test sets (one in-domain and three reflecting different domain shift scenarios, e.g., MRI field strength variation, low tumor burden, differences in clinical site and scanner vendor). Results: On the in-domain test set, MOIS-SAM2 achieved a scan-wise DSC of 0.60 against expert manual annotations, outperforming baseline 3D nnU-Net (DSC: 0.54) and SAM2 (DSC: 0.35). Performance of the proposed model was maintained under MRI field strength shift (DSC: 0.53) and scanner vendor variation (DSC: 0.50), and improved in low tumor burden cases (DSC: 0.61). Lesion detection F1 scores ranged from 0.62 to 0.78 across test sets. Preliminary inter-reader variability analysis showed model-to-expert agreement (DSC: 0.62-0.68), comparable to inter-expert agreement (DSC: 0.57-0.69). Conclusions: The proposed MOIS-SAM2 enables efficient and scalable interactive segmentation of NFs in WB-MRI with minimal user input and strong generalization, supporting integration into clinical workflows.
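The scan-wise DSC figures above are the standard Dice overlap between predicted and expert masks; for reference, a minimal implementation over flattened binary masks:

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks (flat sequences).
    Returns 1.0 when both masks are empty, by convention."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```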

Exploiting Cross-modal Collaboration and Discrepancy for Semi-supervised Ischemic Stroke Lesion Segmentation from Multi-sequence MRI Images.

Cao Y, Qin T, Liu Y

pubmed logopapersSep 23 2025
Accurate ischemic stroke lesion segmentation helps define the optimal reperfusion treatment and unveil the stroke etiology. Although diffusion-weighted MRI (DWI) is central to stroke diagnosis, learning from multi-sequence MRI such as apparent diffusion coefficient (ADC) maps can capitalize on the complementary information across modalities and shows strong potential to improve segmentation performance. However, existing deep learning-based methods require large amounts of well-annotated data from multiple modalities for training, and acquiring such datasets is often impractical. We explore semi-supervised stroke lesion segmentation from multi-sequence MRI, using unlabeled data to improve performance under limited annotation, and propose a novel framework that exploits cross-modality collaboration and discrepancy to use unlabeled data efficiently. Specifically, we adopt a cross-modal bidirectional copy-paste strategy to enable information collaboration between modalities and a cross-modal discrepancy-informed correction strategy to learn efficiently from limited labeled multi-sequence MRI data and abundant unlabeled data. Extensive experiments on the Ischemic Stroke Lesion Segmentation (ISLES 22) dataset demonstrate that our method uses unlabeled data efficiently, achieving a 12.32% DSC improvement over a supervised baseline with 10% annotations and outperforming existing semi-supervised segmentation methods.
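The bidirectional copy-paste idea can be illustrated with a toy sketch. Note this is a simplification: the paper's strategy mixes labeled and unlabeled multi-sequence volumes, whereas the sketch below simply swaps a masked patch between two aligned sequences in both directions.

```python
import numpy as np

def bidirectional_copy_paste(img_a, img_b, mask):
    """Swap the masked region between two aligned images, in both directions."""
    mixed_a = np.where(mask, img_b, img_a)   # paste B's patch into A
    mixed_b = np.where(mask, img_a, img_b)   # paste A's patch into B
    return mixed_a, mixed_b

rng = np.random.default_rng(0)
dwi, adc = rng.random((32, 32)), rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:16, 8:16] = True
dwi_mix, adc_mix = bidirectional_copy_paste(dwi, adc, mask)
```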

Enhancing AI-based decision support system with automatic brain tumor segmentation for EGFR mutation classification.

Gökmen N, Kocadağlı O, Cevik S, Aktan C, Eghbali R, Liu C

pubmed logopapersSep 23 2025
Glioblastoma (GBM) carries a poor prognosis; epidermal growth factor receptor (EGFR) mutations further shorten survival. We propose a fully automated MRI-based decision-support system (DSS) that segments GBM and classifies EGFR status, reducing reliance on invasive biopsy. The segmentation module (UNet SI) fuses multiresolution, entropy-ranked shearlet features with CNN features, preserving fine detail through identity long-skip connections, yielding a lightweight 1.9M-parameter network. Tumour masks are fed to an Inception-ResNet-v2 classifier via a 512-D bottleneck. The pipeline was five-fold cross-validated on 98 contrast-enhanced T1-weighted scans (Memorial Hospital; Ethics 24.12.2021/008) and externally validated on BraTS 2019. On the Memorial cohort, UNet SI achieved Dice 0.873, Jaccard 0.853, SSIM 0.992, and HD95 24.19 mm. EGFR classification reached accuracy 0.960, precision 1.000, recall 0.871, and AUC 0.94, surpassing published state-of-the-art results. Inference time is ≤ 0.18 s per slice on a 4 GB GPU. By combining shearlet-enhanced segmentation with streamlined classification, the DSS delivers superior EGFR prediction and is suitable for integration into routine clinical workflows.

Feature-Based Machine Learning for Brain Metastasis Detection Using Clinical MRI

Rahi, A., Shafiabadi, M. H.

medrxiv logopreprintSep 22 2025
Brain metastases represent one of the most common intracranial malignancies, yet early and accurate detection remains challenging, particularly in clinical datasets with limited availability of healthy controls. In this study, we developed a feature-based machine learning framework to classify patients with and without brain metastases using multi-modal clinical MRI scans. A dataset of 50 subjects from the UCSF Brain Metastases collection was analyzed, including pre- and post-contrast T1-weighted images and corresponding segmentation masks. We designed advanced feature extraction strategies capturing intensity, enhancement patterns, texture gradients, and histogram-based metrics, resulting in 44 quantitative descriptors per subject. To address the severe class imbalance (46 metastasis vs. 4 non-metastasis cases), we applied minority oversampling and noise-based augmentation, combined with stratified cross-validation. Among multiple classifiers, Random Forest consistently achieved the highest performance with an average accuracy of 96.7% and an area under the ROC curve (AUC) of 0.99 across five folds. The proposed approach highlights the potential of handcrafted radiomic-like features coupled with machine learning to improve metastasis detection in heterogeneous clinical MRI cohorts. These findings underscore the importance of methodological strategies for handling imbalanced data and support the integration of feature-based models as complementary tools for brain metastasis screening and research.
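The imbalance handling described (minority oversampling plus noise-based augmentation) can be sketched as follows. The cohort sizes match the abstract (46 vs. 4 subjects, 44 features each), but the feature values and noise scale below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the cohort: 46 metastasis vs. 4 non-metastasis subjects,
# each described by 44 handcrafted features.
X = rng.random((50, 44))
y = np.array([1] * 46 + [0] * 4)

# Oversample the minority class to parity, jittering copies with Gaussian noise.
minority = np.flatnonzero(y == 0)
n_extra = (y == 1).sum() - (y == 0).sum()          # 42 synthetic samples
picks = rng.choice(minority, size=n_extra, replace=True)
X_aug = np.vstack([X, X[picks] + rng.normal(0.0, 0.01, size=(n_extra, 44))])
y_aug = np.concatenate([y, np.zeros(n_extra, dtype=int)])
```

After augmentation the two classes are balanced, which is what makes stratified cross-validation with a standard classifier (Random Forest in the study) workable.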

Linking dynamic connectivity states to cognitive decline and anatomical changes in Alzheimer's disease.

Tessadori J, Galazzo IB, Storti SF, Pini L, Brusini L, Cruciani F, Sona D, Menegaz G, Murino V

pubmed logopapersSep 22 2025
Alterations in brain connectivity provide early indications of neurodegenerative diseases like Alzheimer's disease (AD). Here, we present a novel framework that integrates a Hidden Markov Model (HMM) within the architecture of a convolutional neural network (CNN) to analyze dynamic functional connectivity (dFC) in resting-state functional magnetic resonance imaging (rs-fMRI). Our unsupervised approach captures recurring connectivity states in a large cohort of subjects spanning the Alzheimer's disease continuum, including healthy controls, individuals with mild cognitive impairment (MCI), and patients with clinically diagnosed AD. We propose a deep neural model with embedded HMM dynamics to identify stable recurring brain states from resting-state fMRI. These states exhibit distinct connectivity patterns and are differentially expressed across the Alzheimer's disease continuum. Our analysis shows that the fraction of time each state is active varies systematically with disease severity, highlighting dynamic network alterations that track neurodegeneration. Our findings suggest that the disruption of dynamic connectivity patterns in AD may follow a two-stage trajectory, where early shifts toward integrative network states give way to reduced connectivity organization as the disease progresses. This framework offers a promising tool for early diagnosis and monitoring of AD, and may have broader applications in the study of other neurodegenerative conditions.
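The "fraction of time each state is active" statistic (fractional occupancy) that tracks disease severity follows directly from the decoded state sequence; a minimal sketch:

```python
def fractional_occupancy(states, n_states):
    """Fraction of time points each hidden state occupies in a state sequence."""
    counts = [0] * n_states
    for s in states:
        counts[s] += 1
    return [c / len(states) for c in counts]
```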

Measurement Score-Based MRI Reconstruction with Automatic Coil Sensitivity Estimation

Tingjun Liu, Chicago Y. Park, Yuyang Hu, Hongyu An, Ulugbek S. Kamilov

arxiv logopreprintSep 22 2025
Diffusion-based inverse problem solvers (DIS) have recently shown outstanding performance in compressed-sensing parallel MRI reconstruction by combining diffusion priors with physical measurement models. However, they typically rely on pre-calibrated coil sensitivity maps (CSMs) and ground truth images, making them often impractical: CSMs are difficult to estimate accurately under heavy undersampling and ground-truth images are often unavailable. We propose Calibration-free Measurement Score-based diffusion Model (C-MSM), a new method that eliminates these dependencies by jointly performing automatic CSM estimation and self-supervised learning of measurement scores directly from k-space data. C-MSM reconstructs images by approximating the full posterior distribution through stochastic sampling over partial measurement posterior scores, while simultaneously estimating CSMs. Experiments on the multi-coil brain fastMRI dataset show that C-MSM achieves reconstruction performance close to DIS with clean diffusion priors -- even without access to clean training data and pre-calibrated CSMs.
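C-MSM's joint CSM estimation replaces the usual pre-calibration step. For context, here is a minimal sketch of the standard conjugate-sensitivity coil combination that calibrated CSMs enable; this is a simplification for illustration, not the paper's method, which operates on k-space measurement scores:

```python
import numpy as np

def coil_combine(coil_images, csms, eps=1e-12):
    """SENSE-style combination: conjugate-sensitivity weighted sum over coils.
    coil_images, csms: complex arrays of shape (n_coils, H, W)."""
    num = (np.conj(csms) * coil_images).sum(axis=0)
    den = (np.abs(csms) ** 2).sum(axis=0)
    return num / np.maximum(den, eps)

# Consistency check: coil images generated as csm * image recover the image.
rng = np.random.default_rng(0)
img = rng.random((8, 8)) + 1j * rng.random((8, 8))
csms = rng.random((4, 8, 8)) + 1j * rng.random((4, 8, 8))
recon = coil_combine(csms * img, csms)
```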