Page 8 of 59587 results

Model-unrolled fast MRI with weakly supervised lesion enhancement.

Ju F, He Y, Wang F, Li X, Niu C, Lian C, Ma J

pubmed · Sep 15 2025
The utility of Magnetic Resonance Imaging (MRI) in anomaly detection and disease diagnosis is well recognized. However, current imaging protocols are often hindered by long scanning durations and a misalignment between the scanning process and the specific requirements of subsequent clinical assessments. While recent studies have actively explored accelerated MRI techniques, the majority have concentrated on improving overall image quality across all voxel locations, overlooking the specific abnormalities that hold clinical significance. To address this discrepancy, we propose a model-unrolled deep-learning method, guided by weakly supervised lesion attention, for accelerated MRI oriented toward downstream clinical needs. In particular, we construct a lesion-focused MRI reconstruction model, which incorporates customized learnable regularizations that can be learned efficiently using only image-level labels to improve potential lesion reconstruction while preserving overall image quality. We then design a dedicated iterative algorithm to solve this task-driven reconstruction model, which is further unfolded as a cascaded deep network for lesion-focused fast imaging. Comprehensive experiments on two public datasets, i.e., fastMRI and the Stanford Knee MRI Multi-Task Evaluation (SKM-TEA), demonstrate that our approach, referred to as Lesion-Focused MRI (LF-MRI), surpasses existing accelerated MRI methods by relatively large margins. Remarkably, LF-MRI led to substantial improvements in areas showing pathology. The source code and pretrained models will be publicly available at https://github.com/ladderlab-xjtu/LF-MRI.
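The unrolled, regularized reconstruction the abstract describes can be illustrated with a minimal proximal-gradient sketch. This is a generic illustration, not the paper's method: the soft-threshold step stands in for LF-MRI's learned, lesion-aware regularizers, and the step size and iteration count are arbitrary assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm; a hand-crafted stand-in for the
    # learned per-iteration regularizer in an unrolled network.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_recon(y, mask, n_iters=8, step=1.0, lam=0.01):
    """Unrolled proximal-gradient reconstruction from undersampled k-space.

    y: undersampled k-space measurements (zeros where mask == 0)
    mask: binary sampling mask of the same shape
    """
    x = np.fft.ifft2(y)  # zero-filled initial estimate
    for _ in range(n_iters):
        # Data-consistency gradient step: A^H (A x - y) with A = mask * FFT.
        residual = mask * np.fft.fft2(x) - y
        x = x - step * np.fft.ifft2(residual)
        # Regularization step applied to real and imaginary parts.
        x = soft_threshold(x.real, lam) + 1j * soft_threshold(x.imag, lam)
    return x
```

In an unrolled ("model-based") network, each of these iterations becomes one cascade stage with its own learnable parameters, which is how image-level lesion labels can shape the regularizer.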

Normative Modelling of Brain Volume for Diagnostic and Prognostic Stratification in Multiple Sclerosis

Korbmacher, M., Lie, I. A., Wesnes, K., Westman, E., Espeseth, T., Andreassen, O., Westlye, L., Wergeland, S., Harbo, H. F., Nygaard, G. O., Myhr, K.-M., Hogestol, E. A., Torkildsen, O.

medrxiv preprint · Sep 15 2025
Background: Brain atrophy is a hallmark of multiple sclerosis (MS). For clinical translatability and individual-level predictions, brain atrophy needs to be put into the context of the broader population using reference or normative models. Methods: Reference models of MRI-derived brain volumes were established from a large healthy control (HC) multi-cohort dataset (N=63 115, 51% females). The reference models were applied to two independent MS cohorts (N=362, T1w scans=953, follow-up time up to 12 years) to assess deviations from the reference, defined as Z-values. We assessed the overlap of deviation profiles and their stability over time using individual-level transitions towards or out of significant reference-deviation states (|Z|>1·96). A negative binomial model was used for case-control comparisons of the number of extreme deviations. Linear models were used to assess differences in Z-score deviations between MS and propensity-matched HCs, and associations with clinical scores at baseline and over time. The normative BrainReference models, scripts, and usage instructions are freely available. Findings: We identified a temporally stable brain morphometric phenotype of MS. The right and left thalami most consistently showed significantly lower-than-reference volumes in MS (25% and 26% overlap across the sample). The number of such extreme smaller-than-reference values was 2·70 times higher in MS than in HC (4·51 versus 1·67). More deviations were associated with greater disability (Expanded Disability Status Scale: β=0·22, 95% CI 0·12 to 0·32), lower Paced Auditory Serial Addition Test scores (β=-0·27, 95% CI -0·52 to -0·02), and higher Fatigue Severity Scores (β=0·29, 95% CI 0·05 to 0·53) at baseline, and with EDSS over time (β=0·07, 95% CI 0·02 to 0·13).
We additionally provide detailed maps of reference deviations and their associations with clinical assessments. Interpretation: We present a heterogeneous brain phenotype of MS that is associated with clinical manifestations and particularly implicates the thalamus. The findings offer potential to aid the diagnosis and prognosis of MS. Funding: Norwegian MS-union, Research Council of Norway (#223273; #324252); the South-Eastern Norway Regional Health Authority (#2022080); and the European Union's Horizon 2020 Research and Innovation Programme (#847776, #802998). Research in context — Evidence before this study: Reference values and normative models have yet to be widely applied to neuroimaging assessments of neurological disorders such as multiple sclerosis (MS). We conducted a literature search in PubMed and Embase (Jan 1, 2000 to September 12, 2025) using the terms "MRI" AND "multiple sclerosis", with and without the keywords "normative model*" and "atrophy", without language restrictions. While normative models have been applied in psychiatric and developmental disorders, few studies have addressed their use in neurological conditions. Existing MS research has largely focused on global atrophy and has not provided regional reference charts or established links to clinical and cognitive outcomes. Added value of this study: We provide regionally detailed brain morphometry maps derived from a heterogeneous MS cohort spanning wide ranges of age, sex, clinical phenotype, disease duration, disability, and scanner characteristics. By leveraging normative modelling, our approach enables individualised brain phenotyping of MS in relation to a population-based normative sample. The analyses reveal clinically meaningful and spatially consistent patterns of smaller brain volumes, particularly in the thalamus and frontal cortical regions, which are linked to disability, cognitive impairment, and fatigue.
Robustness across scanners, centres, and longitudinal follow-up supports the stability and generalisability of these findings to real-world MS populations. Implications of all the available evidence: Normative modelling offers an individualised, sensitive, and interpretable approach to quantifying brain structure in MS by providing individual-specific reference values, supporting earlier detection of neurodegeneration and improved patient stratification. A consistent pattern of thalamic and fronto-parietal deviations defines a distinct morphometric profile of MS, with potential utility for early and personalised diagnosis and disease monitoring in clinical practice and clinical trials.
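The core normative-deviation computation, Z-scoring each regional volume against a reference and counting extreme deviations at the paper's |Z| > 1·96 threshold, can be sketched as below. This toy version assumes simple per-region reference means and SDs; the actual BrainReference models condition on covariates such as age, sex, and scanner, which are omitted here.

```python
def deviation_profile(values, ref_means, ref_sds, threshold=1.96):
    """Z-score regional volumes against a normative reference and flag
    extreme deviations (|Z| > threshold).

    values, ref_means, ref_sds: per-region volume, reference mean, reference SD.
    Returns the Z-scores and the count of extreme deviations.
    """
    z = [(v - m) / s for v, m, s in zip(values, ref_means, ref_sds)]
    extremes = sum(abs(zi) > threshold for zi in z)
    return z, extremes
```

Case-control comparisons of the resulting extreme-deviation counts are what the paper models with a negative binomial regression.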

Enhancing 3D Medical Image Understanding with Pretraining Aided by 2D Multimodal Large Language Models.

Chen Q, Yao X, Ye H, Hong Y

pubmed · Sep 15 2025
Understanding 3D medical image volumes is critical in the medical field, yet existing 3D medical convolution and transformer-based self-supervised learning (SSL) methods often lack deep semantic comprehension. Recent advancements in multimodal large language models (MLLMs) provide a promising approach to enhance image understanding through text descriptions. To leverage these 2D MLLMs for improved 3D medical image understanding, we propose Med3DInsight, a novel pretraining framework that integrates 3D image encoders with 2D MLLMs via a specially designed plane-slice-aware transformer module. Additionally, our model employs a partial optimal transport-based alignment, demonstrating greater tolerance to the noise introduced in LLM-generated content. Med3DInsight introduces a new paradigm for scalable multimodal 3D medical representation learning without requiring human annotations. Extensive experiments demonstrate our state-of-the-art performance on two downstream tasks, i.e., segmentation and classification, across various public datasets with CT and MRI modalities, outperforming current SSL methods. Med3DInsight can be seamlessly integrated into existing 3D medical image understanding networks, potentially enhancing their performance. Our source code, generated datasets, and pre-trained models will be available upon acceptance.
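The optimal-transport alignment idea can be illustrated with a minimal entropic Sinkhorn sketch over a cost matrix between image and text features. Note the assumption: this is the standard full-marginal variant; the paper's *partial* OT additionally relaxes the marginal constraints so that noisy LLM-generated text features can be down-weighted, which is not shown here.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    cost: (n, m) cost matrix, e.g. 1 - cosine similarity between
          image-slice features (rows) and text features (columns).
    Returns the (n, m) transport plan.
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)   # uniform source marginal
    b = np.full(m, 1.0 / m)   # uniform target marginal
    K = np.exp(-cost / eps)   # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)     # alternate scaling of rows and columns
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

The resulting plan concentrates mass on low-cost (well-matched) pairs, which is the mechanism an OT-based alignment loss exploits.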

A Fully Open and Generalizable Foundation Model for Ultrasound Clinical Applications

Hongyuan Zhang, Yuheng Wu, Mingyang Zhao, Zhiwei Chen, Rebecca Li, Fei Zhu, Haohan Zhao, Xiaohua Yuan, Meng Yang, Chunli Qiu, Xiang Cong, Haiyan Chen, Lina Luan, Randolph H. L. Wong, Huai Liao, Colin A Graham, Shi Chang, Guowei Tao, Dong Yi, Zhen Lei, Nassir Navab, Sebastien Ourselin, Jiebo Luo, Hongbin Liu, Gaofeng Meng

arxiv preprint · Sep 15 2025
Artificial intelligence (AI) that can effectively learn ultrasound representations by integrating multi-source data holds significant promise for advancing clinical care. However, the scarcity of large labeled datasets in real-world clinical environments and the limited generalizability of task-specific models have hindered the development of generalizable clinical AI models for ultrasound applications. In this study, we present EchoCare, a novel ultrasound foundation model for generalist clinical use, developed via self-supervised learning on our curated, publicly available, large-scale dataset EchoCareData. EchoCareData comprises 4.5 million ultrasound images, sourced from over 23 countries across 5 continents and acquired via a diverse range of distinct imaging devices, thus encompassing global cohorts that are multi-center, multi-device, and multi-ethnic. Unlike prior studies that adopt off-the-shelf vision foundation model architectures, we introduce a hierarchical classifier into EchoCare to enable joint learning of pixel-level and representation-level features, capturing both global anatomical contexts and local ultrasound characteristics. With minimal training, EchoCare outperforms state-of-the-art comparison models across 10 representative ultrasound benchmarks of varying diagnostic difficulties, spanning disease diagnosis, lesion segmentation, organ detection, landmark prediction, quantitative regression, imaging enhancement and report generation. The code and pretrained model are publicly released, rendering EchoCare accessible for fine-tuning and local adaptation, supporting extensibility to additional applications. EchoCare provides a fully open and generalizable foundation model to boost the development of AI technologies for diverse clinical ultrasound applications.

MultiMAE for Brain MRIs: Robustness to Missing Inputs Using Multi-Modal Masked Autoencoder

Ayhan Can Erdur, Christian Beischl, Daniel Scholz, Jiazhen Pan, Benedikt Wiestler, Daniel Rueckert, Jan C Peeken

arxiv preprint · Sep 14 2025
Missing input sequences are common in medical imaging data, posing a challenge for deep learning models reliant on complete input data. In this work, inspired by MultiMAE [2], we develop a masked autoencoder (MAE) paradigm for multi-modal, multi-task learning in 3D medical imaging with brain MRIs. Our method treats each MRI sequence as a separate input modality, leveraging a late-fusion-style transformer encoder to integrate multi-sequence information (multi-modal) and individual decoder streams per modality for multi-task reconstruction. This pretraining strategy guides the model to learn rich representations per modality while also equipping it to handle missing inputs through cross-sequence reasoning. The result is a flexible and generalizable encoder for brain MRIs that infers missing sequences from available inputs and can be adapted to various downstream applications. We demonstrate the performance and robustness of our method against an MAE-ViT baseline in downstream segmentation and classification tasks, showing absolute improvements of 10.1 points in overall Dice score and 0.46 in MCC over the baselines with missing input sequences. Our experiments demonstrate the strength of this pretraining strategy. The implementation is made available.
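For reference, the MCC figure quoted above is the Matthews correlation coefficient, a balanced classification metric that remains informative under class imbalance. A minimal re-implementation for binary labels (illustrative only, not the authors' evaluation code):

```python
def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels.

    Ranges from -1 (total disagreement) through 0 (chance) to +1
    (perfect prediction); returns 0.0 on a degenerate confusion matrix.
    """
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

An absolute gain of 0.46 on this scale is therefore a large shift, roughly half the full range from chance to perfect.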

Disentanglement of Biological and Technical Factors via Latent Space Rotation in Clinical Imaging Improves Disease Pattern Discovery

Jeanny Pan, Philipp Seeböck, Christoph Fürböck, Svitlana Pochepnia, Jennifer Straub, Lucian Beer, Helmut Prosch, Georg Langs

arxiv preprint · Sep 14 2025
Identifying new disease-related patterns in medical imaging data with the help of machine learning enlarges the vocabulary of recognizable findings. This supports diagnostic and prognostic assessment. However, image appearance varies not only due to biological differences, but also due to imaging technology linked to vendors, scanning, or reconstruction parameters. The resulting domain shifts impede data representation learning strategies and the discovery of biologically meaningful clusters. To address these challenges, we introduce an approach to actively learn the domain shift via post-hoc rotation of the data latent space, enabling disentanglement of biological and technical factors. Results on real-world heterogeneous clinical data showcase that the learned disentangled representation leads to stable clusters representing tissue types across different acquisition settings. Cluster consistency is improved by +19.01% (ARI), +16.85% (NMI), and +12.39% (Dice) compared to the entangled representation, outperforming four state-of-the-art harmonization methods. When using the clusters to quantify tissue composition in idiopathic pulmonary fibrosis patients, the learned profiles enhance Cox survival prediction. This indicates that the proposed label-free framework facilitates biomarker discovery in multi-center routine imaging data. Code is available on GitHub https://github.com/cirmuw/latent-space-rotation-disentanglement.
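One very simple way to realize a post-hoc removal of a technical factor from a latent space, offered here only as an intuition pump and not as the authors' learned rotation, is to project latent vectors onto the subspace orthogonal to the axis separating two acquisition domains:

```python
import numpy as np

def remove_domain_direction(z, domain):
    """Project latents onto the subspace orthogonal to the axis that
    separates two acquisition domains (difference of domain means).

    z: (n_samples, dim) latent vectors; domain: binary array of length n.
    A deliberately simplified stand-in for a learned latent-space rotation.
    """
    domain = np.asarray(domain)
    d = z[domain == 1].mean(axis=0) - z[domain == 0].mean(axis=0)
    d = d / np.linalg.norm(d)          # unit "technical factor" axis
    return z - np.outer(z @ d, d)      # orthogonal projection
```

After the projection, the two domains have identical means along the removed axis, while variation orthogonal to it (ideally the biological signal) is preserved; a learned rotation generalizes this idea beyond a single linear axis.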

Open-Source AI for Vastus Lateralis and Adipose Tissue Segmentation to Assess Muscle Size and Quality.

White MS, Horikawa-Strakovsky A, Mayer KP, Noehren BW, Wen Y

pubmed · Sep 13 2025
Ultrasound imaging is a clinically feasible method for assessing muscle size and quality, but manual processing is time-consuming and difficult to scale. Existing artificial intelligence (AI) models measure muscle cross-sectional area, but they do not include assessments of muscle quality or account for the influence of subcutaneous adipose tissue thickness on echo intensity measurements. We developed an open-source AI model to accurately segment the vastus lateralis and subcutaneous adipose tissue in B-mode images for automating measurements of muscle size and quality. The model was trained on 612 ultrasound images from 44 participants who had anterior cruciate ligament reconstruction. Model generalizability was evaluated on a test set of 50 images from 14 unique participants. A U-Net architecture with ResNet50 backbone was used for segmentation. Performance was assessed using the Dice coefficient and Intersection over Union (IoU). Agreement between model predictions and manual measurements was evaluated using intraclass correlation coefficients (ICCs), R² values and standard errors of measurement (SEM). Dice coefficients were 0.9095 and 0.9654 for subcutaneous adipose tissue and vastus lateralis segmentation, respectively. Excellent agreement was observed between model predictions and manual measurements for cross-sectional area (ICC = 0.986), echo intensity (ICC = 0.991) and subcutaneous adipose tissue thickness (ICC = 0.996). The model demonstrated high reliability with low SEM values for clinical measurements (cross-sectional area: 1.15 cm², echo intensity: 1.28-1.78 a.u.). We developed an open-source AI model that accurately segments the vastus lateralis and subcutaneous adipose tissue in B-mode ultrasound images, enabling automated measurements of muscle size and quality.
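The two segmentation metrics reported above, Dice coefficient and Intersection over Union, can be computed for binary masks as follows (a generic re-implementation for illustration, not the authors' evaluation code):

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and IoU for binary segmentation masks.

    Both return 1.0 by convention when pred and target are both empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    dice = 2 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so Dice values of ~0.91 and ~0.97 correspond to IoU of roughly 0.83 and 0.93.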

Building a General SimCLR Self-Supervised Foundation Model Across Neurological Diseases to Advance 3D Brain MRI Diagnoses

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arxiv preprint · Sep 12 2025
3D structural Magnetic Resonance Imaging (MRI) brain scans are commonly acquired in clinical settings to monitor a wide range of neurological conditions, including neurodegenerative disorders and stroke. While deep learning models have shown promising results analyzing 3D MRI across a number of brain imaging tasks, most are highly tailored for specific tasks with limited labeled data, and are not able to generalize across tasks and/or populations. The development of self-supervised learning (SSL) has enabled the creation of large medical foundation models that leverage diverse, unlabeled datasets ranging from healthy to diseased data, showing significant success in 2D medical imaging applications. However, even the very few foundation models for 3D brain MRI that have been developed remain limited in resolution, scope, or accessibility. In this work, we present a general, high-resolution SimCLR-based SSL foundation model for 3D brain structural MRI, pre-trained on 18,759 patients (44,958 scans) from 11 publicly available datasets spanning diverse neurological diseases. We compare our model to Masked Autoencoders (MAE), as well as two supervised baselines, on four diverse downstream prediction tasks in both in-distribution and out-of-distribution settings. Our fine-tuned SimCLR model outperforms all other models across all tasks. Notably, our model still achieves superior performance when fine-tuned using only 20% of labeled training samples for predicting Alzheimer's disease. We use publicly available code and data, and release our trained model at https://github.com/emilykaczmarek/3D-Neuro-SimCLR, contributing a broadly applicable and accessible foundation model for clinical brain MRI analysis.
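SimCLR pre-training rests on the NT-Xent (normalized temperature-scaled cross-entropy) loss: two augmented views of the same scan form a positive pair, and all other samples in the batch serve as negatives. A minimal numpy sketch of the loss (illustrative; the temperature value is an assumption, and the actual model operates on 3D encoder outputs):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2: (n, dim) embeddings of two augmented views of the same n samples.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive for sample i is its other view at index (i + n) mod 2n.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

Minimizing this pulls the two views of each scan together while pushing apart embeddings of different scans, which is what yields transferable representations without labels.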

MultiASNet: Multimodal Label Noise Robust Framework for the Classification of Aortic Stenosis in Echocardiography.

Wu V, Fung A, Khodabakhshian B, Abdelsamad B, Vaseli H, Ahmadi N, Goco JAD, Tsang MY, Luong C, Abolmaesumi P, Tsang TSM

pubmed · Sep 12 2025
Aortic stenosis (AS), a prevalent and serious heart valve disorder, requires early detection but remains difficult to diagnose in routine practice. Although echocardiography with Doppler imaging is the clinical standard, these assessments are typically limited to trained specialists. Point-of-care ultrasound (POCUS) offers an accessible alternative for AS screening but is restricted to basic 2D B-mode imaging, often lacking the analysis Doppler provides. Our project introduces MultiASNet, a multimodal machine learning framework designed to enhance AS screening with POCUS by combining 2D B-mode videos with structured data from echocardiography reports, including Doppler parameters. Using contrastive learning, MultiASNet aligns video features with report features in tabular form from the same patient to improve interpretive quality. To address misalignment where a single report corresponds to multiple video views, some irrelevant to AS diagnosis, we use cross-attention in a transformer-based video and tabular network to assign less importance to irrelevant report data. The model integrates structured data only during training, enabling independent use with B-mode videos during inference for broader accessibility. MultiASNet also incorporates sample selection to counteract label noise from observer variability, yielding improved accuracy on two datasets. We achieved balanced accuracy scores of 93.0% on a private dataset and 83.9% on the public TMED-2 dataset for AS detection. For severity classification, balanced accuracy scores were 80.4% and 59.4% on the private and public datasets, respectively. This model facilitates reliable AS screening in non-specialist settings, bridging the gap left by Doppler data while reducing noise-related errors. Our code is publicly available at github.com/DeepRCL/MultiASNet.
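The cross-attention mechanism used to down-weight irrelevant report data can be sketched in its single-head scaled-dot-product form, with video features as queries and tabular report features as keys/values. This is the textbook operation only; the actual MultiASNet architecture (multi-head, learned projections) is not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(video_q, report_kv):
    """Single-head cross-attention: video queries attend over report rows.

    video_q:   (n_frames, dim) query features from the video branch
    report_kv: (n_fields, dim) key/value features from the tabular branch
    Returns the attended features and the attention weights.
    """
    d_k = video_q.shape[-1]
    scores = video_q @ report_kv.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # rows sum to 1
    return weights @ report_kv, weights
```

Report fields that do not match the current view receive low attention weight, which is the mechanism the abstract describes for tolerating view/report misalignment.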

SSL-AD: Spatiotemporal Self-Supervised Learning for Generalizability and Adaptability Across Alzheimer's Prediction Tasks and Datasets

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arxiv preprint · Sep 12 2025
Alzheimer's disease is a progressive, neurodegenerative disorder that causes memory loss and cognitive decline. While there has been extensive research in applying deep learning models to Alzheimer's prediction tasks, these models remain limited by lack of available labeled data, poor generalization across datasets, and inflexibility to varying numbers of input scans and time intervals between scans. In this study, we adapt three state-of-the-art temporal self-supervised learning (SSL) approaches for 3D brain MRI analysis, and add novel extensions designed to handle variable-length inputs and learn robust spatial features. We aggregate four publicly available datasets comprising 3,161 patients for pre-training, and show the performance of our model across multiple Alzheimer's prediction tasks including diagnosis classification, conversion detection, and future conversion prediction. Importantly, our SSL model implemented with temporal order prediction and contrastive learning outperforms supervised learning on six out of seven downstream tasks. It demonstrates adaptability and generalizability across tasks and number of input images with varying time intervals, highlighting its capacity for robust performance across clinical applications. We release our code and model publicly at https://github.com/emilykaczmarek/SSL-AD.
