
MultiMAE for Brain MRIs: Robustness to Missing Inputs Using Multi-Modal Masked Autoencoder

Ayhan Can Erdur, Christian Beischl, Daniel Scholz, Jiazhen Pan, Benedikt Wiestler, Daniel Rueckert, Jan C Peeken

arXiv preprint · Sep 14, 2025
Missing input sequences are common in medical imaging data, posing a challenge for deep learning models that rely on complete inputs. In this work, inspired by MultiMAE [2], we develop a masked autoencoder (MAE) paradigm for multi-modal, multi-task learning in 3D medical imaging with brain MRIs. Our method treats each MRI sequence as a separate input modality, leveraging a late-fusion-style transformer encoder to integrate multi-sequence information (multi-modal) and individual decoder streams per modality for multi-task reconstruction. This pretraining strategy guides the model to learn rich representations per modality while also equipping it to handle missing inputs through cross-sequence reasoning. The result is a flexible and generalizable encoder for brain MRIs that infers missing sequences from available inputs and can be adapted to various downstream applications. We demonstrate the performance and robustness of our method against an MAE-ViT baseline in downstream segmentation and classification tasks, showing absolute improvements of 10.1 in overall Dice score and 0.46 in MCC over the baseline with missing input sequences. Our experiments demonstrate the strength of this pretraining strategy. The implementation is made available.
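The core pretraining idea in the abstract, masking each MRI sequence independently and feeding the visible tokens from all sequences to one shared encoder, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation; the 75% mask ratio, token shapes, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_modality(tokens, mask_ratio, rng):
    """Randomly hide a fraction of one sequence's patch tokens;
    return the visible tokens and their indices."""
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    keep = rng.permutation(n)[:n_keep]
    return tokens[keep], keep

def multimae_forward(modalities, mask_ratio=0.75, rng=rng):
    """Mask each MRI sequence separately, then concatenate all visible
    tokens into one sequence for a shared (late-fusion-style) encoder.
    Per-modality decoders would reconstruct the hidden tokens from this."""
    visible, kept = [], {}
    for name, tokens in modalities.items():
        vis, idx = mask_modality(tokens, mask_ratio, rng)
        visible.append(vis)
        kept[name] = idx
    fused = np.concatenate(visible, axis=0)  # input to the shared encoder
    return fused, kept

# Two MRI "sequences" as (num_patches, embed_dim) token arrays.
mods = {"T1": rng.normal(size=(64, 32)), "FLAIR": rng.normal(size=(64, 32))}
fused, kept = multimae_forward(mods)
print(fused.shape)  # (32, 32): 16 visible tokens per sequence
```

Because every modality is masked on its own, a sequence missing at inference time simply contributes zero visible tokens, which is what makes the encoder robust to absent inputs.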

Multimodal Machine Learning for Diagnosis of Multiple Sclerosis Using Optical Coherence Tomography in Pediatric Cases

Chen, C., Soltanieh, S., Rajapaksa, S., Khalvati, F., Yeh, E. A.

medRxiv preprint · Sep 14, 2025
Background and Objectives: Identifying MS in children early and distinguishing it from other neuroinflammatory conditions of childhood is critical, as early therapeutic intervention can improve outcomes. The anterior visual pathway has been demonstrated to be of central importance in diagnostic considerations for MS and has recently been identified as a fifth topography in the McDonald Diagnostic Criteria for MS. Optical coherence tomography (OCT) provides high-resolution retinal imaging and reflects the structural integrity of the retinal nerve fiber and ganglion cell inner plexiform layers. Whether multimodal deep learning models can use OCT alone to diagnose pediatric MS (POMS) is unknown. Methods: We analyzed 3D OCT scans collected prospectively through the Neuroinflammatory Registry of the Hospital for Sick Children (REB#1000005356). Raw macular and optic nerve head images, and 52 automatically segmented features, were included. We evaluated three classification approaches: (1) deep learning models (e.g., ResNet, DenseNet) for representation learning followed by classical ML classifiers, (2) ML models trained on OCT-derived features, and (3) multimodal models combining both via early and late fusion. Results: Scans from individuals with POMS (onset 16.0 ± 3.1 years, 51.0% female; 211 scans) and 29 children with non-inflammatory neurological conditions (13.1 ± 4.0 years, 69.0% female; 52 scans) were included. The early fusion model achieved the highest performance (AUC: 0.87, F1: 0.87, accuracy: 90%), outperforming both unimodal and late fusion models. The best unimodal feature-based model (SVC) yielded an AUC of 0.84, an F1 of 0.85, and an accuracy of 85%, while the best image-based model (ResNet101 with Random Forest) achieved an AUC of 0.87, an F1 of 0.79, and an accuracy of 84%. Late fusion underperformed, reaching 82% accuracy but failing on the minority class.
Discussion: Multimodal learning with early fusion significantly enhances diagnostic performance by combining spatial retinal information with clinically relevant structural features. This approach captures complementary patterns associated with MS pathology and shows promise as an AI-driven tool to support pediatric neuroinflammatory diagnosis.
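The early-vs-late fusion distinction the study turns on can be sketched in a few lines. A minimal illustration under stated assumptions: the 512-dim image embedding and equal late-fusion weighting are invented for the example; the 52 segmented OCT features come from the abstract.

```python
import numpy as np

def early_fusion(img_embedding, oct_features):
    """Early fusion: concatenate the image representation with the
    segmented OCT features and train ONE classifier on the joint vector."""
    return np.concatenate([img_embedding, oct_features])

def late_fusion(p_img, p_feat, w=0.5):
    """Late fusion: train separate per-modality classifiers and
    combine their predicted class probabilities afterwards."""
    return w * np.asarray(p_img) + (1 - w) * np.asarray(p_feat)

# 512-dim image embedding (assumed) + 52 OCT-derived features (per the study).
joint = early_fusion(np.zeros(512), np.zeros(52))
print(joint.shape)                            # (564,)
print(late_fusion([0.2, 0.8], [0.6, 0.4]))    # averaged class probabilities
```

Early fusion lets the single classifier learn cross-modality interactions directly, which is one plausible reason it outperformed late fusion here, where each classifier sees only its own modality.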

Enhancement Without Contrast: Stability-Aware Multicenter Machine Learning for Glioma MRI Imaging

Sajad Amiri, Shahram Taeb, Sara Gharibi, Setareh Dehghanfard, Somayeh Sadat Mehrnia, Mehrdad Oveisi, Ilker Hacihaliloglu, Arman Rahmim, Mohammad R. Salmanpour

arXiv preprint · Sep 13, 2025
Gadolinium-based contrast agents (GBCAs) are central to glioma imaging but raise safety, cost, and accessibility concerns. Predicting contrast enhancement from non-contrast MRI using machine learning (ML) offers a safer alternative, as enhancement reflects tumor aggressiveness and informs treatment planning. Yet scanner and cohort variability hinder robust model selection. We propose a stability-aware framework to identify reproducible ML pipelines for multicenter prediction of glioma MRI contrast enhancement. We analyzed 1,446 glioma cases from four TCIA datasets (UCSF-PDGM, UPENN-GB, BRATS-Africa, BRATS-TCGA-LGG). Non-contrast T1WI served as input, with enhancement labels derived from paired post-contrast T1WI. Using PyRadiomics under IBSI standards, 108 features were extracted and combined with 48 dimensionality reduction methods and 25 classifiers, yielding 1,200 pipelines. Rotational validation trained each pipeline on three datasets and tested it on the fourth. Cross-validation prediction accuracies ranged from 0.91 to 0.96, with external testing achieving 0.87 (UCSF-PDGM), 0.98 (UPENN-GB), and 0.95 (BRATS-Africa), for an average of 0.93. F1, precision, and recall were stable (0.87 to 0.96), while ROC-AUC varied more widely (0.50 to 0.82), reflecting cohort heterogeneity. The pipeline combining MI with ETr consistently ranked highest, balancing accuracy and stability. This framework demonstrates that stability-aware model selection enables reliable prediction of contrast enhancement from non-contrast glioma MRI, reducing reliance on GBCAs and improving generalizability across centers. It provides a scalable template for reproducible ML in neuro-oncology and beyond.
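The rotational (leave-one-dataset-out) validation and the idea of ranking pipelines by stability can be sketched as follows. This is an illustrative reading, not the paper's code; the mean-minus-spread score is an assumed stand-in for whatever stability criterion the authors use.

```python
import statistics

def rotational_splits(dataset_names):
    """Leave-one-dataset-out rotation: train on all centers but one,
    test on the held-out center, and rotate through every center."""
    for held_out in dataset_names:
        train = [n for n in dataset_names if n != held_out]
        yield train, held_out

def stability_score(fold_accuracies, penalty=1.0):
    """Rank pipelines by mean external accuracy minus a spread penalty,
    so a pipeline must be both accurate and consistent across centers."""
    return statistics.mean(fold_accuracies) - penalty * statistics.stdev(fold_accuracies)

datasets = ["UCSF-PDGM", "UPENN-GB", "BRATS-Africa", "BRATS-TCGA-LGG"]
folds = list(rotational_splits(datasets))
print(len(folds))  # 4 rotations, one per held-out center
```

Under this kind of score, a pipeline scoring 0.93 at every center outranks one that scores 0.99 at three centers but collapses at the fourth, which is the "stability-aware" behavior the abstract describes.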

ChatGPT-4 shows high agreement with board-certified neuroradiologists in MRI protocol selection.

Bendella Z, Wichtmann BD, Clauberg R, Keil VC, Lehnen NC, Haase R, Sáez LC, Wiest IC, Kather JN, Endler C, Radbruch A, Paech D, Deike K

PubMed · Sep 13, 2025
The aim of this study was to determine whether ChatGPT-4 can correctly suggest MRI protocols and additional MRI sequences based on real-world Radiology Request Forms (RRFs), as well as to investigate its ability to suggest time-saving protocols. Retrospectively, 1,001 RRFs from our Department of Neuroradiology (in-house dataset), 200 RRFs from an independent Department of General Radiology (independent dataset), and 300 RRFs from an external, foreign Department of Neuroradiology (external dataset) were included. Patients' age, sex, and clinical information were extracted from the RRFs and used to prompt ChatGPT-4 to choose an adequate MRI protocol from predefined institutional lists. Four independent raters then assessed its performance. Additionally, ChatGPT-4 was tasked with creating case-specific protocols aimed at saving time. Two and seven of 1,001 protocol suggestions were rated "unacceptable" in the in-house dataset by readers 1 and 2, respectively. No protocol suggestions were rated "unacceptable" in the independent or external datasets. Cohen's weighted κ for inter-reader agreement ranged from 0.88 to 0.98 (each p < 0.001). ChatGPT-4's freely composed protocols were approved in 766/1,001 (76.5%) and 140/300 (46.7%) cases of the in-house and external datasets, with mean (standard deviation) time savings of 3:51 (±2:40) and 2:59 (±3:42) minutes:seconds per adopted in-house and external MRI protocol, respectively. ChatGPT-4 demonstrated very high agreement with board-certified (neuro-)radiologists in selecting MRI protocols and was able to suggest approved time-saving protocols from the set of available sequences.
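The inter-reader agreement statistic quoted above, Cohen's weighted κ, can be computed from paired ordinal ratings as below. A self-contained sketch; the 3-level acceptability scale and toy ratings are invented for illustration.

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="quadratic"):
    """Cohen's weighted kappa for two raters over ordinal categories 0..n_cat-1.
    Disagreements are penalized by their distance on the scale."""
    O = np.zeros((n_cat, n_cat))
    for i, j in zip(r1, r2):          # observed joint rating frequencies
        O[i, j] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # expected by chance
    idx = np.arange(n_cat)
    W = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights == "quadratic":
        W = W ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Toy ratings on an assumed 3-level scale (0 = unacceptable .. 2 = optimal).
reader1 = [2, 2, 1, 2, 0, 2, 1]
reader2 = [2, 2, 1, 1, 0, 2, 1]
print(round(weighted_kappa(reader1, reader2, 3), 2))  # 0.86
```

One near-miss disagreement out of seven cases already lands κ in the high-agreement range the study reports, which is why values of 0.88 to 0.98 indicate near-identical reader judgments.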

Building a General SimCLR Self-Supervised Foundation Model Across Neurological Diseases to Advance 3D Brain MRI Diagnoses

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arXiv preprint · Sep 12, 2025
3D structural Magnetic Resonance Imaging (MRI) brain scans are commonly acquired in clinical settings to monitor a wide range of neurological conditions, including neurodegenerative disorders and stroke. While deep learning models have shown promising results in analyzing 3D MRI across a number of brain imaging tasks, most are highly tailored for specific tasks with limited labeled data, and are not able to generalize across tasks and/or populations. The development of self-supervised learning (SSL) has enabled the creation of large medical foundation models that leverage diverse, unlabeled datasets ranging from healthy to diseased data, showing significant success in 2D medical imaging applications. However, the few foundation models that have been developed for 3D brain MRI remain limited in resolution, scope, or accessibility. In this work, we present a general, high-resolution SimCLR-based SSL foundation model for 3D brain structural MRI, pre-trained on 18,759 patients (44,958 scans) from 11 publicly available datasets spanning diverse neurological diseases. We compare our model to Masked Autoencoders (MAE), as well as two supervised baselines, on four diverse downstream prediction tasks in both in-distribution and out-of-distribution settings. Our fine-tuned SimCLR model outperforms all other models across all tasks. Notably, our model still achieves superior performance when fine-tuned using only 20% of labeled training samples for predicting Alzheimer's disease. We use publicly available code and data, and release our trained model at https://github.com/emilykaczmarek/3D-Neuro-SimCLR, contributing a broadly applicable and accessible foundation model for clinical brain MRI analysis.
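SimCLR pre-training as used here rests on the NT-Xent contrastive loss: two augmented views of the same scan are pulled together while all other scans in the batch act as negatives. A minimal NumPy sketch of that loss (batch size, projection dimension, and temperature are illustrative, not the paper's settings):

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """SimCLR's NT-Xent loss. Rows 2i and 2i+1 of z are the two augmented
    views of sample i; every other row in the batch is a negative."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)        # never contrast a view with itself
    n = z.shape[0]
    pos = np.arange(n) ^ 1                # each row's positive partner (2i <-> 2i+1)
    m = sim.max(axis=1, keepdims=True)    # stable log-sum-exp over each row
    logsumexp = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(n), pos]))

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))          # 4 samples x 2 views, 16-dim projections
loss = nt_xent(views)
print(loss > 0)                           # cross-entropy term is always positive
```

Minimizing this loss drives the encoder to produce scan representations that are invariant to the augmentations, which is what makes the frozen or fine-tuned features transfer across downstream tasks.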

SSL-AD: Spatiotemporal Self-Supervised Learning for Generalizability and Adaptability Across Alzheimer's Prediction Tasks and Datasets

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arXiv preprint · Sep 12, 2025
Alzheimer's disease is a progressive, neurodegenerative disorder that causes memory loss and cognitive decline. While there has been extensive research in applying deep learning models to Alzheimer's prediction tasks, these models remain limited by a lack of labeled data, poor generalization across datasets, and inflexibility to varying numbers of input scans and time intervals between scans. In this study, we adapt three state-of-the-art temporal self-supervised learning (SSL) approaches for 3D brain MRI analysis, and add novel extensions designed to handle variable-length inputs and learn robust spatial features. We aggregate four publicly available datasets comprising 3,161 patients for pre-training, and show the performance of our model across multiple Alzheimer's prediction tasks including diagnosis classification, conversion detection, and future conversion prediction. Importantly, our SSL model implemented with temporal order prediction and contrastive learning outperforms supervised learning on six out of seven downstream tasks. It demonstrates adaptability and generalizability across tasks and numbers of input images with varying time intervals, highlighting its capacity for robust performance across clinical applications. We release our code and model publicly at https://github.com/emilykaczmarek/SSL-AD.
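The temporal order prediction pretext task mentioned above can be sketched as a simple sample generator: shuffle a patient's visit sequence half the time and ask the model whether the order is chronological. A minimal illustration under assumptions (the 50/50 shuffle probability and label convention are invented; the real pipeline operates on scan embeddings, not date strings):

```python
import random

def order_prediction_sample(scans, rng):
    """Temporal-order pretext task: with probability 0.5 shuffle the visits;
    the model must predict whether the sequence is chronological (label 1).
    Any number of scans works, so variable-length studies pose no problem."""
    scans = list(scans)
    label = 1
    if len(scans) > 1 and rng.random() < 0.5:
        shuffled = scans[:]
        while shuffled == scans:      # make sure the order really changed
            rng.shuffle(shuffled)
        scans, label = shuffled, 0
    return scans, label

rng = random.Random(0)
visits = ["2019-01", "2020-03", "2021-06"]
seq, label = order_prediction_sample(visits, rng)
print((seq == visits) == (label == 1))  # label always matches the ordering
```

Because the task is defined on whatever scans a patient happens to have, it sidesteps the fixed-length assumption that limits many longitudinal models, which is the adaptability the abstract emphasizes.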

Three-Dimensional Radiomics and Machine Learning for Predicting Postoperative Outcomes in Laminoplasty for Cervical Spondylotic Myelopathy: A Clinical-Radiomics Model.

Zheng B, Zhu Z, Ma K, Liang Y, Liu H

PubMed · Sep 12, 2025
This study explores a method based on three-dimensional cervical spinal cord reconstruction, radiomics feature extraction, and machine learning to build a postoperative prognosis prediction model for patients with cervical spondylotic myelopathy (CSM), and evaluates the predictive performance of different cervical spinal cord segmentation strategies and machine learning algorithms. A retrospective analysis was conducted on 126 CSM patients who underwent posterior single-door laminoplasty from January 2017 to December 2022. Three cervical spinal cord segmentation strategies (narrowest segment, surgical segment, and entire cervical cord C1-C7) were applied to preoperative MRI images for radiomics feature extraction. Good clinical prognosis was defined as a postoperative JOA recovery rate ≥ 50%. By comparing the performance of eight machine learning algorithms, the optimal segmentation strategy and classifier were selected. Clinical features (smoking history, diabetes, preoperative JOA score, and cSVA) were then combined with radiomics features to construct a clinical-radiomics prediction model. Among the three segmentation strategies, the SVM model based on the narrowest segment performed best (AUC = 0.885). Among clinical features, smoking history, diabetes, preoperative JOA score, and cSVA were important indicators for prognosis prediction. When clinical features were combined with radiomics features, the fusion model achieved excellent performance on the test set (accuracy = 0.895, AUC = 0.967), significantly outperforming either the clinical or the radiomics model alone. This study validates the feasibility and superiority of three-dimensional radiomics combined with machine learning in predicting postoperative prognosis for CSM. The combination of radiomics features from the narrowest segment and clinical features yields a highly accurate prognosis prediction model, providing new insights for clinical assessment and individualized treatment decisions. Future studies should validate the stability and generalizability of this model in multi-center, large-sample cohorts.
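The outcome label above is the JOA recovery rate. A common way to compute it is the Hirabayashi method; the sketch below assumes the classic 17-point JOA scale, which the abstract does not state explicitly.

```python
def joa_recovery_rate(pre_op, post_op, full_score=17.0):
    """Hirabayashi recovery rate: postoperative gain as a percentage of the
    maximum possible gain. full_score=17 assumes the classic JOA scale."""
    return 100.0 * (post_op - pre_op) / (full_score - pre_op)

def good_prognosis(pre_op, post_op, threshold=50.0):
    """Label used in the study: recovery rate >= 50% counts as good."""
    return joa_recovery_rate(pre_op, post_op) >= threshold

print(round(joa_recovery_rate(10, 14), 1))  # 57.1
print(good_prognosis(10, 14))               # True
```

Normalizing by the achievable gain (17 minus the preoperative score) keeps the label fair across patients with different baseline severity, which is why a fixed-point-gain threshold is not used instead.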

Updates in Cerebrovascular Imaging.

Ali H, Abu Qdais A, Chatterjee A, Abdalkader M, Raz E, Nguyen TN, Al Kasab S

PubMed · Sep 12, 2025
Cerebrovascular imaging has undergone significant advances, enhancing the diagnosis and management of cerebrovascular diseases such as stroke, aneurysms, and arteriovenous malformations. This chapter explores key imaging modalities, including non-contrast computed tomography, computed tomography angiography, magnetic resonance imaging (MRI), and digital subtraction angiography. Innovations such as high-resolution vessel wall imaging, artificial intelligence (AI)-driven stroke detection, and advanced perfusion imaging have improved diagnostic accuracy and treatment selection. Additionally, novel techniques like 7-T MRI, molecular imaging, and functional ultrasound provide deeper insights into vascular pathology. AI and machine learning applications are revolutionizing automated detection and prognostication, expediting treatment decisions. Challenges remain in standardization, radiation exposure, and accessibility. However, continued technological advances, multimodal imaging integration, and AI-driven automation promise a future of precise, non-invasive cerebrovascular diagnostics, ultimately improving patient outcomes.

The best diagnostic approach for classifying ischemic stroke onset time: A systematic review and meta-analysis.

Zakariaee SS, Kadir DH, Molazadeh M, Abdi S

PubMed · Sep 12, 2025
The success of intravenous thrombolysis with tPA (IV-tPA), the fastest and easiest treatment for stroke patients, is closely related to time since stroke onset (TSS). Administering IV-tPA after the recommended time window (< 4.5 h) increases the risk of cerebral hemorrhage. Although advances in diagnostic approaches have been made, determining TSS remains a clinical challenge. In this study, the performance of different diagnostic approaches for classifying TSS was investigated. A systematic literature search was conducted in the Web of Science, PubMed, Scopus, Embase, and Cochrane databases through July 2025. The overall AUC, sensitivity, and specificity with their 95% CIs were determined for each diagnostic approach to evaluate classification performance. The review included a total of 9,030 stroke patients across studies published through July 2025. The results showed that human reading of the DWI-FLAIR mismatch, the current gold-standard method, has moderate performance in identifying TSS, with AUC = 0.71 (95% CI: 0.66-0.76), sensitivity = 0.62 (95% CI: 0.54-0.71), and specificity = 0.78 (95% CI: 0.72-0.84). An ML model fed with radiomic features from CT data had the best performance among the models reviewed, with AUC = 0.89 (95% CI: 0.80-0.98), sensitivity = 0.85 (95% CI: 0.75-0.96), and specificity = 0.86 (95% CI: 0.73-1.00). ML models fed with radiomic features classify TSS better than human reading of the DWI-FLAIR mismatch. An efficient AI model fed with CT radiomic data could yield the best classification performance for determining patients' eligibility for IV-tPA treatment and improve treatment outcomes.
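The per-approach metrics pooled in this meta-analysis come from 2x2 confusion tables. A minimal sketch of the underlying arithmetic (the counts are invented to echo the quoted ML-on-CT performance; real meta-analyses pool studies with bivariate random-effects models rather than a single table):

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table where
    TSS < 4.5 h is the positive class and TSS >= 4.5 h the negative."""
    return tp / (tp + fn), tn / (tn + fp)

def wald_ci_95(p, n):
    """Normal-approximation 95% CI for a proportion (adequate for large n)."""
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

sens, spec = sens_spec(tp=85, fn=15, tn=86, fp=14)
print(round(sens, 2), round(spec, 2))  # 0.85 0.86
```

For IV-tPA triage, sensitivity (catching patients still inside the window) and specificity (not treating patients outside it, who face the hemorrhage risk) pull in opposite directions, which is why both are reported alongside AUC.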

Deep learning-powered temperature prediction for optimizing transcranial MR-guided focused ultrasound treatment.

Xiong Y, Yang M, Arkin M, Li Y, Duan C, Bian X, Lu H, Zhang L, Wang S, Ren X, Li X, Zhang M, Zhou X, Pan L, Lou X

PubMed · Sep 12, 2025
Precise temperature control is challenging during transcranial MR-guided focused ultrasound (MRgFUS) treatment. The aim of this study was to develop a deep learning model integrating the treatment parameters for each sonication, along with patient-specific clinical information and skull metrics, for prediction of the MRgFUS therapeutic temperature. This is a retrospective analysis of sonications from patients with essential tremor or Parkinson's disease who underwent unilateral MRgFUS thalamotomy or pallidothalamic tractotomy at a single hospital from January 2019 to June 2023. For model training, a dataset of 600 sonications (72 patients) was used, while a validation dataset comprising 199 sonications (18 patients) was used to assess model performance. Additionally, an external dataset of 146 sonications (20 patients) was used for external validation. The developed deep learning model, called Fust-Net, achieved high predictive accuracy, with normalized mean absolute errors of 1.655°C for the internal dataset and 2.432°C for the external dataset, which closely matched the actual temperature. The graded evaluation showed that Fust-Net achieved an effective temperature prediction rate of 82.6%. These results showcase the exciting potential of Fust-Net for achieving precise temperature control during MRgFUS treatment, opening new doors for enhanced precision and safety in clinical applications.
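The two evaluation figures quoted, the mean absolute temperature error and the "effective temperature prediction rate", can be sketched as below. The abstract reports a normalized MAE and does not state the tolerance behind the effective rate, so the plain MAE and the 2 °C tolerance here are illustrative assumptions.

```python
def mean_abs_error(pred, true):
    """Mean absolute error between predicted and measured peak temperatures (deg C)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def effective_rate(pred, true, tol_c=2.0):
    """Fraction of sonications predicted within +/- tol_c deg C of the measured
    temperature; the 2 deg C tolerance is an assumed example, not the paper's."""
    hits = sum(abs(p - t) <= tol_c for p, t in zip(pred, true))
    return hits / len(pred)

pred = [54.2, 56.9, 58.1, 60.4]  # model predictions for four sonications
true = [55.0, 57.5, 57.0, 60.0]  # measured MR-thermometry temperatures
print(round(mean_abs_error(pred, true), 3))  # 0.725
print(effective_rate(pred, true))            # 1.0
```

A graded metric like the effective rate is arguably more clinically meaningful than MAE alone, since lesioning depends on whether each sonication reaches (and does not overshoot) its target temperature.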
