Page 19 of 58579 results

Exploring factors driving the evolution of chronic lesions in multiple sclerosis using machine learning.

Hu H, Ye L, Wu P, Shi Z, Chen G, Li Y

pubmed logopapers · Jun 17 2025
The study aimed to identify factors influencing the evolution of chronic lesions in multiple sclerosis (MS) using a machine learning approach. Longitudinal data were collected from individuals with relapsing-remitting multiple sclerosis (RRMS). The "iron rim" sign was identified using quantitative susceptibility mapping (QSM), and microstructural damage was quantified via T1/fluid-attenuated inversion recovery (FLAIR) ratios. Additional data included baseline lesion volume, cerebral T2-hyperintense lesion volume, iron rim lesion volume, the proportion of iron rim lesion volume, gender, age, disease duration (DD), disability and cognitive scores, use of disease-modifying therapy, and follow-up intervals. These features were integrated into machine learning models (logistic regression (LR), random forest (RF), and support vector machine (SVM)) to predict lesion volume change, and the most predictive model was selected for feature importance analysis. The study included 47 RRMS individuals (mean age, 30.6 ± 8.0 years [standard deviation]; 6 males) and 833 chronic lesions. The SVM model demonstrated superior predictive efficiency, with an AUC of 0.90 in the training set and 0.81 in the testing set. Feature importance analysis identified the top three features as the "iron rim" sign of lesions, DD, and the T1/FLAIR ratios of the lesions. This study developed a machine learning model to predict the volume outcome of MS lesions, identifying chronic inflammation around the lesion, DD, and microstructural damage as key factors influencing volume change in chronic MS lesions.
Question: The evolution of different chronic lesions in MS exhibits variability, and the driving factors influencing these outcomes remain to be investigated.
Findings: An SVM model was developed to predict chronic MS lesion volume changes, integrating lesion characteristics, lesion burden, and clinical data.
Clinical relevance: Chronic inflammation surrounding lesions, DD, and microstructural damage are key factors influencing the evolution of chronic MS lesions.
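As a rough illustration of the model-comparison step described above (training LR, RF, and SVM classifiers and ranking them by AUC), here is a minimal scikit-learn sketch on synthetic data. The feature count, sample sizes, and hyperparameters are placeholders, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for lesion-level features (iron rim sign, T1/FLAIR ratio, ...)
X, y = make_classification(n_samples=833, n_features=12, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
}

# Fit each model and score it by AUC on the held-out split
aucs = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    aucs[name] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# The most predictive model would then go on to feature importance analysis
best = max(aucs, key=aucs.get)
```

On real data, the winning model would next be probed with a feature importance method (e.g. permutation importance for an RBF SVM, whose coefficients are not directly interpretable).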

Application of Convolutional Neural Network Denoising to Improve Cone Beam CT Myelographic Images.

Madhavan AA, Zhou Z, Thorne J, Kodet ML, Cutsforth-Gregory JK, Schievink WI, Mark IT, Schueler BA, Yu L

pubmed logopapers · Jun 17 2025
Cone beam CT is an imaging modality that provides high-resolution, cross-sectional imaging in the fluoroscopy suite. In neuroradiology, cone beam CT has been used for various applications including temporal bone imaging and during spinal and cerebral angiography. Furthermore, cone beam CT has been shown to improve imaging of spinal CSF leaks during myelography. One drawback of cone beam CT is that images have a relatively high noise level. In this technical report, we describe the first application of a high-resolution convolutional neural network to denoise cone beam CT myelographic images. We show examples of the resulting improvement in image quality for a variety of types of spinal CSF leaks. Further application of this technique is warranted to demonstrate its clinical utility and potential use for other cone beam CT applications.

ABBREVIATIONS: CBCT = cone beam CT; CB-CTM = cone beam CT myelography; CTA = CT angiography; CVF = CSF-venous fistula; DSM = digital subtraction myelography; EID = energy integrating detector; FBP = filtered back-projection; SNR = signal-to-noise ratio.
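The report does not include implementation details, but the core idea of a learned convolutional denoiser can be sketched in miniature: below, a single 3x3 convolution kernel (a one-layer linear stand-in for the paper's high-resolution CNN) is trained by gradient descent to map synthetic noisy images toward their clean counterparts. All data, sizes, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32  # toy image size

def conv2d(img, kernel):
    # 'same' 2D correlation with zero padding (for a 3x3 kernel)
    p = kernel.shape[0] // 2
    padded = np.pad(img, p)
    out = np.zeros_like(img)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def make_pair():
    # Clean image: smoothed random field; noisy = clean + Gaussian noise
    clean = conv2d(rng.normal(size=(N, N)), np.ones((3, 3)) / 9.0)
    noisy = clean + 0.5 * rng.normal(size=(N, N))
    return noisy, clean

pairs = [make_pair() for _ in range(20)]

# Train the kernel by gradient descent on mean squared error vs. clean targets
kernel = rng.normal(scale=0.1, size=(3, 3))
lr = 0.5
for _ in range(300):
    grad = np.zeros_like(kernel)
    for noisy, clean in pairs:
        err = conv2d(noisy, kernel) - clean
        padded = np.pad(noisy, 1)
        for i in range(3):
            for j in range(3):
                grad[i, j] += np.mean(err * padded[i:i + N, j:j + N])
    kernel -= lr * grad / len(pairs)

# The learned filter should beat the raw noisy input on a held-out pair
noisy, clean = make_pair()
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((conv2d(noisy, kernel) - clean) ** 2)
```

A real CNN denoiser stacks many such kernels with nonlinearities, but the training objective (regress noisy inputs onto clean targets) is the same in spirit.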

NeuroMoE: A Transformer-Based Mixture-of-Experts Framework for Multi-Modal Neurological Disorder Classification

Wajih Hassan Raza, Aamir Bader Shah, Yu Wen, Yidan Shen, Juan Diego Martinez Lemus, Mya Caryn Schiess, Timothy Michael Ellmore, Renjie Hu, Xin Fu

arxiv logopreprint · Jun 17 2025
The integration of multi-modal Magnetic Resonance Imaging (MRI) and clinical data holds great promise for enhancing the diagnosis of neurological disorders (NDs) in real-world clinical settings. Deep Learning (DL) has recently emerged as a powerful tool for extracting meaningful patterns from medical data to aid in diagnosis. However, existing DL approaches struggle to effectively leverage multi-modal MRI and clinical data, leading to suboptimal performance. To address this challenge, we utilize a unique, proprietary multi-modal clinical dataset curated for ND research. Based on this dataset, we propose a novel transformer-based Mixture-of-Experts (MoE) framework for ND classification, leveraging multiple MRI modalities (anatomical MRI (aMRI), Diffusion Tensor Imaging (DTI), and functional MRI (fMRI)) alongside clinical assessments. Our framework employs transformer encoders to capture spatial relationships within volumetric MRI data while utilizing modality-specific experts for targeted feature extraction. A gating mechanism with adaptive fusion dynamically integrates expert outputs, ensuring optimal predictive performance. Comprehensive experiments and comparisons with multiple baselines demonstrate that our multi-modal approach significantly enhances diagnostic accuracy, particularly in distinguishing overlapping disease states. Our framework achieves a validation accuracy of 82.47%, outperforming baseline methods by over 10%, highlighting its potential to improve ND diagnosis by applying multi-modal learning to real-world clinical data.
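A minimal sketch of the gating idea described above, assuming the common Mixture-of-Experts formulation: modality-specific experts produce representations, and a gate assigns softmax weights that fuse them adaptively. The dimensions and random linear maps are arbitrary stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Three modality-specific "experts", each mapping its input features to a
# shared d-dimensional representation (random linear maps as stand-ins)
d = 8
experts = {
    "aMRI": rng.normal(size=(16, d)),
    "DTI":  rng.normal(size=(12, d)),
    "fMRI": rng.normal(size=(20, d)),
}

# One sample per modality, pushed through its expert
inputs = {name: rng.normal(size=W.shape[0]) for name, W in experts.items()}
outputs = np.stack([inputs[name] @ W for name, W in experts.items()])  # (3, d)

# Gating network: score each expert's output, then fuse with softmax weights
gate_w = rng.normal(size=d)
scores = outputs @ gate_w    # one score per expert
weights = softmax(scores)    # adaptive fusion weights, sum to 1
fused = weights @ outputs    # (d,) weighted combination fed to the classifier
```

In the paper's setting the gate is itself learned end-to-end, so unreliable modalities for a given patient receive lower fusion weights.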

Recognition and diagnosis of Alzheimer's Disease using T1-weighted magnetic resonance imaging via integrating CNN and Swin vision transformer.

Wang Y, Sheng H, Wang X

pubmed logopapers · Jun 17 2025
Alzheimer's disease is a debilitating neurological disorder that requires accurate diagnosis for the most effective therapy and care. This article presents a new vision transformer model specifically created to evaluate magnetic resonance imaging data from the Alzheimer's Disease Neuroimaging Initiative dataset in order to classify cases of Alzheimer's disease. Unlike models that rely solely on convolutional neural networks, the vision transformer can capture long-range relationships between far-apart pixels in the images. The proposed architecture has shown exceptional outcomes, with a precision that underscores its capacity to detect and distinguish significant characteristics in MRI scans, enabling accurate classification of Alzheimer's disease subtypes and stages. The model combines elements of convolutional neural networks and vision transformers to extract both local and global visual patterns, facilitating accurate categorization across Alzheimer's disease classes. We specifically use the term 'dementia in patients with Alzheimer's disease' to describe individuals who have progressed to the dementia stage as a result of AD, distinguishing them from those in earlier stages of the disease. Precise categorization of Alzheimer's disease has significant therapeutic importance, as it enables timely identification, tailored treatment strategies, disease monitoring, and prognostic assessment. The stated high accuracy indicates that the proposed vision transformer model could assist healthcare providers and researchers in making well-informed and precise evaluations of individuals with Alzheimer's disease.
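To make the "far-apart pixels" point concrete, here is a minimal sketch of the vision-transformer ingredient involved: an image is split into patches, linearly embedded, and mixed with scaled dot-product self-attention, so every patch attends to every other patch regardless of distance. Shapes and weights are illustrative only, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy 2D "slice": split into 16 non-overlapping 8x8 patches
img = rng.normal(size=(32, 32))
p, d = 8, 16                                      # patch size, embedding dim
patches = img.reshape(4, p, 4, p).swapaxes(1, 2).reshape(16, p * p)  # (16, 64)

# Linear patch embedding
W_embed = rng.normal(scale=0.1, size=(p * p, d))
tokens = patches @ W_embed                        # (16, d)

# One round of scaled dot-product self-attention over all patch tokens
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))              # (16, 16), rows sum to 1
out = attn @ V                                    # tokens mixed globally
```

Each row of `attn` spreads a patch's attention over all 16 patches, which is exactly what a purely local convolution cannot do in one step.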

Toward general text-guided multimodal brain MRI synthesis for diagnosis and medical image analysis.

Wang Y, Xiong H, Sun K, Bai S, Dai L, Ding Z, Liu J, Wang Q, Liu Q, Shen D

pubmed logopapers · Jun 17 2025
Multimodal brain magnetic resonance imaging (MRI) offers complementary insights into brain structure and function, thereby improving the diagnostic accuracy of neurological disorders and advancing brain-related research. However, the widespread applicability of MRI is substantially limited by restricted scanner accessibility and prolonged acquisition times. Here, we present TUMSyn, a text-guided universal MRI synthesis model capable of generating brain MRI specified by textual imaging metadata from routinely acquired scans. We ensure the reliability of TUMSyn by constructing a brain MRI database comprising 31,407 3D images across 7 MRI modalities from 13 worldwide centers and pre-training an MRI-specific text encoder to process text prompts effectively. Experiments on diverse datasets and physician assessments indicate that TUMSyn-generated images can be utilized along with acquired MRI scan(s) to facilitate large-scale MRI-based screening and diagnosis of multiple brain diseases, substantially reducing the time and cost of MRI in the healthcare system.

Radiologist-AI workflow can be modified to reduce the risk of medical malpractice claims

Bernstein, M., Sheppard, B., Bruno, M. A., Lay, P. S., Baird, G. L.

medrxiv logopreprint · Jun 16 2025
Background: Artificial Intelligence (AI) is rapidly changing the legal landscape of radiology. Results from a previous experiment suggested that providing AI error rates can reduce perceived radiologist culpability, as judged by mock jury members (4). The current study advances this work by examining whether the radiologist's behavior also impacts perceptions of liability.
Methods: Participants (n=282) read about a hypothetical malpractice case in which a 50-year-old who visited the Emergency Department with acute neurological symptoms received a brain CT scan to determine whether bleeding was present. The radiologist who interpreted the imaging used an AI system, which correctly flagged the case as abnormal. Nonetheless, the radiologist concluded there was no evidence of bleeding, and the blood thinner t-PA was administered. Participants were randomly assigned to either (1) a single-read condition, in which the radiologist interpreted the CT once after seeing AI feedback, or (2) a double-read condition, in which the radiologist interpreted the CT twice, first without AI and then with AI feedback. Participants were then told the patient suffered irreversible brain damage due to the missed brain bleed, resulting in the patient (plaintiff) suing the radiologist (defendant). Participants indicated whether the radiologist met their duty of care to the patient (yes/no).
Results: Hypothetical jurors were more likely to side with the plaintiff in the single-read condition (106/142, 74.7%) than in the double-read condition (74/140, 52.9%), p=0.0002.
Conclusion: This suggests that the penalty for disagreeing with correct AI can be mitigated when images are interpreted twice, or at least when a radiologist gives an interpretation before AI is used.
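The reported comparison of verdict rates (106/142 vs. 74/140, p=0.0002) can be checked in a few lines, under the assumption that a standard chi-square test of independence on the 2x2 table is appropriate (the preprint does not state which test was used):

```python
from scipy.stats import chi2_contingency

# Rows: single-read, double-read; columns: sided with plaintiff, did not
table = [[106, 142 - 106],
         [74, 140 - 74]]

# Chi-square test of independence (Yates continuity correction by default)
chi2, p, dof, expected = chi2_contingency(table)
```

The resulting p-value lands in the same ballpark as the reported p=0.0002, consistent with the paper's conclusion that the two conditions differ.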

Integration of MRI radiomics and germline genetics to predict the IDH mutation status of gliomas.

Nakase T, Henderson GA, Barba T, Bareja R, Guerra G, Zhao Q, Francis SS, Gevaert O, Kachuri L

pubmed logopapers · Jun 16 2025
The molecular profiling of gliomas for isocitrate dehydrogenase (IDH) mutations currently relies on resected tumor samples, highlighting the need for non-invasive, preoperative biomarkers. We investigated the integration of glioma polygenic risk scores (PRS) and radiographic features for prediction of IDH mutation status. We used 256 radiomic features, a glioma PRS, and demographic information in 158 glioma cases within elastic net and neural network models. The integration of glioma PRS with radiomics increased the area under the receiver operating characteristic curve (AUC) for distinguishing IDH-wildtype vs. IDH-mutant glioma from 0.83 to 0.88 (P_ΔAUC = 6.9 × 10^-5) in the elastic net model and from 0.91 to 0.92 (P_ΔAUC = 0.32) in the neural network model. Incorporating age at diagnosis and sex further improved the classifiers (elastic net: AUC = 0.93; neural network: AUC = 0.93). Patients predicted to have IDH-mutant vs. IDH-wildtype tumors had significantly lower mortality risk (hazard ratio (HR) = 0.18, 95% CI: 0.08-0.40, P = 2.1 × 10^-5), comparable to prognostic trajectories for biopsy-confirmed IDH status. The augmentation of imaging-based classifiers with genetic risk profiles may help delineate molecular subtypes and improve the timely, non-invasive clinical assessment of glioma patients.
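A hedged sketch of the kind of comparison reported above: an elastic net logistic regression scored by holdout AUC with and without an added PRS-like feature. The data are synthetic and the pipeline is a simplification (no DeLong test, no cross-validation), not the authors' exact model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 20 "radiomic" features plus one "PRS"-like column
X, y = make_classification(n_samples=158, n_features=20, n_informative=4,
                           random_state=0)
prs = y + rng.normal(scale=1.5, size=len(y))   # weakly informative score
X_aug = np.column_stack([X, prs])

def holdout_auc(features, labels):
    Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.3,
                                          random_state=1)
    # Elastic net penalty mixes L1 and L2 regularization (requires saga)
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, max_iter=5000)
    clf.fit(Xtr, ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])

auc_radiomics = holdout_auc(X, y)       # imaging features alone
auc_with_prs = holdout_auc(X_aug, y)    # imaging features + PRS
```

In the study, the significance of the AUC difference was assessed formally (reported as P_ΔAUC); with a holdout this small, such a test matters more than the point estimates.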

Think deep in the tractography game: deep learning for tractography computing and analysis.

Zhang F, Théberge A, Jodoin PM, Descoteaux M, O'Donnell LJ

pubmed logopapers · Jun 16 2025
Tractography is a challenging process with complex rules, driving continuous algorithmic evolution to address its challenges. Meanwhile, deep learning has tackled similarly difficult tasks, such as mastering the Go board game and animating sophisticated robots. Given its transformative impact in these areas, deep learning has the potential to revolutionize tractography within the framework of existing rules. This work provides a brief summary of recent advances and challenges in deep learning-based tractography computing and analysis.

Rate of brain aging associates with future executive function in Asian children and older adults.

Cheng SF, Yue WL, Ng KK, Qian X, Liu S, Tan TWK, Nguyen KN, Leong RLF, Hilal S, Cheng CY, Tan AP, Law EC, Gluckman PD, Chen CL, Chong YS, Meaney MJ, Chee MWL, Yeo BTT, Zhou JH

pubmed logopapers · Jun 16 2025
Brain age has emerged as a powerful tool to understand neuroanatomical aging and its link to health outcomes like cognition. However, there remains a lack of studies investigating the rate of brain aging and its relationship to cognition. Furthermore, most brain age models are trained and tested on cross-sectional data from primarily Caucasian, adult participants. It is thus unclear how well these models generalize to non-Caucasian participants, especially children. Here, we tested a previously published deep learning model on Singaporean elderly participants (55-88 years old) and children (4-11 years old). We found that the model directly generalized to the elderly participants, but model finetuning was necessary for children. After finetuning, we found that the rate of change in brain age gap was associated with future executive function performance in both elderly participants and children. We further found that lateral ventricles and frontal areas contributed to brain age prediction in elderly participants, while white matter and posterior brain regions were more important in predicting brain age of children. Taken together, our results suggest that there is potential for generalizing brain age models to diverse populations. Moreover, the longitudinal change in brain age gap reflects developing and aging processes in the brain, relating to future cognitive function.
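The "rate of change in brain age gap" used above has a simple arithmetic core: the gap is predicted brain age minus chronological age, and its rate is the slope of that gap across visits. A worked toy example with invented numbers:

```python
import numpy as np

# Brain age gap (BAG) = model-predicted brain age - chronological age
chron_age = np.array([70.0, 71.0, 72.0])   # three annual visits
pred_age = np.array([73.0, 74.5, 76.5])    # hypothetical model predictions
bag = pred_age - chron_age                 # [3.0, 3.5, 4.5]

# Rate of change: least-squares slope of BAG against time since baseline
t = chron_age - chron_age[0]
slope = np.polyfit(t, bag, 1)[0]           # years of extra gap per year
```

Here the gap widens by 0.75 years per year, i.e. the brain "ages" faster than the calendar; the study relates this longitudinal slope to future executive function.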

MultiViT2: A Data-augmented Multimodal Neuroimaging Prediction Framework via Latent Diffusion Model

Bi Yuda, Jia Sihan, Gao Yutong, Abrol Anees, Fu Zening, Calhoun Vince

arxiv logopreprint · Jun 16 2025
Multimodal medical imaging integrates diverse data types, such as structural and functional neuroimaging, to provide complementary insights that enhance deep learning predictions and improve outcomes. This study focuses on a neuroimaging prediction framework based on both structural and functional neuroimaging data. We propose a next-generation prediction model, MultiViT2, which combines a pretrained representative learning base model with a vision transformer backbone for prediction output. Additionally, we developed a data augmentation module based on the latent diffusion model that enriches input data by generating augmented neuroimaging samples, thereby enhancing predictive performance through reduced overfitting and improved generalizability. We show that MultiViT2 significantly outperforms the first-generation model in schizophrenia classification accuracy and demonstrates strong scalability and portability.