DiffM⁴RI: A Latent Diffusion Model with Modality Inpainting for Synthesizing Missing Modalities in MRI Analysis.

Ye W, Guo Z, Ren Y, Tian Y, Shen Y, Chen Z, He J, Ke J, Shen Y

PubMed · Jun 17 2025
Foundation Models (FMs) have shown great promise for multimodal medical image analysis such as Magnetic Resonance Imaging (MRI). However, certain MRI sequences may be unavailable due to various constraints, such as limited scanning time, patient discomfort, or scanner limitations. The absence of certain modalities can hinder the performance of FMs in clinical applications, making effective missing modality imputation crucial for ensuring their applicability. Previous approaches, including generative adversarial networks (GANs), have been employed to synthesize missing modalities in either a one-to-one or many-to-one manner. However, these methods have limitations: they require training a new model for each missing-modality scenario and are prone to mode collapse, generating limited diversity in the synthesized images. To address these challenges, we propose DiffM⁴RI, a diffusion model for many-to-many missing modality imputation in MRI. DiffM⁴RI innovatively formulates missing modality imputation as a modality-level inpainting task, enabling it to handle arbitrary missing-modality situations without the need for training multiple networks. Experiments on the BraTS datasets demonstrate that DiffM⁴RI achieves an average SSIM improvement of 0.15 over MustGAN, 0.1 over SynDiff, and 0.02 over VQ-VAE-2. These results highlight the potential of DiffM⁴RI in enhancing the reliability of FMs in clinical applications. The code is available at https://github.com/27yw/DiffM4RI.
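
To make the modality-level inpainting idea concrete, here is a minimal sketch assuming a latent tensor with one slot per modality and a hypothetical pretrained denoiser `eps_model`; the sampling update is deliberately simplified, so treat this as an illustration of the masking scheme, not the authors' released implementation.

```python
# Sketch: inpaint missing MRI modalities with a diffusion denoiser.
# z: (B, M, C, H, W) latents for M modalities; eps_model is hypothetical.
import torch

def inpaint_missing_modalities(z, avail_mask, eps_model, timesteps=50):
    """avail_mask: (B, M) boolean, True where the modality was acquired."""
    mask = avail_mask[:, :, None, None, None].float()  # broadcast to voxels
    x = torch.randn_like(z)                            # missing slots start from noise
    for t in reversed(range(timesteps)):
        x = mask * z + (1 - mask) * x                  # keep acquired modalities fixed
        t_batch = torch.full((z.shape[0],), t, device=z.device)
        eps = eps_model(x, t_batch)                    # predicted noise
        x = x - eps / timesteps                        # simplified denoising step
    return mask * z + (1 - mask) * x                   # only missing slots are synthesized
```

Because the mask is applied per modality rather than per pixel, one trained network covers any combination of missing sequences, which is the many-to-many property the abstract emphasizes.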

Integrating Radiomics with Deep Learning Enhances Multiple Sclerosis Lesion Delineation

Nadezhda Alsahanova, Pavel Bartenev, Maksim Sharaev, Milos Ljubisavljevic, Taleb Al. Mansoori, Yauhen Statsenko

arXiv preprint · Jun 17 2025
Background: Accurate lesion segmentation is critical for multiple sclerosis (MS) diagnosis, yet current deep learning approaches face robustness challenges. Aim: This study improves MS lesion segmentation by combining data fusion and deep learning techniques. Materials and Methods: We proposed novel radiomic features (concentration rate and Rényi entropy) to characterize different MS lesion types and fused these with raw imaging data. The study integrated radiomic features with imaging data through a ResNeXt-UNet architecture and an attention-augmented U-Net architecture. Our approach was evaluated on scans from 46 patients (1102 slices), comparing performance before and after data fusion. Results: The radiomics-enhanced ResNeXt-UNet demonstrated high segmentation accuracy, achieving significant improvements in precision and sensitivity over the MRI-only baseline and a Dice score of 0.774 ± 0.05 (p < 0.001 according to Bonferroni-adjusted Wilcoxon signed-rank tests). The radiomics-enhanced attention-augmented U-Net showed greater model stability, evidenced by reduced performance variability (SDD = 0.18 ± 0.09 vs. 0.21 ± 0.06; p = 0.03) and smoother validation curves with radiomics integration. Conclusion: These results validate our hypothesis that fusing radiomics with raw imaging data boosts segmentation performance and stability in state-of-the-art models.
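
For readers unfamiliar with the Rényi entropy feature named above, a minimal sketch of computing it from lesion voxel intensities follows; the order parameter alpha and the 64-bin histogram are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: Rényi entropy of a lesion's intensity distribution,
# H_a = log(sum(p^a)) / (1 - a), reducing to Shannon entropy as a -> 1.
import numpy as np

def renyi_entropy(lesion_intensities, alpha=2.0, bins=64):
    hist, _ = np.histogram(lesion_intensities, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                # drop empty bins
    if np.isclose(alpha, 1.0):                  # alpha = 1 limit: Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))
```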

Transformer-augmented lightweight U-Net (UAAC-Net) for accurate MRI brain tumor segmentation.

Varghese NE, John A, C UDA, Pillai MJ

PubMed · Jun 17 2025
Accurate segmentation of brain tumor images, particularly gliomas in MRI scans, is crucial for early diagnosis, monitoring progression, and evaluating tumor structure and therapeutic response. This work proposes a novel lightweight, transformer-based U-Net model for brain tumor segmentation that integrates attention mechanisms and multi-layer feature extraction via atrous convolution to capture long-range relationships and contextual information across image regions. The model's performance is evaluated on the publicly accessible BraTS 2020 dataset using metrics such as the Dice coefficient, accuracy, mean Intersection over Union (IoU), sensitivity, and specificity. The proposed model outperforms many existing methods, such as MimicNet, Swin Transformer-based UNet, and hybrid multiresolution-based UNet, and is capable of handling a variety of segmentation issues. The experimental results demonstrate that the proposed model achieves an accuracy of 98.23%, a Dice score of 0.9716, and a mean IoU of 0.8242 during training when compared to the current state-of-the-art methods.
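
A minimal sketch of the multi-rate atrous-convolution idea the abstract describes, which enlarges the receptive field without downsampling; channel counts and dilation rates here are illustrative assumptions, not the paper's configuration.

```python
# Sketch: parallel atrous (dilated) convolutions at several rates, fused 1x1.
import torch
import torch.nn as nn

class AtrousContextBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates  # padding = dilation keeps spatial size constant
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]  # multi-scale context
        return self.fuse(torch.cat(feats, dim=1))          # fused representation
```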

Enhancing cerebral infarct classification by automatically extracting relevant fMRI features.

Dobromyslin VI, Zhou W

PubMed · Jun 17 2025
Accurate detection of cortical infarcts is critical for timely treatment and improved patient outcomes. Current brain imaging methods often require invasive procedures that primarily assess blood vessel and structural white matter damage. There is a need for non-invasive approaches, such as functional MRI (fMRI), that better reflect neuronal viability. This study utilized automated machine learning (auto-ML) techniques to identify novel fMRI biomarkers specific to chronic cortical infarcts. We analyzed resting-state fMRI data from the multi-center ADNI dataset, which included 20 chronic infarct patients and 30 cognitively normal (CN) controls. Surface-based registration methods were applied to minimize the partial-volume effects typically associated with lower-resolution fMRI data. We evaluated the performance of 7 previously known fMRI biomarkers alongside 107 new auto-generated fMRI biomarkers across 33 different classification models. Our analysis identified 6 new fMRI biomarkers that substantially improved infarct detection performance compared to previously established metrics. The best-performing combination of biomarkers and classifiers achieved a cross-validation ROC score of 0.791, closely matching the accuracy of diffusion-weighted imaging methods used in acute stroke detection. Our proposed auto-ML fMRI infarct-detection technique demonstrated robustness across diverse imaging sites and scanner types, highlighting the potential of automated feature extraction to significantly enhance non-invasive infarct detection.
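
A minimal sketch of the kind of biomarker-by-classifier screening loop described above, scored by cross-validated ROC-AUC; the feature matrix, labels, and the two classifiers shown are stand-ins, not the study's actual pipeline.

```python
# Sketch: cross-validated ROC-AUC screening over candidate classifiers.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))       # 50 subjects x 10 candidate fMRI biomarkers
y = rng.integers(0, 2, size=50)     # infarct vs. cognitively normal labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("rf", RandomForestClassifier(n_estimators=200, random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name}: mean CV ROC-AUC = {auc:.3f}")
```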

Effects of patient and imaging factors on small bowel motility scores derived from deep learning-based segmentation of cine MRI.

Heo S, Yun J, Kim DW, Park SY, Choi SH, Kim K, Jung KW, Myung SJ, Park SH

PubMed · Jun 17 2025
Small bowel motility can be quantified using cine MRI, but the influence of patient and imaging factors on motility scores remains unclear. This study evaluated whether patient and imaging factors affect motility scores derived from deep learning-based segmentation of cine MRI. Fifty-four patients (mean age 53.6 ± 16.4 years; 34 women) with chronic constipation or suspected colonic pseudo-obstruction who underwent cine MRI covering the entire small bowel between 2022 and 2023 were included. A deep learning algorithm was developed to segment small bowel regions, and motility was quantified with an optical flow-based algorithm, producing a motility score for each slice. Associations of motility scores with patient factors (age, sex, body mass index, symptoms, and bowel distension) and MRI slice-related factors (anatomical location, bowel area, and anteroposterior position) were analyzed using linear mixed models. Deep learning-based small bowel segmentation achieved a mean volumetric Dice similarity coefficient of 75.4 ± 18.9%, with a manual correction time of 26.5 ± 13.5 s. Median motility scores per patient ranged from 26.4 to 64.4, with an interquartile range of 3.1-26.6. Multivariable analysis revealed that MRI slice-related factors, including anatomical location with mixed ileum and jejunum (β = -4.9; p = 0.01, compared with ileum-dominant), bowel area (first-order β = -0.2, p < 0.001; second-order β = 5.7 × 10⁻⁴, p < 0.001), and anteroposterior position (first-order β = -51.5, p < 0.001; second-order β = 28.8, p = 0.004), were significantly associated with motility scores. Patient factors showed no association with motility scores. Small bowel motility scores were significantly associated with MRI slice-related factors; determining global motility without adjusting for these factors may be limited. Question: Global small bowel motility can be quantified from cine MRI; however, the confounding factors affecting motility scores remain unclear. Findings: Motility scores were significantly influenced by MRI slice-related factors, including anatomical location, bowel area, and anteroposterior position. Clinical relevance: Adjusting for slice-related factors is essential for accurate interpretation of small bowel motility scores on cine MRI.
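
The reported first- and second-order coefficients suggest a per-slice mixed model with quadratic terms and a random intercept per patient; a minimal sketch follows, with hypothetical column names (motility, bowel_area, ap_position, location, patient_id), not the study's actual code.

```python
# Sketch: linear mixed model for per-slice motility scores,
# quadratic terms for bowel area and anteroposterior position.
import pandas as pd
import statsmodels.formula.api as smf

def fit_motility_model(df: pd.DataFrame):
    model = smf.mixedlm(
        "motility ~ C(location) + bowel_area + I(bowel_area**2) "
        "+ ap_position + I(ap_position**2)",
        data=df,
        groups=df["patient_id"],   # random intercept per patient
    )
    return model.fit()
```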

Exploring factors driving the evolution of chronic lesions in multiple sclerosis using machine learning.

Hu H, Ye L, Wu P, Shi Z, Chen G, Li Y

PubMed · Jun 17 2025
The study aimed to identify factors influencing the evolution of chronic lesions in multiple sclerosis (MS) using a machine learning approach. Longitudinal data were collected from individuals with relapsing-remitting multiple sclerosis (RRMS). The "iron rim" sign was identified using quantitative susceptibility mapping (QSM), and microstructural damage was quantified via T1/fluid-attenuated inversion recovery (FLAIR) ratios. Additional data included baseline lesion volume, cerebral T2-hyperintense lesion volume, iron rim lesion volume, the proportion of iron rim lesion volume, gender, age, disease duration (DD), disability and cognitive scores, use of disease-modifying therapy, and follow-up intervals. These features were integrated into machine learning models (logistic regression (LR), random forest (RF), and support vector machine (SVM)) to predict lesion volume change, with the most predictive model selected for feature importance analysis. The study included 47 RRMS individuals (mean age, 30.6 ± 8.0 years [standard deviation]; 6 males) and 833 chronic lesions. The SVM model demonstrated superior predictive performance, with an AUC of 0.90 in the training set and 0.81 in the testing set. Feature importance analysis identified the top three features as the "iron rim" sign of lesions, DD, and the T1/FLAIR ratios of the lesions. This study developed a machine learning model to predict the volume outcome of MS lesions. Feature importance analysis identified chronic inflammation around the lesion, DD, and microstructural damage as key factors influencing volume change in chronic MS lesions. Question: The evolution of different chronic lesions in MS exhibits variability, and the driving factors influencing these outcomes remain to be further investigated. Findings: An SVM model was developed to predict chronic MS lesion volume changes, integrating lesion characteristics, lesion burden, and clinical data. Clinical relevance: Chronic inflammation surrounding lesions, DD, and microstructural damage are key factors influencing the evolution of chronic MS lesions.
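
The abstract does not specify how feature importance was computed for the kernel SVM; a minimal sketch using permutation importance (one common choice when coefficients are unavailable) follows, with stand-in data and the three top features named in the abstract.

```python
# Sketch: RBF-SVM classification of lesion volume change,
# then permutation importance on held-out data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["iron_rim", "disease_duration", "t1_flair_ratio"]
X = rng.normal(size=(833, len(features)))   # 833 chronic lesions (stand-in values)
y = rng.integers(0, 2, size=833)            # lesion volume change label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
imp = permutation_importance(svm, X_te, y_te, scoring="roc_auc",
                             n_repeats=10, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```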

NeuroMoE: A Transformer-Based Mixture-of-Experts Framework for Multi-Modal Neurological Disorder Classification

Wajih Hassan Raza, Aamir Bader Shah, Yu Wen, Yidan Shen, Juan Diego Martinez Lemus, Mya Caryn Schiess, Timothy Michael Ellmore, Renjie Hu, Xin Fu

arXiv preprint · Jun 17 2025
The integration of multi-modal Magnetic Resonance Imaging (MRI) and clinical data holds great promise for enhancing the diagnosis of neurological disorders (NDs) in real-world clinical settings. Deep Learning (DL) has recently emerged as a powerful tool for extracting meaningful patterns from medical data to aid in diagnosis. However, existing DL approaches struggle to effectively leverage multi-modal MRI and clinical data, leading to suboptimal performance. To address this challenge, we utilize a unique, proprietary multi-modal clinical dataset curated for ND research. Based on this dataset, we propose a novel transformer-based Mixture-of-Experts (MoE) framework for ND classification, leveraging multiple MRI modalities (anatomical MRI (aMRI), Diffusion Tensor Imaging (DTI), and functional MRI (fMRI)) alongside clinical assessments. Our framework employs transformer encoders to capture spatial relationships within volumetric MRI data while utilizing modality-specific experts for targeted feature extraction. A gating mechanism with adaptive fusion dynamically integrates expert outputs, ensuring optimal predictive performance. Comprehensive experiments and comparisons with multiple baselines demonstrate that our multi-modal approach significantly enhances diagnostic accuracy, particularly in distinguishing overlapping disease states. Our framework achieves a validation accuracy of 82.47%, outperforming baseline methods by over 10%, highlighting its potential to improve ND diagnosis by applying multi-modal learning to real-world clinical data.
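
A minimal sketch of the gated fusion step described above: one expert embedding per modality, combined by a learned softmax gate before classification. Dimensions and the absence of a separate clinical-feature pathway are illustrative simplifications, not NeuroMoE's actual architecture.

```python
# Sketch: mixture-of-experts fusion with an adaptive softmax gate.
import torch
import torch.nn as nn

class GatedMoEFusion(nn.Module):
    def __init__(self, n_experts=3, dim=256, n_classes=4):
        super().__init__()
        self.gate = nn.Linear(n_experts * dim, n_experts)  # gate sees all experts
        self.head = nn.Linear(dim, n_classes)

    def forward(self, expert_feats):                # list of (B, dim) tensors,
        stacked = torch.stack(expert_feats, dim=1)  # one per modality -> (B, E, dim)
        weights = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # adaptive fusion
        return self.head(fused)
```

Because the gate weights are input-dependent, the model can lean on whichever modality is most informative for a given patient, which is what lets it handle overlapping disease states.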

Toward general text-guided multimodal brain MRI synthesis for diagnosis and medical image analysis.

Wang Y, Xiong H, Sun K, Bai S, Dai L, Ding Z, Liu J, Wang Q, Liu Q, Shen D

PubMed · Jun 17 2025
Multimodal brain magnetic resonance imaging (MRI) offers complementary insights into brain structure and function, thereby improving the diagnostic accuracy of neurological disorders and advancing brain-related research. However, the widespread applicability of MRI is substantially limited by restricted scanner accessibility and prolonged acquisition times. Here, we present TUMSyn, a text-guided universal MRI synthesis model capable of generating brain MRI specified by textual imaging metadata from routinely acquired scans. We ensure the reliability of TUMSyn by constructing a brain MRI database comprising 31,407 3D images across 7 MRI modalities from 13 worldwide centers and pre-training an MRI-specific text encoder to process text prompts effectively. Experiments on diverse datasets and physician assessments indicate that TUMSyn-generated images can be utilized along with acquired MRI scan(s) to facilitate large-scale MRI-based screening and diagnosis of multiple brain diseases, substantially reducing the time and cost of MRI in the healthcare system.
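To illustrate the text-guided interface the abstract describes, here is a minimal sketch in which imaging metadata is serialized into a prompt, encoded, and used to condition a synthesis model; every name here (build_prompt, text_encoder, generator) is a hypothetical placeholder, not TUMSyn's released API.

```python
# Sketch: text-conditioned MRI synthesis from imaging metadata.
def build_prompt(meta: dict) -> str:
    # e.g. {"modality": "T2-FLAIR", "field_strength": "3T", "voxel": "1mm iso"}
    return ", ".join(f"{k}: {v}" for k, v in meta.items())

def synthesize_missing_scan(source_scan, meta, text_encoder, generator):
    cond = text_encoder(build_prompt(meta))   # hypothetical MRI-specific text encoder
    return generator(source_scan, cond)       # hypothetical conditional generator
```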

Recognition and diagnosis of Alzheimer's Disease using T1-weighted magnetic resonance imaging via integrating CNN and Swin vision transformer.

Wang Y, Sheng H, Wang X

PubMed · Jun 17 2025
Alzheimer's disease is a debilitating neurological disorder that requires accurate diagnosis for the most effective therapy and care. This article presents a new vision transformer model specifically created to evaluate magnetic resonance imaging data from the Alzheimer's Disease Neuroimaging Initiative dataset in order to categorize cases of Alzheimer's disease. Unlike models that rely solely on convolutional neural networks, the vision transformer can capture long-range relationships between far-apart pixels in the images. The suggested architecture has shown exceptional outcomes; its precision emphasizes its capacity to detect and distinguish significant characteristics from MRI scans, hence enabling the precise classification of Alzheimer's disease subtypes and various stages. The model utilizes elements from both convolutional neural network and vision transformer models to extract local and global visual patterns, facilitating the accurate categorization of various Alzheimer's disease classifications. We specifically use the term 'dementia in patients with Alzheimer's disease' to describe individuals who have progressed to the dementia stage as a result of AD, distinguishing them from those in earlier stages of the disease. Precise categorization of Alzheimer's disease has significant therapeutic importance, as it enables timely identification, tailored treatment strategies, disease monitoring, and prognostic assessment. The reported high accuracy indicates that the suggested vision transformer model has the capacity to assist healthcare providers and researchers in generating well-informed and precise evaluations of individuals with Alzheimer's disease.
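
A minimal sketch of the CNN-plus-transformer hybrid pattern the abstract describes: a small convolutional stem extracts local features, which are flattened into tokens for a transformer encoder that models global relationships. All sizes are illustrative assumptions, not the paper's architecture.

```python
# Sketch: CNN stem for local patterns, transformer encoder for global ones.
import torch
import torch.nn as nn

class CNNViTHybrid(nn.Module):
    def __init__(self, n_classes=4, dim=128):
        super().__init__()
        self.stem = nn.Sequential(             # local feature extraction
            nn.Conv2d(1, dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                      # x: (B, 1, H, W) T1-weighted slice
        tokens = self.stem(x).flatten(2).transpose(1, 2)  # (B, N, dim) tokens
        tokens = self.encoder(tokens)          # relations between distant patches
        return self.head(tokens.mean(dim=1))   # pooled classification
```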

DGG-XNet: A Hybrid Deep Learning Framework for Multi-Class Brain Disease Classification with Explainable AI

Sumshun Nahar Eity, Mahin Montasir Afif, Tanisha Fairooz, Md. Mortuza Ahmmed, Md Saef Ullah Miah

arXiv preprint · Jun 17 2025
Accurate diagnosis of brain disorders such as Alzheimer's disease and brain tumors remains a critical challenge in medical imaging. Conventional methods based on manual MRI analysis are often inefficient and error-prone. To address this, we propose DGG-XNet, a hybrid deep learning model integrating VGG16 and DenseNet121 to enhance feature extraction and classification. DenseNet121 promotes feature reuse and efficient gradient flow through dense connectivity, while VGG16 contributes strong hierarchical spatial representations. Their fusion enables robust multiclass classification of neurological conditions. Grad-CAM is applied to visualize salient regions, enhancing model transparency. Trained on a combined dataset from BraTS 2021 and Kaggle, DGG-XNet achieved a test accuracy of 91.33%, with precision, recall, and F1-score all exceeding 91%. These results highlight DGG-XNet's potential as an effective and interpretable tool for computer-aided diagnosis (CAD) of neurodegenerative and oncological brain disorders.
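
A minimal sketch of the two-backbone fusion described above, using torchvision's VGG16 and DenseNet121 feature extractors with pooled, concatenated outputs; the classifier head and fusion scheme are illustrative assumptions, not DGG-XNet's exact design.

```python
# Sketch: fuse VGG16 and DenseNet121 features for multiclass classification.
import torch
import torch.nn as nn
from torchvision import models

class DualBackboneFusion(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.vgg = models.vgg16(weights=None).features          # hierarchical spatial features
        self.dense = models.densenet121(weights=None).features  # dense-connectivity features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512 + 1024, n_classes)  # VGG16: 512 ch, DenseNet121: 1024 ch

    def forward(self, x):                             # x: (B, 3, H, W)
        a = self.pool(self.vgg(x)).flatten(1)
        b = self.pool(self.dense(x)).flatten(1)
        return self.head(torch.cat([a, b], dim=1))    # fused representation
```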