
Whole Brain 3D T1 Mapping in Multiple Sclerosis Using Standard Clinical Images Compared to MP2RAGE and MR Fingerprinting.

Snyder J, Blevins G, Smyth P, Wilman AH

PubMed · Jun 1, 2025
Quantitative T1 and T2 mapping is a useful tool to assess properties of healthy and diseased tissues. However, clinical diagnostic imaging remains dominated by relaxation-weighted imaging without direct collection of relaxation maps. Dedicated research sequences such as MR fingerprinting can save time and improve resolution over classical gold standard quantitative MRI (qMRI) methods, although they are not widely adopted in clinical studies. We investigate the use of clinical sequences in conjunction with prior knowledge provided by machine learning to estimate T1 maps of the brain in routine imaging studies without the need for specialized sequences. A classification learner was trained on T1w (magnetization prepared rapid gradient echo [MPRAGE]) and T2w (fluid-attenuated inversion recovery [FLAIR]) data (2.6 million voxels) from multiple sclerosis (MS) patients at 3T, compared to gold standard inversion recovery fast spin echo T1 maps in five healthy subjects, and tested on eight MS patients. In the MS patient test, the machine learner-produced T1 maps were compared to MP2RAGE and MR fingerprinting T1 maps in seven brain tissue regions, including cortical grey matter, white matter, cerebrospinal fluid, caudate, putamen, and globus pallidus. Additionally, T1 values in lesion-segmented tissue were compared across the three methods. The machine learner (ML) method had excellent agreement with MP2RAGE, with all average tissue deviations below 3.2% and lesion T1 variation of 0.1%-5.3% across the eight patients. The machine learning method provides a valuable and accurate estimation of T1 values in the human brain while using data from standard clinical sequences, allowing retrospective reconstruction from past studies without the need for new quantitative techniques.
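
As a rough sketch of the idea described in this abstract (learning a voxel-wise mapping from standard weighted images to T1 values), the snippet below trains a regressor on synthetic (MPRAGE, FLAIR) intensity pairs. The random-forest regressor, the toy signal models, and the data sizes are illustrative assumptions, not the authors' pipeline (which uses a classification learner on real patient data):

```python
# Hypothetical voxel-wise setup: map (MPRAGE, FLAIR) intensities to T1.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_voxels = 100_000                      # stand-in for the ~2.6M training voxels
t1 = rng.uniform(500, 4500, n_voxels)   # ground-truth T1 (ms) from IR-FSE maps

# Toy signal models: weighted intensities loosely depend on T1 plus noise.
mprage = np.exp(-1000.0 / t1) + rng.normal(0, 0.01, n_voxels)
flair = 1.0 - np.exp(-t1 / 2000.0) + rng.normal(0, 0.01, n_voxels)

X = np.column_stack([mprage, flair])
X_tr, X_te, y_tr, y_te = train_test_split(X, t1, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"mean absolute T1 error: {np.abs(pred - y_te).mean():.1f} ms")
```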

Res-Net-Based Modeling and Morphologic Analysis of Deep Medullary Veins Using Multi-Echo GRE at 7 T MRI.

Li Z, Liang L, Zhang J, Fan X, Yang Y, Yang H, Wang Q, An J, Xue R, Zhuo Y, Qian H, Zhang Z

PubMed · Jun 1, 2025
The pathological changes in deep medullary veins (DMVs) have been reported in various diseases. However, accurate modeling and quantification of DMVs remain challenging. We propose and assess an automated approach for modeling and quantifying DMVs at 7 Tesla (7 T) MRI. A multi-echo-input Res-Net was developed for vascular segmentation, and a minimum path loss function was used for modeling and quantifying the geometric parameters of DMVs. Twenty-one patients diagnosed with subcortical vascular dementia (SVaD) and 20 matched controls were included in this study. Amplitude and phase images of a five-echo gradient echo (GRE) sequence were acquired at 7 T. Ten GRE images were manually labeled by two neurologists and compared with the results obtained by our proposed method. Independent-samples t tests and Pearson correlation were used for statistical analysis, with p < 0.05 considered significant. No significant offset was found between centerlines obtained by human labeling and by our algorithm (p = 0.734). The length difference between the proposed method and manual labeling was smaller than the error between different clinicians (p < 0.001). Patients with SVaD exhibited fewer DMVs (mean difference = -60.710 ± 21.810, p = 0.011) and higher curvature (mean difference = 0.12 ± 0.022, p < 0.0001), corresponding to their higher Vascular Dementia Assessment Scale-Cog (VaDAS-Cog) scores (mean difference = 4.332 ± 1.992, p = 0.036) and lower Mini-Mental State Examination (MMSE) scores (mean difference = -3.071 ± 1.443, p = 0.047). The MMSE scores were positively correlated with the number of DMVs (r = 0.437, p = 0.037) and negatively correlated with curvature (r = -0.426, p = 0.042). In summary, we propose a novel framework for automated quantification of the morphologic parameters of DMVs. These characteristics of DMVs are expected to aid the research and diagnosis of cerebral small vessel diseases with DMV lesions.
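
The following is a minimal PyTorch sketch of a multi-echo-input residual segmentation network of the kind the abstract describes. The channel counts, block depth, and the assumption of five magnitude plus five phase input channels are illustrative guesses, not the published architecture:

```python
# Minimal multi-echo-input residual segmentation sketch (illustrative only).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class MultiEchoResNet(nn.Module):
    def __init__(self, n_echoes=5):
        super().__init__()
        # Assumed input: magnitude + phase channels for each echo.
        self.stem = nn.Conv3d(2 * n_echoes, 32, 3, padding=1)
        self.body = nn.Sequential(ResBlock(32), ResBlock(32))
        self.head = nn.Conv3d(32, 1, 1)  # voxel-wise vein probability

    def forward(self, x):
        return torch.sigmoid(self.head(self.body(self.stem(x))))

net = MultiEchoResNet()
vol = torch.randn(1, 10, 16, 64, 64)   # (batch, 2*echoes, D, H, W)
print(net(vol).shape)                  # torch.Size([1, 1, 16, 64, 64])
```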

Explainable deep stacking ensemble model for accurate and transparent brain tumor diagnosis.

Haque R, Khan MA, Rahman H, Khan S, Siddiqui MIH, Limon ZH, Swapno SMMR, Appaji A

PubMed · Jun 1, 2025
Early detection of brain tumors in MRI images is vital for improving treatment results. However, deep learning models face challenges like limited dataset diversity, class imbalance, and insufficient interpretability. Most studies rely on small, single-source datasets and do not combine different feature extraction techniques for better classification. To address these challenges, we propose a robust and explainable stacking ensemble model for multiclass brain tumor classification that combines EfficientNetB0, MobileNetV2, GoogleNet, and a Multi-level CapsuleNet, using CatBoost as the meta-learner for improved feature aggregation and classification accuracy. This ensemble approach captures complex tumor characteristics while enhancing robustness and interpretability. We created two large MRI datasets by merging data from four sources: BraTS, Msoud, Br35H, and SARTAJ. To tackle class imbalance, we applied Borderline-SMOTE and data augmentation. We also utilized feature extraction methods, along with PCA and Gray Wolf Optimization (GWO). Our model was validated through confidence interval analysis and statistical tests, demonstrating superior performance. Error analysis revealed misclassification trends, and we assessed computational efficiency in terms of inference speed and resource usage. The proposed ensemble achieved a 97.81% F1 score and 98.75% PR AUC on M1, and a 98.32% F1 score with 99.34% PR AUC on M2. Moreover, the model consistently surpassed state-of-the-art CNNs, Vision Transformers, and other ensemble methods in classifying brain tumors across each of the four individual datasets. Finally, we developed a web-based diagnostic tool that enables clinicians to interact with the proposed model and visualize decision-critical regions in MRI scans using Explainable Artificial Intelligence (XAI). This study connects high-performing AI models with real clinical applications, providing a reliable, scalable, and efficient diagnostic solution for brain tumor classification.
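
To make the stacking design concrete, here is a hedged scikit-learn sketch of the same pattern: base-model class probabilities feed a boosted-tree meta-learner. Simple classifiers stand in for the four CNN backbones, GradientBoostingClassifier stands in for CatBoost, and the synthetic data is purely illustrative:

```python
# Stacking sketch: base predict_proba outputs feed a boosted-tree meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=64, n_classes=4,
                           n_informative=16, random_state=0)

# Each base estimator stands in for one CNN backbone of the ensemble.
base = [("rf", RandomForestClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("svc", SVC(probability=True, random_state=0))]

stack = StackingClassifier(estimators=base,
                           final_estimator=GradientBoostingClassifier(random_state=0),
                           stack_method="predict_proba", cv=3)
stack.fit(X, y)
print(f"training accuracy: {stack.score(X, y):.3f}")
```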

A radiomics approach to distinguish Progressive Supranuclear Palsy Richardson's syndrome from other phenotypes starting from MR images.

Pisani N, Abate F, Avallone AR, Barone P, Cesarelli M, Amato F, Picillo M, Ricciardi C

PubMed · Jun 1, 2025
Progressive Supranuclear Palsy (PSP) is an uncommon neurodegenerative disorder with varying clinical presentations, including Richardson's syndrome (PSP-RS) and other variant phenotypes (vPSP). Recognising the clinical progression of different phenotypes would enhance the accuracy of detection and treatment of PSP. The study goal was to identify radiomic biomarkers, extracted from T1-weighted magnetic resonance images (MRI), for distinguishing PSP phenotypes. Forty PSP patients (20 PSP-RS and 20 vPSP) took part in the present work. Radiomic features were collected from 21 regions of interest (ROIs), mainly in the frontal cortex, supratentorial white matter, basal nuclei, brainstem, cerebellum, and 3rd and 4th ventricles. After feature selection, three tree-based machine learning (ML) classifiers were implemented to classify PSP phenotypes. Ten of the 21 ROIs performed best in terms of sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUCROC). In particular, features extracted from the pons obtained the best accuracy (0.92) and AUCROC (0.83), while evaluation metrics for the other top-performing ROIs ranged from 0.67 to 0.83. Eight features of the Gray Level Dependence Matrix were recurrently selected across the 10 ROIs. Furthermore, combining these ROIs raised phenotype classification results above 0.83, with the selected areas being the brainstem, pons, occipital white matter, precentral gyrus, and thalamus. Based on these results, our proposed approach could represent a promising tool for distinguishing PSP-RS from vPSP.
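
As an illustration of the classification step only, the sketch below runs a tree-based classifier over per-ROI radiomic feature vectors. The feature values are synthetic stand-ins; in practice, tools such as pyradiomics would extract GLDM features from the T1-weighted ROIs, and the group separability injected here is artificial:

```python
# Tree-based classification over synthetic per-ROI radiomic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_patients, n_features = 40, 8          # 20 PSP-RS + 20 vPSP; 8 GLDM features
X = rng.normal(size=(n_patients, n_features))
y = np.array([0] * 20 + [1] * 20)       # 0 = PSP-RS, 1 = vPSP
X[y == 1] += 0.8                        # inject an artificial group effect

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```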

Deep learning-driven multi-class classification of brain strokes using computed tomography: A step towards enhanced diagnostic precision.

Kulathilake CD, Udupihille J, Abeysundara SP, Senoo A

PubMed · Jun 1, 2025
To develop and validate deep learning models leveraging CT imaging for the prediction and classification of brain stroke conditions, with the potential to enhance accuracy and support clinical decision-making. This retrospective, bi-center study included data from 250 patients, with a dataset of 8186 CT images collected from 2017 to 2022. Two AI models were developed as a two-step pipeline using the Expanded ResNet101 deep learning framework. Model performance was evaluated using confusion matrices, supplemented by external validation with an independent dataset. External validation was conducted by an expert and two external members. Overall accuracy, confidence intervals, Cohen's Kappa values, and McNemar's test P-values were calculated. A total of 8186 CT images were incorporated, with 6386 used for training and 900 for testing and validation in Model 01. A further 1619 CT images were used for training and 600 for testing and validation in Model 02. The average accuracy, precision, and F1 score for both models were assessed: Model 01 achieved 99.6%, 99.4%, and 99.6%, respectively, whereas Model 02 achieved 99.2%, 98.8%, and 99.1%. The external validation accuracies were 78.6% (95% CI: 0.73-0.83; P < 0.001) and 60.2% (95% CI: 0.48-0.70; P < 0.001) for Models 01 and 02, respectively, as evaluated by the expert. The deep learning models demonstrated high accuracy, precision, and F1 scores in predicting outcomes for brain stroke patients. With a larger cohort and diverse radiologic mimics, these models could support clinicians in prognosis and decision-making.
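
A hedged sketch of the two-step design as we read it: one ResNet101 classifier flags stroke versus no stroke, and a second assigns a subtype. torchvision's stock resnet101 approximates the paper's "Expanded ResNet101", and the subtype classes are assumptions:

```python
# Two-step classification sketch: detect stroke, then classify its subtype.
import torch
import torch.nn as nn
from torchvision.models import resnet101

def make_classifier(n_classes):
    net = resnet101(weights=None)
    net.fc = nn.Linear(net.fc.in_features, n_classes)
    return net

step1 = make_classifier(2).eval()   # stroke vs. no stroke
step2 = make_classifier(3).eval()   # e.g. ischemic / hemorrhagic / other (assumed)

ct = torch.randn(1, 3, 224, 224)    # stand-in for a CT slice tiled to 3 channels
with torch.no_grad():
    if step1(ct).argmax(dim=1).item() == 1:   # step 1: stroke detected?
        subtype = step2(ct).argmax(dim=1)     # step 2: classify subtype
        print(f"predicted subtype index: {subtype.item()}")
    else:
        print("no stroke detected")
```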

High-Performance Computing-Based Brain Tumor Detection Using Parallel Quantum Dilated Convolutional Neural Network.

Shinde SS, Pande A

PubMed · Jun 1, 2025
In the healthcare field, a brain tumor is an irregular growth of cells in the brain. One of the most common ways to identify a brain tumor and track its progression is magnetic resonance imaging (MRI). However, existing methods often suffer from high computational complexity, noise interference, and limited accuracy, which hinder the early diagnosis of brain tumors. To resolve such issues, a high-performance computing model based on big data processing is utilized. This work proposes a novel approach, parallel quantum dilated convolutional neural network (PQDCNN)-based brain tumor detection, using a map-reduce framework. Data partitioning is the first step and is performed using fuzzy local information C-means clustering (FLICM). The partitioned data are passed to the map-reduce stage: in the mapper, Medav filtering removes noise, and tumor segmentation is performed by a transformer model named TransBTSV2. After segmenting the tumor region, image augmentation and feature extraction are performed. In the reducer phase, the brain tumor is detected using the proposed PQDCNN. The efficiency of PQDCNN is validated on accuracy, sensitivity, and specificity, achieving values of 91.52%, 91.69%, and 92.26%, respectively.
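
The map-reduce structure of this pipeline can be sketched schematically as below; the FLICM partitioner, Medav filter, TransBTSV2 segmenter, and PQDCNN classifier are all replaced by named stubs, so this only illustrates the data flow, not the methods themselves:

```python
# Schematic map-reduce data flow with stubbed processing stages.
from multiprocessing import Pool

def partition(volumes, n_parts):          # stand-in for FLICM partitioning
    return [volumes[i::n_parts] for i in range(n_parts)]

def mapper(chunk):                        # mapper: denoise, then segment
    denoised = [f"medav({v})" for v in chunk]
    return [f"segment({d})" for d in denoised]

def reducer(segmented_chunks):            # reducer: detection over all chunks
    flat = [s for chunk in segmented_chunks for s in chunk]
    return {s: "tumor?" for s in flat}

if __name__ == "__main__":
    volumes = [f"mri_{i}" for i in range(8)]
    with Pool(4) as pool:
        mapped = pool.map(mapper, partition(volumes, 4))
    print(reducer(mapped))
```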

FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis.

Raggio CB, Zabaleta MK, Skupien N, Blanck O, Cicone F, Cascini GL, Zaffino P, Migliorelli L, Spadea MF

PubMed · Jun 1, 2025
The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables dose calculation, pushing towards Magnetic Resonance Imaging (MRI)-guided radiotherapy treatments. Moreover, with the introduction of hybrid MRI-Positron Emission Tomography (PET) scanners, deriving sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits generalisation to diverse clinical settings. Moreover, creating centralised multi-centre datasets may pose privacy concerns. To address these issues, we introduce FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for MRI-to-sCT in brain imaging. This is among the first applications of FL for MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent dataset from a centre outside the federation. On the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively. These results show acceptable performance of the federated approach, highlighting the potential of FL to enhance MRI-to-sCT, improve generalisability, and advance safe and equitable clinical applications while fostering collaboration and preserving data privacy.
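
A minimal sketch of the cross-silo federated round implied here: each centre trains locally on paired MRI/CT patches, and a server averages the weights (FedAvg-style). The tiny convolutional stand-in for the U-Net, the plain weight average, and the L1 training loss are assumptions, the last chosen only for consistency with the reported MAE metric:

```python
# FedAvg-style round: local updates at each centre, then weight averaging.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))   # stand-in for the U-Net

def local_update(global_model, mri, ct, steps=5):
    local = copy.deepcopy(global_model)
    opt = torch.optim.Adam(local.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.l1_loss(local(mri), ct)   # L1, cf. MAE reporting
        loss.backward()
        opt.step()
    return local.state_dict()

def fed_avg(states):
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

# Four centres, each with its own (synthetic) MRI/CT pairs.
centres = [(torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32)) for _ in range(4)]
for round_ in range(3):
    states = [local_update(model, mri, ct) for mri, ct in centres]
    model.load_state_dict(fed_avg(states))
```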

An Intelligent Model of Segmentation and Classification Using Enhanced Optimization-Based Attentive Mask RCNN and Recurrent MobileNet With LSTM for Multiple Sclerosis Types With Clinical Brain MRI.

Gopichand G, Bhargavi KN, Ramprasad MVS, Kodavanti PV, Padmavathi M

PubMed · Jun 1, 2025
In the healthcare sector, magnetic resonance imaging (MRI) is used for multiple sclerosis (MS) assessment, classification, and management. However, interpreting an MRI scan requires an exceptional amount of skill because abnormalities on scans are frequently inconsistent with clinical symptoms, making it difficult to convert the findings into effective treatment strategies. Furthermore, MRI is an expensive process, and its frequent use to monitor an illness increases healthcare costs. To overcome these drawbacks, this research applies advanced technological approaches to develop a deep learning system for classifying types of MS from clinical brain MRI scans. The major innovation of this model is to combine a convolutional network with attention mechanisms and recurrent deep learning for classifying the disorder; an optimization algorithm is also proposed for parameter tuning to enhance performance. A total of 3427 images were collected from a database and divided into training and testing sets. Segmentation is carried out by an adaptive and attentive mask regional convolutional neural network (AA-MRCNN), whose parameters are fine-tuned with an enhanced pine cone optimization algorithm (EPCOA) to ensure strong performance. The segmented images are then passed to a recurrent MobileNet with long short-term memory (RM-LSTM) to obtain the classification outcomes. In experimental analysis, this deep learning model achieved 95.4% accuracy, 95.3% sensitivity, and 95.4% specificity. These results show its high potential for accurately classifying MS types.
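
For the classification stage only, a hedged PyTorch sketch of the MobileNet-plus-LSTM idea follows: MobileNet features computed per MRI slice feed an LSTM, whose final state drives the class head. Hidden sizes, slice counts, and the four-class output are illustrative assumptions, and the AA-MRCNN segmentation stage is omitted:

```python
# MobileNet-feature-to-LSTM classification sketch over a slice sequence.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class MobileNetLSTM(nn.Module):
    def __init__(self, n_classes=4, hidden=128):
        super().__init__()
        backbone = mobilenet_v2(weights=None)
        self.features = backbone.features          # per-slice feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                          # x: (batch, slices, 3, H, W)
        b, s = x.shape[:2]
        f = self.pool(self.features(x.flatten(0, 1))).flatten(1)
        out, _ = self.lstm(f.view(b, s, -1))
        return self.head(out[:, -1])               # classify from last slice state

net = MobileNetLSTM()
print(net(torch.randn(2, 6, 3, 96, 96)).shape)     # torch.Size([2, 4])
```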

Deep learning for multiple sclerosis lesion classification and stratification using MRI.

Umirzakova S, Shakhnoza M, Sevara M, Whangbo TK

PubMed · Jun 1, 2025
Multiple sclerosis (MS) is a chronic neurological disease characterized by inflammation, demyelination, and neurodegeneration within the central nervous system. Conventional magnetic resonance imaging (MRI) techniques often struggle to detect small or subtle lesions, particularly in challenging regions such as the cortical gray matter and brainstem. This study introduces a novel deep learning-based approach, combined with a robust preprocessing pipeline and optimized MRI protocols, to improve the precision of MS lesion classification and stratification. We designed a convolutional neural network (CNN) architecture specifically tailored for high-resolution T2-weighted imaging (T2WI), augmented by deep learning-based reconstruction (DLR) techniques. The model incorporates dual attention mechanisms, including spatial and channel attention modules, to enhance feature extraction. A comprehensive preprocessing pipeline was employed, featuring bias field correction, skull stripping, image registration, and intensity normalization. The proposed framework was trained and validated on four publicly available datasets and evaluated using precision, sensitivity, specificity, and area under the curve (AUC) metrics. The model demonstrated exceptional performance, achieving a precision of 96.27%, sensitivity of 95.54%, specificity of 94.70%, and an AUC of 0.975. It outperformed existing state-of-the-art methods, particularly in detecting lesions in underdiagnosed regions such as the cortical gray matter and brainstem. The integration of advanced attention mechanisms enabled the model to focus on critical MRI features, leading to significant improvements in lesion classification and stratification. This study presents a novel and scalable approach for MS lesion detection and classification, offering a practical solution for clinical applications. By integrating advanced deep learning techniques with optimized MRI protocols, the proposed framework achieves superior diagnostic accuracy and generalizability, paving the way for enhanced patient care and more personalized treatment strategies. This work sets a new benchmark for MS diagnosis and management in both research and clinical practice.
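
The dual attention described here can be sketched compactly; below is a CBAM-style pairing of channel attention and spatial attention in PyTorch, offered as one plausible reading of the paper's spatial and channel modules rather than its exact design:

```python
# CBAM-style dual attention: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, r=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                 nn.Linear(ch // r, ch))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))          # global average pooling path
        mx = self.mlp(x.amax(dim=(2, 3)))           # global max pooling path
        return x * torch.sigmoid(avg + mx)[:, :, None, None]

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

block = nn.Sequential(ChannelAttention(32), SpatialAttention())
print(block(torch.randn(1, 32, 64, 64)).shape)      # torch.Size([1, 32, 64, 64])
```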

MEF-Net: Multi-scale and edge feature fusion network for intracranial hemorrhage segmentation in CT images.

Zhang X, Zhang S, Jiang Y, Tian L

PubMed · Jun 1, 2025
Intracranial Hemorrhage (ICH) refers to cerebral bleeding resulting from ruptured blood vessels within the brain. Delayed or inaccurate diagnosis and treatment of ICH can lead to death or disability, so early and precise diagnosis of intracranial hemorrhage is crucial for protecting patients' lives. Automatic segmentation of hematomas in CT images can provide doctors with essential diagnostic support and improve diagnostic efficiency. CT images of intracranial hemorrhage exhibit multi-scale, multi-target characteristics and blurred edges. This paper proposes a Multi-scale and Edge Feature Fusion Network (MEF-Net) to effectively extract multi-scale and edge features and fully fuse them through a fusion mechanism. The network first extracts multi-scale features and edge features through the encoder and the edge detection module respectively, then fuses the deep information and employs a multi-kernel attention module to process the shallow features, enhancing multi-target recognition. Finally, the feature maps from each module are combined to produce the segmentation result. Experimental results indicate that this method achieved average Dice scores of 0.7508 and 0.7443 on two public datasets respectively, surpassing several advanced medical image segmentation methods. The proposed MEF-Net significantly improves the accuracy of intracranial hemorrhage segmentation.
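
As a schematic of the two-branch fusion pattern, the snippet below pairs a small convolutional encoder with a fixed Sobel edge branch and concatenates the two before a segmentation head. MEF-Net's actual encoder, edge detection module, and multi-kernel attention are far richer; everything here is an illustrative stand-in:

```python
# Two-branch fusion sketch: encoder features + Sobel edge map -> seg head.
import torch
import torch.nn as nn

class SobelEdge(nn.Module):
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # Two fixed kernels: horizontal and vertical gradients.
        self.register_buffer("kernel", torch.stack([kx, kx.t()]).unsqueeze(1))

    def forward(self, x):                          # x: (b, 1, H, W) CT slice
        g = nn.functional.conv2d(x, self.kernel, padding=1)
        return g.pow(2).sum(dim=1, keepdim=True).sqrt()   # gradient magnitude

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
edge = SobelEdge()
head = nn.Conv2d(17, 1, 1)            # fuse 16 encoder channels + 1 edge channel

ct = torch.randn(1, 1, 128, 128)
fused = torch.cat([encoder(ct), edge(ct)], dim=1)
mask = torch.sigmoid(head(fused))     # hematoma probability map
print(mask.shape)                     # torch.Size([1, 1, 128, 128])
```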