
FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis.

Raggio CB, Zabaleta MK, Skupien N, Blanck O, Cicone F, Cascini GL, Zaffino P, Migliorelli L, Spadea MF

PubMed · Jun 1, 2025
The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables the calculation of doses, pushing towards Magnetic Resonance Imaging (MRI) guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits generalisation capabilities to diverse clinical settings. Moreover, creating centralised multi-centre datasets may pose privacy concerns. To address the aforementioned issues, we introduced FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for MRI-to-sCT in brain imaging. This is among the first applications of FL for MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent dataset from a centre outside the federation. In the case of the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) for the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively.
The analysis of the results showed acceptable performance of the federated approach, highlighting the potential of FL to improve the generalisability of MRI-to-sCT and to advance safe and equitable clinical applications while fostering collaboration and preserving data privacy.
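The cross-silo aggregation step described above can be sketched as a size-weighted federated average (FedAvg-style). The centres, dataset sizes, and two-parameter "models" below are hypothetical toy values, not the paper's actual U-Net parameters:

```python
# Minimal FedAvg-style sketch of cross-silo horizontal federated learning:
# each centre trains locally, then a server averages the weights,
# weighted by local dataset size. All values here are hypothetical.

def fedavg(centre_weights, centre_sizes):
    """Average per-centre model weights, weighted by local dataset size."""
    total = sum(centre_sizes)
    n_params = len(centre_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(centre_weights, centre_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * size / total
    return global_weights

# One communication round with three hypothetical centres.
local_models = [[0.2, -0.5], [0.4, -0.1], [0.0, 0.3]]
dataset_sizes = [100, 50, 50]
global_model = fedavg(local_models, dataset_sizes)
print(global_model)  # size-weighted mean of each parameter
```

In a real round, each centre would continue training from `global_model`, and no raw images would ever leave a centre, which is what preserves privacy.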

An Intelligent Model of Segmentation and Classification Using Enhanced Optimization-Based Attentive Mask RCNN and Recurrent MobileNet With LSTM for Multiple Sclerosis Types With Clinical Brain MRI.

Gopichand G, Bhargavi KN, Ramprasad MVS, Kodavanti PV, Padmavathi M

PubMed · Jun 1, 2025
In the healthcare sector, magnetic resonance imaging (MRI) is used for multiple sclerosis (MS) assessment, classification, and management. However, interpreting an MRI scan requires exceptional skill because abnormalities on scans are frequently inconsistent with clinical symptoms, making it difficult to translate findings into effective treatment strategies. Furthermore, MRI is expensive, and its frequent use to monitor an illness increases healthcare costs. To overcome these drawbacks, this research develops a deep learning system for classifying types of MS from clinical brain MRI scans. The major innovation of this model is to augment the convolutional network with an attention mechanism and recurrent deep learning for classifying the disorder; an optimization algorithm is also proposed for parameter tuning to enhance performance. Initially, 3427 images are collected from the database and divided into training and testing sets. Segmentation is carried out by an adaptive and attentive mask regional convolutional neural network (AA-MRCNN), whose parameters are finely tuned with an enhanced pine cone optimization algorithm (EPCOA) to guarantee efficiency. The segmented image is then passed to a recurrent MobileNet with long short-term memory (RM-LSTM) to obtain the classification outcomes. In experimental analysis, this deep learning model achieved 95.4% accuracy, 95.3% sensitivity, and 95.4% specificity, demonstrating high potential for appropriately classifying MS.
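The reported accuracy, sensitivity, and specificity follow from a standard confusion matrix; a minimal sketch (the counts below are hypothetical, not the study's data):

```python
# Hedged sketch: deriving accuracy, sensitivity, and specificity from
# confusion-matrix counts. The counts below are hypothetical.

def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=95, fp=5, tn=95, fn=5)
print(acc, sens, spec)  # 0.95 0.95 0.95
```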

Deep learning for multiple sclerosis lesion classification and stratification using MRI.

Umirzakova S, Shakhnoza M, Sevara M, Whangbo TK

PubMed · Jun 1, 2025
Multiple sclerosis (MS) is a chronic neurological disease characterized by inflammation, demyelination, and neurodegeneration within the central nervous system. Conventional magnetic resonance imaging (MRI) techniques often struggle to detect small or subtle lesions, particularly in challenging regions such as the cortical gray matter and brainstem. This study introduces a novel deep learning-based approach, combined with a robust preprocessing pipeline and optimized MRI protocols, to improve the precision of MS lesion classification and stratification. We designed a convolutional neural network (CNN) architecture specifically tailored for high-resolution T2-weighted imaging (T2WI), augmented by deep learning-based reconstruction (DLR) techniques. The model incorporates dual attention mechanisms, including spatial and channel attention modules, to enhance feature extraction. A comprehensive preprocessing pipeline was employed, featuring bias field correction, skull stripping, image registration, and intensity normalization. The proposed framework was trained and validated on four publicly available datasets and evaluated using precision, sensitivity, specificity, and area under the curve (AUC) metrics. The model demonstrated exceptional performance, achieving a precision of 96.27 %, sensitivity of 95.54 %, specificity of 94.70 %, and an AUC of 0.975. It outperformed existing state-of-the-art methods, particularly in detecting lesions in underdiagnosed regions such as the cortical gray matter and brainstem. The integration of advanced attention mechanisms enabled the model to focus on critical MRI features, leading to significant improvements in lesion classification and stratification. This study presents a novel and scalable approach for MS lesion detection and classification, offering a practical solution for clinical applications. 
By integrating advanced deep learning techniques with optimized MRI protocols, the proposed framework achieves superior diagnostic accuracy and generalizability, paving the way for enhanced patient care and more personalized treatment strategies. This work sets a new benchmark for MS diagnosis and management in both research and clinical practice.
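As an illustration of the channel-attention idea this model builds on, here is a minimal, parameter-free squeeze-and-excitation-style sketch in pure Python. It is hypothetical: a real module, like the one in the paper, would learn the excitation weights, and this is not the authors' implementation:

```python
import math

def channel_attention(feature_maps):
    """Squeeze-and-excitation-style channel attention on a list of 2-D maps.
    Parameter-free toy variant (global average pooling + sigmoid gate);
    a learned module would replace the gate with a small trained MLP."""
    # Squeeze: one descriptor per channel via global average pooling.
    descriptors = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
                   for fm in feature_maps]
    # Excite: a sigmoid gate per channel.
    gates = [1.0 / (1.0 + math.exp(-d)) for d in descriptors]
    # Reweight every value in each channel by its gate.
    return [[[v * g for v in row] for row in fm]
            for fm, g in zip(feature_maps, gates)]

# Hypothetical 2-channel, 2x2 feature tensor.
maps = [[[1.0, 1.0], [1.0, 1.0]],
        [[0.0, 0.0], [0.0, 0.0]]]
reweighted = channel_attention(maps)
```

The active channel is scaled by its gate while the empty channel stays at zero, which is the sense in which attention lets the network emphasise informative channels.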

MEF-Net: Multi-scale and edge feature fusion network for intracranial hemorrhage segmentation in CT images.

Zhang X, Zhang S, Jiang Y, Tian L

PubMed · Jun 1, 2025
Intracranial Hemorrhage (ICH) refers to cerebral bleeding resulting from ruptured blood vessels within the brain. Delayed or inaccurate diagnosis and treatment of ICH can lead to death or disability, so early and precise diagnosis is crucial for protecting patients' lives. Automatic segmentation of hematomas in CT images can provide doctors with essential diagnostic support and improve diagnostic efficiency. CT images of intracranial hemorrhage exhibit characteristics such as multiple scales, multiple targets, and blurred edges. This paper proposes a Multi-scale and Edge Feature Fusion Network (MEF-Net) to effectively extract multi-scale and edge features and fully fuse them through a fusion mechanism. The network first extracts the multi-scale features and edge features of the image through the encoder and the edge detection module respectively, then fuses the deep information, and employs the multi-kernel attention module to process the shallow features, enhancing multi-target recognition. Finally, the feature maps from each module are combined to produce the segmentation result. Experimental results indicate that this method achieved average Dice scores of 0.7508 and 0.7443 on two public datasets, surpassing several state-of-the-art medical image segmentation methods. The proposed MEF-Net significantly improves the accuracy of intracranial hemorrhage segmentation.
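The Dice score used to evaluate segmentations like these can be sketched on flattened binary masks; the masks below are hypothetical:

```python
# Minimal sketch of the Dice similarity coefficient used to score
# segmentations: Dice = 2|P ∩ T| / (|P| + |T|).

def dice_score(pred, truth):
    """Dice coefficient of two flattened binary masks."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    return 2.0 * intersection / (sum(pred) + sum(truth))

pred  = [1, 1, 0, 1, 0, 0]  # hypothetical predicted hematoma mask
truth = [1, 0, 0, 1, 1, 0]  # hypothetical expert annotation
print(dice_score(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```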

Metabolic Dysfunction-Associated Steatotic Liver Disease Is Associated With Accelerated Brain Ageing: A Population-Based Study.

Wang J, Yang R, Miao Y, Zhang X, Paillard-Borg S, Fang Z, Xu W

PubMed · Jun 1, 2025
Metabolic dysfunction-associated steatotic liver disease (MASLD) is linked to cognitive decline and dementia risk. We aimed to investigate the association between MASLD and brain ageing and explore the role of low-grade inflammation. Within the UK Biobank, 30 386 chronic neurological disorders-free participants who underwent brain magnetic resonance imaging (MRI) scans were included. Individuals were categorised into no MASLD/related SLD and MASLD/related SLD (including subtypes of MASLD, MASLD with increased alcohol intake [MetALD] and MASLD with other combined aetiology). Brain age was estimated using machine learning by 1079 brain MRI phenotypes. Brain age gap (BAG) was calculated as the difference between brain age and chronological age. Low-grade inflammation (INFLA) was calculated based on white blood cell count, platelet, neutrophil granulocyte to lymphocyte ratio and C-reactive protein. Data were analysed using linear regression and structural equation models. At baseline, 7360 (24.2%) participants had MASLD/related SLD. Compared to participants with no MASLD/related SLD, those with MASLD/related SLD had significantly larger BAG (β = 0.86, 95% CI = 0.70, 1.02), as well as those with MASLD (β = 0.59, 95% CI = 0.41, 0.77) or MetALD (β = 1.57, 95% CI = 1.31, 1.83). The association between MASLD/related SLD and larger BAG was significant across middle-aged (< 60) and older (≥ 60) adults, males and females, and APOE ɛ4 carriers and non-carriers. INFLA mediated 13.53% of the association between MASLD/related SLD and larger BAG (p < 0.001). MASLD/related SLD, as well as MASLD and MetALD, is associated with accelerated brain ageing, even among middle-aged adults and APOE ɛ4 non-carriers. Low-grade systemic inflammation may partially mediate this association.
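The brain age gap (BAG) defined above is simply estimated brain age minus chronological age. A minimal sketch with hypothetical ages; note this computes an unadjusted group difference, unlike the covariate-adjusted β estimates the study reports:

```python
# Hedged sketch of the brain age gap (BAG): machine-learning-estimated
# brain age minus chronological age. All (brain age, chronological age)
# pairs below are hypothetical.

def brain_age_gap(brain_age, chronological_age):
    return brain_age - chronological_age

def mean_bag(pairs):
    return sum(brain_age_gap(b, c) for b, c in pairs) / len(pairs)

masld    = [(64.0, 60.0), (58.5, 57.0)]  # hypothetical MASLD/related SLD group
no_masld = [(59.0, 60.0), (55.0, 55.5)]  # hypothetical comparison group
print(mean_bag(masld) - mean_bag(no_masld))  # unadjusted group difference in BAG
```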

A magnetic resonance imaging (MRI)-based deep learning radiomics model predicts recurrence-free survival in lung cancer patients after surgical resection of brain metastases.

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

PubMed · Jun 1, 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres; 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRIs. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regressions, respectively. Handcrafted and deep learning (DL) signatures were constructed from pretreatment BrMs MR images using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence in lung cancer patients after BrMs resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and clinical-morphological MRI model in predicting RFS (P < 0.05). The DLRM successfully classified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
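The Harrell C-index used to evaluate the DLRM measures concordance between predicted risk and observed survival; a minimal sketch with hypothetical follow-up data:

```python
# Minimal sketch of Harrell's C-index: among comparable patient pairs,
# the fraction where the higher predicted risk fails earlier. A pair is
# comparable when the earlier time corresponds to an observed event.
# The follow-up data below are hypothetical.

def harrell_c_index(times, events, risk_scores):
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / comparable

times  = [6, 12, 18, 24]          # months to recurrence or censoring
events = [1, 1, 0, 1]             # 1 = recurrence observed, 0 = censored
risks  = [0.9, 0.7, 0.4, 0.2]     # hypothetical model risk scores
print(harrell_c_index(times, events, risks))  # 1.0: risk order matches outcomes
```

A C-index of 0.5 corresponds to random ranking, so the reported 0.74 on the external test set indicates the model orders patients by recurrence risk substantially better than chance.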

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

PubMed · Jun 1, 2025
Radiomics allows extraction from medical images of quantitative features that are able to reveal tissue patterns that are generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
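First-order radiomic features of the kind this review surveys reduce an ROI's intensity distribution to summary statistics; a minimal sketch of a hypothetical subset (standardised definitions, e.g. IBSI, specify many more features and exact conventions):

```python
import math

# Hedged sketch of a few first-order radiomic features computed from an
# ROI's intensity values. This is a hypothetical minimal subset.

def first_order_features(roi_values):
    n = len(roi_values)
    mean = sum(roi_values) / n
    variance = sum((v - mean) ** 2 for v in roi_values) / n
    return {
        "mean": mean,
        "variance": variance,
        "std": math.sqrt(variance),
        "energy": sum(v ** 2 for v in roi_values),
        "range": max(roi_values) - min(roi_values),
    }

roi = [10.0, 12.0, 11.0, 15.0, 12.0]  # hypothetical ROI intensities
print(first_order_features(roi))
```

In a statistical or machine learning framework, vectors of such features (alongside texture and shape families) become the inputs for the diagnosis and prognosis tasks the review discusses.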

Advancing Intracranial Aneurysm Detection: A Comprehensive Systematic Review and Meta-analysis of Deep Learning Models Performance, Clinical Integration, and Future Directions.

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

PubMed · Jun 1, 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90 % and specificity of 94 %, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83 % to 85 % in the patient-wise scenario and from 93 % to 95 % in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82 % to 96 % in the patient-wise scenario and from 82 % to 88 % in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100 % for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
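Fleiss' κ, used here to quantify interrater agreement, compares observed agreement against the agreement expected by chance from category marginals; a minimal sketch with hypothetical rating counts:

```python
# Minimal sketch of Fleiss' kappa on a subjects x categories count matrix,
# where ratings[i][k] is the number of raters assigning subject i to
# category k. The rating counts below are hypothetical.

def fleiss_kappa(ratings):
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    # Observed per-subject agreement P_i, averaged over subjects.
    p_bar = sum((sum(c * c for c in row) - n_raters)
                / (n_raters * (n_raters - 1))
                for row in ratings) / n_subjects
    # Chance agreement from the category marginals.
    totals = [sum(row[k] for row in ratings) for k in range(len(ratings[0]))]
    p_e = sum((t / (n_subjects * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 3 raters label 4 scans as aneurysm-present vs absent.
counts = [[3, 0], [0, 3], [3, 0], [2, 1]]
print(fleiss_kappa(counts))
```

On this toy matrix κ ≈ 0.63, in the same range as the pre-assistance 0.668 reported above; values approaching 1 indicate near-perfect agreement.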

Virtual monochromatic image-based automatic segmentation strategy using deep learning method.

Chen L, Yu S, Chen Y, Wei X, Yang J, Guo C, Zeng W, Yang C, Zhang J, Li T, Lin C, Le X, Zhang Y

PubMed · Jun 1, 2025
The image quality of single-energy CT (SECT) limits the accuracy of automatic segmentation. Dual-energy CT (DECT) may potentially improve automatic segmentation, yet the performance and strategy have not been investigated thoroughly. Based on DECT-generated virtual monochromatic images (VMIs), this study proposed a novel deep learning model (MIAU-Net) and evaluated its segmentation performance on the head organs-at-risk (OARs). The VMIs from 40 keV to 190 keV were retrospectively generated at intervals of 10 keV using the DECT of 46 patients. Images with expert delineation were used for training, validating, and testing MIAU-Net for automatic segmentation. The performance of MIAU-Net was compared with the existing U-Net, Attention-UNet, nnU-Net and TransFuse methods based on the Dice Similarity Coefficient (DSC). Correlation analysis was performed to evaluate and optimize the impact of different virtual energies on the accuracy of segmentation. Using MIAU-Net, average DSCs across all virtual energy levels were 93.78 %, 81.75 %, 84.46 %, 92.85 %, 94.40 %, and 84.75 % for the brain stem, optic chiasm, lens, mandible, eyes, and optic nerves, respectively, higher than those of previous publications using SECT. MIAU-Net achieved the highest average DSC (88.84 %) and the fewest parameters (14.54 M) among all tested models. The results suggested that 60 keV-80 keV is the optimal VMI energy level for soft tissue delineation, while 100 keV is optimal for skeleton segmentation. This work proposed and validated a novel deep learning model for automatic segmentation based on DECT, suggesting potential advantages and OAR-specific optimal energies of using VMIs for automatic delineation.

Artificial intelligence in fetal brain imaging: Advancements, challenges, and multimodal approaches for biometric and structural analysis.

Wang L, Fatemi M, Alizad A

PubMed · Jun 1, 2025
Artificial intelligence (AI) is transforming fetal brain imaging by addressing key challenges in diagnostic accuracy, efficiency, and data integration in prenatal care. This review explores AI's application in enhancing fetal brain imaging through ultrasound (US) and magnetic resonance imaging (MRI), with a particular focus on multimodal integration to leverage their complementary strengths. By critically analyzing state-of-the-art AI methodologies, including deep learning frameworks and attention-based architectures, this study highlights significant advancements alongside persistent challenges. Notable barriers include the scarcity of diverse and high-quality datasets, computational inefficiencies, and ethical concerns surrounding data privacy and security. Special attention is given to multimodal approaches that integrate US and MRI, combining the accessibility and real-time imaging of US with the superior soft tissue contrast of MRI to improve diagnostic precision. Furthermore, this review emphasizes the transformative potential of AI in fostering clinical adoption through innovations such as real-time diagnostic tools and human-AI collaboration frameworks. By providing a comprehensive roadmap for future research and implementation, this study underscores AI's potential to redefine fetal imaging practices, enhance diagnostic accuracy, and ultimately improve perinatal care outcomes.