
Metabolic Dysfunction-Associated Steatotic Liver Disease Is Associated With Accelerated Brain Ageing: A Population-Based Study.

Wang J, Yang R, Miao Y, Zhang X, Paillard-Borg S, Fang Z, Xu W

PubMed · Jun 1 2025
Metabolic dysfunction-associated steatotic liver disease (MASLD) is linked to cognitive decline and dementia risk. We aimed to investigate the association between MASLD and brain ageing and to explore the role of low-grade inflammation. Within the UK Biobank, 30 386 participants free of chronic neurological disorders who underwent brain magnetic resonance imaging (MRI) scans were included. Individuals were categorised into no MASLD/related SLD and MASLD/related SLD (including the subtypes MASLD, MASLD with increased alcohol intake [MetALD] and MASLD with other combined aetiology). Brain age was estimated with machine learning from 1079 brain MRI phenotypes. The brain age gap (BAG) was calculated as the difference between brain age and chronological age. A low-grade inflammation score (INFLA) was calculated from white blood cell count, platelet count, neutrophil granulocyte-to-lymphocyte ratio and C-reactive protein. Data were analysed using linear regression and structural equation models. At baseline, 7360 (24.2%) participants had MASLD/related SLD. Compared with participants without MASLD/related SLD, those with MASLD/related SLD had a significantly larger BAG (β = 0.86, 95% CI = 0.70, 1.02), as did those with MASLD (β = 0.59, 95% CI = 0.41, 0.77) or MetALD (β = 1.57, 95% CI = 1.31, 1.83). The association between MASLD/related SLD and larger BAG was significant across middle-aged (< 60) and older (≥ 60) adults, males and females, and APOE ɛ4 carriers and non-carriers. INFLA mediated 13.53% of the association between MASLD/related SLD and larger BAG (p < 0.001). MASLD/related SLD, as well as MASLD and MetALD, is associated with accelerated brain ageing, even among middle-aged adults and APOE ɛ4 non-carriers. Low-grade systemic inflammation may partially mediate this association.
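As a rough illustration of the quantities reported above, the sketch below computes a brain age gap and a simple product-of-coefficients estimate of the proportion of an exposure–outcome association mediated by an inflammation score. Column names, covariates, and the input file are hypothetical; the study itself used structural equation models, so this is only an analogous, simplified calculation.

```python
# Minimal sketch (not the study's code): brain age gap (BAG) and a
# product-of-coefficients estimate of the proportion mediated by INFLA.
# Column names (brain_age, chron_age, masld, infla, sex) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukb_brain_age.csv")          # hypothetical input file
df["bag"] = df["brain_age"] - df["chron_age"]  # BAG = brain age - chronological age

# Path a: exposure -> mediator; path b: mediator -> outcome (adjusted for exposure)
a = smf.ols("infla ~ masld + chron_age + sex", data=df).fit().params["masld"]
b = smf.ols("bag ~ infla + masld + chron_age + sex", data=df).fit().params["infla"]
total = smf.ols("bag ~ masld + chron_age + sex", data=df).fit().params["masld"]

print(f"Proportion mediated by INFLA: {a * b / total:.2%}")
```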

A magnetic resonance imaging (MRI)-based deep learning radiomics model predicts recurrence-free survival in lung cancer patients after surgical resection of brain metastases.

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

PubMed · Jun 1 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres; 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRI. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regression. Handcrafted and deep learning (DL) signatures were constructed from pretreatment BrM MR images using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures, based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence in lung cancer patients after BrM resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and clinical-morphological MRI model in predicting RFS (P < 0.05). The DLRM successfully stratified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
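A hedged sketch of the general recipe this abstract describes, combining several prognostic signatures in a multivariate Cox model and scoring it with Harrell's C-index, is shown below using the lifelines library. Column and file names are hypothetical; this is not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the study's code): fuse clinical, morphological,
# handcrafted and deep-learning signatures in a multivariate Cox model and report
# Harrell's C-index on an external cohort.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

train = pd.read_csv("train_cohort.csv")     # hypothetical files
test = pd.read_csv("external_test.csv")

features = ["clinical_score", "morph_score", "handcrafted_sig", "dl_sig"]  # hypothetical columns
cph = CoxPHFitter()
cph.fit(train[features + ["rfs_months", "event"]],
        duration_col="rfs_months", event_col="event")

# Higher partial hazard means higher risk, so negate it for the concordance index
risk = cph.predict_partial_hazard(test[features])
print("External C-index:",
      concordance_index(test["rfs_months"], -risk, test["event"]))
```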

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

PubMed · Jun 1 2025
Radiomics enables the extraction of quantitative features from medical images that can reveal tissue patterns generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While the radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
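For readers unfamiliar with the extraction step the review builds on, a minimal sketch using the pyradiomics package follows; the image and mask paths are hypothetical, and the chosen feature classes are only examples, not a recommendation from the review.

```python
# Minimal sketch: extract first-order and texture (GLCM) radiomic features
# from an image and a lesion mask using pyradiomics.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")   # grey-level co-occurrence matrix features

# hypothetical NIfTI paths for the image and its segmentation mask
features = extractor.execute("t1_image.nii.gz", "lesion_mask.nii.gz")
for name, value in features.items():
    if name.startswith("original_"):
        print(name, value)
```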

Advancing Intracranial Aneurysm Detection: A Comprehensive Systematic Review and Meta-analysis of Deep Learning Models Performance, Clinical Integration, and Future Directions.

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

PubMed · Jun 1 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90% and specificity of 94%, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83% to 85% in the patient-wise scenario and from 93% to 95% in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82% to 96% in the patient-wise scenario and from 82% to 88% in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100% for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
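A hedged sketch of how interrater agreement of this kind is typically computed with Fleiss' κ is shown below, using statsmodels on a synthetic ratings matrix; the ratings are illustrative and are not the review's data.

```python
# Minimal sketch: Fleiss' kappa for agreement among several raters on the
# same set of cases (1 = "aneurysm present", 0 = "absent").
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = cases, columns = raters (synthetic example)
ratings = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
])

table, _ = aggregate_raters(ratings)   # per-case counts for each category
print("Fleiss' kappa:", fleiss_kappa(table))
```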

Virtual monochromatic image-based automatic segmentation strategy using deep learning method.

Chen L, Yu S, Chen Y, Wei X, Yang J, Guo C, Zeng W, Yang C, Zhang J, Li T, Lin C, Le X, Zhang Y

PubMed · Jun 1 2025
The image quality of single-energy CT (SECT) limits the accuracy of automatic segmentation. Dual-energy CT (DECT) may potentially improve automatic segmentation, yet the performance and strategy have not been investigated thoroughly. Based on DECT-generated virtual monochromatic images (VMIs), this study proposed a novel deep learning model (MIAU-Net) and evaluated its segmentation performance on head organs-at-risk (OARs). VMIs from 40 keV to 190 keV were retrospectively generated at intervals of 10 keV from the DECT scans of 46 patients. Images with expert delineation were used for training, validating, and testing MIAU-Net for automatic segmentation. The performance of MIAU-Net was compared with the existing U-Net, Attention U-Net, nnU-Net and TransFuse methods based on the Dice similarity coefficient (DSC). Correlation analysis was performed to evaluate and optimize the impact of different virtual energies on segmentation accuracy. Using MIAU-Net, average DSCs across all virtual energy levels were 93.78%, 81.75%, 84.46%, 92.85%, 94.40%, and 84.75% for the brain stem, optic chiasm, lens, mandible, eyes, and optic nerves, respectively, higher than previous publications using SECT. MIAU-Net achieved the highest average DSC (88.84%) and the fewest parameters (14.54 M) of all tested models. The results suggest that 60-80 keV is the optimal VMI energy range for soft-tissue delineation, while 100 keV is optimal for skeleton segmentation. This work proposed and validated a novel deep learning model for automatic segmentation based on DECT, suggesting potential advantages and OAR-specific optimal energies of using VMIs for automatic delineation.
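The DSC used to compare automatic and expert delineations can be computed directly on binary masks; a small NumPy sketch (not from the paper) follows, with a toy example.

```python
# Minimal sketch: Dice similarity coefficient between two binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# toy example: two overlapping 2D masks
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:4, 1:3] = 1
print(dice_coefficient(a, b))   # ≈ 0.8
```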

Artificial intelligence in fetal brain imaging: Advancements, challenges, and multimodal approaches for biometric and structural analysis.

Wang L, Fatemi M, Alizad A

PubMed · Jun 1 2025
Artificial intelligence (AI) is transforming fetal brain imaging by addressing key challenges in diagnostic accuracy, efficiency, and data integration in prenatal care. This review explores AI's application in enhancing fetal brain imaging through ultrasound (US) and magnetic resonance imaging (MRI), with a particular focus on multimodal integration to leverage their complementary strengths. By critically analyzing state-of-the-art AI methodologies, including deep learning frameworks and attention-based architectures, this study highlights significant advancements alongside persistent challenges. Notable barriers include the scarcity of diverse and high-quality datasets, computational inefficiencies, and ethical concerns surrounding data privacy and security. Special attention is given to multimodal approaches that integrate US and MRI, combining the accessibility and real-time imaging of US with the superior soft tissue contrast of MRI to improve diagnostic precision. Furthermore, this review emphasizes the transformative potential of AI in fostering clinical adoption through innovations such as real-time diagnostic tools and human-AI collaboration frameworks. By providing a comprehensive roadmap for future research and implementation, this study underscores AI's potential to redefine fetal imaging practices, enhance diagnostic accuracy, and ultimately improve perinatal care outcomes.

Deep learning-based MRI reconstruction with Artificial Fourier Transform Network (AFTNet).

Yang Y, Zhang Y, Li Z, Tian JS, Dagommer M, Guo J

PubMed · Jun 1 2025
Deep complex-valued neural networks (CVNNs) provide a powerful way to leverage complex number operations and representations and have succeeded in several phase-based applications. However, previous networks have not fully explored the impact of complex-valued networks in the frequency domain. Here, we introduce a unified complex-valued deep learning framework - Artificial Fourier Transform Network (AFTNet) - which combines domain-manifold learning and CVNNs. AFTNet can be readily used to solve image inverse problems in domain transformation, especially for accelerated magnetic resonance imaging (MRI) reconstruction and other applications. While conventional methods typically utilize magnitude images or treat the real and imaginary components of k-space data as separate channels, our approach directly processes raw k-space data in the frequency domain, utilizing complex-valued operations. This allows for a mapping between the frequency (k-space) and image domain to be determined through cross-domain learning. We show that AFTNet achieves superior accelerated MRI reconstruction compared to existing approaches. Furthermore, our approach can be applied to various tasks, such as denoised magnetic resonance spectroscopy (MRS) reconstruction and datasets with various contrasts. The AFTNet presented here is a valuable preprocessing component for different preclinical studies and provides an innovative alternative for solving inverse problems in imaging and spectroscopy. The code is available at: https://github.com/yanting-yang/AFT-Net.
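The following is a minimal illustrative sketch, not the released AFTNet code (available at the GitHub link above). It shows the ingredients the abstract describes, complex-valued operations applied directly to raw k-space followed by a transform into the image domain, using a complex convolution built from two real convolutions in PyTorch; all shapes and layer sizes are arbitrary.

```python
# Minimal sketch: a complex-valued convolution on raw k-space, then an
# inverse FFT into the image domain.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """(a + ib) * (w_r + i w_i) = (a w_r - b w_i) + i (a w_i + b w_r)."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, z: torch.Tensor) -> torch.Tensor:   # z: complex (N, C, H, W)
        a, b = z.real, z.imag
        return torch.complex(self.conv_r(a) - self.conv_i(b),
                             self.conv_i(a) + self.conv_r(b))

kspace = torch.randn(1, 1, 64, 64, dtype=torch.cfloat)    # synthetic k-space data
refined = ComplexConv2d(1, 1)(kspace)                      # frequency-domain learning
image = torch.fft.ifft2(torch.fft.ifftshift(refined))      # k-space -> image domain
print(image.abs().shape)                                   # magnitude image, (1, 1, 64, 64)
```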

Toward Noninvasive High-Resolution In Vivo pH Mapping in Brain Tumors by ³¹P-Informed deepCEST MRI.

Schüre JR, Rajput J, Shrestha M, Deichmann R, Hattingen E, Maier A, Nagel AM, Dörfler A, Steidl E, Zaiss M

PubMed · Jun 1 2025
The intracellular pH (pHi) is critical for understanding various pathologies, including brain tumors. While conventional pHi measurement through ³¹P-MRS suffers from low spatial resolution and long scan times, ¹H-based APT-CEST imaging offers higher resolution with shorter scan times. This study aims to directly predict ³¹P-pHi maps from CEST data using a fully connected neural network. Fifteen tumor patients were scanned on a 3-T Siemens PRISMA scanner and underwent ¹H-based CEST and T1 measurements as well as ³¹P-MRS. A neural network was trained voxel-wise on CEST and T1 data to predict ³¹P-pHi values, using data from 11 patients for training and 4 for testing. The predicted pHi maps were additionally down-sampled to the original ³¹P-pHi resolution to allow calculation of the RMSE and correlation analysis, while the higher-resolution predictions were compared with conventional CEST metrics. The results demonstrated a general correspondence between the predicted deepCEST pHi maps and the measured ³¹P-pHi in test patients. However, slight discrepancies were also observed, with an RMSE of 0.04 pH units in tumor regions. High-resolution predictions revealed tumor heterogeneity and features not visible in conventional CEST data, suggesting the model captures unique pH information and is not simply a T1 segmentation. The deepCEST pHi neural network exploits the hidden pH sensitivity of APT-CEST and offers pHi maps with higher spatial resolution and shorter scan time compared with ³¹P-MRS. Although this approach is constrained by the limitations of the acquired data, it can be extended with additional CEST features in future studies, thereby offering a promising approach for 3D pH imaging in a clinical environment.
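A hypothetical sketch of the kind of voxel-wise fully connected network described above is given below; the layer sizes and number of CEST features are assumptions, not the published deepCEST architecture.

```python
# Minimal sketch: a voxel-wise MLP mapping CEST spectral features plus T1
# to a predicted intracellular pH value.
import torch
import torch.nn as nn

n_cest_features = 56            # assumed number of CEST features per voxel
model = nn.Sequential(
    nn.Linear(n_cest_features + 1, 128),   # +1 input for the T1 value
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                      # predicted 31P-informed pH_i
)

voxels = torch.randn(1024, n_cest_features + 1)            # synthetic voxel-wise inputs
ph_pred = model(voxels).squeeze(-1)
loss = nn.functional.mse_loss(ph_pred, torch.full_like(ph_pred, 7.0))  # dummy pH target
print(ph_pred.shape, loss.item())
```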

An Optimized Framework of QSM Mask Generation Using Deep Learning: QSMmask-Net.

Lee G, Jung W, Sakaie KE, Oh SH

PubMed · Jun 1 2025
Quantitative susceptibility mapping (QSM) provides the spatial distribution of magnetic susceptibility within tissues through sequential steps: phase unwrapping and echo combination, mask generation, background field removal, and dipole inversion. Accurate mask generation is crucial, as masks that exclude regions outside the brain and contain no holes are necessary to minimize errors and streaking artifacts during QSM reconstruction. Variations in susceptibility values can arise from different mask generation methods, highlighting the importance of optimizing mask creation. In this study, we propose QSMmask-net, a deep neural network-based method for generating precise QSM masks. QSMmask-net achieved the highest Dice score compared to other mask generation methods. Mean susceptibility values obtained with QSMmask-net masks showed the smallest differences from manual masks (ground truth) in simulations and healthy controls (no significant difference, p > 0.05). Linear regression analysis confirmed a strong correlation with manual masks for hemorrhagic lesions (slope = 0.9814 ± 0.007, intercept = 0.0031 ± 0.001, R² = 0.9992, p < 0.05). We have demonstrated that the choice of mask generation method can affect susceptibility value estimation. QSMmask-net reduces the labor required for mask generation while providing mask quality comparable to manual methods. The proposed method enables users without specialized expertise to create optimized masks, potentially broadening the applicability of QSM.

Exploring the Limitations of Virtual Contrast Prediction in Brain Tumor Imaging: A Study of Generalization Across Tumor Types and Patient Populations.

Caragliano AN, Macula A, Colombo Serra S, Fringuello Mingo A, Morana G, Rossi A, Alì M, Fazzini D, Tedoldi F, Valbusa G, Bifone A

PubMed · Jun 1 2025
Accurate and timely diagnosis of brain tumors is critical for patient management and treatment planning. Magnetic resonance imaging (MRI) is a widely used modality for brain tumor detection and characterization, often aided by the administration of gadolinium-based contrast agents (GBCAs) to improve tumor visualization. Recently, deep learning models have shown remarkable success in predicting contrast enhancement in medical images, thereby reducing the need for GBCAs and potentially minimizing patient discomfort and risks. In this paper, we present a study investigating the generalization capabilities of a neural network trained to predict full-contrast brain tumor images from noncontrast MRI scans. While initial results exhibited promising performance on a specific tumor type at a certain stage using a specific dataset, our attempts to extend this success to other tumor types and diverse patient populations revealed unexpected challenges and limitations. Through a rigorous analysis of the factors contributing to these negative results, we aim to shed light on the complexities of generalizing contrast-enhancement prediction in brain tumor imaging, offering valuable insights for future research and clinical applications.