Page 12 of 91901 results

Deep Learning Reconstruction Combined With Conventional Acceleration Improves Image Quality of 3 T Brain MRI and Does Not Impact Quantitative Diffusion Metrics.

Wilpert C, Russe MF, Weiss J, Voss C, Rau S, Strecker R, Reisert M, Bedin R, Urbach H, Zaitsev M, Bamberg F, Rau A

pubmed · Aug 1 2025
Deep learning reconstruction of magnetic resonance imaging (MRI) makes it possible either to improve the image quality of accelerated sequences or to generate high-resolution data. We evaluated the interaction of conventional acceleration and Deep Resolve Boost (DRB)-based reconstruction in single-shot echo-planar imaging (ssEPI) diffusion-weighted imaging (DWI) with respect to image quality in 3 T brain MRI and compared it with a state-of-the-art DWI sequence. In this prospective study, 24 patients received a standard-of-care ssEPI DWI sequence and 5 additional adapted ssEPI DWI sequences, 3 of them with DRB reconstruction. Qualitative analysis encompassed ratings of image quality, noise, sharpness, and artifacts. Quantitative analysis compared apparent diffusion coefficient (ADC) values region-wise between the different DWI sequences. Intraclass correlations, paired-samples t tests, Wilcoxon signed rank tests, and weighted Cohen κ were used. Compared with the reference standard, acquisition time in accelerated DWI was reduced by up to 50% (from 75 to 39 seconds; P < 0.001). All tested DRB-reconstructed sequences showed significantly improved image quality and sharpness and reduced noise (P < 0.001). The highest image quality was observed for the combination of conventional acceleration and DL reconstruction. In singular slices, more artifacts were observed for DRB-reconstructed sequences (P < 0.001). While ADC values were in general highly consistent across sequences, differences in ADC values increased with increasing acceleration and application of DRB. In rare cases, falsely pathological ADC values were observed near the frontal poles and the optic chiasm, attributable to susceptibility artifacts from the adjacent sinuses. In this comparative study, we found that the combination of conventional acceleration and DRB reconstruction improves image quality and enables faster acquisition of ssEPI DWI.
Nevertheless, a tradeoff must be considered between stronger acceleration, which risks more artifacts, and higher resolution, which prolongs acquisition time, especially for applications in cerebral MRI.
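The ADC values compared in this study come from the standard monoexponential DWI signal model. As an illustration of that model only (not the authors' pipeline), a two-point ADC estimate can be sketched as:

```python
import math

def adc_from_signals(s0: float, sb: float, b: float) -> float:
    """Apparent diffusion coefficient (mm^2/s) from a two-point DWI fit.
    Monoexponential model: S_b = S_0 * exp(-b * ADC),
    so ADC = ln(S_0 / S_b) / b, with b in s/mm^2."""
    if s0 <= 0 or sb <= 0 or b <= 0:
        raise ValueError("signals and b-value must be positive")
    return math.log(s0 / sb) / b

# Hypothetical values: S0 = 1000, S(b=1000 s/mm^2) = 500
# gives ADC = ln(2)/1000, roughly 0.69e-3 mm^2/s (normal brain range)
```

The region-wise comparison in the study amounts to computing this quantity per voxel and averaging within regions for each sequence.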

Transparent brain tumor detection using DenseNet169 and LIME.

Abraham LA, Palanisamy G, Veerapu G

pubmed · Aug 1 2025
A crucial area of research in medical imaging is brain tumor classification, which greatly aids diagnosis and facilitates treatment planning. This paper proposes DenseNet169-LIME-TumorNet, a deep learning model that combines DenseNet169 with LIME to boost both the performance and the interpretability of brain tumor classification. The model was trained and evaluated on the publicly available Brain Tumor MRI Dataset containing 2,870 images spanning three tumor types. DenseNet169-LIME-TumorNet achieves a classification accuracy of 98.78%, outperforming widely used architectures including Inception V3, ResNet50, MobileNet V2, EfficientNet variants, and other DenseNet configurations. The integration of LIME provides visual explanations that enhance transparency and reliability in clinical decision-making. Furthermore, the model demonstrates minimal computational overhead, enabling faster inference and deployment in resource-constrained clinical environments and highlighting its practical utility for real-time diagnostic support. Future work should focus on improving generalization through multi-modal learning, hybrid deep learning architectures, and real-time applications for AI-assisted diagnosis.

Deep Learning-Based Signal Amplification of T1-Weighted Single-Dose Images Improves Metastasis Detection in Brain MRI.

Haase R, Pinetz T, Kobler E, Bendella Z, Zülow S, Schievelkamp AH, Schmeel FC, Panahabadi S, Stylianou AM, Paech D, Foltyn-Dumitru M, Wagner V, Schlamp K, Heussel G, Holtkamp M, Heussel CP, Vahlensieck M, Luetkens JA, Schlemmer HP, Haubold J, Radbruch A, Effland A, Deuschl C, Deike K

pubmed · Aug 1 2025
Double-dose contrast-enhanced brain imaging improves tumor delineation and detection of occult metastases but is limited by concerns about the effects of gadolinium-based contrast agents on patients and the environment. The purpose of this study was to test the benefit of deep learning-based contrast signal amplification in true single-dose T1-weighted (T-SD) images, creating artificial double-dose (A-DD) images, for metastasis detection in brain magnetic resonance imaging. In this prospective, multicenter study, a deep learning-based method originally trained on noncontrast, low-dose, and T-SD brain images was applied to T-SD images of 30 participants (mean age ± SD, 58.5 ± 11.8 years; 23 women) acquired externally between November 2022 and June 2023. Four readers with different levels of experience independently reviewed T-SD and A-DD images for metastases with 4 weeks between readings. A reference reader reviewed additionally acquired true double-dose images to determine any metastases present. Performance was compared using mid-p McNemar tests for sensitivity and Wilcoxon signed rank tests for false-positive findings. All readers found more metastases using A-DD images. The 2 experienced neuroradiologists achieved the same level of sensitivity using T-SD images (62 of 91 metastases, 68.1%). While the increase in sensitivity using A-DD images was only descriptive for 1 of them (A-DD: 65 of 91 metastases, +3.3%, P = 0.424), the second neuroradiologist benefited significantly, with a sensitivity increase of 12.1% (73 of 91 metastases, P = 0.008). The 2 less experienced readers (1 resident and 1 fellow) both found significantly more metastases on A-DD images (resident, T-SD: 61.5%, A-DD: 68.1%, P = 0.039; fellow, T-SD: 58.2%, A-DD: 70.3%, P = 0.008). They were therefore able to use A-DD images to raise their sensitivity to the neuroradiologists' initial level on regular T-SD images. False-positive findings did not differ significantly between sequences.
However, readers showed descriptively more false-positive findings on A-DD images. The benefit in sensitivity particularly applied to metastases ≤5 mm (5.7%-17.3% increase in sensitivity). A-DD images can improve the detectability of brain metastases without a significant loss of precision and could therefore represent a potentially valuable addition to regular single-dose brain imaging.
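The mid-p McNemar test used here for paired sensitivity comparisons follows directly from the binomial distribution of discordant pairs. The sketch below is an illustrative implementation of the general test with hypothetical counts, not the authors' code:

```python
from math import comb

def midp_mcnemar(b: int, c: int) -> float:
    """Mid-p McNemar test on the two discordant-pair counts b and c
    (metastases found by one reader/sequence but not the other).
    Under H0, k = min(b, c) ~ Binomial(n = b + c, 0.5);
    mid-p = 2 * P(X <= k) - P(X = k), capped at 1."""
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    pmf = lambda i: comb(n, i) / 2 ** n
    cdf = sum(pmf(i) for i in range(k + 1))
    return min(1.0, 2 * cdf - pmf(k))

# Hypothetical example: 2 vs 8 discordant detections -> p ~ 0.065
```

The mid-p variant is less conservative than the exact binomial test because it counts only half the probability mass of the observed outcome.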

Utility of artificial intelligence in radiosurgery for pituitary adenoma: a deep learning-based automated segmentation model and evaluation of its clinical applicability.

Černý M, May J, Hamáčková L, Hallak H, Novotný J, Baručić D, Kybic J, May M, Májovský M, Link MJ, Balasubramaniam N, Síla D, Babničová M, Netuka D, Liščák R

pubmed · Aug 1 2025
The objective of this study was to develop a deep learning model for automated pituitary adenoma segmentation in MRI scans for stereotactic radiosurgery planning and to assess its accuracy and efficiency in clinical settings. An nnU-Net-based model was trained on MRI scans with expert segmentations of 582 patients treated with Leksell Gamma Knife over the course of 12 years. The accuracy of the model was evaluated by a human expert on a separate dataset of 146 previously unseen patients. The primary outcome was the comparison of expert ratings between the predicted segmentations and a control group consisting of original manual segmentations. Secondary outcomes were the influence of tumor volume, previous surgery, previous stereotactic radiosurgery (SRS), and endocrinological status on expert ratings, performance in a subgroup of nonfunctioning macroadenomas (measuring 1000-4000 mm3) without previous surgery and/or radiosurgery, and influence of using additional MRI modalities as model input and time cost reduction. The model achieved Dice similarity coefficients of 82.3%, 63.9%, and 79.6% for tumor, normal gland, and optic nerve, respectively. A human expert rated 20.6% of the segmentations as applicable in treatment planning without any modifications, 52.7% as applicable with minor manual modifications, and 26.7% as inapplicable. The ratings for predicted segmentations were lower than for the control group of original segmentations (p < 0.001). Larger tumor volume, history of a previous radiosurgery, and nonfunctioning pituitary adenoma were associated with better expert ratings (p = 0.005, p = 0.007, and p < 0.001, respectively). In the subgroup without previous surgery, although expert ratings were more favorable, the association did not reach statistical significance (p = 0.074). In the subgroup of noncomplex cases (n = 9), 55.6% of the segmentations were rated as applicable without any manual modifications and no segmentations were rated as inapplicable. 
Manually improving inaccurate segmentations instead of creating them from scratch led to a 53.6% reduction of the time cost (p < 0.001). Most predicted segmentations were applicable to treatment planning with either no or only minor manual modifications, demonstrating a significant increase in the efficiency of the planning process. The predicted segmentations can be loaded into the planning software used in clinical practice for treatment planning. The authors discuss considerations for the clinical utility of automated segmentation models and their integration within established clinical workflows, and outline directions for future research.
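Segmentation accuracy above is reported as the Dice similarity coefficient. A minimal sketch of that metric over voxel index sets (an illustration of the standard definition, not the authors' evaluation code):

```python
def dice_coefficient(a: set, b: set) -> float:
    """Dice similarity coefficient between two binary masks given as
    sets of voxel indices: DSC = 2|A ∩ B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    if not a and not b:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical masks: {1,2,3,4} vs {3,4,5,6} overlap in 2 of 8 voxels
```

In practice the same formula is applied to 3D label volumes, with the sets replaced by boolean arrays.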

Cerebral Amyloid Deposition With ¹⁸F-Florbetapir PET Mediates Retinal Vascular Density and Cognitive Impairment in Alzheimer's Disease.

Chen Z, He HL, Qi Z, Bi S, Yang H, Chen X, Xu T, Jin ZB, Yan S, Lu J

pubmed · Aug 1 2025
Alzheimer's disease (AD) is accompanied by alterations in retinal vascular density (VD), but the underlying mechanisms remain unclear. This study investigated the relationships among cerebral amyloid-β (Aβ) deposition, VD, and cognitive decline. We enrolled 92 participants, including 47 AD patients and 45 healthy control (HC) participants. VD across retinal subregions was quantified using deep learning-based fundus photography, and cerebral Aβ deposition was measured with ¹⁸F-florbetapir (¹⁸F-AV45) PET/MRI. Using the minimum bounding circle of the optic disc as the unit diameter (papilla-diameter, PD), VD was calculated in total and in the 0.5-1.0 PD, 1.0-1.5 PD, 1.5-2.0 PD, and 2.0-2.5 PD subregions. The standardized uptake value ratio (SUVR) for Aβ deposition was computed for global and regional cortical areas, using the cerebellar cortex as the reference region. Cognitive performance was assessed with the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). Pearson correlation, multiple linear regression, and mediation analyses were used to explore the relationships among Aβ deposition, VD, and cognition. AD patients exhibited significantly lower VD in all subregions compared with HC (p < 0.05). Reduced VD correlated with higher SUVR in the global cortex and with a decline in cognitive abilities (p < 0.05). Mediation analysis indicated that VD influenced MMSE and MoCA scores through SUVR in the global cortex, with the most pronounced effects observed in the 1.0-1.5 PD range. Retinal VD is thus associated with cognitive decline, a relationship primarily mediated by cerebral Aβ deposition measured via ¹⁸F-AV45 PET. These findings highlight the potential of retinal VD as a biomarker for early detection in AD.
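The SUVR computation described above (regional uptake normalized by a cerebellar reference) reduces to a simple ratio. A hedged sketch with hypothetical region names and uptake values:

```python
def suvr(mean_uptake: dict, reference: str = "cerebellar_cortex") -> dict:
    """Standardized uptake value ratio: mean regional tracer uptake
    divided by mean uptake in the reference region (here the cerebellar
    cortex, as in the study above). Region names are illustrative."""
    ref = mean_uptake[reference]
    if ref <= 0:
        raise ValueError("reference uptake must be positive")
    return {region: v / ref
            for region, v in mean_uptake.items() if region != reference}

# Hypothetical uptakes: global cortex 1.8, cerebellar cortex 1.2
# -> global SUVR = 1.5, a value in the amyloid-positive range
```

The mediation analysis then treats these per-region SUVRs as the intermediate variable between VD and cognitive scores.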

Keyword-based AI assistance in the generation of radiology reports: A pilot study.

Dong F, Nie S, Chen M, Xu F, Li Q

pubmed · Aug 1 2025
Radiology reporting is a time-intensive process, and artificial intelligence (AI) shows potential for textual processing in radiology reporting. In this study, we proposed a keyword-based AI-assisted radiology reporting paradigm and evaluated its potential for clinical implementation. Using MRI data from 100 patients with intracranial tumors, two radiology residents independently wrote both a routine complete report (routine report) and a keyword report for each patient. Based on the keyword reports and a designed prompt, AI-assisted reports were generated (AI-generated reports). The results demonstrated median reporting time reduction ratios of 27.1% and 28.8% (mean, 28.0%) for the two residents, with no significant difference in quality scores between AI-generated and routine reports (p > 0.50). AI-generated reports showed primary diagnosis accuracies of 68.0% (Resident 1) and 76.0% (Resident 2) (mean, 72.0%). These findings suggest that the keyword-based AI-assisted reporting paradigm exhibits significant potential for clinical translation.

Development and Validation of a Brain Aging Biomarker in Middle-Aged and Older Adults: Deep Learning Approach.

Li Z, Li J, Li J, Wang M, Xu A, Huang Y, Yu Q, Zhang L, Li Y, Li Z, Wu X, Bu J, Li W

pubmed · Aug 1 2025
Precise assessment of brain aging is crucial for early detection of neurodegenerative disorders and for guiding clinical practice. Existing magnetic resonance imaging (MRI)-based methods excel in this task, but they still have room for improvement in capturing local morphological variations across brain regions and preserving the inherent neurobiological topological structures. Our objective was to develop and validate a deep learning framework incorporating both connectivity and complexity for accurate brain aging estimation, facilitating early identification of neurodegenerative diseases. We used 5889 T1-weighted MRI scans from the Alzheimer's Disease Neuroimaging Initiative dataset. We proposed a novel brain vision graph neural network (BVGN), incorporating neurobiologically informed feature extraction modules and global association mechanisms to provide a sensitive deep learning-based imaging biomarker. Model performance was evaluated using mean absolute error (MAE) against benchmark models, while generalization capability was further validated on an external UK Biobank dataset. We calculated the brain age gap across distinct cognitive states and conducted multiple logistic regressions to compare its discriminative capacity against conventional cognitive-related variables in distinguishing cognitively normal (CN) and mild cognitive impairment (MCI) states. Longitudinal tracking, Cox regression, and Kaplan-Meier plots were used to investigate the longitudinal performance of the brain age gap. The BVGN model achieved an MAE of 2.39 years, surpassing current state-of-the-art approaches while providing an interpretable saliency map and graph-theoretic features supported by medical evidence. Furthermore, its performance was validated on the UK Biobank cohort (N=34,352) with an MAE of 2.49 years.
The brain age gap derived from BVGN exhibited significant differences across cognitive states (CN vs MCI vs Alzheimer disease; P<.001) and demonstrated higher discriminative capacity between CN and MCI than general cognitive assessments, brain volume features, and apolipoprotein E4 carriage (area under the receiver operating characteristic curve [AUC] of 0.885 vs AUCs ranging from 0.646 to 0.815). The brain age gap was clinically feasible when combined with the Functional Activities Questionnaire, with greater discriminative capacity in models achieving lower MAEs (AUC of 0.945 vs 0.923 and 0.911; AUC of 0.935 vs 0.900 and 0.881). An increasing brain age gap identified by BVGN may indicate underlying pathological changes during the CN-to-MCI progression, with each unit increase linked to a 55% (hazard ratio=1.55, 95% CI 1.13-2.13; P=.006) higher risk of cognitive decline in individuals who are CN and a 29% (hazard ratio=1.29, 95% CI 1.09-1.51; P=.002) higher risk in individuals with MCI. BVGN offers a precise framework for brain aging assessment, demonstrates strong generalization on an external large-scale dataset, and proposes novel interpretability strategies to elucidate multiregional cooperative aging patterns. The brain age gap derived from BVGN is validated as a sensitive biomarker for early identification of MCI and prediction of cognitive decline, offering substantial potential for clinical applications.
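The two headline quantities above, the MAE of age prediction and the per-subject brain age gap, follow directly from their definitions. A minimal sketch with made-up ages (the model itself, BVGN, is of course far more involved):

```python
def brain_age_gap(predicted_ages, chronological_ages):
    """Per-subject gap: predicted brain age minus chronological age.
    A positive gap suggests an older-looking brain."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

def mean_absolute_error(predicted_ages, chronological_ages):
    """MAE of the age predictions, the accuracy metric reported above
    (BVGN: 2.39 years internally, 2.49 years on UK Biobank)."""
    gaps = brain_age_gap(predicted_ages, chronological_ages)
    return sum(abs(g) for g in gaps) / len(gaps)

# Hypothetical cohort: predictions [72, 65] vs true ages [70, 68]
```

The survival analyses in the abstract then treat each subject's gap as a covariate in a Cox model.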

Enhanced Detection of Age-Related and Cognitive Declines Using Automated Hippocampal-To-Ventricle Ratio in Alzheimer's Patients.

Fernandez-Lozano S, Fonov V, Schoemaker D, Pruessner J, Potvin O, Duchesne S, Collins DL

pubmed · Aug 1 2025
The hippocampal-to-ventricle ratio (HVR) is a biomarker of medial temporal atrophy, particularly useful in the assessment of neurodegeneration in diseases such as Alzheimer's disease (AD). To minimize subjectivity and inter-rater variability, an automated, accurate, precise, and reliable segmentation technique for the hippocampus (HC) and surrounding cerebrospinal fluid (CSF)-filled spaces, such as the temporal horns of the lateral ventricles, is essential. We trained and evaluated three automated methods for the segmentation of both HC and CSF: Multi-Atlas Label Fusion (MALF), Nonlinear Patch-Based Segmentation (NLPB), and a Convolutional Neural Network (CNN). We then evaluated these methods, alongside the widely used FreeSurfer technique, using baseline T1w MRIs of 1641 participants from the AD Neuroimaging Initiative study with various degrees of atrophy associated with their cognitive status on the spectrum from cognitively healthy to clinically probable AD. Our gold standard consisted of manual segmentations of HC and CSF from 80 cognitively healthy individuals. We calculated HC volumes and HVR and compared all methods in terms of segmentation reliability, similarity across methods, sensitivity in detecting between-group differences, and associations with age, scores on the learning subtest of the Rey Auditory Verbal Learning Test (RAVLT), and Alzheimer's Disease Assessment Scale 13 (ADAS13) scores. Cross validation demonstrated that the CNN method yielded more accurate HC and CSF segmentations than MALF and NLPB, demonstrating higher volumetric overlap (Dice Kappa = 0.94) and correlation (rho = 0.99) with the manual labels. It was also the most reliable method in clinical data application, showing minimal failures. Our comparisons yielded high correlations between FreeSurfer, CNN, and NLPB volumetric values.
HVR yielded higher control:AD effect sizes than HC volumes for all segmentation methods, reinforcing the value of HVR in clinical distinction. The association with age was significantly stronger for HVR than for HC volumes with all methods except FreeSurfer. Memory associations with HC volumes or HVR were only significant for individuals with mild cognitive impairment. Finally, HC volumes and HVR showed comparable negative associations with ADAS13, particularly in the mild cognitive impairment cohort. This study provides an evaluation of automated segmentation methods centered on estimating HVR, emphasizing the superior performance of a CNN-based algorithm. The findings underscore the pivotal role of accurate segmentation in HVR calculations for precise clinical applications, contributing valuable insights into medial temporal lobe atrophy in neurodegenerative disorders, especially AD.
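The HVR itself is a simple volumetric ratio once the segmentations exist. The sketch below assumes one published convention (hippocampal volume divided by the sum of hippocampal and temporal-horn CSF volumes); treat the exact formula as an assumption, not necessarily this paper's definition:

```python
def hippocampal_to_ventricle_ratio(hc_volume_mm3: float,
                                   csf_volume_mm3: float) -> float:
    """HVR under the assumed convention HC / (HC + temporal-horn CSF).
    Ventricular enlargement (more CSF) drives the ratio toward 0, so a
    lower HVR indicates greater medial temporal atrophy."""
    if hc_volume_mm3 < 0 or csf_volume_mm3 < 0:
        raise ValueError("volumes must be non-negative")
    return hc_volume_mm3 / (hc_volume_mm3 + csf_volume_mm3)

# Hypothetical volumes: HC 3000 mm^3, temporal-horn CSF 1000 mm^3
```

Normalizing by adjacent CSF rather than intracranial volume is what gives HVR its sensitivity to local atrophy relative to raw HC volumes.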

MR-AIV reveals in vivo brain-wide fluid flow with physics-informed AI.

Toscano JD, Guo Y, Wang Z, Vaezi M, Mori Y, Karniadakis GE, Boster KAS, Kelley DH

pubmed · Aug 1 2025
The circulation of cerebrospinal and interstitial fluid plays a vital role in clearing metabolic waste from the brain, and its disruption has been linked to neurological disorders. However, directly measuring brain-wide fluid transport-especially in the deep brain-has remained elusive. Here, we introduce magnetic resonance artificial intelligence velocimetry (MR-AIV), a framework featuring a specialized physics-informed architecture and optimization method that reconstructs three-dimensional fluid velocity fields from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MR-AIV unveils brain-wide velocity maps while providing estimates of tissue permeability and pressure fields-quantities inaccessible to other methods. Applied to the brain, MR-AIV reveals a functional landscape of interstitial and perivascular flow, quantitatively distinguishing slow diffusion-driven transport (∼ 0.1 µm/s) from rapid advective flow (∼ 3 µm/s). This approach enables new investigations into brain clearance mechanisms and fluid dynamics in health and disease, with broad potential applications to other porous media systems, from geophysics to tissue mechanics.

Weakly Supervised Intracranial Aneurysm Detection and Segmentation in MR angiography via Multi-task UNet with Vesselness Prior

Erin Rainville, Amirhossein Rasoulian, Hassan Rivaz, Yiming Xiao

arXiv preprint · Aug 1 2025
Intracranial aneurysms (IAs) are abnormal dilations of cerebral blood vessels that, if ruptured, can lead to life-threatening consequences. However, their small size and soft contrast in radiological scans often make accurate and efficient detection and morphological analysis difficult, both of which are critical in the clinical care of the disorder. Furthermore, the lack of large public datasets with voxel-wise expert annotations poses challenges for developing deep learning algorithms to address these issues. We therefore propose a novel weakly supervised 3D multi-task UNet that integrates vesselness priors to jointly perform aneurysm detection and segmentation in time-of-flight MR angiography (TOF-MRA). Specifically, to robustly guide IA detection and segmentation, we employ the popular Frangi vesselness filter to derive soft cerebrovascular priors, used both as network input and in an attention block, and conduct segmentation from the decoder and detection from an auxiliary branch. We train our model on the Lausanne dataset with coarse ground-truth segmentations and evaluate it on a test set with refined labels from the same database. To further assess generalizability, we also validate the model externally on the ADAM dataset. Our results demonstrate the superior performance of the proposed technique over SOTA techniques for aneurysm segmentation (Dice = 0.614, 95% HD = 1.38 mm) and detection (false positive rate = 1.47, sensitivity = 92.9%).
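The Frangi vesselness filter used above to derive soft cerebrovascular priors scores each voxel from the eigenvalues of the local Hessian. A minimal single-scale 2D sketch of the idea (the paper works in 3D TOF-MRA, typically over multiple Gaussian scales; the parameter values here are illustrative assumptions):

```python
import numpy as np

def frangi_vesselness_2d(img, beta=0.5, c=0.5):
    """Single-scale 2D Frangi-style vesselness for bright tubular
    structures on a dark background. With Hessian eigenvalues ordered
    |l1| <= |l2|:  V = exp(-Rb^2 / (2 beta^2)) * (1 - exp(-S^2 / (2 c^2))),
    Rb = |l1|/|l2| (blobness), S = sqrt(l1^2 + l2^2) (structureness).
    c depends on the intensity scale; 0.5 suits unit-intensity images."""
    gy, gx = np.gradient(img.astype(float))
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # closed-form eigenvalues of the symmetric 2x2 Hessian per pixel
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    mu1, mu2 = (hxx + hyy + tmp) / 2, (hxx + hyy - tmp) / 2
    l1 = np.where(np.abs(mu1) <= np.abs(mu2), mu1, mu2)
    l2 = np.where(np.abs(mu1) > np.abs(mu2), mu1, mu2)
    rb2 = (l1 / (l2 + 1e-10)) ** 2
    s2 = l1 ** 2 + l2 ** 2
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    # bright ridges have a strongly negative largest-magnitude eigenvalue
    return np.where(l2 < 0, v, 0.0)
```

Real pipelines (e.g., the multiscale 3D filter in standard image libraries) take the maximum response over scales; the soft prior then enters the network as an extra input channel and as attention weights.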
