
A ViTUNeT-based model using YOLOv8 for efficient LVNC diagnosis and automatic cleaning of dataset.

de Haro S, Bernabé G, García JM, González-Férez P

PubMed | Jun 4, 2025
Left ventricular non-compaction is a cardiac condition marked by excessive trabeculae in the inner wall of the left ventricle. Although various methods exist to measure these structures, the medical community still lacks consensus on the best approach. Previously, we developed DL-LVTQ, a tool based on a U-Net neural network, to quantify trabeculae in this region. In this study, we expand the dataset to include new patients with Titin cardiomyopathy and healthy individuals with fewer trabeculae, requiring retraining of our models to enhance predictions. We also propose ViTUNeT, a neural network architecture combining U-Net and Vision Transformers to segment the left ventricle more accurately. Additionally, we train a YOLOv8 model to detect the ventricle and integrate it with the ViTUNeT model to focus on the region of interest. Results from ViTUNeT and YOLOv8 are similar to those of DL-LVTQ, suggesting that dataset quality limits further accuracy improvements. To test this, we analyze the MRI images and develop a method using two YOLOv8 models to identify and remove problematic images, leading to better results. Combining YOLOv8 with deep learning networks offers a promising approach for improving cardiac image analysis and segmentation.
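
As a rough illustration of the two-stage idea described in the abstract, the sketch below uses a YOLOv8 detector to localize the left ventricle and hands the cropped region to a segmentation network. The weights file `lv_detector.pt`, the `vitunet` module, and the 3-channel input format are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a detect-then-segment pipeline (illustrative only).
import numpy as np
import torch
from ultralytics import YOLO

detector = YOLO("lv_detector.pt")   # hypothetical fine-tuned LV detector

def segment_ventricle(image_hwc: np.ndarray, vitunet: torch.nn.Module):
    """Detect the left ventricle, crop the region of interest, segment the crop."""
    result = detector(image_hwc, verbose=False)[0]       # YOLOv8 inference
    if len(result.boxes) == 0:
        return None                                      # no ventricle found: flag slice
    x1, y1, x2, y2 = result.boxes.xyxy[0].int().tolist()
    crop = image_hwc[y1:y2, x1:x2]                       # H x W x 3 crop
    x = torch.from_numpy(crop).float().permute(2, 0, 1).unsqueeze(0)  # 1 x C x H x W
    with torch.no_grad():
        return vitunet(x)                                # ViT-U-Net style forward pass
```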

Deep learning reveals pathology-confirmed neuroimaging signatures in Alzheimer's, vascular and Lewy body dementias.

Wang D, Honnorat N, Toledo JB, Li K, Charisis S, Rashid T, Benet Nirmala A, Brandigampala SR, Mojtabai M, Seshadri S, Habes M

PubMed | Jun 3, 2025
Concurrent neurodegenerative and vascular pathologies pose a diagnostic challenge in the clinical setting, with histopathology remaining the definitive modality for dementia-type diagnosis. To address this clinical challenge, we introduce a neuropathology-based, data-driven, multi-label deep-learning framework to identify and quantify in vivo biomarkers for Alzheimer's disease (AD), vascular dementia (VD) and Lewy body dementia (LBD) using antemortem T1-weighted MRI scans of 423 demented and 361 control participants from the National Alzheimer's Coordinating Center and Alzheimer's Disease Neuroimaging Initiative datasets. Based on the best-performing deep-learning model, explainable heat maps were extracted to visualize disease patterns, and the novel Deep Signature of Pathology Atrophy REcognition (DeepSPARE) indices were developed, where a higher DeepSPARE score indicates more brain alterations associated with that specific pathology. A substantial discrepancy between clinical and neuropathological diagnoses was observed in the demented patients: 71% had more than one pathology, but 67% were diagnosed clinically as AD only. Based on these neuropathological diagnoses and leveraging cross-validation principles, the best-performing deep-learning model achieved balanced accuracies of 0.844, 0.839 and 0.623 for AD, VD and LBD, respectively, and was used to generate the explainable deep-learning heat maps and DeepSPARE indices. The explainable deep-learning heat maps revealed distinct neuroimaging brain alteration patterns for each pathology: (i) the AD heat map highlighted bilateral hippocampal regions; (ii) the VD heat map emphasized white matter regions; and (iii) the LBD heat map exposed occipital alterations. The DeepSPARE indices were validated by examining their associations with cognitive testing and neuropathological and neuroimaging measures using linear mixed-effects models. The DeepSPARE-AD index was associated with the Mini-Mental State Examination, the Trail Making Test B, memory, hippocampal volume, Braak stages, Consortium to Establish a Registry for Alzheimer's Disease (CERAD) scores and Thal phases [false-discovery rate (FDR)-adjusted P < 0.05]. The DeepSPARE-VD index was associated with white matter hyperintensity volume and cerebral amyloid angiopathy (FDR-adjusted P < 0.001), and the DeepSPARE-LBD index was associated with Lewy body stages (FDR-adjusted P < 0.05). The findings were replicated in an out-of-sample Alzheimer's Disease Neuroimaging Initiative dataset by testing associations with cognitive, imaging, plasma and CSF measures. CSF and plasma tau phosphorylated at threonine-181 (pTau181) were significantly associated with DeepSPARE-AD in the amyloid-β-positive AD and mild cognitive impairment (AD/MCI Aβ+) group (FDR-adjusted P < 0.001), and CSF α-synuclein was associated solely with DeepSPARE-LBD (FDR-adjusted P = 0.036). Overall, these findings demonstrate the advantages of our innovative deep-learning framework in detecting antemortem neuroimaging signatures linked to different pathologies. The newly derived, deep-learning-based DeepSPARE indices are precise, pathology-sensitive and single-valued non-invasive neuroimaging metrics, bridging traditional, widely available in vivo T1 imaging with histopathology.
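
For orientation only, here is a toy multi-label setup of the kind the abstract describes: a small 3D CNN emits one logit per pathology (AD, VD, LBD) and is trained with a binary cross-entropy objective. The layer sizes and class names are hypothetical and do not reproduce the published model.

```python
# Illustrative multi-label 3D CNN head (not the published architecture).
import torch
import torch.nn as nn

class MultiLabelBrainCNN(nn.Module):
    def __init__(self, n_labels: int = 3):       # AD, VD, LBD
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_labels)       # one logit per pathology

    def forward(self, x):                         # x: (batch, 1, D, H, W)
        z = self.features(x).flatten(1)
        return self.head(z)                       # raw logits

model = MultiLabelBrainCNN()
loss_fn = nn.BCEWithLogitsLoss()                  # multi-label targets in {0, 1}^3
```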

Upper Airway Volume Predicts Brain Structure and Cognition in Adolescents.

Kanhere A, Navarathna N, Yi PH, Parekh VS, Pickle J, Cloak CC, Ernst T, Chang L, Li D, Redline S, Isaiah A

PubMed | Jun 3, 2025
One in ten children experiences sleep-disordered breathing (SDB). Untreated SDB is associated with poor cognition, but the underlying mechanisms are less understood. We assessed the relationship between magnetic resonance imaging (MRI)-derived upper airway volume and children's cognition and regional cortical gray matter volumes. We used five-year data from the Adolescent Brain Cognitive Development study (n=11,875 children, 9-10 years at baseline). Upper airway volumes were derived using a deep learning model applied to 5,552,640 brain MRI slices. The primary outcome was the Total Cognition Composite score from the National Institutes of Health Toolbox (NIH-TB). Secondary outcomes included other NIH-TB measures and cortical gray matter volumes. The habitual snoring group had significantly smaller airway volumes than non-snorers (mean difference=1.2 cm³; 95% CI, 1.0-1.4 cm³; P<0.001). Deep learning-derived airway volume predicted the Total Cognition Composite score, with an estimated mean difference of 3.68 points (95% CI, 2.41-4.96; P<0.001) per one-unit increase in the natural log of airway volume (roughly a 2.7-fold increase in raw volume). This airway volume increase was also associated with an average 0.02 cm³ increase in right temporal pole volume (95% CI, 0.01-0.02 cm³; P<0.001). Airway volume similarly predicted most NIH-TB domain scores and multiple frontal and temporal gray matter volumes. These brain volumes mediated the relationship between airway volume and cognition. We demonstrate a novel application of deep learning-based airway segmentation in a large pediatric cohort. Upper airway volume is a potential biomarker for cognitive outcomes in pediatric SDB, offers insights into neurobiological mechanisms, and informs future studies on risk stratification. This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).
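
The reported effect size (points of cognition per one-unit increase in log airway volume) corresponds to regressing cognition on the natural log of airway volume. A minimal sketch of that transformation is shown below with made-up column names and toy data; the study itself used adjusted models on the ABCD cohort, not this plain OLS.

```python
# Minimal sketch of a log-volume regression (column names and data are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "total_cognition": [100.0, 95.0, 102.0, 98.0],
    "airway_volume_cm3": [12.0, 9.5, 14.0, 10.8],
    "age_months": [118, 120, 115, 121],
})
fit = smf.ols("total_cognition ~ np.log(airway_volume_cm3) + age_months", data=df).fit()
print(fit.params)   # coefficient on np.log(...) = points per ~2.7-fold volume increase
```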

Artificial intelligence vs human expertise: A comparison of plantar fascia thickness measurements through MRI imaging.

Alyanak B, Çakar İ, Dede BT, Yıldızgören MT, Bağcıer F

PubMed | Jun 3, 2025
This study aims to evaluate the reliability of plantar fascia thickness measurements performed by ChatGPT-4 using magnetic resonance imaging (MRI) compared to those obtained by an experienced clinician. In this retrospective, single-center study, foot MRI images from the hospital archive were analysed. Plantar fascia thickness was measured under both blinded and non-blinded conditions by an experienced clinician and ChatGPT-4 at two separate time points. Measurement reliability was assessed using the intraclass correlation coefficient (ICC), mean absolute error (MAE), and mean relative error (MRE). A total of 41 participants (32 females, 9 males) were included. The average plantar fascia thickness measured by the clinician was 4.20 ± 0.80 mm and 4.25 ± 0.92 mm under blinded and non-blinded conditions, respectively, while ChatGPT-4's measurements were 6.47 ± 1.30 mm and 6.46 ± 1.31 mm, respectively. Human evaluators demonstrated excellent agreement (ICC = 0.983-0.989), whereas ChatGPT-4 exhibited low reliability (ICC = 0.391-0.432). In thin plantar fascia cases, ChatGPT-4's error rate was higher, with MAE = 2.70 mm and MRE = 77.17% under blinded conditions, and MAE = 2.91 mm and MRE = 87.02% under non-blinded conditions. ChatGPT-4 demonstrated lower reliability in plantar fascia thickness measurements compared to an experienced clinician, with increased error rates in thin structures. These findings highlight the limitations of AI-based models in medical image analysis and emphasize the need for further refinement before clinical implementation.
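
A minimal sketch of the agreement metrics named in the abstract (ICC, MAE, MRE) is given below on toy measurements; the values are illustrative, and the ICC is computed with pingouin's `intraclass_corr`, assuming that package is available.

```python
# Sketch of the reliability/error metrics on toy rater data (not the study data).
import numpy as np
import pandas as pd
import pingouin as pg

clinician = np.array([4.1, 4.3, 3.9, 4.6])      # mm, illustrative values
chatgpt   = np.array([6.2, 6.5, 6.1, 6.9])

mae = np.mean(np.abs(chatgpt - clinician))                     # mean absolute error (mm)
mre = np.mean(np.abs(chatgpt - clinician) / clinician) * 100   # mean relative error (%)

long = pd.DataFrame({
    "subject": np.tile(np.arange(len(clinician)), 2),
    "rater": ["clinician"] * len(clinician) + ["chatgpt"] * len(chatgpt),
    "thickness": np.concatenate([clinician, chatgpt]),
})
icc = pg.intraclass_corr(data=long, targets="subject", raters="rater", ratings="thickness")
print(mae, mre)
print(icc[["Type", "ICC"]])
```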

Super-resolution sodium MRI of human gliomas at 3T using physics-based generative artificial intelligence.

Raymond C, Yao J, Kolkovsky ALL, Feiweier T, Clifford B, Meyer H, Zhong X, Han F, Cho NS, Sanvito F, Oshima S, Salamon N, Liau LM, Patel KS, Everson RG, Cloughesy TF, Ellingson BM

PubMed | Jun 3, 2025
Sodium neuroimaging provides unique insights into the cellular and metabolic properties of brain tumors. However, at 3T, the low signal-to-noise ratio (SNR) and resolution of sodium MRI discourage routine clinical use. We evaluated the recently developed "Anatomically constrained GAN using physics-based synthetic MRI artifacts" (ATHENA) framework for high-resolution sodium neuroimaging of brain tumors at 3T. We hypothesized the model would improve image quality while preserving the inherent sodium information. A total of 4,573 proton MRI scans from 1,390 patients with suspected brain tumors were used for training. Sodium and proton MRI datasets from twenty glioma patients were collected for validation. Twenty-four image-guided biopsies from seven patients were available for evaluation of sodium-proton exchanger (NHE1) expression on immunohistochemistry. High-resolution synthetic sodium images were generated using the ATHENA model and then compared to native sodium MRI and to NHE1 protein expression from image-guided biopsy samples. ATHENA produced synthetic sodium MR images with significantly improved SNR (native SNR 18.20 ± 7.04; synthetic SNR 23.83 ± 9.33, P = 0.0079). The synthetic sodium values were consistent with the native measurements (P = 0.2058), with a strong linear correlation within contrast-enhancing areas of the tumor (R² = 0.7565, P = 0.0005), T2-hyperintense areas (R² = 0.7325, P < 0.0001), and necrotic areas (R² = 0.7678, P < 0.0001). Relative NHE1 expression from image-guided biopsies correlated better with the synthetic sodium MR (ρ = 0.3269, P < 0.0001) than with the native sodium MR (ρ = 0.1732, P = 0.0276), with higher sodium signal in samples expressing elevated NHE1 (P < 0.0001). ATHENA generates high-resolution synthetic sodium MRI at 3T, enabling clinically attainable multinuclear imaging for brain tumors that retains the inherent information of the native sodium data. The resulting synthetic sodium correlates significantly with tissue expression, potentially supporting its utility as a non-invasive marker of underlying sodium homeostasis in brain tumors.
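
To make the reported agreement statistics concrete, the toy sketch below computes a Pearson R² and a Spearman ρ within a region-of-interest mask, mirroring the kind of native-versus-synthetic comparison described above; the arrays and mask are placeholders, not the study's data or pipeline.

```python
# Illustrative native-vs-synthetic agreement within a tumor ROI (placeholder data).
import numpy as np
from scipy import stats

native = np.random.default_rng(0).normal(40, 8, size=(64, 64, 32))    # native sodium map
synthetic = native + np.random.default_rng(1).normal(0, 2, size=native.shape)
roi_mask = np.zeros_like(native, dtype=bool)
roi_mask[20:40, 20:40, 10:20] = True                                   # e.g. contrast-enhancing ROI

r, p = stats.pearsonr(native[roi_mask], synthetic[roi_mask])           # linear correlation
rho, p_rho = stats.spearmanr(native[roi_mask], synthetic[roi_mask])    # rank correlation
print(r**2, rho)
```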

Patient-specific prediction of glioblastoma growth via reduced order modeling and neural networks.

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed | Jun 3, 2025
Glioblastoma is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept for a mathematical model of glioblastoma (GBL) growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe the tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for the future clinical deployment of patient-specific digital twins in neuro-oncology.
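
A minimal sketch of the reduced-order ingredients mentioned above: proper orthogonal decomposition (POD) via an SVD of simulation snapshots, followed by a small neural network mapping reduced trajectories to model parameters. The matrix sizes, mode count, and two-parameter output are illustrative assumptions, not the authors' setup.

```python
# POD compression + neural surrogate for the inverse map (synthetic, illustrative data).
import numpy as np
import torch
import torch.nn as nn

snapshots = np.random.rand(5000, 200)          # 200 synthetic simulations, 5000 DOFs each
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :20]                              # first 20 POD modes
reduced = basis.T @ snapshots                  # 20 x 200 reduced coordinates

surrogate = nn.Sequential(                     # reduced state -> biophysical parameters
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 2),                          # e.g. proliferation rate and diffusivity
)
params_hat = surrogate(torch.from_numpy(reduced.T).float())   # one estimate per simulation
```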

Machine learning for classification of pediatric bipolar disorder with and without psychotic symptoms based on thalamic subregional structural volume.

Gao W, Zhang K, Jiao Q, Su L, Cui D, Lu S, Yang R

PubMed | Jun 3, 2025
The thalamus plays a crucial role in sensory processing, emotional regulation, and cognitive functions, and its dysregulation may be implicated in psychosis. The aim of the present study was to examine differences in thalamic subregional volumes between pediatric bipolar disorder patients with (P-PBD) and without psychotic symptoms (NP-PBD). Participants, comprising 28 patients with P-PBD, 26 with NP-PBD, and 18 healthy controls (HCs), underwent structural magnetic resonance imaging (sMRI) on a 3.0T MRI scanner. All T1-weighted imaging data were processed with FreeSurfer 7.4.0 software. Volumetric differences in thalamic subregions among the three groups were compared using analysis of covariance (ANCOVA) and post-hoc analyses. Additionally, we applied a standard support vector classification (SVC) model for pairwise comparisons among the three groups to identify brain regions with significant volumetric differences. The ANCOVA revealed significant volumetric differences in the left pulvinar anterior (L_PuA) and left reuniens medial ventral (L_MV-re) thalamus among the three groups. Post-hoc analysis revealed that patients with P-PBD exhibited decreased volumes in the L_PuA and L_MV-re when compared to the NP-PBD group and HCs, respectively. Furthermore, the SVC model revealed that the L_MV-re volume exhibited the best capacity to discriminate P-PBD from NP-PBD and HCs. The present findings demonstrate that reduced thalamic subregional volumes in the L_PuA and L_MV-re might be associated with psychotic symptoms in PBD.
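
As a rough sketch of the classification step, the snippet below cross-validates a linear support vector classifier on two thalamic subregional volumes for the P-PBD versus NP-PBD contrast; the feature values are random placeholders, with FreeSurfer-derived volumes assumed upstream.

```python
# Pairwise SVC on thalamic subregional volumes (placeholder features).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(54, 2))        # e.g. L_PuA and L_MV-re volumes per participant
y = np.array([0] * 28 + [1] * 26)   # 28 P-PBD vs 26 NP-PBD labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)       # cross-validated accuracy
print(scores.mean())
```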

MRI super-resolution reconstruction using efficient diffusion probabilistic model with residual shifting.

Safari M, Wang S, Eidex Z, Li Q, Qiu RLJ, Middlebrooks EH, Yu DS, Yang X

PubMed | Jun 3, 2025
Magnetic resonance imaging (MRI) is essential in clinical and research contexts, providing exceptional soft-tissue contrast. However, prolonged acquisition times often lead to patient discomfort and motion artifacts. Diffusion-based deep learning super-resolution (SR) techniques reconstruct high-resolution (HR) images from low-resolution (LR) pairs, but they involve extensive sampling steps, limiting real-time application. To overcome these issues, this study introduces a residual error-shifting mechanism that markedly reduces sampling steps while maintaining vital anatomical details, thereby accelerating MRI reconstruction. We developed Res-SRDiff, a novel diffusion-based SR framework incorporating residual error shifting into the forward diffusion process. This integration aligns the degraded HR and LR distributions, enabling efficient HR image reconstruction. We evaluated Res-SRDiff using ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate images, benchmarking it against bicubic interpolation, Pix2pix, CycleGAN, SPSR, I2SB, and TM-DDPM. Quantitative assessments employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Additionally, we qualitatively and quantitatively assessed the proposed framework's individual components through an ablation study and conducted a Likert-based image quality evaluation. Res-SRDiff significantly surpassed most comparison methods regarding PSNR, SSIM, and GMSD for both datasets, with statistically significant improvements (p-values ≪ 0.05). The model achieved high-fidelity image reconstruction using only four sampling steps, drastically reducing computation time to under one second per slice. In contrast, traditional methods like TM-DDPM and I2SB required approximately 20 and 38 seconds per slice, respectively. Qualitative analysis showed Res-SRDiff effectively preserved fine anatomical details and lesion morphologies. The Likert study indicated that our method received the highest scores, 4.14 ± 0.77 (brain) and 4.80 ± 0.40 (prostate). Res-SRDiff demonstrates efficiency and accuracy, markedly improving computational speed and image quality. Incorporating residual error shifting into diffusion-based SR facilitates rapid, robust HR image reconstruction, enhancing clinical MRI workflow and advancing medical imaging research. Code available at https://github.com/mosaf/Res-SRDiff.
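
Two of the reported metrics, PSNR and SSIM, can be computed with scikit-image as sketched below on toy arrays; GMSD and LPIPS need additional packages and are omitted. This is only an illustration of the evaluation metrics, not the Res-SRDiff pipeline.

```python
# PSNR and SSIM on toy HR/SR slices (illustrative arrays, not study data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.random.default_rng(0).random((256, 256)).astype(np.float32)   # "ground-truth" HR slice
sr = np.clip(hr + 0.01 * np.random.default_rng(1).standard_normal(hr.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
ssim = structural_similarity(hr, sr, data_range=1.0)
print(psnr, ssim)
```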

Deep learning-based automatic segmentation of arterial vessel walls and plaques in MR vessel wall images for quantitative assessment.

Yang L, Yang X, Gong Z, Mao Y, Lu SS, Zhu C, Wan L, Huang J, Mohd Noor MH, Wu K, Li C, Cheng G, Li Y, Liang D, Liu X, Zheng H, Hu Z, Zhang N

PubMed | Jun 3, 2025
To develop and validate a deep-learning-based automatic method for segmenting vessel walls and atherosclerotic plaques in MR vessel wall images for quantitative evaluation. A total of 193 patients (107 for training and validation, 39 for internal testing, 47 for external testing) with atherosclerotic plaque from five centers underwent T1-weighted MRI scans and were included in the dataset. The first step of the proposed method is to construct a purely learning-based convolutional neural network (CNN), named Vessel-SegNet, to segment the lumen and the vessel wall. The second step is to use vessel wall priors (including a manual prior and a Tversky-loss-based automatic prior) to improve plaque segmentation, exploiting the morphological similarity between the vessel wall and the plaque. The Dice similarity coefficient (DSC), intraclass correlation coefficient (ICC), and related metrics were used to evaluate similarity, agreement, and correlations. Most of the DSCs for lumen and vessel wall segmentation were above 90%. The introduction of vessel wall priors can increase the DSC for plaque segmentation by over 10%, reaching 88.45%. Compared to Dice-loss-based vessel wall priors, the Tversky-loss-based priors can further improve DSC by nearly 3%, reaching 82.84%. Most of the ICC values between the Vessel-SegNet and manual methods across the 6 quantitative measurements are greater than 85% (p-value < 0.001). The proposed CNN-based segmentation model can quickly and accurately segment vessel walls and plaques for quantitative evaluation. Because testing on other equipment, populations, and anatomies is lacking, the reliability of the results still requires further exploration. Question: How can the accuracy and efficiency of vessel component segmentation for quantification, including the lumen, vessel wall, and plaque, be improved? Findings: Improved CNN models, manual/automatic vessel wall priors, and Tversky loss can improve the performance of semi-automatic/automatic vessel component segmentation for quantification. Clinical relevance: Manual segmentation of vessel components is a time-consuming yet important process. Rapid and accurate segmentation of the lumen, vessel walls, and plaques for quantitative assessment helps patients obtain more accurate, efficient, and timely stroke risk assessments and clinical recommendations.
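
For reference, a minimal binary Tversky loss of the kind named above is sketched below; the α/β weights and tensor shapes are illustrative choices, not the paper's settings.

```python
# Binary Tversky loss: alpha weights false positives, beta weights false negatives.
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """pred: sigmoid probabilities, target: binary mask, both shaped (N, 1, H, W)."""
    pred, target = pred.flatten(1), target.flatten(1)
    tp = (pred * target).sum(dim=1)          # soft true positives
    fp = (pred * (1 - target)).sum(dim=1)    # soft false positives
    fn = ((1 - pred) * target).sum(dim=1)    # soft false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()

loss = tversky_loss(torch.rand(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float())
```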

Redefining diagnostic lesional status in temporal lobe epilepsy with artificial intelligence.

Gleichgerrcht E, Kaestner E, Hassanzadeh R, Roth RW, Parashos A, Davis KA, Bagić A, Keller SS, Rüber T, Stoub T, Pardoe HR, Dugan P, Drane DL, Abrol A, Calhoun V, Kuzniecky RI, McDonald CR, Bonilha L

PubMed | Jun 3, 2025
Despite decades of advancements in diagnostic MRI, 30%-50% of temporal lobe epilepsy (TLE) patients remain categorized as 'non-lesional' (i.e. MRI negative) based on visual assessment by human experts. MRI-negative patients face diagnostic uncertainty and significant delays in treatment planning. Quantitative MRI studies have demonstrated that MRI-negative patients often exhibit a TLE-specific pattern of temporal and limbic atrophy that might be too subtle for the human eye to detect. This signature pattern could be translated successfully into clinical use via advances in artificial intelligence in computer-aided MRI interpretation, thereby improving the detection of brain 'lesional' patterns associated with TLE. Here, we tested this hypothesis by using a three-dimensional convolutional neural network applied to a dataset of 1178 scans from 12 different centres, which was able to differentiate TLE from healthy controls with high accuracy (85.9% ± 2.8%), significantly outperforming support vector machines based on hippocampal (74.4% ± 2.6%) and whole-brain (78.3% ± 3.3%) volumes. Our analysis focused subsequently on a subset of patients who achieved sustained seizure freedom post-surgery as a gold standard for confirming TLE. Importantly, MRI-negative patients from this cohort were accurately identified as TLE 82.7% ± 0.9% of the time, an encouraging finding given that clinically these were all patients considered to be MRI negative (i.e. not radiographically different from controls). The saliency maps from the convolutional neural network revealed that limbic structures, particularly medial temporal, cingulate and orbitofrontal areas, were most influential in classification, confirming the importance of the well-established TLE signature atrophy pattern for diagnosis. Indeed, the saliency maps were similar in MRI-positive and MRI-negative TLE groups, suggesting that even when humans cannot distinguish more subtle levels of atrophy, these MRI-negative patients are on the same continuum common across all TLE patients. As such, artificial intelligence can identify TLE lesional patterns, and artificial intelligence-aided diagnosis has the potential to enhance the neuroimaging diagnosis of TLE greatly and to redefine the concept of 'lesional' TLE.
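
As a rough illustration of gradient-based saliency for a 3D classifier, the sketch below backpropagates the "TLE" logit of a stand-in network to obtain a voxelwise importance map; the architecture and input are placeholders, and the study's exact attribution method is not assumed.

```python
# Gradient saliency for a toy 3D CNN classifier (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(                        # stand-in 3D CNN classifier
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(8, 2),            # control vs TLE logits
)

volume = torch.rand(1, 1, 64, 64, 64, requires_grad=True)   # placeholder T1-weighted volume
logits = model(volume)
logits[0, 1].backward()                       # gradient of the "TLE" logit w.r.t. the input
saliency = volume.grad.abs().squeeze()        # voxelwise importance map
```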