Page 29 of 1621612 results

An MRI-pathology foundation model for noninvasive diagnosis and grading of prostate cancer.

Shao L, Liang C, Yan Y, Zhu H, Jiang X, Bao M, Zang P, Huang X, Zhou H, Nie P, Wang L, Li J, Zhang S, Ren S

pubmed logopapers · Sep 2 2025
Prostate cancer is a leading health concern for men, yet current clinical assessments of tumor aggressiveness rely on invasive procedures that often lead to inconsistencies. There remains a critical need for accurate, noninvasive diagnosis and grading methods. Here we developed a foundation model trained on multiparametric magnetic resonance imaging (MRI) and paired pathology data for noninvasive diagnosis and grading of prostate cancer. Our model, MRI-based Predicted Transformer for Prostate Cancer (MRI-PTPCa), was trained under contrastive learning on nearly 1.3 million image-pathology pairs from over 5,500 patients in discovery, modeling, external and prospective cohorts. During real-world testing, prediction of MRI-PTPCa demonstrated consistency with pathology and superior performance (area under the curve above 0.978; grading accuracy 89.1%) compared with clinical measures and other prediction models. This work introduces a scalable, noninvasive approach to prostate cancer diagnosis and grading, offering a robust tool to support clinical decision-making while reducing reliance on biopsies.
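The training code for MRI-PTPCa is not public; as a rough illustration of the contrastive-learning objective described above, here is a minimal NumPy sketch of a symmetric InfoNCE-style loss over paired MRI/pathology embeddings. Function names, batch shapes, and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contrastive_loss(img_emb, path_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    img_emb, path_emb: (N, D) arrays; row i of each is one MRI-pathology pair.
    Matching pairs sit on the diagonal of the similarity matrix and are
    treated as the "correct class" in both directions.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    pth = path_emb / np.linalg.norm(path_emb, axis=1, keepdims=True)
    logits = img @ pth.T / temperature                       # (N, N)
    # log-softmax over each row; diagonal entries are the positives
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_i2p = -np.mean(np.diag(log_p))                      # image -> pathology
    log_p_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_p2i = -np.mean(np.diag(log_p_t))                    # pathology -> image
    return (loss_i2p + loss_p2i) / 2
```

Pulling matched pairs together and pushing mismatched pairs apart is what lets such a model align imaging features with pathology labels without per-voxel supervision.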

Advanced Deep Learning Architecture for the Early and Accurate Detection of Autism Spectrum Disorder Using Neuroimaging

Ud Din, A., Fatima, N., Bibi, N.

medrxiv logopreprint · Sep 2 2025
Autism Spectrum Disorder (ASD) is a neurological condition that affects the brain, leading to challenges in speech, communication, social interaction, repetitive behaviors, and motor skills. This research aims to develop a deep learning-based model for the accurate diagnosis and classification of autistic symptoms in children, thereby benefiting both patients and their families. Existing literature indicates that classification methods typically analyze region-based summaries of functional magnetic resonance imaging (fMRI); few studies have explored the diagnosis of ASD directly from brain imaging, and the complexity and heterogeneity of biomedical data modeling for big-data analysis related to ASD remain unclear. In the present study, the Autism Brain Imaging Data Exchange 1 (ABIDE-1) dataset was utilized, comprising 1,112 participants (539 individuals with ASD and 573 controls) from 17 different sites. The dataset, originally in NIfTI format, required conversion to a format readable by the model. For ASD classification, a VGG20 architecture was proposed and implemented, applied to the neuroimages to distinguish ASD from non-ASD cases. Four evaluation metrics were employed: recall, precision, F1-score, and accuracy. Experimental results indicated that the proposed model achieved an accuracy of 61%. Prior to this work, machine learning algorithms had been applied to the ABIDE-1 dataset, but deep learning techniques such as those used here had not been extensively explored; this research was conducted to facilitate the early diagnosis of ASD.
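The four metrics reported above follow directly from confusion-matrix counts; a minimal sketch (the counts used in the example are illustrative, not from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```

Reporting all four together matters here because ABIDE-1 is nearly balanced (539 vs. 573), so a 61% accuracy is only modestly above the ~51% majority-class baseline.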

Super-Resolution MR Spectroscopic Imaging via Diffusion Models for Tumor Metabolism Mapping.

Alsubaie M, Perera SM, Gu L, Subasi SB, Andronesi OC, Li X

pubmed logopapers · Sep 2 2025
High-resolution magnetic resonance spectroscopic imaging (MRSI) plays a crucial role in characterizing tumor metabolism and guiding clinical decisions for glioma patients. However, due to inherently low metabolite concentrations and signal-to-noise ratio (SNR) limitations, MRSI data are often acquired at low spatial resolution, hindering accurate visualization of tumor heterogeneity and margins. In this study, we propose a novel deep learning framework based on conditional denoising diffusion probabilistic models for super-resolution reconstruction of MRSI, with a particular focus on mutant isocitrate dehydrogenase (IDH) gliomas. The model progressively transforms noise into high-fidelity metabolite maps through a learned reverse diffusion process, conditioned on low-resolution inputs. Leveraging a Self-Attention UNet backbone, the proposed approach integrates global contextual features and achieves superior detail preservation. On simulated patient data, the proposed method achieved Structural Similarity Index Measure (SSIM) values of 0.956, 0.939, and 0.893; Peak Signal-to-Noise Ratio (PSNR) values of 29.73, 27.84, and 26.39 dB; and Learned Perceptual Image Patch Similarity (LPIPS) values of 0.025, 0.036, and 0.045 for upsampling factors of 2, 4, and 8, respectively, with LPIPS improvements statistically significant compared with all baselines (p < 0.01). We validated the framework on in vivo MRSI from healthy volunteers and glioma patients, where it accurately reconstructed small lesions, preserved critical textural and structural information, and enhanced tumor boundary delineation in metabolic ratio maps, revealing heterogeneity not visible in other approaches.
These results highlight the promise of diffusion-based deep learning models as clinically relevant tools for noninvasive, high-resolution metabolic imaging in glioma and potentially other neurological disorders.
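For reference, the PSNR and SSIM figures quoted above can be sketched as follows. Note the paper almost certainly uses the standard windowed SSIM; the single-window variant below is a simplified illustration, not the authors' evaluation code.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test map."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (the usual metric averages local windows)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

PSNR rewards pixel-wise fidelity while SSIM tracks structural agreement, which is why both are reported alongside the perceptual LPIPS metric.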

Fusion of Deep Transfer Learning and Radiomics in MRI-Based Prediction of Post-Surgical Recurrence in Soft Tissue Sarcoma.

Wang Y, Wang T, Zheng F, Hao W, Hao Q, Zhang W, Yin P, Hong N

pubmed logopapers · Sep 2 2025
Soft tissue sarcomas (STS) are heterogeneous malignancies with high recurrence rates (33-39%) post-surgery, necessitating improved prognostic tools. This study proposes a fusion model integrating deep transfer learning and radiomics from MRI to predict postoperative STS recurrence. Axial T2-weighted fat-suppressed imaging (T2WI) of 803 STS patients from two institutions was retrospectively collected and divided into training (n = 527), internal validation (n = 132), and external validation (n = 144) cohorts. Tumor segmentation was performed using the SegResNet model within the Auto3DSeg framework. Radiomic features and deep learning features were extracted. Feature selection employed LASSO regression, and the deep learning radiomic (DLR) model combined radiomic and deep learning signatures. Using the selected features, nine models were constructed based on three classifiers. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, negative predictive value, and positive predictive value were calculated for performance evaluation. The SegResNet model achieved a Dice coefficient of 0.728 after refinement. Recurrence rates were 22.8% (120/527) in the training, 25.0% (33/132) in the internal validation, and 32.6% (47/144) in the external validation cohorts. The DLR model (ExtraTrees) demonstrated superior performance, achieving an AUC of 0.818 in internal validation and 0.809 in external validation, better than the radiomic model (0.710, 0.612) and the deep learning model (0.751, 0.667). Sensitivity and specificity ranged from 0.702 to 0.976 and 0.732 to 0.830, respectively. Decision curve analysis confirmed superior clinical utility. The DLR model provides a robust, non-invasive tool for preoperative STS recurrence prediction, enabling personalized treatment decisions and postoperative management.
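The AUC values above summarize how well each model ranks recurrent above non-recurrent cases. A small NumPy sketch of AUC via the Mann-Whitney pairwise formulation (labels and scores in the example are illustrative):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outranks a random negative.

    labels: binary array (1 = recurrence, 0 = no recurrence)
    scores: model-predicted risk scores; ties count half.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties
```

Because AUC is rank-based, it is insensitive to the choice of operating threshold, which is why the abstract reports sensitivity and specificity ranges separately.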

A Preliminary Study on an Intelligent Segmentation and Classification Model for Amygdala-Hippocampus MRI Images in Alzheimer's Disease.

Liu S, Zhou K, Geng D

pubmed logopapers · Sep 2 2025
This study developed a deep learning model for segmenting and classifying the amygdala-hippocampus in Alzheimer's disease (AD), using a large-scale neuroimaging dataset to improve early AD detection and intervention. We collected 1000 healthy controls (HC) and 1000 AD patients as internal training data from 15 Chinese medical centers. The independent external validation dataset was sourced from another three centers. All subjects underwent neuroimaging and neuropsychological assessments. A semi-automated annotation pipeline was used: the amygdala-hippocampus of 200 cases in each group was manually annotated to train the U²-Net segmentation model, followed by model annotation of the remaining 800 cases with iterative refinement. The DenseNet-121 architecture was built for automated classification. The robustness of the model was evaluated using an external validation set. All 18 medical centers were distributed across diverse geographical regions in China. AD patients had lower MMSE/MoCA scores. Amygdala and hippocampal volumes were smaller in AD. Semi-automated annotation improved segmentation, with DSC exceeding 0.88 throughout (P<0.001). The final DSC of the 2000-case cohort was 0.914 in the training set and 0.896 in the testing set. The classification model achieved an AUC of 0.905. The external validation set comprised 100 cases per group and achieved an AUC of 0.835. The deep learning-based semi-automated annotation approach and classification model may improve amygdala-hippocampus recognition precision, aiding AD evaluation, diagnosis, and clinical AI application.
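The Dice similarity coefficient (DSC) reported above measures voxel overlap between a predicted and a reference mask; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

DSC ranges from 0 (no overlap) to 1 (identical masks), so the reported 0.914/0.896 indicates near-complete agreement with the semi-automated annotations.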

Diffusion-QSM: diffusion model with time-travel and resampling refinement for quantitative susceptibility mapping.

Zhang M, Liu C, Zhang Y, Wei H

pubmed logopapers · Sep 2 2025
Quantitative susceptibility mapping (QSM) is a useful magnetic resonance imaging technique. We aim to propose a deep learning (DL)-based method for QSM reconstruction that is robust to data perturbations. We developed Diffusion-QSM, a diffusion model-based method with a time-travel and resampling refinement module for high-quality QSM reconstruction. First, the diffusion prior is trained unconditionally on high-quality QSM images, without requiring explicit information about the measured tissue phase, thereby enhancing generalization performance. Subsequently, during inference, the physical constraints from the QSM forward model and measurement are integrated into the output of the diffusion model to guide the sampling process toward realistic image representations. In addition, a time-travel and resampling module is employed during the later sampling stage to refine image quality, yielding an improved reconstruction without significantly prolonging reconstruction time. Experimental results show that Diffusion-QSM outperforms traditional and unsupervised DL methods for QSM reconstruction on simulated, in vivo, and ex vivo data, and shows better generalization capability than supervised DL methods when processing out-of-distribution data. Diffusion-QSM successfully unifies data-driven diffusion priors and subject-specific physics constraints, enabling generalizable, high-quality QSM reconstruction under diverse perturbations, including image contrast, resolution and scan direction. This work advances QSM reconstruction by bridging the generalization gap in deep learning. The excellent quality and generalization capability underscore its potential for various realistic applications.
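The QSM forward model that constrains the sampling above is standard dipole physics: the measured field perturbation is the susceptibility map filtered by the k-space dipole kernel, field = F⁻¹(D · F(χ)). A NumPy sketch under simplified unit conventions (the authors' discretization may differ; the k-space origin is conventionally set to zero):

```python
import numpy as np

def dipole_kernel(shape, b0_dir=(0, 0, 1)):
    """k-space dipole kernel D(k) = 1/3 - (k·B0)^2 / |k|^2."""
    ks = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = sum(k**2 for k in ks)
    kz = sum(k * d for k, d in zip(ks, b0_dir))
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0  # conventional choice at the k-space origin
    return D

def qsm_forward(chi, b0_dir=(0, 0, 1)):
    """Tissue field perturbation predicted from a susceptibility map chi."""
    D = dipole_kernel(chi.shape, b0_dir)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Because this forward operator is linear and cheap, it can be applied at every reverse-diffusion step to keep samples consistent with the measured phase.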

Artificial Intelligence for Alzheimer's disease diagnosis through T1-weighted MRI: A systematic review.

Basanta-Torres S, Rivas-Fernández MÁ, Galdo-Alvarez S

pubmed logopapers · Sep 2 2025
Alzheimer's disease (AD) is a leading cause of dementia worldwide, characterized by heterogeneous neuropathological changes and progressive cognitive decline. Despite numerous studies, there are still no effective treatments beyond those that aim to slow progression and compensate for the impairment. Neuroimaging techniques provide a comprehensive view of brain changes, with magnetic resonance imaging (MRI) playing a key role due to its non-invasive nature and wide availability. The T1-weighted MRI sequence is frequently used due to its prevalence in most MRI protocols, generating large datasets ideal for artificial intelligence (AI) applications. AI, particularly machine learning (ML) and deep learning (DL) techniques, has been increasingly utilized to model these datasets and classify individuals along the AD continuum. This systematic review evaluates studies using AI to classify more than two stages of AD based on T1-weighted MRI data. Convolutional neural networks (CNNs) are the most widely applied, achieving an average classification accuracy of 85.93 % (range: 51.80-100 %; median: 87.70 %). These good results are due to CNNs' ability to extract hierarchical features directly from raw imaging data, reducing the need for extensive preprocessing. Non-convolutional neural networks and traditional ML approaches also demonstrated strong performance, with mean accuracies of 82.50 % (range: 57.61-99.38 %; median: 86.67 %) and 84.22 % (range: 33-99.10 %; median: 87.75 %), respectively, underscoring the importance of input data selection. Despite promising outcomes, challenges remain, including methodological heterogeneity, overfitting risks, and a reliance on the ADNI database, which limits dataset diversity. Addressing these limitations is critical to advancing AI's clinical application for early detection, improved classification, and enhanced patient outcomes.

Application and assessment of deep learning to routine 2D T2 FLEX spine imaging at 1.5T.

Shaikh IS, Milshteyn E, Chulsky S, Maclellan CJ, Soman S

pubmed logopapers · Sep 2 2025
2D T2 FSE is an essential routine spine MRI sequence, allowing assessment of fractures, soft tissues, and pathology. Fat suppression using a DIXON-type approach (2D FLEX) improves water/fat separation. Recently, a deep learning (DL) reconstruction (AIR™ Recon DL, GE HealthCare) became available for 2D FLEX, offering increased signal-to-noise ratio (SNR), reduced artifacts, and sharper images. This study aimed to compare DL-reconstructed versus non-DL-reconstructed spine 2D T2 FLEX images for diagnostic image quality and quantitative metrics at 1.5T. Forty-one patients with clinically indicated cervical or lumbar spine MRI were scanned between May and August 2023 on a 1.5T Voyager (GE HealthCare). A 2D T2 FLEX sequence was acquired, and DL-based reconstruction (noise reduction strength: 75%) was applied. Raw data were also reconstructed without DL. Three readers (CAQ-neuroradiologist, PGY-6 neuroradiology fellow, PGY-2 radiology resident) rated diagnostic preference (0 = non-DL, 1 = DL, 2 = equivalent) for 39 cases. Quantitative measures (SNR, total variation [TV], number of edges, and fat fraction [FF]) were compared using paired t-tests with significance set at p < .05. Among evaluations, 79.5% preferred DL, 11% found images equivalent, and 9.4% favored non-DL, with strong inter-rater agreement (p < .001, Fleiss' Kappa = 0.99). DL images had higher SNR, lower TV, and fewer edges (p < .001), indicating effective noise reduction. FF remained statistically unchanged in subcutaneous fat (p = .25) but differed slightly in vertebral bodies (1.4% difference, p = .01). DL reconstruction notably improved image quality by enhancing SNR and reducing noise without clinically meaningful changes in fat quantification. These findings support the use of DL-enhanced 2D T2 FLEX in routine spine imaging at 1.5T. Incorporating DL-based reconstruction into standard spine MRI protocols can increase diagnostic confidence and workflow efficiency. 
Further studies with larger cohorts and diverse pathologies are warranted to refine this approach and explore potential benefits for clinical decision-making.
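The total-variation (TV) metric used above to quantify noise can be sketched as the sum of absolute differences between neighboring pixels; lower TV after DL reconstruction indicates smoother, less noisy images. This anisotropic form is one common definition and is an illustration, not the study's exact implementation:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2D image:
    sum of |differences| between vertically and horizontally adjacent pixels."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()
```

Noise adds many small random neighbor-to-neighbor jumps, so a denoised reconstruction of the same anatomy should show a clearly lower TV, consistent with the reported results.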

Optimizing and Evaluating Robustness of AI for Brain Metastasis Detection and Segmentation via Loss Functions and Multi-dataset Training

Han, Y., Pathak, P., Award, O., Mohamed, A. S. R., Ugarte, V., Zhou, B., Hamstra, D. A., Echeverria, A. E., Mekdash, H. A., Siddiqui, Z. A., Sun, B.

medrxiv logopreprint · Sep 2 2025
Purpose: Accurate detection and segmentation of brain metastases (BM) from MRI are critical for the appropriate management of cancer patients. This study investigates strategies to enhance the robustness of artificial intelligence (AI)-based BM detection and segmentation models. Method: A DeepMedic-based network with a loss function tunable via a sensitivity/specificity trade-off weighting factor α was trained on T1 post-contrast MRI datasets from two institutions (514 patients, 4520 lesions). Robustness was evaluated on an external dataset from a third institution (91 patients, 397 lesions), featuring ground-truth annotations from two physicians. We investigated the impact of the loss-function weighting factor α and of training dataset combinations. Detection performance (sensitivity, precision, F1 score) and segmentation accuracy (Dice similarity and 95% Hausdorff distance (HD95)) were evaluated using one physician's contours as the reference standard. The optimal AI model was then compared directly against the performance of the second physician. Results: Varying α demonstrated a trade-off between sensitivity (higher α) and precision (lower α), with α = 0.5 yielding the best F1 score (0.80 ± 0.04 vs. 0.78 ± 0.04 for α = 0.95 and 0.72 ± 0.03 for α = 0.99) on the external dataset. The optimally trained model achieved detection performance comparable to the physician (F1: AI = 0.83 ± 0.04, physician = 0.83 ± 0.04) but slightly underperformed in segmentation (Dice: physician = 0.79 ± 0.04 vs. AI = 0.74 ± 0.03; HD95: physician = 2.8 ± 0.14 mm vs. AI = 3.18 ± 0.16 mm, p < 0.05). Conclusion: The derived optimal model achieves detection and segmentation performance comparable to an expert physician in a parallel comparison.
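The preprint does not give the exact form of its α-weighted loss; one common way to encode such a sensitivity/specificity trade-off is a class-weighted binary cross-entropy in which α up-weights the false-negative (missed-lesion) term. The sketch below is a hypothetical illustration of that idea, not the authors' loss:

```python
import numpy as np

def weighted_bce(y_true, p_pred, alpha=0.5, eps=1e-7):
    """Binary cross-entropy with alpha weighting the positive-class term.

    alpha -> 1 penalizes missed lesions more (boosting sensitivity);
    alpha -> 0 penalizes false alarms more (boosting precision/specificity).
    """
    p = np.clip(p_pred, eps, 1 - eps)   # avoid log(0)
    pos_term = alpha * y_true * np.log(p)
    neg_term = (1 - alpha) * (1 - y_true) * np.log(1 - p)
    return -np.mean(pos_term + neg_term)
```

Sweeping α and picking the value that maximizes F1 on held-out data mirrors the tuning procedure the abstract describes.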

3D MR Neurography of Craniocervical Nerves: Comparing Double-Echo Steady-State and Postcontrast STIR with Deep Learning-Based Reconstruction at 1.5T.

Ensle F, Zecca F, Kerber B, Lohezic M, Wen Y, Kroschke J, Pawlus K, Guggenberger R

pubmed logopapers · Sep 2 2025
3D MR neurography is a useful diagnostic tool in head and neck disorders, but neurographic imaging remains challenging in this region. Optimal sequences for nerve visualization have not yet been established and may also differ between nerves. While deep learning (DL) reconstruction can enhance nerve depiction, particularly at 1.5T, studies in the head and neck are lacking. The purpose of this study was to compare double-echo steady-state (DESS) and postcontrast STIR sequences in DL-reconstructed 3D MR neurography of the extraforaminal cranial and spinal nerves at 1.5T. Eighteen consecutive examinations of 18 patients undergoing head-and-neck MRI at 1.5T were retrospectively included (mean age: 51 ± 14 years, 11 women). 3D DESS and postcontrast 3D STIR sequences were obtained as part of the standard protocol and reconstructed with a prototype DL algorithm. Two blinded readers qualitatively evaluated visualization of the inferior alveolar, lingual, facial, hypoglossal, greater occipital, lesser occipital, and greater auricular nerves, as well as overall image quality, vascular suppression, and artifacts. Additionally, apparent SNR and contrast-to-noise ratios of the inferior alveolar and greater occipital nerves were measured. Visual ratings and quantitative measurements, respectively, were compared between sequences using the Wilcoxon signed-rank test. DESS demonstrated significantly improved visualization of the lesser occipital nerve, greater auricular nerve, and proximal greater occipital nerve (P < .015). Postcontrast STIR showed significantly enhanced visualization of the lingual nerve, hypoglossal nerve, and distal inferior alveolar nerve (P < .001). The facial nerve, proximal inferior alveolar nerve, and distal greater occipital nerve did not demonstrate significant differences in visualization between sequences (P > .08). There was also no significant difference in overall image quality or artifacts.
Postcontrast STIR achieved superior vascular suppression, reaching statistical significance for one reader (P = .039). Quantitatively, there was no significant difference between sequences (P > .05). Our findings suggest that 3D DESS generally provides improved visualization of spinal nerves, while postcontrast 3D STIR facilitates enhanced delineation of extraforaminal cranial nerves.
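The Wilcoxon signed-rank test used above compares paired ratings without assuming normality. A simplified NumPy sketch of its test statistic (zero differences are dropped and ties in |differences| are not rank-averaged here, so this is illustrative only; scipy.stats.wilcoxon is the production choice):

```python
import numpy as np

def wilcoxon_statistic(x, y):
    """Signed-rank statistic W = min(W+, W-) for paired samples x, y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                   # drop zero differences
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d| (no tie averaging)
    w_plus = ranks[d > 0].sum()
    w_minus = ranks[d < 0].sum()
    return min(w_plus, w_minus)
```

A small W means the differences are consistently one-sided, which is what drives the significant per-nerve comparisons reported above.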
