Navigator motion-resolved MR fingerprinting using implicit neural representation: Feasibility for free-breathing three-dimensional whole-liver multiparametric mapping.

Li C, Li J, Zhang J, Solomon E, Dimov AV, Spincemaille P, Nguyen TD, Prince MR, Wang Y

PubMed · Sep 2 2025
To develop free-breathing, three-dimensional (3D), whole-liver quantitative mapping of water T<sub>1</sub>, water T<sub>2</sub>, fat fraction (FF), and R<sub>2</sub>*. A multi-echo 3D stack-of-spiral gradient-echo sequence with inversion recovery and T<sub>2</sub>-prep magnetization preparations was implemented for multiparametric MRI. Fingerprinting with a neural network based on implicit neural representation (FINR) was developed to simultaneously reconstruct the motion deformation fields and static images, perform water-fat separation, and generate T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF maps. FINR performance was evaluated in 10 healthy subjects by comparison with quantitative maps generated using conventional breath-holding imaging. FINR consistently generated sharp images free of motion artifacts in all subjects. FINR showed minimal bias and narrow 95% limits of agreement for T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF values in the liver compared with conventional imaging. FINR training took about 3 h per subject, and FINR inference took less than 1 min to produce static images and motion deformation fields. FINR is a promising approach for 3D whole-liver T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF mapping in a single free-breathing continuous scan.
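
The central idea of an implicit neural representation for motion is a coordinate network that maps a spatial position (plus a respiratory motion-state code) to a displacement vector, queried at arbitrary resolution. The PyTorch sketch below illustrates only that idea; it is not the authors' FINR architecture, and the layer sizes, activation, and input encoding are assumptions.

```python
# Minimal sketch of an implicit neural representation (INR) for a motion
# deformation field: coordinates (x, y, z) plus a motion-state code are
# mapped to a 3D displacement vector. Layer widths and depth are
# illustrative assumptions, not the FINR model from the paper.
import torch
import torch.nn as nn

class DeformationINR(nn.Module):
    def __init__(self, hidden: int = 256, n_layers: int = 5):
        super().__init__()
        layers, in_dim = [], 4  # (x, y, z, motion_state)
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, 3))  # (dx, dy, dz) displacement
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 4), normalized to [-1, 1]; returns (N, 3) displacements
        return self.net(coords)

# Query the displacement field at arbitrary voxel positions for one motion state
model = DeformationINR()
coords = torch.rand(1024, 4) * 2 - 1
displacements = model(coords)  # shape (1024, 3)
```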

3D MR Neurography of Craniocervical Nerves: Comparing Double-Echo Steady-State and Postcontrast STIR with Deep Learning-Based Reconstruction at 1.5T.

Ensle F, Zecca F, Kerber B, Lohezic M, Wen Y, Kroschke J, Pawlus K, Guggenberger R

PubMed · Sep 2 2025
3D MR neurography is a useful diagnostic tool in head and neck disorders, but neurographic imaging remains challenging in this region. Optimal sequences for nerve visualization have not yet been established and may also differ between nerves. While deep learning (DL) reconstruction can enhance nerve depiction, particularly at 1.5T, studies in the head and neck are lacking. The purpose of this study was to compare double-echo steady-state (DESS) and postcontrast STIR sequences in DL-reconstructed 3D MR neurography of the extraforaminal cranial and spinal nerves at 1.5T. Eighteen consecutive examinations of 18 patients undergoing head-and-neck MRI at 1.5T were retrospectively included (mean age: 51 ± 14 years; 11 women). 3D DESS and postcontrast 3D STIR sequences were obtained as part of the standard protocol and reconstructed with a prototype DL algorithm. Two blinded readers qualitatively evaluated visualization of the inferior alveolar, lingual, facial, hypoglossal, greater occipital, lesser occipital, and greater auricular nerves, as well as overall image quality, vascular suppression, and artifacts. Additionally, apparent SNR and contrast-to-noise ratios of the inferior alveolar and greater occipital nerves were measured. Visual ratings and quantitative measurements were compared between sequences using the Wilcoxon signed-rank test. DESS demonstrated significantly improved visualization of the lesser occipital nerve, greater auricular nerve, and proximal greater occipital nerve (<i>P</i> < .015). Postcontrast STIR showed significantly enhanced visualization of the lingual nerve, hypoglossal nerve, and distal inferior alveolar nerve (<i>P</i> < .001). The facial nerve, proximal inferior alveolar nerve, and distal greater occipital nerve did not demonstrate significant differences in visualization between sequences (<i>P</i> > .08). There was also no significant difference in overall image quality or artifacts. Postcontrast STIR achieved superior vascular suppression, reaching statistical significance for one reader (<i>P</i> = .039). Quantitatively, there was no significant difference between sequences (<i>P</i> > .05). Our findings suggest that 3D DESS generally provides improved visualization of spinal nerves, while postcontrast 3D STIR facilitates enhanced delineation of extraforaminal cranial nerves.
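
As a small illustration of the paired statistical comparison used here, the sketch below runs a Wilcoxon signed-rank test on per-patient measurements from two sequences with SciPy. The numeric values are made up and stand in for quantities such as apparent SNR on DESS versus postcontrast STIR.

```python
# Paired Wilcoxon signed-rank comparison of two sequences, as used in the
# study. Example per-patient values are synthetic placeholders.
import numpy as np
from scipy.stats import wilcoxon

snr_dess = np.array([18.2, 21.5, 19.8, 22.1, 20.4, 17.9, 23.0, 19.5])
snr_stir = np.array([17.6, 22.0, 19.1, 21.4, 20.9, 18.3, 22.2, 19.0])

stat, p_value = wilcoxon(snr_dess, snr_stir)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```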

Advanced Deep Learning Architecture for the Early and Accurate Detection of Autism Spectrum Disorder Using Neuroimaging

Ud Din, A., Fatima, N., Bibi, N.

medRxiv preprint · Sep 2 2025
Autism Spectrum Disorder (ASD) is a neurological condition that affects the brain, leading to challenges in speech, communication, social interaction, repetitive behaviors, and motor skills. This research aims to develop a deep learning-based model for the accurate diagnosis and classification of autistic symptoms in children, thereby benefiting both patients and their families. Existing literature indicates that classification methods typically analyze region-based summaries of functional magnetic resonance imaging (fMRI); few studies have explored the diagnosis of ASD directly from brain images, and the complexity and heterogeneity of modeling such biomedical big data for ASD remain unclear. In the present study, the Autism Brain Imaging Data Exchange 1 (ABIDE-1) dataset was used, comprising 1,112 participants (539 individuals with ASD and 573 controls) from 17 sites. The dataset, originally in NIfTI format, required conversion to a computer-readable extension. For ASD classification, a VGG20 architecture was proposed and implemented, and this deep learning model was applied to the neuroimages to distinguish ASD from non-ASD cases. Four evaluation metrics were employed: recall, precision, F1-score, and accuracy. Experimental results indicated that the proposed model achieved an accuracy of 61%. Machine learning algorithms had previously been applied to the ABIDE-1 dataset, but deep learning techniques had not been extensively utilized; this study applies them to facilitate the early diagnosis of ASD.
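
For readers unfamiliar with the four reported metrics, the short sketch below shows how recall, precision, F1-score, and accuracy are computed from binary ASD / non-ASD predictions using scikit-learn; the label vectors are synthetic examples, not study data.

```python
# How the four reported evaluation metrics relate to binary predictions
# (1 = ASD, 0 = control). Labels below are synthetic illustrations.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
```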

Artificial Intelligence for Alzheimer's disease diagnosis through T1-weighted MRI: A systematic review.

Basanta-Torres S, Rivas-Fernández MÁ, Galdo-Alvarez S

PubMed · Sep 2 2025
Alzheimer's disease (AD) is a leading cause of dementia worldwide, characterized by heterogeneous neuropathological changes and progressive cognitive decline. Despite numerous studies, there are still no effective treatments beyond those that aim to slow progression and compensate for the impairment. Neuroimaging techniques provide a comprehensive view of brain changes, with magnetic resonance imaging (MRI) playing a key role due to its non-invasive nature and wide availability. The T1-weighted MRI sequence is frequently used due to its inclusion in most MRI protocols, generating large datasets that are ideal for artificial intelligence (AI) applications. AI, particularly machine learning (ML) and deep learning (DL) techniques, has been increasingly utilized to model these datasets and classify individuals along the AD continuum. This systematic review evaluates studies using AI to classify more than two stages of AD based on T1-weighted MRI data. Convolutional neural networks (CNNs) are the most widely applied, achieving an average classification accuracy of 85.93 % (range: 51.80-100 %; median: 87.70 %). These strong results reflect CNNs' ability to extract hierarchical features directly from raw imaging data, reducing the need for extensive preprocessing. Non-convolutional neural networks and traditional ML approaches also demonstrated strong performance, with mean accuracies of 82.50 % (range: 57.61-99.38 %; median: 86.67 %) and 84.22 % (range: 33-99.10 %; median: 87.75 %), respectively, underscoring the importance of input data selection. Despite promising outcomes, challenges remain, including methodological heterogeneity, overfitting risks, and a reliance on the ADNI database, which limits dataset diversity. Addressing these limitations is critical to advancing AI's clinical application for early detection, improved classification, and enhanced patient outcomes.

Association Between Plasma Metabolomic Profile and Machine Learning-Based Brain Age.

Li Y, Wang J, Miao Y, Dunk MM, Liu Y, Fang Z, Zhang Q, Xu W

PubMed · Sep 1 2025
Metabolomics has been associated with cognitive decline and dementia, but the relationship between metabolites and brain aging remains unclear. We aimed to investigate the associations of metabolomics with brain age assessed by neuroimaging and to explore whether these relationships vary according to apolipoprotein E (APOE) ε4 status. This study included 17,770 chronic brain disorder-free participants aged 40-69 years from the UK Biobank who underwent neuroimaging scans an average of 9 years after baseline. A total of 249 plasma metabolites were measured using nuclear magnetic resonance spectroscopy at baseline. Brain age was estimated using LASSO regression on 1079 brain MRI phenotypes, and the brain age gap (BAG; brain age minus chronological age) was calculated. Data were analyzed using linear regression. We identified 64 and 77 metabolites associated with brain age and BAG, respectively, of which 55 overlapped. Lipids (including cholesterol, cholesteryl esters, free cholesterol, phospholipids, and total lipids) in S/M-HDL, as well as phospholipids and triglycerides as a percentage of total lipids in different-density lipoproteins, were associated with a larger BAG. The percentages of cholesterol, cholesteryl esters, and free cholesterol to total lipids in VLDL, LDL, and HDL of different particle sizes were associated with a smaller BAG. The associations of LA/FA, omega-6/FA, SFA/FA, and phospholipids to total lipids in L-HDL with brain age were consistent across APOE ε4 carriers and non-carriers (all p for interaction > 0.05). Plasma metabolites show remarkably widespread associations with brain aging regardless of APOE ε4 genetic risk. Metabolic profiles could serve as an early indicator of accelerated brain aging.
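
The brain-age workflow described here reduces to two steps: fit a LASSO regression from MRI phenotypes to chronological age, then take the brain age gap as predicted minus chronological age. The sketch below illustrates that pipeline with scikit-learn on random data; the feature count matches the abstract, but everything else (sample size, cross-validation setup) is an assumption.

```python
# Sketch of brain-age estimation with LASSO regression on MRI phenotypes
# and the brain age gap (BAG = predicted brain age - chronological age).
# Data are random placeholders, not UK Biobank values.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_subjects, n_phenotypes = 500, 1079
X = rng.normal(size=(n_subjects, n_phenotypes))   # brain MRI phenotypes
age = rng.uniform(40, 69, size=n_subjects)        # chronological age

model = LassoCV(cv=5).fit(X, age)
brain_age = model.predict(X)
bag = brain_age - age                              # brain age gap
print("mean BAG:", bag.mean().round(2), "years")
```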

Magnetic Resonance-Based Artificial Intelligence- Supported Osteochondral Allograft Transplantation for Massive Osteochondral Defects of the Knee.

Hangody G, Szoldán P, Egyed Z, Szabó E, Hangody LR, Hangody L

PubMed · Sep 1 2025
Transplantation of fresh osteochondral allografts is a possible biological resurfacing option to substitute for massive bone loss and provide proper gliding surfaces for extended and deep osteochondral lesions of weight-bearing articular surfaces. Limited chondrocyte survival and technical difficulties may compromise the efficacy of osteochondral transfers. As experimental data suggest that minimizing the time between graft harvest and implantation may improve the chondrocyte survival rate, a donor-to-recipient time of less than 48 hours was used to repair massive osteochondral defects. For optimal graft congruency, a magnetic resonance-based artificial intelligence algorithm was also developed to provide proper technical support. Based on 3 years of experience, an increased survival rate of transplanted chondrocytes and improved clinical outcomes were observed.

Cross-channel feature transfer 3D U-Net for automatic segmentation of the perilymph and endolymph fluid spaces in hydrops MRI.

Yoo TW, Yeo CD, Lee EJ, Oh IS

PubMed · Sep 1 2025
The identification of endolymphatic hydrops (EH) using magnetic resonance imaging (MRI) is crucial for understanding inner ear disorders such as Meniere's disease and sudden low-frequency hearing loss. The EH ratio is calculated as the ratio of the endolymphatic fluid space to the perilymphatic fluid space. We propose a novel cross-channel feature transfer (CCFT) 3D U-Net for fully automated segmentation of the perilymphatic and endolymphatic fluid spaces in hydrops MRI. The model exhibits state-of-the-art performance in segmenting the endolymphatic fluid space by transferring magnetic resonance cisternography (MRC) features to HYDROPS-Mi2 (HYbriD of Reversed image Of Positive endolymph signal and native image of positive perilymph Signal multiplied with the heavily T2-weighted MR cisternography). Experimental results using the CCFT module showed that the segmentation performance of the perilymphatic space was 0.9459 for the Dice similarity coefficient (DSC) and 0.8975 for the intersection over union (IOU), and that of the endolymphatic space was 0.8053 for the DSC and 0.6778 for the IOU.
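
The reported numbers are Dice similarity coefficients and intersection-over-union values for the segmented fluid spaces, and the EH ratio is defined above as the endolymphatic volume divided by the perilymphatic volume. The sketch below shows how these quantities are computed from binary masks; the masks and voxel counts are synthetic stand-ins, not study data.

```python
# Dice similarity coefficient (DSC), intersection-over-union (IoU), and the
# endolymphatic hydrops (EH) ratio from segmentation masks. Masks are
# random placeholders for predicted and reference labels.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union

rng = np.random.default_rng(1)
pred = rng.random((64, 64, 32)) > 0.5
ref = rng.random((64, 64, 32)) > 0.5
print("DSC:", round(dice(pred, ref), 4), "IoU:", round(iou(pred, ref), 4))

# EH ratio from segmented fluid-space volumes (voxel counts are illustrative)
endolymph_voxels, perilymph_voxels = 1200, 5400
print("EH ratio:", round(endolymph_voxels / perilymph_voxels, 3))
```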

Multimodal dynamic hierarchical clustering model for post-stroke cognitive impairment prediction.

Bai C, Li T, Zheng Y, Yuan G, Zheng J, Zhao H

PubMed · Sep 1 2025
Post-stroke cognitive impairment (PSCI) is a common and debilitating consequence of stroke that often arises from complex interactions between diverse brain alterations. The accurate early prediction of PSCI is critical for guiding personalized interventions. However, existing methods often struggle to capture complex structural disruptions and integrate multimodal information effectively. This study proposes the multimodal dynamic hierarchical clustering network (MDHCNet), a graph neural network designed for accurate and interpretable PSCI prediction. MDHCNet constructs brain graphs from diffusion-weighted imaging, magnetic resonance angiography, and T1- and T2-weighted images and integrates them with clinical features using a hierarchical cross-modal fusion module. Experimental results using a real-world stroke cohort demonstrated that MDHCNet consistently outperformed deep learning baselines. Ablation studies validated the benefits of multimodal fusion, while saliency-based interpretation highlighted discriminative brain regions associated with cognitive decline. These findings suggest that MDHCNet is an effective and explainable tool for early PSCI prediction, with the potential to support individualized clinical decision-making in stroke rehabilitation.
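
To make the multimodal-fusion idea concrete, the rough sketch below concatenates one embedding per imaging modality (DWI, MRA, T1, T2) with clinical features and classifies PSCI with a small MLP head. This is a deliberately simplified stand-in for MDHCNet's hierarchical cross-modal fusion over brain graphs; every dimension and layer choice here is an assumption.

```python
# Rough sketch of late multimodal fusion for PSCI prediction: per-modality
# embeddings plus clinical features, concatenated and classified. Not the
# MDHCNet architecture; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleFusionClassifier(nn.Module):
    def __init__(self, emb_dim: int = 64, n_modalities: int = 4, n_clinical: int = 10):
        super().__init__()
        fused_dim = emb_dim * n_modalities + n_clinical
        self.head = nn.Sequential(
            nn.Linear(fused_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # PSCI vs. no PSCI
        )

    def forward(self, modality_embs, clinical):
        # modality_embs: list of (B, emb_dim) tensors, one per imaging modality
        fused = torch.cat(modality_embs + [clinical], dim=1)
        return self.head(fused)

model = SimpleFusionClassifier()
embs = [torch.randn(8, 64) for _ in range(4)]   # DWI, MRA, T1, T2 embeddings
logits = model(embs, torch.randn(8, 10))        # shape (8, 2)
```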

Multidisciplinary Consensus Prostate Contours on Magnetic Resonance Imaging: Educational Atlas and Reference Standard for Artificial Intelligence Benchmarking.

Song Y, Dornisch AM, Dess RT, Margolis DJA, Weinberg EP, Barrett T, Cornell M, Fan RE, Harisinghani M, Kamran SC, Lee JH, Li CX, Liss MA, Rusu M, Santos J, Sonn GA, Vidic I, Woolen SA, Dale AM, Seibert TM

PubMed · Sep 1 2025
Evaluation of artificial intelligence (AI) algorithms for prostate segmentation is challenging because ground truth is lacking. We aimed to: (1) create a reference standard data set with precise prostate contours by expert consensus, and (2) evaluate various AI tools against this standard. We obtained prostate magnetic resonance imaging cases from six institutions from the Qualitative Prostate Imaging Consortium. A panel of 4 experts (2 genitourinary radiologists and 2 prostate radiation oncologists) meticulously developed consensus prostate segmentations on axial T<sub>2</sub>-weighted series. We evaluated the performance of 6 AI tools (3 commercially available and 3 academic) using Dice scores, distance from reference contour, and volume error. The panel achieved consensus prostate segmentation on each slice of all 68 patient cases included in the reference data set. We present 2 patient examples to serve as contouring guides. Depending on the AI tool, median Dice scores (across patients) ranged from 0.80 to 0.94 for whole prostate segmentation. For a typical (median) patient, AI tools had a mean error over the prostate surface ranging from 1.3 to 2.4 mm. They maximally deviated 3.0 to 9.4 mm outside the prostate and 3.0 to 8.5 mm inside the prostate for a typical patient. Error in prostate volume measurement for a typical patient ranged from 4.3% to 31.4%. We established an expert consensus benchmark for prostate segmentation. The best-performing AI tools have typical accuracy greater than that reported for radiation oncologists using computed tomography scans (the most common clinical approach for radiation therapy planning). Physician review remains essential to detect occasional major errors.
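
Besides Dice scores, the evaluation uses distance from the reference contour and percent volume error. The sketch below illustrates one plausible way to compute those two measures from binary masks with NumPy and SciPy; the masks are synthetic, an isotropic 1 mm voxel spacing is assumed, and this is not the consortium's exact evaluation code.

```python
# Sketch of percent volume error and mean distance from the reference
# contour, using a distance transform of the reference surface.
# Synthetic masks; 1 mm isotropic voxels assumed.
import numpy as np
from scipy import ndimage

def volume_error_pct(pred: np.ndarray, ref: np.ndarray) -> float:
    return 100.0 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

def mean_surface_distance(pred: np.ndarray, ref: np.ndarray) -> float:
    # Average distance (in voxels) from predicted-surface voxels to the
    # reference surface.
    ref_surface = ref ^ ndimage.binary_erosion(ref)
    pred_surface = pred ^ ndimage.binary_erosion(pred)
    dist_to_ref = ndimage.distance_transform_edt(~ref_surface)
    return float(dist_to_ref[pred_surface].mean())

ref = np.zeros((48, 48, 48), dtype=bool); ref[12:36, 12:36, 12:36] = True
pred = np.zeros_like(ref); pred[13:37, 12:36, 11:35] = True
print("volume error %:", round(volume_error_pct(pred, ref), 1))
print("mean surface distance (voxels):", round(mean_surface_distance(pred, ref), 2))
```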

Detection of Microscopic Glioblastoma Infiltration in Peritumoral Edema Using Interactive Deep Learning With DTI Biomarkers: Testing via Stereotactic Biopsy.

Tu J, Shen C, Liu J, Hu B, Chen Z, Yan Y, Li C, Xiong J, Daoud AM, Wang X, Li Y, Zhu F

PubMed · Sep 1 2025
Microscopic tumor cell infiltration beyond contrast-enhancing regions influences glioblastoma prognosis but remains undetectable using conventional MRI. To develop and evaluate the glioblastoma infiltrating area interactive detection framework (GIAIDF), an interactive deep-learning framework that integrates diffusion tensor imaging (DTI) biomarkers for identifying microscopic infiltration within peritumoral edema. Retrospective. A total of 73 training patients (51.13 ± 13.87 years; 47 M/26F) and 25 internal validation patients (52.82 ± 10.76 years; 14 M/11F) from Center 1; 25 external validation patients (47.29 ± 11.39 years; 16 M/9F) from Center 2; 13 prospective biopsy patients (45.62 ± 9.28 years; 8 M/5F) from Center 1. 3.0 T MRI including three-dimensional contrast-enhanced T1-weighted BRAVO sequence (repetition time = 7.8 ms, echo time = 3.0 ms, inversion time = 450 ms, slice thickness = 1 mm), three-dimensional T2-weighted fluid-attenuated inversion recovery (repetition time = 7000 ms, echo time = 120 ms, inversion time = 2000 ms, slice thickness = 1 mm), and diffusion tensor imaging (repetition time = 8500 ms, echo time = 63 ms, slice thickness = 2 mm). Histopathology of 25 stereotactic biopsy specimens served as the reference standard. Primary metrics included AUC, accuracy, sensitivity, and specificity. GIAIDF heatmaps were co-registered to biopsy trajectories using Ratio-FAcpcic (0.16-0.22) as interactive priors. ROC analysis (DeLong's method) for AUC; recall, precision, and F1 score for prediction validation. GIAIDF demonstrated recall = 0.800 ± 0.060, precision = 0.915 ± 0.057, F1 = 0.852 ± 0.044 in internal validation (n = 25) and recall = 0.778 ± 0.053, precision = 0.890 ± 0.051, F1 = 0.829 ± 0.040 in external validation (n = 25). Among 13 patients undergoing stereotactic biopsy, 25 peri-ED specimens were analyzed: 18 without tumor cell infiltration and seven with infiltration, achieving AUC = 0.929 (95% CI: 0.804-1.000), sensitivity = 0.714, specificity = 0.944, and accuracy = 0.880. Infiltrated sites showed significantly higher risk scores (0.549 ± 0.194 vs. 0.205 ± 0.175 in non-infiltrated sites, p < 0.001). This study has provided a potential tool, GIAIDF, to identify regions of GBM infiltration within areas of peri-ED based on preoperative MR images.
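
The biopsy validation reduces to comparing per-specimen risk scores against histopathology labels and reporting AUC, sensitivity, specificity, and accuracy. The sketch below shows those computations with scikit-learn on synthetic scores sized like the 25 biopsy specimens (7 infiltrated, 18 non-infiltrated); the score values and the 0.5 decision threshold are assumptions.

```python
# Illustrative AUC, sensitivity, and specificity from per-specimen risk
# scores vs. biopsy-confirmed infiltration labels. Scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y_true = np.array([1] * 7 + [0] * 18)  # 1 = infiltrated, 0 = non-infiltrated
scores = np.concatenate([rng.normal(0.55, 0.15, 7), rng.normal(0.20, 0.15, 18)])

auc = roc_auc_score(y_true, scores)
y_pred = scores >= 0.5                 # assumed decision threshold
tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
sensitivity = tp / (y_true == 1).sum()
specificity = tn / (y_true == 0).sum()
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```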