
AI-powered integration of multimodal imaging in precision medicine for neuropsychiatric disorders.

Huang W, Shu N

PubMed · May 20, 2025
Neuropsychiatric disorders have complex pathological mechanisms, pronounced clinical heterogeneity, and a prolonged preclinical phase, which challenges early diagnosis and the development of precise intervention strategies. With the development of large-scale multimodal neuroimaging datasets and advances in artificial intelligence (AI) algorithms, the integration of multimodal imaging with AI techniques has emerged as a pivotal avenue for early detection and individualized treatment of neuropsychiatric disorders. To support these advances, this review outlines multimodal neuroimaging techniques, AI methods, and strategies for multimodal data fusion. We highlight applications of neuroimaging-based multimodal AI in precision medicine for neuropsychiatric disorders and discuss challenges to clinical adoption, emerging solutions, and future directions.
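
As a rough illustration of the fusion strategies such a review surveys (a generic sketch with synthetic arrays, not code from the paper), early fusion concatenates per-modality feature vectors while late fusion averages per-modality predictions:

```python
import numpy as np

def early_fusion(feat_a, feat_b):
    """Feature-level fusion: concatenate per-subject feature vectors."""
    return np.concatenate([feat_a, feat_b], axis=1)

def late_fusion(probs_a, probs_b):
    """Decision-level fusion: average per-modality class probabilities."""
    return (probs_a + probs_b) / 2.0

# Synthetic stand-ins for real imaging features from two modalities.
rng = np.random.default_rng(0)
mri_features = rng.normal(size=(4, 16))   # e.g., structural MRI features
pet_features = rng.normal(size=(4, 8))    # e.g., PET features
fused = early_fusion(mri_features, pet_features)
print(fused.shape)  # (4, 24): one concatenated vector per subject
```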

Neuroimaging Characterization of Acute Traumatic Brain Injury with Focus on Frontline Clinicians: Recommendations from the 2024 National Institute of Neurological Disorders and Stroke Traumatic Brain Injury Classification and Nomenclature Initiative Imaging Working Group.

Mac Donald CL, Yuh EL, Vande Vyvere T, Edlow BL, Li LM, Mayer AR, Mukherjee P, Newcombe VFJ, Wilde EA, Koerte IK, Yurgelun-Todd D, Wu YC, Duhaime AC, Awwad HO, Dams-O'Connor K, Doperalski A, Maas AIR, McCrea MA, Umoh N, Manley GT

PubMed · May 20, 2025
Neuroimaging screening and surveillance are among the first frontline diagnostic tools leveraged in the acute assessment (first 24 h postinjury) of patients suspected to have traumatic brain injury (TBI). While imaging, in particular computed tomography, is used almost universally in emergency departments worldwide to evaluate possible features of TBI, there is currently no agreed-upon reporting system, standard terminology, or framework to contextualize brain imaging findings with other available medical, psychosocial, and environmental data. In 2023, the NIH National Institute of Neurological Disorders and Stroke convened six working groups of international experts in TBI to develop a new framework for nomenclature and classification. The goal of this effort was to propose a more granular system of injury classification that incorporates recent progress in imaging biomarkers, blood-based biomarkers, and injury and recovery modifiers to replace the commonly used Glasgow Coma Scale-based diagnosis groups of mild, moderate, and severe TBI, which have shown relatively poor diagnostic, prognostic, and therapeutic utility. Motivated by prior efforts to standardize the nomenclature for pathoanatomic imaging findings of TBI for research and clinical trials, along with more recent studies supporting the refinement of the originally proposed definitions, the Imaging Working Group sought to update and expand this application specifically for consideration of use in clinical practice. Here we report the recommendations of this working group to enable the translation of structured imaging common data elements to the standard of care. These leverage recent advances in imaging technology, electronic medical record (EMR) systems, and artificial intelligence (AI), along with input from key stakeholders, including patients with lived experience, caretakers, providers across medical disciplines, radiology industry partners, and policymakers.
It was recommended that (1) there would be updates to the definitions of key imaging features used for this system of classification and that these should be further refined as new evidence of the underlying pathology driving the signal change is identified; (2) there would be an efficient, integrated tool embedded in the EMR imaging reporting system developed in collaboration with industry partners; (3) this would include AI-generated evidence-based feature clusters with diagnostic, prognostic, and therapeutic implications; and (4) a "patient translator" would be developed in parallel to assist patients and families in understanding these imaging features. In addition, important disclaimers would be provided regarding known limitations of current technology until such time as they are overcome, such as resolution and sequence parameter considerations. The end goal is a multifaceted TBI characterization model incorporating clinical, imaging, blood biomarker, and psychosocial and environmental modifiers to better serve patients not only acutely but also through the postinjury continuum in the days, months, and years that follow TBI.
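
As a purely hypothetical illustration of what a structured imaging common-data-element record embedded in an EMR reporting system might look like (all field and class names here are invented for this sketch and are not from the NINDS initiative or any published CDE standard):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ImagingFinding:
    feature: str                     # pathoanatomic feature label
    present: bool
    laterality: str = "unspecified"  # e.g., "left", "right", "bilateral"

@dataclass
class TBIImagingReport:
    modality: str                    # e.g., "CT" in the acute setting
    findings: list = field(default_factory=list)

# Build a structured report rather than free text, so downstream EMR
# tools (and AI feature-cluster generators) can consume it directly.
report = TBIImagingReport(modality="CT")
report.findings.append(
    ImagingFinding(feature="subdural hematoma", present=True, laterality="left"))
print(asdict(report))
```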

Deep-Learning Reconstruction for 7T MP2RAGE and SPACE MRI: Improving Image Quality at High Acceleration Factors.

Liu Z, Patel V, Zhou X, Tao S, Yu T, Ma J, Nickel D, Liebig P, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · May 20, 2025
Deep learning (DL) reconstruction has been successful in realizing otherwise impracticable acceleration factors and improving image quality at conventional MRI field strengths; however, there has been limited application to ultra-high-field MRI. The objective of this study was to evaluate the performance of a prototype DL-based image reconstruction technique in 7T MRI of the brain utilizing MP2RAGE and SPACE acquisitions, in comparison to conventional compressed sensing (CS) and controlled aliasing in parallel imaging (CAIPIRINHA) reconstructions. This retrospective study involved 60 patients who underwent 7T brain MRI between June 2024 and October 2024, comprising 30 patients with MP2RAGE data and 30 patients with SPACE FLAIR data. Each set of raw data was reconstructed with DL-based reconstruction and conventional reconstruction. Image quality was independently assessed by two neuroradiologists using a 5-point Likert scale covering overall image quality, artifacts, sharpness, structural conspicuity, and noise level. Inter-observer agreement was determined using top-box analysis. Contrast-to-noise ratio (CNR) and noise levels were quantitatively evaluated and compared using the Wilcoxon signed-rank test. DL-based reconstruction resulted in a significant increase in overall image quality and a reduction in subjective noise level for both MP2RAGE and SPACE FLAIR data (all P<0.001), with no significant differences in image artifacts (all P>0.05). Compared with standard reconstruction, DL-based reconstruction yielded an increase in CNR of 49.5% [95% CI 33.0-59.0%] for MP2RAGE data and 90.6% [95% CI 73.2-117.7%] for SPACE FLAIR data, along with a decrease in noise of 33.5% [95% CI 23.0-38.0%] for MP2RAGE data and 47.5% [95% CI 41.9-52.6%] for SPACE FLAIR data. DL-based reconstruction of 7T MRI significantly enhanced image quality compared to conventional reconstruction without introducing image artifacts.
The achievable high acceleration factors have the potential to substantially improve image quality and resolution in 7T MRI. CAIPIRINHA = Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration; CNR = contrast-to-noise ratio; CS = compressed sensing; DL = deep learning; MNI = Montreal Neurological Institute; MP2RAGE = Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes; SPACE = Sampling Perfection with Application-Optimized Contrasts using Different Flip Angle Evolutions.
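
A minimal sketch of how a CNR gain like those reported above can be computed (synthetic region-of-interest values standing in for real measurements; the study's actual measurement pipeline is not described in the abstract):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| / background std."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def percent_change(new, old):
    return 100.0 * (new - old) / old

# Synthetic ROIs: same contrast, lower noise for the DL reconstruction.
rng = np.random.default_rng(1)
lesion_std = rng.normal(100.0, 10.0, 500)   # conventional recon, noisier
bg_std = rng.normal(50.0, 10.0, 500)
lesion_dl = rng.normal(100.0, 6.0, 500)     # DL recon, lower noise
bg_dl = rng.normal(50.0, 6.0, 500)

gain = percent_change(cnr(lesion_dl, bg_dl), cnr(lesion_std, bg_std))
print(f"CNR change: {gain:+.1f}%")  # positive: DL recon improves CNR
```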

XDementNET: An Explainable Attention Based Deep Convolutional Network to Detect Alzheimer Progression from MRI data

Soyabul Islam Lincoln, Mirza Mohd Shahriar Maswood

arXiv preprint · May 20, 2025
Alzheimer's disease (AD), a common neurodegenerative disease, requires precise diagnosis and efficient treatment, particularly in light of escalating healthcare expenses and the expanding use of artificial intelligence in medical diagnostics. Many recent studies show that combining brain magnetic resonance imaging (MRI) with deep neural networks achieves promising results for diagnosing AD. This paper introduces a novel deep convolutional architecture that incorporates multiresidual blocks, specialized spatial attention blocks, grouped query attention, and multi-head attention. The study assessed the model's performance on four publicly accessible datasets, covering binary and multiclass classification across various categories. The paper also addresses the explainability of AD progression, comparing state-of-the-art methods, namely Gradient Class Activation Mapping (GradCAM), Score-CAM, Faster Score-CAM, and XGradCAM. Our methodology consistently outperforms current approaches, achieving 99.66% accuracy in 4-class classification, 99.63% in 3-class classification, and 100% in binary classification on Kaggle datasets. On the Open Access Series of Imaging Studies (OASIS) datasets, the accuracies are 99.92%, 99.90%, and 99.95%, respectively. The Alzheimer's Disease Neuroimaging Initiative-1 (ADNI-1) dataset was used for experiments in three planes (axial, sagittal, and coronal) and a combination of all planes, achieving accuracies of 99.08% for axial, 99.85% for sagittal, 99.5% for coronal, and 99.17% for all planes combined, and 97.79% and 8.60% respectively for ADNI-2. The network's excellent accuracy in categorizing AD stages demonstrates its ability to retrieve important information from MRI images.
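
The grouped query attention named in the abstract can be sketched as follows, with assumed shapes and plain NumPy standing in for the paper's actual layers: several query heads share each key/value head, reducing the key/value projection cost relative to full multi-head attention.

```python
import numpy as np

def grouped_query_attention(x, wq, wk, wv, n_heads, n_kv_heads):
    """Minimal single-example grouped query attention in NumPy.

    Each group of n_heads // n_kv_heads query heads reuses one shared
    key/value head. A generic sketch, not the paper's exact layer.
    """
    seq, dim = x.shape
    hd = dim // n_heads                      # per-head width
    q = (x @ wq).reshape(seq, n_heads, hd)
    k = (x @ wk).reshape(seq, n_kv_heads, hd)
    v = (x @ wv).reshape(seq, n_kv_heads, hd)
    group = n_heads // n_kv_heads
    out = np.empty_like(q)
    for h in range(n_heads):
        g = h // group                       # which shared K/V head to use
        scores = q[:, h] @ k[:, g].T / np.sqrt(hd)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
        out[:, h] = attn @ v[:, g]
    return out.reshape(seq, dim)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                  # 5 tokens, width 8
wq = rng.normal(size=(8, 8))                 # 4 query heads of width 2
wk = rng.normal(size=(8, 4))                 # 2 shared K/V heads of width 2
wv = rng.normal(size=(8, 4))
y = grouped_query_attention(x, wq, wk, wv, n_heads=4, n_kv_heads=2)
print(y.shape)  # (5, 8)
```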

A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation

Vinkle Srivastav, Juliette Puel, Jonathan Vappou, Elijah Van Houten, Paolo Cabras, Nicolas Padoy

arXiv preprint · May 19, 2025
Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention, offering millimeter-scale spatial precision and the ability to target deep brain structures. However, the heterogeneous and anisotropic nature of the human skull introduces significant distortions to the propagating ultrasound wavefront, which require time-consuming patient-specific planning and corrections using numerical solvers for accurate targeting. To enable data-driven approaches in this domain, we introduce TFUScapes, the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls derived from T1-weighted MRI images. We have developed a scalable simulation engine pipeline using the k-Wave pseudo-spectral solver, where each simulation returns a steady-state pressure field generated by a focused ultrasound transducer placed at realistic scalp locations. In addition to the dataset, we present DeepTFUS, a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position. The model extends a U-Net backbone with transducer-aware conditioning, incorporating Fourier-encoded position embeddings and MLP layers to create global transducer embeddings. These embeddings are fused with U-Net encoder features via feature-wise modulation, dynamic convolutions, and cross-attention mechanisms. The model is trained using a combination of spatially weighted and gradient-sensitive loss functions, enabling it to approximate high-fidelity wavefields. The TFUScapes dataset is publicly released to accelerate research at the intersection of computational acoustics, neurotechnology, and deep learning. The project page is available at https://github.com/CAMMA-public/TFUScapes.
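
A sketch of the Fourier-encoded position embeddings mentioned for DeepTFUS; the frequency schedule and embedding size here are assumptions, not the paper's values. Each transducer coordinate is mapped through sin/cos at geometrically spaced frequencies so the MLP can resolve fine positional differences:

```python
import numpy as np

def fourier_encode(pos, n_freqs=4):
    """Fourier feature encoding of a 3D transducer position.

    Assumed schedule: frequencies 2^k * pi for k = 0..n_freqs-1,
    applied per coordinate with both sin and cos.
    """
    pos = np.asarray(pos, dtype=float)           # shape (3,)
    freqs = 2.0 ** np.arange(n_freqs) * np.pi
    angles = pos[:, None] * freqs[None, :]       # (3, n_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1).ravel()

emb = fourier_encode([0.1, -0.3, 0.7])
print(emb.shape)  # (24,): 3 coords * 2 (sin, cos) * 4 frequencies
```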

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Ye S, Chen Y, Wang S, Xing L, Gao Y

PubMed · May 19, 2025
Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range through which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, referred to as 2D-3D registration. The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and from artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans. We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity differences and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy.
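
The composite similarity measure described above, pixel-wise intensity difference plus gradient differences, can be sketched as follows (the relative weighting and the use of mean squared differences are assumptions, not the paper's exact formulation):

```python
import numpy as np

def composite_similarity(drr, proj, w_grad=1.0):
    """Composite dissimilarity between a DRR and an acquired projection.

    Sum of a pixel-wise intensity term and image-gradient terms; the
    gradient terms are less sensitive to the DRR/projection domain gap.
    w_grad is an assumed weighting, not a published value.
    """
    intensity_term = np.mean((drr - proj) ** 2)
    gy_d, gx_d = np.gradient(drr)
    gy_p, gx_p = np.gradient(proj)
    gradient_term = np.mean((gy_d - gy_p) ** 2) + np.mean((gx_d - gx_p) ** 2)
    return intensity_term + w_grad * gradient_term

img = np.arange(16, dtype=float).reshape(4, 4)
print(composite_similarity(img, img))  # identical images score 0.0
```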
We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method. In the simulation data experiments, two X-ray projections of a head-and-neck image with <math xmlns="http://www.w3.org/1998/Math/MathML"> <semantics><msup><mn>45</mn> <mo>∘</mo></msup> <annotation>${45}^\circ$</annotation></semantics> </math> discrepancy were used for the registration. The accuracy of the registration results was evaluated on experiments set up at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of <math xmlns="http://www.w3.org/1998/Math/MathML"> <semantics><mrow><mo><</mo> <mn>2</mn> <mspace></mspace> <mi>mm</mi></mrow> <annotation>$< 2\nobreakspace {\rm mm}$</annotation></semantics> </math> and subdegree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angle discrepancies demonstrate the improved accuracy and robustness of the proposed method, compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure. We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.

Functional MRI Analysis of Cortical Regions to Distinguish Lewy Body Dementia From Alzheimer's Disease.

Kashyap B, Hanson LR, Gustafson SK, Sherman SJ, Sughrue ME, Rosenbloom MH

PubMed · May 19, 2025
Cortical regions such as parietal area H (PH) and the fundus of the superior temporal sulcus (FST) are involved in higher visual function and may play a role in dementia with Lewy bodies (DLB), which is frequently associated with hallucinations. The authors evaluated functional connectivity between these two regions for distinguishing participants with DLB from those with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and from cognitively normal (CN) individuals, to identify a functional connectivity MRI signature for DLB. Eighteen DLB participants completed cognitive testing and functional MRI scans and were matched to AD or MCI and CN individuals whose data were obtained from the Alzheimer's Disease Neuroimaging Initiative database (https://adni.loni.usc.edu). Images were analyzed with data from Human Connectome Project (HCP) comparison individuals by using a machine learning-based, subject-specific HCP atlas derived from diffusion tractography. Bihemispheric functional connectivity of the PH to left FST regions was reduced in the DLB group compared with the AD and CN groups (mean±SD connectivity score=0.307±0.009 vs. 0.456±0.006 and 0.433±0.006, respectively). No significant differences were detected among the groups in connectivity within basal ganglia structures, and no significant correlations were observed between neuropsychological testing results and functional connectivity between the PH and FST regions. Performances on clock-drawing and number-cancellation tests were significantly and negatively correlated with connectivity between the right caudate nucleus and right substantia nigra for DLB participants but not for AD or CN participants. Functional connectivity between the PH and FST regions is uniquely affected by DLB and may help distinguish this condition from AD.
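
As a generic sketch of the connectivity-score concept, a common convention is the Pearson correlation between two regional BOLD time series; this illustrates the idea only and is not the atlas-based pipeline used in the study:

```python
import numpy as np

def connectivity(ts_a, ts_b):
    """Functional connectivity as the Pearson correlation between the
    mean time series of two regions (generic convention, assumed here)."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

# Synthetic time series for two regions sharing a common signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
region_ph = np.sin(t) + 0.1 * rng.normal(size=t.size)
region_fst = np.sin(t) + 0.1 * rng.normal(size=t.size)
print(round(connectivity(region_ph, region_fst), 2))  # near 1: strongly coupled
```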

Morphometric and radiomics analysis toward the prediction of epilepsy associated with supratentorial low-grade glioma in children.

Tsai ML, Hsieh KL, Liu YL, Yang YS, Chang H, Wong TT, Peng SJ

PubMed · May 19, 2025
Understanding the impact of epilepsy on pediatric brain tumors is crucial for diagnostic precision and optimal treatment selection. This study investigated MRI radiomics features, tumor location, voxel-based morphometry (VBM) for gray matter density, and tumor volumetry to differentiate between children with low-grade glioma (LGG)-associated epilepsy and those without, and further identified key radiomics features for predicting epilepsy risk in children with supratentorial LGG to construct an epilepsy prediction model. A total of 206 radiomics features of tumors and voxel-based morphometric tumor location features were extracted from T2-FLAIR images in a primary cohort of 48 children with LGG, with epilepsy (N = 23) or without epilepsy (N = 25), prior to surgery. Feature selection was performed using the minimum redundancy maximum relevance algorithm, and leave-one-out cross-validation was applied to assess the predictive performance of radiomics and tumor location signatures in differentiating epilepsy-associated LGG from non-epilepsy cases. Voxel-based morphometric analysis showed significant positive t-scores within the bilateral temporal cortex and negative t-scores in the basal ganglia between the epilepsy and non-epilepsy groups. Eight radiomics features were identified as significant predictors of epilepsy in LGG, comprising 2 location, 2 shape, 1 gray-scale intensity, and 3 texture features. The most important predictor was temporal lobe involvement, followed by high dependence high gray level emphasis, elongation, area density, information correlation 1, midbrain, and intensity range. The linear support vector machine (SVM) model yielded the best prediction performance when implemented with a combination of radiomics and tumor location features, as evidenced by the following metrics: precision (0.955), recall (0.913), specificity (0.960), accuracy (0.938), F1 score (0.933), and area under the curve (AUC) (0.950).
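
The reported metrics are mutually consistent with a single binary confusion matrix for the 48-child cohort (23 with epilepsy, 25 without); the sketch below recovers them from such counts (the counts are inferred for illustration, not stated in the abstract):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(precision=precision, recall=recall,
                specificity=specificity, accuracy=accuracy, f1=f1)

# Counts consistent with the cohort (23 epilepsy, 25 non-epilepsy) and
# the reported precision/recall/specificity/accuracy/F1 values.
metrics = classification_metrics(tp=21, fp=1, fn=2, tn=24)
print({k: round(v, 3) for k, v in metrics.items()})
```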
Our findings demonstrate the efficacy of machine learning models based on radiomics features and voxel-based anatomical locations in predicting the risk of epilepsy in supratentorial LGG. This model provides a highly accurate tool for identifying epilepsy-associated LGG in children, supporting precise treatment planning.

New approaches to lesion assessment in multiple sclerosis.

Preziosa P, Filippi M, Rocca MA

PubMed · May 19, 2025
To summarize recent advancements in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research. Artificial intelligence, particularly deep learning approaches, is revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. Artificial intelligence-based tools now enable automated detection not only of T2-hyperintense white matter lesions, but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling lesion heterogeneity and clinical relevance. Artificial intelligence-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in the clinical setting, thus improving MS diagnosis, monitoring, and treatment response assessment. Emerging neuroimaging modalities may advance the understanding of MS pathophysiology, provide more specific markers of disease progression, and reveal novel potential therapeutic targets.

Multiple deep learning models based on MRI images in discriminating glioblastoma from solitary brain metastases: a multicentre study.

Kong C, Yan D, Liu K, Yin Y, Ma C

PubMed · May 19, 2025
To develop a deep learning model for accurate preoperative discrimination of glioblastoma from solitary brain metastases by combining multi-centre, multi-sequence magnetic resonance images, and to compare the performance of different deep learning models. Clinical data and MR images of 236 patients with pathologically confirmed glioblastoma or solitary brain metastases were retrospectively collected from January 2019 to May 2024 at the Provincial Hospital of Shandong First Medical University. The data were randomly divided into a training set (197 cases) and a test set (39 cases) in an 8:2 ratio. Images were preprocessed and labeled with tumor regions, and different MRI sequences were input individually or in combination to train the deep learning model 3D ResNet-18, with the optimal sequence combination obtained by five-fold cross-validation. With augmented data inputs, the deep learning models 3D Vision Transformer (3D ViT), 3D DenseNet, and 3D VGG were also trained. Receiver operating characteristic (ROC) curves were plotted, and the area under the curve (AUC), accuracy, precision, recall, and F1 score were used to evaluate the discriminative performance of the models. In addition, 48 patients with glioblastoma or solitary brain metastases treated from January 2020 to December 2022 at the Affiliated Cancer Hospital of Shandong First Medical University were collected as an external test set to compare the discriminative performance, robustness, and generalization ability of the four deep learning models.
In the comparison of the discriminative performance of different MRI sequences, the combination of the three sequences T1-CE, T2, and T2-FLAIR achieved the best results, with accuracy and AUC values of 0.8718 and 0.9305, respectively. After the four deep learning models were trained on this sequence combination, the external-validation accuracy and AUC of the 3D ResNet-18 model were 0.8125 and 0.8899, respectively, the highest among all models. A combination of multi-sequence MR images and a deep learning model can efficiently distinguish glioblastoma from solitary brain metastases preoperatively, and the 3D ResNet-18 model has the highest efficacy in identifying the two tumour types.
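
An AUC such as those reported can be computed from per-case classifier scores via the Mann-Whitney rank formulation: the probability that a randomly chosen positive (e.g., glioblastoma) scores higher than a randomly chosen negative (e.g., metastasis). A generic sketch, not the study's code:

```python
import numpy as np

def auc_rank(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic (ties count as half a win)."""
    s_pos = np.asarray(scores_pos, dtype=float)[:, None]
    s_neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (s_pos > s_neg).sum() + 0.5 * (s_pos == s_neg).sum()
    return wins / (s_pos.size * s_neg.size)

# 5 of the 6 positive/negative pairs are ranked correctly -> 5/6.
print(auc_rank([0.9, 0.8, 0.7], [0.6, 0.75]))
```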
