
An Efficient Neuro-framework for Brain Tumor Classification Using a CNN-based Self-supervised Learning Approach with Genetic Optimizations.

Ravali P, Reddy PCS, Praveen P

PubMed · Sep 18, 2025
Accurate and non-invasive grading of glioma brain tumors from MRI scans is challenging due to limited labeled data and the complexity of clinical evaluation. This study aims to develop a robust and efficient deep learning framework for improved glioma classification using MRI images. A multi-stage framework is proposed, starting with SimCLR-based self-supervised learning for representation learning without labels, followed by Deep Embedded Clustering to extract and group features effectively. EfficientNet-B7 is used for initial classification due to its parameter efficiency. A weighted ensemble of EfficientNet-B7, ResNet-50, and DenseNet-121 is employed for the final classification. Hyperparameters are fine-tuned using a Differential Evolution-optimized Genetic Algorithm to enhance accuracy and training efficiency. EfficientNet-B7 achieved approximately 88-90% classification accuracy. The weighted ensemble improved this to approximately 93%. Genetic optimization further enhanced accuracy by 3-5% and reduced training time by 15%. The framework overcomes data scarcity and limited feature extraction issues in traditional CNNs. The combination of self-supervised learning, clustering, ensemble modeling, and evolutionary optimization provides improved performance and robustness, though it requires significant computational resources and further clinical validation. The proposed framework offers an accurate and scalable solution for glioma classification from MRI images. It supports faster, more reliable clinical decision-making and holds promise for real-world diagnostic applications.
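
As an illustration of the final ensemble step described above, the sketch below blends per-model softmax probabilities with fixed weights. It is a minimal example under stated assumptions: the model names, probabilities, and weights are illustrative, not the authors' released code, and the paper tunes such weights and hyperparameters with its DE-optimized genetic algorithm rather than fixing them by hand.

```python
import numpy as np

# Hypothetical per-model class probabilities for two MRI samples
# (rows: samples, columns: glioma classes). Values are illustrative only.
probs = {
    "efficientnet_b7": np.array([[0.70, 0.20, 0.10],
                                 [0.15, 0.60, 0.25]]),
    "resnet50":        np.array([[0.60, 0.30, 0.10],
                                 [0.20, 0.55, 0.25]]),
    "densenet121":     np.array([[0.65, 0.25, 0.10],
                                 [0.10, 0.70, 0.20]]),
}

# Assumed ensemble weights (e.g., proportional to validation accuracy).
weights = {"efficientnet_b7": 0.4, "resnet50": 0.3, "densenet121": 0.3}

def weighted_ensemble(probs, weights):
    """Return class predictions from a weighted average of softmax outputs."""
    total = sum(weights.values())
    blended = sum((w / total) * probs[name] for name, w in weights.items())
    return blended.argmax(axis=1), blended

preds, blended = weighted_ensemble(probs, weights)
print(preds)  # e.g., [0 1]
```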

Brain-HGCN: A Hyperbolic Graph Convolutional Network for Brain Functional Network Analysis

Junhao Jia, Yunyou Liu, Cheng Yang, Yifei Sun, Feiwei Qin, Changmiao Wang, Yong Peng

arXiv preprint · Sep 18, 2025
Functional magnetic resonance imaging (fMRI) provides a powerful non-invasive window into the brain's functional organization by generating complex functional networks, typically modeled as graphs. These brain networks exhibit a hierarchical topology that is crucial for cognitive processing. However, due to inherent spatial constraints, standard Euclidean GNNs struggle to represent these hierarchical structures without high distortion, limiting their clinical performance. To address this limitation, we propose Brain-HGCN, a geometric deep learning framework based on hyperbolic geometry, which leverages the intrinsic property of negatively curved space to model the brain's network hierarchy with high fidelity. Grounded in the Lorentz model, our model employs a novel hyperbolic graph attention layer with a signed aggregation mechanism to distinctly process excitatory and inhibitory connections, ultimately learning robust graph-level representations via a geometrically sound Fréchet mean for graph readout. Experiments on two large-scale fMRI datasets for psychiatric disorder classification demonstrate that our approach significantly outperforms a wide range of state-of-the-art Euclidean baselines. This work pioneers a new geometric deep learning paradigm for fMRI analysis, highlighting the immense potential of hyperbolic GNNs in the field of computational psychiatry.
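
To make the hyperbolic setting concrete, here is a minimal sketch of two Lorentz-model primitives (the Minkowski inner product and the geodesic distance) that hyperbolic graph layers are typically built on. It is an assumption-laden illustration of the geometry, not the Brain-HGCN implementation.

```python
import torch

def minkowski_inner(x, y):
    """Minkowski inner product <x, y>_L = -x0*y0 + sum_{i>=1} xi*yi (last dim)."""
    return (x * y).sum(dim=-1) - 2.0 * x[..., 0] * y[..., 0]

def lorentz_distance(x, y, k=1.0):
    """Geodesic distance between points on the hyperboloid <x, x>_L = -k."""
    inner = minkowski_inner(x, y)
    # Clamp for numerical stability before acosh.
    return torch.sqrt(torch.tensor(k)) * torch.acosh(torch.clamp(-inner / k, min=1.0 + 1e-7))

# Two points lifted onto the unit hyperboloid: x0 = sqrt(1 + ||x_spatial||^2).
spatial = torch.tensor([[0.3, -0.1], [0.0, 0.5]])
x0 = torch.sqrt(1.0 + (spatial ** 2).sum(dim=-1, keepdim=True))
pts = torch.cat([x0, spatial], dim=-1)
print(lorentz_distance(pts[0], pts[1]))
```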

NeuroRAD-FM: A Foundation Model for Neuro-Oncology with Distributionally Robust Training

Moinak Bhattacharya, Angelica P. Kurtz, Fabio M. Iwamoto, Prateek Prasanna, Gagandeep Singh

arXiv preprint · Sep 18, 2025
Neuro-oncology poses unique challenges for machine learning due to heterogeneous data and tumor complexity, limiting the ability of foundation models (FMs) to generalize across cohorts. Existing FMs also perform poorly in predicting uncommon molecular markers, which are essential for treatment response and risk stratification. To address these gaps, we developed a neuro-oncology specific FM with a distributionally robust loss function, enabling accurate estimation of tumor phenotypes while maintaining cross-institution generalization. We pretrained self-supervised backbones (BYOL, DINO, MAE, MoCo) on multi-institutional brain tumor MRI and applied distributionally robust optimization (DRO) to mitigate site and class imbalance. Downstream tasks included molecular classification of common markers (MGMT, IDH1, 1p/19q, EGFR), uncommon alterations (ATRX, TP53, CDKN2A/2B, TERT), continuous markers (Ki-67, TP53), and overall survival prediction in IDH1 wild-type glioblastoma at UCSF, UPenn, and CUIMC. Our method improved molecular prediction and reduced site-specific embedding differences. At CUIMC, mean balanced accuracy rose from 0.744 to 0.785 and AUC from 0.656 to 0.676, with the largest gains for underrepresented endpoints (CDKN2A/2B accuracy 0.86 to 0.92, AUC 0.73 to 0.92; ATRX AUC 0.69 to 0.82; Ki-67 accuracy 0.60 to 0.69). For survival, c-index improved at all sites: CUIMC 0.592 to 0.597, UPenn 0.647 to 0.672, UCSF 0.600 to 0.627. Grad-CAM highlighted tumor and peri-tumoral regions, confirming interpretability. Overall, coupling FMs with DRO yields more site-invariant representations, improves prediction of common and uncommon markers, and enhances survival discrimination, underscoring the need for prospective validation and integration of longitudinal and interventional signals to advance precision neuro-oncology.
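
The distributionally robust training described above can be sketched as a group-DRO-style reweighting of per-site losses. The snippet below is a simplified illustration under assumptions (hand-picked step size, three hypothetical sites); it is not the authors' training code.

```python
import torch

def group_dro_step(per_sample_loss, group_ids, group_weights, eta=0.01):
    """One group-DRO-style update: upweight the worst-performing groups
    (e.g., acquisition sites) via exponentiated-gradient ascent on group weights."""
    num_groups = group_weights.numel()
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        group_losses.append(per_sample_loss[mask].mean() if mask.any()
                            else per_sample_loss.new_zeros(()))
    group_losses = torch.stack(group_losses)
    new_weights = group_weights * torch.exp(eta * group_losses)
    new_weights = new_weights / new_weights.sum()
    robust_loss = (new_weights * group_losses).sum()
    return robust_loss, new_weights

# Illustrative use with three sites and five samples.
losses = torch.tensor([0.2, 0.9, 0.4, 1.1, 0.3])
sites = torch.tensor([0, 1, 0, 2, 1])
weights = torch.full((3,), 1.0 / 3)
robust_loss, weights = group_dro_step(losses, sites, weights)
print(robust_loss, weights)
```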

Technical Feasibility of Quantitative Susceptibility Mapping Radiomics for Predicting Deep Brain Stimulation Outcomes in Parkinson Disease.

Roberts AG, Zhang J, Tozlu C, Romano D, Akkus S, Kim H, Sabuncu MR, Spincemaille P, Li J, Wang Y, Wu X, Kopell BH

PubMed · Sep 18, 2025
Parkinson disease (PD) patients with motor complications are often considered for deep brain stimulation (DBS) surgery. Predicting symptom improvement to separate DBS responders and nonresponders remains an unmet need. Currently, DBS candidacy is evaluated using the levodopa challenge test (LCT) to confirm dopamine responsiveness and diagnosis. However, prediction of DBS success by measuring presurgical symptom improvement associated with levodopa dosage changes is highly problematic. Quantitative susceptibility mapping (QSM) is a recently developed MRI method that depicts brain iron distribution. As the substantia nigra and subthalamic nuclei are well visualized, QSM has been used in presurgical planning of DBS. Spatial features resulting from iron distribution in these nuclei have previously been linked with disease progression and motor symptom severity. Given its clear target depiction and prior findings regarding susceptibility and PD, this study demonstrates the technical feasibility of predicting DBS outcomes from presurgical QSM. A novel presurgical QSM radiomics approach using a regression model is presented to predict DBS outcome according to spatial features in QSM deep gray nuclei. To overcome limited and noisy training data, data augmentation using label noise injection, or "compensation," was used to improve outcome prediction of the regression model. The QSM radiomics model was evaluated on 67 patients with PD who underwent DBS at 2 medical centers. The QSM radiomics model predicted DBS improvement in the Unified Parkinson Disease Rating Scale at both Center 1 and Center 2, as measured by Pearson correlation, whereas the LCT failed to predict DBS improvement at either center. QSM radiomics has potential to accurately predict DBS outcome in treating patients with PD, offering a valuable alternative to the time-consuming and low-accuracy LCT.
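
The label-noise "compensation" augmentation is described only at a high level, so the sketch below is a speculative, minimal reading of it: the small training set is replicated and the outcome labels are jittered with Gaussian noise before fitting a regression model. Feature values, noise level, and the choice of ridge regression are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical radiomics features (rows: patients) and motor-improvement labels.
X = rng.normal(size=(40, 12))
y = 5.0 * X[:, 0] + rng.normal(scale=2.0, size=40)

def augment_with_label_noise(X, y, copies=5, sigma=1.0, rng=rng):
    """Replicate the training set `copies` times, perturbing labels with Gaussian noise."""
    X_aug = np.concatenate([X] * copies, axis=0)
    y_aug = np.concatenate([y + rng.normal(scale=sigma, size=y.shape) for _ in range(copies)])
    return X_aug, y_aug

X_aug, y_aug = augment_with_label_noise(X, y)
model = Ridge(alpha=1.0).fit(X_aug, y_aug)
print(round(model.score(X, y), 3))  # in-sample fit; a real study would cross-validate
```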

MRI on a Budget: Leveraging Low and Ultra-Low Intensity Technology in Africa.

Ussi KK, Mtenga RB

PubMed · Sep 18, 2025
Magnetic resonance imaging (MRI) is a cornerstone of brain and spine diagnostics, yet access across Africa is limited by high installation costs, power requirements, and the need for specialized shielding and facilities. Low- and ultra-low-field (ULF) MRI systems operating below 0.3 T are emerging as a practical alternative to expand neuroimaging capacity in resource-constrained settings. However, they face challenges that hinder their use in clinical settings. Technological advances that seek to tackle these challenges, such as permanent Halbach array magnets, portable scanner designs such as those successfully deployed in Uganda and Malawi, and deep learning methods including convolutional neural network electromagnetic interference cancellation and residual U-Net image reconstruction, have improved image quality and reduced noise, making ULF MRI increasingly viable. We review the state of low-field MRI technology, its application in point-of-care and rural contexts, and the specific limitations that remain, including reduced signal-to-noise ratio, larger voxel size requirements, and susceptibility to motion artifacts. Although not a replacement for high-field scanners in detecting subtle or small lesions, low-field MRI offers a promising pathway to broaden diagnostic imaging availability, support clinical decision-making, and advance equitable neuroimaging research in under-resourced regions. ABBREVIATIONS: CNN = convolutional neural network; EMI = electromagnetic interference; FID = free induction decay; LMIC = low- and middle-income countries; MRI = magnetic resonance imaging; NCDs = non-communicable diseases; RF = radiofrequency pulse; SNR = signal-to-noise ratio; TBI = traumatic brain injury.
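
As a pointer to what "residual U-Net image reconstruction" refers to, the block below sketches a basic 2D residual convolution unit of the kind such networks stack inside their encoder-decoder; it is a generic illustration, not code from any of the deployments reviewed here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic 2D residual convolution block, the building unit of residual U-Nets
    used for denoising/reconstructing low-field MR images (illustrative only)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Skip connection: the block learns a residual correction to its input.
        return self.act(x + self.body(x))

# A noisy ULF-like feature map (batch, channels, height, width).
x = torch.randn(1, 16, 64, 64)
print(ResidualBlock(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```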

MDFNet: a multi-dimensional feature fusion model based on structural magnetic resonance imaging representations for brain age estimation.

Zhang C, Nan P, Song L, Wang Y, Su K, Zheng Q

PubMed · Sep 18, 2025
Brain age estimation plays a significant role in understanding the aging process and its relationship with neurodegenerative diseases. The aim of this study is to devise a unified multi-dimensional feature fusion model (MDFNet) that enhances brain age estimation from structural MRI alone by combining diverse representations: the whole brain, gray matter volume from tissue segmentation, node message passing over the brain network, edge-based graph path convolution over brain connectivity, and demographic data. The MDFNet was developed by devising and integrating a whole-brain-level Euclidean convolution channel (WBEC-channel), a tissue-level Euclidean convolution channel (TEC-channel), a graph convolution channel based on node message passing (nodeGCN-channel), an edge-based graph path convolution channel on brain connectivity (edgeGCN-channel), and a multilayer perceptron channel for demographic data (MLP-channel). The MDFNet was validated on 1872 healthy subjects from four public datasets and applied to an independent cohort of Alzheimer's disease (AD) patients. Interpretability analysis and normative modeling of the MDFNet in brain age estimation were also performed. The MDFNet achieved superior performance, with a mean absolute error (MAE) of 4.396 ± 0.244 years, a Pearson correlation coefficient (PCC) of 0.912 ± 0.002, and a Spearman's rank correlation coefficient (SRCC) of 0.819 ± 0.015, when compared with state-of-the-art deep learning models. The AD group exhibited a significantly greater brain age gap (BAG) than the healthy group (P < 0.05), and normative modeling likewise showed significantly higher mean Z-scores in AD patients than in healthy subjects (P < 0.05). Interpretability was visualized at both the group and individual levels, enhancing the reliability of the MDFNet. The MDFNet enhanced brain age estimation from structural MRI alone by employing a multi-dimensional feature integration strategy.
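
The multi-channel fusion idea can be pictured as late fusion of per-channel embeddings into a single regression head. The sketch below assumes arbitrary embedding sizes for the five channels and a plain MLP regressor; it is an illustration of the fusion pattern, not the MDFNet architecture itself.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate per-channel embeddings (whole-brain CNN, tissue CNN, node GCN,
    edge GCN, demographics MLP) and regress brain age. Dimensions are assumptions."""
    def __init__(self, dims=(256, 256, 128, 128, 16)):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sum(dims), 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, embeddings):
        # embeddings: list of tensors, one per channel, each of shape (batch, dim_i).
        return self.mlp(torch.cat(embeddings, dim=-1)).squeeze(-1)

# Dummy embeddings for a batch of four subjects.
embs = [torch.randn(4, d) for d in (256, 256, 128, 128, 16)]
print(FusionHead()(embs).shape)  # torch.Size([4])
```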

Integrating artificial intelligence with Gamma Knife radiosurgery in treating meningiomas and schwannomas: a review.

Alhosanie TN, Hammo B, Klaib AF, Alshudifat A

PubMed · Sep 18, 2025
Meningiomas and schwannomas are benign tumors that affect the central nervous system, comprising up to one-third of intracranial neoplasms. Gamma Knife radiosurgery (GKRS), or stereotactic radiosurgery (SRS), is a form of radiation therapy. Although referred to as "surgery," GKRS does not involve incisions; the GK device uses highly focused gamma rays to treat lesions or tumors, primarily in the brain. In radiation oncology, machine learning (ML) has been used in various aspects, including outcome prediction, quality control, treatment planning, and image segmentation. This review showcases the advantages of integrating artificial intelligence with Gamma Knife technology in treating schwannomas and meningiomas.

This review adheres to PRISMA guidelines. We searched the PubMed, Scopus, and IEEE databases to identify studies published between 2021 and March 2025 that met our inclusion and exclusion criteria. The focus was on AI algorithms applied to patients with vestibular schwannoma and meningioma treated with GKRS. Two reviewers participated in the data extraction and quality assessment process.

A total of nine studies were reviewed in this analysis. One distinguished deep learning (DL) model is a dual-pathway convolutional neural network (CNN) that integrates T1-weighted (T1W) and T2-weighted (T2W) MRI scans. This model was tested on 861 patients who underwent GKRS, achieving a Dice similarity coefficient (DSC) of 0.90. ML-based radiomics models have also demonstrated that certain radiomic features can predict the response of vestibular schwannomas and meningiomas to radiosurgery; among these, a neural network model exhibited the best performance. AI models have also been employed to predict complications following GKRS, such as peritumoral edema. A random survival forest (RSF) model was developed using clinical, semantic, and radiomics variables, achieving C-index scores of 0.861 and 0.780. This model enables the classification of patients into high-risk and low-risk categories for developing post-GKRS edema.

AI and ML models show great potential in tumor segmentation, volumetric assessment, and predicting treatment outcomes for vestibular schwannomas and meningiomas treated with GKRS. However, their successful clinical implementation relies on overcoming challenges related to external validation, standardization, and computational demands. Future research should focus on large-scale, multi-institutional validation studies, integrating multimodal data, and developing cost-effective strategies for deploying AI technologies.
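
For reference, the Dice similarity coefficient quoted for the dual-pathway CNN (DSC of 0.90) is computed as twice the overlap of the predicted and reference masks divided by their total size. A minimal implementation on toy masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for tumor segmentations.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True
print(round(dice_coefficient(a, b), 3))  # 0.75
```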

Assessing the Feasibility of Deep Learning-Based Attenuation Correction Using Photon Emission Data in 18F-FDG Images for Dedicated Head and Neck PET Scanners.

Shahrbabaki Mofrad M, Ghafari A, Amiri Tehrani Zade A, Aghahosseini F, Ay M, Farzenefar S, Sheikhzadeh P

PubMed · Sep 18, 2025
This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) 18F-FDG PET images, focusing on head and neck imaging.

Materials and Methods: A residual network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. For validation during training and testing, 21 and 24 patient images without pathology and artifacts were used, and 12 images with pathologies were used for independent testing. Prediction accuracy was assessed using metrics such as RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images. Statistical significance between the contrast and SNR of reference and predicted images was assessed using a paired-sample t-test.

Results: Two nuclear medicine physicians evaluated the predicted head and neck MAC images, finding them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. For the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026, respectively. No significant differences were found in SNR and contrast between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05).

Conclusion: The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model has the potential to be utilized in dedicated head and neck PET scanners without the requirement of computed tomography (CT) for attenuation correction.
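
The image-similarity metrics reported above (MSE, RMSE, PSNR, SSIM) can be reproduced with standard library calls; the sketch below uses scikit-image on synthetic stand-ins for a reference MAC slice and a network prediction, so the numbers it prints are illustrative only.

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)

# Stand-ins for a reference MAC slice and a predicted MAC slice, scaled to [0, 1].
reference = rng.random((128, 128))
predicted = np.clip(reference + rng.normal(scale=0.01, size=reference.shape), 0, 1)

mse = mean_squared_error(reference, predicted)
rmse = np.sqrt(mse)
psnr = peak_signal_noise_ratio(reference, predicted, data_range=1.0)
ssim = structural_similarity(reference, predicted, data_range=1.0)
print(f"MSE={mse:.6f}  RMSE={rmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```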

DLMUSE: Robust Brain Segmentation in Seconds Using Deep Learning.

Bashyam VM, Erus G, Cui Y, Wu D, Hwang G, Getka A, Singh A, Aidinis G, Baik K, Melhem R, Mamourian E, Doshi J, Davison A, Nasrallah IM, Davatzikos C

PubMed · Sep 17, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To introduce an open-source deep learning brain segmentation model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (ages 24-93 with a mean of 65 years (SD: 11.5 years) and 1007 females and 893 males) with reference labels generated using a multiatlas segmentation method with human supervision. The final model was validated using 71391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer's disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whittney U and McNemar's tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores = 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years, <i>P</i> = .56). Classification of Alzheimer's disease using DLMUSE features achieved an accuracy of 89% and F1-score of 0.80, which were comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation speed was over 10000 times faster than that of the reference method (3.5 seconds vs 14 hours). Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-theart methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. ©RSNA, 2025.

Decision Strategies in AI-Based Ensemble Models in Opportunistic Alzheimer's Detection from Structural MRI.

Hammonds SK, Eftestøl T, Kurz KD, Fernandez-Quilez A

PubMed · Sep 17, 2025
Alzheimer's disease (AD) is a neurodegenerative condition and the most common form of dementia. Recent developments in AD treatment call for robust diagnostic tools to facilitate medical decision-making. Despite progress for early diagnostic tests, there remains uncertainty about clinical use. Structural magnetic resonance imaging (MRI), as a readily available imaging tool in the current AD diagnostic pathway, in combination with artificial intelligence, offers opportunities of added value beyond symptomatic evaluation. However, MRI studies in AD tend to suffer from small datasets and consequently limited generalizability. Although ensemble models take advantage of the strengths of several models to improve performance and generalizability, there is little knowledge of how the different ensemble models compare performance-wise and the relationship between detection performance and model calibration. The latter is especially relevant for clinical translatability. In our study, we applied three ensemble decision strategies with three different deep learning architectures for multi-class AD detection with structural MRI. For two of the three architectures, the weighted average was the best decision strategy in terms of balanced accuracy and calibration error. In contrast to the base models, the results of the ensemble models showed that the best detection performance corresponded to the lowest calibration error, independent of the architecture. For each architecture, the best ensemble model reduced the estimated calibration error compared to the base model average from (1) 0.174±0.01 to 0.164±0.04, (2) 0.182±0.02 to 0.141±0.04, and (3) 0.269±0.08 to 0.240±0.04 and increased the balanced accuracy from (1) 0.527±0.05 to 0.608±0.06, (2) 0.417±0.03 to 0.456±0.04, and (3) 0.348±0.02 to 0.371±0.03.
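
Since the comparison above hinges on the estimated calibration error, a minimal sketch of one common estimator (binned expected calibration error over the ensemble's averaged softmax outputs) is given below; the bin count, probabilities, and labels are illustrative assumptions rather than the study's exact protocol.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: |mean confidence - accuracy| per bin, weighted by bin occupancy."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

# Illustrative 3-class probabilities, e.g., from a weighted-average ensemble.
probs = np.array([[0.80, 0.10, 0.10],
                  [0.40, 0.50, 0.10],
                  [0.20, 0.20, 0.60],
                  [0.55, 0.30, 0.15]])
labels = np.array([0, 1, 2, 1])
print(round(expected_calibration_error(probs, labels), 3))
```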
