Cross-scale prediction of glioblastoma MGMT methylation status based on deep learning combined with magnetic resonance images and pathology images

Wu, X., Wei, W., Li, Y., Ma, M., Hu, Z., Xu, Y., Hu, W., Chen, G., Zhao, R., Kang, X., Yin, H., Xi, Y.

medRxiv preprint · May 8, 2025
Background: In glioblastoma (GBM), promoter methylation of O6-methylguanine-DNA methyltransferase (MGMT) is associated with benefit from chemotherapy, but it has not been accurately evaluated from radiological and pathological images. We aimed to develop and validate an MRI- and pathology-image-based deep learning radiopathomics model for predicting MGMT promoter methylation in patients with GBM. Methods: We retrospectively collected pathologically confirmed isocitrate dehydrogenase (IDH) wild-type GBM patients (n=207) from three centers, all of whom underwent MRI within 2 weeks before surgery. A pre-trained ResNet50 was used as the feature extractor, and 1024-dimensional features were extracted from the MRI and pathology images and screened for modeling. Feature fusion was then performed by calculating normalized multimodal MRI fusion features and pathology features, and MGMT prediction models based on deep learning radiomics, pathomics, and radiopathomics (DLRM, DLPM, and DLRPM) were constructed and applied to internal and external validation cohorts. Results: In the training, internal validation, and external validation cohorts, the DLRPM further improved predictive performance and significantly outperformed the DLRM and DLPM, with AUCs of 0.920 (95% CI 0.870-0.968), 0.854 (95% CI 0.702-1), and 0.840 (95% CI 0.625-1), respectively. Conclusion: We developed and validated cross-scale radiology and pathology models for predicting MGMT methylation status; the DLRPM performed best, and this cross-scale approach paves the way for further research and clinical application.
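As an illustration of the feature-extraction step described above (a frozen, pre-trained ResNet50 producing fixed-length descriptors), here is a minimal sketch, not the authors' code: the 1024-dimensional linear projection, the preprocessing, and the `extract_features` helper are assumptions, since ResNet-50's pooled output is natively 2048-dimensional and the abstract does not specify how the 1024 features were obtained.

```python
# Hedged sketch: deep-feature extraction from an MRI slice or pathology patch
# with a pre-trained ResNet-50 used as a frozen backbone.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the final fc layer
backbone.eval()

project = nn.Linear(2048, 1024)  # hypothetical reduction to 1024 dimensions

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> torch.Tensor:
    """Return a 1024-dim feature vector for one image (illustrative helper)."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)           # shape (1, 3, 224, 224)
    with torch.no_grad():
        feat = backbone(x).flatten(1)          # shape (1, 2048)
        feat = project(feat)                   # shape (1, 1024)
    return feat.squeeze(0)
```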

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

PubMed · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, to identify relevant studies published up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001), DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), and CT scan-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly shorter door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the improvement in workflow metrics, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
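For readers unfamiliar with the effect measure reported throughout (standardized mean differences with 95% CIs), a minimal sketch of how one study's SMD would be computed is shown below; the summary statistics are invented and the review's actual pooling model is not reproduced here.

```python
# Hedged sketch: standardized mean difference (Hedges' g) with a 95% CI for a
# single study comparing pre-AI vs post-AI workflow times. Illustrative only.
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """SMD with the small-sample (Hedges) bias correction and a normal-theory CI."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
    g = d * correction
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical per-study summary statistics (door-to-groin times in minutes):
g, ci = hedges_g(mean1=78, sd1=25, n1=120, mean2=95, sd2=30, n2=110)
print(f"SMD = {g:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```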

Interpretable MRI-Based Deep Learning for Alzheimer's Risk and Progression

Lu, B., Chen, Y.-R., Li, R.-X., Zhang, M.-K., Yan, S.-Z., Chen, G.-Q., Castellanos, F. X., Thompson, P. M., Lu, J., Han, Y., Yan, C.-G.

medRxiv preprint · May 7, 2025
Timely intervention for Alzheimer's disease (AD) requires early detection. The development of immunotherapies targeting amyloid-beta and tau underscores the need for accessible, time-efficient biomarkers for early diagnosis. Here, we directly applied our previously developed MRI-based deep learning model for AD to the large Chinese SILCODE cohort (722 participants, 1,105 brain MRI scans). The model, initially trained on North American data, demonstrated robust cross-ethnic generalization without any retraining or fine-tuning, achieving an AUC of 91.3% in AD classification with a sensitivity of 95.2%. It successfully identified 86.7% of individuals at risk of AD progression more than 5 years in advance. Individuals identified as high risk exhibited significantly shorter median progression times. By integrating an interpretable deep learning brain risk map approach, we identified AD brain subtypes, including an MCI subtype associated with rapid cognitive decline. The model's risk scores showed significant correlations with cognitive measures and plasma biomarkers, such as tau proteins and neurofilament light chain (NfL). These findings underscore the exceptional generalizability and clinical utility of MRI-based deep learning models, especially in large and diverse populations, offering valuable tools for early therapeutic intervention. The model has been made open source and deployed to a free online website for AD risk prediction, to assist in early screening and intervention.
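A minimal sketch of the evaluation pattern used here, applying a previously trained model to an external cohort without retraining and reporting AUC and sensitivity, assuming an sklearn-style classifier; `model`, the data arrays, and the threshold selection are placeholders, not the authors' pipeline.

```python
# Hedged sketch: scoring a frozen classifier on an unseen cohort.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_external_cohort(model, X_external, y_external, target_sensitivity=0.95):
    """Report AUC plus the sensitivity/specificity at a chosen operating point."""
    scores = model.predict_proba(X_external)[:, 1]       # class-1 probabilities
    auc = roc_auc_score(y_external, scores)

    # Find the first ROC point that reaches the desired sensitivity.
    fpr, tpr, thresholds = roc_curve(y_external, scores)
    idx = int(np.argmax(tpr >= target_sensitivity))
    return {"auc": auc, "sensitivity": tpr[idx],
            "specificity": 1 - fpr[idx], "threshold": thresholds[idx]}
```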

Neuroanatomical-Based Machine Learning Prediction of Alzheimer's Disease Across Sex and Age

Jogeshwar, B. K., Lu, S., Nephew, B. C.

medRxiv preprint · May 7, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss. In 2024, in the US alone, it affected approximately 1 in 9 people aged 65 and older, equivalent to 6.9 million individuals. Early detection and accurate AD diagnosis are crucial for improving patient outcomes. Magnetic resonance imaging (MRI) has emerged as a valuable tool for examining brain structure and identifying potential AD biomarkers. This study performs predictive analyses by employing machine learning techniques to identify key brain regions associated with AD using numerical data derived from anatomical MRI scans, going beyond standard statistical methods. Using the random forest algorithm, we achieved 92.87% accuracy in distinguishing AD from mild cognitive impairment and cognitively normal controls. Subgroup analyses across nine sex- and age-based cohorts (69-76 years, 77-84 years, and a unified 69-84 years group) revealed the hippocampus, amygdala, and entorhinal cortex as consistently top-ranked predictors. These regions showed volume reductions that varied across age and sex groups, reflecting age- and sex-related neuroanatomical patterns. For instance, younger males and females (aged 69-76) exhibited volume decreases in the right hippocampus, suggesting its importance in the early stages of AD. Older males (77-84) showed substantial volume decreases in the left inferior temporal cortex. Additionally, the left middle temporal cortex showed decreased volume in females, suggesting a potential female-specific influence, while the right entorhinal cortex may have a male-specific impact. These age-specific sex differences could inform clinical research and treatment strategies, aiding in the identification of neuroanatomical markers and therapeutic targets for future clinical interventions.
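The core analysis, a random forest trained on regional brain volumes with feature-importance ranking, can be sketched roughly as follows; the file name, column layout, and hyperparameters are hypothetical, not taken from the paper.

```python
# Hedged sketch: random forest classification of diagnosis from regional volumes,
# followed by ranking regions by feature importance.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("regional_volumes.csv")     # hypothetical file: one row per subject
X = df.drop(columns=["diagnosis"])            # regional volumes (hippocampus, amygdala, ...)
y = df["diagnosis"]                           # e.g. AD / MCI / CN labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Rank regions by mean decrease in impurity to surface the top predictors.
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```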

3D Brain MRI Classification for Alzheimer Diagnosis Using CNN with Data Augmentation

Thien Nhan Vo, Bac Nam Ho, Thanh Xuan Truong

arXiv preprint · May 7, 2025
A three-dimensional convolutional neural network was developed to classify T1-weighted brain MRI scans as healthy or Alzheimer's. The network comprises 3D convolution, pooling, batch normalization, dense ReLU layers, and a sigmoid output. Using stochastic noise injection and five-fold cross-validation, the model achieved a test-set accuracy of 0.912 and an area under the ROC curve of 0.961, an improvement of approximately 0.027 over resizing alone. Sensitivity and specificity both exceeded 0.90. These results align with prior work reporting gains of up to 0.10 via synthetic augmentation. The findings demonstrate the effectiveness of simple augmentation for 3D MRI classification and motivate future exploration of advanced augmentation methods and architectures such as 3D U-Net and vision transformers.
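A minimal sketch of the general approach, a small 3D CNN with stochastic Gaussian noise injection during training, is shown below; the layer sizes, noise level, and input resolution are assumptions rather than the paper's configuration.

```python
# Hedged sketch: compact 3D CNN for binary MRI classification with noise injection.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, noise_std: float = 0.05):
        super().__init__()
        self.noise_std = noise_std
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),       # binary: healthy vs Alzheimer's
        )

    def forward(self, x):
        if self.training:                          # noise injection only during training
            x = x + self.noise_std * torch.randn_like(x)
        return self.classifier(self.features(x))

model = Small3DCNN()
dummy = torch.randn(2, 1, 64, 64, 64)              # batch of two 64^3 T1 volumes
print(model(dummy).shape)                           # torch.Size([2, 1])
```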

Advancing 3D Medical Image Segmentation: Unleashing the Potential of Planarian Neural Networks in Artificial Intelligence

Ziyuan Huang, Kevin Huggins, Srikar Bellur

arXiv preprint · May 7, 2025
Our study presents PNN-UNet as a method for constructing deep neural networks that replicate the planarian neural network (PNN) structure in the context of 3D medical image data. Planarians typically have a nervous system comprising a cerebrum and two nerve cords, where the cerebrum acts as a coordinator and the nerve cords serve slightly different purposes within the organism's neurological system. Accordingly, PNN-UNet comprises a Deep-UNet and a Wide-UNet as the nerve cords, with a densely connected autoencoder performing the role of the brain. This distinct architecture offers advantages over both monolithic (UNet) and modular (Ensemble-UNet) networks. Our results on a 3D MRI hippocampus dataset, with and without data augmentation, demonstrate that PNN-UNet outperforms the baseline UNet and several other UNet variants in image segmentation.
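A heavily simplified skeleton of the composition pattern described in the abstract (two UNet-style branches coordinated by a densely connected autoencoder) might look like the following; the sub-modules, the fusion step, and the smoke test are placeholders, not the published PNN-UNet.

```python
# Hedged skeleton: two "nerve cord" branches plus a "brain" coordinator module,
# fused by a hypothetical 1x1x1 convolution over their concatenated outputs.
import torch
import torch.nn as nn

class PNNStyleNet(nn.Module):
    def __init__(self, deep_unet: nn.Module, wide_unet: nn.Module,
                 autoencoder: nn.Module, n_classes: int = 2):
        super().__init__()
        self.deep_unet = deep_unet       # deeper, narrower branch
        self.wide_unet = wide_unet       # shallower, wider branch
        self.autoencoder = autoencoder   # densely connected "coordinator"
        self.fuse = nn.Conv3d(3 * n_classes, n_classes, kernel_size=1)

    def forward(self, x):
        d = self.deep_unet(x)
        w = self.wide_unet(x)
        a = self.autoencoder(x)
        return self.fuse(torch.cat([d, w, a], dim=1))

# Smoke test with trivial stand-in branches (1-channel input, 2 output classes):
toy = PNNStyleNet(nn.Conv3d(1, 2, 3, padding=1), nn.Conv3d(1, 2, 3, padding=1),
                  nn.Conv3d(1, 2, 3, padding=1))
print(toy(torch.randn(1, 1, 32, 32, 32)).shape)    # torch.Size([1, 2, 32, 32, 32])
```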

An imageless magnetic resonance framework for fast and cost-effective decision-making

Alba González-Cebrián, Pablo García-Cristóbal, Fernando Galve, Efe Ilıcak, Viktor Van Der Valk, Marius Staring, Andrew Webb, Joseba Alonso

arXiv preprint · May 7, 2025
Magnetic Resonance Imaging (MRI) is the gold standard in countless diagnostic procedures, yet hardware complexity, long scans, and cost preclude rapid screening and point-of-care use. We introduce Imageless Magnetic Resonance Diagnosis (IMRD), a framework that bypasses k-space sampling and image reconstruction by analyzing raw one-dimensional MR signals. We identify potentially impactful embodiments where IMRD requires only optimized pulse sequences for time-domain contrast, minimal low-field hardware, and pattern recognition algorithms to answer closed clinical queries and quantify lesion burden. As a proof of concept, we simulate multiple sclerosis lesions in silico within brain phantoms and deploy two extremely fast protocols (approximately 3 s), with and without spatial information. A 1D convolutional neural network achieves an AUC close to 0.95 for lesion detection and an R2 close to 0.99 for volume estimation. We also perform robustness tests under reduced signal-to-noise ratio, partial signal omission, and relaxation-time variability. By reframing MR signals as direct diagnostic metrics, IMRD paves the way for fast, low-cost MR screening and monitoring in resource-limited environments.
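A minimal sketch of the signal-level classification idea, a 1D CNN mapping a raw time-domain signal to a lesion-presence probability; the signal length, channel counts, and layer sizes are assumptions, and the paper's actual protocols and network are not reproduced here.

```python
# Hedged sketch: 1D CNN operating directly on a raw MR signal.
import torch
import torch.nn as nn

class Signal1DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),        # probability that lesions are present
        )

    def forward(self, signal):                     # signal: (batch, 1, n_samples)
        return self.net(signal)

model = Signal1DCNN()
print(model(torch.randn(4, 1, 2048)).shape)         # torch.Size([4, 1])
```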

Alterations in static and dynamic functional network connectivity in chronic low back pain: a resting-state network functional connectivity and machine learning study.

Liu H, Wan X

PubMed · May 7, 2025
Low back pain (LBP) is a prevalent pain condition whose persistence can lead to changes in the brain regions responsible for sensory, cognitive, attentional, and emotional processing. Previous neuroimaging studies have identified various structural and functional abnormalities in patients with LBP; however, how the static and dynamic large-scale functional network connectivity (FNC) of the brain is affected in these patients remains unclear. Forty-one patients with chronic low back pain (cLBP) and 42 healthy controls underwent resting-state functional MRI scanning. Independent component analysis was employed to extract the resting-state networks. We then calculated and compared static intra- and inter-network functional connectivity between groups. In addition, we investigated group differences in dynamic functional network connectivity and dynamic temporal metrics between cLBP patients and healthy controls. Finally, we attempted to distinguish cLBP patients from healthy controls using a support vector machine. Significant reductions in within-network functional connectivity were found in the default mode (DMN), dorsal attention (DAN), and executive control (ECN) networks in cLBP patients. Significant between-group differences were also found in static FNC and in each state of dynamic FNC. In addition, among the dynamic temporal metrics, fraction time and mean dwell time were significantly altered in cLBP patients. In conclusion, our study suggests the existence of static and dynamic large-scale brain network alterations in patients with cLBP. The findings provide insights into the neural mechanisms underlying the various brain function abnormalities and altered pain experiences in patients with cLBP.
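The final classification step, separating patients from controls with a support vector machine on FNC features, can be sketched as follows; the feature matrix here is random dummy data standing in for the ICA-derived connectivity values, and the cross-validation scheme is an assumption.

```python
# Hedged sketch: SVM classification of cLBP patients vs healthy controls from
# functional network connectivity (FNC) features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fnc_features = rng.normal(size=(83, 45))    # 83 subjects x 45 network-pair FNC values (dummy)
labels = np.r_[np.ones(41), np.zeros(42)]   # 41 cLBP patients, 42 healthy controls

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, fnc_features, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```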

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

PubMed · May 7, 2025
Intracerebral Hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. The evaluations were performed at the scan and slice levels. We ran different panels, one using two multi-center datasets for external validation and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying BHS when compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75 and 0.89 and F1 scores between 0.60 and 0.70. Furthermore, it exhibited robustness and generalizability across an external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while it performed less well on another dataset with more heterogeneous samples. The negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model showed potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance on a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-Supervised Learning; AUC = Area Under the receiver operating characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representations; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
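A minimal sketch of the downstream step described here, attaching a binary classification head to a pre-trained ResNet-50 encoder and fine-tuning it, assuming a hypothetical SSL checkpoint; the head architecture, optimizer settings, and data are placeholders, not the authors' implementation, and the SSL pre-training itself (e.g. SimCLR) is omitted.

```python
# Hedged sketch: supervised fine-tuning of an SSL-pre-trained encoder for
# black-hole-sign (present vs absent) classification.
import torch
import torch.nn as nn
from torchvision import models

# Encoder: ResNet-50 backbone whose weights would come from SSL pre-training.
encoder = models.resnet50(weights=None)
encoder.fc = nn.Identity()                          # expose 2048-dim features
# encoder.load_state_dict(torch.load("ssl_pretrained.pt"))  # hypothetical checkpoint

# Binary head for BHS present vs absent.
head = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 1))
model = nn.Sequential(encoder, head)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def fine_tune_step(images, labels):
    """One supervised fine-tuning step on labeled CT slices."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch: CT slices replicated to 3 channels to match the ResNet input.
print(fine_tune_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])))
```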

Artificial Intelligence based radiomic model in Craniopharyngiomas: A Systematic Review and Meta-Analysis on Diagnosis, Segmentation, and Classification.

Mohammadzadeh I, Hajikarimloo B, Niroomand B, Faizi N, Faizi N, Habibi MA, Mohammadzadeh S, Soltani R

PubMed · May 7, 2025
Craniopharyngiomas (CPs) are rare, benign brain tumors originating from remnants of Rathke's pouch, typically located in the sellar/parasellar region. Accurate differentiation is crucial due to varying prognoses, with adamantinomatous CPs (ACPs) having higher recurrence and worse outcomes. MRI struggles with overlapping features, complicating diagnosis. This study evaluates the role of artificial intelligence (AI) in diagnosing, segmenting, and classifying CPs, emphasizing its potential to improve clinical decision-making, particularly for radiologists and neurosurgeons. This systematic review and meta-analysis assesses AI applications for the diagnosis, segmentation, and classification of CPs. A comprehensive search was conducted across PubMed, Scopus, Embase, and Web of Science for studies employing AI models in patients with CP. Performance metrics such as sensitivity, specificity, accuracy, and area under the curve (AUC) were extracted and synthesized. Eleven studies involving 1916 patients were included in the analysis. The pooled results revealed a sensitivity of 0.740 (95% CI: 0.673-0.808), specificity of 0.813 (95% CI: 0.729-0.898), and accuracy of 0.746 (95% CI: 0.679-0.813). The AUC was 0.793 (95% CI: 0.719-0.866) for diagnosis and 0.899 (95% CI: 0.846-0.951) for classification. The sensitivity for segmentation was 0.755 (95% CI: 0.704-0.805). AI-based models show strong potential for enhancing diagnostic accuracy and clinical decision-making in CPs. These findings support the use of AI tools for more reliable preoperative assessment, leading to better treatment planning and patient outcomes. Further research with larger datasets is needed to optimize and validate AI applications in clinical practice.
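As an illustration of the kind of calculation behind pooled estimates like those above, here is a minimal sketch of random-effects pooling of per-study sensitivities on the logit scale (DerSimonian-Laird); the study counts are invented, and real diagnostic meta-analyses typically use more elaborate bivariate models.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of sensitivities.
import math

studies = [(45, 60), (80, 102), (33, 40)]           # (true positives, diseased cases) per study, made up

def pooled_logit_random_effects(counts):
    logits, variances = [], []
    for tp, n in counts:
        p = (tp + 0.5) / (n + 1)                    # continuity-corrected proportion
        logits.append(math.log(p / (1 - p)))
        variances.append(1 / (tp + 0.5) + 1 / (n - tp + 0.5))
    w = [1 / v for v in variances]
    fixed = sum(wi * li for wi, li in zip(w, logits)) / sum(w)
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logits))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(counts) - 1)) / c)    # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled_logit = sum(wi * li for wi, li in zip(w_re, logits)) / sum(w_re)
    return 1 / (1 + math.exp(-pooled_logit))        # back-transform to a proportion

print("pooled sensitivity:", round(pooled_logit_random_effects(studies), 3))
```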