Neuroanatomical-Based Machine Learning Prediction of Alzheimer's Disease Across Sex and Age

Jogeshwar, B. K., Lu, S., Nephew, B. C.

medRxiv preprint · May 7, 2025
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss. In 2024, in the US alone, it affected approximately 1 in 9 people aged 65 and older, equivalent to 6.9 million individuals. Early detection and accurate AD diagnosis are crucial for improving patient outcomes. Magnetic resonance imaging (MRI) has emerged as a valuable tool for examining brain structure and identifying potential AD biomarkers. This study performs predictive analyses by employing machine learning techniques to identify key brain regions associated with AD using numerical data derived from anatomical MRI scans, going beyond standard statistical methods. Using the Random Forest algorithm, we achieved 92.87% accuracy in distinguishing AD from Mild Cognitive Impairment and cognitively normal participants. Subgroup analyses across nine sex- and age-based cohorts (69-76 years, 77-84 years, and a unified 69-84 years group) revealed the hippocampus, amygdala, and entorhinal cortex as consistent top-ranked predictors. These regions showed distinct volume reductions across age and sex groups, reflecting age- and sex-related neuroanatomical patterns. For instance, younger males and females (aged 69-76) exhibited volume decreases in the right hippocampus, suggesting its importance in the early stages of AD. Older males (77-84) showed substantial volume decreases in the left inferior temporal cortex. Additionally, the left middle temporal cortex showed decreased volume in females, suggesting a potential female-specific influence, while the right entorhinal cortex may have a male-specific impact. These age-specific sex differences could inform clinical research and treatment strategies, aiding in identifying neuroanatomical markers and therapeutic targets for future clinical interventions.
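
The abstract describes a Random Forest classifier trained on regional volumes derived from anatomical MRI, with brain regions ranked by importance. As a rough illustration of that kind of pipeline (not the authors' code), the following sketch uses a synthetic feature table whose column names are only placeholders for real regional volumes:

```python
# Minimal sketch of a Random Forest on regional MRI volumes; the synthetic
# feature table below (column names included) is a stand-in for real
# regional volume measurements, not data from the study.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
regions = ["right_hippocampus", "left_hippocampus", "amygdala", "entorhinal", "mid_temporal"]
X = pd.DataFrame(rng.normal(size=(300, len(regions))), columns=regions)  # placeholder volumes
y = rng.integers(0, 2, size=300)                                         # 1 = AD, 0 = CN/MCI

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Feature importances give a region ranking analogous to the "top-ranked predictors".
for name, imp in sorted(zip(regions, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```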

Artificial Intelligence based radiomic model in Craniopharyngiomas: A Systematic Review and Meta-Analysis on Diagnosis, Segmentation, and Classification.

Mohammadzadeh I, Hajikarimloo B, Niroomand B, Faizi N, Faizi N, Habibi MA, Mohammadzadeh S, Soltani R

PubMed · May 7, 2025
Craniopharyngiomas (CPs) are rare, benign brain tumors originating from Rathke's pouch remnants, typically located in the sellar/parasellar region. Accurate differentiation between the adamantinomatous (ACP) and papillary subtypes is crucial due to their differing prognoses, with ACPs having higher recurrence rates and worse outcomes. MRI struggles with their overlapping features, complicating diagnosis. This study evaluates the role of Artificial Intelligence (AI) in diagnosing, segmenting, and classifying CPs, emphasizing its potential to improve clinical decision-making, particularly for radiologists and neurosurgeons. This systematic review and meta-analysis assesses AI applications for diagnosis, segmentation, and classification in patients with CP. A comprehensive search was conducted across PubMed, Scopus, Embase, and Web of Science for studies employing AI models in patients with CP. Performance metrics such as sensitivity, specificity, accuracy, and area under the curve (AUC) were extracted and synthesized. Eleven studies involving 1916 patients were included in the analysis. The pooled results revealed a sensitivity of 0.740 (95% CI: 0.673-0.808), specificity of 0.813 (95% CI: 0.729-0.898), and accuracy of 0.746 (95% CI: 0.679-0.813). The AUC for diagnosis was 0.793 (95% CI: 0.719-0.866), and for classification it was 0.899 (95% CI: 0.846-0.951). The pooled sensitivity for segmentation was 0.755 (95% CI: 0.704-0.805). AI-based models show strong potential in enhancing diagnostic accuracy and the clinical decision-making process for CPs. These findings support the use of AI tools for more reliable preoperative assessment, leading to better treatment planning and patient outcomes. Further research with larger datasets is needed to optimize and validate AI applications in clinical practice.
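
The pooled sensitivity, specificity, and AUC values reported above are typical outputs of a random-effects meta-analysis. As a hedged sketch of how such pooling can be computed (a generic DerSimonian-Laird estimator, not necessarily the method used in this review), with placeholder study-level numbers:

```python
# Illustrative DerSimonian-Laird random-effects pooling of per-study estimates.
# The study-level values below are placeholders, not numbers from the review.
import numpy as np

def pool_random_effects(estimates, std_errors):
    """Return the pooled estimate and its standard error under a random-effects model."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)                # Cochran's Q
    df = len(est) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = 1.0 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled_se

pooled, se = pool_random_effects([0.72, 0.78, 0.69], [0.04, 0.05, 0.06])
print(f"pooled = {pooled:.3f}, 95% CI = ({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")
```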

An imageless magnetic resonance framework for fast and cost-effective decision-making

Alba González-Cebrián, Pablo García-Cristóbal, Fernando Galve, Efe Ilıcak, Viktor Van Der Valk, Marius Staring, Andrew Webb, Joseba Alonso

arXiv preprint · May 7, 2025
Magnetic Resonance Imaging (MRI) is the gold standard in countless diagnostic procedures, yet hardware complexity, long scans, and cost preclude rapid screening and point-of-care use. We introduce Imageless Magnetic Resonance Diagnosis (IMRD), a framework that bypasses k-space sampling and image reconstruction by analyzing raw one-dimensional MR signals. We identify potentially impactful embodiments where IMRD requires only optimized pulse sequences for time-domain contrast, minimal low-field hardware, and pattern recognition algorithms to answer clinical closed queries and quantify lesion burden. As a proof of concept, we simulate multiple sclerosis lesions in silico within brain phantoms and deploy two extremely fast protocols (approximately 3 s), with and without spatial information. A 1D convolutional neural network achieves AUC close to 0.95 for lesion detection and R2 close to 0.99 for volume estimation. We also perform robustness tests under reduced signal-to-noise ratio, partial signal omission, and relaxation-time variability. By reframing MR signals as direct diagnostic metrics, IMRD paves the way for fast, low-cost MR screening and monitoring in resource-limited environments.
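
To make the "1D convolutional neural network on raw MR signals" idea concrete, here is a minimal PyTorch sketch of such a classifier; the layer sizes, kernel widths, and signal length are assumptions, not the architecture used in the paper:

```python
# Minimal sketch of a 1D CNN for binary classification of raw MR signal traces.
# Channel counts, kernel sizes, and the signal length are illustrative only.
import torch
import torch.nn as nn

class Signal1DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)   # single logit: lesion burden present / absent

    def forward(self, x):              # x: (batch, 1, signal_length)
        z = self.features(x).squeeze(-1)
        return self.head(z)

model = Signal1DCNN()
dummy = torch.randn(8, 1, 4096)        # batch of simulated 1D MR signals
print(model(dummy).shape)              # torch.Size([8, 1])
```

A regression head (e.g. a linear output trained with mean squared error) could replace the single logit for the volume-estimation task mentioned in the abstract.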

A deep learning model combining circulating tumor cells and radiological features in the multi-classification of mediastinal lesions in comparison with thoracic surgeons: a large-scale retrospective study.

Wang F, Bao M, Tao B, Yang F, Wang G, Zhu L

PubMed · May 7, 2025
CT images and circulating tumor cells (CTCs) are indispensable for diagnosing mediastinal lesions, providing radiological and intra-tumoral information. This study aimed to develop and validate a deep multimodal fusion network (DMFN) combining CTCs and CT images for the multi-classification of mediastinal lesions. In this retrospective diagnostic study, we enrolled 1074 patients with 1500 enhanced CT images and 1074 CTC results between Jan 1, 2020, and Dec 31, 2023. Patients were divided into a training cohort (n = 434), validation cohort (n = 288), and test cohort (n = 352). The DMFN and monomodal convolutional neural network (CNN) models were developed and validated using the CT images and CTC results, with paraffin-embedded pathology from surgical tissues as the diagnostic reference standard. The predictive abilities were compared with those of thoracic resident physicians, attending physicians, and chief physicians by the area under the receiver operating characteristic (ROC) curve, and diagnostic results were visualized in heatmaps. For binary classification, the predictive performance of the DMFN (AUC = 0.941, 95% CI 0.901-0.982) was better than that of the monomodal CNN model (AUC = 0.710, 95% CI 0.664-0.756). In addition, the DMFN model achieved better predictive performance than the thoracic chief physicians, attending physicians, and resident physicians (P = 0.054, 0.020, and 0.016, respectively). For multi-classification, the DMFN achieved encouraging predictive ability (AUC = 0.884, 95% CI 0.837-0.931), significantly outperforming the monomodal CNN (AUC = 0.722, 95% CI 0.705-0.739) and also outperforming the chief physicians (AUC = 0.787, 95% CI 0.714-0.862), attending physicians (AUC = 0.632, 95% CI 0.612-0.654), and resident physicians (AUC = 0.541, 95% CI 0.508-0.574). This study showed the feasibility and effectiveness of a CNN model combining CT images and CTC levels in predicting the diagnosis of mediastinal lesions. It could serve as a useful method to assist thoracic surgeons in improving diagnostic accuracy and has the potential to inform management decisions.
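
A common way to realize this kind of image-plus-tabular fusion is a two-branch network whose embeddings are concatenated before classification. The sketch below illustrates that general pattern only; layer sizes, input resolution, and the four-class output are placeholders, not the DMFN described in the study:

```python
# Hedged sketch of a two-branch fusion network: a small CNN encodes the CT image,
# an MLP encodes the CTC value(s), and the concatenated features feed a classifier.
import torch
import torch.nn as nn

class SimpleFusionNet(nn.Module):
    def __init__(self, num_classes=4, num_tabular=1):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                        # -> 32-dim image embedding
        )
        self.tabular_branch = nn.Sequential(
            nn.Linear(num_tabular, 16), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 + 16, num_classes)

    def forward(self, ct_image, ctc_values):
        img = self.image_branch(ct_image)        # (batch, 32)
        tab = self.tabular_branch(ctc_values)    # (batch, 16)
        return self.classifier(torch.cat([img, tab], dim=1))

model = SimpleFusionNet()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1))
print(logits.shape)                              # torch.Size([2, 4])
```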

3D Brain MRI Classification for Alzheimer Diagnosis Using CNN with Data Augmentation

Thien Nhan Vo, Bac Nam Ho, Thanh Xuan Truong

arXiv preprint · May 7, 2025
A three-dimensional convolutional neural network was developed to classify T1-weighted brain MRI scans as healthy or Alzheimer's disease. The network comprises 3D convolution, pooling, batch normalization, dense ReLU layers, and a sigmoid output. Using stochastic noise injection and five-fold cross-validation, the model achieved a test set accuracy of 0.912 and an area under the ROC curve of 0.961, an improvement of approximately 0.027 over resizing alone. Sensitivity and specificity both exceeded 0.90. These results align with prior work reporting gains of up to 0.10 via synthetic augmentation. The findings demonstrate the effectiveness of simple augmentation for 3D MRI classification and motivate future exploration of advanced augmentation methods and architectures such as 3D U-Net and vision transformers.
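
Stochastic noise injection for volumetric data can be as simple as adding zero-mean Gaussian noise to each training volume. The following sketch pairs such an augmentation with a toy 3D CNN; the shapes, noise level, and layer counts are illustrative assumptions, not the paper's network:

```python
# Minimal sketch of noise-injection augmentation plus a small 3D CNN head.
import torch
import torch.nn as nn

def add_gaussian_noise(volume, sigma=0.05):
    """Return a copy of the volume with zero-mean Gaussian noise injected."""
    return volume + sigma * torch.randn_like(volume)

class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.BatchNorm3d(8), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, 1),            # sigmoid applied inside BCEWithLogitsLoss
        )

    def forward(self, x):                # x: (batch, 1, D, H, W)
        return self.net(x)

volume = torch.randn(2, 1, 64, 64, 64)   # placeholder T1-weighted volumes
augmented = add_gaussian_noise(volume)
print(Tiny3DCNN()(augmented).shape)      # torch.Size([2, 1])
```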

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

PubMed · May 7, 2025
Intracerebral Hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. The evaluations were performed at the scan and slice levels. We ran different panels, one using two multi-center datasets for external validation and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying BHS when compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75-0.89 and F1 scores between 0.60-0.70. Furthermore, it exhibited robustness and generalizability across an external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while it performed less well on another dataset with more heterogeneous samples. These negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model showed potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance on a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-Supervised Learning; AUC = Area Under the receiver operating characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representations; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
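
The fine-tuning stage described (a binary classification head added to a self-supervised pre-trained ResNet-50 encoder) might look roughly like the sketch below; the checkpoint path, head design, and training details are hypothetical, and the SSL pre-training itself (e.g. SimCLR-style contrastive learning) is not reproduced:

```python
# Hedged sketch: attach a binary classification head to a ResNet-50 encoder and
# run one fine-tuning step on dummy CT slices. Checkpoint path and head layout
# are placeholders, not the study's configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet50

encoder = resnet50(weights=None)                     # encoder assumed pre-trained elsewhere
# encoder.load_state_dict(torch.load("ssl_pretrained_headct_encoder.pt"))  # hypothetical path
encoder.fc = nn.Sequential(                          # replace the ImageNet head
    nn.Linear(encoder.fc.in_features, 256), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(256, 1),                               # single logit: BHS present / absent
)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

# One illustrative fine-tuning step on dummy data.
slices = torch.randn(4, 3, 224, 224)
labels = torch.tensor([[1.0], [0.0], [0.0], [1.0]])
loss = criterion(encoder(slices), labels)
loss.backward()
optimizer.step()
print(float(loss))
```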

Interpretable MRI-Based Deep Learning for Alzheimer's Risk and Progression

Lu, B., Chen, Y.-R., Li, R.-X., Zhang, M.-K., Yan, S.-Z., Chen, G.-Q., Castellanos, F. X., Thompson, P. M., Lu, J., Han, Y., Yan, C.-G.

medRxiv preprint · May 7, 2025
Timely intervention for Alzheimer's disease (AD) requires early detection. The development of immunotherapies targeting amyloid-beta and tau underscores the need for accessible, time-efficient biomarkers for early diagnosis. Here, we directly applied our previously developed MRI-based deep learning model for AD to the large Chinese SILCODE cohort (722 participants, 1,105 brain MRI scans). The model, initially trained on North American data, demonstrated robust cross-ethnic generalization without any retraining or fine-tuning, achieving an AUC of 91.3% in AD classification with a sensitivity of 95.2%. It successfully identified 86.7% of individuals at risk of AD progression more than 5 years in advance. Individuals identified as high-risk exhibited significantly shorter median progression times. By integrating an interpretable deep learning brain risk map approach, we identified AD brain subtypes, including an MCI subtype associated with rapid cognitive decline. The model's risk scores showed significant correlations with cognitive measures and plasma biomarkers, such as tau proteins and neurofilament light chain (NfL). These findings underscore the exceptional generalizability and clinical utility of MRI-based deep learning models, especially in large and diverse populations, offering valuable tools for early therapeutic intervention. The model has been made open-source and deployed to a free online website for AD risk prediction, to assist in early screening and intervention.

Alterations in static and dynamic functional network connectivity in chronic low back pain: a resting-state network functional connectivity and machine learning study.

Liu H, Wan X

PubMed · May 7, 2025
Low back pain (LBP) is a prevalent pain condition whose persistence can lead to changes in the brain regions responsible for sensory, cognitive, attentional, and emotional processing. Previous neuroimaging studies have identified various structural and functional abnormalities in patients with LBP; however, how the static and dynamic large-scale functional network connectivity (FNC) of the brain is affected in these patients remains unclear. Forty-one patients with chronic low back pain (cLBP) and 42 healthy controls underwent resting-state functional MRI scanning. Independent component analysis was employed to extract the resting-state networks. Subsequently, we calculated and compared static intra- and inter-network functional connectivity between groups. In addition, we investigated group differences in dynamic functional network connectivity and dynamic temporal metrics between cLBP patients and healthy controls. Finally, we attempted to distinguish cLBP patients from healthy controls using a support vector machine. The results showed significant reductions in intra-network functional connectivity within the DMN, DAN, and ECN in cLBP patients. Significant between-group differences were also found in static FNC and in each state of dynamic FNC. In addition, among the dynamic temporal metrics, fraction time and mean dwell time were significantly altered in cLBP patients. In conclusion, our study suggests the existence of static and dynamic large-scale brain network alterations in patients with cLBP. The findings provide insights into the neural mechanisms underlying the various brain function abnormalities and altered pain experiences in patients with cLBP.
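
The final classification step, separating cLBP patients from controls with a support vector machine on FNC features, can be sketched as follows; the feature matrix is random placeholder data and the linear kernel is an assumption, not necessarily the study's configuration:

```python
# Minimal sketch of the classification step only: an SVM on FNC features with
# cross-validated AUC. Features below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 45))          # 83 subjects x 45 FNC features (placeholder)
y = np.array([1] * 41 + [0] * 42)      # 41 cLBP patients, 42 healthy controls

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())
```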

Deep learning approaches for classification tasks in medical X-ray, MRI, and ultrasound images: a scoping review.

Laçi H, Sevrani K, Iqbal S

PubMed · May 7, 2025
Medical images occupy the largest part of existing medical information, and dealing with them is challenging not only in terms of management but also in terms of interpretation and analysis. Hence, analyzing, understanding, and classifying them becomes a very expensive and time-consuming task, especially if performed manually. Deep learning is considered a good solution for image classification, segmentation, and transfer learning tasks, since it offers a large number of algorithms to solve such complex problems. PRISMA-ScR guidelines were followed to conduct this scoping review, with the aim of exploring how deep learning is being used to classify a broad spectrum of diseases diagnosed using X-ray, MRI, or ultrasound image modalities. The findings contribute to the existing research by outlining the characteristics of the adopted datasets and the preprocessing or augmentation techniques applied to them. The authors summarized all relevant studies based on the deep learning models used and the accuracy achieved for classification. Whenever possible, they included details about the hardware and software configurations, as well as the architectural components of the models employed. Moreover, the models that achieved the highest accuracy in disease classification were highlighted, along with their strengths. The authors also discussed the limitations of the current approaches and proposed future directions for medical image classification.

STG: Spatiotemporal Graph Neural Network with Fusion and Spatiotemporal Decoupling Learning for Prognostic Prediction of Colorectal Cancer Liver Metastasis

Yiran Zhu, Wei Yang, Yan su, Zesheng Li, Chengchang Pan, Honggang Qi

arXiv preprint · May 6, 2025
We propose a multimodal spatiotemporal graph neural network (STG) framework to predict colorectal cancer liver metastasis (CRLM) progression. Current clinical models do not effectively integrate the tumor's spatial heterogeneity, dynamic evolution, and complex multimodal data relationships, limiting their predictive accuracy. Our STG framework combines preoperative CT imaging and clinical data into a heterogeneous graph structure, enabling joint modeling of tumor distribution and temporal evolution through spatial topology and cross-modal edges. The framework uses GraphSAGE to aggregate spatiotemporal neighborhood information and leverages supervised and contrastive learning strategies to enhance the model's ability to capture temporal features and improve robustness. A lightweight version of the model reduces parameter count by 78.55%, maintaining near-state-of-the-art performance. The model jointly optimizes recurrence risk regression and survival analysis tasks, with contrastive loss improving feature representational discriminability and cross-modal consistency. Experimental results on the MSKCC CRLM dataset show a time-adjacent accuracy of 85% and a mean absolute error of 1.1005, significantly outperforming existing methods. The innovative heterogeneous graph construction and spatiotemporal decoupling mechanism effectively uncover the associations between dynamic tumor microenvironment changes and prognosis, providing reliable quantitative support for personalized treatment decisions.
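
GraphSAGE aggregation, the core operator named in the abstract, is available in PyTorch Geometric. The sketch below shows a generic two-layer GraphSAGE encoder with a simple regression head on a toy graph; the heterogeneous graph construction, cross-modal edges, contrastive loss, and survival-analysis head of the STG framework are not reproduced, and the node-feature size and edge list are placeholders:

```python
# Hedged sketch of a two-layer GraphSAGE encoder with a per-node regression head.
# Requires torch_geometric; the toy graph below is illustrative only.
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv

class SAGERegressor(nn.Module):
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)          # e.g. a recurrence-risk score per node

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h)

x = torch.randn(6, 32)                            # 6 nodes (e.g. lesions / clinical entries)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],    # toy edge list: row 0 = source nodes,
                           [1, 2, 3, 4, 5, 0]])   #                row 1 = target nodes
print(SAGERegressor()(x, edge_index).shape)       # torch.Size([6, 1])
```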