Page 31 of 2352341 results

Regional attention-enhanced vision transformer for accurate Alzheimer's disease classification using sMRI data.

Jomeiri A, Habibizad Navin A, Shamsi M

pubmed · Sep 12 2025
Alzheimer's disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely intervention. Structural MRI (sMRI) is a key imaging modality for detecting AD-related brain atrophy, yet traditional deep learning models like convolutional neural networks (CNNs) struggle to capture complex spatial dependencies critical for AD diagnosis. This study introduces the Regional Attention-Enhanced Vision Transformer (RAE-ViT), a novel framework designed for AD classification using sMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. RAE-ViT leverages regional attention mechanisms to prioritize disease-critical brain regions, such as the hippocampus and ventricles, while integrating hierarchical self-attention and multi-scale feature extraction to model both localized and global structural patterns. Evaluated on 1152 sMRI scans (255 AD, 521 MCI, 376 NC), RAE-ViT achieved state-of-the-art performance with 94.2 % accuracy, 91.8 % sensitivity, 95.7 % specificity, and an AUC of 0.96, surpassing standard ViTs (89.5 %) and CNN-based models (e.g., ResNet-50: 87.8 %). The model's interpretable attention maps align closely with clinical biomarkers (Dice: 0.89 hippocampus, 0.85 ventricles), enhancing diagnostic reliability. Robustness to scanner variability (92.5 % accuracy on 1.5T scans) and noise (92.5 % accuracy under 10 % Gaussian noise) further supports its clinical applicability. A preliminary multimodal extension integrating sMRI and PET data improved accuracy to 95.8 %. Future work will focus on optimizing RAE-ViT for edge devices, incorporating multimodal data (e.g., PET, fMRI, genetic), and exploring self-supervised and federated learning to enhance generalizability and privacy. RAE-ViT represents a significant advancement in AI-driven AD diagnosis, offering potential for early detection and improved patient outcomes.
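The core idea of regional attention — up-weighting tokens from disease-critical regions such as the hippocampus before the softmax — can be illustrated with a minimal numpy sketch. This is an illustrative additive-bias formulation, not RAE-ViT's actual implementation; all shapes and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def region_biased_attention(q, k, v, region_bias):
    """Scaled dot-product attention with an additive per-token bias.

    region_bias: shape (num_key_tokens,) -- larger values up-weight tokens
    falling inside prioritized regions (e.g., hippocampal patches).
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) + region_bias  # bias added before softmax
    weights = softmax(logits, axis=-1)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query tokens, embedding dim 8
k = rng.normal(size=(6, 8))   # 6 key tokens
v = rng.normal(size=(6, 8))
bias = np.array([0., 0., 3., 3., 0., 0.])  # tokens 2-3 lie in a prioritized region
out, w = region_biased_attention(q, k, v, bias)
```

Because the bias enters before normalization, every query row shifts attention mass toward the prioritized tokens while the weights still sum to one.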

SSL-AD: Spatiotemporal Self-Supervised Learning for Generalizability and Adaptability Across Alzheimer's Prediction Tasks and Datasets

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arxiv preprint · Sep 12 2025
Alzheimer's disease is a progressive, neurodegenerative disorder that causes memory loss and cognitive decline. While there has been extensive research in applying deep learning models to Alzheimer's prediction tasks, these models remain limited by lack of available labeled data, poor generalization across datasets, and inflexibility to varying numbers of input scans and time intervals between scans. In this study, we adapt three state-of-the-art temporal self-supervised learning (SSL) approaches for 3D brain MRI analysis, and add novel extensions designed to handle variable-length inputs and learn robust spatial features. We aggregate four publicly available datasets comprising 3,161 patients for pre-training, and show the performance of our model across multiple Alzheimer's prediction tasks including diagnosis classification, conversion detection, and future conversion prediction. Importantly, our SSL model implemented with temporal order prediction and contrastive learning outperforms supervised learning on six out of seven downstream tasks. It demonstrates adaptability and generalizability across tasks and number of input images with varying time intervals, highlighting its capacity for robust performance across clinical applications. We release our code and model publicly at https://github.com/emilykaczmarek/SSL-AD.
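Temporal order prediction, one of the SSL objectives mentioned above, needs no manual labels: the pretext task manufactures them from the scan sequence itself. A minimal sketch of the pair-construction step (purely illustrative; the function name and 50/50 shuffling scheme are assumptions, not the paper's recipe):

```python
import numpy as np

def make_order_prediction_pair(scan_sequence, rng):
    """Build one (sequence, label) pretext-training pair.

    With probability 0.5 the scans keep their acquisition order (label 1);
    otherwise they are shuffled (label 0). An encoder trained to recover
    the label learns temporal structure without any annotation.
    """
    seq = list(scan_sequence)
    if rng.random() < 0.5:
        return seq, 1              # correct temporal order
    shuffled = seq[:]
    while shuffled == seq:         # ensure the order actually changes
        rng.shuffle(shuffled)
    return shuffled, 0

rng = np.random.default_rng(42)
pairs = [make_order_prediction_pair(["t0", "t1", "t2", "t3"], rng) for _ in range(100)]
```

Variable-length inputs fit naturally here, since the label depends only on relative order, not on how many scans the sequence contains.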

Multi-pathology Chest X-ray Classification with Rejection Mechanisms

Yehudit Aperstein, Amit Tzahar, Alon Gottlib, Tal Verber, Ravit Shagan Damti, Alexander Apartsin

arxiv preprint · Sep 12 2025
Overconfidence in deep learning models poses a significant risk in high-stakes medical imaging tasks, particularly in multi-label classification of chest X-rays, where multiple co-occurring pathologies must be detected simultaneously. This study introduces an uncertainty-aware framework for chest X-ray diagnosis based on a DenseNet-121 backbone, enhanced with two selective prediction mechanisms: entropy-based rejection and confidence interval-based rejection. Both methods enable the model to abstain from uncertain predictions, improving reliability by deferring ambiguous cases to clinical experts. A quantile-based calibration procedure is employed to tune rejection thresholds using either global or class-specific strategies. Experiments conducted on three large public datasets (PadChest, NIH ChestX-ray14, and MIMIC-CXR) demonstrate that selective rejection improves the trade-off between diagnostic accuracy and coverage, with entropy-based rejection yielding the highest average AUC across all pathologies. These results support the integration of selective prediction into AI-assisted diagnostic workflows, providing a practical step toward safer, uncertainty-aware deployment of deep learning in clinical settings.
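Entropy-based rejection with quantile calibration can be sketched for the multi-label case, where each pathology gets its own sigmoid probability and per-label Bernoulli entropy. This is a generic sketch under assumed shapes (14 labels, uniform toy probabilities), not the paper's exact procedure:

```python
import numpy as np

def bernoulli_entropy(p, eps=1e-12):
    """Entropy of a per-label sigmoid probability (multi-label setting)."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def calibrate_threshold(calib_probs, coverage=0.8):
    """Set the entropy threshold at a quantile of calibration entropies,
    so that roughly `coverage` of label-predictions are accepted."""
    return np.quantile(bernoulli_entropy(calib_probs), coverage)

def predict_or_reject(probs, threshold):
    """Return (binary predictions, accepted mask); abstain where entropy is high."""
    ent = bernoulli_entropy(probs)
    return (probs >= 0.5).astype(int), ent <= threshold

rng = np.random.default_rng(0)
calib = rng.uniform(0, 1, size=(500, 14))   # 14 pathologies, sigmoid outputs
thr = calibrate_threshold(calib, coverage=0.8)
preds, accepted = predict_or_reject(rng.uniform(0, 1, size=(100, 14)), thr)
```

Rejected label-predictions would be deferred to a clinician; a class-specific variant would simply compute one quantile threshold per pathology column.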

GLAM: Geometry-Guided Local Alignment for Multi-View VLP in Mammography

Yuexi Du, Lihui Chen, Nicha C. Dvornek

arxiv preprint · Sep 12 2025
Mammography screening is an essential tool for early detection of breast cancer. The speed and accuracy of mammography interpretation have the potential to be improved with deep learning methods. However, the development of a foundation visual language model (VLM) is hindered by limited data and domain differences between natural and medical images. Existing mammography VLMs, adapted from natural images, often ignore domain-specific characteristics, such as multi-view relationships in mammography. Unlike radiologists, who analyze both views together to exploit ipsilateral correspondence, current methods either treat the views as independent images or fail to properly model multi-view correspondence, losing critical geometric context and producing suboptimal predictions. We propose GLAM: Global and Local Alignment for Multi-view mammography for VLM pretraining using geometry guidance. By leveraging prior knowledge about the multi-view imaging process of mammograms, our model learns local cross-view alignments and fine-grained local features through joint global and local, visual-visual, and visual-language contrastive learning. Pretrained on EMBED [14], one of the largest open mammography datasets, our model outperforms baselines across multiple datasets under different settings.

Three-Dimensional Radiomics and Machine Learning for Predicting Postoperative Outcomes in Laminoplasty for Cervical Spondylotic Myelopathy: A Clinical-Radiomics Model.

Zheng B, Zhu Z, Ma K, Liang Y, Liu H

pubmed · Sep 12 2025
This study explores a method based on three-dimensional cervical spinal cord reconstruction, radiomics feature extraction, and machine learning to build a postoperative prognosis prediction model for patients with cervical spondylotic myelopathy (CSM), and evaluates the predictive performance of different cervical spinal cord segmentation strategies and machine learning algorithms. A retrospective analysis was conducted on 126 CSM patients who underwent posterior single-door laminoplasty from January 2017 to December 2022. Three cervical spinal cord segmentation strategies (narrowest segment, surgical segment, and entire cervical cord C1-C7) were applied to preoperative MRI images for radiomics feature extraction. Good clinical prognosis was defined as a postoperative JOA recovery rate ≥50%. By comparing the performance of 8 machine learning algorithms, the optimal cervical spinal cord segmentation strategy and classifier were selected. Clinical features (smoking history, diabetes, preoperative JOA score, and cSVA) were then combined with radiomics features to construct a clinical-radiomics prediction model. Among the three segmentation strategies, the SVM model based on the narrowest segment performed best (AUC = 0.885). Among clinical features, smoking history, diabetes, preoperative JOA score, and cSVA were important indicators for prognosis prediction. When clinical features were combined with radiomics features, the fusion model achieved excellent performance on the test set (accuracy = 0.895, AUC = 0.967), significantly outperforming either the clinical or the radiomics model alone. This study validates the feasibility and superiority of three-dimensional radiomics combined with machine learning in predicting postoperative prognosis for CSM. Combining radiomics features from the narrowest segment with clinical features yields a highly accurate prognosis prediction model, providing new insights for clinical assessment and individualized treatment decisions. Future studies should validate the stability and generalizability of this model in multi-center, large-sample cohorts.

Building a General SimCLR Self-Supervised Foundation Model Across Neurological Diseases to Advance 3D Brain MRI Diagnoses

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arxiv preprint · Sep 12 2025
3D structural Magnetic Resonance Imaging (MRI) brain scans are commonly acquired in clinical settings to monitor a wide range of neurological conditions, including neurodegenerative disorders and stroke. While deep learning models have shown promising results analyzing 3D MRI across a number of brain imaging tasks, most are highly tailored for specific tasks with limited labeled data, and are not able to generalize across tasks and/or populations. The development of self-supervised learning (SSL) has enabled the creation of large medical foundation models that leverage diverse, unlabeled datasets ranging from healthy to diseased data, showing significant success in 2D medical imaging applications. However, even the very few foundation models for 3D brain MRI that have been developed remain limited in resolution, scope, or accessibility. In this work, we present a general, high-resolution SimCLR-based SSL foundation model for 3D brain structural MRI, pre-trained on 18,759 patients (44,958 scans) from 11 publicly available datasets spanning diverse neurological diseases. We compare our model to Masked Autoencoders (MAE), as well as two supervised baselines, on four diverse downstream prediction tasks in both in-distribution and out-of-distribution settings. Our fine-tuned SimCLR model outperforms all other models across all tasks. Notably, our model still achieves superior performance when fine-tuned using only 20% of labeled training samples for predicting Alzheimer's disease. We use publicly available code and data, and release our trained model at https://github.com/emilykaczmarek/3D-Neuro-SimCLR, contributing a broadly applicable and accessible foundation model for clinical brain MRI analysis.
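The SimCLR objective underlying this foundation model is the NT-Xent (normalized temperature-scaled cross-entropy) loss: two augmented views of the same scan attract, while all other scans in the batch repel. A minimal numpy sketch of the loss itself (batch size, dimensions, and temperature are illustrative assumptions):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent, the SimCLR contrastive objective.

    z1, z2: (n, d) embeddings of two augmented views of the same n scans.
    Each embedding's positive is its counterpart view; every other
    embedding in the 2n-sized batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(0)
views_a = rng.normal(size=(8, 16))
loss_identical = nt_xent_loss(views_a, views_a.copy())        # aligned views
loss_random = nt_xent_loss(views_a, rng.normal(size=(8, 16))) # unrelated views
```

As expected, perfectly aligned view pairs yield a much lower loss than unrelated pairs, which is the gradient signal that shapes the pre-trained representation.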

A machine learning model combining ultrasound features and serological markers predicts gallbladder polyp malignancy: A retrospective cohort study.

Yang Y, Tu H, Lin Y, Wei J

pubmed · Sep 12 2025
Differentiating benign from malignant gallbladder polyps (GBPs) is critical for clinical decisions. Pathological biopsy, the gold standard, requires cholecystectomy, underscoring the need for noninvasive alternatives. This retrospective study included 202 patients (50 malignant, 152 benign) who underwent cholecystectomy (2018-2024) at Fujian Provincial Hospital. Ultrasound features (polyp diameter, stalk presence), serological markers (neutrophil-to-lymphocyte ratio [NLR], CA19-9), and demographics (age, sex, body mass index, waist-to-hip ratio, comorbidities, alcohol history) were analyzed. Patients were split into training (70%) and validation (30%) sets. Ten machine learning (ML) algorithms were trained; the model with the highest area under the receiver operating characteristic curve (AUC) was selected. Shapley additive explanations (SHAP) identified key predictors. Models were categorized as clinical (ultrasound + age), hematological (NLR + CA19-9), and combined (all 5 variables). ROC, precision-recall, calibration, and decision curves were generated. A web-based calculator was developed. The Extra Trees model achieved the highest AUC (0.97 in training, 0.93 in validation). SHAP analysis highlighted polyp diameter, sessile morphology, NLR, age, and CA19-9 as the top predictors. The combined model outperformed the clinical (AUC 0.89) and hematological (AUC 0.68) models, with balanced sensitivity (66%-54%), specificity (94%-93%), and accuracy (87%-83%). This ML model integrating ultrasound and serological markers accurately predicts GBP malignancy. The web-based calculator facilitates clinical adoption, potentially reducing unnecessary surgeries.
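The combined-model pipeline — five mixed clinical/serological predictors feeding an Extra Trees classifier scored by AUC — can be sketched with scikit-learn. The data below are entirely fabricated (a toy label model where risk grows with diameter, NLR, and sessile morphology); only the five variable names come from the abstract.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the 5 predictors highlighted by SHAP
# (polyp diameter, sessile morphology, NLR, age, CA19-9); values fabricated.
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(10, 4, n),          # polyp diameter (mm)
    rng.integers(0, 2, n),         # sessile morphology (0/1)
    rng.normal(2.5, 1.0, n),       # NLR
    rng.normal(55, 12, n),         # age (years)
    rng.lognormal(2.5, 0.8, n),    # CA19-9 (U/mL)
])
# Toy label model: malignancy risk grows with diameter, NLR, sessile shape
logit = 0.4 * (X[:, 0] - 10) + 0.8 * (X[:, 2] - 2.5) + 1.0 * X[:, 1] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

A web calculator would simply wrap `clf.predict_proba` behind a form that collects the same five inputs.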

Risk prediction for lung cancer screening: a systematic review and meta-regression

Rezaeianzadeh, R., Leung, C., Kim, S. J., Choy, K., Johnson, K. M., Kirby, M., Lam, S., Smith, B. M., Sadatsafavi, M.

medrxiv preprint · Sep 12 2025
Background: Lung cancer (LC) is the leading cause of cancer mortality and is often diagnosed at advanced stages. Screening reduces mortality in high-risk individuals, but its efficiency can improve with pre- and post-screening risk stratification. With recent LC screening guideline updates in Europe and the US, numerous novel risk prediction models have emerged since the last systematic review of such models. We reviewed risk-based models for selecting candidates for CT screening and for post-CT stratification. Methods: We systematically reviewed Embase and MEDLINE (2020-2024), identifying studies proposing new LC risk models for screening selection or nodule classification. Data extraction included study design, population, model type, risk horizon, and internal/external validation metrics. In addition, we performed an exploratory meta-regression of AUCs to assess whether sample size, model class, validation type, and biomarker use were associated with discrimination. Results: Of 1987 records, 68 were included: 41 models were for screening selection (20 without biomarkers, 21 with) and 27 for nodule classification. Regression-based models predominated, though machine learning and deep learning approaches were increasingly common. Discrimination ranged from moderate (AUC ≈ 0.70) to excellent (AUC > 0.90), with biomarker- and imaging-enhanced models often outperforming traditional ones. Model calibration was inconsistently reported, and fewer than half of the models underwent external validation. Meta-regression suggested that, among pre-screening models, larger sample sizes were modestly associated with higher AUC. Conclusion: 75 models had been identified prior to 2020; we found 68 more since, reflecting growing interest in personalized LC screening. While many demonstrate strong discrimination, inconsistent calibration and limited external validation hinder clinical adoption. Future efforts should prioritize improving existing models rather than developing new ones, along with transparent evaluation, cost-effectiveness analysis, and real-world implementation.
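An exploratory meta-regression of reported AUCs on study-level covariates can be sketched as ordinary least squares on log sample size. The eight AUC/sample-size pairs below are fabricated for illustration; the abstract reports only the direction of the association, not these values.

```python
import numpy as np

def meta_regression(aucs, sample_sizes, covariates=None):
    """OLS of reported AUCs on log10 sample size (plus optional covariates),
    mirroring an exploratory meta-regression across published models."""
    x = np.log10(np.asarray(sample_sizes, dtype=float))
    cols = [np.ones_like(x), x]
    if covariates is not None:
        cols.extend(np.asarray(c, dtype=float) for c in covariates)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(aucs, dtype=float), rcond=None)
    return beta  # beta[1]: change in AUC per tenfold increase in sample size

# Toy inputs: 8 hypothetical pre-screening models (values fabricated)
aucs = [0.70, 0.72, 0.74, 0.76, 0.78, 0.81, 0.84, 0.88]
sizes = [500, 1000, 2000, 5000, 10000, 30000, 80000, 200000]
beta = meta_regression(aucs, sizes)
```

A fuller analysis would weight studies by the inverse variance of each AUC estimate, but the unweighted fit suffices to show the modeling step.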

The comparison of deep learning and radiomics in the prediction of polymyositis.

Wu G, Li B, Li T, Liu L

pubmed · Sep 12 2025
T2-weighted magnetic resonance imaging has become a commonly used noninvasive examination method for the diagnosis of polymyositis (PM). Data comparing deep learning and radiomics for the diagnosis of PM are still lacking. This study investigates the feasibility of a 3D convolutional neural network (CNN) for the prediction of PM, in comparison with radiomics. A total of 120 patients (60 with PM) came from center A, 30 (15 with PM) from center B, and 46 (23 with PM) from center C. Data from center A were used for training, data from center B for validation, and data from center C as the external test set. Magnetic resonance radiomics features of the rectus femoris were obtained for all cases. Maximum relevance minimum redundancy and least absolute shrinkage and selection operator (LASSO) regression were used before establishing a radiomics score model. A 3D CNN classification model was trained with MONAI on 150 labeled cases. A 3D U-Net segmentation model was also trained with MONAI on 196 original scans and their rectus femoris segmentations. Accuracy on the external test data was compared between the two methods using the paired chi-square test. PM and non-PM cases did not differ in age or gender (P > .05). The 3D CNN classification model achieved an accuracy of 97% on the validation data. The sensitivity, specificity, accuracy, and positive predictive value of the 3D CNN classification model on the external test data were 96% (22/23), 91% (21/23), 93% (43/46), and 92% (22/24), respectively. The radiomics score achieved an accuracy of 90% on the validation data. The sensitivity, specificity, accuracy, and positive predictive value of the radiomics score on the external test data were 70% (16/23), 65% (15/23), 67% (31/46), and 67% (16/24), respectively, significantly lower than the CNN model (P = .035). The 3D segmentation model for the rectus femoris on T2-weighted images achieved a Dice similarity coefficient of 0.71. The 3D CNN model is not inferior to the radiomics score in the prediction of PM. The combination of deep learning and radiomics is recommended for the evaluation of PM in future clinical practice.
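The paired comparison of two classifiers on the same external cases — the "paired chi-square test" above — is conventionally McNemar's test, which depends only on the discordant pairs. A minimal numpy sketch with fabricated paired outcomes (the per-case agreement pattern below is invented; only the 43/46 and 31/46 totals come from the abstract):

```python
import numpy as np

def mcnemar_statistic(correct_a, correct_b):
    """Continuity-corrected McNemar chi-square for two classifiers
    evaluated on the same cases. Only discordant pairs (one model
    right, the other wrong) contribute to the statistic."""
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    b = np.sum(correct_a & ~correct_b)   # A right, B wrong
    c = np.sum(~correct_a & correct_b)   # A wrong, B right
    if b + c == 0:
        return 0.0
    return float((abs(b - c) - 1) ** 2 / (b + c))

# Fabricated paired outcomes on 46 external cases:
# CNN correct on 43, radiomics score correct on 31 (agreement pattern invented)
cnn_correct = np.array([True] * 43 + [False] * 3)
rad_correct = np.array([True] * 31 + [False] * 15)
stat = mcnemar_statistic(cnn_correct, rad_correct)
significant = stat > 3.841  # chi-square(1) critical value at alpha = 0.05
```

The statistic would be compared against the chi-square(1) distribution to obtain a P value like the .035 reported above; the exact value depends on the true discordance pattern, which the abstract does not give.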

Machine-learning model for differentiating round pneumonia and primary lung cancer using CT-based radiomic analysis.

Genç H, Yildirim M

pubmed · Sep 12 2025
Round pneumonia is a benign lung condition that can radiologically mimic primary lung cancer, making diagnosis challenging. Accurately distinguishing between these diseases is critical to avoid unnecessary invasive procedures. This study aims to distinguish round pneumonia from primary lung cancer by developing machine-learning models based on radiomic features extracted from computed tomography (CT) images. This retrospective observational study included 24 patients diagnosed with round pneumonia and 24 with histopathologically confirmed primary lung cancer. The lesions were manually segmented on the CT images by 2 radiologists. In total, 107 radiomic features were extracted from each case. Feature selection was performed using an information-gain algorithm to identify the 5 most relevant features. Seven machine-learning classifiers (Naïve Bayes, support vector machine, Random Forest, Decision Tree, Neural Network, Logistic Regression, and k-NN) were trained and validated. The model performance was evaluated using AUC, classification accuracy, sensitivity, and specificity. The Naïve Bayes, support vector machine, and Random Forest models achieved perfect classification performance on the entire dataset (AUC = 1.000). After feature selection, the Naïve Bayes model maintained a high performance with an AUC of 1.000, accuracy of 0.979, sensitivity of 0.958, and specificity of 1.000. Machine-learning models using CT-based radiomics features can effectively differentiate round pneumonia from primary lung cancer. These models offer a promising noninvasive tool to aid in radiological diagnosis and reduce diagnostic uncertainty.
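The selection-then-classification pipeline — information-gain ranking of 107 radiomic features down to 5, then a Naïve Bayes classifier — can be sketched with scikit-learn's mutual-information estimator as the information-gain surrogate. The data are a synthetic stand-in (48 cases, 3 genuinely informative features injected by hand); nothing below reproduces the study's actual features or results.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Toy stand-in: 48 cases, 107 radiomic features, 3 of them informative
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 24)            # 24 round pneumonia, 24 lung cancer
X = rng.normal(size=(48, 107))
X[:, :3] += y[:, None] * 2.0         # inject class signal into features 0-2

# Information-gain-style selection: keep the 5 highest mutual-information features
mi = mutual_info_classif(X, y, random_state=0)
top5 = np.argsort(mi)[-5:]
acc = cross_val_score(GaussianNB(), X[:, top5], y, cv=5).mean()
```

With only 48 cases, cross-validated accuracy like this is a more honest estimate than the whole-dataset AUC of 1.000 the abstract reports for the unreduced models.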