Page 30 of 2352341 results

AI and Healthcare Disparities: Lessons from a Cautionary Tale in Knee Radiology.

Hull G

PubMed · Sep 14 2025
Enthusiasm about the use of artificial intelligence (AI) in medicine has been tempered by concern that algorithmic systems can be unfairly biased against racially minoritized populations. This article uses work on racial disparities in knee osteoarthritis diagnoses to underline that achieving justice in the use of AI in medical imaging requires attention to the entire sociotechnical system within which it operates, rather than isolated properties of algorithms. Using AI to make current diagnostic procedures more efficient risks entrenching existing disparities; a recent algorithm points to some of the problems in current procedures while highlighting systemic normative issues that need to be addressed while designing further AI systems. The article thus contributes to a literature arguing that bias and fairness issues in AI should be considered aspects of structural inequality and injustice, and highlights ways that AI can help make progress on these issues.

Sex classification from hand X-ray images in pediatric patients: How zero-shot Segment Anything Model (SAM) can improve medical image analysis.

Mollineda RA, Becerra K, Mederos B

PubMed · Sep 13 2025
The potential to classify sex from hand data is a valuable tool in both forensic and anthropological sciences. This work presents possibly the most comprehensive study to date of sex classification from hand X-ray images. The research methodology involves a systematic evaluation of zero-shot Segment Anything Model (SAM) in X-ray image segmentation, a novel hand mask detection algorithm based on geometric criteria leveraging human knowledge (avoiding costly retraining and prompt engineering), the comparison of multiple X-ray image representations including hand bone structure and hand silhouette, a rigorous application of deep learning models and ensemble strategies, visual explainability of decisions by aggregating attribution maps from multiple models, and the transfer of models trained from hand silhouettes to sex prediction of prehistoric handprints. Training and evaluation of deep learning models were performed using the RSNA Pediatric Bone Age dataset, a collection of hand X-ray images from pediatric patients. Results showed very high effectiveness of zero-shot SAM in segmenting X-ray images, the contribution of segmenting before classifying X-ray images, hand sex classification accuracy above 95% on test data, and predictions from ancient handprints highly consistent with previous hypotheses based on sexually dimorphic features. Attention maps highlighted the carpometacarpal joints in the female class and the radiocarpal joint in the male class as sex discriminant traits. These findings are anatomically very close to previous evidence reported under different databases, classification models and visualization techniques.
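The hand mask detection step, selecting among SAM's candidate masks with geometric criteria, might look like the following minimal sketch; the specific heuristic here (largest mask touching the bottom border, where the forearm enters the frame) is an illustrative assumption, not the authors' exact rule:

```python
import numpy as np

def select_hand_mask(masks):
    """Return the index of the candidate mask most likely to be the hand.

    Illustrative geometric criterion: among masks whose region touches the
    bottom image border (where the forearm typically enters a hand X-ray),
    pick the one with the largest area. Returns None if no mask qualifies.
    """
    best_idx, best_area = None, -1
    for i, mask in enumerate(masks):
        area = int(mask.sum())
        if mask[-1, :].any() and area > best_area:
            best_idx, best_area = i, area
    return best_idx
```

Real criteria would likely also test shape properties such as aspect ratio or the convexity defects between fingers; the point is that rules like these avoid retraining or prompt engineering.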

Association of artificial intelligence-screened interstitial lung disease with radiation pneumonitis in locally advanced non-small cell lung cancer.

Bacon H, McNeil N, Patel T, Welch M, Ye XY, Bezjak A, Lok BH, Raman S, Giuliani M, Cho BCJ, Sun A, Lindsay P, Liu G, Kandel S, McIntosh C, Tadic T, Hope A

PubMed · Sep 13 2025
Interstitial lung disease (ILD) has been correlated with an increased risk for radiation pneumonitis (RP) following lung SBRT, but the degree to which locally advanced NSCLC (LA-NSCLC) patients are affected has yet to be quantified. An algorithm to identify patients at high risk for RP may help clinicians mitigate risk. All LA-NSCLC patients treated with definitive radiotherapy at our institution from 2006 to 2021 were retrospectively assessed. A convolutional neural network was previously developed to identify patients with radiographic ILD using planning computed tomography (CT) images. All screen-positive (AI-ILD+) patients were reviewed by a thoracic radiologist to identify true radiographic ILD (r-ILD). The association between the algorithm output, clinical and dosimetric variables, and the outcomes of grade ≥ 3 RP and mortality were assessed using univariate (UVA) and multivariable (MVA) logistic regression, and Kaplan-Meier survival analysis. 698 patients were included in the analysis. Grade (G) 0-5 RP was reported in 51%, 27%, 17%, 4.4%, 0.14% and 0.57% of patients, respectively. Overall, 23% of patients were classified as AI-ILD+. On MVA, only AI-ILD status (OR 2.15, p = 0.03) and AI-ILD score (OR 35.27, p < 0.01) were significant predictors of G3+ RP. Median OS was 3.6 years in AI-ILD- patients and 2.3 years in AI-ILD+ patients (NS). Patients with r-ILD had significantly higher rates of severe toxicities, with G3+ RP 25% and G5 RP 7%. r-ILD was associated with an increased risk for G3+ RP on MVA (OR 5.42, p < 0.01). Our AI-ILD algorithm detects patients with significantly increased risk for G3+ RP.
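For context on the odds ratios (ORs) above: with a single binary predictor such as AI-ILD status, the univariate logistic-regression OR reduces to the cross-product of the 2x2 contingency table, a quick sanity check when reading UVA results. The counts below are made up for illustration, not the study's data:

```python
def odds_ratio(exp_event, exp_none, unexp_event, unexp_none):
    """Unadjusted odds ratio from a 2x2 table.

    exp_event / exp_none: exposed patients with / without the event;
    unexp_event / unexp_none: unexposed patients with / without it.
    """
    return (exp_event * unexp_none) / (exp_none * unexp_event)

# Hypothetical counts: 10 of 60 screen-positive patients with G3+ RP
# versus 20 of 220 screen-negative patients.
print(odds_ratio(10, 50, 20, 200))  # 2.0
```

Multivariable ORs, by contrast, are adjusted for the other covariates in the model and cannot be recovered from a single table.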

PET-Computed Tomography in the Management of Sarcoma by Interventional Oncology.

Yazdanpanah F, Hunt SJ

PubMed · Sep 13 2025
PET-computed tomography (CT) has become essential in sarcoma management, offering precise diagnosis, staging, and response assessment by combining metabolic and anatomic imaging. Its high accuracy in detecting primary, recurrent, and metastatic disease guides personalized treatment strategies and enhances interventional procedures like biopsies and ablations. Advances in novel radiotracers and hybrid imaging modalities further improve diagnostic specificity, especially in complex and pediatric cases. Integrating PET-CT with genomic data and artificial intelligence (AI)-driven tools promises to advance personalized medicine, enabling tailored therapies and better outcomes. As a cornerstone of multidisciplinary sarcoma care, PET-CT continues to transform diagnostic and therapeutic approaches in oncology.

Dual-Branch EfficientNet Architecture for ACL Tear Detection in Knee MRI

Kota, T., Garofalaki, K., Whitely, F., Evdokimenko, E., Smartt, E.

medRxiv preprint · Sep 13 2025
We propose a deep learning approach for detecting anterior cruciate ligament (ACL) tears from knee MRI using a dual-branch convolutional architecture. The model independently processes sagittal and coronal MRI sequences using EfficientNet-B2 backbones with spatial attention modules, followed by a late fusion classifier for binary prediction. MRI volumes are standardized to a fixed number of slices, and domain-specific normalization and data augmentation are applied to enhance model robustness. Trained on a stratified 80/20 split of the MRNet dataset, our best model, using the Adam optimizer and a learning rate of 1e-4, achieved a validation AUC of 0.98 and a test AUC of 0.93. These results show strong predictive performance while maintaining computational efficiency. This work demonstrates that accurate diagnosis is achievable using only two anatomical planes and sets the stage for further improvements through architectural enhancements and broader data integration.
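The late-fusion step can be sketched in a few lines; this toy numpy version assumes each branch has already produced a fixed-size embedding, with the linear weights standing in for the learned fusion classifier:

```python
import numpy as np

def late_fusion_predict(feat_sagittal, feat_coronal, weights, bias):
    """Concatenate the two plane embeddings and apply a linear head
    followed by a sigmoid to yield a tear probability."""
    z = np.concatenate([feat_sagittal, feat_coronal]) @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))
```

The design choice worth noting is that fusion happens after each plane has its own backbone and attention module, so each branch can specialize before the classifier weighs the planes against each other.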

Ultrasound-Based Deep Learning Radiomics to Predict Cervical Lymph Node Metastasis in Major Salivary Gland Carcinomas.

Su HZ, Hong LC, Li ZY, Fu QM, Wu YH, Wu SF, Zhang ZB, Yang DH, Zhang XD

PubMed · Sep 12 2025
Cervical lymph node metastasis (CLNM) critically impacts surgical approaches, prognosis, and recurrence in patients with major salivary gland carcinomas (MSGCs). We aimed to develop and validate an ultrasound (US)-based deep learning (DL) radiomics model for noninvasive prediction of CLNM in MSGCs. A total of 214 patients with MSGCs from 4 medical centers were divided into training (Centers 1-2, n = 144) and validation (Centers 3-4, n = 70) cohorts. Radiomics and DL features were extracted from preoperative US images. Following feature selection, radiomics score and DL score were constructed respectively. Subsequently, the least absolute shrinkage and selection operator (LASSO) regression was used to identify optimal features, which were then employed to develop predictive models using logistic regression (LR) and 8 machine learning algorithms. Model performance was evaluated using multiple metrics, with particular focus on the area under the receiver operating characteristic curve (AUC). Radiomics and DL scores showed robust performance in predicting CLNM in MSGCs, with AUCs of 0.819 and 0.836 in the validation cohort, respectively. After LASSO regression, 6 key features (patient age, tumor edge, calcification, US reported CLN-positive, radiomics score, and DL score) were selected to construct 9 predictive models. In the validation cohort, the models' AUCs ranged from 0.770 to 0.962. The LR model achieved the best performance, with an AUC of 0.962, accuracy of 0.886, precision of 0.762, recall of 0.842, and an F1 score of 0.8. The composite model integrating clinical, US, radiomics, and DL features accurately and noninvasively predicts CLNM preoperatively in MSGCs. CLNM in MSGCs is critical for treatment planning, but noninvasive prediction is limited. This study developed an US-based DL radiomics model to enable noninvasive CLNM prediction, supporting personalized surgery and reducing unnecessary interventions.
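The AUCs reported here (and throughout this page) have a direct probabilistic reading: the chance that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting half. A dependency-free sketch via the Mann-Whitney formulation:

```python
def auc_score(labels, scores):
    """AUC as the probability that a random positive outscores a
    random negative (ties count 0.5), i.e. the normalized
    Mann-Whitney U statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This O(P x N) form is fine for sanity checks; production code would use a rank-based O(n log n) implementation such as scikit-learn's `roc_auc_score`.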

Regional attention-enhanced vision transformer for accurate Alzheimer's disease classification using sMRI data.

Jomeiri A, Habibizad Navin A, Shamsi M

PubMed · Sep 12 2025
Alzheimer's disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely intervention. Structural MRI (sMRI) is a key imaging modality for detecting AD-related brain atrophy, yet traditional deep learning models like convolutional neural networks (CNNs) struggle to capture complex spatial dependencies critical for AD diagnosis. This study introduces the Regional Attention-Enhanced Vision Transformer (RAE-ViT), a novel framework designed for AD classification using sMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. RAE-ViT leverages regional attention mechanisms to prioritize disease-critical brain regions, such as the hippocampus and ventricles, while integrating hierarchical self-attention and multi-scale feature extraction to model both localized and global structural patterns. Evaluated on 1152 sMRI scans (255 AD, 521 MCI, 376 NC), RAE-ViT achieved state-of-the-art performance with 94.2% accuracy, 91.8% sensitivity, 95.7% specificity, and an AUC of 0.96, surpassing standard ViTs (89.5%) and CNN-based models (e.g., ResNet-50: 87.8%). The model's interpretable attention maps align closely with clinical biomarkers (Dice: 0.89 hippocampus, 0.85 ventricles), enhancing diagnostic reliability. Robustness to scanner variability (92.5% accuracy on 1.5T scans) and noise (92.5% accuracy under 10% Gaussian noise) further supports its clinical applicability. A preliminary multimodal extension integrating sMRI and PET data improved accuracy to 95.8%. Future work will focus on optimizing RAE-ViT for edge devices, incorporating multimodal data (e.g., PET, fMRI, genetic), and exploring self-supervised and federated learning to enhance generalizability and privacy. RAE-ViT represents a significant advancement in AI-driven AD diagnosis, offering potential for early detection and improved patient outcomes.
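One plausible reading of the regional attention mechanism is attention pooling whose logits are biased by a prior over disease-critical regions. The sketch below is a guess at that general idea, not the paper's implementation; the `region_prior` bias toward, say, hippocampal patches is hypothetical:

```python
import numpy as np

def region_weighted_pool(patch_feats, attn_logits, region_prior):
    """Pool patch embeddings with softmax attention whose logits are
    shifted by a per-patch regional prior (e.g., a positive bias on
    patches covering the hippocampus or ventricles)."""
    logits = attn_logits + region_prior
    weights = np.exp(logits - logits.max())  # stable softmax
    weights = weights / weights.sum()
    return weights @ patch_feats, weights
```

A mechanism of this shape would also explain the interpretable attention maps: the learned weights can be projected back onto the patch grid and compared against anatomical segmentations, as the reported Dice overlaps do.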

Machine Learning for Preoperative Assessment and Postoperative Prediction in Cervical Cancer: Multicenter Retrospective Model Integrating MRI and Clinicopathological Data.

Li S, Guo C, Fang Y, Qiu J, Zhang H, Ling L, Xu J, Peng X, Jiang C, Wang J, Hua K

PubMed · Sep 12 2025
Machine learning (ML) has been increasingly applied to cervical cancer (CC) research. However, few studies have combined both clinical parameters and imaging data. At the same time, there remains an urgent need for more robust and accurate preoperative assessment of parametrial invasion and lymph node metastasis, as well as postoperative prognosis prediction. The objective of this study is to develop an integrated ML model combining clinicopathological variables and magnetic resonance image features for (1) preoperative parametrial invasion and lymph node metastasis detection and (2) postoperative recurrence and survival prediction. Retrospective data from 250 patients with CC (2014-2022; 2 tertiary hospitals) were analyzed. Variables were assessed for their predictive value regarding parametrial invasion, lymph node metastasis, survival, and recurrence using 7 ML models: K-nearest neighbor (KNN), support vector machine, decision tree (DT), random forest (RF), balanced RF, weighted DT, and weighted KNN. Performance was assessed via 5-fold cross-validation using accuracy, sensitivity, specificity, precision, F1-score, and area under the receiver operating characteristic curve (AUC). The optimal models were deployed in an artificial intelligence-assisted contouring and prognosis prediction system. Among 250 women, there were 11 deaths and 24 recurrences. (1) For preoperative evaluation, the integrated model using balanced RF achieved optimal performance (sensitivity 0.81, specificity 0.85) for parametrial invasion, while weighted KNN achieved the best performance for lymph node metastasis (sensitivity 0.98, AUC 0.72). (2) For postoperative prognosis, weighted KNN also demonstrated high accuracy for recurrence (accuracy 0.94, AUC 0.86) and mortality (accuracy 0.97, AUC 0.77), with relatively balanced sensitivity of 0.80 and 0.33, respectively. 
(3) An artificial intelligence-assisted contouring and prognosis prediction system was developed to support preoperative evaluation and postoperative prognosis prediction. The integration of clinical data and magnetic resonance images provides enhanced diagnostic capability to preoperatively detect parametrial invasion and lymph node metastasis, and prognostic capability to predict recurrence and mortality, for CC, facilitating personalized, precise treatment strategies.
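All of the metrics quoted above (accuracy, sensitivity, specificity, precision, F1-score) derive from the four confusion-matrix counts; a minimal reference implementation for checking such tables:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix
    counts: true/false positives (tp, fp), true/false negatives (tn, fn)."""
    sens = tp / (tp + fn)   # sensitivity, a.k.a. recall
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": sens,
        "specificity": spec,
        "precision": prec,
        "f1": 2 * prec * sens / (prec + sens),
    }
```

Note the trade-off visible in the reported results: a model can pair high accuracy with low sensitivity (as for the mortality endpoint, 0.97 vs 0.33) when events are rare, which is why these metrics are reported together.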

Building a General SimCLR Self-Supervised Foundation Model Across Neurological Diseases to Advance 3D Brain MRI Diagnoses

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arXiv preprint · Sep 12 2025
3D structural Magnetic Resonance Imaging (MRI) brain scans are commonly acquired in clinical settings to monitor a wide range of neurological conditions, including neurodegenerative disorders and stroke. While deep learning models have shown promising results analyzing 3D MRI across a number of brain imaging tasks, most are highly tailored for specific tasks with limited labeled data, and are not able to generalize across tasks and/or populations. The development of self-supervised learning (SSL) has enabled the creation of large medical foundation models that leverage diverse, unlabeled datasets ranging from healthy to diseased data, showing significant success in 2D medical imaging applications. However, even the very few foundation models for 3D brain MRI that have been developed remain limited in resolution, scope, or accessibility. In this work, we present a general, high-resolution SimCLR-based SSL foundation model for 3D brain structural MRI, pre-trained on 18,759 patients (44,958 scans) from 11 publicly available datasets spanning diverse neurological diseases. We compare our model to Masked Autoencoders (MAE), as well as two supervised baselines, on four diverse downstream prediction tasks in both in-distribution and out-of-distribution settings. Our fine-tuned SimCLR model outperforms all other models across all tasks. Notably, our model still achieves superior performance when fine-tuned using only 20% of labeled training samples for predicting Alzheimer's disease. We use publicly available code and data, and release our trained model at https://github.com/emilykaczmarek/3D-Neuro-SimCLR, contributing a broadly applicable and accessible foundation model for clinical brain MRI analysis.
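SimCLR pretraining optimizes the NT-Xent contrastive loss over two augmented views of each scan: embeddings of the same scan are pulled together while all other scans in the batch act as negatives. A small numpy sketch of the loss itself (the temperature value is illustrative):

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) loss for a batch of positive pairs.
    z1, z2: (N, d) embeddings of two augmented views of the same scans."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    n = len(z1)
    sim = (z @ z.T) / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss falls as each view's nearest neighbor in embedding space becomes its own augmented twin, which is exactly the invariance the downstream fine-tuning then exploits.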

SSL-AD: Spatiotemporal Self-Supervised Learning for Generalizability and Adaptability Across Alzheimer's Prediction Tasks and Datasets

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arXiv preprint · Sep 12 2025
Alzheimer's disease is a progressive, neurodegenerative disorder that causes memory loss and cognitive decline. While there has been extensive research in applying deep learning models to Alzheimer's prediction tasks, these models remain limited by lack of available labeled data, poor generalization across datasets, and inflexibility to varying numbers of input scans and time intervals between scans. In this study, we adapt three state-of-the-art temporal self-supervised learning (SSL) approaches for 3D brain MRI analysis, and add novel extensions designed to handle variable-length inputs and learn robust spatial features. We aggregate four publicly available datasets comprising 3,161 patients for pre-training, and show the performance of our model across multiple Alzheimer's prediction tasks including diagnosis classification, conversion detection, and future conversion prediction. Importantly, our SSL model implemented with temporal order prediction and contrastive learning outperforms supervised learning on six out of seven downstream tasks. It demonstrates adaptability and generalizability across tasks and number of input images with varying time intervals, highlighting its capacity for robust performance across clinical applications. We release our code and model publicly at https://github.com/emilykaczmarek/SSL-AD.
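Supporting a variable number of input scans, as the abstract describes, typically comes down to padding each patient's sequence to the batch maximum and carrying a validity mask that the temporal model respects. A generic sketch of that plumbing (not the authors' exact pipeline):

```python
import numpy as np

def pad_and_mask(seqs, feat_dim):
    """Pad variable-length sequences of scan embeddings to the batch
    maximum length; return the padded batch plus a boolean mask that
    marks real (non-padded) timepoints."""
    max_len = max(len(s) for s in seqs)
    batch = np.zeros((len(seqs), max_len, feat_dim))
    mask = np.zeros((len(seqs), max_len), dtype=bool)
    for i, s in enumerate(seqs):
        batch[i, : len(s)] = s
        mask[i, : len(s)] = True
    return batch, mask
```

Downstream attention or pooling layers then ignore masked positions, so patients with two scans and patients with five can share one batch without biasing the learned features.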
