
Mitigating MRI Domain Shift in Sex Classification: A Deep Learning Approach with ComBat Harmonization

Peyman Sharifian, Mohammad Saber Azimi, AliReza Karimian, Hossein Arabi

arXiv preprint · Aug 27, 2025
Deep learning models for medical image analysis often suffer from performance degradation when applied to data from different scanners or protocols, a phenomenon known as domain shift. This study investigates this challenge in the context of sex classification from 3D T1-weighted brain magnetic resonance imaging (MRI) scans using the IXI and OASIS3 datasets. While models achieved high within-domain accuracy (around 0.95) when trained and tested on a single dataset (IXI or OASIS3), we demonstrate a significant performance drop to chance level (about 0.50) when models trained on one dataset are tested on the other, highlighting the presence of a strong domain shift. To address this, we employed the ComBat harmonization technique to align the feature distributions of the two datasets. We evaluated three state-of-the-art 3D deep learning architectures (3D ResNet18, 3D DenseNet, and 3D EfficientNet) across multiple training strategies. Our results show that ComBat harmonization effectively reduces the domain shift, leading to a substantial improvement in cross-domain classification performance. For instance, the cross-domain balanced accuracy of our best model (ResNet18 3D with Attention) improved from approximately 0.50 (chance level) to 0.61 after harmonization. t-SNE visualization of extracted features provides clear qualitative evidence of the reduced domain discrepancy post-harmonization. This work underscores the critical importance of domain adaptation techniques for building robust and generalizable neuroimaging AI models.
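
The abstract does not include an implementation, but the core idea of ComBat-style harmonization, removing scanner- or site-specific shifts in feature location and scale, can be sketched as below. This is a simplified location-scale alignment only (full ComBat additionally applies empirical-Bayes shrinkage of the site parameters and can preserve biological covariates such as age and sex); the feature arrays and site labels are synthetic placeholders.

```python
import numpy as np

def harmonize_location_scale(features, site_labels):
    """Align per-site feature means/variances to the pooled distribution.

    A simplified stand-in for ComBat: full ComBat additionally applies
    empirical-Bayes shrinkage to the per-site location/scale estimates
    and can preserve biological covariates (e.g., age, sex).
    """
    features = np.asarray(features, dtype=float)
    harmonized = features.copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0) + 1e-8
    for site in np.unique(site_labels):
        mask = site_labels == site
        site_mean = features[mask].mean(axis=0)
        site_std = features[mask].std(axis=0) + 1e-8
        # Remove site-specific location/scale, restore pooled statistics.
        harmonized[mask] = (features[mask] - site_mean) / site_std * grand_std + grand_mean
    return harmonized

# Hypothetical usage: 100 IXI + 100 OASIS3 feature vectors of length 512.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (100, 512)), rng.normal(0.5, 1.5, (100, 512))])
sites = np.array(["IXI"] * 100 + ["OASIS3"] * 100)
feats_harmonized = harmonize_location_scale(feats, sites)
```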

A Systematic Review on the Generative AI Applications in Human Medical Genomics

Anton Changalidis, Yury Barbitoff, Yulia Nasykhova, Andrey Glotov

arXiv preprint · Aug 27, 2025
Although traditional statistical techniques and machine learning methods have contributed significantly to genetics and, in particular, inherited disease diagnosis, they often struggle with complex, high-dimensional data, a challenge now addressed by state-of-the-art deep learning models. Large language models (LLMs), based on transformer architectures, have excelled in tasks requiring contextual comprehension of unstructured medical data. This systematic review examines the role of LLMs in the genetic research and diagnostics of both rare and common diseases. An automated keyword-based search in PubMed, bioRxiv, medRxiv, and arXiv was conducted, targeting studies on LLM applications in diagnostics and education within genetics and excluding irrelevant or outdated models. A total of 172 studies were analyzed, highlighting applications in genomic variant identification, annotation, and interpretation, as well as medical imaging advancements through vision transformers. Key findings indicate that while transformer-based models significantly advance disease and risk stratification, variant interpretation, medical imaging analysis, and report generation, major challenges persist in integrating multimodal data (genomic sequences, imaging, and clinical records) into unified and clinically robust pipelines, which face limitations in generalizability and practical implementation in clinical settings. This review provides a comprehensive classification and assessment of the current capabilities and limitations of LLMs in transforming hereditary disease diagnostics and supporting genetic education, serving as a guide to navigate this rapidly evolving field.
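
As a hedged illustration of the automated keyword-based literature search described above, a minimal query against the public NCBI E-utilities endpoint might look like the following; the search string is hypothetical and not the review's actual query.

```python
import requests

# Hypothetical keyword query; the review's actual search strings are not listed here.
QUERY = '("large language model" OR transformer) AND (genetics OR genomics) AND diagnosis'

def search_pubmed(term, retmax=100):
    """Query the NCBI E-utilities esearch endpoint and return matching PMIDs."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

print(len(search_pubmed(QUERY)))
```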

MedNet-PVS: A MedNeXt-Based Deep Learning Model for Automated Segmentation of Perivascular Spaces

Zhen Xuen Brandon Low, Rory Zhang, Hang Min, William Pham, Lucy Vivash, Jasmine Moses, Miranda Lynch, Karina Dorfman, Cassandra Marotta, Shaun Koh, Jacob Bunyamin, Ella Rowsthorn, Alex Jarema, Himashi Peiris, Zhaolin Chen, Sandy R. Shultz, David K. Wright, Dexiao Kong, Sharon L. Naismith, Terence J. O'Brien, Ying Xia, Meng Law, Benjamin Sinclair

arXiv preprint · Aug 27, 2025
Enlarged perivascular spaces (PVS) are increasingly recognized as biomarkers of cerebral small vessel disease, Alzheimer's disease, stroke, and aging-related neurodegeneration. However, manual segmentation of PVS is time-consuming and subject to moderate inter-rater reliability, while existing automated deep learning models have moderate performance and typically fail to generalize across diverse clinical and research MRI datasets. We adapted MedNeXt-L-k5, a Transformer-inspired 3D encoder-decoder convolutional network, for automated PVS segmentation. Two models were trained: one using a homogeneous dataset of 200 T2-weighted (T2w) MRI scans from the Human Connectome Project-Aging (HCP-Aging) dataset and another using 40 heterogeneous T1-weighted (T1w) MRI volumes from seven studies across six scanners. Model performance was evaluated using internal 5-fold cross validation (5FCV) and leave-one-site-out cross validation (LOSOCV). MedNeXt-L-k5 models trained on the T2w images of the HCP-Aging dataset achieved voxel-level Dice scores of 0.88±0.06 in white matter (WM), comparable to the reported inter-rater reliability of that dataset and the highest yet reported in the literature. The same models trained on the T1w images of the HCP-Aging dataset achieved a substantially lower Dice score of 0.58±0.09 (WM). Under LOSOCV, the model had voxel-level Dice scores of 0.38±0.16 (WM) and 0.35±0.12 in the basal ganglia (BG), and cluster-level Dice scores of 0.61±0.19 (WM) and 0.62±0.21 (BG). MedNeXt-L-k5 provides an efficient solution for automated PVS segmentation across diverse T1w and T2w MRI datasets. MedNeXt-L-k5 did not outperform nnU-Net, indicating that the attention-style mechanisms that transformer-inspired models use to provide global context are not required for high accuracy in PVS segmentation.
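
For readers unfamiliar with the two evaluation metrics reported above, the sketch below shows voxel-level Dice and one common formulation of cluster-level Dice (agreement counted per connected component); the paper's exact cluster-level definition may differ, and the masks here are toy data.

```python
import numpy as np
from scipy import ndimage

def voxel_dice(pred, truth):
    """Voxel-level Dice: 2|P ∩ T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def cluster_dice(pred, truth):
    """Cluster-level Dice: agreement counted per connected component
    (a component counts as a hit if it overlaps the other mask at all)."""
    pred_lab, n_pred = ndimage.label(pred)
    truth_lab, n_truth = ndimage.label(truth)
    tp_pred = sum(truth[pred_lab == i].any() for i in range(1, n_pred + 1))
    tp_truth = sum(pred[truth_lab == i].any() for i in range(1, n_truth + 1))
    denom = n_pred + n_truth
    return (tp_pred + tp_truth) / denom if denom else 1.0

# Hypothetical toy masks.
pred = np.zeros((32, 32, 32), bool); pred[5:8, 5:8, 5:8] = True
truth = np.zeros_like(pred); truth[6:9, 6:9, 6:9] = True
print(voxel_dice(pred, truth), cluster_dice(pred, truth))
```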

Optimizing meningioma grading with radiomics and deep features integration, attention mechanisms, and reproducibility analysis.

Albadr RJ, Sur D, Yadav A, Rekha MM, Jain B, Jayabalan K, Kubaev A, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Mohammadifard M, Farhood B, Akhavan-Sigari R

PubMed · Aug 26, 2025
This study aims to develop a robust and clinically applicable framework for preoperative grading of meningiomas using T1-contrast-enhanced and T2-weighted MRI images. The approach integrates radiomic feature extraction, attention-guided deep learning models, and reproducibility assessment to achieve high diagnostic accuracy, model interpretability, and clinical reliability. We analyzed MRI scans from 2546 patients with histopathologically confirmed meningiomas (1560 low-grade, 986 high-grade). High-quality T1-contrast and T2-weighted images were preprocessed through harmonization, normalization, resizing, and augmentation. Tumor segmentation was performed using ITK-SNAP, and inter-rater reliability of radiomic features was evaluated using the intraclass correlation coefficient (ICC). Radiomic features were extracted via the SERA software, while deep features were derived from pre-trained models (ResNet50 and EfficientNet-B0), with attention mechanisms enhancing focus on tumor-relevant regions. Feature fusion and dimensionality reduction were conducted using PCA and LASSO. Ensemble models employing Random Forest, XGBoost, and LightGBM were implemented to optimize classification performance using both radiomic and deep features. Reproducibility analysis showed that 52% of radiomic features demonstrated excellent reliability (ICC > 0.90). Deep features from EfficientNet-B0 outperformed ResNet50, achieving AUCs of 94.12% (T1) and 93.17% (T2). Hybrid models combining radiomic and deep features further improved performance, with XGBoost reaching AUCs of 95.19% (T2) and 96.87% (T1). Ensemble models incorporating both deep architectures achieved the highest classification performance, with AUCs of 96.12% (T2) and 96.80% (T1), demonstrating superior robustness and accuracy. This work introduces a comprehensive and clinically meaningful AI framework that significantly enhances the preoperative grading of meningiomas. The model's high accuracy, interpretability, and reproducibility support its potential to inform surgical planning, reduce reliance on invasive diagnostics, and facilitate more personalized therapeutic decision-making in routine neuro-oncology practice.
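
The pipeline described above (dimensionality reduction, LASSO-style feature selection, and a boosted/ensemble classifier) can be approximated with standard scikit-learn components, as in the hedged sketch below; the feature matrix and labels are synthetic, ICC-based filtering is assumed to have been applied beforehand, and GradientBoostingClassifier stands in for XGBoost/LightGBM.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical fused feature matrix (radiomic + deep features), one row per tumor;
# labels: 0 = low grade, 1 = high grade. Real fused features would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))
y = (X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=50),                               # dimensionality reduction
    SelectFromModel(                                     # LASSO-style sparse selection
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    ),
    VotingClassifier(                                     # soft-voting ensemble
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),  # stand-in for XGBoost/LightGBM
        ],
        voting="soft",
    ),
)
print(cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean())
```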

Validation of an Automated CT Image Analysis in the Prevention of Urinary Stones with Hydration Trial.

Tasian GE, Maalouf NM, Harper JD, Sivalingam S, Logan J, Al-Khalidi HR, Lieske JC, Selman-Fermin A, Desai AC, Lai H, Kirkali Z, Scales CD, Fan Y

PubMed · Aug 26, 2025
Introduction and Objective: Kidney stone growth and new stone formation are common clinical trial endpoints and are associated with future symptomatic events. To date, a manual review of CT scans has been required to assess stone growth and new stone formation, which is laborious. We validated the performance of a software algorithm that automatically identified, registered, and measured stones over longitudinal CT studies. Methods: We validated the performance of a pretrained machine learning algorithm to classify stone outcomes on longitudinal CT scan images at baseline and at the end of the 2-year follow-up period for 62 participants aged >18 years in the Prevention of Urinary Stones with Hydration (PUSH) randomized controlled trial. Stones were defined as an area of voxels with a minimum linear dimension of 2 mm that was higher in density than the mean plus 4 standard deviations of all nonnegative HU values within the kidney. The four outcomes assessed were: (1) growth of at least one existing stone by ≥2 mm, (2) formation of at least one new ≥2 mm stone, (3) no stone growth or new stone formation, and (4) loss of at least one stone. The accuracy of the algorithm was determined by comparing its outcomes to the gold standard of independent review of the CT images by at least two expert clinicians. Results: The algorithm correctly classified outcomes for 61 paired scans (98.4%). One pair that the algorithm incorrectly classified as stone growth was a new renal artery calcification on end-of-study CT. Conclusions: An automated image analysis method validated for the prospective PUSH trial was highly accurate for determining clinical outcomes of new stone formation, stone growth, stable stone size, and stone loss on longitudinal CT images. This method has the potential to improve the accuracy and efficiency of clinical care and endpoint determination for future clinical trials.
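
The stone definition quoted in the Methods (voxels denser than the kidney's mean plus 4 standard deviations of nonnegative HU, grouped into components at least 2 mm in their smallest dimension) lends itself to a short sketch; the interpretation of "minimum linear dimension" and the toy volume below are assumptions, and longitudinal registration is not shown.

```python
import numpy as np
from scipy import ndimage

def detect_stones(ct_hu, kidney_mask, voxel_mm=(1.0, 1.0, 1.0), min_dim_mm=2.0):
    """Flag candidate stones inside a kidney mask, following the rule described
    in the abstract: voxels denser than mean + 4 SD of nonnegative HU within the
    kidney, grouped into components whose smallest extent is at least 2 mm
    (one plausible reading of "minimum linear dimension")."""
    hu = ct_hu[kidney_mask]
    hu = hu[hu >= 0]
    threshold = hu.mean() + 4.0 * hu.std()
    candidates = (ct_hu > threshold) & kidney_mask
    labels, _ = ndimage.label(candidates)
    stones = []
    for region in ndimage.find_objects(labels):
        extent_mm = [(sl.stop - sl.start) * vx for sl, vx in zip(region, voxel_mm)]
        if min(extent_mm) >= min_dim_mm:
            stones.append(region)
    return threshold, stones

# Hypothetical toy volume with one synthetic dense stone.
vol = np.random.default_rng(0).normal(30, 10, (64, 64, 64))
mask = np.zeros(vol.shape, bool); mask[16:48, 16:48, 16:48] = True
vol[30:34, 30:34, 30:34] = 900
print(detect_stones(vol, mask))
```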

Random forest-based out-of-distribution detection for robust lung cancer segmentation

Aneesh Rangnekar, Harini Veeraraghavan

arXiv preprint · Aug 26, 2025
Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation from in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that utilizes deep features from the pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, and a convolutional decoder trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 3D CT scans from public datasets, comprising one ID dataset and four OOD datasets: chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with an FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, respectively, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhance the reliability of cancer segmentation in ID and OOD scenarios.
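
FPR95 (the false positive rate on OOD cases when the threshold is set so that 95% of ID cases are accepted) and a random forest over encoder features can be sketched as follows; how RF-Deep is actually supervised is not stated in the abstract, so the ID-versus-other training setup and the synthetic features below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD cases wrongly accepted as in-distribution when
    the score threshold keeps 95% of ID cases above it."""
    threshold = np.percentile(id_scores, 5)      # 95% of ID scores lie above this
    return float(np.mean(ood_scores >= threshold))

# Hypothetical encoder features; the actual RF-Deep supervision is not specified
# here, so this simply fits ID (label 1) against held-out non-ID (label 0) features.
rng = np.random.default_rng(0)
id_feats, other_feats = rng.normal(0, 1, (300, 768)), rng.normal(1, 1, (300, 768))
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(np.vstack([id_feats, other_feats]), np.r_[np.ones(300), np.zeros(300)])

test_id = rf.predict_proba(rng.normal(0, 1, (100, 768)))[:, 1]     # ID confidence
test_ood = rf.predict_proba(rng.normal(1.5, 1, (100, 768)))[:, 1]
print(f"FPR95 = {fpr_at_95_tpr(test_id, test_ood):.2%}")
```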

Machine Learning-Driven radiomics on 18F-FDG PET for glioma diagnosis: a systematic review and meta-analysis.

Shahriari A, Ghazanafar Ahari S, Mousavi A, Sadeghi M, Abbasi M, Hosseinpour M, Mir A, Zohouri Zanganeh D, Gharedaghi H, Ezati S, Sareminia A, Seyedi D, Shokouhfar M, Darzi A, Ghaedamini A, Zamani S, Khosravi F, Asadi Anar M

PubMed · Aug 26, 2025
Machine learning (ML) applied to radiomics has revolutionized neuro-oncological imaging, yet the diagnostic performance of ML models based specifically on 18F-FDG PET features in glioma remains poorly characterized. To systematically evaluate and quantitatively synthesize the diagnostic accuracy of ML models trained on 18F-FDG PET radiomics for glioma classification. We conducted a PRISMA-compliant systematic review and meta-analysis registered on OSF ( https://doi.org/10.17605/OSF.IO/XJG6P ). PubMed, Scopus, and Web of Science were searched up to January 2025. Studies were included if they applied ML algorithms to 18F-FDG PET radiomic features for glioma classification and reported at least one performance metric. Data extraction included demographics, imaging protocols, feature types, ML models, and validation design. Meta-analysis was performed using random-effects models with pooled estimates of accuracy, sensitivity, specificity, AUC, F1 score, and precision. Heterogeneity was explored via meta-regression and Galbraith plots. Twelve studies comprising 2,321 patients were included. Pooled diagnostic metrics were: accuracy 92.6% (95% CI: 91.3-93.9%), AUC 0.95 (95% CI: 0.94-0.95), sensitivity 85.4%, specificity 89.7%, F1 score 0.78, and precision 0.90. Heterogeneity was high across all domains (I² >75%). Meta-regression identified ML model type and validation strategy as partial moderators. Models using CNNs or PET/MRI integration achieved superior performance. ML models based on 18F-FDG PET radiomics demonstrate strong and balanced diagnostic performance for glioma classification. However, methodological heterogeneity underscores the need for standardized pipelines, external validation, and transparent reporting before clinical integration.
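
The random-effects pooling underlying the reported estimates can be illustrated with a standard DerSimonian-Laird calculation, as in the sketch below; the per-study accuracies and variances shown are invented for illustration, not the reviewed studies' data.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes,
    returning the pooled estimate, its 95% CI, and the I² heterogeneity index."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)             # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study accuracies and variances (illustrative only).
acc = [0.91, 0.94, 0.89, 0.95, 0.93]
var = [0.0004, 0.0003, 0.0006, 0.0002, 0.0005]
print(random_effects_pool(acc, var))
```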

Bronchiectasis in patients with chronic obstructive pulmonary disease: AI-based CT quantification using the bronchial tapering ratio.

Park H, Choe J, Lee SM, Lim S, Lee JS, Oh YM, Lee JB, Hwang HJ, Yun J, Bae S, Yu D, Loh LC, Ong CK, Seo JB

PubMed · Aug 26, 2025
Although chest CT is the primary tool for evaluating bronchiectasis, accurately measuring its extent poses challenges. This study aimed to automatically quantify bronchiectasis using an artificial intelligence (AI)-based analysis of the bronchial tapering ratio on chest CT and assess its association with clinical outcomes in patients with chronic obstructive pulmonary disease (COPD). COPD patients from two prospective multicenter cohorts were included. AI-based airway quantification was performed on baseline CT, measuring the tapering ratio for each bronchus in the whole lung. The bronchiectasis score accounting for the extent of bronchi with abnormal tapering (inner lumen tapering ratio ≥ 1.1, indicating airway dilatation) in the whole lung was calculated. Associations between the bronchiectasis score and all-cause mortality and acute exacerbation (AE) were assessed using multivariable models. The discovery and validation cohorts included 361 (mean age, 67 years; 97.5% men) and 112 patients (mean age, 67 years; 93.7% men), respectively. In the discovery cohort, 220 (60.9%) had a history of at least one AE and 59 (16.3%) died during follow-up, and 18 (16.1%) died in the validation cohort. Bronchiectasis score was independently associated with increased mortality (discovery: adjusted HR, 1.86 [95% CI: 1.08-3.18]; validation: HR, 5.42 [95% CI: 1.97-14.92]). The score was also associated with risk of any AE, severe AE, and shorter time to first AE (for all, p < 0.05). In patients with COPD, the quantified extent of bronchiectasis using AI-based CT quantification of the bronchial tapering ratio was associated with all-cause mortality and the risk of AE over time. Question: Can AI-based CT quantification of bronchial tapering reliably assess bronchiectasis relevant to clinical outcomes in patients with COPD? Findings: Scores from this AI-based method of automatically quantifying the extent of whole lung bronchiectasis were independently associated with all-cause mortality and risk of AEs in COPD patients. Clinical relevance: AI-based bronchiectasis analysis on CT may shift clinical research toward more objective, quantitative assessment methods and support risk stratification and management in COPD, highlighting its potential to enhance clinically relevant imaging evaluation.
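
The abstract defines abnormal tapering as an inner lumen tapering ratio ≥ 1.1; one plausible (assumed) formulation of that ratio and of a whole-lung extent score is sketched below with made-up lumen measurements, since the paper's exact definitions are not given here.

```python
import numpy as np

ABNORMAL_TAPERING = 1.1  # threshold from the abstract: ratio >= 1.1 suggests dilatation

def tapering_ratio(lumen_diameters_mm):
    """One plausible formulation (the paper's exact definition may differ):
    mean lumen diameter over the distal half of the bronchus divided by the
    mean over the proximal half; a normally tapering airway gives a ratio < 1."""
    d = np.asarray(lumen_diameters_mm, float)
    half = len(d) // 2
    return d[half:].mean() / d[:half].mean()

def bronchiectasis_score(per_bronchus_diameters):
    """Fraction of measured bronchi with abnormal tapering, as a simple proxy
    for the whole-lung extent score described in the abstract."""
    ratios = [tapering_ratio(d) for d in per_bronchus_diameters]
    return float(np.mean([r >= ABNORMAL_TAPERING for r in ratios]))

# Hypothetical diameters sampled along three bronchi (proximal -> distal, mm).
bronchi = [
    [4.0, 3.6, 3.2, 2.8, 2.4, 2.0],   # normal tapering
    [3.0, 3.1, 3.3, 3.4, 3.6, 3.8],   # dilated distally
    [3.5, 3.3, 3.1, 2.9, 2.7, 2.5],   # normal tapering
]
print(bronchiectasis_score(bronchi))  # -> 0.333...
```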

Improved pulmonary embolism detection in CT pulmonary angiogram scans with hybrid vision transformers and deep learning techniques.

Abdelhamid A, El-Ghamry A, Abdelhay EH, Abo-Zahhad MM, Moustafa HE

PubMed · Aug 26, 2025
Pulmonary embolism (PE) represents a severe, life-threatening cardiovascular condition and is notably the third leading cause of cardiovascular mortality, after myocardial infarction and stroke. This pathology occurs when blood clots obstruct the pulmonary arteries, impeding blood flow and oxygen exchange in the lungs. Prompt and accurate detection of PE is critical for appropriate clinical decision-making and patient survival. The complexity involved in interpreting medical images can often result in misdiagnosis. However, recent advances in Deep Learning (DL) have substantially improved the capabilities of Computer-Aided Diagnosis (CAD) systems. Despite these advancements, existing single-model DL methods are limited when handling complex, diverse, and imbalanced medical imaging datasets. Addressing this gap, our research proposes an ensemble framework for classifying PE, capitalizing on the unique capabilities of ResNet50, DenseNet121, and Swin Transformer models. This ensemble method harnesses the complementary strengths of convolutional neural networks (CNNs) and vision transformers (ViTs), leading to improved prediction accuracy and model robustness. The proposed methodology includes a sophisticated preprocessing pipeline leveraging autoencoder (AE)-based dimensionality reduction, data augmentation to avoid overfitting, discrete wavelet transform (DWT) for multiscale feature extraction, and Sobel filtering for effective edge detection and noise reduction. The proposed model was rigorously evaluated using the public Radiological Society of North America (RSNA-STR) PE dataset, demonstrating remarkable performance metrics of 97.80% accuracy and an Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.99. Comparative analysis demonstrated superior performance over state-of-the-art pre-trained models and recent ViT-based approaches, highlighting our method's effectiveness in improving early PE detection and providing robust support for clinical decision-making.
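
A hedged sketch of the preprocessing ideas named above (Sobel edge emphasis and discrete wavelet transform for multiscale features) is given below using SciPy and PyWavelets; the specific wavelet, normalization, and input are assumptions, and the autoencoder and ensemble stages are not shown.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy import ndimage

def preprocess_slice(img):
    """Illustrative preprocessing in the spirit of the described pipeline:
    Sobel gradient magnitude for edge emphasis and a single-level 2D DWT for
    multiscale features. The paper's exact filters and wavelet are not specified."""
    img = (img - img.min()) / (np.ptp(img) + 1e-8)        # normalize to [0, 1]
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    edges = np.hypot(gx, gy)                              # Sobel edge map
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")             # approximation + detail bands
    return edges, cA, (cH, cV, cD)

# Hypothetical CT pulmonary angiogram slice.
slice_hu = np.random.default_rng(0).normal(size=(256, 256))
edges, approx, details = preprocess_slice(slice_hu)
print(edges.shape, approx.shape)   # (256, 256) (128, 128)
```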

MedVQA-TREE: A Multimodal Reasoning and Retrieval Framework for Sarcopenia Prediction

Pardis Moradbeiki, Nasser Ghadiri, Sayed Jalal Zahabi, Uffe Kock Wiil, Kristoffer Kittelmann Brockhattingen, Ali Ebrahimi

arXiv preprint · Aug 26, 2025
Accurate sarcopenia diagnosis via ultrasound remains challenging due to subtle imaging cues, limited labeled data, and the absence of clinical context in most models. We propose MedVQA-TREE, a multimodal framework that integrates a hierarchical image interpretation module, a gated feature-level fusion mechanism, and a novel multi-hop, multi-query retrieval strategy. The vision module includes anatomical classification, region segmentation, and graph-based spatial reasoning to capture coarse, mid-level, and fine-grained structures. A gated fusion mechanism selectively integrates visual features with textual queries, while clinical knowledge is retrieved through a UMLS-guided pipeline accessing PubMed and a sarcopenia-specific external knowledge base. MedVQA-TREE was trained and evaluated on two public MedVQA datasets (VQA-RAD and PathVQA) and a custom sarcopenia ultrasound dataset. The model achieved up to 99% diagnostic accuracy and outperformed previous state-of-the-art methods by over 10%. These results underscore the benefit of combining structured visual understanding with guided knowledge retrieval for effective AI-assisted diagnosis in sarcopenia.
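
The gated feature-level fusion mentioned above can be illustrated with a generic sigmoid-gated block in PyTorch, as sketched below; the dimensions and the exact gating design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """A generic gated feature-level fusion block (the paper's exact design is
    not given here): a sigmoid gate, conditioned on both modalities, decides
    how much of the visual signal to mix with the textual query embedding."""
    def __init__(self, vis_dim=768, txt_dim=768, out_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, out_dim)
        self.txt_proj = nn.Linear(txt_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.Sigmoid())

    def forward(self, vis_feat, txt_feat):
        v, t = self.vis_proj(vis_feat), self.txt_proj(txt_feat)
        g = self.gate(torch.cat([v, t], dim=-1))     # per-dimension gate in [0, 1]
        return g * v + (1.0 - g) * t                 # gated mixture of modalities

# Hypothetical batch of 4 image/question embedding pairs.
fusion = GatedFusion()
fused = fusion(torch.randn(4, 768), torch.randn(4, 768))
print(fused.shape)  # torch.Size([4, 512])
```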