Page 60 of 134 (1340 results)

Enhancing Ultrasound-Based Diagnosis of Unilateral Diaphragmatic Paralysis with a Visual Transformer-Based Model.

Kalkanis A, Bakalis D, Testelmans D, Buyse B, Simos YV, Tsamis KI, Manis G

PubMed · Jun 17, 2025
This paper presents a novel methodology that combines a pre-trained Visual Transformer-Based Deep Model (ViT) with a custom denoising image filter for the diagnosis of Unilateral Diaphragmatic Paralysis (UDP) using Ultrasound (US) images. The ViT is employed to extract complex features from US images of 17 volunteers, capturing intricate patterns and details that are critical for accurate diagnosis. The extracted features are then fed into an ensemble learning model to determine the presence of UDP. The proposed framework achieves an average accuracy of 93.8% on a stratified 5-fold cross-validation, surpassing relevant state-of-the-art (SOTA) image classifiers. This high level of performance underscores the robustness and effectiveness of the framework, highlighting its potential as a prominent diagnostic tool in medical imaging.
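The stratified 5-fold protocol behind the 93.8% figure can be sketched in a few lines. This is a generic illustration of stratified splitting (each fold preserves the class ratio), not the paper's ViT-plus-ensemble pipeline:

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds that preserve class proportions.

    Minimal sketch of stratified k-fold assignment: shuffle each class's
    indices, then deal them round-robin across folds.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds
```

With 17 volunteers the per-fold counts are small, which is exactly when stratification matters: an unstratified split could leave a fold with no positive cases at all.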

Enhancing cerebral infarct classification by automatically extracting relevant fMRI features.

Dobromyslin VI, Zhou W

PubMed · Jun 17, 2025
Accurate detection of cortical infarct is critical for timely treatment and improved patient outcomes. Current brain imaging methods often require invasive procedures and primarily assess blood-vessel and structural white-matter damage; non-invasive approaches such as functional MRI (fMRI), which better reflect neuronal viability, are needed. This study applied automated machine learning (auto-ML) techniques to identify novel fMRI biomarkers specific to chronic cortical infarcts. We analyzed resting-state fMRI data from the multi-center ADNI dataset, comprising 20 chronic infarct patients and 30 cognitively normal (CN) controls. Surface-based registration methods were applied to minimize the partial-volume effects typically associated with lower-resolution fMRI data. We evaluated the performance of 7 previously known fMRI biomarkers alongside 107 newly auto-generated fMRI biomarkers across 33 different classification models. The analysis identified 6 new fMRI biomarkers that substantially improved infarct-detection performance compared to previously established metrics. The best-performing combination of biomarkers and classifiers achieved a cross-validation ROC score of 0.791, closely matching the accuracy of diffusion-weighted imaging methods used in acute stroke detection. The proposed auto-ML fMRI infarct-detection technique demonstrated robustness across diverse imaging sites and scanner types, highlighting the potential of automated feature extraction to significantly enhance non-invasive infarct detection.
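The headline metric here is a cross-validation ROC score. As a reminder of what such a number means, ROC AUC equals the Mann-Whitney U statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting half). A minimal, illustrative implementation (not the paper's evaluation code):

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic.

    scores: model scores, higher = more likely positive.
    labels: 1 for positive (e.g. infarct), 0 for negative (control).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # ties count half
    return wins / (len(pos) * len(neg))
```

An AUC of 0.791 therefore says: for a random infarct/control pair, the model ranks the infarct higher about 79% of the time, independent of any decision threshold.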

A Robust Residual Three-dimensional Convolutional Neural Networks Model for Prediction of Amyloid-β Positivity by Using FDG-PET.

Ardakani I, Yamada T, Iwano S, Kumar Maurya S, Ishii K

PubMed · Jun 17, 2025
2-deoxy-2-[18F]fluoro-D-glucose (FDG) PET, widely used in oncology, is more accessible and affordable than amyloid PET, the standard tool for determining amyloid positivity in the diagnosis of Alzheimer disease (AD). This study leveraged deep learning with residual 3D convolutional neural networks (3DCNN) to develop a robust model that predicts amyloid-β positivity from FDG-PET. A cohort of 187 patients, ranging from cognitively normal to those with dementia and other cognitive impairments, who underwent T1-weighted MRI, 18F-FDG PET, and 11C-Pittsburgh compound B (PiB) PET scans was used for model development. A residual 3DCNN model was configured using non-exhaustive grid search and trained on repeated random splits of the development data set. We evaluated the model's performance, and particularly its robustness, using a multisite data set of 99 patients of different ethnicities with images at different levels of site harmonization. The model achieved mean AUC scores of 0.815 and 0.840 on images without and with site harmonization, respectively. AUC was higher in the cognitively normal (CN) group (0.801 and 0.834) than in the dementia group (0.777 and 0.745). The corresponding mean F1 scores were 0.770 and 0.810 without and with site harmonization; here the pattern reversed, with lower F1 scores in the CN group (0.580 and 0.658) than in the dementia group (0.907 and 0.931). We demonstrated that a residual 3DCNN can learn complex 3D spatial patterns in FDG-PET images and robustly predict amyloid-β positivity with significantly less reliance on site-harmonization preprocessing.
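The AUC-versus-F1 gap in the CN group is worth making concrete: AUC is threshold-free, while F1 depends on the decision threshold and the positive-class prevalence, so a model can rank well (high AUC) yet score poorly on F1 in a group where positives are rare. A minimal sketch of the metric itself (toy data, not the study's results):

```python
def binary_prf(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Because F1 ignores true negatives entirely, a few missed amyloid-positive CN patients drag F1 down much faster than AUC, which is consistent with the 0.580 vs 0.907 split reported above.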

SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification

Shuo Yang, Bardh Prenkaj, Gjergji Kasneci

arXiv preprint · Jun 17, 2025
Shortcut learning undermines model generalization to out-of-distribution data. While the literature attributes shortcuts to biases in superficial features, we show that imbalances in the semantic distribution of sample embeddings induce spurious semantic correlations, compromising model robustness. To address this issue, we propose SCISSOR (Semantic Cluster Intervention for Suppressing ShORtcut), a Siamese network-based debiasing approach that remaps the semantic space by discouraging latent clusters exploited as shortcuts. Unlike prior data-debiasing approaches, SCISSOR eliminates the need for data augmentation and rewriting. We evaluate SCISSOR on 6 models across 4 benchmarks: Chest-XRay and Not-MNIST in computer vision, and GYAFC and Yelp in NLP tasks. Compared to several baselines, SCISSOR reports +5.3 absolute points in F1 score on GYAFC, +7.3 on Yelp, +7.7 on Chest-XRay, and +1 on Not-MNIST. SCISSOR is also highly advantageous for lightweight models with ~9.5% improvement on F1 for ViT on computer vision datasets and ~11.9% for BERT on NLP. Our study redefines the landscape of model generalization by addressing overlooked semantic biases, establishing SCISSOR as a foundational framework for mitigating shortcut learning and fostering more robust, bias-resistant AI systems.
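The Siamese ingredient of this kind of debiasing can be illustrated with the classic pairwise contrastive loss: pull together embeddings that should share a region of the semantic space, push apart (up to a margin) pairs whose proximity is a shortcut. This is a generic sketch, not SCISSOR's cluster-intervention objective; `same_cluster` is a hypothetical supervision signal standing in for the paper's cluster labels:

```python
import math

def contrastive_loss(a, b, same_cluster, margin=1.0):
    """Contrastive loss on a pair of embeddings (lists of floats).

    same_cluster=True : penalize distance (pull pair together).
    same_cluster=False: penalize closeness inside the margin (push apart).
    """
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if same_cluster:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

Minimized over many pairs, this reshapes the embedding geometry without touching the input data, which matches the paper's claim of debiasing with no augmentation or rewriting.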

Risk Estimation of Knee Osteoarthritis Progression via Predictive Multi-task Modelling from Efficient Diffusion Model using X-ray Images

David Butler, Adrian Hilton, Gustavo Carneiro

arXiv preprint · Jun 17, 2025
Medical imaging plays a crucial role in assessing knee osteoarthritis (OA) risk by enabling early detection and disease monitoring. Recent machine learning methods have improved risk estimation (i.e., predicting the likelihood of disease progression) and predictive modelling (i.e., the forecasting of future outcomes based on current data) using medical images, but clinical adoption remains limited due to their lack of interpretability. Existing approaches that generate future images for risk estimation are complex and impractical. Additionally, previous methods fail to localize anatomical knee landmarks, limiting interpretability. We address these gaps with a new interpretable machine learning method to estimate the risk of knee OA progression via multi-task predictive modelling that classifies future knee OA severity and predicts anatomical knee landmarks from efficiently generated high-quality future images. Such image generation is achieved by leveraging a diffusion model in a class-conditioned latent space to forecast disease progression, offering a visual representation of how particular health conditions may evolve. Applied to the Osteoarthritis Initiative dataset, our approach improves the state-of-the-art (SOTA) by 2%, achieving an AUC of 0.71 in predicting knee OA progression while offering ~9% faster inference time.
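For readers unfamiliar with the diffusion machinery this abstract relies on, the forward (noising) process has a closed form: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps with eps ~ N(0, I). A minimal sketch of that single step, assuming a hypothetical pre-computed noise schedule `alpha_bars`; the paper's class-conditioned latent model is not reproduced here:

```python
import math
import random

def forward_diffuse(x0, t, alpha_bars, seed=None):
    """Sample x_t from the closed-form forward diffusion q(x_t | x_0).

    x0         : clean signal as a list of floats (e.g. a latent vector).
    t          : timestep index into the schedule.
    alpha_bars : cumulative products of (1 - beta_t), precomputed.
    """
    rng = random.Random(seed)
    abar = alpha_bars[t]
    return [math.sqrt(abar) * x + math.sqrt(1 - abar) * rng.gauss(0, 1)
            for x in x0]
```

The reverse (denoising) network learns to undo these steps; conditioning that network on a severity class is what lets such a model "forecast" a future image for a given progression outcome.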

DGG-XNet: A Hybrid Deep Learning Framework for Multi-Class Brain Disease Classification with Explainable AI

Sumshun Nahar Eity, Mahin Montasir Afif, Tanisha Fairooz, Md. Mortuza Ahmmed, Md Saef Ullah Miah

arXiv preprint · Jun 17, 2025
Accurate diagnosis of brain disorders such as Alzheimer's disease and brain tumors remains a critical challenge in medical imaging. Conventional methods based on manual MRI analysis are often inefficient and error-prone. To address this, we propose DGG-XNet, a hybrid deep learning model integrating VGG16 and DenseNet121 to enhance feature extraction and classification. DenseNet121 promotes feature reuse and efficient gradient flow through dense connectivity, while VGG16 contributes strong hierarchical spatial representations. Their fusion enables robust multiclass classification of neurological conditions. Grad-CAM is applied to visualize salient regions, enhancing model transparency. Trained on a combined dataset from BraTS 2021 and Kaggle, DGG-XNet achieved a test accuracy of 91.33%, with precision, recall, and F1-score all exceeding 91%. These results highlight DGG-XNet's potential as an effective and interpretable tool for computer-aided diagnosis (CAD) of neurodegenerative and oncological brain disorders.
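Grad-CAM, the transparency tool named here, has a simple core: weight each channel of a convolutional layer's activation map by its spatially averaged gradient with respect to the target class score, sum the weighted maps, and clip negative evidence with a ReLU. A generic sketch of that computation on toy nested-list tensors shaped [channel][row][col] (not tied to DGG-XNet's layers):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer's activations and the
    gradients of the target class score w.r.t. those activations."""
    n_ch = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # channel weights: global-average-pooled gradients
    weights = [sum(sum(row) for row in gradients[c]) / (h * w)
               for c in range(n_ch)]
    # weighted sum over channels, then ReLU
    return [[max(0.0, sum(weights[c] * activations[c][i][j]
                          for c in range(n_ch)))
             for j in range(w)] for i in range(h)]
```

Upsampled to input resolution and overlaid on the MRI slice, such a map shows which regions drove the class prediction, which is what makes the classifier's decisions auditable by a radiologist.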

Exploring factors driving the evolution of chronic lesions in multiple sclerosis using machine learning.

Hu H, Ye L, Wu P, Shi Z, Chen G, Li Y

PubMed · Jun 17, 2025
The study aimed to identify factors influencing the evolution of chronic lesions in multiple sclerosis (MS) using a machine learning approach. Longitudinal data were collected from individuals with relapsing-remitting multiple sclerosis (RRMS). The "iron rim" sign was identified using quantitative susceptibility mapping (QSM), and microstructural damage was quantified via T1/fluid-attenuated inversion recovery (FLAIR) ratios. Additional features included baseline lesion volume, cerebral T2-hyperintense lesion volume, iron rim lesion volume, the proportion of iron rim lesion volume, gender, age, disease duration (DD), disability and cognitive scores, use of disease-modifying therapy, and follow-up intervals. These features were integrated into machine learning models (logistic regression (LR), random forest (RF), and support vector machine (SVM)) to predict lesion volume change, and the most predictive model was selected for feature importance analysis. The study included 47 RRMS individuals (mean age, 30.6 ± 8.0 years [standard deviation]; 6 males) and 833 chronic lesions. The SVM model demonstrated the best predictive performance, with an AUC of 0.90 in the training set and 0.81 in the testing set. Feature importance analysis identified the top three features as the "iron rim" sign, DD, and the T1/FLAIR ratios of the lesions. This study developed a machine learning model to predict the volume outcome of MS lesions; feature importance analysis identified chronic inflammation around the lesion, DD, and microstructural damage as key factors influencing volume change in chronic MS lesions.
Question: The evolution of chronic lesions in MS is variable, and the factors driving these outcomes remain to be investigated.
Findings: An SVM model was developed to predict chronic MS lesion volume changes, integrating lesion characteristics, lesion burden, and clinical data.
Clinical relevance: Chronic inflammation surrounding lesions, DD, and microstructural damage are key factors influencing the evolution of chronic MS lesions.
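Feature importance for a fitted model is often measured by permutation: shuffle one feature column, breaking its relationship with the target, and record how much the model's score drops. A generic sketch of that recipe (the study's ranking comes from its own fitted SVM; `score_fn` here is a hypothetical callable `(rows, labels) -> score`):

```python
import random

def permutation_importance(score_fn, X, y, col, n_repeats=10, seed=0):
    """Mean drop in score when feature column `col` is shuffled.

    X : list of rows (lists of feature values); y : list of targets.
    A near-zero result means the model barely uses that feature.
    """
    rng = random.Random(seed)
    base = score_fn(X, y)
    drops = []
    for _ in range(n_repeats):
        values = [row[col] for row in X]
        rng.shuffle(values)
        X_perm = [row[:col] + [v] + row[col + 1:]
                  for row, v in zip(X, values)]
        drops.append(base - score_fn(X_perm, y))
    return sum(drops) / n_repeats
```

Ranking features this way is what lets a study like this one translate a black-box SVM into clinically interpretable statements such as "the iron rim sign is the strongest driver."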

NeuroMoE: A Transformer-Based Mixture-of-Experts Framework for Multi-Modal Neurological Disorder Classification

Wajih Hassan Raza, Aamir Bader Shah, Yu Wen, Yidan Shen, Juan Diego Martinez Lemus, Mya Caryn Schiess, Timothy Michael Ellmore, Renjie Hu, Xin Fu

arXiv preprint · Jun 17, 2025
The integration of multi-modal Magnetic Resonance Imaging (MRI) and clinical data holds great promise for enhancing the diagnosis of neurological disorders (NDs) in real-world clinical settings. Deep Learning (DL) has recently emerged as a powerful tool for extracting meaningful patterns from medical data to aid in diagnosis. However, existing DL approaches struggle to effectively leverage multi-modal MRI and clinical data, leading to suboptimal performance. To address this challenge, we utilize a unique, proprietary multi-modal clinical dataset curated for ND research. Based on this dataset, we propose a novel transformer-based Mixture-of-Experts (MoE) framework for ND classification, leveraging multiple MRI modalities (anatomical MRI (aMRI), Diffusion Tensor Imaging (DTI), and functional MRI (fMRI)) alongside clinical assessments. Our framework employs transformer encoders to capture spatial relationships within volumetric MRI data while utilizing modality-specific experts for targeted feature extraction. A gating mechanism with adaptive fusion dynamically integrates expert outputs, ensuring optimal predictive performance. Comprehensive experiments and comparisons with multiple baselines demonstrate that our multi-modal approach significantly enhances diagnostic accuracy, particularly in distinguishing overlapping disease states. Our framework achieves a validation accuracy of 82.47%, outperforming baseline methods by over 10%, highlighting its potential to improve ND diagnosis by applying multi-modal learning to real-world clinical data.
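The adaptive-fusion idea at the heart of an MoE head is compact: each modality-specific expert emits a score vector, and a softmax over gate logits decides how much each expert contributes per input. A minimal sketch of that combination step (the gate logits would come from a learned network; the paper's transformer encoders are not reproduced here):

```python
import math

def moe_combine(expert_outputs, gate_logits):
    """Softmax-gated mixture of expert output vectors.

    expert_outputs : list of per-expert score vectors (equal length).
    gate_logits    : one logit per expert from the gating network.
    """
    m = max(gate_logits)                        # stabilize the softmax
    exps = [math.exp(g - m) for g in gate_logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    n_cls = len(expert_outputs[0])
    return [sum(w * out[k] for w, out in zip(weights, expert_outputs))
            for k in range(n_cls)]
```

Because the weights are input-dependent, the model can lean on the fMRI expert for one patient and the DTI expert for another, which is the usual argument for MoE fusion over fixed concatenation.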

Recognition and diagnosis of Alzheimer's Disease using T1-weighted magnetic resonance imaging via integrating CNN and Swin vision transformer.

Wang Y, Sheng H, Wang X

PubMed · Jun 17, 2025
Alzheimer's disease (AD) is a debilitating neurological disorder whose effective therapy and care depend on accurate diagnosis. This article presents a new vision transformer model designed to classify Alzheimer's disease cases from magnetic resonance imaging data in the Alzheimer's Disease Neuroimaging Initiative dataset. Unlike models that rely solely on convolutional neural networks, the vision transformer can capture relationships between widely separated pixels in an image. The proposed architecture combines elements of convolutional neural network and vision transformer models to extract both local and global visual patterns, enabling it to detect and distinguish significant characteristics in MRI scans and accurately classify Alzheimer's disease subtypes and stages. We specifically use the term 'dementia in patients with Alzheimer's disease' for individuals who have progressed to the dementia stage as a result of AD, distinguishing them from those at earlier stages of the disease. Precise categorization of Alzheimer's disease has significant therapeutic importance, enabling timely identification, tailored treatment strategies, disease monitoring, and prognostic assessment. The reported high accuracy indicates that the proposed vision transformer model can assist healthcare providers and researchers in making well-informed, precise evaluations of individuals with Alzheimer's disease.
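The "far-apart pixels" claim comes from the vision transformer's tokenization: the image is cut into non-overlapping patches, each flattened into a vector, and self-attention then relates every patch to every other regardless of distance. A sketch of that first step on a toy 2D image as nested lists (the paper's hybrid CNN+ViT architecture is not reproduced here):

```python
def image_to_patches(img, patch):
    """Split a 2D image into flattened non-overlapping patches,
    the tokenization step of a vision transformer.

    img   : nested lists, img[row][col].
    patch : patch side length; image dimensions must tile exactly.
    """
    h, w = len(img), len(img[0])
    assert h % patch == 0 and w % patch == 0, "image must tile exactly"
    patches = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            patches.append([img[r + i][c + j]
                            for i in range(patch)
                            for j in range(patch)])
    return patches
```

A CNN's filters, by contrast, only see a small sliding neighborhood at each layer; hybrids like the one described here pair that local inductive bias with the transformer's global attention over the patch sequence.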

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

PubMed · Jun 17, 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Various convolutional neural network architectures-including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations)-for classification, object detection, and segmentation tasks are systematically categorized and evaluated. The investigation encompasses hyperparameter optimization techniques utilized to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization approaches. The role of explainable AI methods in medical diagnosis interpretation is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning approaches, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and clinical practitioners implementing AI-based colorectal cancer detection systems.
