Deep learning radiopathomics based on pretreatment MRI and whole slide images for predicting overall survival in locally advanced nasopharyngeal carcinoma.

Yi X, Yu X, Li C, Li J, Cao H, Lu Q, Li J, Hou J

PubMed | May 21, 2025
To develop an integrative radiopathomic model based on deep learning to predict overall survival (OS) in locally advanced nasopharyngeal carcinoma (LANPC) patients. A cohort of 343 LANPC patients with pretreatment MRI and whole slide images (WSIs) was randomly divided into training (n = 202), validation (n = 91), and external test (n = 50) sets. For WSIs, a self-attention mechanism was employed to assess the significance of different patches for the prognostic task, aggregating them into a WSI-level representation. For MRI, a multilayer perceptron was used to encode the extracted radiomic features, resulting in an MRI-level representation. These were combined in a multimodal fusion model to produce prognostic predictions. Model performance was evaluated using the concordance index (C-index), and Kaplan-Meier curves were employed for risk stratification. To enhance model interpretability, attention-based and Integrated Gradients techniques were applied to explain how WSI and MRI features contribute to prognosis predictions. The radiopathomics model achieved high accuracy in predicting OS, with a C-index of 0.755 (95% CI: 0.673-0.838) and 0.744 (95% CI: 0.623-0.808) in the training and validation sets, respectively, outperforming single-modality models (radiomic signature: 0.636, 95% CI: 0.584-0.688; deep pathomic signature: 0.736, 95% CI: 0.684-0.810). In the external test set, the radiopathomics model, radiomic signature, and deep pathomic signature showed a similar pattern, with C-indices of 0.735, 0.626, and 0.660, respectively. The radiopathomics model effectively stratified patients into high- and low-risk groups (P < 0.001). Additionally, attention heatmaps revealed that high-attention regions corresponded with tumor areas in both risk groups. Conclusion: The radiopathomics model holds promise for predicting clinical outcomes in LANPC patients, offering a potential tool for improving clinical decision-making.
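
The attention-based patch aggregation described here follows the standard attention-MIL pattern. Below is a minimal PyTorch sketch of that pattern plus the fusion head; the layer sizes, feature dimensions, and class names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Aggregate patch embeddings into one WSI-level vector via learned attention."""
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, patches):                              # patches: (num_patches, dim)
        weights = torch.softmax(self.score(patches), dim=0)  # attention over patches
        return (weights * patches).sum(dim=0)                # WSI-level representation

class RadioPathomicFusion(nn.Module):
    """Fuse WSI-level and MRI radiomic representations into a survival risk score."""
    def __init__(self, wsi_dim=512, radiomic_dim=100):
        super().__init__()
        self.mil = AttentionMIL(wsi_dim)
        self.mri_mlp = nn.Sequential(nn.Linear(radiomic_dim, 64), nn.ReLU())
        self.head = nn.Linear(wsi_dim + 64, 1)  # e.g. a Cox-style risk output

    def forward(self, patches, radiomics):
        fused = torch.cat([self.mil(patches), self.mri_mlp(radiomics)], dim=-1)
        return self.head(fused)

model = RadioPathomicFusion()
risk = model(torch.randn(200, 512), torch.randn(100))  # 200 patches, 100 radiomic features
```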

Right Ventricular Strain as a Key Feature in Interpretable Machine Learning for Identification of Takotsubo Syndrome: A Multicenter CMR-based Study.

Du Z, Hu H, Shen C, Mei J, Feng Y, Huang Y, Chen X, Guo X, Hu Z, Jiang L, Su Y, Biekan J, Lyv L, Chong T, Pan C, Liu K, Ji J, Lu C

PubMed | May 21, 2025
To develop an interpretable machine learning (ML) model based on cardiac magnetic resonance (CMR) multimodal parameters and clinical data to discriminate Takotsubo syndrome (TTS), acute myocardial infarction (AMI), and acute myocarditis (AM), and to further assess the diagnostic value of right ventricular (RV) strain in TTS. This study analyzed CMR and clinical data of 130 patients from three centers. Key features were selected using least absolute shrinkage and selection operator (LASSO) regression and random forest. Data were split into a training cohort and an internal testing cohort (ITC) in a 7:3 ratio, with overfitting mitigated using leave-one-out cross-validation and bootstrap methods. Nine ML models were evaluated using standard performance metrics, with Shapley additive explanations (SHAP) analysis used for model interpretation. A total of 11 key features were identified. The extreme gradient boosting (XGBoost) model showed the best performance, with an area under the curve (AUC) of 0.94 (95% CI: 0.85-0.97) in the ITC. Right ventricular basal circumferential strain (RVCS-basal) was the most important feature for identifying TTS. Its absolute value was significantly higher in TTS patients than in AMI and AM patients (-9.93%, -5.21%, and -6.18%, respectively, p < 0.001), with values above -6.55% contributing to a diagnosis of TTS. This study developed an interpretable ternary classification ML model for identifying TTS and used SHAP analysis to elucidate the significant value of RVCS-basal in TTS diagnosis. An online calculator (https://lsszxyy.shinyapps.io/XGboost/) based on this model was developed to provide immediate decision support for clinical use.
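
A minimal sketch of the XGBoost-plus-SHAP workflow the study describes, using synthetic data in place of the CMR features; the feature count matches the abstract (11), but the data and settings are assumptions, not the authors' pipeline.

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_train = rng.random((130, 11))        # 11 selected CMR/clinical features (synthetic)
y_train = rng.integers(0, 3, 130)      # 0 = TTS, 1 = AMI, 2 = AM

clf = XGBClassifier(eval_metric="mlogloss")  # multiclass objective inferred from labels
clf.fit(X_train, y_train)

# SHAP quantifies each feature's contribution to every prediction; in the study,
# this kind of analysis surfaced RVCS-basal as the top feature for TTS.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_train)
```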

Adversarial artificial intelligence in radiology: Attacks, defenses, and future considerations.

Dietrich N, Gong B, Patlas MN

PubMed | May 21, 2025
Artificial intelligence (AI) is rapidly transforming radiology, with applications spanning disease detection, lesion segmentation, workflow optimization, and report generation. As these tools become more integrated into clinical practice, new concerns have emerged regarding their vulnerability to adversarial attacks. This review provides an in-depth overview of adversarial AI in radiology, a topic of growing relevance in both research and clinical domains. It begins by outlining the foundational concepts and model characteristics that make machine learning systems particularly susceptible to adversarial manipulation. A structured taxonomy of attack types is presented, including distinctions based on attacker knowledge, goals, timing, and computational frequency. The clinical implications of these attacks are then examined across key radiology tasks, with literature highlighting risks to disease classification, image segmentation and reconstruction, and report generation. Potential downstream consequences such as patient harm, operational disruption, and loss of trust are discussed. Current mitigation strategies are reviewed, spanning input-level defenses, model training modifications, and certified robustness approaches. In parallel, the role of broader lifecycle and safeguard strategies is considered. By consolidating current knowledge across technical and clinical domains, this review helps identify gaps, inform future research priorities, and guide the development of robust, trustworthy AI systems in radiology.
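
Among the attack classes such taxonomies cover, the fast gradient sign method (FGSM) is the canonical white-box example: perturb the input along the sign of the loss gradient. A minimal PyTorch sketch follows, with a toy classifier standing in for any radiology model (an assumption for illustration).

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb `image` along the gradient sign to maximize the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range

# Usage with a toy linear classifier standing in for a CXR model:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(224 * 224, 2))
x_adv = fgsm_attack(model, torch.rand(1, 224, 224), torch.tensor([1]))
```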

Multi-modal Integration Analysis of Alzheimer's Disease Using Large Language Models and Knowledge Graphs

Kanan Kiguchi, Yunhao Tu, Katsuhiro Ajito, Fady Alnajjar, Kazuyuki Murase

arXiv preprint | May 21, 2025
We propose a novel framework for integrating fragmented multi-modal data in Alzheimer's disease (AD) research using large language models (LLMs) and knowledge graphs. While traditional multimodal analysis requires matched patient IDs across datasets, our approach demonstrates population-level integration of MRI, gene expression, biomarkers, EEG, and clinical indicators from independent cohorts. Statistical analysis identified significant features in each modality, which were connected as nodes in a knowledge graph. LLMs then analyzed the graph to extract potential correlations and generate hypotheses in natural language. This approach revealed several novel relationships, including a potential pathway linking metabolic risk factors to tau protein abnormalities via neuroinflammation (r>0.6, p<0.001), and unexpected correlations between frontal EEG channels and specific gene expression profiles (r=0.42-0.58, p<0.01). Cross-validation with independent datasets confirmed the robustness of major findings, with consistent effect sizes across cohorts (variance <15%). The reproducibility of these findings was further supported by expert review (Cohen's κ = 0.82) and computational validation. Our framework enables cross-modal integration at a conceptual level without requiring patient ID matching, offering new possibilities for understanding AD pathology through fragmented data reuse and generating testable hypotheses for future research.
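
A hypothetical sketch of the graph-then-LLM step: significant cross-modal associations become weighted edges, which are serialized into a prompt for hypothesis generation. The node names, correlation values, and query_llm stub below are illustrative assumptions, not the authors' pipeline.

```python
import networkx as nx

def query_llm(prompt: str) -> str:
    """Stub for any chat-completion API call (an assumption, not a real client)."""
    return "LLM-generated hypotheses would appear here."

graph = nx.Graph()
# Significant cross-modal associations become weighted edges (values illustrative).
graph.add_edge("metabolic_risk", "neuroinflammation", r=0.63)
graph.add_edge("neuroinflammation", "tau_abnormality", r=0.61)
graph.add_edge("frontal_EEG_theta", "APOE_expression", r=0.42)

# Serialize the graph into a natural-language prompt asking for hypotheses.
edges = "\n".join(f"{u} -- {v} (r={d['r']})" for u, v, d in graph.edges(data=True))
prompt = ("Given these population-level associations across AD modalities:\n"
          f"{edges}\nPropose mechanistic hypotheses linking them.")
hypotheses = query_llm(prompt)
```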

Domain Adaptive Skin Lesion Classification via Conformal Ensemble of Vision Transformers

Mehran Zoravar, Shadi Alijani, Homayoun Najjaran

arXiv preprint | May 21, 2025
Exploring the trustworthiness of deep learning models is crucial, especially in critical domains such as medical imaging decision support systems. Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. However, conformal prediction results face challenges due to the backbone model's struggles in domain-shifted scenarios, such as variations across data sources. To address this challenge, this paper proposes a novel framework termed Conformal Ensemble of Vision Transformers (CE-ViTs) designed to enhance image classification performance by prioritizing domain adaptation and model robustness, while accounting for uncertainty. The proposed method leverages an ensemble of vision transformer models in the backbone, trained on diverse datasets including the HAM10000, Dermofit, and Skin Cancer ISIC datasets. This ensemble learning approach, calibrated on the combined datasets, aims to enhance domain adaptation through conformal learning. Experimental results show that the framework achieves a high coverage rate of 90.38%, an improvement of 9.95% over a model trained on HAM10000 alone. This indicates that the prediction set is substantially more likely to include the true label than with any single model. Ensemble learning in CE-ViTs significantly improves conformal prediction performance, increasing the average prediction set size for challenging misclassified samples from 1.86 to 3.075.
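
The coverage guarantee comes from split conformal prediction: calibrate a nonconformity threshold on held-out data, then include every label whose score falls under it. A minimal NumPy sketch with synthetic probabilities (the ViT ensemble backbone is out of scope here, and the class count is an assumption):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with ~(1 - alpha) marginal coverage."""
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Include every class whose score falls below the calibrated threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(7), size=500)   # 7 lesion classes (illustrative)
cal_labels = rng.integers(0, 7, size=500)
sets = conformal_sets(cal_probs, cal_labels, rng.dirichlet(np.ones(7), size=10))
```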

Benchmarking Chest X-ray Diagnosis Models Across Multinational Datasets

Qinmei Xu, Yiheng Li, Xianghao Zhan, Ahmet Gorkem Er, Brittany Dashevsky, Chuanjun Xu, Mohammed Alawad, Mengya Yang, Liu Ya, Changsheng Zhou, Xiao Li, Haruka Itakura, Olivier Gevaert

arXiv preprint | May 21, 2025
Foundation models leveraging vision-language pretraining have shown promise in chest X-ray (CXR) interpretation, yet their real-world performance across diverse populations and diagnostic tasks remains insufficiently evaluated. This study benchmarks the diagnostic performance and generalizability of foundation models versus traditional convolutional neural networks (CNNs) on multinational CXR datasets. We evaluated eight CXR diagnostic models - five vision-language foundation models and three CNN-based architectures - across 37 standardized classification tasks using six public datasets from the USA, Spain, India, and Vietnam, and three private datasets from hospitals in China. Performance was assessed using AUROC, AUPRC, and other metrics across both shared and dataset-specific tasks. Foundation models outperformed CNNs in both accuracy and task coverage. MAVL, a model incorporating knowledge-enhanced prompts and structured supervision, achieved the highest performance on public (mean AUROC: 0.82; AUPRC: 0.32) and private (mean AUROC: 0.95; AUPRC: 0.89) datasets, ranking first in 14 of 37 public and 3 of 4 private tasks. All models showed reduced performance on pediatric cases, with average AUROC dropping from 0.88 ± 0.18 in adults to 0.57 ± 0.29 in children (p = 0.0202). These findings highlight the value of structured supervision and prompt design in radiologic AI and suggest future directions including geographic expansion and ensemble modeling for clinical deployment. Code for all evaluated models is available at https://drive.google.com/drive/folders/1B99yMQm7bB4h1sVMIBja0RfUu8gLktCE
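
Per-task AUROC/AUPRC evaluation of this kind reduces to a loop over binary label columns. A minimal scikit-learn sketch with synthetic predictions; the task names and data are illustrative stand-ins for a model's outputs on one dataset.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
tasks = ["atelectasis", "cardiomegaly", "effusion"]   # illustrative task names
y_true = rng.integers(0, 2, size=(1000, len(tasks)))  # binary labels per task
y_score = rng.random((1000, len(tasks)))              # model probabilities

for i, task in enumerate(tasks):
    auroc = roc_auc_score(y_true[:, i], y_score[:, i])
    auprc = average_precision_score(y_true[:, i], y_score[:, i])
    print(f"{task}: AUROC={auroc:.3f} AUPRC={auprc:.3f}")
```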

Comprehensive Lung Disease Detection Using Deep Learning Models and Hybrid Chest X-ray Data with Explainable AI

Shuvashis Sarker, Shamim Rahim Refat, Faika Fairuj Preotee, Tanvir Rouf Shawon, Raihan Tanvir

arXiv preprint | May 21, 2025
Advanced diagnostic instruments are crucial for the accurate detection and treatment of lung diseases, which affect millions of individuals globally. This study examines the effectiveness of deep learning and transfer learning models using a hybrid dataset, created by merging four individual datasets from Bangladesh and global sources. The hybrid dataset significantly enhances model accuracy and generalizability, particularly in detecting COVID-19, pneumonia, lung opacity, and normal lung conditions from chest X-ray images. A range of models, including CNN, VGG16, VGG19, InceptionV3, Xception, ResNet50V2, InceptionResNetV2, MobileNetV2, and DenseNet121, were applied to both individual and hybrid datasets. The results showed superior performance on the hybrid dataset, with VGG16, Xception, ResNet50V2, and DenseNet121 each achieving an accuracy of 99%. This consistent performance across the hybrid dataset highlights the robustness of these models in handling diverse data while maintaining high accuracy. To understand the models' implicit behavior, explainable AI techniques were employed to illuminate their black-box nature. Specifically, LIME was used to enhance the interpretability of model predictions, especially in cases of misclassification, contributing to the development of reliable and interpretable AI-driven solutions for medical imaging.
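
A minimal sketch of applying LIME to a single image prediction, as described above; predict_fn is a random stand-in for a trained classifier, and the four class labels follow the abstract. Everything else is an assumption for illustration.

```python
import numpy as np
from lime import lime_image

def predict_fn(images):
    # Placeholder classifier returning pseudo-probabilities for 4 classes
    # (COVID-19, pneumonia, lung opacity, normal) - not the paper's model.
    logits = np.random.rand(len(images), 4)
    return logits / logits.sum(axis=1, keepdims=True)

image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed CXR
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=100)
# Highlight the superpixels that most supported the top predicted label.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                           num_features=5, hide_rest=False)
```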

An Exploratory Approach Towards Investigating and Explaining Vision Transformer and Transfer Learning for Brain Disease Detection

Shuvashis Sarker, Shamim Rahim Refat, Faika Fairuj Preotee, Shifat Islam, Tashreef Muhammad, Mohammad Ashraful Hoque

arXiv preprint | May 21, 2025
The brain is a highly complex organ that manages many important tasks, including movement, memory, and thinking. Brain-related conditions, like tumors and degenerative disorders, can be hard to diagnose and treat. Magnetic Resonance Imaging (MRI) serves as a key tool for identifying these conditions, offering high-resolution images of brain structures. Despite this, interpreting MRI scans can be complicated. This study tackles this challenge by conducting a comparative analysis of Vision Transformer (ViT) and Transfer Learning (TL) models such as VGG16, VGG19, ResNet50V2, and MobileNetV2 for classifying brain diseases using MRI data from a Bangladesh-based dataset. ViTs, known for their ability to capture global relationships in images, are particularly effective for medical imaging tasks. Transfer learning helps to mitigate data constraints by fine-tuning pre-trained models. Furthermore, Explainable AI (XAI) methods such as GradCAM, GradCAM++, LayerCAM, ScoreCAM, and Faster-ScoreCAM are employed to interpret model predictions. The results demonstrate that the ViT surpasses the transfer learning models, achieving a classification accuracy of 94.39%. The integration of XAI methods enhances model transparency, offering crucial insights to aid medical professionals in diagnosing brain diseases with greater precision.
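
Grad-CAM, the basis of the CAM-family methods listed above, weights the last convolutional activations by their spatially pooled gradients. A minimal PyTorch sketch on a torchvision VGG16; the target layer, class choice, and input are assumptions, not the paper's setup.

```python
import torch
from torchvision.models import vgg16

model = vgg16(weights=None).eval()
activations, gradients = {}, {}

layer = model.features[28]  # last Conv2d in VGG16's feature extractor
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed MRI slice
score = model(x)[0].max()       # score of the top class
score.backward()

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # pooled gradients
cam = torch.relu((weights * activations["a"]).sum(dim=1))  # weighted activation map
cam = cam / cam.max()                                      # normalize to [0, 1]
```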

An automated deep learning framework for brain tumor classification using MRI imagery.

Aamir M, Rahman Z, Bhatti UA, Abro WA, Bhutto JA, He Z

PubMed | May 21, 2025
The precise and timely diagnosis of brain tumors is essential for accelerating patient recovery and preserving lives. Brain tumors exhibit a variety of sizes, shapes, and visual characteristics, requiring individualized treatment strategies for each patient. Radiologists require considerable proficiency to manually detect brain malignancies. However, tumor recognition remains inefficient, imprecise, and labor-intensive in manual procedures, underscoring the need for automated methods. This study introduces an effective approach for identifying brain lesions in magnetic resonance imaging (MRI) images, minimizing dependence on manual intervention. The proposed method improves image clarity by combining guided filtering techniques with anisotropic Gaussian side windows (AGSW). A morphological analysis is conducted prior to segmentation to exclude non-tumor regions from the enhanced MRI images. Deep neural networks segment the images, extracting high-quality regions of interest (ROIs) and multiscale features. Identifying salient elements is essential and is accomplished through an attention module that isolates distinctive features while eliminating irrelevant information. An ensemble model is employed to classify brain tumors into different categories. The proposed technique achieves an overall accuracy of 99.94% and 99.67% on the publicly available brain tumor datasets BraTS2020 and Figshare, respectively. Furthermore, it surpasses existing technologies in terms of automation and robustness, thereby enhancing the entire diagnostic process.
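
The final ensemble-classification step can be sketched as soft voting over several CNN backbones. The member architectures and class count below are assumptions for illustration, not the paper's ensemble.

```python
import torch
from torchvision.models import densenet121, resnet50

def soft_vote(models, x):
    """Average class probabilities across ensemble members."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)

members = [resnet50(weights=None, num_classes=4).eval(),
           densenet121(weights=None, num_classes=4).eval()]
roi = torch.rand(1, 3, 224, 224)  # stand-in for a segmented tumor ROI
prediction = soft_vote(members, roi).argmax(dim=1)
```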