
Qinmei Xu, Yiheng Li, Xianghao Zhan, Ahmet Gorkem Er, Brittany Dashevsky, Chuanjun Xu, Mohammed Alawad, Mengya Yang, Liu Ya, Changsheng Zhou, Xiao Li, Haruka Itakura, Olivier Gevaert

arXiv preprint · May 21, 2025
Foundation models leveraging vision-language pretraining have shown promise in chest X-ray (CXR) interpretation, yet their real-world performance across diverse populations and diagnostic tasks remains insufficiently evaluated. This study benchmarks the diagnostic performance and generalizability of foundation models versus traditional convolutional neural networks (CNNs) on multinational CXR datasets. We evaluated eight CXR diagnostic models - five vision-language foundation models and three CNN-based architectures - across 37 standardized classification tasks using six public datasets from the USA, Spain, India, and Vietnam, and three private datasets from hospitals in China. Performance was assessed using AUROC, AUPRC, and other metrics across both shared and dataset-specific tasks. Foundation models outperformed CNNs in both accuracy and task coverage. MAVL, a model incorporating knowledge-enhanced prompts and structured supervision, achieved the highest performance on public (mean AUROC: 0.82; AUPRC: 0.32) and private (mean AUROC: 0.95; AUPRC: 0.89) datasets, ranking first in 14 of 37 public and 3 of 4 private tasks. All models showed reduced performance on pediatric cases, with average AUROC dropping from 0.88 ± 0.18 in adults to 0.57 ± 0.29 in children (p = 0.0202). These findings highlight the value of structured supervision and prompt design in radiologic AI and suggest future directions including geographic expansion and ensemble modeling for clinical deployment. Code for all evaluated models is available at https://drive.google.com/drive/folders/1B99yMQm7bB4h1sVMIBja0RfUu8gLktCE
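
As an illustration of the benchmarking step described in this abstract (not code from the study), the sketch below computes per-task AUROC and AUPRC with scikit-learn; the task names and random predictions are placeholders.

```python
# Minimal sketch: per-task AUROC/AUPRC evaluation (placeholder data).
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_model(y_true, y_score, task_names):
    """y_true, y_score: arrays of shape (n_samples, n_tasks) with binary labels / predicted probabilities."""
    results = {}
    for i, task in enumerate(task_names):
        results[task] = {
            "AUROC": roc_auc_score(y_true[:, i], y_score[:, i]),
            "AUPRC": average_precision_score(y_true[:, i], y_score[:, i]),
        }
    return results

# Random data standing in for real CXR predictions on three illustrative tasks.
rng = np.random.default_rng(0)
tasks = ["cardiomegaly", "effusion", "pneumonia"]
y_true = rng.integers(0, 2, size=(100, len(tasks)))
y_score = rng.random(size=(100, len(tasks)))
print(evaluate_model(y_true, y_score, tasks))
```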

Mehran Zoravar, Shadi Alijani, Homayoun Najjaran

arXiv preprint · May 21, 2025
Exploring the trustworthiness of deep learning models is crucial, especially in critical domains such as medical imaging decision support systems. Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. However, conformal prediction results degrade when the backbone model struggles under domain shift, such as variations across imaging sources. To address this challenge, this paper proposes a novel framework termed Conformal Ensemble of Vision Transformers (CE-ViTs), designed to enhance image classification performance by prioritizing domain adaptation and model robustness while accounting for uncertainty. The proposed method uses an ensemble of vision transformer models as the backbone, trained on diverse datasets including the HAM10000, Dermofit, and Skin Cancer ISIC datasets. This ensemble, calibrated on the combination of these datasets, aims to enhance domain adaptation through conformal learning. Experimental results show that the framework achieves a high coverage rate of 90.38%, an improvement of 9.95% compared to the HAM10000 model, indicating a stronger likelihood that the prediction set includes the true label than with individual models. Ensemble learning in CE-ViTs also improves conformal prediction behavior on challenging misclassified samples, increasing their average prediction set size from 1.86 to 3.075.
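
For readers unfamiliar with the calibration step this abstract relies on, here is a minimal split-conformal sketch for classification. It assumes calibration and test probabilities come from an already-trained ensemble; all arrays and names are illustrative, not the CE-ViTs implementation.

```python
# Minimal sketch of split conformal prediction for classification.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Prediction set: all classes whose score does not exceed the quantile.
    return (1.0 - test_probs) <= q_hat   # boolean matrix (n_test, n_classes)

# Empirical coverage: fraction of test samples whose true label is in the set.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(7), size=500)   # 7 classes, e.g. HAM10000
cal_labels = rng.integers(0, 7, size=500)
test_probs = rng.dirichlet(np.ones(7), size=200)
test_labels = rng.integers(0, 7, size=200)
sets = conformal_sets(cal_probs, cal_labels, test_probs)
coverage = sets[np.arange(200), test_labels].mean()
print(f"coverage: {coverage:.3f}, mean set size: {sets.sum(axis=1).mean():.2f}")
```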

Kanan Kiguchi, Yunhao Tu, Katsuhiro Ajito, Fady Alnajjar, Kazuyuki Murase

arXiv preprint · May 21, 2025
We propose a novel framework for integrating fragmented multi-modal data in Alzheimer's disease (AD) research using large language models (LLMs) and knowledge graphs. While traditional multimodal analysis requires matched patient IDs across datasets, our approach demonstrates population-level integration of MRI, gene expression, biomarkers, EEG, and clinical indicators from independent cohorts. Statistical analysis identified significant features in each modality, which were connected as nodes in a knowledge graph. LLMs then analyzed the graph to extract potential correlations and generate hypotheses in natural language. This approach revealed several novel relationships, including a potential pathway linking metabolic risk factors to tau protein abnormalities via neuroinflammation (r>0.6, p<0.001), and unexpected correlations between frontal EEG channels and specific gene expression profiles (r=0.42-0.58, p<0.01). Cross-validation with independent datasets confirmed the robustness of major findings, with consistent effect sizes across cohorts (variance <15%). The reproducibility of these findings was further supported by expert review (Cohen's κ = 0.82) and computational validation. Our framework enables cross-modal integration at a conceptual level without requiring patient ID matching, offering new possibilities for understanding AD pathology through fragmented data reuse and generating testable hypotheses for future research.
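
To make the graph-plus-LLM idea concrete, the sketch below builds a tiny cross-modal knowledge graph with networkx and serializes it as text for an LLM prompt. The feature names, edge statistics, and prompt wording are placeholders, not the authors' actual pipeline.

```python
# Minimal sketch: connect per-modality significant features and turn the
# graph into LLM prompt context for hypothesis generation.
import networkx as nx

G = nx.Graph()
# Nodes tagged with their source modality (illustrative names).
G.add_node("hippocampal_volume", modality="MRI")
G.add_node("APOE_expression", modality="gene_expression")
G.add_node("CSF_tau", modality="biomarker")
G.add_node("frontal_theta_power", modality="EEG")
# Edges carry the statistic that linked the two features.
G.add_edge("CSF_tau", "APOE_expression", r=0.61, p=0.0008)
G.add_edge("frontal_theta_power", "APOE_expression", r=0.45, p=0.004)

def graph_to_prompt(graph):
    lines = ["Known cross-modal associations:"]
    for u, v, d in graph.edges(data=True):
        lines.append(f"- {u} ({graph.nodes[u]['modality']}) ~ "
                     f"{v} ({graph.nodes[v]['modality']}): r={d['r']}, p={d['p']}")
    lines.append("Suggest plausible mechanistic hypotheses linking these findings.")
    return "\n".join(lines)

print(graph_to_prompt(G))  # this text would be passed to an LLM
```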

Dietrich N, Gong B, Patlas MN

PubMed paper · May 21, 2025
Artificial intelligence (AI) is rapidly transforming radiology, with applications spanning disease detection, lesion segmentation, workflow optimization, and report generation. As these tools become more integrated into clinical practice, new concerns have emerged regarding their vulnerability to adversarial attacks. This review provides an in-depth overview of adversarial AI in radiology, a topic of growing relevance in both research and clinical domains. It begins by outlining the foundational concepts and model characteristics that make machine learning systems particularly susceptible to adversarial manipulation. A structured taxonomy of attack types is presented, including distinctions based on attacker knowledge, goals, timing, and computational frequency. The clinical implications of these attacks are then examined across key radiology tasks, with literature highlighting risks to disease classification, image segmentation and reconstruction, and report generation. Potential downstream consequences such as patient harm, operational disruption, and loss of trust are discussed. Current mitigation strategies are reviewed, spanning input-level defenses, model training modifications, and certified robustness approaches. In parallel, the role of broader lifecycle and safeguard strategies is considered. By consolidating current knowledge across technical and clinical domains, this review helps identify gaps, inform future research priorities, and guide the development of robust, trustworthy AI systems in radiology.
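
As a concrete reference point for the attack taxonomy the review covers, below is a minimal sketch of the canonical white-box fast gradient sign method (FGSM) in PyTorch. The model and data are toy stand-ins, not a specific radiology system.

```python
# Minimal FGSM sketch (white-box, untargeted) against a generic classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel one step in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach().clamp(0, 1)

# Toy usage with a tiny stand-in network and fake grayscale "images".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16 * 16, 3))
x = torch.rand(4, 1, 16, 16)
y = torch.randint(0, 3, (4,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation magnitude bounded by epsilon
```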

Du Z, Hu H, Shen C, Mei J, Feng Y, Huang Y, Chen X, Guo X, Hu Z, Jiang L, Su Y, Biekan J, Lyv L, Chong T, Pan C, Liu K, Ji J, Lu C

PubMed paper · May 21, 2025
To develop an interpretable machine learning (ML) model based on cardiac magnetic resonance (CMR) multimodal parameters and clinical data to discriminate Takotsubo syndrome (TTS), acute myocardial infarction (AMI), and acute myocarditis (AM), and to further assess the diagnostic value of right ventricular (RV) strain in TTS. This study analyzed CMR and clinical data of 130 patients from three centers. Key features were selected using least absolute shrinkage and selection operator regression and random forest. Data were split into a training cohort and an internal testing cohort (ITC) in the ratio 7:3, with overfitting avoided using leave-one-out cross-validation and bootstrap methods. Nine ML models were evaluated using standard performance metrics, with Shapley additive explanations (SHAP) analysis used for model interpretation. A total of 11 key features were identified. The extreme gradient boosting model showed the best performance, with an area under the curve (AUC) value of 0.94 (95% CI: 0.85-0.97) in the ITC. Right ventricular basal circumferential strain (RVCS-basal) was the most important feature for identifying TTS. Its absolute value was significantly higher in TTS patients than in AMI and AM patients (-9.93%, -5.21%, and -6.18%, respectively, p < 0.001), with values above -6.55% contributing to a diagnosis of TTS. This study developed an interpretable ternary classification ML model for identifying TTS and used SHAP analysis to elucidate the significant value of RVCS-basal in TTS diagnosis. An online calculator (https://lsszxyy.shinyapps.io/XGboost/) based on this model was developed to provide immediate decision support for clinical use.
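
The interpretability step described here (gradient boosting plus SHAP) can be sketched as follows; the feature names (e.g. "RVCS_basal" standing in for RV basal circumferential strain), data, and hyperparameters are illustrative assumptions, not the study's model.

```python
# Minimal sketch: fit a gradient-boosted classifier and inspect per-feature
# contributions with SHAP (placeholder data).
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
feature_names = ["RVCS_basal", "LVEF", "troponin", "LGE_extent"]  # placeholders
X = rng.normal(size=(130, len(feature_names)))
y = rng.integers(0, 3, size=130)          # 3 classes: TTS / AMI / AM

model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="mlogloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)    # per-class, per-feature contributions
print(type(shap_values))
```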

C V LP, V G B, Bhooshan RS

PubMed paper · May 21, 2025
Breast cancer remains a leading cause of mortality among women worldwide, underscoring the need for accurate and timely diagnostic methods. Precise segmentation of nuclei in breast histopathology images is crucial for effective diagnosis and prognosis, offering critical insights into tumor characteristics and informing treatment strategies. This paper presents an enhanced U-Net architecture utilizing ResNet-34 as an advanced backbone, aimed at improving nuclei segmentation performance. The proposed model is evaluated and compared with standard U-Net and its other variants, including U-Net with VGG-16 and Inception-v3 backbones, using the BreCaHad dataset with nuclei masks generated through ImageJ software. The U-Net model with ResNet-34 backbone achieved superior performance, recording an Intersection over Union (IoU) score of 0.795, significantly outperforming the basic U-Net's IoU score of 0.725. The integration of advanced backbones and data augmentation techniques substantially improved segmentation accuracy, especially on limited medical imaging datasets. Comparative analysis demonstrated that ResNet-34 consistently surpassed other configurations across multiple metrics, including IoU, accuracy, precision, and F1 score. Further validation on the BNS and MoNuSeg-2018 datasets confirmed the robustness of the proposed model. This study highlights the potential of advanced deep learning architectures combined with augmentation methods to address challenges in nuclei segmentation, contributing to the development of more effective clinical diagnostic tools and improved patient care outcomes.
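
For orientation, one common way to build a ResNet-34-backed U-Net is via the segmentation_models_pytorch package, and the reported IoU metric is straightforward to compute; the snippet below is a hedged sketch along those lines, not the authors' training code, and all shapes and thresholds are illustrative.

```python
# Minimal sketch: ResNet-34 U-Net construction and the IoU metric.
import numpy as np
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=3, classes=1)   # binary nuclei mask

def iou_score(pred_mask, true_mask, eps=1e-7):
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return (inter + eps) / (union + eps)

rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5       # stand-in for thresholded model output
true = rng.random((256, 256)) > 0.5       # stand-in for a ground-truth nuclei mask
print(f"IoU: {iou_score(pred, true):.3f}")
```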

Boily C, Mazellier JP, Meyer P

PubMed paper · May 21, 2025
This study systematically examines the impact of training database size and the generalizability of deep learning models for synthetic medical image generation. Specifically, we employ a Cycle-Consistency Generative Adversarial Network (CycleGAN) with softly paired data to synthesize kilovoltage computed tomography (kVCT) images from megavoltage computed tomography (MVCT) scans. Unlike previous works, which were constrained by limited data availability, our study uses an extensive database comprising 4,000 patient CT scans, an order of magnitude larger than prior research, allowing for a more rigorous assessment of database size in medical image translation. We quantitatively evaluate the fidelity of the generated synthetic images using established image similarity metrics, including Mean Absolute Error (MAE) and Structural Similarity Index Measure (SSIM). Beyond assessing image quality, we investigate the model's capacity for generalization by analyzing its performance across diverse patient subgroups, considering factors such as sex, age, and anatomical region. This approach enables a more granular understanding of how dataset composition influences model robustness.
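
The fidelity metrics named in this abstract (MAE and SSIM) can be computed as in the sketch below; the arrays are random stand-ins for a reference kVCT slice and a CycleGAN-synthesized slice, not data from the study.

```python
# Minimal sketch: MAE and SSIM between a synthetic slice and its reference.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((512, 512)).astype(np.float32)   # stand-in real kVCT slice
synthetic = rng.random((512, 512)).astype(np.float32)   # stand-in CycleGAN output

mae = np.mean(np.abs(reference - synthetic))
ssim = structural_similarity(reference, synthetic, data_range=1.0)
print(f"MAE: {mae:.4f}, SSIM: {ssim:.4f}")
```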

Ghose P, Jamil HM

PubMed paper · May 21, 2025
A brain tumor is an abnormal growth in the brain that disrupts its function and poses a significant threat to life by damaging neurons. Early detection and classification of brain tumors are crucial to prevent complications and preserve health. Recent advances in deep learning have shown immense potential in image classification and segmentation for tumor identification. In this study, we present BrainView, a platform for the detection and segmentation of brain tumors from Magnetic Resonance Imaging (MRI) using deep learning. We used the pre-trained EfficientNetB7 model to design our proposed DeepBrainNet classification model, which analyzes brain MRI images to classify tumor type. We also propose an EfficientNetB7-based image segmentation model, called EffB7-UNet, for tumor localization. Experimental results show high classification (99.96%) and segmentation (92.734%) accuracies for the proposed models. Finally, we outline a cloud application for BrainView, built with Flask and Flutter, that lets researchers and clinicians use our machine learning models online for research purposes.
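
A generic transfer-learning setup with a pre-trained EfficientNetB7 backbone looks like the Keras sketch below; the class count, input size, and head layers are assumptions for illustration, not the exact DeepBrainNet configuration.

```python
# Minimal sketch: EfficientNetB7 backbone with a small classification head.
import tensorflow as tf

base = tf.keras.applications.EfficientNetB7(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                      # freeze pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(4, activation="softmax"),  # hypothetical tumor classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```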

Yi X, Yu X, Li C, Li J, Cao H, Lu Q, Li J, Hou J

PubMed paper · May 21, 2025
To develop an integrative radiopathomic model based on deep learning to predict overall survival (OS) in locally advanced nasopharyngeal carcinoma (LANPC) patients. A cohort of 343 LANPC patients with pretreatment MRI and whole slide images (WSI) were randomly divided into training (n = 202), validation (n = 91), and external test (n = 50) sets. For WSIs, a self-attention mechanism was employed to assess the significance of different patches for the prognostic task, aggregating them into a WSI-level representation. For MRI, a multilayer perceptron was used to encode the extracted radiomic features, resulting in an MRI-level representation. These were combined in a multimodal fusion model to produce prognostic predictions. Model performance was evaluated using the concordance index (C-index), and Kaplan-Meier curves were employed for risk stratification. To enhance model interpretability, attention-based and Integrated Gradients techniques were applied to explain how WSIs and MRI features contribute to prognosis predictions. The radiopathomics model achieved high predictive accuracy in predicting OS, with a C-index of 0.755 (95% CI: 0.673-0.838) and 0.744 (95% CI: 0.623-0.808) in the training and validation sets, respectively, outperforming single-modality models (radiomic signature: 0.636, 95% CI: 0.584-0.688; deep pathomic signature: 0.736, 95% CI: 0.684-0.810). In the external test, similar findings were observed for the predictive performance of the radiopathomics, radiomic signature, and deep pathomic signature, with C-indices of 0.735, 0.626, and 0.660, respectively. The radiopathomics model effectively stratified patients into high- and low-risk groups (P < 0.001). Additionally, attention heatmaps revealed that high-attention regions corresponded with tumor areas in both risk groups. The radiopathomics model holds promise for predicting clinical outcomes in LANPC patients, offering a potential tool for improving clinical decision-making.
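
The fusion idea described above can be sketched compactly in PyTorch: attention-weighted pooling of WSI patch embeddings, an MLP-encoded radiomics vector, and a concatenation head producing a prognostic score. All dimensions and layers are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: attention pooling of WSI patches fused with radiomic features.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, patches):               # patches: (n_patches, dim)
        weights = torch.softmax(self.score(patches), dim=0)   # importance per patch
        return (weights * patches).sum(dim=0)                 # slide-level vector

wsi_patches = torch.randn(500, 256)           # stand-in patch embeddings
radiomics = torch.randn(100)                  # stand-in radiomic feature vector

wsi_repr = AttentionPool(256)(wsi_patches)
mri_repr = nn.Sequential(nn.Linear(100, 64), nn.ReLU())(radiomics)
risk = nn.Linear(256 + 64, 1)(torch.cat([wsi_repr, mri_repr]))  # prognostic score
print(risk)
```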

Fernández-Carnero S, Martínez-Pozas O, Pecos-Martín D, Pardo-Gómez A, Cuenca-Zaldívar JN, Sánchez-Romero EA

PubMed paper · May 21, 2025
This study aims to investigate the relationship between muscle activation variables assessed via ultrasound and the comprehensive assessment of geriatric patients, as well as to analyze ultrasound images to determine their correlation with morbidity and mortality factors in frail patients. The present cohort study will be conducted in 500 older adults diagnosed with frailty. This multicenter study will be conducted across day care centers and nursing homes. It will involve the evaluation of frail older adults via instrumental and functional tests, along with specific ultrasound images to study sarcopenia and nutrition, followed by a detailed analysis of the correlation between all collected variables. This study aims to investigate the correlation between ultrasound-assessed muscle activation variables and the overall health of geriatric patients. It addresses the limitations of previous research by including a large sample size of 500 patients and measuring various muscle parameters beyond thickness. Additionally, it aims to analyze ultrasound images to identify markers associated with a higher risk of complications in frail patients. The study involves frail older adults undergoing functional tests and specific ultrasound examinations. A comprehensive analysis of functional, ultrasound, and nutritional variables will be conducted to understand their correlation with overall health and risk of complications in frail older patients. The study was approved by the Research Ethics Committee of the Hospital Universitario Puerta de Hierro, Madrid, Spain (Act nº 18/2023). In addition, the study was registered with https://clinicaltrials.gov/ (NCT06218121).