Integrating multi-omics data with artificial intelligence to decipher the role of tumor-infiltrating lymphocytes in tumor immunotherapy.

Xie T, Xue H, Huang A, Yan H, Yuan J

PubMed | May 23, 2025
Tumor-infiltrating lymphocytes (TILs) recognize tumor antigens and influence tumor prognosis; their assessment can predict the efficacy of neoadjuvant therapies, inform the development of new cell-based immunotherapies, support the study of the tumor immune microenvironment, and help identify novel biomarkers. Traditional methods for evaluating TILs rely primarily on histopathological examination using standard hematoxylin and eosin staining or immunohistochemical staining, with manual cell counting under a microscope. These methods are time-consuming and subject to significant observer variability and error. Recently, artificial intelligence (AI) has advanced rapidly in the field of medical imaging, particularly through deep learning algorithms based on convolutional neural networks, and has shown promise as a powerful tool for the quantitative evaluation of tumor biomarkers. The advent of AI offers new opportunities for the automated and standardized assessment of TILs. This review provides an overview of advancements in the application of AI for assessing TILs from multiple perspectives. It focuses on AI-driven approaches for identifying TILs in tumor tissue images, automating TIL quantification, recognizing TIL subpopulations, and analyzing the spatial distribution patterns of TILs. The review aims to elucidate the prognostic value of TILs in various cancers, as well as their predictive capacity for responses to immunotherapy and neoadjuvant therapy. Furthermore, it explores the integration of AI with other emerging technologies, such as single-cell sequencing, multiplex immunofluorescence, spatial transcriptomics, and multimodal approaches, to enhance the comprehensive study of TILs and further elucidate their clinical utility in tumor treatment and prognosis.

Automated Detection of Severe Cerebral Edema Using Explainable Deep Transfer Learning after Hypoxic Ischemic Brain Injury.

Wang Z, Kulpanowski AM, Copen WA, Rosenthal ES, Dodelson JA, McCrory DE, Edlow BL, Kimberly WT, Amorim E, Westover M, Ning M, Zabihi M, Schaefer PW, Malhotra R, Giacino JT, Greer DM, Wu O

PubMed | May 23, 2025
Substantial gaps exist in the neuroprognostication of cardiac arrest patients who remain comatose after the restoration of spontaneous circulation. Most studies focus on predicting survival, a measure confounded by the withdrawal of life-sustaining treatment decisions. Severe cerebral edema (SCE) may serve as an objective proximal imaging-based surrogate of neurologic injury. We retrospectively analyzed data from 288 patients to automate SCE detection with machine learning (ML) and to test the hypothesis that the quantitative values produced by these algorithms (ML_SCE) can improve predictions of neurologic outcomes. Ground-truth SCE (GT_SCE) classification was based on radiology reports. The model attained a cross-validated testing accuracy of 87% [95% CI: 84%, 89%] for detecting SCE. Attention maps explaining SCE classification focused on cisternal regions (p<0.05). Multivariable analyses showed that older age (p<0.001), non-shockable initial cardiac rhythm (p=0.004), and greater ML_SCE values (p<0.001) were significant predictors of poor neurologic outcomes, with GT_SCE (p=0.064) as a non-significant covariate. Our results support the feasibility of automated SCE detection. Future prospective studies with standardized neurologic assessments are needed to substantiate the utility of quantitative ML_SCE values to improve neuroprognostication.
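The abstract does not detail the network used, but a minimal deep-transfer-learning setup for the binary severe-edema task could look like the sketch below. The backbone (ResNet-18), frozen-feature strategy, and single-logit head are illustrative assumptions, not the authors' pipeline; the continuous sigmoid output plays the role of a quantitative ML_SCE score.

```python
# Minimal transfer-learning sketch for binary severe-cerebral-edema (SCE)
# classification on CT slices. Backbone choice, frozen features, and the
# single-logit head are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 1)    # new head: SCE vs. no SCE

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, H, W) CT slices replicated to 3 channels;
    labels: (B,) floats, 1.0 = severe cerebral edema (ground truth)."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# A continuous score analogous to ML_SCE in the study:
# ml_sce = torch.sigmoid(model(images)).squeeze(1)
```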

Validation and comparison of three different methods for automated identification of distal femoral landmarks in 3D.

Berger L, Brößner P, Ehreiser S, Tokunaga K, Okamoto M, Radermacher K

PubMed | May 23, 2025
Identification of bony landmarks in medical images is of high importance for 3D planning in orthopaedic surgery. Automated landmark identification has the potential to optimize clinical routines and allows for the scientific analysis of large databases. To the authors' knowledge, no direct comparison of different methods for automated landmark detection on the same dataset has been published to date. We compared three methods for automated femoral landmark identification: an artificial neural network, a statistical shape model (SSM) and a geometric approach. All methods were compared against manual measurements of two raters on the task of identifying six femoral landmarks on CT data or derived surface models of 202 femora. The accuracy of the methods was in the range of the manual measurements and comparable to those reported in previous studies. The geometric approach showed a significantly higher average deviation from the manually selected reference landmarks, while there was no statistically significant difference for the neural network and the SSM. All fully automated methods show potential for use, depending on the use case. Characteristics of the different methods, such as the required input data (raw CT or segmented bone surface models), the amount of training data needed, and the method's robustness, can guide method selection for the individual application.
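The geometric approach is not specified in the abstract, but rule-based landmarking typically reduces to simple operations on the surface model, such as taking the extreme vertex along an anatomical axis. The sketch below illustrates this idea; the axis definition and the landmark rule are assumptions, not the method evaluated in the paper.

```python
# Toy geometric landmark rule: the most distal vertex of a femoral surface
# model along a given shaft axis. Axis and rule are illustrative assumptions.
import numpy as np

def most_distal_point(vertices: np.ndarray, shaft_axis: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) surface-model coordinates;
    shaft_axis: (3,) vector pointing from proximal to distal."""
    axis = shaft_axis / np.linalg.norm(shaft_axis)
    projections = vertices @ axis        # signed distance along the axis
    return vertices[np.argmax(projections)]

# Usage on synthetic data:
rng = np.random.default_rng(0)
verts = rng.normal(size=(1000, 3))
landmark = most_distal_point(verts, np.array([0.0, 0.0, 1.0]))
```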

Meta-analysis of AI-based pulmonary embolism detection: How reliable are deep learning models?

Lanza E, Ammirabile A, Francone M

PubMed | May 23, 2025
Deep learning (DL)-based methods show promise in detecting pulmonary embolism (PE) on CT pulmonary angiography (CTPA), potentially improving diagnostic accuracy and workflow efficiency. This meta-analysis aimed to (1) determine pooled performance estimates of DL algorithms for PE detection; and (2) compare the diagnostic efficacy of convolutional neural network (CNN)- versus U-Net-based architectures. Following PRISMA guidelines, we searched PubMed and EMBASE through April 15, 2025 for English-language studies (2010-2025) reporting DL models for PE detection with extractable 2 × 2 data or performance metrics. True/false positives and negatives were reconstructed when necessary under an assumed 50% PE prevalence (with 0.5 continuity correction). We approximated AUROC as the mean of sensitivity and specificity if not directly reported. Sensitivity, specificity, accuracy, PPV and NPV were pooled using a DerSimonian-Laird random-effects model with Freeman-Tukey transformation; AUROC values were combined via a fixed-effect inverse-variance approach. Heterogeneity was assessed by Cochran's Q and I². Subgroup analyses contrasted CNN versus U-Net models. Twenty-four studies (n = 22,984 patients) met inclusion criteria. Pooled estimates were: AUROC 0.895 (95% CI: 0.874-0.917), sensitivity 0.894 (0.856-0.923), specificity 0.871 (0.831-0.903), accuracy 0.857 (0.833-0.882), PPV 0.832 (0.794-0.869) and NPV 0.902 (0.874-0.929). Between-study heterogeneity was high (I² ≈ 97% for sensitivity/specificity). U-Net models exhibited higher sensitivity (0.899 vs 0.893) and CNN models higher specificity (0.926 vs 0.900); subgroup Q-tests confirmed significant differences for both sensitivity (p = 0.0002) and specificity (p < 0.001). DL algorithms demonstrate high diagnostic accuracy for PE detection on CTPA, with complementary strengths: U-Net architectures excel in true-positive identification, whereas CNNs yield fewer false positives. However, marked heterogeneity underscores the need for standardized, prospective validation before routine clinical implementation.
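For readers unfamiliar with the pooling machinery, the sketch below shows the standard Freeman-Tukey double-arcsine transformation combined with DerSimonian-Laird random-effects weighting for a single proportion such as sensitivity. The formulas are the textbook versions; the simple sin² back-transformation is a common approximation and not necessarily the exact variant the authors used.

```python
# Freeman-Tukey double-arcsine + DerSimonian-Laird random-effects pooling of
# per-study proportions (e.g., sensitivities). Textbook formulas; the sin^2
# back-transform is a common simplification, assumed here for illustration.
import numpy as np

def pool_proportion(events: np.ndarray, n: np.ndarray) -> float:
    """events: per-study event counts (e.g., true positives); n: study sizes."""
    # Double-arcsine transform and its approximate variance
    t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
    v = 1.0 / (n + 0.5)

    # DerSimonian-Laird between-study variance tau^2 from Cochran's Q
    w = 1.0 / v
    t_fixed = np.sum(w * t) / np.sum(w)
    Q = np.sum(w * (t - t_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(t) - 1)) / c)

    # Random-effects pooling, then simplified back-transformation
    w_star = 1.0 / (v + tau2)
    t_pooled = np.sum(w_star * t) / np.sum(w_star)
    return float(np.sin(t_pooled / 2) ** 2)

# e.g., pooled sensitivity over three hypothetical studies:
# pool_proportion(np.array([90, 45, 170]), np.array([100, 50, 200]))
```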

Optimizing the power of AI for fracture detection: from blind spots to breakthroughs.

Behzad S, Eibschutz L, Lu MY, Gholamrezanezhad A

PubMed | May 23, 2025
Artificial Intelligence (AI) is increasingly being integrated into the field of musculoskeletal (MSK) radiology, from research methods to routine clinical practice. Within the field of fracture detection, AI is allowing for precision and speed previously unimaginable. Yet, AI's decision-making processes are sometimes wrought with deficiencies, undermining trust, hindering accountability, and compromising diagnostic precision. To make AI a trusted ally for radiologists, we recommend incorporating clinical history, rationalizing AI decisions by explainable AI (XAI) techniques, increasing the variety and scale of training data to approach the complexity of a clinical situation, and active interactions between clinicians and developers. By bridging these gaps, the true potential of AI can be unlocked, enhancing patient outcomes and fundamentally transforming radiology through a harmonious integration of human expertise and intelligent technology. In this article, we aim to examine the factors contributing to AI inaccuracies and offer recommendations to address these challenges-benefiting both radiologists and developers striving to improve future algorithms.

SD-MAD: Sign-Driven Few-shot Multi-Anomaly Detection in Medical Images

Kaiyu Guo, Tan Pan, Chen Jiang, Zijian Wang, Brian C. Lovell, Limei Han, Yuan Cheng, Mahsa Baktashmotlagh

arXiv preprint | May 22, 2025
Medical anomaly detection (AD) is crucial for early clinical intervention, yet it faces challenges due to limited access to high-quality medical imaging data, caused by privacy concerns and data silos. Few-shot learning has emerged as a promising approach to alleviate these limitations by leveraging the large-scale prior knowledge embedded in vision-language models (VLMs). Recent advancements in few-shot medical AD have treated normal and abnormal cases as a one-class classification problem, often overlooking the distinction among multiple anomaly categories. Thus, in this paper, we propose a framework tailored for few-shot medical anomaly detection in scenarios where multiple anomaly categories must be identified. To capture the detailed radiological signs of medical anomaly categories, our framework incorporates diverse textual descriptions for each category generated by a large language model, under the assumption that different anomalies in medical images may share common radiological signs within each category. Specifically, we introduce SD-MAD, a two-stage Sign-Driven few-shot Multi-Anomaly Detection framework: (i) radiological signs are aligned with anomaly categories by amplifying inter-anomaly discrepancy; (ii) aligned signs are further selected to mitigate the under-fitting and uncertain-sample issues caused by limited medical data, using an automatic sign selection strategy at inference. Moreover, we propose three protocols to comprehensively quantify the performance of multi-anomaly detection. Extensive experiments illustrate the effectiveness of our method.
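As a conceptual illustration of the sign-driven idea, the snippet below scores an image against several text-described radiological signs per anomaly category and picks the best-matching category. The random vectors stand in for real VLM embeddings, and the mean-cosine aggregation is an assumption, not the exact SD-MAD formulation.

```python
# Conceptual sign-driven scoring: assign an image to the anomaly category
# whose sign descriptions it matches best. Random vectors stand in for real
# VLM embeddings; mean-cosine aggregation is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Hypothetical LLM-generated sign embeddings: {category: (num_signs, dim)}
sign_embeddings = {
    "nodule":   rng.normal(size=(4, dim)),
    "effusion": rng.normal(size=(4, dim)),
}
image_embedding = rng.normal(size=dim)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_categories(img, signs):
    """Average image-sign cosine similarity per anomaly category."""
    return {cat: float(np.mean([cosine(img, s) for s in embs]))
            for cat, embs in signs.items()}

scores = score_categories(image_embedding, sign_embeddings)
predicted_category = max(scores, key=scores.get)
```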

Customized GPT-4V(ision) for radiographic diagnosis: can large language model detect supernumerary teeth?

Aşar EM, İpek İ, Bilge K

PubMed | May 21, 2025
With the growing capabilities of language models like ChatGPT to process text and images, this study evaluated their accuracy in detecting supernumerary teeth on periapical radiographs. A customized GPT-4V model (CGPT-4V) was also developed to assess whether domain-specific training could improve diagnostic performance compared to standard GPT-4V and GPT-4o models. One hundred eighty periapical radiographs (90 with and 90 without supernumerary teeth) were evaluated using GPT-4V, GPT-4o, and a fine-tuned CGPT-4V model. Each image was assessed separately with the standardized prompt "Are there any supernumerary teeth in the radiograph above?" to avoid contextual bias. Three dental experts scored the responses using a three-point Likert scale for positive cases and a binary scale for negatives. Chi-square tests and ROC analysis were used to compare model performances (p < 0.05). Among the three models, CGPT-4V exhibited the highest accuracy, detecting supernumerary teeth correctly in 91% of cases, compared to 77% for GPT-4o and 63% for GPT-4V. The CGPT-4V model also demonstrated a significantly lower false positive rate (16%) than GPT-4V (42%). A statistically significant difference was found between CGPT-4V and GPT-4o (p < 0.001), while no significant difference was observed between GPT-4V and CGPT-4V or between GPT-4V and GPT-4o. Additionally, CGPT-4V successfully identified multiple supernumerary teeth in radiographs where present. These findings highlight the diagnostic potential of customized GPT models in dental radiology. Future research should focus on multicenter validation, seamless clinical integration, and cost-effectiveness to support real-world implementation.
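For context, submitting each radiograph with the study's standardized prompt via the OpenAI Python client might look like the sketch below. The model name and request shape are assumptions for illustration; the authors' customized CGPT-4V configuration is not publicly specified, and this is a research protocol, not a diagnostic tool.

```python
# Sketch of the per-image evaluation protocol: each radiograph is sent in a
# fresh request with the standardized prompt to avoid contextual bias.
# Model name ("gpt-4o") is a placeholder for the (customized) vision model.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_about_radiograph(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Are there any supernumerary teeth in the radiograph above?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```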

Lung Nodule-SSM: Self-Supervised Lung Nodule Detection and Classification in Thoracic CT Images

Muniba Noreen, Furqan Shaukat

arXiv preprint | May 21, 2025
Lung cancer remains among the deadliest types of cancer in recent decades, and early lung nodule detection is crucial for improving patient outcomes. The limited availability of annotated medical imaging data remains a bottleneck in developing accurate computer-aided diagnosis (CAD) systems. Self-supervised learning can help leverage large amounts of unlabeled data to develop more robust CAD systems. With the recent advent of transformer-based architectures and their ability to generalize to unseen tasks, there has been an effort within the healthcare community to adapt them to various medical downstream tasks. Thus, we propose a novel "LungNodule-SSM" method, which utilizes self-supervised learning with DINOv2 as a backbone to enhance lung nodule detection and classification without annotated data. Our methodology has two stages: first, the DINOv2 model is pre-trained on unlabeled CT scans to learn robust feature representations; second, these features are fine-tuned using transformer-based architectures for lesion-level detection and accurate lung nodule diagnosis. The proposed method has been evaluated on the challenging LUNA16 dataset, consisting of 888 CT scans, and compared with SOTA methods. Our experimental results show the superiority of our proposed method, with an accuracy of 98.37%, demonstrating its effectiveness in lung nodule detection. The source code, datasets, and pre-processed data can be accessed at: https://github.com/EMeRALDsNRPU/Lung-Nodule-SSM-Self-Supervised-Lung-Nodule-Detection-and-Classification/tree/main
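The two-stage recipe is easy to sketch: use the self-supervised DINOv2 backbone as a frozen feature extractor and fine-tune a lightweight head on top. The head design and label set below are assumptions; the paper's full detection pipeline is in the linked repository.

```python
# Two-stage sketch: frozen DINOv2 features + a small classification head.
# Head design and the benign/malignant label set are assumptions.
import torch
import torch.nn as nn

# Stage 1 stand-in: publicly released DINOv2 ViT-B/14 weights via torch.hub
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False          # keep self-supervised features frozen

# Stage 2: lightweight head on the 768-dim global features
head = nn.Linear(768, 2)             # e.g., benign vs. malignant

def classify(ct_patches: torch.Tensor) -> torch.Tensor:
    """ct_patches: (B, 3, 224, 224) CT slices resized, replicated to 3 channels."""
    with torch.no_grad():
        feats = backbone(ct_patches)  # (B, 768) global features
    return head(feats)                # (B, 2) class logits
```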

Artificial Intelligence and Musculoskeletal Surgical Applications.

Oettl FC, Zsidai B, Oeding JF, Samuelsson K

PubMed | May 20, 2025
Artificial intelligence (AI) has emerged as a transformative force in orthopedic surgery. Potentially encompassing pre-, intra-, and postoperative processes, it can process complex medical imaging, provide real-time surgical guidance, and analyze large datasets for outcome prediction and optimization. AI has shown improvements in surgical precision, efficiency, and patient outcomes across orthopedic subspecialties, and large language models and agentic AI systems are expanding AI utility beyond surgical applications into areas such as clinical documentation, patient education, and autonomous decision support. The successful implementation of AI in orthopedic surgery requires careful attention to validation, regulatory compliance, and healthcare system integration. As these technologies continue to advance, maintaining the balance between innovation and patient safety remains crucial, with the ultimate goal of achieving more personalized, efficient, and equitable healthcare delivery while preserving the essential role of human clinical judgment. This review examines the current landscape and future trajectory of AI applications in orthopedic surgery, highlighting both technological advances and their clinical impact. Studies have suggested that AI-assisted procedures achieve higher accuracy and better functional outcomes compared to conventional methods, while reducing operative times and complications. However, these technologies are designed to augment rather than replace clinical expertise, serving as sophisticated tools to enhance surgeons' capabilities and improve patient care.

NOVA: A Benchmark for Anomaly Localization and Clinical Reasoning in Brain MRI

Cosmin I. Bercea, Jun Li, Philipp Raffler, Evamaria O. Riedel, Lena Schmitzer, Angela Kurz, Felix Bitzer, Paula Roßmüller, Julian Canisius, Mirjam L. Beyrle, Che Liu, Wenjia Bai, Bernhard Kainz, Julia A. Schnabel, Benedikt Wiestler

arXiv preprint | May 20, 2025
In many real-world applications, deployed models encounter inputs that differ from the data seen during training. Out-of-distribution detection identifies whether an input stems from an unseen distribution, while open-world recognition flags such inputs to ensure the system remains robust as ever-emerging, previously unknown categories appear and must be addressed without retraining. Foundation and vision-language models are pre-trained on large and diverse datasets with the expectation of broad generalization across domains, including medical imaging. However, benchmarking these models on test sets with only a few common outlier types silently collapses the evaluation back to a closed-set problem, masking failures on rare or truly novel conditions encountered in clinical use. We therefore present NOVA, a challenging, real-life evaluation-only benchmark of ~900 brain MRI scans that span 281 rare pathologies and heterogeneous acquisition protocols. Each case includes rich clinical narratives and double-blinded expert bounding-box annotations. Together, these enable joint assessment of anomaly localisation, visual captioning, and diagnostic reasoning. Because NOVA is never used for training, it serves as an extreme stress-test of out-of-distribution generalisation: models must bridge a distribution gap both in sample appearance and in semantic space. Baseline results with leading vision-language models (GPT-4o, Gemini 2.0 Flash, and Qwen2.5-VL-72B) reveal substantial performance drops across all tasks, establishing NOVA as a rigorous testbed for advancing models that can detect, localize, and reason about truly unknown anomalies.
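The abstract does not spell out NOVA's scoring rules, but localization against expert bounding boxes is conventionally measured with intersection-over-union (IoU); a minimal version for axis-aligned 2D boxes is sketched below, with the 0.5 hit threshold given only as a common convention.

```python
# Minimal IoU for axis-aligned 2D boxes, a conventional localization metric.
def iou(box_a, box_b):
    """Boxes as (x_min, y_min, x_max, y_max)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction is often counted as a hit when iou(pred, gt) >= 0.5.
```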