Interpretable Artificial Intelligence for Detecting Acute Heart Failure on Acute Chest CT Scans

Silas Nyboe Ørting, Kristina Miger, Anne Sophie Overgaard Olesen, Mikael Ploug Boesen, Michael Brun Andersen, Jens Petersen, Olav W. Nielsen, Marleen de Bruijne

arXiv preprint · Jul 11, 2025
Introduction: Chest CT scans are increasingly used in dyspneic patients in whom acute heart failure (AHF) is a key differential diagnosis. Interpretation remains challenging, and radiology reports are frequently delayed due to a shortage of radiologists, even though flagging such findings for emergency physicians would have therapeutic implications. Artificial intelligence (AI) can be a complementary tool to enhance diagnostic precision. We aimed to develop an explainable AI model that detects radiological signs of AHF on chest CT with an accuracy comparable to thoracic radiologists. Methods: This was a single-center, retrospective study conducted during 2016-2021 at Copenhagen University Hospital - Bispebjerg and Frederiksberg, Denmark. A boosted-trees model was trained to predict AHF from measurements of segmented cardiac and pulmonary structures on acute thoracic CT scans. Diagnostic labels for training and testing were extracted from radiology reports. Structures were segmented with TotalSegmentator. Shapley Additive Explanations (SHAP) values were used to explain the impact of each measurement on the final prediction. Results: Of the 4,672 subjects, 49% were female. The final model incorporated twelve key features of AHF and achieved an area under the ROC curve of 0.87 on the independent test set. Expert radiologist review of model misclassifications found that 24 of 64 (38%) false positives and 24 of 61 (39%) false negatives were actually correct model predictions, with the errors originating from inaccuracies in the initial radiology reports. Conclusion: We developed an explainable AI model with strong discriminatory performance, comparable to thoracic radiologists. The model's stepwise, transparent predictions may support decision-making.
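
As a rough illustration of the pipeline described above (a boosted-trees classifier trained on tabular measurements of segmented structures, explained with SHAP values), a minimal sketch might look as follows; the feature names, file path, and hyperparameters are hypothetical placeholders, not the study's actual setup.

```python
# Illustrative sketch: gradient-boosted trees on tabular CT measurements,
# with per-case SHAP explanations. Feature names and the CSV path are
# hypothetical placeholders, not the study's data.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ct_measurements.csv")  # e.g. volumes/diameters derived from TotalSegmentator masks
features = ["heart_volume_ml", "pa_diameter_mm", "pleural_effusion_ml"]  # placeholder feature names
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["ahf_label"], test_size=0.2, stratify=df["ahf_label"], random_state=0
)

model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP values show how each measurement pushed an individual prediction
# toward or away from the AHF class.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```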

Explainable artificial intelligence for pneumonia classification: Clinical insights into deformable prototypical part network in pediatric chest x-ray images.

Yazdani E, Neizehbaz A, Karamzade-Ziarati N, Kheradpisheh SR

PubMed · Jul 11, 2025
Pneumonia detection in chest X-rays (CXR) increasingly relies on AI-driven diagnostic systems. However, their "black-box" nature often lacks transparency, underscoring the need for interpretability to improve patient outcomes. This study presents the first application of the Deformable Prototypical Part Network (D-ProtoPNet), an ante-hoc interpretable deep learning (DL) model, for pneumonia classification in pediatric CXR images. Clinical insights were integrated through expert radiologist evaluation of the model's learned prototypes and activated image patches, ensuring that explanations aligned with medically meaningful features. The model was developed and tested on a retrospective dataset of 5,856 CXR images of pediatric patients aged 1-5 years. The images were originally acquired at a tertiary academic medical center as part of routine clinical care and were publicly hosted on the Kaggle platform. The dataset comprised anterior-posterior images labeled normal, viral, and bacterial. It was divided into 80% training and 20% validation splits and used in a supervised five-fold cross-validation. Performance metrics were compared with the original ProtoPNet, using ResNet50 as the base model. An experienced radiologist assessed the clinical relevance of the learned prototypes, patch activations, and model explanations. D-ProtoPNet achieved an accuracy of 86%, precision of 86%, recall of 85%, and AUC of 93%, marking a 3% improvement over the original ProtoPNet. While further optimization is required before clinical use, the radiologist praised D-ProtoPNet's intuitive explanations, highlighting its interpretability and potential to aid clinical decision-making. Prototypical part learning offers a balance between classification performance and explanation quality but requires improvements to match the accuracy of black-box models. This study underscores the importance of integrating domain expertise during model evaluation to ensure that the interpretability of XAI models is grounded in clinically valid insights.
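
The core of a prototypical-part classifier is a layer that compares local feature-map patches with learned prototype vectors and turns the resulting distances into similarity scores. The simplified PyTorch sketch below illustrates that idea for a plain ProtoPNet-style head, without the deformable offsets that distinguish D-ProtoPNet; all sizes are illustrative.

```python
# Minimal ProtoPNet-style prototype layer (no deformable offsets): each of P
# learned prototypes is compared against every spatial position of the CNN
# feature map; the maximum similarity per prototype feeds a linear classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    def __init__(self, channels=512, num_prototypes=30, num_classes=3):
        super().__init__()
        # Prototypes are 1x1 patches in feature space: shape (P, C, 1, 1).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels, 1, 1))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, feats):                      # feats: (B, C, H, W)
        # Squared L2 distance to every prototype, expanded as
        # ||x||^2 - 2 x.p + ||p||^2 using a 1x1 convolution for the cross term.
        x_sq = (feats ** 2).sum(dim=1, keepdim=True)                       # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)  # (1, P, 1, 1)
        xp = F.conv2d(feats, self.prototypes)                               # (B, P, H, W)
        dist = F.relu(x_sq - 2 * xp + p_sq)
        sim = torch.log((dist + 1) / (dist + 1e-4))                         # ProtoPNet similarity
        sim = F.max_pool2d(sim, kernel_size=sim.shape[-2:]).flatten(1)      # (B, P)
        return self.classifier(sim), sim

feats = torch.randn(2, 512, 7, 7)   # feature map from a CNN backbone (sizes illustrative)
logits, similarities = PrototypeLayer()(feats)
```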

An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis

Ming Wang, Zhaoyang Duan, Dong Xue, Fangzhou Liu, Zhongheng Zhang

arXiv preprint · Jul 10, 2025
The labor-intensive nature of medical data annotation presents a significant challenge for respiratory disease diagnosis, resulting in a scarcity of high-quality labeled datasets in resource-constrained settings. Moreover, patient privacy concerns complicate the direct sharing of local medical data across institutions, and existing centralized data-driven approaches, which rely on large amounts of available data, often compromise data privacy. This study proposes a federated few-shot learning framework with privacy-preserving mechanisms to address the issues of limited labeled data and privacy protection in diagnosing respiratory diseases. In particular, a meta-stochastic gradient descent algorithm is proposed to mitigate the overfitting problem that arises from insufficient data when employing traditional gradient descent methods for neural network training. Furthermore, to ensure data privacy against gradient leakage, differential privacy noise drawn from a standard Gaussian distribution is integrated into the gradients during the training of private models with local data, thereby preventing the reconstruction of medical images. Given the impracticality of centralizing respiratory disease data dispersed across various medical institutions, a weighted average algorithm is employed to aggregate local diagnostic models from different clients, enhancing the adaptability of the model across diverse scenarios. Experimental results show that the proposed method yields compelling performance with differential privacy in place, while effectively diagnosing respiratory diseases using data with different structures, categories, and distributions.
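
Two of the mechanisms described, Gaussian differential-privacy noise added to local gradients and weighted averaging of client models on the server, can be sketched as follows; the clipping norm, noise scale, and client weights are illustrative assumptions, not the paper's settings.

```python
# Sketch of the two mechanisms: Gaussian noise added to clipped local gradients
# (differential privacy) and a weighted average of client model parameters on
# the server. Hyperparameters are illustrative, not the paper's settings.
import torch

def dp_sgd_step(model, loss, lr=0.01, clip_norm=1.0, noise_std=0.1):
    """One local update with gradient clipping and Gaussian noise.
    (Strict DP-SGD clips per-sample gradients; this sketch clips the batch gradient.)"""
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                noisy_grad = p.grad + noise_std * torch.randn_like(p.grad)
                p -= lr * noisy_grad

def weighted_average(client_states, client_weights):
    """Server-side aggregation: weighted average of client state_dicts."""
    total = sum(client_weights)
    avg = {k: torch.zeros_like(v, dtype=torch.float32) for k, v in client_states[0].items()}
    for state, w in zip(client_states, client_weights):
        for k, v in state.items():
            avg[k] += (w / total) * v.float()
    return avg
```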

AI Revolution in Radiology, Radiation Oncology and Nuclear Medicine: Transforming and Innovating the Radiological Sciences.

Carriero S, Canella R, Cicchetti F, Angileri A, Bruno A, Biondetti P, Colciago RR, D'Antonio A, Della Pepa G, Grassi F, Granata V, Lanza C, Santicchia S, Miceli A, Piras A, Salvestrini V, Santo G, Pesapane F, Barile A, Carrafiello G, Giovagnoni A

PubMed · Jul 9, 2025
The integration of artificial intelligence (AI) into clinical practice, particularly within radiology, nuclear medicine and radiation oncology, is transforming diagnostic and therapeutic processes. AI-driven tools, especially in deep learning and machine learning, have shown remarkable potential in enhancing image recognition, analysis and decision-making. This technological advancement allows for the automation of routine tasks, improved diagnostic accuracy, and the reduction of human error, leading to more efficient workflows. Moreover, the successful implementation of AI in healthcare requires comprehensive education and training for young clinicians, with a pressing need to incorporate AI into residency programmes, ensuring that future specialists are equipped with traditional skills and a deep understanding of AI technologies and their clinical applications. This includes knowledge of software, data analysis, imaging informatics and ethical considerations surrounding AI use in medicine. By fostering interdisciplinary integration and emphasising AI education, healthcare professionals can fully harness AI's potential to improve patient outcomes and advance the field of medical imaging and therapy. This review aims to evaluate how AI influences radiology, nuclear medicine and radiation oncology, while highlighting the necessity for specialised AI training in medical education to ensure its successful clinical integration.

Evolution of CT perfusion software in stroke imaging: from deconvolution to artificial intelligence.

Gragnano E, Cocozza S, Rizzuti M, Buono G, Elefante A, Guida A, Marseglia M, Tarantino M, Manganelli F, Tortora F, Briganti F

PubMed · Jul 9, 2025
Computed tomography perfusion (CTP) is one of the main determinants in the decision-making strategy for stroke patients and is central to their triage. The aim of this review is to describe the current knowledge and future applications of AI in CTP. The review begins with a short technical description of the CTP technique and of how perfusion parameters are currently estimated and applied in clinical practice. We then provide a comprehensive literature review on the performance of CTP analysis software, aimed at understanding whether differences between commercially available packages might have a direct impact on neuroradiological patient stratification and, therefore, on clinical outcomes. An overview of the past, present, and future of software used for CTP estimation, with an emphasis on AI-based approaches, is provided. Finally, future challenges regarding technical aspects and ethical considerations are discussed. At present, the use of AI in CTP estimation is largely limited to certain technical steps of the processing pipeline, especially the correction of motion artifacts, while deconvolution methods are still widely used to generate CTP-derived variables. Major drawbacks in AI implementation remain, particularly the "black-box" nature of some models, technical workflow implementation, and economic costs. In the future, the integration of AI with all the information available in clinical practice should enable the development of patient-specific CTP maps, which would overcome the current limitations of threshold-based decision-making and lead physicians to better patient selection and earlier, more efficient treatments.
KEY POINTS: Question: AI is a widely investigated field in neuroradiology, yet no comprehensive review has been available on its role in CT perfusion (CTP) in stroke patients. Findings: AI in CTP is mainly used for motion correction; future integration with clinical data could enable personalized stroke treatment, despite ethical and economic challenges. Clinical relevance: To date, AI in CTP mainly finds applications in image motion correction; although some ethical, technical, and vendor-standardization issues remain, integrating AI with clinical data in stroke patients promises a possible improvement in patient outcomes.
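
For context, the classical deconvolution approach that the review contrasts with AI-based processing can be sketched as a truncated-SVD inversion of the arterial-input-function convolution; the truncation threshold and scaling below are illustrative, not vendor settings.

```python
# Sketch of classical truncated-SVD deconvolution for CT perfusion: the tissue
# curve is modeled as CBF * (AIF convolved with the residue function R), and the
# flow-scaled residue is recovered by inverting the AIF convolution matrix with
# small singular values discarded. Threshold and units are illustrative.
import numpy as np

def svd_deconvolution(tissue_curve, aif, dt, sv_threshold=0.2):
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    A *= dt
    U, S, Vt = np.linalg.svd(A)
    # Discard small singular values to stabilize the inversion.
    S_inv = np.where(S > sv_threshold * S.max(), 1.0 / S, 0.0)
    residue_scaled = Vt.T @ np.diag(S_inv) @ U.T @ tissue_curve
    cbf = residue_scaled.max()                               # peak of flow-scaled residue
    cbv = np.trapz(tissue_curve, dx=dt) / np.trapz(aif, dx=dt)
    mtt = cbv / cbf if cbf > 0 else np.nan                   # central volume theorem
    return cbf, cbv, mtt
```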

Securing Healthcare Data Integrity: Deepfake Detection Using Autonomous AI Approaches.

Hsu CC, Tsai MY, Yu CM

PubMed · Jul 9, 2025
The rapid evolution of deepfake technology poses critical challenges to healthcare systems, particularly in safeguarding the integrity of medical imaging, electronic health records (EHR), and telemedicine platforms. As autonomous AI becomes increasingly integrated into smart healthcare, the potential misuse of deepfakes to manipulate sensitive healthcare data or impersonate medical professionals highlights the urgent need for robust and adaptive detection mechanisms. In this work, we propose DProm, a dynamic deepfake detection framework leveraging visual prompt tuning (VPT) with a pre-trained Swin Transformer. Unlike traditional static detection models, which struggle to adapt to rapidly evolving deepfake techniques, DProm fine-tunes a small set of visual prompts to efficiently adapt to new data distributions with minimal computational and storage requirements. Comprehensive experiments demonstrate that DProm achieves state-of-the-art performance in both static cross-dataset evaluations and dynamic scenarios, ensuring robust detection across diverse data distributions. By addressing the challenges of scalability, adaptability, and resource efficiency, DProm offers a transformative solution for enhancing the security and trustworthiness of autonomous AI systems in healthcare, paving the way for safer and more reliable smart healthcare applications.
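
Visual prompt tuning keeps the pre-trained backbone frozen and optimizes only a small set of learnable prompt tokens plus a classification head. The sketch below shows the idea for a generic ViT-style token sequence, since wiring prompts into Swin's windowed attention involves extra bookkeeping; all names and sizes are illustrative.

```python
# Conceptual sketch of visual prompt tuning (VPT): the pre-trained transformer
# is frozen and only a handful of learnable prompt tokens and a linear head are
# trained. Shown for a ViT-style token sequence; sizes are illustrative.
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    def __init__(self, backbone, embed_dim=768, num_prompts=10, num_classes=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():      # freeze all pre-trained weights
            p.requires_grad = False
        # The only trainable parameters: prompt tokens and the classifier head.
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_tokens):              # (B, N, D) patch embeddings
        b = patch_tokens.size(0)
        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        feats = self.backbone(tokens)             # frozen transformer blocks
        return self.head(feats.mean(dim=1))       # pool tokens, classify real vs. fake

# Usage with a stand-in frozen encoder that consumes (B, N, D) token sequences.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True), num_layers=2
)
model = PromptTunedClassifier(encoder)
logits = model(torch.randn(4, 196, 768))
```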

Foundation models for radiology: fundamentals, applications, opportunities, challenges, risks, and prospects.

Akinci D'Antonoli T, Bluethgen C, Cuocolo R, Klontzas ME, Ponsiglione A, Kocak B

PubMed · Jul 8, 2025
Foundation models (FMs) represent a significant evolution in artificial intelligence (AI), impacting diverse fields. Within radiology, this evolution offers greater adaptability, multimodal integration, and improved generalizability compared with traditional narrow AI. Utilizing large-scale pre-training and efficient fine-tuning, FMs can support diverse applications, including image interpretation, report generation, integrative diagnostics combining imaging with clinical/laboratory data, and synthetic data creation, holding significant promise for advancements in precision medicine. However, clinical translation of FMs faces several substantial challenges. Key concerns include the inherent opacity of model decision-making processes, environmental and social sustainability issues, risks to data privacy, complex ethical considerations, such as bias and fairness, and navigating the uncertainty of regulatory frameworks. Moreover, rigorous validation is essential to address inherent stochasticity and the risk of hallucination. This international collaborative effort provides a comprehensive overview of the fundamentals, applications, opportunities, challenges, and prospects of FMs, aiming to guide their responsible and effective adoption in radiology and healthcare.
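
As one hedged example of what "efficient fine-tuning" can mean in practice, a low-rank adaptation layer trains only a small additive update while the foundation-model weights stay frozen; the review does not prescribe this specific technique, and the rank and scaling below are arbitrary.

```python
# Illustration of parameter-efficient fine-tuning via low-rank adaptation:
# the pre-trained weight stays frozen and only the low-rank update B @ A is
# trained. Rank and scaling are illustrative choices.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # frozen foundation-model weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen projection plus trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(1024, 1024))
out = layer(torch.randn(2, 1024))                 # only A and B receive gradients
```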

The future of multimodal artificial intelligence models for integrating imaging and clinical metadata: a narrative review.

Simon BD, Ozyoruk KB, Gelikman DG, Harmon SA, Türkbey B

PubMed · Jul 8, 2025
With the ongoing revolution of artificial intelligence (AI) in medicine, the impact of AI in radiology is more pronounced than ever. An increasing number of technical and clinical AI-focused studies are published each day. As these tools inevitably affect patient care and physician practices, it is crucial that radiologists become more familiar with the leading strategies and underlying principles of AI. Multimodal AI models can combine both imaging and clinical metadata and are quickly becoming a popular approach that is being integrated into the medical ecosystem. This narrative review covers major concepts of multimodal AI through the lens of recent literature. We discuss emerging frameworks, including graph neural networks, which allow for explicit learning from non-Euclidean relationships, and transformers, which allow for parallel computation that scales, highlighting existing literature and advocating for a focus on emerging architectures. We also identify key pitfalls in current studies, including issues with taxonomy, data scarcity, and bias. By informing radiologists and biomedical AI experts about existing practices and challenges, we hope to guide the next wave of imaging-based multimodal AI research.
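
The multimodal idea discussed here can be sketched as a late-fusion model in which an image encoder and a clinical-metadata encoder produce embeddings that are concatenated before classification; both encoders and all sizes below are stand-ins, not an architecture from the reviewed literature.

```python
# Toy late-fusion multimodal classifier: image embedding and clinical-metadata
# embedding are concatenated and classified jointly. Sizes are illustrative.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, img_dim=512, meta_features=16, num_classes=2):
        super().__init__()
        # Stand-in image encoder; in practice a pre-trained CNN/ViT backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, img_dim), nn.ReLU(),
        )
        # Clinical metadata (age, labs, etc.) goes through a small MLP.
        self.meta_encoder = nn.Sequential(nn.Linear(meta_features, 64), nn.ReLU())
        self.head = nn.Linear(img_dim + 64, num_classes)

    def forward(self, image, metadata):
        fused = torch.cat([self.image_encoder(image), self.meta_encoder(metadata)], dim=1)
        return self.head(fused)

model = MultimodalClassifier()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 16))
```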

Post-hoc eXplainable AI methods for analyzing medical images of gliomas (- A review for clinical applications).

Ayaz H, Sümer-Arpak E, Ozturk-Isik E, Booth TC, Tormey D, McLoughlin I, Unnikrishnan S

PubMed · Jul 8, 2025
Deep learning (DL) has shown promise in glioma imaging tasks using magnetic resonance imaging (MRI) and histopathology images, yet the complexity of these models demands greater transparency in artificial intelligence (AI) systems. This is particularly evident when users must understand a model's output for a clinical application. In this systematic review, 65 post-hoc eXplainable AI (XAI), or interpretable AI, studies are reviewed that provide an understanding of why a system generated a given output for tasks related to glioma imaging. A framework of post-hoc XAI methods, such as Gradient-based XAI (G-XAI) and Perturbation-based XAI (P-XAI), is introduced to evaluate deep models and explain their application in gliomas. The papers on XAI techniques in gliomas are surveyed and categorized by their specific aims, such as grading, genetic biomarker detection, localization, intra-tumoral heterogeneity assessment, and survival analysis, and by their XAI approach. This review highlights the growing integration of XAI in glioma imaging, demonstrating its role in bridging AI decision-making and medical diagnostics. The co-occurrence analysis emphasizes the role of these methods in enhancing model transparency and trust and in guiding future research toward more reliable clinical applications. Finally, the current challenges associated with DL and XAI approaches and their clinical integration are discussed, with an outlook on future opportunities from clinical users' perspectives and upcoming trends in XAI.
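
A minimal example of the gradient-based (G-XAI) family is a vanilla saliency map, the gradient of the predicted class score with respect to the input image; the model and input below are dummies chosen only to keep the sketch self-contained.

```python
# Minimal gradient-based attribution (vanilla saliency): the gradient of the
# predicted-class score with respect to the input highlights the pixels that
# most influence the prediction. Model and input are dummy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for a trained glioma classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

x = torch.randn(1, 1, 128, 128, requires_grad=True)   # dummy MRI slice
logits = model(x)
score = logits[0, logits.argmax()]                      # predicted-class score
score.backward()
saliency = x.grad.abs().squeeze()                       # (128, 128) attribution map
```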

Emerging Frameworks for Objective Task-based Evaluation of Quantitative Medical Imaging Methods

Yan Liu, Huitian Xia, Nancy A. Obuchowski, Richard Laforest, Arman Rahmim, Barry A. Siegel, Abhinav K. Jha

arXiv preprint · Jul 7, 2025
Quantitative imaging (QI) is demonstrating strong promise across multiple clinical applications. For clinical translation of QI methods, objective evaluation on clinically relevant tasks is essential. To address this need, multiple evaluation strategies are being developed. In this paper, based on previous literature, we outline four emerging frameworks to perform evaluation studies of QI methods. We first discuss the use of virtual imaging trials (VITs) to evaluate QI methods. Next, we outline a no-gold-standard evaluation framework to clinically evaluate QI methods without ground truth. Third, a framework to evaluate QI methods for joint detection and quantification tasks is outlined. Finally, we outline a framework to evaluate QI methods that output multi-dimensional parameters, such as radiomic features. We review these frameworks, discussing their utilities and limitations. Further, we examine future research areas in evaluation of QI methods. Given the recent advancements in PET, including long axial field-of-view scanners and the development of artificial-intelligence algorithms, we present these frameworks in the context of PET.