
Artificial Intelligence for Early Detection and Prognosis Prediction of Diabetic Retinopathy

Budi Susilo, Y. K., Yuliana, D., Mahadi, M., Abdul Rahman, S., Ariffin, A. E.

medRxiv preprint, Jun 20, 2025
This review explores the transformative role of artificial intelligence (AI) in the early detection and prognosis prediction of diabetic retinopathy (DR), a leading cause of vision loss in diabetic patients. AI, particularly deep learning and convolutional neural networks (CNNs), has demonstrated remarkable accuracy in analyzing retinal images, identifying early-stage DR with high sensitivity and specificity. These advancements address critical challenges such as intergrader variability in manual screening and the limited availability of specialists, especially in underserved regions. The integration of AI with telemedicine has further enhanced accessibility, enabling remote screening through portable devices and smartphone-based imaging. Economically, AI-based systems reduce healthcare costs by optimizing resource allocation and minimizing unnecessary referrals. Key findings highlight the dominance of Medicine (819 documents) and Computer Science (613 documents) in research output, reflecting the interdisciplinary nature of this field. Geographically, China, the United States, and India lead in contributions, underscoring global efforts to combat DR. Despite these successes, challenges such as algorithmic bias, data privacy, and the need for explainable AI (XAI) remain. Future research should focus on multi-center validation, diverse AI methodologies, and clinician-friendly tools to ensure equitable adoption. By addressing these gaps, AI can revolutionize DR management, reducing the global burden of diabetes-related blindness through early intervention and scalable solutions.

Emergency radiology: roadmap for radiology departments.

Aydin S, Ece B, Cakmak V, Kocak B, Onur MR

PubMed, Jun 20, 2025
Emergency radiology has evolved into a significant subspecialty over the past 2 decades, facing unique challenges including escalating imaging volumes, increasing study complexity, and heightened expectations from clinicians and patients. This review provides a comprehensive overview of the key requirements for an effective emergency radiology unit. Emergency radiologists play a crucial role in real-time decision-making by providing continuous 24/7 support, requiring expertise across various organ systems and close collaboration with emergency physicians and specialists. Beyond image interpretation, emergency radiologists are responsible for organizing staff schedules, planning equipment, determining imaging protocols, and establishing standardized reporting systems. Operational considerations in emergency radiology departments include efficient scheduling models such as circadian-based scheduling, strategic equipment organization with primary imaging modalities positioned near emergency departments, and effective imaging management through structured ordering systems and standardized protocols. Preparedness for mass casualty incidents requires a well-organized workflow process map detailing steps from patient transfer to image acquisition and interpretation, with clear task allocation and imaging pathways. Collaboration between emergency radiologists and physicians is essential, with accurate communication facilitated through various channels and structured reporting templates. Artificial intelligence has emerged as a transformative tool in emergency radiology, offering potential benefits in both interpretative domains (detecting intracranial hemorrhage, pulmonary embolism, acute ischemic stroke) and non-interpretative applications (triage systems, protocol assistance, quality control). Despite implementation challenges including clinician skepticism, financial considerations, and ethical issues, AI can enhance diagnostic accuracy and workflow optimization. Teleradiology provides solutions for staff shortages, particularly during off-hours, with hybrid models allowing radiologists to work both on-site and remotely. This review aims to guide stakeholders in establishing and maintaining efficient emergency radiology services to improve patient outcomes.

Artificial Intelligence Language Models to Translate Professional Radiology Mammography Reports Into Plain Language - Impact on Interpretability and Perception by Patients.

Pisarcik D, Kissling M, Heimer J, Farkas M, Leo C, Kubik-Huch RA, Euler A

PubMed, Jun 19, 2025
This study aimed to evaluate, via a survey, the interpretability and patient perception of AI-translated mammography and sonography reports, focusing on comprehensibility, follow-up recommendations, and conveyed empathy. In this observational study, three fictional mammography and sonography reports with BI-RADS categories 3, 4, and 5 were created. These reports were repeatedly translated into plain language by three different large language models (LLMs: ChatGPT-4, ChatGPT-4o, Google Gemini). In a first step, two experts in breast imaging selected the best of these repeated translations for each BI-RADS category and LLM, considering factual correctness, completeness, and quality. In a second step, female participants compared and rated the translated reports regarding comprehensibility, follow-up recommendations, conveyed empathy, and the additional value of each report, using a survey with Likert scales. Statistical analysis included cumulative link mixed models and the Plackett-Luce model for ranking preferences. Forty women participated in the survey. GPT-4 and GPT-4o were rated significantly higher than Gemini across all categories (P<.001). Participants over 50 years of age rated the reports significantly higher than participants aged 18-29 years (P<.05). Higher education predicted lower ratings (P=.02), no prior mammography predicted higher scores (P=.03), and AI experience had no effect (P=.88). Ranking analysis showed GPT-4o as the most preferred (P=.48), followed by GPT-4 (P=.37), with Gemini ranked last (P=.15). Patient preference differed among AI-translated radiology reports. Compared with a traditional report in radiological language, AI-translated reports add value for patients and enhance comprehensibility and empathy; they therefore hold the potential to improve patient communication in breast imaging.
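
As a concrete illustration of the ranking analysis above: the Plackett-Luce model turns per-item "worth" parameters into probabilities over full rankings. The minimal sketch below assumes the reported values (0.48, 0.37, 0.15) are normalized worths; the function and variable names are illustrative, not from the study's code.

```python
# Minimal Plackett-Luce sketch, assuming the abstract's reported values are
# normalized "worth" parameters; names are illustrative, not the study's code.

worths = {"GPT-4o": 0.48, "GPT-4": 0.37, "Gemini": 0.15}

def ranking_probability(ranking, worths):
    """P(observing this full ranking) under Plackett-Luce: at each position,
    an item is chosen with probability proportional to its worth among the
    items not yet ranked."""
    remaining = dict(worths)
    prob = 1.0
    for item in ranking:
        prob *= remaining[item] / sum(remaining.values())
        del remaining[item]
    return prob

# Probability of the modal ranking reported in the abstract:
print(ranking_probability(["GPT-4o", "GPT-4", "Gemini"], worths))  # ~0.34
```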

Sex, stature, and age estimation from skull using computed tomography images: Current status, challenges, and future perspectives.

Du Z, Navic P, Mahakkanukrauh P

PubMed, Jun 18, 2025
The skull has long been recognized and utilized in forensic investigations, with analyses evolving from basic to complex as technologies have advanced. Advances in radiology have enhanced the ability to analyze biological identifiers (sex, stature, and age at death) from the skull, and computed tomography imaging helps practitioners improve the accuracy and reliability of forensic analyses. Recently, artificial intelligence has increasingly been applied in digital forensic investigations to estimate sex, stature, and age from computed tomography images. The integration of artificial intelligence represents a significant shift in multidisciplinary collaboration, offering the potential for more accurate and reliable identification, along with academic advancements. However, it is not yet fully developed for routine forensic work, as it remains largely in the research and development phase. Additionally, the limitations of artificial intelligence systems, such as the lack of transparency in algorithms, accountability for errors, and the potential for discrimination, must still be carefully considered. Based on scientific publications from the past decade, this article provides an overview of the application of computed tomography imaging to estimating sex, stature, and age from the skull, and addresses future directions for further improvement.

Innovative technologies and their clinical prospects for early lung cancer screening.

Deng Z, Ma X, Zou S, Tan L, Miao T

PubMed, Jun 18, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, largely due to the lack of effective early-stage screening approaches. Imaging such as low-dose CT carries radiation risk, biopsies can cause complications, and traditional serum tumor markers lack diagnostic specificity. This highlights the urgent need for precise, non-invasive early detection techniques. This systematic review aims to evaluate the limitations of conventional screening methods (imaging, biopsy, tumor markers), identify breakthroughs in liquid biopsy for early lung cancer detection, and assess the potential value of artificial intelligence (AI), thereby providing evidence-based insights for establishing an optimal screening framework. We systematically searched the PubMed database for literature published up to May 2025, using the keywords "Artificial Intelligence", "Early Lung cancer screening", "Imaging examination", "Innovative technologies", "Liquid biopsy", and "Puncture biopsy". Our inclusion criteria focused on studies of traditional and innovative screening methods, with an emphasis on original research on diagnostic performance and on high-quality reviews; this approach helps identify critical studies in early lung cancer screening. Novel liquid biopsy techniques are non-invasive and show superior diagnostic efficacy, and AI-assisted diagnostics further enhance accuracy. We propose three development directions: establishing risk-based liquid biopsy screening protocols, developing a stepwise "imaging-AI-liquid biopsy" diagnostic workflow (sketched below), and creating standardized biomarker panel testing solutions. Integrating traditional methodologies, novel liquid biopsies, and AI into a comprehensive early lung cancer screening model is therefore important: these strategies aim to significantly increase early detection rates and substantially enhance lung cancer control. This review provides theoretical guidance for both clinical practice and future research.
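
A hedged sketch of how the proposed stepwise "imaging-AI-liquid biopsy" workflow could be encoded as a triage cascade; every threshold, disposition, and function name here is a hypothetical illustration, not taken from the review.

```python
# Illustrative sketch of a stepwise "imaging-AI-liquid biopsy" triage cascade;
# all thresholds and dispositions are hypothetical, not from the review.

def screen_patient(ldct_finding, ai_malignancy_score, biomarker_panel_positive):
    """Return a screening disposition for one patient.

    ldct_finding: 'negative', 'indeterminate', or 'suspicious' (low-dose CT read)
    ai_malignancy_score: 0-1 score from an AI nodule classifier
    biomarker_panel_positive: liquid-biopsy panel result (bool)
    """
    if ldct_finding == "negative":
        return "routine annual screening"
    if ldct_finding == "suspicious" or ai_malignancy_score >= 0.8:
        return "refer for tissue biopsy"
    # Indeterminate imaging with a mid-range AI score: use the non-invasive
    # liquid biopsy to decide between early workup and short-interval follow-up.
    if biomarker_panel_positive:
        return "refer for tissue biopsy"
    return "repeat low-dose CT in 3 months"

print(screen_patient("indeterminate", 0.55, biomarker_panel_positive=True))
```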

Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models

Xinkai Zhao, Yuta Tokuoka, Junichiro Iwasawa, Keita Oda

arXiv preprint, Jun 17, 2025
The increasing use of diffusion models for image generation, especially in sensitive areas like medical imaging, has raised significant privacy concerns. Membership Inference Attack (MIA) has emerged as a potential approach to determine if a specific image was used to train a diffusion model, thus quantifying privacy risks. Existing MIA methods often rely on diffusion reconstruction errors, where member images are expected to have lower reconstruction errors than non-member images. However, applying these methods directly to medical images faces challenges. Reconstruction error is influenced by inherent image difficulty, and diffusion models struggle with high-frequency detail reconstruction. To address these issues, we propose a Frequency-Calibrated Reconstruction Error (FCRE) method for MIAs on medical image diffusion models. By focusing on reconstruction errors within a specific mid-frequency range and excluding both high-frequency (difficult to reconstruct) and low-frequency (less informative) regions, our frequency-selective approach mitigates the confounding factor of inherent image difficulty. Specifically, we analyze the reverse diffusion process, obtain the mid-frequency reconstruction error, and compute the structural similarity index score between the reconstructed and original images. Membership is determined by comparing this score to a threshold. Experiments on several medical image datasets demonstrate that our FCRE method outperforms existing MIA methods.
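
A minimal sketch of the frequency-calibrated idea described above: band-pass both images, score mid-band structural similarity, and threshold it. The band cutoffs, the threshold, and the stand-in reconstruction are assumptions, not the authors' exact settings.

```python
# Hedged sketch of the Frequency-Calibrated Reconstruction Error (FCRE) idea:
# compare a diffusion model's reconstruction to the original image only within
# a mid-frequency band, then threshold an SSIM score to call membership.

import numpy as np
from skimage.metrics import structural_similarity

def midband_filter(img, low=0.1, high=0.5):
    """Keep spatial frequencies with normalized radius in [low, high),
    discarding low frequencies (less informative) and high frequencies
    (hard for diffusion models to reconstruct)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy**2 + fx**2)
    mask = (radius >= low) & (radius < high)
    return np.fft.ifft2(np.fft.fft2(img) * mask).real

def fcre_is_member(original, reconstructed, threshold=0.6):
    """Membership call: high mid-band structural similarity between the
    original and its reconstruction suggests the image was in training."""
    a = midband_filter(original)
    b = midband_filter(reconstructed)
    score = structural_similarity(a, b, data_range=a.max() - a.min())
    return score >= threshold, score

# Usage with a stand-in "reconstruction" (in practice: run the reverse
# diffusion process from a partially noised version of `original`):
rng = np.random.default_rng(0)
original = rng.random((64, 64))
reconstructed = original + 0.05 * rng.standard_normal((64, 64))
print(fcre_is_member(original, reconstructed))
```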

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

PubMed, Jun 17, 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Various architectures for classification, object detection, and segmentation tasks are systematically categorized and evaluated, including convolutional neural networks such as ResNet (40 implementations) and VGG (18 implementations) as well as emerging transformer-based models (12 implementations). The investigation encompasses hyperparameter optimization techniques utilized to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization. The role of explainable AI methods in interpreting medical diagnoses is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning approaches, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and for clinical practitioners implementing AI-based colorectal cancer detection systems.
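
Since Grad-CAM is among the visualization techniques the review surveys, a minimal PyTorch sketch of it follows; the ResNet-18 backbone, target layer, and random input are illustrative stand-ins, not a model from any surveyed study.

```python
# Minimal Grad-CAM sketch (PyTorch); backbone and input are stand-ins.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained stand-in classifier
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]

layer = model.layer4[-1]               # last conv block: coarse spatial maps
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # stand-in input image
score = model(x)[0].max()              # logit of the predicted class
model.zero_grad()
score.backward()

# Grad-CAM: weight each feature map by its average gradient, sum, ReLU,
# then upsample to the input resolution to get a heatmap.
w = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```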

Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features

Miguel A. Lago, Ghada Zamzmi, Brandon Eich, Jana G. Delfino

arXiv preprint, Jun 16, 2025
Explainability features are intended to provide insight into the internal mechanisms of an AI device, but there is a lack of evaluation techniques for assessing the quality of the provided explanations. We propose a framework to assess and report explainable AI features. Our evaluation framework for AI explainability is based on four criteria: 1) Consistency quantifies the variability of explanations across similar inputs, 2) Plausibility estimates how close the explanation is to the ground truth, 3) Fidelity assesses the alignment between the explanation and the model's internal mechanisms, and 4) Usefulness evaluates the explanation's impact on task performance. Finally, we developed a scorecard for AI explainability methods that serves as a complete description and evaluation to accompany this type of algorithm. We describe these four criteria and give examples of how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps for the detection of breast lesions on synthetic mammograms. The first three criteria are evaluated for clinically relevant scenarios. Our proposed framework establishes criteria through which the quality of explanations provided by AI models can be evaluated. We intend for our framework to spark a dialogue regarding the value provided by explainability features and to help improve the development and evaluation of AI-based medical devices.
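
A small sketch of how the first criterion, consistency, might be scored: re-run the explanation method on slightly perturbed copies of an input and measure heatmap stability. The noise level and the use of mean pairwise correlation are assumptions, not the paper's definition.

```python
# Hedged sketch of a "consistency" score for explanation heatmaps; `explain`
# stands in for any saliency method (e.g., Ablation CAM or Eigen CAM).

import numpy as np

def consistency(explain, image, n_perturbations=8, noise_std=0.01, seed=0):
    """Mean pairwise correlation of heatmaps across small input perturbations;
    values near 1 mean the explanation is stable for similar inputs."""
    rng = np.random.default_rng(seed)
    heatmaps = [
        explain(image + noise_std * rng.standard_normal(image.shape)).ravel()
        for _ in range(n_perturbations)
    ]
    corrs = [
        np.corrcoef(heatmaps[i], heatmaps[j])[0, 1]
        for i in range(len(heatmaps))
        for j in range(i + 1, len(heatmaps))
    ]
    return float(np.mean(corrs))

# Toy usage: an "explanation" that just highlights bright pixels.
def toy_explain(img):
    return np.abs(img)

img = np.random.default_rng(1).random((64, 64))
print(consistency(toy_explain, img))  # close to 1.0 for this stable toy method
```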

Radiologist-AI workflow can be modified to reduce the risk of medical malpractice claims

Bernstein, M., Sheppard, B., Bruno, M. A., Lay, P. S., Baird, G. L.

medRxiv preprint, Jun 16, 2025
Background: Artificial Intelligence (AI) is rapidly changing the legal landscape of radiology. Results from a previous experiment suggested that providing AI error rates can reduce perceived radiologist culpability, as judged by mock jury members (4). The current study advances this work by examining whether the radiologist's behavior also impacts perceptions of liability. Methods: Participants (n=282) read about a hypothetical malpractice case in which a 50-year-old who visited the Emergency Department with acute neurological symptoms received a brain CT scan to determine whether bleeding was present. The radiologist who interpreted the imaging used an AI system, which correctly flagged the case as abnormal. Nonetheless, the radiologist concluded there was no evidence of bleeding, and the thrombolytic t-PA was administered. Participants were randomly assigned to either (1) a single-read condition, in which the radiologist interpreted the CT once after seeing AI feedback, or (2) a double-read condition, in which the radiologist interpreted the CT twice, first without AI and then with AI feedback. Participants were then told the patient suffered irreversible brain damage due to the missed brain bleed, resulting in the patient (plaintiff) suing the radiologist (defendant). Participants indicated whether the radiologist met their duty of care to the patient (yes/no). Results: Hypothetical jurors were more likely to side with the plaintiff in the single-read condition (106/142, 74.7%) than in the double-read condition (74/140, 52.9%), p=0.0002. Conclusion: This suggests that the penalty for disagreeing with correct AI can be mitigated when images are interpreted twice, or at least when a radiologist gives an interpretation before AI is used.
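
As a quick arithmetic check of the reported result, a standard test on the verdict counts (106/142 vs. 74/140) reproduces a p-value of roughly 0.0002; the choice of a chi-square test here is an assumption, since the abstract does not name the test used.

```python
# Sketch: chi-square test on plaintiff verdicts in the single-read (106/142)
# vs. double-read (74/140) conditions; reproduces p on the order of 0.0002.

from scipy.stats import chi2_contingency

table = [[106, 142 - 106],   # single-read: plaintiff vs. defendant verdicts
         [74, 140 - 74]]     # double-read
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p ~ 0.0002
```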