The impacts of artificial intelligence on the workload of diagnostic radiology services: A rapid review and stakeholder contextualisation

Sutton, C., Prowse, J., Elshehaly, M., Randell, R.

medRxiv preprint · Jul 24 2025
Background: Advancements in imaging technology, alongside increasing longevity and co-morbidities, have led to heightened demand for diagnostic radiology services. However, there is a shortfall of radiology and radiography staff to acquire, read and report on such imaging examinations. Artificial intelligence (AI) has been identified, notably by AI developers, as a potential solution for positively impacting the workload of diagnostic radiology services and addressing this staffing shortfall. Methods: A rapid review, complemented with data from interviews with UK radiology service stakeholders, was undertaken. The ArXiv, Cochrane Library, Embase, Medline and Scopus databases were searched for publications in English published between 2007 and 2022. Following screening, 110 full texts were included. Interviews with 15 radiology service managers, clinicians and academics were carried out between May and September 2022. Results: Most literature was published in 2021 and 2022, with a distinct focus on AI for diagnosis of lung and chest disease (n = 25), notably COVID-19 and respiratory system cancers, closely followed by AI for breast screening (n = 23). AI contributions to streamlining the workload of radiology services were categorised as autonomous, augmentative and assistive. However, percentage estimates of workload reduction varied considerably, with the most significant reductions identified in national screening programmes. AI was also recognised as aiding radiology services by providing a second opinion, assisting in the prioritisation of images for reading, and improving quantification in diagnostics. Stakeholders saw AI as having the potential to remove some of the laborious work and contribute to service resilience. Conclusions: This review has shown there is limited data on real-world experiences of radiology services implementing AI in clinical production. Autonomous, augmentative and assistive AI can decrease workload and aid reading and reporting; however, the governance surrounding these advancements lags behind.

Exploring the interplay of label bias with subgroup size and separability: A case study in mammographic density classification

Emma A. M. Stanley, Raghav Mehta, Mélanie Roschewitz, Nils D. Forkert, Ben Glocker

arXiv preprint · Jul 24 2025
Systematic mislabelling affecting specific subgroups (i.e., label bias) in medical imaging datasets represents an understudied issue concerning the fairness of medical AI systems. In this work, we investigated how the size and separability of subgroups affected by label bias influence the learned features and performance of a deep learning model. To this end, we trained deep learning models for binary tissue density classification using the EMory BrEast imaging Dataset (EMBED), where label bias affected separable subgroups (based on imaging manufacturer) or non-separable "pseudo-subgroups". We found that simulated subgroup label bias led to prominent shifts in the learned feature representations of the models. Importantly, these shifts within the feature space were dependent on both the relative size and the separability of the subgroup affected by label bias. We also observed notable differences in subgroup performance depending on whether a validation set with clean labels was used to define the classification threshold for the model. For instance, with label bias affecting the majority separable subgroup, the true positive rate for that subgroup fell from 0.898, when the validation set had clean labels, to 0.518, when the validation set had biased labels. Our work represents a key contribution toward understanding the consequences of label bias on subgroup fairness in medical imaging AI.
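For intuition, here is a minimal, self-contained sketch of the threshold effect this abstract describes, using synthetic data and a logistic regression rather than the paper's EMBED pipeline; the flip rate, subgroup construction, and threshold search are all illustrative assumptions.

```python
# Toy sketch of subgroup label bias (NOT the paper's EMBED pipeline): labels of one
# subgroup are systematically flipped in the training data, and the classification
# threshold is chosen on either a clean or a biased validation set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    subgroup = rng.integers(0, 2, n)                        # 0 = unaffected, 1 = affected
    x = rng.normal(size=(n, 5)) + subgroup[:, None] * 0.5   # separable subgroups
    y = (x[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y, subgroup

def bias_labels(y, subgroup, flip_rate=0.3):
    y = y.copy()
    flip = (subgroup == 1) & (rng.random(len(y)) < flip_rate) & (y == 1)
    y[flip] = 0                                             # positives relabelled negative
    return y

x_tr, y_tr, g_tr = make_data(5000)
x_va, y_va, g_va = make_data(2000)
x_te, y_te, g_te = make_data(2000)

model = LogisticRegression().fit(x_tr, bias_labels(y_tr, g_tr))
scores_va = model.predict_proba(x_va)[:, 1]

for val_labels, name in [(y_va, "clean"), (bias_labels(y_va, g_va), "biased")]:
    # pick the threshold maximising accuracy on the validation set
    ts = np.linspace(0.05, 0.95, 19)
    accs = [((scores_va > t).astype(int) == val_labels).mean() for t in ts]
    t = ts[int(np.argmax(accs))]
    pred = model.predict_proba(x_te)[:, 1] > t
    tpr = pred[(g_te == 1) & (y_te == 1)].mean()
    print(f"{name} validation labels: affected-subgroup TPR = {tpr:.3f}")
```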

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

PubMed · Jul 24 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.

Interpretable AI Framework for Secure and Reliable Medical Image Analysis in IoMT Systems.

Matthew UO, Rosa RL, Saadi M, Rodriguez DZ

PubMed · Jul 23 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights. The framework incorporates an Enhanced Dynamic Confidence-Weighted Attention (Enhanced DCWA) mechanism, which improves interpretability and robustness by dynamically refining attention maps through adaptive normalization and multi-level confidence weighting. Additionally, a Resilient Observability and Detection Engine (RODE) leverages sparse observability principles to detect and mitigate adversarial threats, ensuring reliable performance in dynamic IoMT environments. Evaluations conducted on benchmark datasets, including CheXpert, RSNA Pneumonia Detection Challenge, and NIH Chest X-ray Dataset, demonstrate significant advancements in classification accuracy, adversarial robustness, and explainability. The framework achieves a 15% increase in lesion classification accuracy, a 30% reduction in robustness loss, and a 20% improvement in the Explainability Index compared to state-of-the-art methods.
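The abstract does not spell out the Enhanced DCWA formulation, but a hedged sketch of the general idea it names, attention maps rescaled by adaptive normalisation and down-weighted by prediction confidence, might look as follows; the function name and weighting scheme are assumptions, not the authors' implementation.

```python
# Hedged sketch of confidence-weighted attention refinement; illustrative only.
import torch
import torch.nn.functional as F

def confidence_weighted_attention(attn_maps: torch.Tensor,
                                  logits: torch.Tensor) -> torch.Tensor:
    """attn_maps: (B, H, W) raw attention; logits: (B, C) class scores."""
    # per-sample confidence taken as the max softmax probability
    conf = F.softmax(logits, dim=1).max(dim=1).values            # (B,)
    # adaptive normalisation: rescale each map to [0, 1]
    flat = attn_maps.flatten(1)
    lo, hi = flat.min(dim=1).values, flat.max(dim=1).values
    norm = (flat - lo[:, None]) / (hi - lo + 1e-8)[:, None]
    # down-weight attention coming from low-confidence predictions
    weighted = norm * conf[:, None]
    return weighted.view_as(attn_maps)

attn = torch.rand(2, 14, 14)        # e.g. CAM-style maps from a CNN
logits = torch.randn(2, 3)
refined = confidence_weighted_attention(attn, logits)
```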

Mitigating Data Bias in Healthcare AI with Self-Supervised Standardization.

Lan G, Zhu Y, Xiao S, Iqbal M, Yang J

PubMed · Jul 23 2025
The rapid advancement of artificial intelligence (AI) in healthcare has accelerated innovations in medical algorithms, yet its broader adoption faces critical ethical and technical barriers. A key challenge lies in algorithmic bias stemming from heterogeneous medical data across institutions, equipment, and workflows, which may perpetuate disparities in AI-driven diagnoses and exacerbate inequities in patient care. While AI's ability to extract deep features from large-scale data offers transformative potential, its effectiveness heavily depends on standardized, high-quality datasets. Current standardization gaps not only limit model generalizability but also raise concerns about reliability and fairness in real-world clinical settings, particularly for marginalized populations. Addressing these urgent issues, this paper proposes an ethical AI framework centered on a novel self-supervised medical image standardization method. By integrating self-supervised image style conversion, channel attention mechanisms, and contrastive learning-based loss functions, our approach enhances structural and style consistency in diverse datasets while preserving patient privacy through decentralized learning paradigms. Experiments across multi-institutional medical image datasets demonstrate that our method significantly improves AI generalizability without requiring centralized data sharing. By bridging the data standardization gap, this work advances technical foundations for trustworthy AI in healthcare.
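As one plausible reading of the "contrastive learning-based loss functions" mentioned above, the following sketch shows a standard NT-Xent-style loss that pulls an image and its style-standardised counterpart together in feature space; it is a generic illustration, not the paper's actual objective.

```python
# Generic NT-Xent contrastive loss between embeddings of an original image (z1)
# and its style-standardised version (z2); matched pairs are positives, all other
# images in the batch are negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (B, D) embeddings of original and standardised images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2B, D)
    sim = z @ z.T / tau                              # cosine similarities
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))                # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```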

Re-identification of patients from imaging features extracted by foundation models.

Nebbia G, Kumar S, McNamara SM, Bridge C, Campbell JP, Chiang MF, Mandava N, Singh P, Kalpathy-Cramer J

PubMed · Jul 22 2025
Foundation models for medical imaging are a prominent research topic, but the risks associated with the imaging features they capture have not been explored. We aimed to assess whether imaging features from foundation models enable patient re-identification and to relate re-identification to demographic feature prediction. Our data included Colour Fundus Photos (CFP), Optical Coherence Tomography (OCT) b-scans, and chest x-rays, with re-identification rates of 40.3%, 46.3%, and 25.9%, respectively. Performance on demographic feature prediction varied depending on re-identification status (e.g., AUC-ROC for gender from CFP was 82.1% for re-identified images vs. 76.8% for non-re-identified ones). When training a deep learning model on the re-identification task, we observed image-level performance of 82.3%, 93.9%, and 63.7% on our internal CFP, OCT, and chest x-ray data. We showed that imaging features extracted from foundation models in ophthalmology and radiology include information that can lead to patient re-identification.
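A common way to operationalise re-identification from embeddings, and a reasonable guess at the kind of rate reported here, is nearest-neighbour matching: an image counts as re-identified when its closest neighbour in feature space belongs to the same patient. The sketch below implements that definition on stand-in data; the authors' exact protocol may differ.

```python
# Nearest-neighbour re-identification rate over foundation-model embeddings.
import numpy as np

def reidentification_rate(feats: np.ndarray, patient_ids: np.ndarray) -> float:
    """feats: (N, D) embeddings; patient_ids: (N,) with >= 2 images per patient."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T                                  # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)                 # exclude the query itself
    nn = sim.argmax(axis=1)                        # nearest neighbour per image
    return float((patient_ids[nn] == patient_ids).mean())

feats = np.random.randn(100, 256)                  # stand-in for real embeddings
ids = np.repeat(np.arange(50), 2)                  # two images per patient
print(reidentification_rate(feats, ids))
```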

Harmonization in Magnetic Resonance Imaging: A Survey of Acquisition, Image-level, and Feature-level Methods

Qinqin Yang, Firoozeh Shomal-Zadeh, Ali Gholipour

arXiv preprint · Jul 22 2025
Modern medical imaging technologies have greatly advanced neuroscience research and clinical diagnostics. However, imaging data collected across different scanners, acquisition protocols, or imaging sites often exhibit substantial heterogeneity, known as "batch effects" or "site effects". These non-biological sources of variability can obscure true biological signals, reduce reproducibility and statistical power, and severely impair the generalizability of learning-based models across datasets. Image harmonization aims to eliminate or mitigate such site-related biases while preserving meaningful biological information, thereby improving data comparability and consistency. This review provides a comprehensive overview of key concepts, methodological advances, publicly available datasets, current challenges, and future directions in the field of medical image harmonization, with a focus on magnetic resonance imaging (MRI). We systematically cover the full imaging pipeline, and categorize harmonization approaches into prospective acquisition and reconstruction strategies, retrospective image-level and feature-level methods, and traveling-subject-based techniques. Rather than providing an exhaustive survey, we focus on representative methods, with particular emphasis on deep learning-based approaches. Finally, we summarize the major challenges that remain and outline promising avenues for future research.
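As a concrete taste of the retrospective image-level category surveyed above, the following sketch applies histogram matching, one of the simplest harmonisation techniques, to remap one site's intensity distribution onto a reference site's; it illustrates the category, not a method the survey endorses.

```python
# Image-level harmonisation via histogram matching: intensities of an image from
# one site are remapped to follow the intensity distribution of a reference-site
# image, reducing site-related intensity differences.
import numpy as np
from skimage.exposure import match_histograms

site_a = np.random.gamma(2.0, 2.0, size=(128, 128))   # stand-ins for two MRI slices
site_b = np.random.gamma(5.0, 1.0, size=(128, 128))   # acquired at different sites

harmonised = match_histograms(site_a, site_b)          # site_a mapped onto site_b's distribution
print(harmonised.mean(), site_b.mean())                # intensity statistics now roughly align
```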

Facilitators and Barriers to Implementing AI in Routine Medical Imaging: Systematic Review and Qualitative Analysis.

Wenderott K, Krups J, Weigl M, Wooldridge AR

PubMed · Jul 21 2025
Artificial intelligence (AI) is rapidly advancing in health care, particularly in medical imaging, offering potential for improved efficiency and reduced workload. However, there is little systematic evidence on the process factors that make AI technology implementation into clinical workflows successful. This study aimed to systematically assess and synthesize the facilitators and barriers to AI implementation reported in studies evaluating AI solutions in routine medical imaging. We conducted a systematic review of 6 medical databases. Using qualitative content analysis, we extracted the reported facilitators and barriers, outcomes, and moderators in the AI implementation process. Two reviewers analyzed and categorized the data independently. We then used epistemic network analysis to explore the relationships among these factors across different stages of AI implementation. Our search yielded 13,756 records. After screening, we included 38 original studies in our final review. We identified 12 key dimensions and 37 subthemes that influence the implementation of AI in health care workflows. Key dimensions included evaluation of AI use and fit into workflow, with their frequency depending considerably on the stage of the implementation process. In total, 20 themes were mentioned as both facilitators and barriers to AI implementation. Studies often focused predominantly on performance metrics over the experiences or outcomes of clinicians. This systematic review provides a thorough synthesis of facilitators and barriers to successful AI implementation in medical imaging. Our study highlights the usefulness of AI technologies in clinical care and the importance of how well they fit into routine clinical workflows. Most studies did not directly report facilitators and barriers to AI implementation, underscoring the importance of comprehensive reporting to foster knowledge sharing. Our findings reveal a predominant focus on technological aspects of AI adoption in clinical work, highlighting the need for holistic, human-centric consideration to fully leverage the potential of AI in health care. PROSPERO CRD42022303439; https://www.crd.york.ac.uk/PROSPERO/view/CRD42022303439. RR2-10.2196/40485.
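For readers unfamiliar with epistemic network analysis, its core ingredient is a weighted co-occurrence network over qualitative codes. The toy sketch below counts theme co-occurrences within coded units (here, studies); the theme names are invented examples standing in for the review's actual subthemes.

```python
# Toy co-occurrence counting of the kind underlying epistemic network analysis:
# themes coded in the same unit are connected, and edge weights accumulate.
from itertools import combinations
from collections import Counter

coded_studies = [
    {"workflow_fit", "evaluation_of_ai_use", "transparency"},
    {"workflow_fit", "clinician_trust"},
    {"evaluation_of_ai_use", "clinician_trust", "workflow_fit"},
]

edges = Counter()
for themes in coded_studies:
    for a, b in combinations(sorted(themes), 2):
        edges[(a, b)] += 1                    # co-occurrence within one study

for (a, b), w in edges.most_common():
    print(f"{a} -- {b}: {w}")
```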

Educational Competencies for Artificial Intelligence in Radiology: A Scoping Review.

Jassar S, Zhou Z, Leonard S, Youssef A, Probyn L, Kulasegaram K, Adams SJ

PubMed · Jul 21 2025
The integration of artificial intelligence (AI) in radiology may necessitate refinement of the competencies expected of radiologists. There is currently a lack of understanding of what competencies radiology residency programs should ensure their graduates attain related to AI. This study aimed to identify what knowledge, skills, and attitudes are important for radiologists to use AI safely and effectively in clinical practice. Following Arksey and O'Malley's methodology, a scoping review was conducted by searching electronic databases (PubMed, Embase, Scopus, and ERIC) for articles published between 2010 and 2024. Two reviewers independently screened articles based on the title and abstract and subsequently by full-text review. Data were extracted using a standardized form to identify the knowledge, skills, and attitudes surrounding AI that may be important for its safe and effective use. Of 5920 articles screened, 49 articles met inclusion criteria. Core competencies were related to AI model development, evaluation, clinical implementation, algorithm bias and handling discrepancies, regulation, ethics, medicolegal issues, and the economics of AI. While some papers proposed competencies for radiologists focused on technical development of AI algorithms, other papers centered competencies on clinical implementation and use of AI. Current AI educational programming in radiology demonstrates substantial heterogeneity, with a lack of consensus on the knowledge, skills, and attitudes needed for the safe and effective use of AI in radiology. Further research is needed to develop consensus on the core competencies radiologists need to safely and effectively use AI, to support the integration of AI training and assessment into residency programs.

DREAM: A framework for discovering mechanisms underlying AI prediction of protected attributes

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medRxiv preprint · Jul 21 2025
Recent advances in Artificial Intelligence (AI) have started disrupting the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed into clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (like race, age, sex) from medical images with unexpectedly high performance, a sensitive task which is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest x-rays (ROC-AUC of ~0.96) and sex from dermoscopic lesions (ROC-AUC of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task, like disease diagnosis, under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call "removal via balancing", to quantify how much a signal contributes to the classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% of the total performance for dermatology. We envision DREAM to be broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
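The abstract gives only the name of the removal-via-balancing technique, but the underlying idea, rebalancing the evaluation data so a candidate signal no longer differs between outcome classes and measuring the resulting performance drop, can be sketched generically as below; the data, effect sizes, and sampling scheme are illustrative assumptions, not the authors' procedure.

```python
# Generic "removal via balancing"-style analysis on synthetic data: equalise the
# distribution of a candidate signal across outcome classes, then compare AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
signal = rng.integers(0, 2, n)                         # binary candidate signal
y = (rng.random(n) < 0.3 + 0.4 * signal).astype(int)   # outcome correlated with signal
scores = 0.5 * y + 0.4 * signal + rng.normal(scale=0.5, size=n)  # classifier scores

def balanced_subset(y, signal):
    """Subsample so P(signal | y) is identical for both outcome classes."""
    idx = []
    for s in (0, 1):
        groups = [np.flatnonzero((y == c) & (signal == s)) for c in (0, 1)]
        m = min(len(g) for g in groups)
        idx += [rng.choice(g, m, replace=False) for g in groups]
    return np.concatenate(idx)

auc_full = roc_auc_score(y, scores)
sel = balanced_subset(y, signal)
auc_bal = roc_auc_score(y[sel], scores[sel])
print(f"AUC drop attributable to the signal: {auc_full - auc_bal:.3f}")
```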