Page 5 of 17168 results

Exploring the interplay of label bias with subgroup size and separability: A case study in mammographic density classification

Emma A. M. Stanley, Raghav Mehta, Mélanie Roschewitz, Nils D. Forkert, Ben Glocker

arXiv preprint · Jul 24 2025
Systematic mislabelling affecting specific subgroups (i.e., label bias) in medical imaging datasets represents an understudied issue concerning the fairness of medical AI systems. In this work, we investigated how the size and separability of subgroups affected by label bias influence the learned features and performance of a deep learning model. To this end, we trained deep learning models for binary tissue density classification using the EMory BrEast imaging Dataset (EMBED), where label bias affected separable subgroups (based on imaging manufacturer) or non-separable "pseudo-subgroups". We found that simulated subgroup label bias led to prominent shifts in the learned feature representations of the models. Importantly, these shifts within the feature space were dependent on both the relative size and the separability of the subgroup affected by label bias. We also observed notable differences in subgroup performance depending on whether a validation set with clean labels was used to define the classification threshold for the model. For instance, with label bias affecting the majority separable subgroup, the true positive rate for that subgroup fell from 0.898 when the validation set had clean labels to 0.518 when the validation set had biased labels. Our work represents a key contribution toward understanding the consequences of label bias on subgroup fairness in medical imaging AI.
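A minimal synthetic sketch of the two ingredients the abstract describes, injecting label bias into one separable subgroup and measuring that subgroup's true positive rate under a threshold derived from clean versus biased validation labels. This is not the EMBED pipeline; the helper names, the two-feature toy data, and the median-score thresholding rule are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_label_bias(y, subgroup_mask, flip_rate, rng):
    """Flip a fraction of positive labels to negative within one subgroup,
    mimicking systematic mislabelling for that group."""
    y_biased = y.copy()
    candidates = np.flatnonzero(subgroup_mask & (y == 1))
    n_flip = int(round(flip_rate * candidates.size))
    y_biased[rng.choice(candidates, size=n_flip, replace=False)] = 0
    return y_biased

def subgroup_tpr(scores, y_true, subgroup_mask, threshold):
    """True positive rate within a subgroup at a given decision threshold."""
    pos = subgroup_mask & (y_true == 1)
    return float(((scores >= threshold) & pos).sum() / pos.sum())

# Synthetic stand-in for a separable subgroup (e.g. one imaging
# manufacturer): the second feature shifts for subgroup members.
n = 4000
subgroup = rng.random(n) < 0.5
X = rng.normal(size=(n, 2))
X[:, 1] += subgroup
y_clean = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

y_biased = simulate_label_bias(y_clean, subgroup, flip_rate=0.4, rng=rng)
model = LogisticRegression().fit(X, y_biased)
scores = model.predict_proba(X)[:, 1]

# Thresholds chosen from clean vs. biased "validation" labels.
thr_clean = np.median(scores[y_clean == 1])
thr_biased = np.median(scores[y_biased == 1])
tpr_subgroup = subgroup_tpr(scores, y_clean, subgroup, thr_clean)
```

Comparing `tpr_subgroup` at `thr_clean` versus `thr_biased` reproduces the kind of threshold-dependence the study reports, though the exact magnitudes depend on the simulated data.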

Interpretable AI Framework for Secure and Reliable Medical Image Analysis in IoMT Systems.

Matthew UO, Rosa RL, Saadi M, Rodriguez DZ

PubMed · Jul 23 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights. The framework incorporates an Enhanced Dynamic Confidence-Weighted Attention (Enhanced DCWA) mechanism, which improves interpretability and robustness by dynamically refining attention maps through adaptive normalization and multi-level confidence weighting. Additionally, a Resilient Observability and Detection Engine (RODE) leverages sparse observability principles to detect and mitigate adversarial threats, ensuring reliable performance in dynamic IoMT environments. Evaluations conducted on benchmark datasets, including CheXpert, RSNA Pneumonia Detection Challenge, and NIH Chest X-ray Dataset, demonstrate significant advancements in classification accuracy, adversarial robustness, and explainability. The framework achieves a 15% increase in lesion classification accuracy, a 30% reduction in robustness loss, and a 20% improvement in the Explainability Index compared to state-of-the-art methods.
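The abstract does not specify how Enhanced DCWA refines attention maps, so the following is only a loose sketch of the general idea it names: rescale each confidence map adaptively, weight the raw attention by the confidence levels, and renormalize. The function name and every design choice here are our assumptions, not the paper's mechanism.

```python
import numpy as np

def confidence_weighted_attention(attn, confidence_maps):
    """Sketch of confidence-weighted attention refinement: min-max rescale
    each confidence map ("adaptive normalization"), multiply the levels
    into the raw attention map, and renormalize to a distribution."""
    refined = attn.astype(float).copy()
    for conf in confidence_maps:
        c = (conf - conf.min()) / (conf.max() - conf.min() + 1e-8)
        refined *= c
    return refined / (refined.sum() + 1e-8)
```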

Mitigating Data Bias in Healthcare AI with Self-Supervised Standardization.

Lan G, Zhu Y, Xiao S, Iqbal M, Yang J

PubMed · Jul 23 2025
The rapid advancement of artificial intelligence (AI) in healthcare has accelerated innovations in medical algorithms, yet its broader adoption faces critical ethical and technical barriers. A key challenge lies in algorithmic bias stemming from heterogeneous medical data across institutions, equipment, and workflows, which may perpetuate disparities in AI-driven diagnoses and exacerbate inequities in patient care. While AI's ability to extract deep features from large-scale data offers transformative potential, its effectiveness heavily depends on standardized, high-quality datasets. Current standardization gaps not only limit model generalizability but also raise concerns about reliability and fairness in real-world clinical settings, particularly for marginalized populations. Addressing these urgent issues, this paper proposes an ethical AI framework centered on a novel self-supervised medical image standardization method. By integrating self-supervised image style conversion, channel attention mechanisms, and contrastive learning-based loss functions, our approach enhances structural and style consistency in diverse datasets while preserving patient privacy through decentralized learning paradigms. Experiments across multi-institutional medical image datasets demonstrate that our method significantly improves AI generalizability without requiring centralized data sharing. By bridging the data standardization gap, this work advances technical foundations for trustworthy AI in healthcare.
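The contrastive component described above can be illustrated with a generic InfoNCE/NT-Xent-style objective between embeddings of an image and its style-converted counterpart. This is a standard loss of that family, not the paper's exact formulation; the function name and temperature value are ours.

```python
import numpy as np

def info_nce_loss(z_orig, z_styled, temperature=0.5):
    """InfoNCE-style contrastive loss: each image embedding should be
    closest to the embedding of its own style-converted view (the diagonal
    of the similarity matrix) and far from all other images in the batch."""
    z1 = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z2 = z_styled / np.linalg.norm(z_styled, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

Minimizing this loss pulls each image toward its converted view, which is what enforces style consistency across heterogeneous sources.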

Re-identification of patients from imaging features extracted by foundation models.

Nebbia G, Kumar S, McNamara SM, Bridge C, Campbell JP, Chiang MF, Mandava N, Singh P, Kalpathy-Cramer J

PubMed · Jul 22 2025
Foundation models for medical imaging are a prominent research topic, but the risks associated with the imaging features they capture have not been explored. We aimed to assess whether imaging features from foundation models enable patient re-identification and to relate re-identification to demographic feature prediction. Our data included Colour Fundus Photos (CFP), Optical Coherence Tomography (OCT) b-scans, and chest x-rays, for which we found re-identification rates of 40.3%, 46.3%, and 25.9%, respectively. Performance on demographic feature prediction varied with re-identification status (e.g., AUC-ROC for gender from CFP is 82.1% for re-identified images vs. 76.8% for non-re-identified ones). When training a deep learning model on the re-identification task, we achieved image-level performance of 82.3%, 93.9%, and 63.7% on our internal CFP, OCT, and chest x-ray data. We showed that imaging features extracted from foundation models in ophthalmology and radiology include information that can lead to patient re-identification.
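A common way to measure a re-identification rate of this kind is top-1 nearest-neighbour matching between two sets of per-patient embeddings. The sketch below assumes that setup (the study's exact matching protocol is not given in the abstract), with cosine similarity and one embedding per patient per visit.

```python
import numpy as np

def reidentification_rate(feats_a, feats_b):
    """Top-1 re-identification between two sets of per-patient embeddings
    (e.g. two imaging visits): for each row of feats_a, take the most
    cosine-similar row of feats_b; a hit means it is the same patient index."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    hits = (a @ b.T).argmax(axis=1) == np.arange(len(a))
    return float(hits.mean())
```

Rates well above chance (1/N for N patients), like the 25.9% to 46.3% reported here, indicate the features carry identity information.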

Harmonization in Magnetic Resonance Imaging: A Survey of Acquisition, Image-level, and Feature-level Methods

Qinqin Yang, Firoozeh Shomal-Zadeh, Ali Gholipour

arXiv preprint · Jul 22 2025
Modern medical imaging technologies have greatly advanced neuroscience research and clinical diagnostics. However, imaging data collected across different scanners, acquisition protocols, or imaging sites often exhibit substantial heterogeneity, known as "batch effects" or "site effects". These non-biological sources of variability can obscure true biological signals, reduce reproducibility and statistical power, and severely impair the generalizability of learning-based models across datasets. Image harmonization aims to eliminate or mitigate such site-related biases while preserving meaningful biological information, thereby improving data comparability and consistency. This review provides a comprehensive overview of key concepts, methodological advances, publicly available datasets, current challenges, and future directions in the field of medical image harmonization, with a focus on magnetic resonance imaging (MRI). We systematically cover the full imaging pipeline, and categorize harmonization approaches into prospective acquisition and reconstruction strategies, retrospective image-level and feature-level methods, and traveling-subject-based techniques. Rather than providing an exhaustive survey, we focus on representative methods, with particular emphasis on deep learning-based approaches. Finally, we summarize the major challenges that remain and outline promising avenues for future research.
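As a concrete illustration of the feature-level methods the survey categorizes, here is a location-scale harmonization sketch: align each site's per-feature mean and standard deviation onto a reference site. This is a deliberately stripped-down cousin of ComBat (which additionally uses empirical Bayes shrinkage and preserves biological covariates); the function name is ours.

```python
import numpy as np

def harmonize_sites(features, sites, ref_site):
    """Location-scale feature harmonization: standardize each site's
    features, then rescale to the reference site's mean and std."""
    out = features.astype(float).copy()
    ref = out[sites == ref_site]
    ref_mu, ref_sd = ref.mean(axis=0), ref.std(axis=0)
    for s in np.unique(sites):
        block = out[sites == s]
        mu, sd = block.mean(axis=0), block.std(axis=0)
        out[sites == s] = (block - mu) / (sd + 1e-8) * ref_sd + ref_mu
    return out
```

The obvious caveat, and the reason the survey's more sophisticated methods exist, is that this transform also removes any genuine biological differences in site-level means.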

Facilitators and Barriers to Implementing AI in Routine Medical Imaging: Systematic Review and Qualitative Analysis.

Wenderott K, Krups J, Weigl M, Wooldridge AR

PubMed · Jul 21 2025
Artificial intelligence (AI) is rapidly advancing in health care, particularly in medical imaging, offering potential for improved efficiency and reduced workload. However, there is little systematic evidence on process factors for successful implementation of AI technology into clinical workflows. This study aimed to systematically assess and synthesize the facilitators and barriers to AI implementation reported in studies evaluating AI solutions in routine medical imaging. We conducted a systematic review of 6 medical databases. Using qualitative content analysis, we extracted the reported facilitators and barriers, outcomes, and moderators in the AI implementation process. Two reviewers analyzed and categorized the data separately. We then used epistemic network analysis to explore their relationships across different stages of AI implementation. Our search yielded 13,756 records. After screening, we included 38 original studies in our final review. We identified 12 key dimensions and 37 subthemes that influence the implementation of AI in health care workflows. Key dimensions included evaluation of AI use and fit into workflow, with their frequency depending considerably on the stage of the implementation process. In total, 20 themes were mentioned as both facilitators and barriers to AI implementation. Studies often focused predominantly on performance metrics over the experiences or outcomes of clinicians. This systematic review provides a thorough synthesis of facilitators and barriers to successful AI implementation in medical imaging. Our study highlights that successful adoption depends both on the usefulness of AI technologies in clinical care and on how well they fit into routine clinical workflows. Most studies did not directly report facilitators and barriers to AI implementation, underscoring the importance of comprehensive reporting to foster knowledge sharing. Our findings reveal a predominant focus on technological aspects of AI adoption in clinical work, highlighting the need for holistic, human-centric consideration to fully leverage the potential of AI in health care. PROSPERO CRD42022303439; https://www.crd.york.ac.uk/PROSPERO/view/CRD42022303439. RR2-10.2196/40485.

DREAM: A framework for discovering mechanisms underlying AI prediction of protected attributes

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medRxiv preprint · Jul 21 2025
Recent advances in Artificial Intelligence (AI) have started disrupting the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed into clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (like race, age, and sex) from medical images with unexpectedly high performance, a sensitive task which is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest x-rays (ROC-AUC score of ~0.96) and sex from dermoscopic lesions (ROC-AUC score of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task like disease diagnosis under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call removal via balancing, to quantify how much a signal contributes to the classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% of the total performance for dermatology. We envision DREAM to be broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
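The abstract does not spell out the removal-via-balancing procedure, so the sketch below illustrates the general idea under our own assumptions: resample the evaluation set so a candidate binary signal is equally distributed across both classes, then read the drop in AUC as that signal's contribution to performance. All names and the synthetic data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

def balanced_indices(y, signal, rng):
    """Resample so every (class, signal) cell has equal size: in the
    resampled set the signal carries no information about the class."""
    n = min(((y == c) & (signal == v)).sum() for c in (0, 1) for v in (0, 1))
    cells = [rng.choice(np.flatnonzero((y == c) & (signal == v)), n, replace=False)
             for c in (0, 1) for v in (0, 1)]
    return np.concatenate(cells)

# Synthetic demo: the model's only useful feature is a noisy copy of a
# binary "signal" that is strongly associated with the label.
n = 4000
signal = (rng.random(n) < 0.5).astype(int)
y = np.where(rng.random(n) < 0.9, signal, 1 - signal)  # 90% agreement
X = np.column_stack([signal + rng.normal(scale=0.3, size=n),
                     rng.normal(size=n)])              # second feature: noise

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
auc_full = roc_auc_score(y, scores)
idx = balanced_indices(y, signal, rng)
auc_balanced = roc_auc_score(y[idx], scores[idx])
contribution = auc_full - auc_balanced  # performance attributable to the signal
```

Because the model leans entirely on the signal, balancing collapses its AUC toward chance, and `contribution` captures roughly how much of the performance the signal explained.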

Educational Competencies for Artificial Intelligence in Radiology: A Scoping Review.

Jassar S, Zhou Z, Leonard S, Youssef A, Probyn L, Kulasegaram K, Adams SJ

PubMed · Jul 21 2025
The integration of artificial intelligence (AI) in radiology may necessitate refinement of the competencies expected of radiologists. There is currently a lack of understanding on what competencies radiology residency programs should ensure their graduates attain related to AI. This study aimed to identify what knowledge, skills, and attitudes are important for radiologists to use AI safely and effectively in clinical practice. Following Arksey and O'Malley's methodology, a scoping review was conducted by searching electronic databases (PubMed, Embase, Scopus, and ERIC) for articles published between 2010 and 2024. Two reviewers independently screened articles based on the title and abstract and subsequently by full-text review. Data were extracted using a standardized form to identify the knowledge, skills, and attitudes surrounding AI that may be important for its safe and effective use. Of 5920 articles screened, 49 articles met inclusion criteria. Core competencies were related to AI model development, evaluation, clinical implementation, algorithm bias and handling discrepancies, regulation, ethics, medicolegal issues, and economics of AI. While some papers proposed competencies for radiologists focused on technical development of AI algorithms, other papers centered competencies around clinical implementation and use of AI. Current AI educational programming in radiology demonstrates substantial heterogeneity with a lack of consensus on the knowledge, skills, and attitudes for the safe and effective use of AI in radiology. Further research is needed to develop consensus on the core competencies for radiologists to safely and effectively use AI to support the integration of AI training and assessment into residency programs.

Imaging biomarkers of ageing: a review of artificial intelligence-based approaches for age estimation.

Haugg F, Lee G, He J, Johnson J, Zapaishchykova A, Bitterman DS, Kann BH, Aerts HJWL, Mak RH

PubMed · Jul 18 2025
Chronological age, although commonly used in clinical practice, fails to capture individual variations in rates of ageing and physiological decline. Recent advances in artificial intelligence (AI) have transformed the estimation of biological age using various imaging techniques. This Review consolidates AI developments in age prediction across brain, chest, abdominal, bone, and facial imaging using diverse methods, including MRI, CT, x-ray, and photographs. The difference between predicted and chronological age, often referred to as age deviation, is a promising biomarker for assessing health status and predicting disease risk. In this Review, we highlight consistent associations between age deviation and various health outcomes, including mortality risk, cognitive decline, and cardiovascular prognosis. We also discuss the technical challenges in developing unbiased models and ethical considerations for clinical application. This Review highlights the potential of AI-based age estimation in personalised medicine as it offers a non-invasive, interpretable biomarker that could transform health risk assessment and guide preventive interventions.
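The age-deviation biomarker, and the best-known bias problem with it, are simple enough to state in code. Age-prediction models commonly regress toward the mean age, so the raw deviation correlates with chronological age itself; a widely used correction regresses the deviation on age and keeps the residual. Function names here are ours.

```python
import numpy as np

def age_deviation(predicted, chronological):
    """Age deviation (e.g. the 'brain-age gap'): predicted biological age
    minus chronological age; positive values suggest accelerated ageing."""
    return np.asarray(predicted, float) - np.asarray(chronological, float)

def bias_corrected_deviation(predicted, chronological):
    """Remove the regression-to-the-mean bias: fit a line of raw deviation
    against chronological age and return the residuals."""
    chrono = np.asarray(chronological, float)
    dev = age_deviation(predicted, chrono)
    slope, intercept = np.polyfit(chrono, dev, 1)
    return dev - (slope * chrono + intercept)
```

After correction, the deviation is uncorrelated with age by construction, so associations with outcomes like mortality are less likely to be confounded by age itself.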

Commercialization of medical artificial intelligence technologies: challenges and opportunities.

Li B, Powell D, Lee R

PubMed · Jul 18 2025
Artificial intelligence (AI) is already having a significant impact on healthcare. For example, AI-guided imaging can improve the diagnosis/treatment of vascular diseases, which affect over 200 million people globally. Recently, Chiu and colleagues (2024) developed an AI algorithm that supports nurses with no ultrasound training in diagnosing abdominal aortic aneurysms (AAA) with similar accuracy as ultrasound-trained physicians. This technology can therefore improve AAA screening; however, achieving clinical impact with new AI technologies requires careful consideration of commercialization strategies, including funding, compliance with safety and regulatory frameworks, health technology assessment, regulatory approval, reimbursement, and clinical guideline integration.