A new commentary urges improvements to patient-centred regulation in healthcare AI to better protect against bias and uphold patient rights.
Key Details
- A commentary in the Journal of the Royal Society of Medicine warns that risk-based regulation of healthcare AI fails to protect patients from overtreatment, undertreatment, and discrimination.
- The EU's AI Act (adopted in 2024) categorizes medical AI, including imaging AI, as 'high risk' but is said to overlook individual patient preferences and long-term systemic effects.
- The authors recommend establishing new patient rights, such as rights to explanation, consent, second opinions, and refusal of AI-driven diagnosis or screening.
- Concerns focus on the opacity, inaccuracy, and potential bias of AI systems, which current policy frameworks do not fully address.
Why It Matters
As AI becomes more embedded in healthcare, including radiology, patient trust, safety, and autonomy must be safeguarded. Effective regulation and explicit patient rights are vital to prevent discrimination and ensure the ethical deployment of medical AI systems.

Source
EurekAlert
Related News

• EurekAlert
Legal Gaps in Explaining AI Decisions to Patients in Imaging
A JMIR article examines the disconnect between AI legal requirements and actual patient comprehension in medical imaging and diagnostics.

• EurekAlert
Expert Consensus Sets Standardized Evaluation for Clinical Large Language Models
An expert consensus provides a robust, evidence-based framework for retrospective evaluation of large language models in healthcare.

• EurekAlert
Editorial Warns of AI's Risk to Critical Thinking in Medical Education
Generative AI may undermine critical thinking skills and reinforce bias in new doctors, warns BMJ editorial.