A new commentary calls for more patient-centred regulation of healthcare AI to better protect against bias and uphold patient rights.
Key Details
- A commentary in the Journal of the Royal Society of Medicine warns that risk-based regulation of healthcare AI fails to protect patients from overtreatment, undertreatment, and discrimination.
- The EU's AI Act categorizes medical AI, including imaging AI, as 'high risk', but the authors argue it overlooks individual patient preferences and long-term systemic effects.
- The authors recommend establishing new patient rights, including rights to an explanation, to consent, to a second opinion, and to refuse AI-driven diagnosis or screening.
- Concerns centre on the opacity, inaccuracy, and potential bias of AI systems, which the authors say current policy frameworks do not fully address.
Why It Matters
As AI becomes more embedded in healthcare—including radiology—patient trust, safety, and autonomy must be safeguarded. Effective regulation and explicit rights are vital to prevent discrimination and ensure ethical deployment of medical AI systems.

Source
EurekAlert
Related News
- Expert Insights from JAMA Summit on AI's Role in Healthcare (EurekAlert): The JAMA Summit Report brings together expert views on opportunities, risks, and practical steps for integrating AI in healthcare.
- Landmark Case Highlights Legal Risks for Medical AI Device Makers (EurekAlert): A recent legal case may shape future liability risk for manufacturers of AI-enabled medical devices, including those using imaging AI.