Malpractice in the machine age: Legal and ethical responses to machine learning in medical imaging.
Authors
Affiliations (5)
- School of Dentistry and Medical Sciences, Charles Sturt University, NSW, Australia; Teaching Innovation in Radiography Group, Faculty of Science and Health, Charles Sturt University, NSW, Australia.
- School of Dentistry and Medical Sciences, Charles Sturt University, NSW, Australia; Teaching Innovation in Radiography Group, Faculty of Science and Health, Charles Sturt University, NSW, Australia.
- School of Dentistry and Medical Sciences, Charles Sturt University, NSW, Australia; Health Evidence Synthesis, Recommendations and Impact, School of Public Health, University of Adelaide, South Australia, Australia; Adelaide Medical School, Faculty of Health and Medical Sciences, University of Adelaide, South Australia, Australia.
- Adelaide Medical School, Faculty of Health and Medical Sciences, University of Adelaide, South Australia, Australia; The Queen Elizabeth Hospital, Adelaide, South Australia, Australia.
- Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia.
Abstract
Artificial intelligence (AI) and machine learning (ML) are increasingly integrated into diagnostic imaging. This review examines how AI adoption affects malpractice risk, the legal standard of care, liability distribution, and informed consent. It also evaluates regulatory developments and ethical concerns, including explicability, autonomy, and professional accountability. AI-supported image interpretation can improve diagnostic accuracy and efficiency. Its integration is reshaping expectations of reasonable clinical practice, creating potential negligence exposure both when clinicians fail to use validated systems and when they rely on insufficiently tested tools. Liability is uncertain because diagnostic responsibility is distributed across clinicians, healthcare organisations, and developers. Existing negligence frameworks assume human reasoning and struggle to accommodate opaque algorithmic decision-making, limiting courts' ability to assess whether AI-assisted diagnoses meet accepted standards. "Black box" models heighten automation bias, hinder legal scrutiny of error, and complicate professional accountability. Informed consent case law suggests that AI involvement should be disclosed when it introduces material differences in risk or outcome, although this principle remains inconsistently applied. Ethical challenges include threats to patient trust, potential clinician deskilling, and reduced transparency in clinical communication. Regulatory initiatives such as the European Union's General Data Protection Regulation and AI Act move toward clearer governance through requirements for data quality, human oversight, and post-market monitoring, yet explicit malpractice guidance remains underdeveloped globally. Traditional legal and ethical frameworks insufficiently address accountability for AI-driven diagnostic errors. Clarifying responsibility, decision authority, and validation requirements is essential to safeguard patient safety and protect clinicians. Clinical protocols should specify approved use cases, oversight expectations, documentation of AI involvement, and management of clinician-algorithm disagreement. Training should support critical review of algorithmic outputs to mitigate automation bias.
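To make the closing recommendation concrete, the minimal sketch below shows one hypothetical way a clinical protocol's requirements could be encoded in software: an audit record that documents AI involvement (model identity, approved use case) and flags clinician-algorithm disagreement or out-of-scope use for mandatory human review. All names, fields, and the approved-use-case list are illustrative assumptions, not elements of the review itself.

```python
# Hypothetical sketch only: encoding protocol elements (approved use cases,
# documented AI involvement, disagreement escalation) as an audit record.
# Every identifier below is an assumption for illustration, not from the review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumed institution-approved deployments of the model.
APPROVED_USE_CASES = {"chest_xray_triage", "mammography_second_read"}

@dataclass
class AIAssistedRead:
    study_id: str
    use_case: str
    model_version: str   # records exactly which algorithm was involved
    ai_finding: str
    clinician_finding: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def requires_escalation(self) -> bool:
        """Flag reads where the clinician and algorithm disagree,
        or where the model was used outside its approved scope."""
        out_of_scope = self.use_case not in APPROVED_USE_CASES
        disagreement = self.ai_finding != self.clinician_finding
        return out_of_scope or disagreement

if __name__ == "__main__":
    read = AIAssistedRead(
        study_id="CXR-0001",                # hypothetical identifiers
        use_case="chest_xray_triage",
        model_version="triage-model-2.3",
        ai_finding="pneumothorax",
        clinician_finding="no acute finding",
    )
    # Disagreement triggers a documented second review rather than a
    # silent override in either direction.
    print("Escalate for review:", read.requires_escalation())
```

Routing disagreements to a documented second review, rather than deferring silently to either the clinician or the algorithm, is one way such a protocol could operationalise human oversight and counter automation bias in practice.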