The ethics of simplification: balancing patient autonomy, comprehension, and accuracy in AI-generated radiology reports.
Affiliations (10)
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea.
- BeamWorks Inc, Daegu, Republic of Korea.
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea.
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea.
- Department of Medical Device Engineering and Management, The Graduate School, Yonsei University College of Medicine, Seoul, Republic of Korea.
- Department of Integrative Medicine, The Graduate School, Yonsei University College of Medicine, Seoul, Republic of Korea.
- Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Republic of Korea.
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Severance Hospital, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea.
- Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Republic of Korea. [email protected].
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Severance Hospital, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea. [email protected].
Abstract
Large language models (LLMs) such as GPT-4 are increasingly used to simplify radiology reports and improve patient comprehension. However, excessive simplification may undermine informed consent and autonomy by compromising clinical accuracy. This study investigates the ethical implications of readability thresholds in AI-generated radiology reports, identifying the minimum reading level at which clinical accuracy is preserved. We retrospectively analyzed 500 computed tomography and magnetic resonance imaging reports from a tertiary hospital. Each report was transformed into 17 versions (reading grade levels 1-17) using GPT-4 Turbo. Readability metrics and word counts were calculated for each version, and clinical accuracy was evaluated using radiologist assessments and PubMed-BERTScore. We identified the first grade level at which accuracy declined significantly, thereby determining the lowest level that preserved both accuracy and readability, and we further assessed potential clinical consequences in reports simplified to the 7th-grade level. Readability scores correlated strongly with the prompted reading levels (r = 0.80-0.84). Accuracy remained stable across grades 11-13 but declined significantly below grade 11. At the 7th-grade level, 20% of reports contained inaccuracies with the potential to alter patient management, primarily due to omission, incorrect conversion, or inappropriate generalization. The 11th-grade level therefore emerged as the current lower bound for preserving accuracy in LLM-generated radiology reports. Our findings highlight an ethical tension between improving readability and maintaining clinical accuracy: while 7th-grade readability remains an ethical ideal, current AI tools cannot reliably produce accurate reports below the 11th-grade level. Ethical implementation of AI-generated reporting should include layered communication strategies and model transparency to safeguard patient autonomy and comprehension.
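To make the described pipeline concrete, the sketch below shows one plausible implementation of the simplification and evaluation steps, assuming the OpenAI chat API for GPT-4 Turbo, the textstat package for a Flesch-Kincaid grade-level metric, BERTScore with a PubMedBERT backbone as a stand-in for PubMed-BERTScore, and a paired Wilcoxon test for locating the accuracy decline. The abstract does not specify the prompt wording, readability metric, scoring configuration, or statistical test, so all of those are illustrative assumptions rather than the study's published protocol.

```python
# Hedged sketch of the abstract's pipeline: simplify a radiology report to a
# target reading grade level, score readability and semantic fidelity, and
# scan for the first grade level with a significant accuracy drop.
# Prompt wording, metrics, and test choice are assumptions, not the study's
# published methods.
import textstat                  # pip install textstat
from openai import OpenAI        # pip install openai
from bert_score import score     # pip install bert-score
from scipy.stats import wilcoxon  # pip install scipy

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simplify_report(report: str, grade: int) -> str:
    """Ask GPT-4 Turbo to rewrite a report at a given reading grade level."""
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,  # minimize sampling variability across versions
        messages=[
            {"role": "system",
             "content": (f"Rewrite the following radiology report at a "
                         f"grade-{grade} reading level. Preserve every "
                         f"clinical finding; do not add or omit information.")},
            {"role": "user", "content": report},
        ],
    )
    return resp.choices[0].message.content


def evaluate(original: str, simplified: str) -> dict:
    """Readability (Flesch-Kincaid grade) plus BERTScore F1 vs. the original."""
    # PubMedBERT backbone as an assumed proxy for "PubMed-BERTScore";
    # num_layers must be given explicitly for models bert-score does not know.
    _, _, f1 = score(
        [simplified], [original],
        model_type="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract",
        num_layers=12,
    )
    return {
        "fk_grade": textstat.flesch_kincaid_grade(simplified),
        "word_count": len(simplified.split()),
        "bertscore_f1": float(f1[0]),
    }


def first_significant_drop(scores_by_grade: dict, baseline: int = 13,
                           alpha: float = 0.05):
    """Scan downward from a baseline grade; return the first grade whose
    paired per-report accuracy scores differ significantly (assumed test)."""
    base = scores_by_grade[baseline]
    for g in sorted((k for k in scores_by_grade if k < baseline), reverse=True):
        _, p = wilcoxon(base, scores_by_grade[g])
        if p < alpha:
            return g
    return None


if __name__ == "__main__":
    report = "CT abdomen: hypodense lesion in hepatic segment VI ..."
    for grade in (7, 11):  # the two thresholds discussed in the abstract
        simplified = simplify_report(report, grade)
        print(grade, evaluate(report, simplified))
```

In a study-scale run, per-report accuracy scores would be collected for all 500 reports at each of the 17 grade levels, and a threshold search like first_significant_drop would then identify the lowest grade (here, grade 11) at which accuracy remains statistically indistinguishable from the higher-grade versions.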