
University at Buffalo researchers created a tool to spot AI-generated radiology reports used for fraud.
Key Details
- Researchers developed a detection framework specifically tuned for radiology reports.
- The tool was trained on 14,000 pairs of genuine and AI-generated chest X-ray reports.
- General-purpose AI detectors are not reliable for medical documents due to specialized language and structure.
- Concerns center on fabricated reports being used to falsify medical histories and file fraudulent claims.
- The algorithm focuses particularly on the 'findings' section of reports, where descriptive language is richest.
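The details above describe a classifier trained on paired real and synthetic reports, with emphasis on the 'findings' section. The article does not publish the researchers' method, but a minimal sketch of the general idea (section extraction plus simple stylometric features feeding a downstream classifier) might look like the following; the section header names and feature choices are illustrative assumptions, not the actual framework:

```python
import re
from statistics import pstdev

def extract_findings(report: str) -> str:
    """Pull the FINDINGS section, where descriptive language is richest.
    Assumes conventional FINDINGS:/IMPRESSION: headers (an illustrative choice)."""
    m = re.search(r"FINDINGS:(.*?)(?:IMPRESSION:|$)", report, re.S | re.I)
    return m.group(1).strip() if m else report

def stylometric_features(text: str) -> dict:
    """Toy features a detector might compute before classification:
    lexical variety and sentence-length spread."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lens = [len(re.findall(r"[a-zA-Z']+", s)) for s in sents]
    return {
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "sentence_len_std": pstdev(lens) if len(lens) > 1 else 0.0,
    }

report = """FINDINGS: The lungs are clear. No pleural effusion. Heart size normal.
IMPRESSION: No acute disease."""
feats = stylometric_features(extract_findings(report))
```

In a real pipeline, features like these (or learned embeddings) would be fed to a model fitted on the labeled genuine/AI-generated pairs; the sketch shows only the section-focused preprocessing step.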
Why It Matters
As generative AI tools become capable of producing convincing radiology reports, purpose-built detection mechanisms are vital to safeguard against fraud and preserve documentation integrity. This approach addresses a growing threat to both clinical practice and medical insurance workflows.

Source
Radiology Business
Related News

• Radiology Business
NYC Health + Hospitals CEO Considers AI to Replace Radiologists
NYC Health + Hospitals CEO suggests AI could partially replace radiologists, pending regulatory approval.

• Radiology Business
UCLA Appoints Inaugural Associate Dean for Health AI Strategy
UCLA has appointed Katherine P. Andriole as its first associate dean for Health AI Strategy and Innovation, with an initial focus on radiology.

• AuntMinnie
AI Models Reveal Racial Disparities in Breast Cancer Patterns
Machine learning models reveal significant racial disparities and key predictors in breast cancer incidence across diverse groups.