
AI models can write convincing fraudulent peer reviews that evade current detection tools, posing a new risk for research integrity.
Key Details
- Chinese researchers used the AI model Claude to review 20 actual cancer research manuscripts.
- The AI produced highly persuasive rejection letters and requests to cite irrelevant articles.
- AI-detection tools misidentified over 80% of the AI-generated reviews as human-written.
- Malicious use could enable unfair rejections and citation manipulation within academic publishing.
- AI could also help authors craft strong rebuttals to unfair reviews.
- Authors call for guidelines and oversight to preserve scientific integrity.
Why It Matters
Radiology and imaging AI research depend heavily on a fair and trustworthy peer-review process. The inability to reliably detect AI-generated, fraudulent reviews threatens the credibility of published research and the overall progress of the field.

Source
EurekAlert
Related News

• EurekAlert
Mammogram-AI Accurately Predicts Women's Cardiovascular Disease Risk
AI analysis of mammogram images plus age predicts major cardiovascular disease risk as effectively as traditional tools.

• EurekAlert
Major Study Reveals Barriers to Implementing AI Chest Diagnostics in NHS Hospitals
A UCL-led study identifies significant challenges in deploying AI tools for chest diagnostics across NHS hospitals in England.

• EurekAlert
AI Model Enhances Prediction of Infection Risks from Oral Mucositis in Stem Cell Transplant Patients
Researchers developed an explainable AI tool that accurately predicts infection risks related to oral mucositis in hematopoietic stem cell transplant patients.