
AI models can write convincing fraudulent peer reviews that evade current detection tools, posing a new risk for research integrity.
Key Details
- Chinese researchers used the AI model Claude to review 20 actual cancer research manuscripts.
- The AI produced highly persuasive rejection letters and requests to cite irrelevant articles.
- AI-detection tools misidentified over 80% of the AI-generated reviews as human-written.
- Malicious use could enable unfair rejections and citation manipulation within academic publishing.
- AI could also help authors craft strong rebuttals to unfair reviews.
- The authors call for guidelines and oversight to preserve scientific integrity.
Source
EurekAlert