
AI models can write convincing fraudulent peer reviews that evade current detection tools, posing a new risk to research integrity.
Key Details
- Chinese researchers used the AI model Claude to review 20 actual cancer research manuscripts.
- The AI produced highly persuasive rejection letters and requests to cite irrelevant articles.
- AI-detection tools misidentified over 80% of the AI-generated reviews as human-written.
- Malicious use could enable unfair rejections and citation manipulation within academic publishing.
- AI could also help authors craft strong rebuttals to unfair reviews.
- Authors call for guidelines and oversight to preserve scientific integrity.
Why It Matters
If AI-written reviews can pass as human, bad actors could sway publication decisions through unfair rejections and citation manipulation, undermining trust in peer review. The findings reinforce the authors' call for guidelines and oversight of AI use in academic publishing.
Source
EurekAlert
Related News

AI Model Improves Prediction of Knee Osteoarthritis Progression Using MRI and Biomarkers
A new AI-assisted model that combines MRI, biochemical, and clinical data improves predictions of worsening knee osteoarthritis.

AI Trains on Pathologists’ Eye Movements to Improve Biopsy Analysis
Researchers developed a deep learning system using eye-tracking data to enhance AI-powered biopsy image interpretation.

Photonic Chip Enables Versatile Neural Networks for Imaging and Speech AI
Chinese scientists have developed a reconfigurable integrated photonic chip capable of running diverse neural networks, including those for image and speech processing, with high efficiency.