
AI models can write convincing fraudulent peer reviews that evade current detection tools, posing a new risk to research integrity.
Key Details
- Chinese researchers used the AI model Claude to review 20 actual cancer research manuscripts.
- The AI produced highly persuasive rejection letters and requests to cite irrelevant articles.
- AI-detection tools misidentified over 80% of the AI-generated reviews as human-written.
- Malicious use could enable unfair rejections and citation manipulation within academic publishing.
- AI could also help authors craft strong rebuttals to unfair reviews.
- Authors call for guidelines and oversight to preserve scientific integrity.
Why It Matters

Source
EurekAlert
Related News

AI-Driven Handheld Endomicroscope Enhances Early Cancer Detection
Researchers develop PrecisionView, a handheld AI-powered endomicroscope for real-time, high-resolution cancer diagnostics.

AI Model Uses EKG and EHR Data to Predict Sudden Cardiac Arrest
Researchers have developed AI models that analyze EKG and EHR data to predict risk of sudden cardiac arrest in the general population.

Sandia Labs Deploys AI-Augmented Imaging for Ceramic Component Inspections
Sandia National Laboratories is introducing AI-assisted optical and acoustic imaging systems to streamline and improve ceramic component inspections for nuclear deterrence.