
AI models can write convincing fraudulent peer reviews that evade current detection tools, posing a new risk to research integrity.
Key Details
- Chinese researchers used the AI model Claude to review 20 actual cancer research manuscripts.
- The AI produced highly persuasive rejection letters and requests to cite irrelevant articles.
- AI-detection tools misidentified over 80% of the AI-generated reviews as human-written.
- Malicious use could enable unfair rejections and citation manipulation within academic publishing.
- AI could also help authors craft strong rebuttals to unfair reviews.
- The study's authors call for guidelines and oversight to preserve scientific integrity.
Why It Matters
Because detection tools rarely flagged the AI-generated reviews, bad actors could use them to unfairly reject competing work or manipulate citations, undermining trust in peer review and academic publishing.
Source
EurekAlert