AI-Generated Peer Reviews Threaten Trust in Scientific Publishing

July 30, 2025

AI models can write convincing fraudulent peer reviews that evade current detection tools, posing a new risk for research integrity.

Key Details

  • Researchers in China used the AI model Claude to review 20 real cancer research manuscripts.
  • The AI produced highly persuasive rejection letters and requests to cite irrelevant articles.
  • AI-detection tools misidentified over 80% of the AI-generated reviews as human-written.
  • Malicious use could enable unfair rejections and citation manipulation within academic publishing.
  • AI could also help authors craft strong rebuttals to unfair reviews.
  • Authors call for guidelines and oversight to preserve scientific integrity.

Why It Matters

Radiology and imaging AI research depend heavily on fair, trustworthy peer review. The inability to reliably detect fraudulent AI-generated reviews threatens the credibility of published research and the field's overall progress.
