The topic originates from a 2025 study on detecting LLM-generated peer reviews. Researchers developed a watermarking system that uses fabricated citations to flag reviews created by AI instead of human experts.
It achieves a high success rate because LLMs are highly likely to follow instructions appearing at the very beginning of a prompt.
The framework provides strong statistical guarantees, maintaining a low family-wise error rate (FWER), which prevents human-written reviews from being falsely flagged as AI-generated.
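To make the detection idea concrete, here is a minimal sketch, not the study's actual implementation: a fabricated citation is planted in the prompt, a review is flagged if it reproduces that citation, and a Bonferroni correction bounds the family-wise error rate across many reviews. The citation string and review texts are invented for illustration.

```python
# Hypothetical planted watermark: a fabricated citation the hidden prompt
# instructs the LLM to include verbatim. Purely illustrative.
WATERMARK_CITATION = "Smith et al. (2024), 'Adaptive Gradient Methods'"

def contains_watermark(review_text: str) -> bool:
    """Flag a review as likely LLM-generated if it reproduces the planted citation."""
    return WATERMARK_CITATION.lower() in review_text.lower()

def bonferroni_alpha(family_alpha: float, num_reviews: int) -> float:
    """Per-review significance level that keeps the family-wise error rate
    (probability of falsely flagging ANY human review) at or below family_alpha."""
    return family_alpha / num_reviews

reviews = [
    "The method is sound, though the ablation study is thin.",
    "Building on Smith et al. (2024), 'Adaptive Gradient Methods', the paper...",
]
flags = [contains_watermark(r) for r in reviews]
print(flags)                        # [False, True]
print(bonferroni_alpha(0.05, 100))  # 0.0005
```

The key intuition: a human reviewer is vanishingly unlikely to cite a paper that does not exist, so a positive match carries a tiny per-test false-positive probability, and the Bonferroni split keeps the overall FWER controlled even across an entire conference's worth of reviews.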