AI Paper Flood Out of Control! International Academic Conference ICLR Takes Action to Regulate Low-Quality Submissions


AI misuse sweeps through academia, leading to a decline in the quality of papers and peer reviews, with errors infiltrating research systems. International conferences are urgently tightening regulations, and academic trust is facing a critical test.

The AI wave backfires on the research industry, threatening the collapse of academic paper quality

Against the backdrop of rapid global advances in artificial intelligence, the AI research community is facing an unprecedented trust crisis. Over the past few years, the review systems of top academic conferences have been flooded with low-quality submissions, and researchers have found that the share of human-written content in many papers and peer reviews is shrinking markedly. The concern goes beyond changes in writing style to the core issue of content accuracy: since precision is the foundation of academic research, errors generated by automated tools are quietly seeping into research outcomes.

Inioluwa Deborah Raji, a researcher at UC Berkeley, pointed out that the academic community is enthusiastic about AI transforming other industries, but ironically, the industry itself is falling into chaos due to widespread AI misuse.

Data shows that the crisis has reached alarming proportions. According to a research report released by Stanford University in August 2025, up to 22% of computer science papers show signs of large language model (LLM) use. Text-analysis startup Pangram found, in a survey of the 2025 International Conference on Learning Representations (ICLR), that about 21% of review comments were entirely AI-generated and that more than half of reviews involved AI-assisted editing. More startling still, approximately 9% of submitted papers had over half of their content produced by AI.

Thomas G. Dietterich, professor emeritus at Oregon State University, observed that upload volume on the open preprint platform arXiv has also surged sharply, partly because of an influx of new researchers but mainly, in his view, because of AI tools.

Peer review system in disarray? Top international conferences take strong action

Faced with a flood of low-quality papers and automated reviews, the academic community has reached a point where action is unavoidable. In November 2024, ICLR reviewers discovered a paper suspected of being AI-generated that nonetheless ranked in the top 17% of all submissions, casting serious doubt on the current evaluation system. Then, in January 2025, detection company GPTZero examined 50 papers presented at the top-tier AI conference NeurIPS and found over 100 AI-generated errors, including fictitious references and incorrect chart data, severely undermining research rigor.

To address this, ICLR updated its submission guidelines: papers that fail to truthfully disclose extensive use of language models will be rejected outright, and reviewers who submit low-quality automated reviews face severe penalties, up to and including rejection of their own papers.

Hany Farid, a computer science professor at UC Berkeley, issued a stern warning: if the scientific community continues to publish erroneous, low-quality papers, society will lose its fundamental trust in scientists. Meanwhile, the growth in paper submissions has far outpaced the development of detection technologies. NeurIPS submissions, for example, rose from 9,467 in 2020 to 17,491 in 2024, and surged further to 21,575 in 2025. There are even extreme cases of a single author submitting over 100 papers in one year, far beyond the normal output of a human researcher. And because the academic community still lacks a unified standard for detecting automated text, prevention is all the more difficult.

Image: Hany Farid, computer science professor at UC Berkeley

Commercial pressures and data pollution: the long-term battle in research

Behind this academic inflation lie complex commercial pressures and practical incentives. As AI industry salaries rise and technological rivalry intensifies, parts of the research sector are pushed to prioritize quantity over quality, while market hype attracts outsiders seeking quick results and dilutes academic depth. Experts nonetheless emphasize the importance of distinguishing between "reasonable use" and "abuse."

Thomas G. Dietterich mentioned that for non-native English-speaking researchers (such as scholars from China), AI tools can indeed help improve clarity of expression. Such writing assistance can, to some extent, enhance communication efficiency and should be viewed as a positive application.

However, a deeper crisis lies in “data pollution,” which threatens the future development of AI. Tech giants like Google, Anthropic, and OpenAI are promoting models as research partners in fields like life sciences, and these models are trained on academic texts.

Hany Farid pointed out that if training data is flooded with artificially generated synthetic content, the performance of models will significantly degrade.

Past research has confirmed that when LLMs are trained on unfiltered AI-generated data, they eventually collapse and produce meaningless output. Kevin Weil, head of OpenAI's scientific division, acknowledged that while AI can be a powerful accelerator for research, human oversight and verification remain indispensable: no tool can replace the rigor that scientific research demands.

This content was summarized and generated by Crypto Agent from various sources and reviewed and edited by "Encrypted City." The agent is still in training and its output may contain logical biases or inaccuracies. The content is for reference only and should not be considered investment advice.
