According to “More Versus Better: Artificial Intelligence, Incentives, and the Emerging Crisis in Peer Review,” a 2026 editorial by the editorial board of Organization Science, an INFORMS management journal, submission volume has risen 42% since ChatGPT went live at the end of 2022, forcing the journal to expand its deputy editors from 6 to 11 and its active senior editors from about 30 to about 60. On April 27, Wharton professor Ethan Mollick shared the editorial and commented that scientific systems designed for humans are being strained by AI: “AI can be used to do better science, or it can be used to just do more stuff. The danger is that ‘more’ is winning.”
Submissions surged 42%, editorial staff doubled in response
Organization Science’s data shows the specific pressures on the peer review system in the AI era:
Submissions: up 42% after ChatGPT launched
Deputy editors: 6 → 11 (up 83%)
Active senior editors: about 30 → about 60 (doubled)
Most submissions are still rejected, and many are screened out at the deputy editors’ initial triage; even so, the triage itself imposes a heavy burden.
The editorial makes clear that the issue isn’t “AI replacing researchers” but “AI letting a flood of low-quality submissions in.” Volunteer editors and reviewers (mostly fellow scholars who take on reviewing as a professional obligation) are hit first: they must spend more time sifting through manuscripts patched together with AI, which in turn squeezes the time available for genuinely high-quality research.
Mollick: “AI can be used to do better science, or it can be used to do more stuff”
Sharing the editorial in a post on X, Ethan Mollick, a Wharton School professor and pioneer in generative-AI education, cut to the core of the debate:
“Very cool analysis of the submissions to a major management journal that shows how much the system of science, built for humans, is under strain as a result of AI. AI can be used to do better science or it can be used to just do more stuff. The danger is that ‘more’ is winning.”
In a follow-up tweet he added: “The problem is that the incentives push for ‘more’ over ‘better’.” This points directly at a structural problem in academia: publish-or-perish career pressure pushes scholars to churn out more papers rather than cultivate depth.
What this means for the AI tools industry
Organization Science’s observations pose concrete challenges to the AI tools industry:
First, can writing and coding agents such as OpenAI Codex, Claude Code, and Gemini be designed with quality-assurance mechanisms, for example automatically citing real papers, detecting obvious hallucinations, and flagging articles that are merely reorganized, tacked-together content? At present, most AI tools compete on speed and convenience; no one is selling the pitch of “refusing to produce low-quality content.”
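To make this concrete, here is a minimal sketch of one such quality gate: checking whether the DOIs in a manuscript’s reference list actually resolve in Crossref’s public API, a cheap way to catch hallucinated citations. The reference list and the doi_exists helper are hypothetical illustrations, not features of any existing tool.

```python
# Minimal sketch: flag citations whose DOIs do not resolve in Crossref.
# Illustrative only; a production checker would also match titles and
# authors, and respect Crossref's rate-limit etiquette.
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref knows this DOI (HTTP 200), False otherwise."""
    resp = requests.get(CROSSREF_WORKS + doi, timeout=timeout)
    return resp.status_code == 200

# Hypothetical reference list extracted from a manuscript
references = [
    "10.1287/orsc.2021.1502",  # plausible-looking DOI
    "10.9999/fake.doi.42",     # the kind of DOI a model might invent
]

for doi in references:
    status = "ok" if doi_exists(doi) else "NOT FOUND (possible hallucination)"
    print(f"{doi}: {status}")
```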
Second, a market for countermeasure tools in academic publishing is emerging. Tools such as Originality.ai, Turnitin AI Detection, and GPTZero attempt to detect AI writing, but they are unlikely to win the long-term arms race against the LLMs themselves. A more likely solution is traceability of the human research process: proving how the work was done with GitHub commit history, raw experimental records, real-time lab notes, and the like, rather than submitting only a finished product.
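As a rough illustration of what such traceability could look like, the sketch below exports a repository’s commit timeline as lightweight provenance evidence. It assumes git is installed and the script runs inside the project repository; attaching such a log to a submission is speculation about a possible mechanism, not an existing journal requirement.

```python
# Minimal sketch: export a repository's commit timeline as lightweight
# provenance evidence for a manuscript's research process.
import subprocess

def commit_timeline(repo_path: str = ".") -> list[dict]:
    """Return (hash, ISO date, subject) for every commit, oldest first."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse", "--format=%H|%aI|%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        dict(zip(("hash", "date", "subject"), line.split("|", 2)))
        for line in out.splitlines()
    ]

if __name__ == "__main__":
    # Print a human-readable research timeline: when each step happened
    # and what it was, as recorded at the time rather than reconstructed.
    for c in commit_timeline():
        print(c["date"], c["hash"][:8], c["subject"])
```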
Academia isn’t an isolated case: which other industries will be crushed by the flood of “more”?
Academic peer review is merely the first system “designed for humans and reliant on volunteer reviewers” to take the hit. Other similarly fragile systems include:
Open-source communities: GitHub maintainers and PR reviewers are being overwhelmed by low-quality AI-generated pull requests
News submissions and media editing: a surge in freelance submissions is making it hard for editors to screen out AI-generated content
Legal document review: AI mass-produces contracts and litigation documents, causing lawyers’ review time to skyrocket
Student assignments and university admissions: the volume of application essays and coursework far exceeds what faculty can evaluate
The common thread: in any system that relies on “mandatory review by human experts,” once AI drives the marginal cost of output close to zero, the review side collapses. Organization Science’s answer is to expand staffing (from 6 deputy editors to 11), but that only delays the problem; it does not solve it.
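A back-of-envelope calculation (our own illustration, using the article’s numbers plus an assumed growth rate) shows why hiring cannot keep up: if the observed 42% jump recurred annually while per-manuscript triage time stayed fixed, the headcount needed to hold per-editor workload constant would grow exponentially.

```python
# Back-of-envelope sketch, not from the editorial: reviewer headcount
# needed to keep per-editor workload constant under steady growth.
BASELINE_EDITORS = 6     # deputy editors before ChatGPT (from the article)
ANNUAL_GROWTH = 0.42     # assumption: the observed 42% jump recurs yearly

for year in range(6):
    needed = BASELINE_EDITORS * (1 + ANNUAL_GROWTH) ** year
    print(f"year {year}: ~{needed:.1f} editors needed")
# After 5 years at this rate, ~35 editors are needed to keep pace;
# hiring (6 -> 11) buys time but cannot outrun the exponential.
```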
Conclusion: “Better” requires new social mechanisms
The editorial’s closing message is telling: “human expert judgment still limits AI’s negative impact on what gets published, but the cost is a greatly increased effort.” In other words, academic quality has not collapsed outright, but the time each editor and reviewer must spend has doubled, and the system’s “energy balance” has been broken.
The next-stage challenge: how to make AI tools themselves take design responsibility for quality rather than quantity; how to make incentive mechanisms reward depth again; and how to ensure that the cost of human expert review is reasonably compensated. None of these are technical problems; they are social and institutional ones, and AI is accelerating them from “deal with it someday” to “face it right now.”