UC Berkeley professor pushes back on Vibe Coding: generative AI slows developers by 20%


Professor Sarah Chasins of UC Berkeley analyzes generative AI in GQ Taiwan, pointing out that LLMs are essentially "fill-in-the-blank games" trained with the equivalent of 300 to 400 years of compute. Vibe Coding handles routine content but struggles with innovation. Studies show that users who believe their efficiency has increased by 20% are actually about 20% slower. Her recommended approach has three steps: shrink the problem to about 5 lines of code, describe it with pseudocode, and develop a validation plan.

The Truth Behind the Fill-in-the-Blank Game Behind ChatGPT

Professor Sarah Chasins first explains, in accessible terms, how ChatGPT works. Built on large language models (LLMs), its core logic is quite simple: it is a program that strings words together in plausible combinations. LLM developers first collect human-written documents and web pages from across the internet, which represent the word combinations humans consider reasonable.

Then, the program undergoes large-scale "fill-in-the-blank" training. For example, it might see a sentence like "The dog has four ____", where the logical human answer is "legs." If the program guesses incorrectly, developers adjust it until it gets the answer right. After the equivalent of roughly 300 to 400 years of computational training, the program ultimately produces an enormous "cheat sheet," known in the field as its "parameters."

Next, training on dialogue-formatted documents turns this fill-in-the-blank program into a chatbot that completes human questions the same way. This "fill-in" explanation reveals the core limitation of generative AI: it can only recombine patterns seen in its training data; it cannot truly understand semantics or think creatively.
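The fill-in-the-blank idea can be illustrated with a toy sketch (my own illustration, not from the article): a tiny bigram model that builds a "cheat sheet" of which word most often follows another, then uses that table to fill in blanks. Real LLMs learn billions of parameters, but the underlying idea of predicting the most plausible next word is the same.

```python
from collections import Counter, defaultdict

# A tiny training corpus standing in for "all human-written text".
corpus = "the dog has four legs . the cat has four legs . the dog has a tail".split()

# Build the "cheat sheet": for each word, count what tends to follow it.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def fill_in_the_blank(prev_word):
    """Return the word most frequently seen after prev_word in training."""
    return next_word_counts[prev_word].most_common(1)[0][0]

print(fill_in_the_blank("four"))  # -> legs
```

Note that the model never understands what a dog or a leg is; it only knows which word combination is statistically most plausible, which is exactly the limitation the professor describes.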

This principle is crucial for understanding the limitations of Vibe Coding. When you ask AI to write “login functionality” code, it performs well because there are thousands of examples online. But when you ask it to create an innovative algorithm never before implemented, AI can only piece together similar fragments, often resulting in logical errors or code that cannot run at all.

The Illusion of Efficiency in Vibe Coding

Regarding the recent trend of using LLMs to generate code directly, rather than manually coding, Professor Sarah Chasins remains cautious. She analyzes that these tools perform adequately with routine content that humans have written countless times, but are generally ineffective for any attempt at innovation.

The research data are even more surprising. The professor cites a study finding that users who rely on LLM tools believe their efficiency has increased by 20%, yet their actual development speed is about 20% slower than that of developers who do not use such tools. This stark gap between subjective perception and objective reality exposes the biggest trap of Vibe Coding: it creates an illusion of high efficiency, while a great deal of time is actually wasted debugging and fixing AI-generated errors.

This shows that over-reliance on tools can create a false sense of productivity. When faced with novel programming requirements, lacking fundamental skills in logical decomposition and understanding physical principles makes it impossible to correct AI errors, leading to even more time-consuming outcomes. To illustrate, LLMs are like high-end autonomous vehicles—they can handle common routes, but if you don’t understand how to decompose the track or the physics of vehicle operation, encountering unfamiliar, treacherous curves can cause the autonomous system to fail, and you won’t know how to fix it due to lack of basic skills.

Four Major Reasons for Vibe Coding Failure

Zero innovation capability: Can only combine existing patterns from training data, unable to produce truly novel solutions

Hard to detect errors: Generated code may look reasonable but contain logical mistakes that require expertise to identify

Debugging time skyrockets: Fixing AI errors takes longer than writing code yourself, negating speed advantages

Training data bias: LLM training data is mostly in developer language; everyday language descriptions can easily lead to misunderstandings

From a cognitive psychology perspective, this efficiency illusion stems from the “fluency illusion.” When AI rapidly generates large amounts of code, users feel progress is fast and perceive increased efficiency. However, the quality of this code may be poor, requiring more time later to fix. In contrast, human programmers may work slower but produce higher-quality code, resulting in shorter overall time.

Three Steps to Correctly Use Generative AI

Faced with the powerful capabilities of AI tools, many question the necessity of learning to code. The professor believes that the core skill in programming education is “problem decomposition”—breaking down a vague, large problem into smaller parts until each can be solved with a few lines of code. Without this training, users will struggle to leverage AI tools to produce truly functional complex programs.

Furthermore, since LLM training data is mostly in engineer-style language, everyday language used by non-professionals often does not match the training data, making it difficult for AI to generate useful code. To maximize the benefits of generative AI in coding, Chasins recommends following three steps:

Step 1: Minimize the problem: keep breaking it down until each piece is about 5 lines of code. This is the critical decomposition skill; once you can split a complex problem into small units, AI can effectively implement each part.
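As a hypothetical illustration of Step 1 (the log-summary task and the function names here are invented for this sketch, not taken from the article), a vague request like "summarize my server logs" can be split into pieces of roughly five lines each, each piece small enough for an LLM to implement reliably:

```python
def parse_line(line):
    """Split one log line like '2024-01-01 ERROR disk full' into parts."""
    date, level, *message = line.split()
    return date, level, " ".join(message)

def count_levels(lines):
    """Count how many lines belong to each log level."""
    counts = {}
    for line in lines:
        _, level, _ = parse_line(line)
        counts[level] = counts.get(level, 0) + 1
    return counts

logs = ["2024-01-01 ERROR disk full",
        "2024-01-01 INFO boot ok",
        "2024-01-02 ERROR net down"]
print(count_levels(logs))  # {'ERROR': 2, 'INFO': 1}
```

Each function is a self-contained five-line unit; asking an AI for either one is far more likely to succeed than asking for "a log analyzer" in one shot.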

Step 2: Use pseudocode: describe the logic in a syntax that may mix reserved words from several programming languages. Although it resembles natural language, pseudocode is not everyday language; its purpose is to state the logic precisely enough for the model to implement it correctly.
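A sketch of Step 2 (the password-strength example is invented for illustration): write the logic as pseudocode first, then translate it yourself or hand the pseudocode to the model, so the precision lives in the description rather than in ambiguous everyday language:

```python
# Pseudocode prompt given to the model:
#   function is_strong(password):
#       if length(password) < 8: return false
#       if password contains no digit: return false
#       return true

def is_strong(password):
    """Direct translation of the pseudocode above into Python."""
    if len(password) < 8:
        return False
    if not any(ch.isdigit() for ch in password):
        return False
    return True

print(is_strong("hunter2"))     # False: too short
print(is_strong("hunter2024"))  # True
```

Compare this with the everyday-language prompt "check if a password is good": the pseudocode version leaves the model no room to guess what "good" means.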

Step 3: Develop a validation plan—use extensive testing or professional review to ensure the correctness of AI outputs. This step is often overlooked but is crucial. Many Vibe Coding users take AI-generated code directly into production without sufficient testing, leading to serious errors in live environments.
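A minimal sketch of Step 3 (the sort routine stands in for arbitrary AI output, and this particular assertion style is my assumption, not the professor's prescription): treat generated code as untrusted and run it through checks covering normal and edge cases before it goes anywhere near production:

```python
def ai_generated_sort(items):
    # Pretend this function came from an LLM; we treat it as untrusted.
    return sorted(items)

def validate():
    """Validation plan: normal input, empty input, single item, duplicates."""
    assert ai_generated_sort([3, 1, 2]) == [1, 2, 3]   # normal case
    assert ai_generated_sort([]) == []                 # edge: empty list
    assert ai_generated_sort([5]) == [5]               # edge: single item
    assert ai_generated_sort([2, 2, 1]) == [1, 2, 2]   # duplicates preserved
    return "all checks passed"

print(validate())
```

In practice this would grow into a proper test suite (e.g. with a framework such as pytest), but even a handful of assertions catches the "looks reasonable but is wrong" failures described earlier.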
