Davos Forum | Divergence on AGI Development? A Look at Three Key Disagreements Between Google DeepMind and Anthropic

Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei joined a panel discussion on the future of Artificial General Intelligence (AGI) at the World Economic Forum (WEF) in Davos, Switzerland on January 20. The conversation was measured and friendly throughout, but the two showed clear differences on three core issues.

What is Artificial General Intelligence (AGI)?

AGI refers to a hypothetical form of artificial intelligence capable of understanding, learning, and performing the full range of intellectual tasks that humans can, effectively matching the cognitive abilities of the human brain. Unlike most existing AI systems, which handle only narrow, single tasks, AGI would generalize across domains, applying knowledge learned in one area to entirely new situations, and would draw on human-like common sense and world understanding to reason and make decisions.

The development of AGI relies on interdisciplinary research in computer science, neuroscience, and cognitive psychology. Currently, true AGI has not yet been realized, but related research and development are ongoing.

When will AGI be achieved? Diverging views from DeepMind and Anthropic CEOs

Anthropic CEO Amodei reaffirmed the timeline he proposed last year: by 2026–2027, AI capable of performing at a “Nobel Prize-level” human standard in most fields could emerge.

“As long as AI can write code and conduct AI research, it can start designing the next generation of models, creating a self-accelerating loop where AI upgrades itself. Once this cycle begins to run smoothly, AGI will show exponential breakthroughs.”

He even stated that within Anthropic, engineers hardly write code by hand anymore; instead, models generate the code and humans review it, a sign that this path is already taking shape.

However, Google DeepMind CEO Hassabis maintains a more conservative stance, estimating a 50% chance of full AGI emerging before the end of this decade. He believes that fields whose results can be verified quickly, such as programming and mathematics, are indeed easier to automate, but AI still lacks key capabilities in the natural sciences, in theoretical creativity, and in posing good questions. In those areas, verification cycles are long and real-world friction is high, making a rapid self-accelerating takeoff unlikely.

How quickly will AI impact employment? Amodei warns of urgency, Hassabis emphasizes buffers

On employment, their views diverge significantly. Amodei has publicly stated that half of entry-level white-collar jobs could disappear within one to five years. At the forum, he explained that although aggregate labor data does not yet fully reflect this, early impacts are already visible in programming and engineering. Demand for entry-level and mid-level workers may slow first, followed by more visible replacement. His core concern is that AI progress is exponential while societal adaptation is linear, so a gap will inevitably open.

Hassabis, by contrast, aligns more closely with the traditional economic view. He believes the short-term pattern will mirror past technological revolutions:

“Some jobs will disappear, but new, higher-value jobs will also emerge, and entry-level positions and internships may be affected first.”

He also emphasizes that current AI tools are “almost universally accessible,” and young people who can quickly master them might accumulate experience faster than through traditional internships.

Should AI development slow down? Amodei advocates for slowing, Hassabis prefers steady progress

On risks and geopolitics, their differences are even sharper. Amodei explicitly hopes the world will slow AI development to give humanity more time to build safety and governance mechanisms. He strongly advocates restricting exports of advanced chips, arguing that AI’s strategic importance is approaching that of nuclear weapons and should not be weighed by commercial or supply-chain logic alone. He even drew an analogy to trade in nuclear materials, arguing that short-term gains should not be bought at the price of long-term risk.

Hassabis does not oppose slowing down in principle, but he emphasizes that today’s geopolitical and corporate competition makes it very difficult to truly apply the brakes. Under these circumstances, rather than hoping for a full slowdown, the more practical question is:

“How can we establish safety mechanisms as quickly as possible during the high-speed AI race to manage risks?”

Beyond the consensus, their differences will determine the pace of AI’s development

It is worth noting that the two are highly aligned on many major points: both believe AI will profoundly change the world, both acknowledge that safety risks are real, and both reject doomsday narratives of inevitable destruction. Where they differ sharply is on speed. That difference captures the core question of the current AI era: whether to push hard toward AGI or to rein in its pace.

(AI is starting to do things on its own, Anthropic explains: How should humans evaluate its performance?)

This article first appeared in Chain News ABMedia.
