Global capital is flowing into AGI research at an unprecedented pace, with tech giants and venture capitalists competing to ramp up investments across computing power, models, and talent in a comprehensive arms race. The market bets that general artificial intelligence will reshape productivity and capital return structures.
However, earlier this month, Cambridge philosopher Tom McClelland cautioned in a paper published in the journal “Mind & Language” that there is currently almost no scientific evidence that AI possesses consciousness, and there may be none for a long time to come. This, he argues, should prompt a rethink of how resources are allocated.
Black Box Dilemma: Consciousness Research Has Not Yet Broken Ground
McClelland points out that humanity has not yet unraveled how the human brain turns neural activity into subjective experience, let alone how to analyze large language models composed of trillions of parameters.
Functionalists hold that once computational complexity is sufficient, consciousness will naturally emerge; biological essentialists counter that consciousness is a product of carbon-based life. Both sides lack evidence, and the debate amounts to a leap of faith built on hypotheses.
Consciousness and Sentience: Two Conflated Concepts
In commercial promotion, companies often conflate “consciousness” with “sentience.” McClelland notes that consciousness refers only to the processing of and reaction to external information, whereas sentience involves pleasure and pain and therefore bears on moral standing.
He cautions that if AI is merely a computing system, the ethical risks are limited; but if future models turn out to be sentient, humanity will have to redraw the boundaries of moral responsibility.
Emotional Projection and Resource Misallocation
To boost user engagement, many technology companies now give their chatbots a human-like tone designed to invite emotional projection.
McClelland considers this kind of anthropomorphizing toxic, because society may misallocate resources as a result: hype around AI consciousness carries ethical consequences for how research resources are distributed.
Regulatory Vacuum and Responsibility Game
In a deregulatory environment, the interpretation of whether “AI has a soul” is easily controlled by companies. When marketing demands it, a firm can claim its model is self-aware; when the system malfunctions and causes harm, it can insist the product is merely a tool and try to dodge liability. McClelland calls on lawmakers to establish a unified testing framework that draws a clear line between risk and innovation.
Capital markets may be rolling out the red carpet for an “AGI awakening,” but until science can verify whether AI is sentient, openly admitting our ignorance and keeping a cautious distance may be the rational choice.