Sam Altman would like you to know that your childhood was expensive. Terribly expensive. And frankly, it’s unclear whether humans are worth all the time and resources. Last week, at the AI Impact Summit in India, OpenAI’s CEO offered a staggering take on the AI energy debate: humans, not AI, are basically the problem. Asked about ChatGPT’s environmental footprint, Altman didn’t apologize or hedge. Instead, he compared the immense amount of energy needed to raise a human to the energy demands of an AI data center, and suggested the machines come out ahead.
“It also takes a lot of energy to train a human,” he told The Indian Express. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators… to produce you.” His conclusion: “Probably AI has already caught up on an energy-efficiency basis, measured that way.” The internet, predictably, did not respond with applause. Indian billionaire and Zoho co-founder Sridhar Vembu, who was physically in the room, immediately posted on X: “I do not want to see a world where we equate a piece of technology to a human being.”
I do not want to see a world where we equate a piece of technology to a human being.
I work hard as a technologist to see a world where we don’t allow technology to dominate our lives, instead it should quietly recede into the background. https://t.co/PrbjbgCYde
— Sridhar Vembu (@svembu) February 22, 2026
Reddit users weighed in, unsurprisingly, calling Altman “sickeningly evil” and “anti-human.” One user wrote that Altman “literally doesn’t seem to understand that human life has value beyond whatever cost/benefit analysis he applies to implementing lines of code.” Across social media, Altman faced a barrage of mockery and memes. Tech analyst Max Weinback put it more diplomatically, saying that reducing people to “cost per output” while ignoring “the value of humanity itself” is, in his words, “a bad path.” That’s one way to say it.

This is not a new pattern. Altman has previously said that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” He has also said he loses sleep over whether launching ChatGPT “was really bad.” He testified before Congress about AI enabling bioweapons and mass disinformation, and he co-signed a statement declaring that mitigating AI extinction risk “should be a global priority alongside pandemics and nuclear war.” And yet Altman and OpenAI are racing toward artificial general intelligence (AGI), and when someone asks him about the electricity bill, his move is: Have you considered how much energy it took to raise you?

Altman did say one thing that might yield broad agreement: that the world “needs to move towards nuclear, wind, and solar very quickly.”
Note that Altman chairs Oklo, a nuclear startup. Whether that makes the recommendation more credible or self-serving may depend on how much trust you have left in Altman after he compared your childhood to a training run.