Gate News, April 22 — Brockman revealed two significant product developments during an appearance on the Core Memory podcast. First, a customer used GPT-5.4 Pro to solve a new Erdős problem, which Brockman described as “looking really important.”
Brockman highlighted the dramatic improvement in model capabilities. Two years ago, he said, OpenAI needed a 20-person team working for two weeks with substantial computational resources to train a model to bronze-medal performance at the International Mathematical Olympiad. Now, “a very casually trained model” can achieve the same result. He suggested the implications could be profound: “If you pointed this capability at drug discovery, nobody is pricing that.”
Second, addressing criticism from podcast host Ashlee Vance that large language models lack “soul” in their writing, Altman acknowledged, “We’re not where we need to be on personalization.” Brockman added that OpenAI has a new model in development to address this gap. “You can try it after the podcast goes live and tell us if it’s gotten better,” he said.
The remarks underscore OpenAI’s focus on expanding model capabilities beyond reasoning to include more nuanced and personalized writing abilities.
Related News
Bharat1.ai launches $650M AI city project in Bengaluru
Ripple CEO praises SEC’s new direction as U.S. crypto regulation enters reset mode
Founder of AI nuclear power startup Fermi urges sale of the company after stepping down; market value fell 83% in six months