In a world racing toward a future defined by artificial intelligence, the very language used to describe its ultimate goal is becoming a point of contention. OpenAI CEO Sam Altman has added fuel to this fire, recently stating that the term “artificial general intelligence” or AGI is “not a super useful term,” as the high-stakes AI race makes its definition increasingly ambiguous. This shift in perspective from one of AI’s most prominent figures signals a changing dialogue in the industry, where the focus is moving from a single, binary milestone to a continuous, exponential increase in model capabilities.
For years, AGI has been the holy grail for companies like OpenAI, which once defined it as a “highly autonomous system that outperforms humans at most economically valuable work.” It was a clear, if ambitious, target. The promise of achieving AGI has helped companies raise billions of dollars and command soaring valuations. However, in a recent interview with CNBC, Altman highlighted the problem: everyone has a different definition.
The rapid advancements in the field, with the recent launch of models like OpenAI’s GPT-5, have blurred the lines of what “human-level” intelligence truly means. As AI systems become more capable in specific, complex tasks, they challenge the traditional, all-encompassing definition of AGI. Altman argues that the nature of work itself is constantly changing, making it difficult to establish a fixed benchmark for an AI that can perform all “economically valuable work.”
This evolving stance from Altman is not entirely new. He has previously suggested that the first AGI would be just one point along a continuum of intelligence, with the ultimate goal being artificial superintelligence (ASI), a system that reasons far beyond human capability. The focus now, he suggests, should be on this continuous progress and its tangible impact, rather than on a single, elusive milestone. This echoes a growing view within the AI community that fixation on a single AGI moment may distract from the more pressing and immediate challenges of safe and responsible AI development. The race is on, but it is becoming a marathon of continuous innovation rather than a sprint to a fixed finish line.