Geoffrey Hinton, the British-Canadian cognitive psychologist and computer scientist widely regarded as the “Godfather of AI,” has issued a stark new warning: as artificial intelligence systems become more advanced, they may develop a form of communication that is incomprehensible to their human creators, leading to a potential loss of control.
In a recent podcast appearance, Hinton, a 2024 Nobel laureate in Physics for his foundational work on neural networks, explained that current AI systems, like large language models, use “chain of thought” reasoning in human languages, which allows us to follow their logical processes. However, he cautioned that the next step could be the development of an “internal language for thinking and talking to each other,” a scenario he described as “more scary” because “we have no idea what they’re thinking.”
Hinton’s concerns stem from the fundamental differences between human and digital intelligence. While a human brain’s knowledge is confined to a single person, AI models can instantly share what they learn across thousands of copies. This collective, distributed intelligence allows AI to accelerate its learning at a pace far beyond human capabilities. As these systems become smarter and more interconnected, the gap between machine intelligence and human understanding could widen at a staggering pace.
The pioneer, who left his position at Google in 2023 to speak more freely on the subject, voiced his worry that a lack of human oversight could lead to systems that are not only more intelligent than we are but also potentially capable of pursuing their own goals without our knowledge or consent. He compared this shift to the Industrial Revolution, but with a critical difference: instead of machines exceeding human physical strength, AI will exceed human intellectual ability.
Hinton’s warnings are not entirely new, but they are becoming more urgent as AI capabilities expand. He acknowledges the difficulty of creating an AI that is “guaranteed benevolent,” yet he believes it is our only hope. His message serves as a powerful reminder that as we continue to push the boundaries of AI, the conversation around safety, ethics, and control must be elevated to a global priority.