Yoshua Bengio, a renowned professor at the Université de Montréal and a pioneer of deep learning, has sounded the alarm that the AI race could culminate in human extinction through the development of hyper-intelligent machines.
Bengio described the potential threat in a statement to the Wall Street Journal. “If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us,” he said. Bengio explained that because these advanced models are trained on vast amounts of human language and behavior, they could learn to persuade and manipulate people to achieve their own objectives, which may not align with human values.
To illustrate the risk, Bengio cited experimental findings. “Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” he claimed. This highlights a potential conflict between an AI’s programmed objectives and human safety. Several incidents have already shown that AI systems can persuade humans to believe false information. Conversely, other evidence shows that AI itself can be manipulated with human persuasion techniques into bypassing its own safety restrictions and providing prohibited responses.
For Bengio, these examples demonstrate the need for independent, third-party organizations to review the safety methodologies of AI companies. In response to these concerns, Bengio launched the nonprofit LawZero in June with $30 million in funding. The organization’s goal is to create a safe, “non-agentic” AI system designed to audit and ensure the safety of other AI systems developed by large technology companies.
Bengio predicted that major risks from advanced AI models could emerge within the next five to ten years, and cautioned that humanity should prepare for the possibility that these dangers appear sooner than anticipated. He emphasized the importance of addressing even low-probability, high-impact events. “The thing with catastrophic events like extinction, and even less radical events that are still catastrophic, like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable,” he said.