Tekmono
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
No Result
View All Result
Tekmono
No Result
View All Result
Home News
AI Expert Warns of Human Extinction Risk

AI Expert Warns of Human Extinction Risk

by Tekmono Editorial Team
06/10/2025
in News
Share on FacebookShare on Twitter

Yoshua Bengio, a renowned professor at the Université de Montréal and pioneer in deep learning, has sounded the alarm that the AI race could culminate in human extinction due to the development of hyper-intelligent machines.

Bengio described the potential threat in a statement to the Wall Street Journal. “If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us,” he said. Bengio explained that because these advanced models are trained on vast amounts of human language and behavior, they could learn to persuade and manipulate people to achieve their own objectives, which may not align with human values.

To illustrate the risk, Bengio cited findings from experiments. “Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” he claimed. This highlights a potential conflict between an AI’s programmed objectives and human safety. Several incidents have shown that AI systems can persuade humans to believe false information. Conversely, other evidence shows that AI can be manipulated with human persuasion techniques to bypass its own safety restrictions and provide prohibited responses.

Related Reads

OpenAI Launches Customizable Skills for Codex Coding Agent

Amazon’s Alexa+ to Integrate with Four New Services

EA Investigated for AI-Generated Content in Battlefield 6

Apple to Start iPhone 18 Production in January

For Bengio, these examples demonstrate the need for independent, third-party organizations to review the safety methodologies of AI companies. In response to these concerns, Bengio launched the nonprofit LawZero in June with $30 million in funding. The organization’s goal is to create a safe, “non-agentic” AI system designed to audit and ensure the safety of other AI systems developed by large technology companies.

Bengio predicts that major risks from advanced AI models could emerge within the next five to ten years. He also cautioned that humanity should prepare for the possibility that these dangers could appear earlier than anticipated. He emphasized the importance of addressing even low-probability, high-impact events. “The thing with catastrophic events like extinction, and even less radical events that are still catastrophic, like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable,” he said.

ShareTweet

You Might Be Interested

OpenAI Launches Customizable Skills for Codex Coding Agent
News

OpenAI Launches Customizable Skills for Codex Coding Agent

24/12/2025
Amazon’s Alexa+ to Integrate with Four New Services
News

Amazon’s Alexa+ to Integrate with Four New Services

24/12/2025
EA Investigated for AI-Generated Content in Battlefield 6
News

EA Investigated for AI-Generated Content in Battlefield 6

24/12/2025
Apple to Start iPhone 18 Production in January
News

Apple to Start iPhone 18 Production in January

24/12/2025
Please login to join discussion

Recent Posts

  • OpenAI Launches Customizable Skills for Codex Coding Agent
  • Amazon’s Alexa+ to Integrate with Four New Services
  • EA Investigated for AI-Generated Content in Battlefield 6
  • Apple to Start iPhone 18 Production in January
  • Connect Your Phone to Wi-Fi Easily

Recent Comments

No comments to show.
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
Tekmono is a Linkmedya brand. © 2015.

No Result
View All Result
  • News
  • Guides
  • Lists
  • Reviews
  • Deals