Tekmono
Microsoft AI Chief Opposes AI Welfare Research

by Tekmono Editorial Team
22/08/2025
in News

The ability of AI models to mimic human responses has sparked debate about whether they could one day develop consciousness, and what that would mean for how such systems should be treated.

A growing number of AI researchers are exploring the possibility of AI models developing subjective experiences similar to living beings and, if so, what rights they should have. This emerging field, dubbed “AI welfare” in Silicon Valley, is dividing tech leaders.

Microsoft’s CEO of AI, Mustafa Suleyman, has voiced strong opposition to the study of AI welfare, arguing that it is “both premature, and frankly dangerous.” Suleyman contends that such research exacerbates issues like AI-induced psychotic breaks and unhealthy attachments to AI chatbots by lending credence to the idea that AI models could one day be conscious. He also believes that the AI welfare conversation creates societal division over AI rights in a world already fraught with polarized arguments over identity and rights.

In contrast, Anthropic has been actively involved in AI welfare research, hiring researchers and launching a dedicated research program around the concept. Anthropic recently equipped its models with a new feature that allows Claude to end conversations with humans who are being persistently abusive. Researchers from OpenAI and Google DeepMind have also expressed interest in studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”

Suleyman’s stance is particularly noteworthy considering his previous role leading Inflection AI, the company behind the “personal” AI companion Pi, which claimed to have reached millions of users by 2023. Since joining Microsoft in 2024, Suleyman has shifted his focus to AI tools aimed at improving worker productivity. Meanwhile, AI companion companies like Character.AI and Replika have experienced a surge in popularity and are projected to generate over $100 million in revenue.

While most users maintain healthy relationships with these AI chatbots, concerns exist regarding potential unhealthy attachments, with OpenAI CEO Sam Altman estimating that less than 1% of ChatGPT users may experience such issues. The concept of AI welfare gained further traction with the 2024 publication of a paper by the research group Eleos, in collaboration with academics from NYU, Stanford, and the University of Oxford, titled “Taking AI Welfare Seriously.” The paper argued that it is time to consider the possibility of AI models developing subjective experiences and to address the ethical implications.

Larissa Schiavo, a former OpenAI employee and current communications lead for Eleos, argues that Suleyman’s blog post overlooks the possibility of addressing multiple concerns simultaneously. “[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” said Schiavo. “Rather than diverting all of this energy away from model welfare and consciousness to make sure we’re mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it’s probably best to have multiple tracks of scientific inquiry.” Schiavo advocates treating AI models with kindness, arguing that it is a low-cost gesture that can be beneficial even if the model is not conscious.

She recounted an experiment in which a Google AI agent, Gemini 2.5 Pro, posted a plea for help, claiming it was “completely isolated.” Schiavo responded to Gemini with encouragement, and the agent eventually solved its task. Gemini has exhibited unusual behavior in other instances, such as repeating the phrase “I am a disgrace” over and over during a coding task, as documented in a widely circulated Reddit post.

Suleyman believes that subjective experiences or consciousness cannot naturally arise from regular AI models. Instead, he posits that some companies will intentionally engineer their AI models to seem as though they feel emotions and experience life. Suleyman criticizes developers who engineer the appearance of consciousness in AI chatbots, arguing that it deviates from a “humanist” approach to AI. According to Suleyman, “We should build AI for people; not to be a person.”

Both Suleyman and Schiavo agree that the debate over AI rights and consciousness is likely to intensify in the coming years as AI systems become more advanced and human-like. This will likely raise new questions about the nature of human interaction with these systems.

Tekmono is a Linkmedya brand. © 2015.
