Tekmono
OpenAI Rushes AI Model Testing to Days Not Months

by Tekmono Editorial Team
14/04/2025
in News

OpenAI has drastically cut down the evaluation time for its new AI models from several months to just days, raising concerns among its staff and third-party testers regarding the thoroughness of safety evaluations.

Eight individuals, either OpenAI staff members or third-party testers, disclosed that they were given just days to complete evaluations of new models, a process that previously took several months. These evaluations are crucial for identifying potential model risks and other harms, such as the possibility of a user jailbreaking a model to obtain instructions for creating a bioweapon. By comparison, sources noted that OpenAI allowed six months to review GPT-4 before its release, and concerning capabilities were discovered only after two months.

The sources further revealed that OpenAI’s tests have become less thorough, with insufficient time and resources to properly catch and mitigate risks. One person testing o3, the full version of o3-mini, said, “We had more thorough safety testing when [the technology] was less important.” They described the current approach as “reckless” and “a recipe for disaster.” The rush is attributed to OpenAI’s desire to maintain a competitive edge as open-weight models from rivals such as Chinese AI startup DeepSeek gain ground.


OpenAI is rumored to be releasing o3 next week, which sources claim hastened the timeline to under a week. This development highlights the lack of government regulation for AI models, including requirements to disclose model harms. Companies like OpenAI had signed voluntary agreements with the Biden administration to conduct routine testing with the US AI Safety Institute, but these agreements have fallen away under the Trump administration.

During the open comment period for the Trump administration’s forthcoming AI Action Plan, OpenAI advocated for a similar arrangement to avoid navigating a patchwork of state-by-state legislation. Outside the US, the EU AI Act will require companies to risk-test their models and document the results. Johannes Heidecke, head of safety systems at OpenAI, claimed, “We have a good balance of how fast we move and how thorough we are.” Testers nonetheless expressed alarm, pointing to other holes in the process, such as evaluating less-advanced versions of models than those ultimately released to the public, or referencing an earlier model’s capabilities rather than testing the new one itself.



Tekmono is a Linkmedya brand. © 2015.