Tekmono
OpenAI Rushes AI Model Testing to Days Not Months


by Tekmono Editorial Team
14/04/2025
in News

OpenAI has drastically cut down the evaluation time for its new AI models from several months to just days, raising concerns among its staff and third-party testers regarding the thoroughness of safety evaluations.

Eight individuals, either OpenAI staff members or third-party testers, disclosed that they were given just days to complete evaluations of new models, a process that typically takes several months. These evaluations are crucial for identifying potential model risks and other harms, such as the possibility of a user jailbreaking a model to obtain instructions for creating a bioweapon. For comparison, sources noted that OpenAI gave them six months to review GPT-4 before its release, and concerning capabilities were discovered only after two months.

The sources further revealed that OpenAI’s tests have become less thorough, with insufficient time and resources dedicated to properly catching and mitigating risks. One person testing o3, the full version of o3-mini, stated, “We had more thorough safety testing when [the technology] was less important.” They described the current approach as “reckless” and “a recipe for disaster.” The rush is attributed to OpenAI’s desire to maintain a competitive edge as open-weight models from competitors such as Chinese AI startup DeepSeek gain ground.


OpenAI is rumored to be releasing o3 as early as next week, a timeline that sources say left them under a week for testing. The rush highlights the absence of government regulation for AI models, including any requirement to disclose model harms. Companies like OpenAI had signed voluntary agreements with the Biden administration to conduct routine testing with the US AI Safety Institute, but those agreements have fallen away under the Trump administration.

During the open comment period for the Trump administration’s forthcoming AI Action Plan, OpenAI advocated for a similar arrangement to avoid navigating a patchwork of state-by-state legislation. Outside the US, the EU AI Act will require companies to risk-test their models and document the results. Johannes Heidecke, head of safety systems at OpenAI, said, “We have a good balance of how fast we move and how thorough we are.” Testers, however, expressed alarm, especially given other holes in the process, such as evaluating less-advanced versions of models than those released to the public, or citing an earlier model’s capabilities rather than testing the new model itself.

Tekmono is a Linkmedya brand. © 2015.
