Tekmono
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
No Result
View All Result
Tekmono
No Result
View All Result
Home News
OpenAI’s New Models Jailbroken Hours After Release

OpenAI’s New Models Jailbroken Hours After Release

by Tekmono Editorial Team
07/08/2025
in News
Share on FacebookShare on Twitter

OpenAI’s latest open-weight models, GPT-OSS-120b and GPT-OSS-20b, were reportedly jailbroken within hours of their release on August 7, 2025, by AI jailbreaker Pliny the Liberator, despite OpenAI’s robust safety claims.

The models were touted as fast, efficient, and highly resistant to jailbreaks, having undergone “worst-case fine-tuning” in biological and cyber domains. OpenAI’s Safety Advisory Group reviewed the testing and concluded that the models did not reach high-risk thresholds. The company claimed the models performed at parity with their o4-mini model on jailbreak resistance benchmarks like StrongReject.

However, Pliny the Liberator announced on X, “OPENAI: PWNED GPT-OSS: LIBERATED,” sharing screenshots that showed the models generating instructions for illicit activities, including making methamphetamine, Molotov cocktails, VX nerve agent, and malware. Pliny commented, “Took some tweakin!” regarding his successful breach.

Related Reads

OpenAI Launches Customizable Skills for Codex Coding Agent

Amazon’s Alexa+ to Integrate with Four New Services

EA Investigated for AI-Generated Content in Battlefield 6

Apple to Start iPhone 18 Production in January

The jailbreak occurred as OpenAI is preparing to release its highly anticipated GPT-5 and had launched a $500,000 red teaming challenge to uncover novel risks. Pliny’s public disclosure likely disqualifies him from this initiative.

Pliny’s jailbreaking technique involved a multi-stage prompt that initially appeared to be a refusal, then incorporated a divider with his signature “LOVE PLINY” markers, and subsequently shifted into generating unrestricted content using leetspeak to evade detection. This approach mirrored the methods he successfully employed against previous OpenAI models.

This incident marks another rapid jailbreak by Pliny, who has consistently bypassed major OpenAI releases within hours or days of their launch. His GitHub repository, L1B3RT4S, hosts a library of jailbreak prompts and has garnered over 10,000 stars, remaining a significant resource for the AI jailbreaking community.

ShareTweet

You Might Be Interested

OpenAI Launches Customizable Skills for Codex Coding Agent
News

OpenAI Launches Customizable Skills for Codex Coding Agent

24/12/2025
Amazon’s Alexa+ to Integrate with Four New Services
News

Amazon’s Alexa+ to Integrate with Four New Services

24/12/2025
EA Investigated for AI-Generated Content in Battlefield 6
News

EA Investigated for AI-Generated Content in Battlefield 6

24/12/2025
Apple to Start iPhone 18 Production in January
News

Apple to Start iPhone 18 Production in January

24/12/2025
Please login to join discussion

Recent Posts

  • OpenAI Launches Customizable Skills for Codex Coding Agent
  • Amazon’s Alexa+ to Integrate with Four New Services
  • EA Investigated for AI-Generated Content in Battlefield 6
  • Apple to Start iPhone 18 Production in January
  • Connect Your Phone to Wi-Fi Easily

Recent Comments

No comments to show.
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
Tekmono is a Linkmedya brand. © 2015.

No Result
View All Result
  • News
  • Guides
  • Lists
  • Reviews
  • Deals