
OpenAI Introduces Verification Process for AI Access

by Tekmono Editorial Team
14/04/2025
in News

OpenAI is considering a new verification process for organizations to access its future AI models via its API, aiming to enhance security and mitigate unsafe AI usage.

According to a support page published on OpenAI’s website, the verification process, called “Verified Organization,” is designed to unlock access to the most advanced models and capabilities on the OpenAI platform. The process requires a government-issued ID from one of the countries supported by OpenAI’s API. An ID can only verify one organization every 90 days, and not all organizations will be eligible for verification.
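
For developers, the practical effect is that some models would be callable through the API only once their organization is verified. The sketch below is a rough illustration rather than anything OpenAI has documented: it uses the official openai Python SDK to try a hypothetical gated model and fall back to a broadly available one when the request is rejected. The gated model name and the exact error an unverified organization would receive are assumptions.

```python
# Illustrative sketch only: "gpt-future-model" is a placeholder, and the
# assumption that unverified organizations receive a permission or not-found
# error for gated models is not confirmed by OpenAI.
from openai import OpenAI, NotFoundError, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GATED_MODEL = "gpt-future-model"  # hypothetical verified-only model
FALLBACK_MODEL = "gpt-4o-mini"    # broadly available model

def ask(prompt: str) -> str:
    """Try the gated model first; fall back if this organization lacks access."""
    for model in (GATED_MODEL, FALLBACK_MODEL):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except (PermissionDeniedError, NotFoundError):
            # Unknown or restricted model: under the new scheme this could
            # mean the organization has not completed verification.
            print(f"No access to {model}; trying the next option.")
    raise RuntimeError("No accessible model found for this organization.")

if __name__ == "__main__":
    print(ask("What does OpenAI's Verified Organization status require?"))
```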

The verification process is intended to mitigate unsafe use of AI while keeping advanced models available to the broader developer community. OpenAI stated that a small minority of developers intentionally use its APIs in violation of its usage policies, which prompted the new verification step, and that it takes seriously its responsibility to ensure AI is both broadly accessible and used safely.

The new verification process appears aimed at tightening security around OpenAI's products as they become more sophisticated and capable. OpenAI has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups allegedly based in North Korea. The verification itself takes only a few minutes to complete.

The move may also be intended to prevent intellectual property theft. According to a report from Bloomberg earlier this year, OpenAI was investigating whether a group linked with DeepSeek, a China-based AI lab, exfiltrated large amounts of data through its API in late 2024, possibly for training models, in violation of OpenAI's terms.

OpenAI blocked access to its services in China last summer, further highlighting its efforts to manage and secure its API usage. The company’s actions demonstrate its commitment to ensuring the safe and responsible use of its AI models.
