OpenAI’s API may require a verified ID to access future AI models and capabilities
币圈狂人
04-14 19:30

OpenAI has revealed that organizations may soon be required to complete an ID verification process in order to access certain future AI models. The firm said it wants to prevent API misuse, curb unsafe AI use, and deter intellectual property theft.

The artificial intelligence firm said the new Verified Organization status will give developers a way to unlock access to the most advanced models and capabilities on the OpenAI platform. It also said that advancing through usage tiers will unlock higher rate limits across models.

OpenAI proposes ID verification for organizations

OpenAI now added the ability to verify organization pic.twitter.com/SCZuhpgJ90

— Jonah Manzano (@jonah_manzano) April 13, 2025

OpenAI mentioned on its website’s support page that it may require businesses to complete an ID verification process to access certain future AI models. The firm noted that the verification process, named Verified Organization, will be “a new way for developers to unlock access to the most advanced models and capabilities on the OpenAI platform.”

The tech company also acknowledged that verification will require a government-issued ID from one of the countries supported by OpenAI’s API. The artificial intelligence firm noted that it currently supports identification from over 200 countries.

The ChatGPT maker added that an ID can only verify one organization every 90 days and that not all organizations will be eligible for verification. OpenAI urged businesses to check again at a later date to see if verification becomes available for their organization.

“OpenAI released a new Verified Organization status as a new way for developers to unlock access to the most advanced models and capabilities on the platform and to be ready for the next exciting model release.”

~ Tibor Blaho, Lead Engineer at AIPRM Corp.

The support page reads, “At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely.” The firm also argued that a small minority of developers intentionally use the OpenAI APIs in violation of its usage policies. OpenAI maintained that “we’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”

OpenAI said verification will take only a few minutes and that there are no spend requirements for the business. The tech firm also highlighted that advancing through usage tiers will unlock higher rate limits across models.

The AI firm also acknowledged that verification will unlock access to advanced models and additional capabilities on the OpenAI platform, which will enable users to utilize the latest AI advancements.
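The article itself includes no code, but as a rough sketch of what verification-gated access looks like in practice, a developer can list the models their organization’s API key can currently see using the official OpenAI Python SDK. The snippet below is illustrative only; the identifier “gpt-4o” is just a placeholder, and which models actually appear depends on the organization’s usage tier and verification status.

```python
# Illustrative sketch: list the models visible to this organization's API key.
# Assumes the official OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Collect the IDs of all models this API key is allowed to see.
available = {model.id for model in client.models.list()}
print(f"{len(available)} models visible to this organization")

# "gpt-4o" is used here only as an example model identifier.
if "gpt-4o" in available:
    print("gpt-4o is available to this API key")
else:
    print("gpt-4o is not available; it may require a higher tier or verification")
```

Running a check like this before and after completing Verified Organization status would show whether any additional models have been unlocked for the account.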

In case verification is not available, OpenAI advises users to continue using the platform and existing models as they currently do. The company also noted that models requiring verification today might become available to all customers in the future, even without verification.

OpenAI seeks to boost security and prevent IP theft with ID verification 

The company could be pushing to beef up security around its products with the new verification process as its models become more sophisticated and capable. OpenAI has issued several reports regarding its efforts to detect and mitigate malicious use of its models, including by groups allegedly based in North Korea.

The firm may also be working to deter IP theft, based on a Bloomberg report earlier this year about a China-based AI lab. The report noted that OpenAI was investigating whether a group linked to DeepSeek exfiltrated large amounts of data through its API in late 2024, possibly for training models, which would violate the firm’s terms. The tech company also suspended access to its services for users in China, Hong Kong, and Macau last year.

The ChatGPT maker also said that it has been slightly more than a year since it became the first AI research lab to publish reports on its disruptions of malicious uses of its models. The firm noted it aims to support broader efforts by U.S. and allied governments, industry partners, and other stakeholders to prevent abuse by adversaries and other malicious actors.

