OpenAI, Anthropic Agree to Give Feds Early Access to AI Models

Monday, September 2, 2024

OpenAI and Anthropic have formally agreed to give a US federal agency access to their upcoming AI models before public release.

The agreements, announced Thursday by the Department of Commerce’s National Institute of Standards and Technology (NIST), promise to help federal officials evaluate AI models for safety risks by giving NIST’s newly formed US Artificial Intelligence Safety Institute “access to major new models from each company prior to and following their public release.”

"Additionally, the US AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute,” the department added.

The deal arrives amid growing interest and concern about the capabilities of next-generation AI models, which some fear could replace human jobs or be abused to harm society. Earlier this year, OpenAI received criticism from a former company researcher who alleged the San Francisco lab prioritized launching new products over safety.

In OpenAI’s case, the company already seems to be honoring its agreement with NIST. According to The Information, OpenAI recently showed a new AI model called Strawberry to federal officials. The model reportedly excels at reliably solving math problems and completing computer programming tasks and could launch within ChatGPT as soon as this fall.

OpenAI didn’t immediately respond to a request for comment about its partnership with NIST. But Anthropic, which was founded by former OpenAI employees, told PCMag: “Our collaboration with the US AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment. This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”

In June, Anthropic gave the UK AI Safety Institute early access to its Claude 3.5 Sonnet model to further refine the safety mechanisms around the technology. The UK institute then shared its findings with the US AI Safety Institute.

"We have integrated policy feedback from outside subject matter experts to ensure that our evaluations are robust and take into account new trends in abuse," Anthropic said at the time. "This engagement has helped our teams scale up our ability to evaluate 3.5 Sonnet against various types of misuse."

By: DocMemory