OpenAI and Anthropic have both signed agreements with the U.S. government that allow testing of their new AI models before public release. The National Institute of Standards and Technology (NIST) announced that its AI Safety Institute will conduct AI safety research, testing, and evaluation in collaboration with both companies. Elizabeth Kelly, director of the AI Safety Institute, called the agreements an important milestone in the responsible stewardship of AI. OpenAI has previously conducted internal safety testing but has kept details of its models and training closely guarded, so the collaboration with NIST represents a new level of transparency and accountability for the company.
Government Collaboration and Regulation
The formal collaboration with NIST aligns with the Biden Administration’s October 2023 executive order on AI, which directs AI companies to give NIST access for red-teaming before releasing AI models to the public. OpenAI CEO Sam Altman has stressed the importance of national-level oversight of AI development, arguing that the U.S. needs to lead in this area. The partnership with NIST also includes sharing findings and feedback in collaboration with the UK AI Safety Institute.
Potential Criticisms
Critics have raised concerns that OpenAI’s collaboration with the government may be a strategic move to secure favorable regulation and suppress competition. Even amid broad calls for AI regulation and standardization, some view the partnership as a way for OpenAI to shape the regulatory landscape in its own favor. Still, given the growing recognition of the safety risks posed by generative AI, the collaboration represents a significant step toward addressing those challenges.