In a significant expansion of government oversight, three of the world's largest AI developers have agreed to pre-release security reviews of their most advanced models.

In a move that substantially increases the government’s oversight of artificial intelligence, Alphabet’s Google, Microsoft Corp., and xAI have agreed to provide their advanced AI models for security assessment before public release. The agreement brings the three technology giants into a partnership with the U.S. Commerce Department’s Center for AI Standards and Innovation, or CAISI, which already has similar arrangements with OpenAI and Anthropic PBC.
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," Chris Fall, director of CAISI, said in a statement. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."
The agreements with OpenAI and Anthropic have also been renegotiated to align with President Donald Trump’s AI Action Plan, the agency noted. Since 2024, the center has completed more than 40 evaluations of AI models, including some that have not yet been released to the public. The expansion to include Google, Microsoft, and Elon Musk’s xAI follows mounting administration concerns over the capabilities of new systems like Anthropic’s Mythos model.
This expanded government access is a direct consequence of the national security questions raised by increasingly powerful AI. Anthropic’s own analysis of its Mythos model, which reportedly found thousands of vulnerabilities in critical software and infrastructure, has accelerated policy efforts within the Trump administration. The White House has already opposed Anthropic’s plans to widen access to Mythos, and the Defense Department is in a legal dispute with the company over whether it should be treated as a supply-chain risk.
The agreements give more authority to CAISI, a body established as the AI Safety Institute under the Biden administration in 2023 and re-established by the Trump administration last year. While its existence is not yet codified by law, Trump’s AI Action Plan directs the center to lead national security-related model assessments. This could pave the way for new enforcement of existing laws as regulators explore how to apply them to AI systems.
The administration’s strategy appears to be twofold. On one hand, it is pushing for greater safety and security reviews, driven by the potential for misuse of powerful AI. On the other, it has stated a goal of lifting some AI safety guardrails to accelerate the rollout of new models and ensure the U.S. maintains a competitive edge over China. That creates a delicate balancing act between mitigating risk and fostering rapid innovation. The new evaluation agreements, along with an executive order that may follow, represent the administration’s attempt to navigate that challenge by creating a formal government review process.