A potential executive order from the Trump administration threatens to upend its own industry-friendly tech agenda, prompted by a private warning that a new AI model from Anthropic could pose a significant cybersecurity risk to the nation's critical infrastructure. The move would mark a chaotic reversal of the administration's previous stance that deregulation is the key to winning the AI race against China.
"We all need to work together on this," Vice President JD Vance told chief executives including OpenAI's Sam Altman, Anthropic's Dario Amodei, and the heads of Google and Microsoft on a recent call, according to people familiar with the matter.
The alarm was raised over Anthropic's "Mythos" model, which demonstrated an advanced capability for finding software vulnerabilities on its own. In response, the White House has reportedly asked Anthropic to hold off on expanding access to Mythos and has tapped National Cyber Director Sean Cairncross to lead its response. This follows moves by several major labs, including Google, Microsoft, and xAI, to voluntarily work with the Commerce Department's Center for AI Standards and Innovation (CAISI) for pre-deployment evaluation.
The potential for a new, more restrictive federal framework introduces significant regulatory uncertainty for the entire AI sector, threatening to slow the pace of development and add compliance costs for firms like Microsoft (MSFT) and Alphabet (GOOGL), which have invested billions in the technology.
A Shift in Washington
The administration's sudden focus on AI safety marks a stark pivot. As recently as February, Vice President Vance warned a global summit in Paris that overregulation could kill the burgeoning industry. The new posture has been cheered by proponents of AI safety but has created friction within the administration. White House adviser and venture capitalist David Sacks has publicly downplayed the risk, stating on a podcast, "People are treating this like some existential threat. I don’t think it is."
Conversely, National Economic Council Director Kevin Hassett fueled criticism from administration allies by comparing the potential oversight to the Food and Drug Administration's process for clearing new drugs. "Importing the FDA approach into AI would upend President Trump’s current pro-growth AI policy," said Neil Chilson, head of AI policy at the Abundance Institute. White House chief of staff Susie Wiles appeared to push back on a heavy-handed approach, posting on X that the administration's goal is to empower "America’s great innovators, not bureaucracy."
Industry Navigates Uncertainty
The debate leaves the AI industry in a precarious position. Anthropic, which has previously advocated for federal guardrails, now finds its most powerful model at the center of a policy firestorm. OpenAI has also consulted the administration on its own advanced cyber model, GPT-5.5-Cyber, and is similarly limiting access.
For investors, the situation introduces a new layer of political risk to a sector already defined by high capital expenditures and intense competition. While a formal oversight body could standardize safety protocols, it could also create a bureaucratic bottleneck, slowing the go-to-market timeline for new models from leaders like Google, Microsoft, and OpenAI. The outcome of the White House's internal debate will determine whether the U.S. continues its light-touch approach or builds a new regulatory moat around its most advanced AI.
This article is for informational purposes only and does not constitute investment advice.