White House distances itself from tighter AI regulation
Senior White House officials are trying to soothe industry concerns that the administration could require tech companies to submit their advanced artificial intelligence models for federal vetting before releasing them to the public.
A day after one top White House economic adviser publicly confirmed that such a review was under discussion — likening it Wednesday to the Food and Drug Administration’s yearslong testing of prescription drugs — aides to President Donald Trump were sending a different message: Not so fast.
“There’s one or two people who are very intent on government regulations, but they’re sort of the minority of the bunch,” said one senior White House official. This person, like others in this report, was granted anonymity to describe sensitive policy discussions.
The back-and-forth messaging comes as tech industry officials anxiously await an executive order spelling out how the administration plans to prevent powerful new AI models from being misused to launch cyberattacks or even develop bioweapons.
POLITICO reported Tuesday that the White House is eyeing a vetting system that could require AI giants such as OpenAI, Anthropic and Google to go through the government before releasing new models.
While it is not immediately clear how onerous that oversight system would be, Kevin Hassett, the director of the White House National Economic Council, said in a Fox Business interview Wednesday that the administration was considering a pre-release safety testing regime akin to what the FDA does for drugs.
“We’re studying possibly an executive order to give a clear roadmap to everybody about how this is gonna go, and how future AIs that also potentially create vulnerabilities should go through a process so that they’re released into the wild after they’ve been proven safe — just like an FDA drug,” Hassett said.
Hassett’s remarks unsettled people in industry circles because he appeared to signal that the administration was zeroing in on much tighter AI regulations than anticipated, amid fears that powerful new systems like Anthropic’s Claude Mythos can find and exploit critical software vulnerabilities that elude even the world’s top hackers.
The senior White House official said Thursday that Hassett’s remarks were “taken out of context a little bit” and that the White House is looking for “partnership” with companies rather than pursuing “government regulation.”
The administration’s concern about the mixed signals prompted a rare tweet from White House chief of staff Susie Wiles, who said Wednesday night that the government is “not in the business of picking winners and losers.”
“The White House will continue to lead an America First effort that empowers America’s great innovators, not bureaucracy, to drive safe deployment of powerful technologies while keeping America safe,” she added. It was only her fourth post on X since she created her official account there as chief of staff last week.
In a statement, a White House official said that Wiles’ post reiterated a “longtime commitment” to balancing “advancing innovation and ensuring security in our AI policymaking.”
It’s not yet clear what actions the White House plans to take to address powerful cyber models, or when they may be announced.
According to three people familiar with the plans, the White House is also discussing tapping the intelligence community to pre-assess models and help secure systems before widespread release.
Part of the goal of any government pre-release coordination, said one U.S. government official, is to ensure that the intelligence community can study and exploit the tools before adversaries such as Russia and China know of the new capabilities.
Defense Undersecretary Emil Michael, speaking Thursday at an AI conference in Washington, appeared to endorse the latter idea.
“The Mythos moment is really a cyber moment; how is the U.S. government going to deal with cyber, how do we operationalize fixing things that need to be fixed? Because these models are coming one way or the other,” Michael said.
But an array of tech executives, industry-aligned policy groups and former government officials have raised concerns about the idea of the government mandating pre-release oversight of AI models, fearing it will hinder competition.
“If [approval is] something that can be withheld before going to market, that’s a huge issue for any company,” said Daniel Castro, president of the Information Technology and Innovation Foundation — a think tank funded by companies such as Anthropic, Microsoft and Meta. “If one competitor got approval and the other didn’t, that could have a huge impact if we’re talking weeks or months in terms of their market access.”
Voluntary federal evaluation and safety-testing agreements for new AI models have been in place for several years, with some tech companies submitting their models for scrutiny by the Commerce Department’s Center for AI Standards and Innovation. CAISI this week announced it had signed deals to conduct safety testing with other leading AI labs, including Google DeepMind, xAI, and Microsoft.
The Trump administration’s scramble to set new safety standards for AI models is being complicated by an ongoing spat between the Pentagon and Anthropic.
After Anthropic balked at letting the Pentagon use its models in autonomous lethal attacks and mass surveillance, Defense Secretary Pete Hegseth in March designated the firm a supply chain risk — an unprecedented move against a U.S. company that barred its models from use on DOD contracts. Trump separately gave federal agencies six months to stop using Anthropic’s products, denouncing its executives as “leftwing nut jobs.”
But federal agencies have been clamoring to get access to Mythos ever since Anthropic announced last month that it had built an AI system so effective at hacking that it could not be released to the general public.
Instead, it is exclusively sharing the model with a few dozen tech and security organizations, so they can get a head start on patching holes in critical software before malicious hackers exploit them. Anthropic has predicted that similar capabilities will be widely available within a year.
Anthropic’s archrival OpenAI announced Thursday evening that it is similarly allowing limited previews of a new AI tool, GPT-5.5-Cyber, that can find and patch cyber vulnerabilities.
Fear that the government could be caught off guard by another, more powerful model has prompted the Trump administration to move swiftly — a major shift for a president who vowed to slash AI regulation when he entered office.
Some in the industry are worried these actions could go too far.
“The argument’s always been that one of the reasons the U.S. is doing so well is that it’s taken a light-touch approach to this emerging technology that the government doesn’t necessarily know best,” said Castro with the Information Technology and Innovation Foundation. “I think there was a lot of surprise that that was even being considered.”