Sam Altman, CEO of ChatGPT-maker OpenAI, warned at a Senate hearing Thursday that requiring government approval to release powerful artificial intelligence software would be “disastrous” for the United States’ lead in the technology.
It was a striking reversal from his comments at a Senate hearing two years ago, when he listed creating a new agency to license the technology as his “number one” recommendation for making sure AI was safe.
Altman’s U-turn underscores a transformation in how tech companies and the U.S. government talk about AI technology.
Widespread warnings about AI posing an “existential risk” to humanity and pleas from CEOs for speedy, preemptive regulation on the emerging technology are gone. Instead there is near-consensus among top tech executives and officials in the new Trump administration that the United States must free companies to move even faster to reap economic benefits from AI and keep the nation’s edge over China.
“To lead in AI, the United States cannot allow regulation, even the supposedly benign kind, to choke innovation and adoption,” Sen. Ted Cruz (R-Texas), chair of the Senate Committee on Commerce, Science, and Transportation, said Thursday at the beginning of the hearing.
Venture capitalists who had expressed outrage at the Biden administration’s approach to AI regulation now have key roles in the Trump administration. Vice President JD Vance, himself a former venture capitalist, has become a key proponent of laissez-faire AI policy at home and abroad.
Critics of that new stance warn that AI technology is already causing harms to individuals and society. Researchers have shown that AI systems can absorb racism and other biases from the data they are trained on. AI-powered image generators are now commonly used to harass women with nonconsensual pornographic images and have also been used to make child sexual abuse images. In April, Congress passed a bipartisan bill that makes it a crime to post nonconsensual sexual images, including AI-generated ones.
Rumman Chowdhury, the State Department’s U.S. science envoy for AI during the Biden administration, said the tech industry’s narrative around existential concerns distracted lawmakers from addressing real-world harms.
The industry’s approach ultimately enabled a “bait and switch,” she said: executives pitched regulation built around speculative concepts like self-replicating AI while stoking fears that the United States needed to beat China to building these powerful systems.
They “subverted any sort of regulation by triggering the one thing the U.S. government never says no to: national security concern,” said Chowdhury, who is chief executive of the nonprofit Humane Intelligence.
Early warnings
The AI race in Silicon Valley, triggered by OpenAI’s release of ChatGPT in November 2022, was unusual for a major tech industry frenzy in how hopes for the technology soared alongside fears of its consequences.
Many employees at OpenAI and other leading companies were associated with the AI safety movement, a strand of thought focused on concerns about humanity’s ability to control theorized “superintelligent” future AI systems.
Some tech leaders scoffed at what they called science-fiction fantasies, but concerns about superintelligence were taken seriously among the ranks of leading AI executives and corporate researchers. In May 2023, hundreds of them signed on to a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Fears of superpowerful AI also gained a foothold in Washington and other world centers of tech policy development. Billionaires associated with the AI safety movement, such as Facebook co-founder Dustin Moskovitz and Skype co-founder Jaan Tallinn, funded lobbyists, think tank papers and fellowships for young policy wonks to raise political awareness of the big-picture risks of AI.
Those efforts appeared to yield results, mingling with concerns that regulators shouldn’t ignore the early days of the tech industry’s new obsession as they had with social media. Politicians from both parties in Washington advocated for AI regulation, and when world leaders gathered in the United Kingdom for an international AI summit in November 2023, concerns about future risks took center stage.
“We must consider and address the full spectrum of AI risk, threats to humanity as a whole as well as threats to individuals, communities, to our institutions and to our most vulnerable populations,” then-Vice President Kamala Harris said in an address at the U.K. summit. “We must manage all these dangers to make sure that AI is truly safe.”
At a sequel to that gathering held in Paris this year, the changed attitude toward AI regulation among governments and the tech industry was plain. Safety was de-emphasized in the Paris summit’s final communiqué compared with that from the U.K. summit. Most world leaders who spoke urged countries and companies to accelerate development of smarter AI.
Vance in a speech criticized attempts to regulate the technology. “We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies,” Vance said. “The AI future is not going to be won by hand-wringing about safety.” He singled out the European Union’s AI regulation for criticism.
Weeks later, the European Commission moved to weaken its planned AI regulations.
New priorities
After his return to office, President Donald Trump moved swiftly to reverse the AI agenda under President Joe Biden, the centerpiece of which was a sweeping executive order that, among other things, required companies building the most powerful AI models to run safety tests and report the results to the government.
Biden’s rules had angered start-up founders and venture capitalists in Silicon Valley, who argued that they favored bigger companies with political connections. The issue, along with tech leaders’ opposition to Biden’s antitrust policy, contributed to a surge of support for Trump in Silicon Valley.
Trump repealed Biden’s AI executive order on the first day of his second term and appointed several Silicon Valley figures to his administration, including David Sacks, a prominent critic of Biden’s tech agenda, as his crypto and AI policy czar. This week, the Trump administration scrapped a Biden-era plan to strictly limit chip exports to most countries, which had aimed to keep chips from reaching China through intermediary nations.
Altman’s statements Thursday are just one example of how tech companies have nimbly matched the Trump administration’s tone on the risks and regulation of AI.
Microsoft President Brad Smith, who in 2023 also advocated for a federal agency focused on policing AI, said at the hearing Thursday that his company wants a “light touch” regulatory framework. He added that long waits for federal wetland construction permits are one of the biggest challenges for building new AI data centers in the United States.
In February, Google’s AI lab DeepMind scrapped a long-held pledge not to develop AI that would be used for weapons or surveillance. It is one of several leading AI companies to recently embrace the role of building technology for the U.S. government and military, with executives arguing that AI should be controlled by Western countries.
OpenAI, Meta and AI company Anthropic, which develops the chatbot Claude, all updated their policies over the past year to get rid of provisions against working on military projects.
Max Tegmark, a professor at the Massachusetts Institute of Technology who researches AI and is president of the Future of Life Institute, a nonprofit focused on the potential risks of supersmart AI, said the lack of AI regulation in the United States is “ridiculous.”
“If there’s a sandwich shop across the street from OpenAI or Anthropic or one of the other companies, before they can sell even one sandwich they have to meet the safety standards for their kitchen,” Tegmark said. “If [the AI companies] want to release superintelligence tomorrow, they’re free to do so.”
Tegmark and others continue to research the potential risks of AI, hoping to push governments and companies to reengage with the idea of regulating the technology. Last month, AI safety researchers gathered for a summit in Singapore, which Tegmark’s organization, in an email to media outlets, called a step forward after the “disappointments” of the Paris meeting where Vance spoke.
“The way to create the political will is actually just to do the nerd research,” Tegmark said.
Will Oremus contributed to this report.