OpenAI Backtracks on Plans to Drop Nonprofit Control


The company will become a public benefit corporation and the nonprofit that has controlled it will be its largest shareholder.

OpenAI’s San Francisco headquarters. The decision for the nonprofit that runs the company to retain control is a victory for OpenAI’s critics. Credit…Jim Wilson/The New York Times

OpenAI said on Monday that it was restructuring as a public benefit corporation, allowing the nonprofit that controls OpenAI to retain its grip on the company.

The decision is a victory for OpenAI’s critics, including one of its founders, Elon Musk, who complained that the company was too focused on profits and had abandoned its early plan to build artificial intelligence systems with safety foremost in mind.

The changes announced on Monday are the latest in years of corporate drama for what many consider to be the most influential A.I. company in the world. OpenAI’s ChatGPT chatbot, released in late 2022, was an overnight success that sent the rest of the tech industry scrambling. In just a few years, tech’s biggest companies have spent billions on their own A.I. projects, with hundreds of billions more planned for this decade.

Mr. Musk, who is now running his own A.I. company, sued OpenAI over plans it was putting into place to change its corporate structure from an unorthodox system that gave a nonprofit oversight of a for-profit company. But he was not the only critic of OpenAI’s planned changes. The attorneys general in California, where OpenAI has its headquarters, and in Delaware, where it was legally created, also said they were monitoring its restructuring. The office of the California attorney general, Rob Bonta, said in a statement that it was reviewing OpenAI’s new plan.

And in recent weeks, a number of academics from the legal community and experts such as Geoffrey Hinton, who won a Nobel Prize last year for his pioneering A.I. research, also publicly expressed concern about OpenAI’s direction.

The argument over how OpenAI should be structured and what its priorities should be homed in on a fundamental question about artificial intelligence: Should researchers rush headlong to develop new and more powerful A.I. systems? Or should the theoretical risk that A.I. presents to humanity inform everything those researchers create?

