
OpenAI quietly lobbied for weaker AI regulations while publicly calling to be regulated

The company's lobbying efforts succeeded, by the way.
By Matt Binder
Credit: Bob Al-Greene / Mashable

OpenAI CEO Sam Altman has been very loud about the need for AI regulation during numerous interviews, events, and even while sitting before U.S. Congress.

However, according to OpenAI documents used in the company's lobbying efforts in the EU, there's a catch: OpenAI wants regulations that heavily favor the company, and it has worked to weaken proposed AI regulation. 

The documents, obtained by Time from the European Commission via freedom of information requests, give a behind-the-scenes peek into what Altman means when he calls for AI regulation.

In the document, titled "OpenAI's White Paper on the European Union's Artificial Intelligence Act," the company focuses on exactly what the title says: the EU's AI Act, and it attempts to change various designations in the law in ways that would narrow its scope. For example, "general purpose AI systems" like GPT-3 were classified as "high risk" in the EU's AI Act. 

According to the European Commission, the "high risk" classification would include systems that could result in "harm to people’s health, safety, fundamental rights or the environment." Examples include AI that could "influence voters in political campaigns and in recommender systems used by social media platforms." These "high risk" AI systems would be subject to legal requirements regarding human oversight and transparency.

"By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases," reads the OpenAI white paper. OpenAI also argued against classifying generative AI like the popular ChatGPT and the AI art generator Dall-E as “high risk.”

Basically, the position held by OpenAI is that the regulatory focus should be on the companies using language models, such as the apps that utilize OpenAI's API, not the companies training and providing the models.

OpenAI's stance aligned with Microsoft, Google

According to Time, OpenAI largely echoed positions held by Microsoft and Google when those companies lobbied to weaken the EU's AI Act regulations.

The section that OpenAI lobbied against ended up being removed from the final version of the AI Act.

OpenAI's successful lobbying efforts likely explain Altman's change of heart about OpenAI's operations in Europe. Altman previously threatened to pull OpenAI out of the EU over the AI Act. Last month, however, he reversed course, saying at the time that the previous draft of the AI Act "over-regulated but we have heard it's going to get pulled back." 

Now that certain parts of the EU's AI Act have been "pulled back," OpenAI has no plans to leave.

