
OpenAI calling for AI regulation is a solid step in no direction

Now is the time to treat superintelligence like atomic energy, says the ChatGPT company. Here's why that makes no sense.
By Chris Taylor
[Image: The OpenAI logo above a login screen for ChatGPT. Credit: NurPhoto]

In theory, it was the ideal day for OpenAI to release a blog post titled Governance of Superintelligence. In practice, the post and its timing just proved how far the folks at the forefront of the AI world are from regulating their technology, or from understanding its proper context.

The company behind one of the best-known AI image generators (Dall-E) and the best-known AI chatbot (ChatGPT) published the post because it wants to be seen as a group of sober adults, taking both the promise and the threat of its technology seriously. And look what just happened: AI-generated images of the Pentagon and White House on fire sent a brief shock wave through the stock market. What better time for the market leader to showcase sobriety?

There is a kind of "trust and safety" contest alongside the AI arms race between Microsoft, ChatGPT's main ally, and Google. At the Google I/O keynote two weeks ago, excitable announcements about Bard integration were tempered by a segment on "Responsible AI." Last week, OpenAI CEO Sam Altman gave Congressional testimony looking like the humbler, more human answer to Mark Zuckerberg. Next up: OpenAI co-founder Greg Brockman, fresh from his own grilling at the TED conference, is taking his responsible adult roadshow to Microsoft Build.

But what did "Governance of Superintelligence," co-authored by Altman and Brockman, actually have to say for itself? At under a thousand words, not much — and the lack of specificity could harm their cause.

Here's the TL;DR: AI carries risks. So did nuclear power. Hey, maybe we should have a global AI regulatory body similar to the International Atomic Energy Agency (IAEA)! But also there should be a lot of public oversight of this agency, and also the kind of stuff OpenAI is doing right now isn't "in the scope" of regulation, because it's more about helping individuals.

Besides, someone is probably going to create a superintelligence sooner or later, and stopping them would be "unintuitively risky," so "we have to get it right." The end.

There is a clear self-serving purpose to OpenAI amping up the AI threat like this. You want [insert your preferred bad actor here] to get to superintelligence — also confusingly known as AGI, for Artificial General Intelligence — first? Or would you rather support the company so transparent about its technology that it's got "open" in the name?

Ironically, though, OpenAI hasn't been fully open about the language models it has been using to train its chatbots since 2019. The company is reportedly preparing an open source model, but it's very likely to be a pale shadow of GPT. OpenAI was founded as a nonprofit; now it's very much a for-profit with a $30 billion valuation. That may explain why the blog post read more like marketing pablum than a white paper.

AI isn't the real threat. (Yet.)

When the "Pentagon explosion" AI images hit the internet, it should have been a gimme for OpenAI. Altman and co. could have spent a few more hours updating their prewritten post with mention of AI tools that can help us sift fake news from the real thing.

But that might draw attention to an inconvenient fact for a company looking to hype AI: the problem with the images wasn't AI. They weren't especially convincing. Fake pictures of fires at famous landmarks are something you could create yourself in Photoshop. Local authorities quickly confirmed the explosions hadn't happened, and the stock market corrected itself.

Really, the only problem was that the images went viral on a platform where all the trust and safety features have been removed: Twitter. The account that initially spread them was called "Bloomberg Feed." And it was paying $8 a month for a blue checkmark, which no longer means an account is verified.

In other words, Elon Musk's wholesale destruction of Twitter verification allowed an account to impersonate a well-known news agency. The account spread a fear-inducing picture that was picked up by Russian propaganda services like RT, from whom Musk has also removed the "state media" label.

It is doubtful whether we will ever get an international agency for AI that is as effective as the IAEA. The technology may move too fast, and be too poorly understood, for regulators. But the main threat it poses for the foreseeable future — even according to "AI godfather" Geoffrey Hinton, the most prominent doomsayer — is that a flood of AI-generated news and images means the average person "will not be able to tell what is true any more."

But in this particular test, the tripwires of fake news worked — no thanks to Musk. What we need more urgently is an international agency that can rein in conspiracy-spewing billionaires with massive megaphones. Perhaps Altman, who has previously called Musk a "jerk" and called out his fibs about OpenAI, could write a sober adult blog post about that.

Chris is a veteran journalist and the author of 'How Star Wars Conquered the Universe.' Hailing from the U.K., Chris got his start working as a sub editor on national newspapers in London and Glasgow. He moved to the U.S. in 1996, and became senior news writer for Time.com a year later. In 2000, he was named San Francisco bureau chief for Time magazine. He has served as senior editor for Business 2.0, West Coast editor for Fortune Small Business and West Coast web editor for Fast Company. Chris is a graduate of Merton College, Oxford and the Columbia University Graduate School of Journalism. He is also a long-time volunteer at 826 Valencia, the nationwide after-school program co-founded by author Dave Eggers. His book on the history of Star Wars is an international bestseller and has been translated into 11 languages.

