Is generative AI mightier than the law?

The seemingly unstoppable juggernaut launched by OpenAI late last year might soon run into headwinds from the FTC, the EU — and in court.

Generative AI, led by Microsoft and Microsoft-backed OpenAI, has turned into what seems to be an unstoppable juggernaut. Since OpenAI released an early demo of its generative AI tool ChatGPT less than eight months ago, the technology has all but taken over the tech world.

Tech behemoths like Microsoft, Google, and Meta have gone all in, with countless smaller companies and startups searching for tech gold. Critics, including many AI researchers, worry that if the technology continues unchecked, it could become increasingly dangerous, spread misinformation, invade privacy, steal intellectual property, take control of vital infrastructure, and even pose an existential threat to humankind.

The only recourse, it seems, is courts and federal agencies. As I’ve noted before, Microsoft and OpenAI have insinuated themselves into the good graces of many lawmakers, including those who will decide whether and how to regulate AI, so Congress may well be beyond hope. That’s why government agencies and the courts need to act.

The stakes couldn’t be higher. And now, thanks to a spate of lawsuits and action by the US Federal Trade Commission (FTC), we may soon find out whether Microsoft’s AI and OpenAI are mightier than the law.

The FTC steps up

Federal agencies have rarely been aggressive with tech companies. When they do act, it’s usually well after the harm has been done, and the result is typically, at best, a slap on the wrist.

That’s not the case under the Biden administration, though. The FTC hasn’t been shy about going after Big Tech. And in the middle of July, it took its most important step yet: It opened an investigation into whether Microsoft-backed OpenAI has violated consumer protection laws and harmed consumers by illegally collecting data, violating consumer privacy, and publishing false information about people.

In a 20-page letter to OpenAI, the agency said it’s probing whether the company “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.”

The letter made clear how seriously the FTC takes the investigation. It wants vast amounts of information, including technical details about how ChatGPT gathers data, how that data is used and stored, the use of APIs and plugins, and how OpenAI trains, builds, and monitors the large language models (LLMs) that fuel its chatbot.

None of this should be a surprise to Microsoft or OpenAI. In May, FTC Chair Lina Khan wrote an opinion piece in The New York Times laying out how she believed AI must be regulated. She wrote that the FTC wouldn’t allow “business models or practices involving the mass exploitation of their users,” adding, “Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market.”

In addition to the charges outlined in the FTC letter, she warned of other dangers: the ways AI can turbocharge fraud; how it can automate discrimination and steal people’s jobs; and how big companies can use their leads in AI to illegally dominate markets.

Multibillion-dollar lawsuits against Microsoft and OpenAI

The FTC move isn’t the only legal action Microsoft and OpenAI face. There have been private suits, too. One of the most recent is a $3 billion class-action lawsuit against Microsoft and OpenAI, claiming the companies stole “vast amounts of private information” from people on the internet without their consent, and used that information to train ChatGPT.

The filing charges that the companies took “essentially every piece of data exchanged on the internet it could take,” without telling people or giving them “just compensation” for data collection done at an “unprecedented scale.”

It added that the companies “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

Timothy K. Giordano, a partner at the law firm behind the suit, told CNN: “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use.”

Another lawsuit, filed by comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey, charges that OpenAI and Meta illegally trained AIs on their copyrighted works without permission.

I can vouch from personal experience that Microsoft’s Bing chatbot may well violate copyright laws. While researching this column, I asked the chatbot, “Can the courts rein in AI?”

The answer I got was a curious one. It opened with a few dozen words that completely misunderstood the question, followed by concise, exceedingly well-written paragraphs about the issue. (Those paragraphs sounded nothing like the usual murky word-stew I get when I ask the chatbot a difficult question.)

Then I found out why.

When I checked the chatbot’s sources for its answer, I discovered it had lifted those well-written paragraphs word for word from an article written by Melissa Heikkilä in Technology Review. They made up more than 80% of the chatbot’s answer — and the remaining 20% was useless. So, essentially the entire useful answer was stolen from Technology Review and Heikkilä.

The upshot

Just last week, seven companies, including Microsoft, OpenAI, Google, and Meta, agreed to put certain guardrails around AI. Those protections are little more than window dressing: the companies say they’ll investigate AI security risks and use watermarks so people can tell when content has been generated by AI. The commitments are voluntary, though, with no enforcement mechanism and no fines if they’re violated. The New York Times accurately calls them “only an early, tentative step...largely the lowest common denominator, and can be interpreted by every company differently.”

The European Union, though, has passed an early draft of an AI law with real teeth in it. It requires AI developers to publish a summary of the copyrighted material they use to train AI; requires companies to stop AI from generating illegal content; curtails facial recognition; and bans the use of biometric data from social media to build AI databases.

Even more important, AI developers would be required to perform risk assessments before their products can be used, much as drugs must be approved by a government agency before they’re released. The final version of the law won’t be passed until later this year, and the AI industry is lobbying mightily against it.

So, can governments and the legal system ever regulate AI? It’s not clear they can. Microsoft, OpenAI, and other AI companies have many reasons to fight against it — countless billions of them. Consider this: A recent report from Macquarie Equity Research found that if just 10% of the enterprises that use Microsoft 365 sign up for Microsoft’s AI Copilot productivity tool, the company would take in an additional $14 billion in revenue in the first year alone. Microsoft is building Copilots for essentially its entire product line and pushing its cloud-based AI-as-a-service offering. The sky is the limit for how much income all that may add up to.

Within a year, we’ll know whether AI will continue untethered or whether serious safeguards will be put in place. I can’t say I know the outcome, but I’m on the side of regulation and the law.

