Microsoft pushes for government regulation of AI. Should we trust it?

The generative AI gold rush is on, but there are few real guardrails now governing its use. If action isn’t taken soon, any regulations might be too late to do any good.

By now, virtually everyone agrees that powerful generative AI needs to be regulated.

In its various forms, it presents a variety of potential dangers: helping authoritarian regimes, thanks to its ability to create misinformation; allowing Big Tech firms to establish monopolies; eliminating millions of jobs; taking over vital infrastructure; and — in the worst case — becoming an existential threat to humankind.

One way or another, governments around the world, including the regulation-averse United States, eventually need to set rules about how generative AI can and can’t be used, if only to show they’re taking the dangers seriously.

Such rules couldn’t be more important: without sufficient guardrails in place soon, it may be too late to rein in AI’s dangers.

That’s why Microsoft is pushing so hard for government action — that is, the kind of government action the company wants. Microsoft knows it’s AI gold-rush time, and the company that stakes its claim first will end up the winner. Microsoft has already staked that claim, and now wants to make sure the federal government won’t interfere.

Not surprisingly, Microsoft President Brad Smith and OpenAI CEO Sam Altman have become the feds’ go-to tech execs for advice on how to regulate generative AI. That puts Microsoft, which has invested $13 billion in OpenAI, in the driver's seat — it’s a safe bet Altman’s recommendations align with Smith’s.

What exactly are they advising the government to do? And will their recommendations truly rein in AI, or will they be mere window dressing?

Altman becomes the face of AI

More than anyone else, Altman has become the face of generative AI on Capitol Hill, the person elected officials call to learn more about it and for advice on regulations.

The reason is simple: OpenAI created ChatGPT, the chatbot that revolutionised AI when it was unveiled late in 2022.

Just as important, Altman has carefully courted Congress and presents himself not as a tech zealot, but as a reasonable executive who only wants good things for the world. Left unsaid are the billions of dollars that he, his company, and Microsoft stand to gain by ensuring that AI regulation mirrors what they want.

Altman began a charm offensive in mid-May that included a dinner with 60 members of Congress from both political parties. He testified before many of the same members on the Senate Judiciary subcommittee on privacy, technology and the law, where he was lauded in terms usually reserved for important foreign dignitaries.

At the hearing, subcommittee chair Sen. Richard Blumenthal (D-CT) — normally a critic of Big Tech — enthused: “Sam Altman is night and day compared to other CEOs. Not just in the words and the rhetoric but in actual actions and his willingness to participate and commit to specific action.”

Altman focused primarily on the apocalyptic, up to and including the destruction of humankind. He asked that Congress focus its regulations on these kinds of issues.

It was a bait-and-switch.

By focusing legislation on the dramatic-sounding but faraway apocalyptic risks posed by AI (which some see as largely theoretical rather than real at this point), Altman wants Congress to pass important-sounding, but toothless, rules.

Such rules would largely ignore the very real dangers the technology already presents: the theft of intellectual property, the spread of misinformation in all directions, job destruction on a massive scale, ever-growing tech monopolies, loss of privacy and worse.

If Congress goes along, Altman, Microsoft and others in Big Tech will reap billions, the public will remain largely unprotected, and elected leaders can brag about how they’re fighting the tech industry by reining in AI.

At the same hearing where Altman was hailed, New York University professor emeritus Gary Marcus issued a cutting critique of AI, Altman, and Microsoft. He told Congress that it faces a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.”

He charged that OpenAI is “beholden” to Microsoft and said Congress shouldn’t follow Altman’s recommendations.

Companies rushing to embrace generative AI care only about profits, Marcus warned, summing up his testimony succinctly: “Humanity has taken a back seat.”

Brad Smith weighs in

A week and a half after Altman’s appearance before Congress, Smith had his turn calling for AI regulation. Smith, a lawyer, joined Microsoft in 1993 and was put in charge of resolving the antitrust lawsuit brought against the company by the US Justice Department. In 2015, he became the company’s president and chief legal officer.

He knows his way around the federal government so well that Altman met with him for help formulating and presenting his Congressional proposals, and so well that the Washington Post recently published an insidery, adulatory article about his work on AI regulation, claiming, “His policy wisdom is aiding others in the industry.”

On May 25, Smith released Microsoft’s official recommendations for regulating AI. Unsurprisingly, they dovetail neatly with Altman’s views, highlighting the apocalyptic rather than the here-and-now. The only specific recommendation was one no one would disagree with: “Require effective safety brakes for AI systems that control critical infrastructure.”

That’s a given, of course, the lowest of low-hanging fruit. Beyond that, his recommendations were ones only a lawyer could love, full of the kind of high-minded legalese that boils down to: Do nothing, but make it sound important.

Things like, “Develop a broad legal and regulatory framework based on the technology architecture for AI,” and “pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology.”

In fairness, Smith did later note a few significant issues that need to be addressed: deep fakes, false AI-generated videos designed for disinformation; the use of AI by “foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians;” and “alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”

But he offered no regulatory proposals for them. And he left out the myriad other dangers posed by AI.

Will the cavalry come from Europe?

The US is generally anti-regulatory, especially with technology. Lobbying by Microsoft, OpenAI, Google and others will likely keep it that way. So it may be that Europe, more willing than US leaders to tackle Big Tech, will be the one to address AI’s dangers.

It’s already happening. The European Union recently passed a draft law regulating AI. It’s only a starting point; the law likely won’t be finalised until later this year.

But the early draft has sharp teeth. It requires AI developers to publish a summary of all copyrighted material used to train their systems; calls on developers to build in safeguards that stop AI from generating illegal content; curtails the use of facial recognition; and bans companies from using biometric data from social media to build their AI databases.

Even more far reaching is this, according to The New York Times: “The European bill takes a ‘risk-based’ approach to regulating AI, focusing on applications with the greatest potential for human harm. This would include where AI systems were used to operate critical infrastructure like water or energy, in the legal system, and when determining access to public services and government benefits. Makers of the technology would have to conduct risk assessments before putting the tech into everyday use, akin to the drug approval process.”

It would be difficult, if not impossible, for AI companies to run one system in Europe and a different one in the US. So a European law could force AI developers to follow its guidelines everywhere in the world.

Altman has been busy meeting European leaders, including Ursula von der Leyen, president of the European Commission (the executive branch of the European Union), trying to rein in those regulations, so far to no avail.

Though Microsoft and other tech companies may think they’ve got the political leaders in the US under control, when it comes to AI, Europe may be where they meet their match.

