AI still requires human expertise

AI generates a lot of answers and saves a lot of time, but it’s too often incomplete or untrustworthy.


From GitHub Copilot to ChatGPT-infused Bing search, AI increasingly permeates our everyday lives. 

While directionally good (machines do more work so people can focus their time elsewhere), you still need a fair amount of expertise in a given field to trust the results AI offers.

Ben Kehoe, former cloud robotics research scientist for iRobot, argues that people still have to take ultimate responsibility for whatever the AI suggests, which means you have to be able to judge whether those suggestions are any good.

Accountability for results

We’re in the awkward toddler phase of AI, when it shows tremendous promise but it’s not always clear just what it will become when it grows up.

I've mentioned before that AI’s biggest successes to date haven’t come at the expense of people, but rather as a complement to them. Think of machines running compute-intensive queries at massive scale, answering questions that people could handle, but far more slowly.

Now we have things like “fully autonomous self-driving cars” that are anything but. 

Not only is the AI/software not nearly good enough yet, but the laws still won’t allow a driver to blame the AI for a crash (and there are plenty of crashes—at least 400 last year). 

To cite just one more example, ChatGPT is amazing right up until it starts making up information, as it did during the public launch of the new AI-powered Bing.

And so on. This isn’t to deprecate these or other uses of AI. 

Rather, it’s a reminder that, as Kehoe argues, people can’t blame AI for the outcomes of using that AI. 

He stresses, “A lot of the AI takes I see assert that AI will be able to assume the entire responsibility for a given task for a person, and implicitly assume that the person’s accountability for the task will just sort of … evaporate?”

People are responsible if their Tesla crashes into another car. They’re also responsible for whatever they choose to do with ChatGPT, and for copyright infringement if DALL-E misuses protected material.

For me, such accountability becomes most critical when using AI tools like GitHub Copilot for work.

Watching the watchers

It’s not hard to find developers benefiting from Copilot.

Here’s one developer who appreciated its quick API suggestions but otherwise found it “wonky” and “slow.”

There are plenty of other mixed reviews. Developers like how Copilot fleshes out boilerplate code, finds and suggests relevant APIs, and more.
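As a concrete illustration (my own sketch, not an example from the article), here’s the kind of boilerplate completion developers describe: you write a signature and a docstring, and the assistant drafts the rest. The function name, endpoint, and requests-based implementation are all hypothetical.

import requests

def fetch_user(api_base: str, user_id: int) -> dict:
    """Fetch a user record from a REST API and return it as a dict."""
    # An assistant will typically draft everything below from the
    # docstring alone: URL construction (hypothetical /users endpoint),
    # the HTTP call, basic error handling, and JSON decoding.
    response = requests.get(f"{api_base}/users/{user_id}", timeout=10)
    response.raise_for_status()
    return response.json()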

Developer Edwin Miller notes that Copilot’s suggestions are “generally accurate,” which is both good and bad. It’s good that Copilot can be trusted most of the time, but that’s also the problem: It can only be trusted most of the time. To know when its suggestions can’t be trusted, you have to be an experienced developer.
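Here’s a toy sketch (again mine, not the article’s) of why “generally accurate” cuts both ways. The first function is the kind of plausible completion an assistant might offer for a leap-year check: it passes a casual spot check but silently ignores the Gregorian century rules.

def is_leap_year_suggested(year: int) -> bool:
    # A plausible AI completion: correct for almost every year, but
    # wrong for centuries like 1900 and 2100, which are not leap years.
    return year % 4 == 0

def is_leap_year(year: int) -> bool:
    # What an experienced reviewer would insist on: divisible by 4,
    # except centuries, unless also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Spot check: both return True for 2024, but only the reviewer's
# version correctly returns False for 1900.

A quick test with 2024 tells you nothing is wrong; only a developer who knows to try 1900 catches the bug.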

Again, this isn’t a big problem. 

If Copilot helps developers save some time, that’s good, right? It is, but it also means developers must take responsibility for the results of using Copilot, so it may not always be a great option for developers earlier in their careers.

What serves as a shortcut for an experienced developer could lead to bad results for a less experienced one. It’s probably unwise for a newbie to take those shortcuts anyway, as doing so could stifle their learning of the craft of programming.

So, yes, by all means, let’s use AI to improve our driving, searching, and programming. But let’s also remember that until we have full trust in its results, experienced people need to keep their proverbial hands on the wheel.

