
Is it dumb to trust smart technology?

Automation is nice, but it doesn't mean you should hand over responsibility to the machines.


Did we learn nothing from Stanley Kubrick and Arthur C. Clarke's 1968 sci-fi epic, 2001: A Space Odyssey?

In the film, astronauts on a mission to Jupiter discover that HAL 9000, the artificial intelligence computer that controls and automates every function on the spacecraft, has started seriously glitching. The astronauts get worried, HAL gets paranoid -- yada, yada, yada -- HAL kills nearly everyone on the ship.

The moral of the story is that when lives depend on fully automated systems, it's a good idea to keep an eye on those systems anyway. (And if that's not the moral of the story, it should have been.)

How do you use something that's fully automatic, anyway? What is the responsibility of the "user"? Can we just hand over control to the bots?

Recent events in the news suggest that when it comes to using our automatic products and features, some people are doing it wrong.

The Petnet failure

Petnet is a $149 cloud-controlled smart feeder for dogs and cats that automatically dispenses pet food on a schedule. The feeder connects to the Internet via your home Wi-Fi network; you control it with an iOS app, and you can even dispense treats manually (for example, to assuage your own guilt for leaving Skippy in the care of an appliance).

Petnet is also tied to a pet food delivery service, which can be automated through Amazon's Dash program. You don't have to order pet food; the feeder will do it for you. (As soon as dog-walking robots and dog-petting machines come on the market, we can disengage from Skippy altogether!)

Crucially, Petnet monitors your pet's food and water consumption to make sure Skippy doesn't get fat.

Sounds awesome, right?

But when a Google-provisioned service that the Petnet cloud depends on went down, some 10 percent of Petnet feeders stopped working properly for about 10 hours. Although the company claims automated feeding schedules were unaffected, users lost the ability to feed manually or change schedules. Some pets went hungry.

Petnet sent an email to customers, advising them to "please ensure that your pets have been fed manually." But many customers were relying on Petnet to feed their pets while away on summer vacation.
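The design lesson here is older than the Internet of Things: anything that has to keep working should degrade gracefully when its network dependencies disappear. Petnet hasn't published its firmware, so the sketch below is purely hypothetical -- the endpoint, file path and schedule format are all invented -- but it shows, in Python, the general pattern a connected feeder could follow: cache the last known-good feeding schedule locally and fall back to it whenever the cloud is unreachable.

    # Hypothetical sketch only -- not Petnet's actual firmware. A connected
    # feeder caches its schedule locally, so feeding continues even when the
    # cloud service (or the network) is down.
    import json
    import time
    import urllib.request

    CLOUD_URL = "https://api.example-feeder.com/schedule"  # invented endpoint
    CACHE_PATH = "/var/lib/feeder/schedule.json"           # local fallback copy

    def fetch_schedule():
        """Prefer the cloud copy; fall back to the cached one on any failure."""
        try:
            with urllib.request.urlopen(CLOUD_URL, timeout=5) as resp:
                schedule = json.load(resp)
            with open(CACHE_PATH, "w") as f:
                json.dump(schedule, f)       # refresh the last known-good copy
            return schedule
        except OSError:
            with open(CACHE_PATH) as f:      # cloud is down: use the cached copy
                return json.load(f)

    def run_feeder(dispense):
        last_minute = None
        while True:
            schedule = fetch_schedule()
            now = time.strftime("%H:%M")
            if now != last_minute:           # dispense at most once per minute
                for meal in schedule["meals"]:   # e.g. [{"time": "07:00", "grams": 60}]
                    if meal["time"] == now:
                        dispense(meal["grams"])  # hardware call, not modeled here
                last_minute = now
            time.sleep(30)

A feeder built this way would lose its remote features during an outage, but Skippy would still eat on time.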

The Nest Thermostat meltdown

One of the first mainstream Internet of Things devices, Google's Nest Learning Thermostat automatically adjusts home temperatures.

Some Nest thermostats experienced a failure last week during a nationwide heat wave. The company issued a statement saying that a "small percentage of Nest Thermostats and Nest Protects" appeared to be offline, even though they were still functioning.

Back in January, a widespread glitch caused Nest thermostats to drain their own batteries, then fail to function. Worse, this occurred during an East Coast cold snap. Many Nest customers were left in the cold.

I'm not aware of any deaths, illnesses or injuries from the overheating and overcooling that resulted when the Nest's automatic temperature control failed, but it's a possibility for our automated future.

As a larger percentage of temperature controls become Nest-like automated systems, including in the homes of disabled, elderly or sick people, problems with those systems could be life-threatening.

The Tesla Autopilot crash

A Tesla crash in May raised questions about Tesla's Autopilot feature. The car was reportedly in Autopilot mode when it crashed into the side of a tractor trailer, killing the driver. Tesla claimed it was the first known death in more than 130 million miles of Autopilot operation.

The U.S. National Transportation Safety Board found that the car was traveling at 74 mph in a 65 mph zone.

Some Teslas have a range of automation features, including a beta Autosteer mode, an Auto Lane Change feature, an Automatic Emergency Steering and Side Collision Warning system, and Autopark.

While attention has been paid to the so-called Autopilot set of features, the real failure appears to have been the automatic emergency braking system, which did not engage before the crash. The likely reason: the white side of the truck was brightly lit by the sun, and the Tesla's vision system couldn't distinguish it from the sky.
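Tesla's actual perception stack (supplied by Mobileye at the time) is vastly more sophisticated than anything sketched here, but a toy example makes the failure mode concrete: to a naive classifier that leans on brightness, a sunlit white surface and a bright sky can look like the same thing.

    # Toy illustration only -- not Tesla's or Mobileye's algorithm. A naive
    # brightness threshold shows why a brightly lit white surface can be
    # misread as "sky."
    def classify_pixel(brightness):
        """Naive rule: very bright pixels are 'sky'; everything else, 'obstacle'."""
        return "sky" if brightness > 0.85 else "obstacle"

    print(classify_pixel(0.95))  # bright sky         -> 'sky'
    print(classify_pixel(0.40))  # dark vehicle       -> 'obstacle'
    print(classify_pixel(0.92))  # sunlit white truck -> 'sky' (the dangerous case)

Real systems typically fuse cameras with radar or other sensors precisely to catch this kind of single-sensor blind spot, which is part of why this failure drew so much scrutiny.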

Some speculate that the driver wasn't paying attention to the road. The truck driver claims that the driver was "playing Harry Potter on the TV screen" in the car at the time of the crash.

News of the crash didn't stop the occasional abuse of Tesla Autopilot mode. A recent YouTube video, which has since been removed, shows a man playing Pokémon Go with both hands while Tesla’s Autopilot handled the driving.

How to use fully automatic products and features

Like a Tesla in both Autopilot and Ludicrous modes, we're screaming toward a future where many of our appliances, equipment, vehicles, gadgets and services are completely automated, controlled remotely by artificial intelligence or locally by algorithms.

It's important for us, as consumers and users, to learn how to safely incorporate these technologies into our lives. What these events tell us is that the right way to use automation is to treat it like the convenience it is, and not a replacement for human awareness, monitoring and judgment.

Pets can't be left entirely in the care of a cloud service. Just as before the automatic-pet-feeder era, a human guardian who cares must be available to check on, feed or pet-sit any pet when we go on vacation. We can't turn pets' lives and well-being completely over to an app (unless, of course, those pets are themselves robots).

Climate control, and by extension smoke and carbon monoxide detection, can't be left entirely in the hands of the machines. The elderly or others who might not be able to take care of themselves can't be placed in the hands of algorithms, because sometimes algorithms go haywire.

The really hard problem is this: What should we think, and do, about self-driving cars?

The Tesla crash got pundits and social media commenters questioning the wisdom of self-driving cars. The problem with that impulse is that automated emergency braking on many different makes and models of cars will save lives, according to the Insurance Institute for Highway Safety.

It seems likely that the crash was caused by human error. It's reasonable to expect the driver of a car to apply the brake when speeding toward a truck in the road. And when we consider that human drivers are at fault for thousands of fatal car accidents each year, we should be pushing for truly self-driving cars, because people are more dangerous behind the wheel than the machines are.

Yet at the same time, I think that for the foreseeable future, the right way to "use" a fully automated, fully self-driving car is to let the car drive, but always have someone behind the wheel, paying full attention to the road and ready to take over control at any time.

The Googles and the Teslas of the world will tell us that we don't need steering wheels, brakes or even windshields in self-driving cars, and that the car will drive more safely than we can. They'll easily back up those statements with statistics that show far fewer accidents compared with human-driven cars. But even if automated driving brings down the car-related fatality rate to, say, 10 percent of what it was with human drivers, we're still talking about thousands of people being killed each year.
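To put rough numbers on that claim (the arithmetic is mine, and so is the assumption of roughly 35,000 U.S. road deaths per year, which is in the ballpark of recent federal figures):

    # Back-of-the-envelope arithmetic for the "10 percent" scenario above.
    # Assumption (mine, not Tesla's or Google's): ~35,000 U.S. road deaths/year.
    annual_fatalities = 35_000
    automated_share = 0.10          # fatality rate drops to 10% of today's

    print(annual_fatalities * automated_share)   # 3500.0 -- still thousands dead

Even in that optimistic scenario, "far fewer" deaths still means hundreds of people killed every month.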

If you were to leave your baby in a public park and go shopping, statistically speaking it's unlikely that anyone would harm the baby. But you'd never do that because, as a parent, you're not going to take any chances. Similarly, parents will not -- and should not -- trust the lives of their children to automated driving systems. The best approach is to let the self-driving features do the driving while paying full attention to everything that's happening.

What we need is a set of cultural norms that make it clear to people that automating important things doesn't and can't replace a human being paying attention.

Because, in the words of HAL 9000 from the movie 2001: A Space Odyssey, any harm that results from turning our loved ones' safety over to the machines "can only be attributable to human error."

Words of wisdom, HAL. Words of wisdom. Now open the pod bay doors, HAL.

HAL?

