The Ethics of Autonomous Vehicles

This Friday my colleagues at the Accenture Digital Hub in downtown Chicago will be participating in a lunchtime consortium. It’s a debate of sorts. The topic is autonomous vehicles and ethics.

I can’t make it, so I volunteered to compose a little “thought-starter.” Here it is:

Dear Fellow “Digital Hubsians,”

Since I can’t be present, I asked if I could kick off proceedings with an introduction to the topic. A wedding out of town has taken me away, but I am present in spirit, (obviously) excited about this topic, and eager to hear how the discussion goes.

First of all, what is ethics? I’d say it starts with the Golden Rule: Do unto others as you would like done to you.

There’s an upgrade of sorts that fits even better with autonomous vehicles. It’s been called the Platinum Rule: Do unto others as they would like done to them.

Obviously the Platinum Rule is trickier, because you have to be empathetic enough not to automatically project your preferences onto the other person. But it applies pretty uniformly to debates about autonomous vehicles, since just about none of us wants these things to happen:

  1. Being struck and killed by a vehicle, driverless or not
  2. Ditto being killed or injured while riding in a driverless vehicle
  3. Being put out of a job because of driverless vehicles

These are the three major risks to individuals with the advent of better GPS, car sensors and AI.

To get your brains engaged, what follows are some considerations for each.

1. Pedestrian Deaths and Injuries

How does our society deal with the loss of life when someone is struck by a car with no driver? We were suddenly forced to confront that question when, on March 18, an Uber vehicle operating in autonomous mode struck and killed a woman in Tempe, Arizona. Although she was not walking inside a crosswalk, and was likely not paying attention as she crossed, her death is nonetheless tragic and raises ethical questions such as this one: Should we, sometime in the future, climb into an Uber priced below even today’s Uber Pool, if those savings are correlated with an elevated risk of taking a pedestrian’s life?

There are few defenders of Uber in terms of ethics, but when Uber begins its defense for killing that woman with one of its vehicles, it will undoubtedly say the pedestrian was carelessly jaywalking. When it does, it’s significant that it will be leaning on a framework approaching its hundredth birthday. A 2015 Vox.com piece, “The forgotten history of how automakers invented the crime of ‘jaywalking’,” pointed out that things changed when more and more pedestrians were dying, and that the change came from an “aggressive effort in the 1920s” led by “auto groups and manufacturers”…

“In the early days of the automobile, it was drivers’ job to avoid you, not your job to avoid them,” says Peter Norton, a historian at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. “But under the new model, streets became a place for cars — and as a pedestrian, it’s your fault if you get hit.”

With AI and sensors getting progressively better at detecting things like crosswalks, a new shift in perception may take hold, one that defines the safe zone as anything within those painted lines. Pedestrians may actually come to feel safer when crossing a street that’s busy with passing autonomous vehicles. Why safer? They would know that as long as they are within the lines of a crosswalk, they are shielded from harm to a degree that arguably doesn’t exist today, due to the fallibility of human operators.

A New Yorker article from 20-plus years ago (the January 22, 1996 issue), written by a much younger Malcolm Gladwell, stuck with me so vividly that when I found it just now in the magazine’s online archives, I was able to zero in on exactly the term (and the concept) it had taught me: risk homeostasis.

Risk homeostasis, Gladwell explained, was first described by the Canadian psychologist Gerald Wilde in his book Target Risk. The idea is simple: “Under certain circumstances, changes that appear to make a system or organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.”

And an example there of “one area” versus “another” is the very type of crosswalk the pedestrian was not using that day in Tempe. Gladwell illustrates risk homeostasis beautifully, inside and outside those painted lines in the road. He writes:

Why are more pedestrians killed crossing the street at marked crosswalks than [elsewhere]? Because they compensate for the “safe” environment of a marked crosswalk by being less [vigilant] about oncoming traffic.

It could be argued that risks will rise in some parts of the street but fall in others, producing an autonomous-vehicle-induced homeostasis.

2. The Risk of Riding in a Driverless Vehicle

Once all of the kinks are worked out, you’d think you couldn’t possibly be safer than riding in a vehicle that obeys all traffic rules and slows to safe traveling speeds when weather or reduced visibility dictates. But the AI spiriting you along would learn to make snap decisions that minimize fatalities based on values well beyond who is making the car payments. Your life as a passenger might count for less when weighed against the multiple other lives also at stake.

A decade ago I saw what I considered a truly cringe-worthy Will Smith movie. Some people adored it and raved about it; I found it ethically bonkers. It’s called Seven Pounds, and it’s about a jerky, Type A executive who, while driving too fast with his wife as a passenger, reads a text message that distracts him long enough to cause a collision that kills seven people, including his wife. Over the next couple of years, while in the throes of depression, he contrives a way to redeem himself. Spoiler: he plans to donate parts of his body to seven other people.

For the sake of this discussion, I will restrain myself and not go off on the filmmaker’s willingness to portray an ultimately fatal illness, depression, as heroism and selflessness. Instead, let’s talk about the accident. What if those six people in the other vehicle could have been saved had the car carrying him and his wife simply veered off the road and into a tree? How safe would you feel traveling in such a vehicle?

These questions are part of an emerging field called “machine ethics,” and they go well beyond your autonomous Uber ride, as an article in The Economist, “Morals and the Machine,” points out:

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming – or at least appearing to assume – moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.

So what is the correct answer to the “Seven Pounds accident”? It’s actually addressed by a thought experiment created by the philosopher Philippa Foot in 1967: the Trolley Dilemma. A later version of it goes like this:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track.

You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

When professional philosophers were polled, the results were:

  • 68% would switch (sacrifice the one individual to save five lives)
  • 8% would not switch
  • The remaining 24% had another view or could not answer

So if Will Smith’s character were being driven by an autonomous Uber and a judgment call similar to the Trolley Dilemma were presented, he and his lovely wife (played by MaShae Alderman) would be toast. Six lives are three times two, and by dint of that simple math, more worthy of saving.
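To make that “simple math” concrete, here is a minimal sketch, in Python, of the utilitarian rule the Trolley Dilemma implies: pick whichever action is expected to kill the fewest people. The function name and the numbers are purely illustrative assumptions for this discussion; no real autonomous-vehicle system decides this way.

```python
# Toy illustration of the utilitarian "minimize expected fatalities" rule
# implied by the Trolley Dilemma. All names and numbers are hypothetical;
# this is not how any real autonomous-vehicle planner works.

def choose_action(options):
    """Return the action whose expected fatality count is lowest.

    `options` maps an action name to the number of expected fatalities
    if that action is taken.
    """
    return min(options, key=options.get)

# The classic trolley choice: do nothing (five die) or pull the lever (one dies).
trolley = {"do_nothing": 5, "pull_lever": 1}
print(choose_action(trolley))  # -> "pull_lever"

# The Seven Pounds scenario recast for an autonomous car: stay the course
# (six people in the other vehicle die) or veer into a tree (two passengers die).
seven_pounds = {"stay_course": 6, "veer_into_tree": 2}
print(choose_action(seven_pounds))  # -> "veer_into_tree"
```

Even this toy version makes the discomfort plain: the rule counts lives but never asks whose they are, which is exactly the “values well beyond who is making the car payments” problem raised above.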

3. Autonomous Vehicle Job Displacement

As though driven by an AI-powered car, our society is barreling toward what could be perceived as an abyss. Millions of professional drivers could be put out of work. True, many of these jobs are extremely difficult and can even shorten lifespans, and automation has saved earlier generations of workers from similarly back-breaking, life-shortening work.

But where will these displaced workers go? With little promise of retraining into better-paying jobs, there is a very real risk of social upheaval. Innovative countries and states are looking into a solution that itself presents ethical challenges: a state-funded guaranteed income, better known as universal basic income (UBI).

The ethical quandary is this: Assuming UBI works to quell revolt and ensure public health and modest comfort, should the relatively few “haves” fund the lives of the many more “have-nots” — even if it is for the good of the state?

Ethics questions aside, UBI is being taken seriously. It is being piloted in Nordic countries, most notably Finland; it has been seriously debated, and even put to a national vote, in Switzerland; and closer to home, it’s actually being attempted today in Stockton, California.

Like autonomous vehicles and AI transformations in other industries, UBI seems to be something that cannot be ignored. How will it be framed in the U.S.? And what would happen to the American standard of living if it were embraced in other parts of the world but not here? The U.S. was founded and built upon Puritan ideals of hard work and self-sufficiency (reliance on legalized slavery for part of that “hard work” notwithstanding). Can we as a society agree to revise our thoughts about work in time to save our rapidly transforming economy?

Talk amongst yourselves.