Tag Archives: AI

Wolves, Trains and Automobiles: The Domestication of A.I.

I’ve thought and read a lot about artificial intelligence (AI). Particularly, its potential threat to us, its human creators. I’m not much for doomsday theories, but I admit I was inclined to fear the worst. To put things at their most melodramatic, I worried we might be unwittingly creating our own eventual slave masters. But after further reading and thinking, I’ve reconsidered. Yes. A.I. will be everywhere in our future. But not as sinister job-killers and overlords. No, they will be extensions of us in a way I can only compare with that most beloved of domesticated creatures: The dog.

For you to follow my logic, you’ll need to remember two facts:

  1. Our advancement as a species from hunter-gatherers to complex civilizations would not be possible without domesticated plants and animals
  2. Our collective fear of technology is often wildly unfounded

Bear with me, but you’ll also probably need to recall these definitions:

  • Domestication: Taking existing plants or animals and breeding them to serve us. One example is selecting the most helpful plants and turning them into crops. Michael Pollan’s early book, The Botany of Desire: A Plant’s-Eye View of the World, will bring you a long way toward seeing this process in action. As for animals, you may think of dogs as mere pets, but early in our evolution as humans we bred the wolf to help us hunt for meat, and to protect us from predators. Before domestication, we pre-humans hunted in packs, and so did the wolves … never the twain shall meet. After domestication, we ensured the more docile canines a better life, under the protection of our species and its burgeoning technologies (see definition below), and they delivered the goods for us by helping us thrive in hostile conditions. It was a symbiosis that turned our two packs into a single unit. No wonder the domesticated dog adores us so, and that we consider them man(kind)’s best friend.
  • Technology: Did you know the pencil was once considered technology? So was the alphabet. You may think of them merely as tools, but technology is any tool that is new. And our attitudes toward anything new always start with fear. Douglas Adams put it this way: “I’ve come up with a set of rules that describe our reactions to technologies: 1.) Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. 2.) Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. 3.) Anything invented after you’re thirty-five is against the natural order of things.” Not surprisingly, fear of technology spawned the first science fiction: Mary Shelley’s Frankenstein; or, The Modern Prometheus, a literal fever dream about a scientist’s hubris and the destruction it wrought upon himself and the world. This fear has a name: Moral panic. And it has created some pretty far-fetched urban myths.

In a Wall Street Journal piece, Women And Children First: Technology And Moral Panic, Genevieve Bell listed a few of these vintage myths. The first is about the advent of the electric light: “If you electrify homes you will make women and children … vulnerable. Predators will be able to tell if they are home because the light will be on, and you will be able to see them. So electricity is going to make women vulnerable … and children will be visible too and it will be predators, who seem to be lurking everywhere, who will attack.” And consider this even bigger hoot: “There was some wonderful stuff about [railway trains] too in the U.S., that women’s bodies were not designed to go at 50 miles an hour. Our uteruses would fly out of our bodies as they were accelerated to that speed.”

Sounds messy.

I don’t have to tell you about our modern moral panic surrounding A.I. Except there is a bit of reverse sexism going on, because this time it is male workers who are more the victims. Their work — whether purely intellectual or journeyman labor — will be eliminated. We’ll all be out on the street, presumably to be mowed down by self-driving cars and trucks.

The Chicken Littles had me for a while

So what changed? In the same week I read two thought-provoking articles. One was in The New Yorker, The Mind-Expanding Ideas of Andy Clark. Its subtitle says it all: The tools we use to help us think — from language to smartphones — may be part of thought itself. This long piece describes Clark’s attempt to better understand what consciousness is and where its boundaries lie. In other words: where do we as thinking humans end, and where does the world we perceive begin?

He comes to recognize that there is a reason we perceive the world based on our five senses. Our brains are built to keep us alive and able to reproduce. Nothing more. All the bonus tracks in our brain’s Greatest Hits playlist … Making art, considering the cosmos, perceiving a future and a past … these are all artifacts of a consciousness that moves our limbs through space.

To some people, perception — the transmitting of all the sensory noise from the world — seemed the natural boundary between world and mind. Clark had already questioned this boundary with his theory of the extended mind. Then, in the early aughts, he heard about a theory of perception that seemed to him to describe how the mind, even as conventionally understood, did not stay passively distant from the world but reached out into it. It was called predictive processing.

Predictive processing starts with our bodies. For instance, we don’t move our arm when it’s at rest. We imagine it moving — predict its movement — and when our arm gets the memo it responds. Or not. If we are paralyzed, or that arm is currently in the jaws of a bear, it sends the bad news back to our brains. And so it goes.

In a similar way we project this feedback loop out into the world. But we are limited by our own sense of it.

Domestication of canines was such a game-changer because we suddenly had assistants with different senses and perceptions. Together humans and dogs became a Dynamic Duo … A prehistoric Batman and Robin. But Robin always knew who was the alpha in this relationship.

Right now there is another domestication taking place. It’s not of a plant or an animal, but of a complicated digital application. If grouping these three together (plants, animals and applications) seems a stretch, keep in mind that domesticating all of them means altering digital information.

All Life Is Digital

Plants and animals have DNA, or deoxyribonucleic acid. They are alive because they have genetic material. And guess what? It’s all digital. DNA encodes information using four bases: G, C, T, and A. These are four discrete values, expressed in the complex combinations that make us both living and able to pass along our “us-ness” to new generations. We’re certainly more complicated than the (currently) binary underpinnings of A.I. But as we’ve seen, A.I. is really showing us humans up in some important ways.
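The claim that genetic information is digital can be made concrete with a toy sketch (my own illustration, not from any source cited here): since DNA has exactly four bases, each base fits in two bits, so a genetic sequence is, in effect, base-4 data.

```python
# Toy illustration: DNA's four bases packed two bits per base,
# showing that a genetic sequence is literally digital information.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode(sequence):
    """Pack a DNA string into an integer, two bits per base."""
    value = 0
    for base in sequence:
        value = (value << 2) | BASE_TO_BITS[base]
    return value

def decode(value, length):
    """Unpack the integer back into the original DNA string."""
    bits_to_base = {v: k for k, v in BASE_TO_BITS.items()}
    bases = []
    for _ in range(length):
        bases.append(bits_to_base[value & 0b11])
        value >>= 2
    return "".join(reversed(bases))

seq = "GATTACA"
packed = encode(seq)
assert decode(packed, len(seq)) == seq  # round-trips losslessly
```

At two bits per base, a whole human genome (about three billion base pairs) would fit in roughly 750 megabytes by this scheme. Digital indeed.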

They’re killing us humans at chess. And Jeopardy.

So: Will A.I. become conscious and take us over? Clark would say consciousness is beyond A.I.’s reach, because as impressive as its abilities to move through the world and perceive it are, even dogs have more of an advantage in the consciousness department. He would be backed up by none less than Nobel Prize in Economics winner Daniel Kahneman, of Thinking, Fast and Slow fame. I got to hear him speak on this subject live, at a New Yorker TechFest, and I was impressed and relieved by how sanguine he was about the future of A.I.

Here’s where I need to bring in the other article, a much briefer one, from The Economist. Robots Can Assemble IKEA Furniture sounds pretty ominous. It’s a modern trope that assembling IKEA furniture is an unmanning intellectual test. But the article spoke more about A.I.’s limitations than its looming existential threats.

First, it took the robots a comparatively long time to achieve the task at hand. In the companion piece to that article we read that …

Machines excel at the sorts of abstract, cognitive tasks that, to people, signify intelligence—complex board games, say, or differential calculus. But they struggle with physical jobs, such as navigating a cluttered room, which are so simple that they hardly seem to count as intelligence at all. The IKEAbots are a case in point. It took a pair of them, pre-programmed by humans, more than 20 minutes to assemble a chair that a person could knock together in a fraction of the time.

Their struggles brought me back to how our consciousness gradually materialized to our prehistoric ancestors. It arrived not in spite of our sensory experience of the world, but specifically because of it. If you doubt that just consider my natural and clear way just now of describing the arrival of consciousness: I said it materialized. You understood this as a metaphor associated with our perception of the material world.

This word and others used to describe concepts play on our ability to feel things. Need another example? This is called a goddamn web page. What’s a page? What’s a web? They’re both things we can touch and experience with our carefully evolved senses.

And without these metaphors these paragraphs would not make sense.

Yes, our ancestors needed the necessary but not sufficient help of things like cooking, which enabled us to take in enough calories to grow and maintain our complex neural network, and the domestication of animals and plants that led us to agriculture and an escape from the limitations of nomadic hunter-gatherer tribes (I strongly recommend Guns, Germs and Steel: The Fates of Human Societies for more on this), but …

To gain consciousness, we also needed to feel things. And what do we call people who don’t feel feelings? Robots. “Soulless machines.”

Should A.I. nonetheless take over the world without evolving to feel, it’s unlikely they’ll be assembling their own IKEA chairs with alacrity. They’ll make us do it for them. Because our predictive processing makes this type of task annoying but manageable. We can even get faster at it over time.

It’s All About The Feels

But worry not. Our enslavement won’t happen because — and I’m feeling pretty hubristic myself as I write this — we’re the feelers, the dreamers, the artists. Not A.I.

Before we domesticated dogs, we were limited in where in the world we could roam, and the game we could hunt. After dogs, we progressed. We prospered. Dogs didn’t put us out of jobs, if you will, they took the jobs they were better at in our service. Inevitably, we found other ways to use our time, including becoming creatures who are closer to the humans we would recognize on the street today, or staring back in the mirror.

We are domesticating A.I. Never forget that.

And repeat after me: We have nothing to fear but moral panic itself.

The Ethics of Autonomous Vehicles

This Friday my colleagues at the Accenture Digital Hub in downtown Chicago will be participating in a lunchtime consortium. It’s a debate of sorts. The topic is autonomous vehicles and ethics.

I can’t make it, so I volunteered to compose a little “thought-starter.” Here it is:

Dear Fellow “Digital Hubsians,”

Since I can’t be present, I asked if I could kick off proceedings with an introduction to the topic. A wedding out of town has taken me away but I am present in spirit, (obviously) excited about this topic, and eager to learn how the discussion went.

First of all, what is ethics? I’d say it starts with the Golden Rule: Do unto others as you would like done to you.

There’s an upgrade of sorts that fits even better with autonomous vehicles. It’s been called the Platinum Rule: Do unto others as they would like done to them.

Obviously the Platinum Rule is trickier, because you have to be empathetic enough not to automatically project your preferences onto the other person. But it applies pretty uniformly to debates about autonomous vehicles, since just about none of us want any of these things to happen:

  1. Being struck and killed by a vehicle, driverless or not
  2. Ditto being killed or injured while riding in a driverless vehicle
  3. Being put out of a job because of driverless vehicles

These are the three major risks to individuals with the advent of better GPS, car sensors and AI.

To get your brains engaged, what follows are some considerations for each.

1. Pedestrian Deaths and Injuries

How does our society deal with the loss of life when someone is struck by a car driven without a driver? We were suddenly forced to confront that when, on March 18, an Uber vehicle operating in autonomous mode struck and killed a woman in Tempe, Arizona. Although she was not walking inside a crosswalk, and was likely not paying attention as she crossed, her death is nonetheless tragic, and it raises ethical questions such as this one: Should we climb into a future Uber priced below even today’s Uber Pool, if those savings correlate with an elevated risk of taking a pedestrian’s life?

There are few defenders of Uber in terms of ethics, but when Uber begins its defense for killing that woman with one of their vehicles, they will undoubtedly say the pedestrian was carelessly jaywalking. When they do, it’s significant that they’ll be using a framework that is approaching its hundredth birthday. In a Vox.com piece from 2015, entitled “The forgotten history of how automakers invented the crime of ‘jaywalking’,” it was pointed out that things changed when more and more pedestrians were dying. And the change was due to an “aggressive effort in the 1920s” led by “auto groups and manufacturers” …

“In the early days of the automobile, it was drivers’ job to avoid you, not your job to avoid them,” says Peter Norton, a historian at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. “But under the new model, streets became a place for cars — and as a pedestrian, it’s your fault if you get hit.”

With AI and sensors getting progressively better at sensing things like crosswalks, a new shift in perception may be fostered that carves out the safe zone as anything within those painted lines. Pedestrians may actually come to feel safer when crossing a street that’s busy with passing autonomous vehicles. Why safer? They would know that as long as they are within the lines of a crosswalk, they are shielded from harm to a degree that arguably doesn’t exist today, due to the fallibility of human operators.

A New Yorker article from 20-plus years ago (the January 22, 1996 issue), written by a much younger Malcolm Gladwell, stuck with me then so vividly that when I found it just now in the website’s archives I was able to zero in on exactly the term — and the concept — it had taught me then: Risk homeostasis.

Risk homeostasis, Gladwell explained, was first described by the Canadian psychologist Gerald Wilde in his book Target Risk. The idea is simple: “Under certain circumstances, changes that appear to make a system or organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.”

And an example there of “one area” versus “another” is the very type of crosswalk the pedestrian was not using that day in Tempe. Risk homeostasis is illustrated beautifully by Gladwell inside and outside those lines in the road. He writes:

Why are more pedestrians killed crossing the street at marked crosswalks than [elsewhere]? Because they compensate for the “safe” environment of a marked crosswalk by being less [vigilant] about oncoming traffic.

It could be argued that risks will rise in some parts of the streets, but they may fall in others, causing an autonomous-vehicle-induced homeostasis.

2. The risk of riding in a driverless vehicle

Once all of the kinks are worked out, you’d think you couldn’t possibly be safer than riding in a vehicle that obeys all traffic rules and slows to safe traveling speeds when weather or reduced visibility dictates. But the AI spiriting you along would learn to make snap decisions that minimize fatalities based on values well beyond who is making the car payments. Your life as a passenger might be valued less than the multiple lives also at stake.

A decade ago I saw what I considered a truly cringe-worthy Will Smith movie. Some people adored it, raved about it. I found it ethically bonkers. It’s called Seven Pounds, and it’s about a jerky, Type A executive type who, while driving too fast with his wife as a passenger, reads a text message that distracts him long enough to cause a collision which kills seven people, including his wife. Over the next couple of years, while in the throes of depression, he contrives a way to redeem himself. Spoiler: He plans to donate parts of his body to seven other people.

For the sake of this discussion, I will restrain myself and not go off on the filmmaker’s willingness to portray an ultimately fatal illness – depression – as heroism and selflessness. Instead, let’s talk about the accident. What if those six passengers in the other vehicle could have been saved if the car driving him and his wife simply veered off the road and into a tree? How safe would you feel traveling in such a vehicle?

This is part of an emerging field called “machine ethics.” Its questions go well beyond your autonomous Uber ride, as this article in The Economist, “Morals and the Machine,” points out:

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming – or at least appearing to assume – moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.

So what is the correct answer to the “Seven Pounds accident?” It’s actually addressed in a thought experiment, created by Philippa Foot in 1967. It’s called The Trolley Dilemma. The latest version goes like this:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track.

You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

When professional philosophers were polled, the results were:

  • 68% would switch (sacrifice the one individual to save five lives)
  • 8% would not switch
  • The remaining 24% had another view or could not answer
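The purely utilitarian rule that the 68% endorse can be sketched in a few lines. This is a deliberately crude illustration of my own, not how any real autonomous-vehicle system is programmed: among the available actions, pick whichever one minimizes expected fatalities.

```python
# A crude utilitarian decision rule: given a mapping of possible
# actions to their expected fatalities, choose the least deadly one.
def utilitarian_choice(options):
    """Return the action with the fewest expected fatalities."""
    return min(options, key=options.get)

# The classic trolley setup: five on the main track, one on the side track.
choice = utilitarian_choice({"do nothing": 5, "pull the lever": 1})
assert choice == "pull the lever"
```

By this rule the machine always pulls the lever. The 8% of philosophers who would not switch are appealing to something this arithmetic cannot capture.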

So if Will Smith’s character were being driven by an autonomous Uber and a judgment call similar to the Trolley Dilemma presented itself, he and his lovely wife (played by MaShae Alderman) would be toast. Six is three times greater – and, by dint of simple math, more worthy of saving – than two.

3. Autonomous Vehicle Job Displacement

As though driven by an AI-powered car, our society is barreling toward what could be perceived as an abyss. Millions of professional drivers could be put out of work. True, many of these jobs are extremely difficult and can even shorten lifespans. Automation in the past has saved several generations of workers from back-breaking, life-shortening jobs.

But where will these displaced workers go? With little promise of retraining into better-paying jobs, there is a very real risk of social upheaval. Innovative countries and states are looking into a solution that itself presents ethical challenges: state-funded universal guaranteed income, also known as UBI (Universal Basic Income).

The ethical quandary is this: Assuming UBI works to quell revolt and ensure public health and modest comfort, should the relatively few “haves” fund the lives of the many more “have-nots” — even if it is for the good of the state?

Ethics questions aside, UBI is being taken seriously. It is being tried in a couple of Scandinavian countries, and it has been seriously debated by voters in Switzerland. Closer to home, it’s actually being attempted today, in Stockton, CA.

Like autonomous vehicles and AI transformations in other industries, UBI seems to be something that cannot be ignored. How will it be framed in the U.S.? And what would happen to the American standard of living if it was embraced in other parts of the world but not here? The U.S. was founded and built upon Puritan ideals of hard work and self-sufficiency (reliance on legalized slavery for part of that “hard work” notwithstanding). Can we as a society agree to revise our thoughts about work in time to save our rapidly transforming economy?

Talk amongst yourselves.