
Wolves, Trains and Automobiles: The Domestication of A.I.

I’ve thought and read a lot about artificial intelligence (A.I.), particularly its potential threat to us, its human creators. I’m not much for doomsday theories, but I admit I was inclined to fear the worst. To put things at their most melodramatic, I worried we might be unwittingly creating our own eventual slave masters. But after further reading and thinking, I’ve reconsidered. Yes, A.I. will be everywhere in our future. But not as sinister job-killers and overlords. No, they will be extensions of us in a way I can only compare with that most beloved of domesticated creatures: The dog.

For you to follow my logic, you’ll need to remember two facts:

  1. Our advancement as a species from hunter-gatherers to complex civilizations would not be possible without domesticated plants and animals
  2. Our collective fear of technology is often wildly unfounded

Bear with me, but you’ll also probably need to recall these definitions:

  • Domestication: Taking existing plants or animals and breeding them to serve us. One example is selecting the most helpful plants and turning them into crops. Michael Pollan’s early book, The Botany of Desire: A Plant’s-Eye View of the World, will go a long way toward showing you this process in action. As for animals, you may think of dogs as mere pets, but early in our evolution as humans we bred the wolf to help us hunt for meat, and to protect us from predators. Before domestication, early humans hunted in packs, and so did the wolves … never the twain shall meet. After domestication, we ensured the more docile canines a better life, under the protection of our species and its burgeoning technologies (see definition below), and they delivered the goods for us by helping us thrive in hostile conditions. It was a symbiosis that turned our two packs into a single unit. No wonder the domesticated dog adores us so, and that we consider them man(kind)’s best friend.
  • Technology: Did you know the pencil was once considered technology? So was the alphabet. You may think of them merely as tools, but technology is any tool that is new. And our attitudes toward anything new always start with fear. Douglas Adams put it this way: “I’ve come up with a set of rules that describe our reactions to technologies: 1.) Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. 2.) Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. 3.) Anything invented after you’re thirty-five is against the natural order of things.” Fear of technology, not surprisingly, spawned the first science fiction: Mary Shelley’s Frankenstein; or, The Modern Prometheus, a literal fever dream about a scientist’s hubris and the destruction it wrought upon himself and the world. This fear has a name: Moral panic. And it has created some pretty far-fetched urban myths.

In a Wall Street Journal piece, Women And Children First: Technology And Moral Panic, Genevieve Bell listed a few of these vintage myths. The first is about the advent of the electric light: “If you electrify homes you will make women and children … vulnerable. Predators will be able to tell if they are home because the light will be on, and you will be able to see them. So electricity is going to make women vulnerable … and children will be visible too and it will be predators, who seem to be lurking everywhere, who will attack.” And consider this even bigger hoot: “There was some wonderful stuff about [railway trains] too in the U.S., that women’s bodies were not designed to go at 50 miles an hour. Our uteruses would fly out of our bodies as they were accelerated to that speed.”

Sounds messy.

I don’t have to tell you about our modern moral panic surrounding A.I. Except there is a bit of reverse sexism going on, because this time it is male workers who are cast as the primary victims. Their work — whether purely intellectual or journeyman labor — will be eliminated. We’ll all be out on the street, presumably to be mowed down by self-driving cars and trucks.

The Chicken Littles had me for a while

So what changed? In the same week I read two thought-provoking articles. One was in The New Yorker, The Mind-Expanding Ideas of Andy Clark. Its subtitle says it all: The tools we use to help us think — from language to smartphones — may be part of thought itself. This long piece describes Clark’s attempt to better understand what consciousness is, and what are its boundaries. In other words, where do we as thinking humans end and the world we perceive begin?

He comes to recognize that there is a reason we perceive the world based on our five senses. Our brains are built to keep us alive and able to reproduce. Nothing more. All the bonus tracks in our brain’s Greatest Hits playlist … Making art, considering the cosmos, perceiving a future and a past … these are all artifacts of a consciousness that moves our limbs through space.

To some people, perception — the transmitting of all the sensory noise from the world — seemed the natural boundary between world and mind. Clark had already questioned this boundary with his theory of the extended mind. Then, in the early aughts, he heard about a theory of perception that seemed to him to describe how the mind, even as conventionally understood, did not stay passively distant from the world but reached out into it. It was called predictive processing.

Predictive processing starts with our bodies. For instance, we don’t move our arm when it’s at rest. We imagine it moving — predict its movement — and when our arm gets the memo it responds. Or not. If we are paralyzed, or that arm is currently in the jaws of a bear, it sends the bad news back to our brains. And so it goes.

In a similar way we project this feedback loop out into the world. But we are limited by our own sense of it.
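If you like to see an idea as code, here is a toy sketch of that loop. It is purely my own illustration (not Clark’s model, and certainly not real neuroscience): the brain makes a prediction, the senses report back, and the error between the two drives the next prediction.

```python
# A toy predictive-processing loop (my own illustration, not Clark's model):
# predict, compare against what the senses report, and let the error revise
# the next prediction.

def predictive_step(predicted, sensed, learning_rate=0.5):
    """Nudge the prediction toward what the senses actually reported."""
    error = sensed - predicted          # the "bad news" signal
    return predicted + learning_rate * error

# Imagine predicting the arm lifted to 10 degrees while a bear (or paralysis)
# keeps it pinned at 0. The error keeps arriving, and the brain keeps revising.
prediction = 10.0
for _ in range(5):
    prediction = predictive_step(prediction, sensed=0.0)
    print(prediction)   # 5.0, 2.5, 1.25, ...
```

The point of the sketch is only this: on this view, perception is an active guess being corrected, not a passive recording.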

Domestication of canines was such a game-changer because we suddenly had assistants with different senses and perceptions. Together humans and dogs became a Dynamic Duo … A prehistoric Batman and Robin. But Robin always knew who was the alpha in this relationship.

Right now there is another domestication taking place. It’s not of a plant or an animal, but of a complicated digital application. If grouping these three together — plants, animals and applications — seems a stretch, keep in mind that domesticating all of them means altering digital information.

All Life Is Digital

Plants and animals have DNA, or deoxyribonucleic acid. They are alive because they have genetic material. And guess what? It’s all digital. DNA encoding uses four bases: G, C, T, and A. These are four concrete values that are expressed in the complex combinations that make us both living, and able to pass along our “usness” to new generations. We’re definitely more complicated than the (currently) binary underpinnings of A.I. But as we’ve seen, A.I. is really showing us humans up in some important ways.
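To make that “digital” claim concrete, here is a minimal sketch, my own illustration rather than anything from genetics or the article: because there are exactly four bases, each one fits in two binary bits, so a strand of DNA can be written out as ordinary binary.

```python
# A minimal sketch (my own illustration): four DNA bases map cleanly onto
# two bits apiece, which is all it takes to call the molecule "digital."

BASE_TO_BITS = {"G": "00", "C": "01", "T": "10", "A": "11"}  # arbitrary mapping

def encode(sequence):
    """Translate a DNA string into a plain binary string."""
    return "".join(BASE_TO_BITS[base] for base in sequence.upper())

print(encode("GATTACA"))  # -> 00111010110111
```

The combinations life builds out of those four values are of course staggeringly more complex than this, but the encoding itself is as digital as anything running on silicon.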

They’re killing us humans at chess. And Jeopardy.

So: Will A.I. become conscious and take us over? Clark would say consciousness is beyond A.I.’s reach, because as impressive as its abilities to move through the world and perceive it are, even dogs have more of an advantage in the consciousness department. He would be backed up by none other than Nobel Prize in Economics winner Daniel Kahneman, of Thinking, Fast and Slow fame. I got to hear him speak on this subject live, at a New Yorker TechFest, and I was impressed and relieved by how sanguine he was about the future of A.I.

Here’s where I need to bring in the other article, a much briefer one, from The Economist. Robots Can Assemble IKEA Furniture sounds pretty ominous. It’s a modern trope that assembling IKEA furniture is an unmanning intellectual test. But the article spoke more about A.I.’s limitations than its looming existential threats.

First, it took the robots a comparatively long time to achieve the task at hand. In the companion piece to that article we read that …

Machines excel at the sorts of abstract, cognitive tasks that, to people, signify intelligence—complex board games, say, or differential calculus. But they struggle with physical jobs, such as navigating a cluttered room, which are so simple that they hardly seem to count as intelligence at all. The IKEAbots are a case in point. It took a pair of them, pre-programmed by humans, more than 20 minutes to assemble a chair that a person could knock together in a fraction of the time.

Their struggles brought me back to how our consciousness gradually materialized to our prehistoric ancestors. It arrived not in spite of our sensory experience of the world, but specifically because of it. If you doubt that, consider the natural way I just described the arrival of consciousness: I said it materialized. You understood this as a metaphor associated with our perception of the material world.

This word, like the others we use to describe concepts, plays on our ability to feel things. Need another example? This is called a goddamn web page. What’s a page? What’s a web? They’re both things we can touch and experience with our carefully evolved senses.

And without these metaphors these paragraphs would not make sense.

Yes, our ancestors needed the necessary but not sufficient help of things like cooking, which enabled us to take in enough calories to grow and maintain our complex neural network, and the domestication of animals and plants that led us to agriculture and an escape from the limitations of nomadic hunter-gatherer tribes (I strongly recommend Guns, Germs and Steel: The Fates of Human Societies for more on this), but …

To gain consciousness, we also needed to feel things. And what do we call people who don’t feel feelings? Robots. “Soulless machines.”

Without evolving to feel, should A.I. nonetheless take over the world, it’s unlikely they will be assembling their own IKEA chairs with alacrity. They’ll make us do it for them. Because our predictive processing makes this type of task annoying but manageable. We can even do it faster over time.

It’s All About The Feels

But worry not. Our enslavement won’t happen because — and I’m feeling pretty hubristic myself as I write this — we’re the feelers, the dreamers, the artists. Not A.I.

Before we domesticated dogs, we were limited in where in the world we could roam, and in the game we could hunt. After dogs, we progressed. We prospered. Dogs didn’t put us out of jobs, if you will; they took on the jobs they were better at, in our service. Inevitably, we found other ways to use our time, including becoming creatures closer to the humans we would recognize on the street today, or staring back at us from the mirror.

We are domesticating A.I. Never forget that.

And repeat after me: We have nothing to fear but moral panic itself.

The Ethics of Autonomous Vehicles

This Friday my colleagues at the Accenture Digital Hub in downtown Chicago will be participating in a lunchtime consortium. It’s a debate of sorts. The topic is autonomous vehicles and ethics.

I can’t make it, so I volunteered to compose a little “thought-starter.” Here it is:

Dear Fellow “Digital Hubsians,”

Since I can’t be present, I asked if I could kick off proceedings with an introduction to the topic. A wedding out of town has taken me away but I am present in spirit, (obviously) excited about this topic, and eager to learn how the discussion went.

First of all, what is ethics? I’d say it starts with the Golden Rule: Do unto others as you would like done to you.

There’s an upgrade of sorts that fits even better with autonomous vehicles. It’s been called the Platinum Rule: Do unto others as they would like done to them.

Obviously the Platinum Rule is trickier, because you have to be empathetic enough not to automatically project your preferences onto the other person. But it applies pretty uniformly to debates about autonomous vehicles, since just about none of us wants these things to happen:

  1. Being struck and killed by a vehicle, driverless or not
  2. Ditto being killed or injured while riding in a driverless vehicle
  3. Being put out of a job because of driverless vehicles

These are the three major risks to individuals with the advent of better GPS, car sensors and AI.

To get your brains engaged, what follows are some considerations for each.

1. Pedestrian Deaths and Injuries

How does our society deal with the loss of life when someone is struck by a car with no one behind the wheel? We were suddenly forced to confront that when, on March 18, an Uber vehicle operating in autonomous mode struck and killed a woman in Tempe, Arizona. Although she was not walking inside a crosswalk, and was likely not paying attention as she crossed, her death is nonetheless tragic and raises ethical questions such as this one: Should we, sometime in the future, climb into an Uber priced below even the current cost of an Uber Pool, if those savings are correlated with an elevated risk of taking a pedestrian’s life?

There are few defenders of Uber in terms of ethics, but when Uber begins its defense for killing that woman with one of its vehicles, it will undoubtedly say the pedestrian was carelessly jaywalking. When it does, it’s significant that it will be using a framework that is approaching its hundredth birthday. A Vox.com piece from 2015, entitled “The forgotten history of how automakers invented the crime of ‘jaywalking’,” pointed out that things changed when more and more pedestrians were dying. And the change was due to an “aggressive effort in the 1920s” led by “auto groups and manufacturers”…

“In the early days of the automobile, it was drivers’ job to avoid you, not your job to avoid them,” says Peter Norton, a historian at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. “But under the new model, streets became a place for cars — and as a pedestrian, it’s your fault if you get hit.”

With AI and sensors getting progressively better at sensing things like crosswalks, a new shift in perception may be fostered that carves out the safe zone as anything within those painted lines. Pedestrians may actually come to feel safer when crossing a street that’s busy with passing autonomous vehicles. Why safer? They would know that as long as they are within the lines of a crosswalk, they are shielded from harm to a degree that arguably doesn’t exist today, due to the fallibility of human operators.

A New Yorker article from 20-plus years ago (the January 22, 1996 issue), written by a much younger Malcolm Gladwell, stuck with me so vividly that when I found it just now in the website’s archives I was able to zero in on exactly the term — and the concept — it had taught me: Risk homeostasis.

Risk homeostasis, Gladwell explained, was first described by the Canadian psychologist Gerald Wilde in his book Target Risk. The idea is simple: “Under certain circumstances, changes that appear to make a system or organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.”

An example of “one area” versus “another” is the very type of crosswalk the pedestrian was not using that day in Tempe. Gladwell illustrates risk homeostasis beautifully, both inside and outside those painted lines. He writes:

Why are more pedestrians killed crossing the street at marked crosswalks than [elsewhere]? Because they compensate for the “safe” environment of a marked crosswalk by being less [vigilant] about oncoming traffic.

It could be argued that risks will rise in some parts of the streets, but they may fall in others, causing an autonomous-vehicle-induced homeostasis.

2. The risk of riding in a driverless vehicle

Once all of the kinks are worked out, you’d think you couldn’t possibly be safer than riding in a vehicle that obeys all traffic rules and slows to safe traveling speeds when weather or reduced visibility dictates. But the AI spiriting you along would learn to make snap decisions that minimize fatalities, based on values that go well beyond who is making the car payments. Your life as a passenger might count for less than the multiple lives also at stake.

A decade ago I saw what I considered a truly cringe-worthy Will Smith movie. Some people adored it, raved about it. I found it ethically bonkers. It’s called Seven Pounds, and it’s about a jerky, Type A executive type who, while driving too fast with his wife as a passenger, reads a text message that distracts him long enough to cause a collision which kills seven people, including his wife. Over the next couple of years, while in the throes of depression, he contrives a way to redeem himself. Spoiler: He plans to donate parts of his body to seven other people.

For the sake of this discussion, I will restrain myself and not go off on the filmmaker’s willingness to portray an ultimately fatal illness – depression – as heroism and selflessness. Instead, let’s talk about the accident. What if those six passengers in the other vehicle could have been saved if the car driving him and his wife had simply veered off the road and into a tree? How safe would you feel traveling in such a vehicle?

These questions are part of an emerging field called “machine ethics.” They go well beyond your autonomous Uber ride, as this article in The Economist, “Morals and the Machine,” points out:

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming – or at least appearing to assume – moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.

So what is the correct answer to the “Seven Pounds accident”? It’s actually addressed in a thought experiment created by Philippa Foot in 1967. It’s called the Trolley Dilemma. A later version goes like this:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track.

You have two options:

  1. Do nothing, and the trolley kills the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

When professional philosophers were polled, the results were:

  • 68% would switch (sacrifice the one individual to save five lives)
  • 8% would not switch
  • The remaining 24% had another view or could not answer

So if Will Smith’s character were being driven by an autonomous Uber and a judgment call similar to the Trolley Dilemma were presented, he and his lovely wife (played by MaShae Alderman) would be toast. Six is three times two – and, by dint of simple math, more worthy of saving.
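For what it’s worth, the ruthless arithmetic the car would be applying fits in a few lines. This is a deliberately naive sketch of a “minimize fatalities” rule, my own illustration and nobody’s actual driving software:

```python
# A deliberately naive "minimize fatalities" rule: my own illustration of the
# trolley-style arithmetic, not anyone's actual driving software.

def choose_action(options):
    """Given {action: expected fatalities}, pick the action with the fewest."""
    return min(options, key=options.get)

# The Seven Pounds collision, reframed as a trolley problem: stay the course
# and six people in the other car die, or veer into a tree and two die.
print(choose_action({"stay_course": 6, "veer_into_tree": 2}))  # veer_into_tree
```

The discomfort, of course, is that you and your loved ones will sometimes be on the losing side of that calculation.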

3. Autonomous Vehicle Job Displacement

As though driven by an AI-powered car, our society is barreling toward what could be perceived as an abyss. Millions of professional drivers could be put out of work. True, many of these jobs are extremely difficult and can even shorten lifespans. Automation in the past has saved several generations of workers from back-breaking, life-shortening jobs.

But where will these displaced workers go? With little promise of training for better-paying jobs, there is the very real risk of social upheaval. Innovative countries and states are looking into a solution that itself presents ethical challenges: state-funded universal basic income, or UBI.

The ethical quandary is this: Assuming UBI works to quell revolt and ensure public health and modest comfort, should the relatively few “haves” fund the lives of the many more “have-nots” — even if it is for the good of the state?

Ethics questions aside, UBI is being taken seriously. It is being tried in a couple of Scandinavian countries, and has been seriously debated by voters in Switzerland. Closer to home, it’s actually being attempted today, in Stockton, CA.

Like autonomous vehicles and AI transformations in other industries, UBI seems to be something that cannot be ignored. How will it be framed in the U.S.? And what would happen to the American standard of living if it were embraced in other parts of the world but not here? The U.S. was founded and built upon Puritan ideals of hard work and self-sufficiency (reliance on legalized slavery for part of that “hard work” notwithstanding). Can we as a society agree to revise our thoughts about work in time to save our rapidly transforming economy?

Talk amongst yourselves.

Test Tube T-bones

A magazine article I read as a kid has stayed with me all these years. It must have been 1970. Back then Time was an important window to the world (scary thought). In this piece its editors wanted to dazzle us with visions of the future, 50 years hence to be exact. They had a staff artist sketch the predictions of a jury of futurists. The result was a picture of the men and women who would inhabit the U.S., circa 2020. These people, it didn’t escape my notice, were the young adults I would come to know when I was as old as my grandparents.

You could see in those sketches the time’s many revolutionary changes. The forecasters used as their starting blocks the recent revolutions in feminism, fashion and technology — and probably many more. They ran feverishly from that spot, only stopping when the horizon gleamed brightly with geo-domes and hovercrafts. Sometimes optimistic, the depictions were mostly just plain weird.

True, there were a few on-target predictions. I especially recall the general metrosexual appearance of the men.  It seems that by then facial hair will be outlawed, or perhaps cured. The men were also uniformly round-shouldered, presumably made so by the helpful toilings of brawny robots. As for the fairer gender, I was a little too young then to notice, but yes, the women of the future will be plenty hot … if you go in for the boyish, fashion model types.

A Thin Future

By today’s standards, one thing is conspicuous: no one depicted in this lineup looked even remotely in need of Jenny Craig. The effects of the Earl Butz / Nixon Era agricultural policies had not yet materialized at the time of the article, so the futurists couldn’t factor them forward to today’s ever-expanding American waistlines. Corn crops were not yet heavily subsidized. The cost of food on American tables was three times higher in 1970 than it is today, adjusted for inflation. Futurists had no inkling of a time when cheap corn-based calories were the norm and rates of obesity and diabetes were through the roof.

Those postcards from the future seemed lightweight in other ways as well.

There were many small gaffes. Example: There were no tattoos or piercings. There were several glaring ones, too. I recall that all the Americans were WASP-white. (This was after all from a time in our history when, until a few years earlier, Crayola was able to unironically label a salmony-beige crayon “Flesh.”) Also, inexplicably, everyone in 2020 will wear long robes. Did you know this? Apparently we’ll all look like we just stepped out of the shower.

The Future, Now and Then

Why do I bring all this up? I occasionally read freshly-minted portraits of the future, and I find it fun to compare the way they make me feel now versus back then. I’ve just read a new glimpse of our future, and I can tell you this: Our future, four decades ago, may have looked weird, but today the future just looks gross.

I’m referring to the recent New Yorker story about meat that is being cultured in the lab. Yes. Right now, in 2011. Food scientists are taking stem cells of our holy trinity of animal protein – cattle, chickens and pigs – and culturing them in a nutrient-rich broth. As you may know, stem cells are capable of turning into any of their owners’ tissues. Scientists are flipping the cellular switches to Muscle and seeing what happens.

What they’re finding is plenty: The promise of cheap, plentiful meat. This is meat free of corn-fed, antibiotic-drenched, water-guzzling, E. coli-growing livestock. Yes, all this from clusters of cells multiplying with abandon in labs far from their genetic benefactors.

These scientists are also finding that by “folding” sheets of these cultured cells onto themselves, they can create what will look like ground meat. Future research will look at the next step, using 3-D printers that issue bubbles of specialized cells instead of colors of ink to “print” lab-grown steaks, cutlets and chops.

Is your mouth watering yet? No, mine isn’t either. But if you care for the fate of the earth you may want to stay seated at the table.

Solving Several Global Problems At Once

In that May 23, 2011 issue of the New Yorker, Michael Specter describes this strange but thrilling convergence of people and technology. He writes, “[This is] a new discipline, propelled by an unlikely combination of stem-cell biologists, tissue engineers, animal-rights activists, and environmentalists.”

A brand new discipline is a big deal. The father and elder statesman of this one is Willem van Eelen, an 88-year-old Dutch native, part-time scientist, and full-time zealot. He is the focus of this New Yorker piece. Specter reports that van Eelen has pursued his dream of feeding the world from a Petri dish since the 1950s. We learn that he doggedly championed his cause in (not surprisingly) the face of decades of aggressive skepticism and even derision. It has only been relatively recently that technology and world events have caught up to him and begun to propel his work forward. A dozen years ago he achieved an important milestone: he was granted U.S. and international patents for his Industrial Production of Meat Using Cell Culture Methods.

Why are environmentalists among its supporters?

For all the carbon emissions livestock are responsible for, you’d think every beef flank and chicken breast we eat arrives at our plates from the back of a Hummer. According to the piece, “our patterns of meat consumption have become increasingly dangerous for both individuals and the planet …”

According to the United Nations Food and Agriculture Organization, the global livestock industry is responsible for nearly twenty percent of humanity’s greenhouse-gas emissions. That is more than all cars, trains, ships and planes combined. Cattle consume nearly ten percent of the world’s freshwater resources, and eighty percent of all farmland is devoted to the production of meat.

There is also hope for crises of individual health: “According to a report issued recently by the American Public Health Association, animal waste from industrial farms ‘often contains pathogens, including antibiotic-resistant bacteria’ … Seventy percent of all antibiotics and related drugs consumed in the United States are fed to hogs, poultry, and beef.

“[Also,] the World Health Organization has attributed a third of the world’s deaths to the twin epidemics of diabetes and cardiovascular disease, both greatly influenced by excessive consumption of animal fats.” The article made the point that by re-engineering the meat that’s being cultured, we may someday be able to dine on burgers more akin to health food than heart-attacks-on-a-bun.

Phasing Out the Factory Animal

That’s just the humanitarian case for in-vitro meat. Let’s not forget the “animalitarian” perspective:

By 2030, the world will likely consume seventy percent more meat than it did in 2000. The … implications for animal welfare [are daunting]: billions of cows, pigs, and chickens spend their entire lives crated, boxed, or force-fed grain in repulsive conditions on factory farms.

We’re reassured in the article that the cure for these social ills won’t leave a bad taste in our mouths. The fact is, we’re not talking about artificial meat. It’s the real thing. This is hard for people to grasp, as demonstrated by the way Terry Gross of Fresh Air struggled with the idea in her recent interview with Mr. Specter.

He concedes this point, and its general lack of appeal: “Nearly every person I told that I was working on this piece asked the same question: What does it taste like? (And the first word most people blurted out to describe their feelings was ‘Yuck.’) Researchers say that taste and texture – fats and salt and varying amounts of protein – can be engineered into lab-grown meat with relative ease.”

What won’t be easy is scale.

Scientists are doing this work today in tiny quantities, with muscle tissue no larger than a contact lens. What is needed is a transition from science to engineering. Rallying the financing for that won’t be easy until more people can see a shared vision of the benefits of in-vitro meat.

But don’t despair. Just as the space program in the 1960s prospered because the science was already in place, the scientific underpinnings of cultured meat exist today. What is lacking is awareness. That, and the leadership necessary to tackle tough problems like global warming, and human hunger and illness, in the face of a future that makes us all a little queasy. It’s one thing for a nation to get behind men on the moon. It’s another to look forward to tucking into a test tube T-bone.

Photo credit courtesy The Big Scout Project via Creative Commons