
The Seven Deadly Sins of Machine Intelligence

by Thomas Frey | Jul 13, 2016 | Artificial Intelligence


The year is 2028, and James Mathews has been summoned to appear before Winston III, the famous AI judge overseeing an electronic courtroom. The court was set up as a pilot project, but many have already dubbed it the justice system of the future.

Mathews, the defendant in this test case, has developed an image-forming technology that captures reflections of reflections and, from these obscure light fragments, pieces together works of art composed of personal images and video clips. Because many of his works depict people behind closed doors, a slew of well-articulated media condemnations has appeared, claiming his work is an invasion of privacy.

Since his machines use only what he refers to as “second-generation reflections,” light fragments collected in public spaces far from the people and buildings his images portray, Mathews believes he’s in the clear and his artistic works are fair game.

Many believe this to be the perfect test case for Winston III, a lifelike judge-bot infused with state-of-the-art AI capabilities. Because of the complexity of privacy laws and the tens of thousands of rules, regulations, and requirements governing the issue, an impartial compiler of all the facts is needed to render an evenhanded final judgment.

In the courthouse, Winston III is dressed like a techie judge of sorts, giving participants the feeling they’re in a traditional courtroom, but one that will produce a fairer outcome.

After decades of people protesting the bias and favoritism shown by our fraying legal system, a team of research scientists set out to design the ultimate fair judicial process, starting with a redesign of the most central figure in a courtroom, the judge.

Creating a robot that looked like a judge was a minor accomplishment compared to building the artificial intelligence engine needed to assimilate the intent of countless statutes, apply them to a given situation, and render a legal opinion.

Since there was no central repository for the laws, their first task was to create a public database of all laws, making it both accessible to their AI engine and viewable by the general public.

Once the database was in place, they next had to add meaning to the seemingly endless verbiage of past laws, regulations, and court rulings. They did this by linking past court rulings to each statute and reverse engineering the legal interpretations applied to each case file over the past several decades.
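
To make this step concrete, here is a minimal sketch of what such a linked repository might look like. The Statute and Ruling records, the field names, and the sample privacy statute are all hypothetical, invented purely for illustration.

    # Hypothetical sketch: a statute repository with past rulings linked
    # to each statute, yielding (statute, case facts, outcome) triples an
    # AI engine could use to reverse-engineer legal interpretations.
    from dataclasses import dataclass, field

    @dataclass
    class Ruling:
        case_id: str
        facts: str     # summary of the case facts
        outcome: str   # e.g. "upheld" or "dismissed"

    @dataclass
    class Statute:
        citation: str
        text: str
        rulings: list = field(default_factory=list)  # linked precedents

    def training_pairs(statutes):
        """Pair each statute with the facts and outcome of every ruling
        that applied it, producing examples a model can learn from."""
        for statute in statutes:
            for ruling in statute.rulings:
                yield (statute.citation, ruling.facts, ruling.outcome)

    # Invented example entry:
    privacy = Statute("Hypothetical Privacy Act § 12", "No person shall capture...")
    privacy.rulings.append(Ruling("2019-CV-0042", "images captured in public", "dismissed"))
    print(list(training_pairs([privacy])))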

Using past human decisions to train the artificial intelligence engines of the future is nothing new, but this project attempted to raise the bar of sophistication to a whole new level.

The danger of using the “reverse-engineered human” technique, at least in this application, lay in the possible contamination of an impartial, preference-neutral AI with human biases.

As many of us already know, our attempts to make machines more human-like in their thinking bring with them the potential for more flaws in their decision-making… just like humans.

But human bias only scratches the surface of the imperfections that can result from badly programmed AI. For this reason, I thought it would be enlightening to discuss the downside of machine intelligence: the seven deadly sins of flawed encoding and how they can go woefully wrong.

The Original Seven Deadly Sins

The seven deadly sins do not appear in the Bible as a list, but they stem from some of the teachings of King Solomon found in Proverbs 6:16-19. In these verses, King Solomon refers to the “six things the LORD doth hate: yea, seven are an abomination unto him.”

1. Lust – It is usually thought of as intense or unbridled sexual desire but can also include lust for power and money.

2. Gluttony – Overindulgence and overconsumption of anything to the point of waste.

3. Greed – An abnormal desire for possessions, to the point where stealing, hoarding, and kleptomania can be justified.

4. Sloth – While sloth can be described in many ways, it’s generally defined as laziness, a failure to act. Evil exists when “good” people fail to act.

5. Wrath – Best described as uncontrolled feelings of anger, rage, and even hatred, often manifesting itself in a desire to seek vengeance.

6. Envy – Similar to greed and lust, malicious envy is characterized by an insatiable desire or resentful covetousness towards the traits or possessions of someone else.

7. Pride – While not all pride is bad, extreme hubris or pride is often considered the most dangerous of the seven deadly sins. Pride, in this context, refers to dangerously corrupt selfishness, the putting of one’s own desires, urges, wants, and whims before the welfare of everyone else.

[Image: How do we know if our machines are hardwired to fail?]

Future self-learning systems will develop inputs from a variety of sources. As requirements for subtle, human-like perception increase, the fastest path to data collection will be to capture the processes used by human experts.

For example, quality control in the perfume industry is often based on judgment calls made by seasoned professionals weighing a number of hard-to-quantify olfactory attributes that lead to a final decision.

Without a periodic table for smells and tastes to serve as a baseline of comparison, one person’s olfactory talents may indeed be quite different from someone else’s.

In this type of situation, it may be easier to monitor and learn from the reactions of experts than to develop a top-down decision tree, since gathering data from human subjects is far simpler.
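
As a rough illustration of this bottom-up approach, the sketch below records an expert’s pass/fail calls on a handful of scored attributes and fits a small model to mimic them. The attribute names, the scores, and the choice of a decision-tree learner are assumptions made for the example, not a description of any real quality-control system.

    # Hypothetical sketch: record an expert's pass/fail calls on a few
    # instrument-scored attributes and fit a small model to mimic them,
    # rather than hand-building a top-down decision tree. All names and
    # numbers are invented.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [sweetness, muskiness, sharpness] scored 0-10 by instruments.
    batches = [[7, 2, 1], [6, 3, 2], [3, 8, 6], [2, 7, 7], [8, 1, 1], [4, 6, 5]]
    expert_calls = ["pass", "pass", "fail", "fail", "pass", "fail"]  # observed judgments

    model = DecisionTreeClassifier(max_depth=2).fit(batches, expert_calls)
    print(model.predict([[5, 4, 3]]))  # imitate the expert on a new batch

Note that whatever quirks this particular expert’s nose carries are now baked into the model, which is exactly how the biases discussed next creep in.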

It’s not easy to explain how judgment calls made for the perfume industry can introduce biases into unrelated applications such as analytic accounting, anticipatory tutoring, or even recommending a lifestyle-specific diet plan, but that’s exactly what will happen.

Eventually, self-learning systems will develop “sanitizing” software to eliminate the favoritisms and biases stemming from marginal inputs, but that will take time.
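
What such a sanitizing pass might look like is anyone’s guess today; the sketch below shows one crude, hypothetical version that simply drops input features correlating too strongly with an attribute the decision should not depend on (here, which expert supplied each label).

    # Hypothetical sketch of a crude "sanitizing" pass: drop any input
    # feature that correlates too strongly with an attribute the decision
    # should not depend on. Real debiasing is far more involved.
    import numpy as np

    def sanitize(X, nuisance, threshold=0.8):
        """Return the column indices of X that are safe to keep, given a
        nuisance attribute (e.g. which human expert supplied each label)."""
        keep = []
        for j in range(X.shape[1]):
            corr = abs(np.corrcoef(X[:, j], nuisance)[0, 1])
            if corr < threshold:
                keep.append(j)
        return keep

    # Invented data: column 0 tracks the expert's identity almost exactly,
    # so it gets dropped; column 1 does not, so it survives.
    X = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.8], [4.0, 2.1]])
    expert_id = np.array([0, 0, 1, 1])
    print(sanitize(X, expert_id))  # -> [1]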

In the meantime, here are some of the deadly sins likely to accompany these kinds of tainted inputs.

1. Deceptive Sneakiness – In much the same way a person feels betrayed by a cheating spouse, future machines with secretive reasoning and veiled tendencies will yield similar feelings of distrust.

2. Skeptical Pessimism – TV shows and movies have conditioned us to believe that, when asked for the odds of success, a computer will give an exact number like 35.5%. But computers have never been that precise, nor will they be in the future; instead they will offer wide ranges such as 40-70%. An AI suffering from a pessimism bias will yield gloomy predictions like 0-10%, or discouraging words such as, “You’re doomed to failure!” (A rough sketch of this kind of range reporting follows the list.)

3. Self-Centered – It’s easy to imagine a machine that is programmed for survival, perhaps even at the expense of its own operators. Yet this human-like quality has the potential to be much more subtle and permeate its decision-making circuits with “me first” requests like better operators, better materials, better maintenance, or even fewer hours.

4. Gullibility – We would hope that machines of the future would be impervious to online scams, but just as spam filters have their own workarounds, every decision-point has the potential for similar blind spots.

5. Domineering – Nobody likes a bully that brushes off new inputs and discards better options, but a domineering AI is nothing to trifle with. Machines, too, can learn to get their own way by adapting.

6. Indiscretion – Everyone has his or her own secrets, and conveying the sensitivity of certain information to a machine is not easy. For example, if you ask a machine to reorder your medicine, it may not understand the need to keep both your credit card info and your medical data private throughout an overly complicated ordering process.

7. Narrow-Mindedness – Are we better off with decisions made after reviewing huge volumes of information, or with more efficient judgments drawn from a limited number of databases with higher-quality records? Machines can be narrow-minded in the first sense (broad but shallow) as well as in the second (limited in scope).
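
As promised under item 2, here is a minimal sketch of how a system might honestly report odds as a range rather than a falsely precise point estimate. The outcome data and the bootstrap approach are assumptions chosen for illustration.

    # Hypothetical sketch for item 2: report success odds as a range by
    # bootstrap-resampling past outcomes, instead of a falsely precise
    # point estimate like "35.5%". All data here is invented.
    import random

    past_outcomes = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # 1 = a prior attempt succeeded

    def odds_range(outcomes, trials=1000, seed=42):
        random.seed(seed)
        rates = sorted(
            sum(random.choices(outcomes, k=len(outcomes))) / len(outcomes)
            for _ in range(trials)
        )
        return rates[int(0.05 * trials)], rates[int(0.95 * trials)]  # central 90% band

    low, high = odds_range(past_outcomes)
    print(f"Estimated odds of success: {low:.0%}-{high:.0%}")
    # A pessimism-biased system would skew this band downward, toward 0-10%.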

Final Thoughts

Going back to the opening scenario, will we be working with judge-bots anytime soon?

In my opinion, we are destined to work through a number of prototypes, several generations of decreasingly flawed Winston III judge-bots, before we finally get a machine capable of rendering a reasonably unbiased verdict.

Using the “seven deadly sins” approach to understand how negative human attributes can corrupt non-human machines has been an exercise in better grasping the massive potential for things to go wrong.

In the future, machine intelligence will only be as good as the decision-forming architecture at its core. AI will find tons of uses in narrowly defined applications, but every time we stretch the scope, even by a seemingly insignificant amount, the potential for imperfections will grow exponentially.

“Just when we thought it was safe to go swimming in intelligent waters, we realized the water was still dumber than the toe we were attempting to dip into it.”

By Futurist Thomas Frey

Author of “Communicating with the Future” – the book that changes everything
