Creating a Code of Ethics for Artificial Intelligence

by Thomas Frey | May 5, 2022 | Artificial Intelligence

For more than 50 years, scientists, philosophers, and mathematicians have been exploring and perfecting advances in artificial intelligence (AI), while the average person remained in the dark about the details and actual capabilities. It wasn’t until IBM’s Deep Blue computer defeated world chess champion Garry Kasparov in 1997 that we could begin to relate to this incredible new branch of science.

The next generation experienced a similar moment 14 years later, in 2011, when IBM’s Watson defeated two Jeopardy! champions.

And now, a decade further still, AI is all around us.

Most of us can’t even distinguish applications of AI from “normal” life. AI powers our social media and news feeds. It’s given us speech and biometric recognition along with verbal computer communication. Language-based interfaces answer our questions and give us directions to where we’re going.

Simply put, yet amazing to consider, AI mimics human thought processes and actions to perform tasks, forming conclusions much as a human might.

So when it comes to machines that think and act like a person … what could go wrong with that?

We naturally don’t trust things we don’t understand. And if we can’t fully understand something like AI, we want to be sure that someone we trust is watching out for our best interests. We don’t have that kind of oversight at the moment, but we’re quickly going to need it.

Bad Data and Bad Actors

Why do we need AI regulation?

Many ethical issues arising from AI are due to poorly formed or inadequate inputs – the data and base of knowledge the AI program is given to work with. Bad data gives bad results, and biased data that doesn’t actually reflect the real-world situation will produce biased machine learning and, therefore, flawed results. Policies or actions based on those results move us backwards, not forwards.
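
To make that concrete, here’s a minimal sketch in Python, using entirely hypothetical data and a deliberately trivial “learner”: a model trained on a sample that skews against one group simply inherits the skew and turns it into policy.

```python
# Minimal sketch: biased inputs become biased outputs.
# The data and the "model" below are hypothetical and deliberately simple.
from collections import Counter

def train_majority_model(examples):
    """Learn the most common label per group -- a stand-in for any
    learner that fits whatever patterns its training data contains."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, []).append(label)
    return {g: Counter(labels).most_common(1)[0][0]
            for g, labels in by_group.items()}

# Skewed sample: group "B" is mostly recorded with negative outcomes,
# an artifact of how the data was collected, not a real-world base rate.
training_data = ([("A", "approve")] * 90 + [("A", "deny")] * 10
                 + [("B", "approve")] * 10 + [("B", "deny")] * 30)

model = train_majority_model(training_data)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the skew has become policy
```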

There are also opportunities for humans to use AI technology in nefarious, possibly criminal ways. AI can be used to manipulate, trick, and distort, but one person’s manipulation may also be another person’s “persuasion.” And pre-judging – in targeted advertising, for example – isn’t too different from “customizing.” It’s all a matter of degree and intent. We know inappropriate AI-driven activity when we see it, but bright lines are often hard to draw.

Underlying Standards for AI Ethics

Responsible AI that safely serves humankind (and not just the manipulators, tricksters, and distorters) will need to:

Explainability

Be based on standards of transparency and “explainability,” meaning a common understanding of the input, processing, and output of the AI function. This entails having common, agreed-upon language and definitions as well as measures of success and error. It means there are no black boxes with results or processes that can’t be rationally explained.
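
As one illustration of what “no black boxes” can look like in practice, here’s a minimal sketch with hypothetical feature names and weights: a transparent scoring function whose result decomposes into per-input contributions that anyone can read off.

```python
# Minimal sketch: a transparent model whose every output can be explained.
# Feature names and weights are hypothetical, for illustration only.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.75, "years_employed": 0.25}

def score(applicant: dict) -> float:
    """The model's output: a simple weighted sum of the inputs."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Decompose the score into each input's contribution,
    so the result can be rationally explained -- no black box."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 4.0, "debt_ratio": 2.0, "years_employed": 4.0}
print(score(applicant))    # 1.5
print(explain(applicant))  # {'income': 2.0, 'debt_ratio': -1.5, 'years_employed': 1.0}
```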

Standards

Be based on standards that weigh both the benefits and the risks of AI. This requires stepping back during the research and design phase to consider the harm or downsides that might occur through missteps in the development of the system or exploitation of the system once it’s released. At the development stage, there’s a tendency to over-promote the positives and downplay concerns, a practice known as “ethics washing.” Equally concerning is the inclination to ignore the downsides of AI “for now,” while offering assurances that these issues will be addressed as they arise.

Safeguards

Maintain guardrails, safety standards, and protocols to govern high-risk functions of AI, such as voice and facial recognition or social bots. The AI behind deepfakes is a prime example: the same technology can be used for legitimate purposes such as movie special effects, or for nefarious ones – to misinform, deceive, misrepresent, and much worse. Bots are another example. Used appropriately, they serve as digital assistants or as web crawlers that sort and rank information; deployed as social bots, they can be intentionally used to distort online discussions and spread falsehoods.
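
In code, such a guardrail can be as simple as a policy gate checked before a high-risk capability ever runs. The risk tiers, approved purposes, and rules below are assumptions for illustration, not an established standard.

```python
# Minimal sketch: a policy gate for high-risk AI functions.
# Tiers, purposes, and rules are hypothetical assumptions.
HIGH_RISK = {"facial_recognition", "voice_cloning", "social_bot"}
APPROVED_PURPOSES = {"film_vfx", "accessibility"}

def check_guardrails(function: str, purpose: str, human_approved: bool) -> bool:
    """Allow low-risk functions outright; high-risk functions need a
    declared, approved purpose and explicit human sign-off."""
    if function not in HIGH_RISK:
        return True
    return purpose in APPROVED_PURPOSES and human_approved

print(check_guardrails("spellcheck", "", False))                 # True
print(check_guardrails("voice_cloning", "film_vfx", True))       # True  (legitimate use)
print(check_guardrails("voice_cloning", "impersonation", True))  # False (blocked)
```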

Human-in-the-Loop

Ensure there’s always a human in the loop to train, tune, and test AI algorithms and to verify that the first three criteria are met. In other words, AI should never be left to oversee AI development entirely on its own.
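
A common pattern for keeping a person in the loop is confidence-based routing: the model acts on its own only when it is sure, and everything else goes to a human reviewer. This is a minimal sketch; the threshold and the review step are assumptions for illustration.

```python
# Minimal sketch: confidence-based routing keeps a human in the loop.
# The 0.9 threshold and the review prompt are illustrative assumptions.
def human_review(item: str) -> str:
    """The human escalation path -- here, just a console prompt."""
    return input(f"Human decision for {item!r} (approve/deny): ")

def decide(item: str, model_label: str, confidence: float,
           threshold: float = 0.9) -> str:
    if confidence >= threshold:
        return model_label        # the AI acts on its own
    return human_review(item)     # an uncertain case goes to a person

print(decide("loan #1042", "approve", 0.97))   # auto-approved by the model
# decide("loan #1043", "deny", 0.55) would prompt a human reviewer instead
```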

Using these standards as guideposts, AI regulation will need to quickly get into the weeds. We shouldn’t try to create broad, enforceable standards that address every conceivable application. Instead, we should focus on use cases involving specific tools, such as facial recognition, and specific business sectors, such as the use of AI in banking or national defense.

Creating an AI Police Force

Who is it that should put these policies in place and enforce them? This is a huge question with no easy answers.

We can’t completely count on legislative bodies to codify and regulate these standards. Policymaking is slow, subject to special interest manipulation, and, yes, even political. These government bodies always seem to address last-generation challenges even as iterative new problems replace the old ones.

Plus, the standards need to be global, which is an even more challenging policy arena. Governments at every level have a measure of independence from industry, which is important to foster trust but makes them less attuned to the urgencies of today.

To be certain, government bodies haven’t exactly been on the sidelines. Nations (e.g., the U.S. and China) and blocs of nations (e.g., the EU) have issued frameworks for oversight and regulation of AI. It’s at least a start.

More likely, the policies will be set by regulation, with knowledgeable input from industry tempered by informed regulatory staff who can put that input into context.

As this evolves, though, we’ll need to put the onus on tech companies themselves to establish internal standards and work collectively to develop industry-wide standards. These can serve as points of information and even boundary lines for establishing regulations.

What Else Needs to Happen?

To be certain, I’m a big fan of AI. Our future will be driven by AI technology that can discern, understand, learn, and act far quicker and more reliably than we can. It will give us additional capabilities and we’ll all be better for it.

But to enhance trust and keep AI in its proper place, we’ll need more than regulatory safeguards: all of us will need to recognize AI for what it is and what it isn’t. AI involves machine learning and other processes that can be flawed if that learning comes from incomplete or biased data.

Ninety-nine times out of 100, your map application will get you to your destination. But don’t bet your life on it, because there will still be times when it sends you to a dead end or a road closure. AI programs might be able to predict an election outcome with a stated level of certainty, but they may still have their own “Dewey Defeats Truman” moments – perhaps because, in political contexts, they rely on polling data from people who increasingly hate to be polled.

We’ll also need to develop our own healthy skepticism – to not always believe what we see and read on social media, for example. That will be a major challenge as more and more curated content is AI-distributed and bot-driven in ways that reinforce the predilections we started with.

For better and worse, AI mimics human thought processes. The cycle of incomplete data inputs leading to faulty learning and resulting in bad decisions holds true for humans as well.

We aspire to live in a world far better than what we have today, and AI is a key piece of the equation for getting us there.
