What Happens When We Lose Control of AI?
You’d think it would be difficult to find something that Stephen Hawking, Elon Musk, Tim Cook, Andrew Ng, and Vladimir Putin all agree on, but here’s one: artificial intelligence (AI) comes with inherent risks we know of, as well as risks we haven’t yet fathomed. At the same time, they all agree that it comes with “colossal opportunities.”
Here’s how an AI-enabled process typically works: a person identifies an objective, the AI-empowered machines draw on all available knowledge sources to assess the facts and conditions related to that objective, and then they take action to achieve that goal. Whether that action is to “inform” or to “do” is set in advance (a minimal sketch of that switch follows the list below), and that’s where everyone becomes concerned, because:
- AI can make mistakes
- Human programmers can make mistakes
- Human programmers can be bad actors
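To make the “inform” versus “do” distinction concrete, here is a minimal Python sketch of a hypothetical agent loop. Everything in it (the `Mode` flag, `recommend_action`, `execute`) is illustrative, not any real system’s API; the point is simply that a single pre-set switch is all that separates advising from acting.

```python
from enum import Enum

class Mode(Enum):
    INFORM = "inform"  # the system may only report a recommendation
    DO = "do"          # the system is authorized to act on it

class Agent:
    def __init__(self, objective: str, mode: Mode = Mode.INFORM):
        self.objective = objective
        self.mode = mode  # fixed in advance by a human operator

    def recommend_action(self) -> str:
        # Stand-in for perception and planning against the objective.
        return f"proposed step toward: {self.objective}"

    def execute(self, action: str) -> str:
        # In a real deployment, this is where side effects would happen.
        return f"EXECUTED: {action}"

    def step(self) -> str:
        action = self.recommend_action()
        if self.mode is Mode.DO:
            return self.execute(action)          # acts in the world, no human sign-off
        return f"RECOMMENDATION ONLY: {action}"  # a human still decides

agent = Agent("reduce hospital readmissions")
print(agent.step())  # -> RECOMMENDATION ONLY: proposed step toward: reduce hospital readmissions
```

Note that in the “do” mode, nothing in the loop requires a human sign-off, which is precisely the concern.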
Regarding the first item on the list, we need look no further than IBM’s Watson. Once considered the epitome of AI, Watson, despite having digested troves of medical literature, developed a troubling tendency just a few years ago to occasionally issue faulty and even dangerous clinical guidance to doctors for patient cancer treatments. Fortunately, the doctors recognized the mistakes.
So far so good. Humans were still in charge.
If AI Runs the Show
But at what point will humans stop being in charge of the tasks we assign to ever smarter versions of AI? Regarding the second and third concerns noted above, even outside of science fiction it is entirely plausible that an inept or devious programmer could bridge the chasm between AI as an information source and AI as a human substitute that takes action, shifting the system from the safer “inform” mode to the more dangerous “do” mode.
Cyberterrorists will undoubtedly learn to exploit that opening.
One such unfortunate scenario would be a programmer casually instructing an AI to eradicate cancer. If that programmer didn’t establish appropriate parameters, the AI might determine that the optimal way to eliminate cancer is to eliminate anyone with cancerous cells in their body. Since nearly everyone carries at least some abnormal or precancerous cells, this would quickly become a mass extinction event.
And if the AI is granted, or seizes, the ability to access the Internet of Things (IoT) in order to analyze patient data in real time, it may shift from the “inform” to the “do” instruction and deploy that strategy itself, pulling plugs and worse.
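As a toy illustration of why those missing parameters matter, here is a hypothetical Python sketch (the strategies and numbers are invented). An optimizer that scores outcomes only by “cancer cases remaining” happily selects the catastrophic option; a single hard constraint on harm rules it out.

```python
# Hypothetical strategy space: (cancer_cases_remaining, humans_harmed).
# All values are invented for illustration.
strategies = {
    "fund_research":      (900_000, 0),
    "improve_screening":  (700_000, 0),
    "eliminate_patients": (0, 8_000_000_000),  # "optimal" under a naive objective
}

def naive_objective(outcome):
    cases, _harmed = outcome
    return cases  # counts only cancer cases; harm is invisible to the optimizer

def constrained_objective(outcome):
    cases, harmed = outcome
    return float("inf") if harmed > 0 else cases  # hard constraint: no harm allowed

print(min(strategies, key=lambda s: naive_objective(strategies[s])))
# -> eliminate_patients: the misspecified goal is satisfied catastrophically

print(min(strategies, key=lambda s: constrained_objective(strategies[s])))
# -> improve_screening: the constraint removes the perverse optimum
```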
That seems extreme, yes. But smarter people than I am are voicing similar concerns. The endgame, according to many deep thinkers, is that AI could mean the end of mankind, or at least of the world as we know it.
If AI Is Given Control
We’ve had some funny discussions recently about a defendant appearing before a judge and offering the defense, “It wasn’t me; my AI did it!”
- “My AI decided on its own to blackmail 5,000 people yesterday!”
- “My AI decided on its own to try hacking into the Pentagon, or the IRS, or the Russian Embassy!”
- “My AI decided on its own to alter the Harvard, Stanford, or MIT grading system!”
- “My AI decided on its own to set up a bank account in the Cayman Islands!”
After all, the metaverse is a place where we can create idealized avatars of ourselves, and it may be just a matter of time before the lines between the real universe and the metaverse blur to the point where it’s difficult to tell who’s who.
Can It Happen?
While these scenarios sound far-fetched and even a bit humorous, we certainly need to determine whether AI advances are setting us on a path toward an autonomous and dangerous version of AI.
That was the topic of discussion in our Futurati Podcast interview with Dr. Roman Yampolskiy. Dr. Yampolskiy, a tenured professor of Computer Science at the University of Louisville and Director of its Cyber Security Lab, is convinced that as long as we keep developing AI, it is only a matter of time before it takes over. All attempts to “box it in” or apply other mitigating strategies can only buy time until we face the inevitable.
A recent study published in the Journal of Artificial Intelligence Research on the plausibility of catastrophic outcomes from superintelligent, AI-enabled technology came to a similar conclusion. The authors argued that it would be impossible to contain or reliably restrain such increasingly smart AI systems: any perfect containment procedure would have to predict in advance whether the AI’s program could cause harm, and they show that this decision problem is undecidable, a close cousin of Alan Turing’s famous halting problem.
And Manuel Cebrian, one of the co-authors of that study, pointed out that we’ve already strayed down that path: “There are already machines that carry out important tasks independently without the programmers fully understanding how they learned it, a situation that could at some point be uncontrollable and dangerous for humanity.”
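To see the flavor of that impossibility argument, here is a toy Python sketch (my illustration, not code from the study). It mirrors Turing’s halting-problem proof: for any claimed “safety checker,” we can construct a program that misbehaves exactly when the checker certifies it as safe, so no checker can be right about every program.

```python
from typing import Callable

def make_adversary(checker: Callable[[str], bool]):
    def adversary() -> bool:
        # Returns True iff it "causes harm": it harms exactly when
        # the checker certifies it as safe.
        return checker("adversary")
    return adversary

def evaluate(checker: Callable[[str], bool]) -> str:
    verdict = checker("adversary")      # what the checker predicts ("safe"?)
    harmed = make_adversary(checker)()  # what the program actually does
    correct = verdict == (not harmed)   # "safe" should mean "no harm"
    return f"says safe={verdict}, harm={harmed}, checker correct={correct}"

# Whatever the checker decides about its own adversary, it is wrong:
print(evaluate(lambda name: True))   # permissive checker  -> harm occurs anyway
print(evaluate(lambda name: False))  # conservative checker -> condemns a harmless program
```

The real theorem is stronger (it covers every computable checker, not just these two), but the self-referential trap is the same.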
Other observers liken this inevitability to the more pessimistic views about climate change. Some people feel we may already be at the point where any steps we can reasonably take to reduce CO2 emissions will at best only delay a cataclysmic ecological reality, given the damage so far and our momentum in that direction.
Can We Stop or Significantly Delay an AI Takeover?
Just as governments have worked collectively through the United Nations to address global warming, we could conceivably see collective action to prevent an AI cataclysm. Countries could act together to put rules and safeguards in place, but if the UN’s efforts on global warming are any indication, we’re likely to see uneven compliance across countries and classic game-theoretic behavior, with malevolent countries relying on compliant countries to do the right thing, to the compliant countries’ competitive disadvantage.
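To see that game-theoretic worry in miniature, here is a hypothetical two-country payoff table (the numbers are invented). If racing ahead on unrestricted AI confers a competitive edge, defecting from a treaty dominates complying with it, exactly as in the classic prisoner’s dilemma.

```python
# Hypothetical payoffs for two countries deciding whether to comply with
# an AI treaty. Each tuple is (payoff_A, payoff_B); the numbers are invented.
payoffs = {
    ("comply", "comply"): (3, 3),  # shared safety, shared benefit
    ("comply", "defect"): (0, 5),  # B races ahead at A's expense
    ("defect", "comply"): (5, 0),
    ("defect", "defect"): (1, 1),  # arms race: worse for both than mutual compliance
}

def best_response(opponent_choice: str) -> str:
    # Country A's best reply, given what the other country does.
    return max(("comply", "defect"),
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

print(best_response("comply"))  # -> defect
print(best_response("defect"))  # -> defect (defection dominates either way)
```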
Thus, given the tremendous benefits AI technology can provide a nation, combined with the presence of rogue and authoritarian states that tend to refuse to be bound by international standards, it’s not at all certain we could develop any reasonably enforceable worldwide limitations on AI applications. And as with every new strain of COVID, bad things are rarely contained by borders.
Many nations have signed on to the OECD’s nonbinding, recommended Principles on Artificial Intelligence, which focus on the collective good of AI while acknowledging that the related risks should continually be explored. At least it’s a start.
Similarly, there are a number of professional organizations for AI experts, but these, too, appear more engaged in sharing best practices than in setting enforceable guidelines and professional standards. Hopefully, leaders in these organizations will coalesce around a code of conduct that places appropriate guardrails around the exploitation of AI. But all it takes is one bad actor – a criminal, or a scientist compelled by fame, curiosity, or their government – to upset the apple cart in spite of the best intentions of 99.99% of AI computer scientists.
How Might it All End?
It’s not easy for a futurist to speculate about the end of the future.
Yes, AI will eventually break down, and yes, we may retain the ability to turn off the power and disconnect communication lines, but the problem is far bigger than those last-ditch efforts.
The tipping point will come when AI is no longer a tool used for research and specific, limited process improvement, but instead evolves into a general-purpose, life-enhancing strategy that’s given carte blanche to solve a pressing global problem, manage a wide swath of our lives, or improve an area of national priority: national security, climate change, natural resource allocation, wealth distribution, or just about any other major vexing challenge we face.
It will happen as soon as we say once too often, and in the wrong context, that “a machine can do this better than a human.”
That is what will open an AI Pandora’s Box that we won’t be able to close.