
Is Moore’s Law for Mad Science Inevitable?

by Thomas Frey | Aug 20, 2020 | Predictions

Futurist Speaker Thomas Frey Blog: Limit Sensitive Information Available Online That Could Prove Dangerous

In 1996, Artificial Intelligence (AI) theorist Eliezer Yudkowsky wrote: “Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve.”

Nearly 20 years later, Yudkowsky seemed to have something to say about people at the other end of the intelligence spectrum. In what he dubbed Moore’s Law for Mad Science, Yudkowsky stated: “Every 18 months, the minimum IQ necessary to destroy the world drops by one point.”

To give credit where credit is due, this “law” (really more a prediction or estimate than a law) was no doubt a tongue-in-cheek nod to Gordon Moore, co-founder of Intel Corporation. In 1965, Moore predicted that the number of transistors per silicon chip would double every year, an assertion that became known as Moore’s Law. That prediction stayed roughly on track until 2019, when it finally came to an end.
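The two “laws” scale in opposite directions: Moore’s curve compounds exponentially while Yudkowsky’s threshold erodes linearly. As a rough illustration, here is a minimal Python sketch; the starting values (a 64-transistor chip in 1965 and a hypothetical “world-destroying” minimum IQ of 180) are assumptions invented for the example, not figures from either man:

```python
# Illustrative sketch only: the starting values below are assumptions
# chosen for the example, not figures from Moore or Yudkowsky.

def moores_law(transistors_1965: int, year: int) -> int:
    """Moore's 1965 prediction: transistors per chip double every year."""
    return transistors_1965 * 2 ** (year - 1965)

def mad_science_law(starting_iq: float, months_elapsed: float) -> float:
    """Yudkowsky's quip: the minimum IQ needed to destroy the world
    drops one point every 18 months."""
    return starting_iq - months_elapsed / 18

# Exponential: a hypothetical 64-transistor chip in 1965, doubling yearly...
print(moores_law(64, 1975))       # 65536 transistors after a decade

# ...while the hypothetical "world-ending" IQ threshold falls only linearly.
print(mad_science_law(180, 120))  # about 173.3 after the same decade
```

One curve compounds and the other merely erodes, but both move in the same worrying direction: ever more capability concentrated in the hands of ever less exceptional people.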

But was Yudkowsky’s “Moore’s Law” a reconsideration of the optimism implied in his earlier statement? Should we seriously consider placing limits on self-learning technologies, as Elon Musk has warned? At what point do AI and system intelligence become a danger because human intelligence has not advanced enough to cope with them?

Another trend that ties into Yudkowsky’s Mad Science law is the fact that, over time, we’re putting more and more information, and therefore more power, into the hands of individuals. Should we try to place limits around the distribution of information that could prove dangerous?

Thanks to the Internet, all kinds of sensitive information is available, and not just to scientists.

With a little bit of digging, complex and dangerous information is not only available, it’s dumbed down for the rest of us: you and me, the anarchist who lives down the road, and the delusional schizophrenic working in a garage laboratory-workshop. After all, mad science information is for mad scientists, right?


So, yes, we need to be vigilant. We need to be aware that there are countless, remarkably detailed “how-to” articles buried deep online explaining how to cause chaos or trigger widespread death.

But do we really have the ability to destroy the world? Releasing a more virulent version of COVID-19 may be one approach. We’ve already seen plenty of entertaining television shows and movies built around foiling plots to destroy all or part of the world.

What about nuclear weapons? We still seem to be in a kind of Cold War environment, where wars between superpowers are fought regionally, often by proxies, using conventional weapons and armaments. Outright use of nuclear mega-weapons by one of the superpowers would certainly trigger mutually assured destruction on a widespread, possibly even global, scale.

Our biggest concern, though, shouldn’t be that a rogue nation will choose to use these weapons, but that a rogue actor will gain access to them, or hold that access hostage through a sophisticated ransom scenario.

While the age of heavy military guns and hardware is ending, a new age of bio, cyber, and propaganda wars is just beginning. The concept of imminent risk and menacing danger is being reframed around non-intuitive, non-visible, and non-obvious threats related to the infrastructure and systems all around us.

Power grids, air traffic control systems, medical labs, and even the Internet channels that spread misinformation and rumors all have technology-based points of entry.

As these systems become automated and smarter, per Yudkowsky’s first statement, hackers (granted, they’re smart people) often discover weak links and share them with others. Per his second statement, those others don’t need to figure anything out for themselves; they simply follow the devious instructions they find online.

Yudkowsky is clearly someone who will make you think! That said, I would tie his two statements together as: “Our sole responsibility is to produce something smarter than we are, but let’s make sure the keys are locked away from those without a legitimate need to know.”

Otherwise yes, it will eventually become rather simple to destroy the world.
