Are we heading towards a Technological Singularity?
Few would dispute that technology is progressing at a rapid pace. Blockchain, virtual reality, 3D printing, autonomous transportation – it’s difficult to keep up with developments in these and other breakthrough technologies. Major areas of innovation are arriving faster and faster.
And few would disagree that artificial intelligence, powered by machine learning and quantum computing, offers greater capacity than human brains in many respects. Machines can process information faster than we ever could, and their capacity is increasing exponentially.
But will we ever reach a point where human beings are no longer in charge, a time when we aren’t the supreme beings on Earth? And will these kinds of breakthroughs and developments inevitably happen on their own, powered and informed by previous breakthroughs?
What is Technological Singularity?
Those kinds of questions help frame some of the issues behind technological singularity – the point in time when rapidly accelerating technological growth becomes uncontrollable and irreversible. Singularity, in this case, is the tipping point where an intelligent machine can design and produce improved versions of itself and other machines without human intervention, bypassing the limiting boundaries of human intelligence.
Technological singularity isn’t just a matter of machine capability. It has a time element as well. The technological singularity theory holds that this kind of self-propelled innovation will happen at a rapidly accelerating pace. Technology breakthroughs and developments will occur over shorter and shorter periods of time until we reach the point, according to Kevin Kelly, founding executive editor of Wired magazine, that “all the change in the last million years will be superseded by the change in the next five minutes.” Only non-humans would be able to survive this chaos, which leads us to the next element related to singularity: transhumanism.
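The time element above has a simple mathematical intuition worth making explicit. Here is a minimal sketch (my own illustration, with hypothetical numbers, not a claim from the article): if each breakthrough arrives in half the time of the previous one, the cumulative timeline is a geometric series that converges to a finite horizon – which is why singularity arguments place the event at a specific date rather than pushing it ever further out.

```python
# Hypothetical illustration: if each breakthrough arrives in half the
# time of the previous one, total elapsed time converges to a finite
# point -- the intuition behind dating the "singularity".

def singularity_horizon(first_interval: float, ratio: float = 0.5,
                        steps: int = 60) -> float:
    """Sum the shrinking intervals between successive breakthroughs."""
    total, interval = 0.0, first_interval
    for _ in range(steps):
        total += interval
        interval *= ratio  # each new interval is `ratio` times the last
    return total

# With a 20-year first interval and halving thereafter, the series
# approaches first_interval / (1 - ratio) = 40 years and never exceeds it.
print(singularity_horizon(20.0))
```

The same arithmetic also shows the fragility of the claim: if intervals shrink more slowly than geometrically (say, by a fixed amount rather than a fixed ratio), the series diverges and there is no finite “moment” of singularity at all.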
Transhumanism
Taken to an extreme, technological singularity can reach a point where the line between humans and machines is erased. Some say we’ll see human brains duplicated or removed and placed within never-dying machines so the “person” lives forever.
Another transhumanist scenario suggests we’ll reach the point where embedding AI, biotechnology, and genetic manipulation within human bodies will allow a person to live forever. After all, prosthetics, implants, and artificial valves have already taken us down this path to a limited degree.
And then there’s the speculation that machines will create and program robots that will dominate the world. Their actions will be 100% focused on accomplishing goals without consideration for the externalities they create. They’ll have no hesitancy to destroy humans, the environment, and certainly our social norms if that’s what it takes.
We’ve all seen the Hollywood movies pitting mankind against machines. Thanks to human ingenuity, the humans usually win – so we can sleep at night and our favorite actor’s character isn’t destroyed – but there’s no reason to think that would be the case if computers and robots truly progressed to that level of development.
Can it Happen?
The first person to use the word “singularity” in a technological context was John von Neumann, in the mid-twentieth century. The ideas were further fleshed out by science fiction writer Vernor Vinge in the 1980s and in Ray Kurzweil’s 2005 book, The Singularity is Near: When Humans Transcend Biology. According to Kurzweil, machine intelligence will surpass human intelligence in 2029, and the technological singularity will occur around 2045.
No doubt we’ll accomplish a lot before 2029, and many aspects of our lives will probably be unrecognizable by 2045. We’ve already witnessed unprecedented advances in computing, along with genetic engineering, synthetic biology, nootropic drugs, and direct brain-computer interfaces.
We’re accomplishing things we never even considered two decades ago, let alone thought were possible. This kind of progress won’t stop, but it doesn’t necessarily mean the advances will build to a crescendo and culminate in a moment of singularity.
Color Me Skeptical
I’m rather skeptical of many of the predictions surrounding the singularity, particularly the transhumanism part of the equation. I find I’m in good company as a skeptic because Gordon Moore also has his doubts, and Moore’s Law is often used to reinforce the prediction of singularity. Other skeptics include people like Jaron Lanier, Bruce Sterling, and Paul Allen.
I simply don’t see the emergence of a tipping point where we lose ownership of progress. In my opinion, several factors will keep this from happening:
- As machines get smarter, the remaining challenges that must be solved before they can become autonomous will keep getting more complex. The gap between the two will never close.
- Societies will form institutions to ensure that AI advancements are constrained by human values. The “mad scientist,” Dr. Frankenstein scenario isn’t realistic. The line between the technology that supports humanity and the technology that threatens it will be policed by smart people who will advise policymakers accordingly.
- As futurist Martin Ford suggests, well before we reach technological singularity, the advancements leading up to it will displace so many workers in so many skilled professions that society will lose all desire to continue down that path.
We Don’t Need Singularity
The concepts around technological singularity are the extrapolation of our legitimate progress taken to the point of absurdity. Yes, we’re moving up an exponential growth curve, and yes, we’re heading into uncharted territory. But five minutes of innovation supplanting all known technology? Really? An evolved form of humanity instantly taking over the earth? Really? Those aren’t small steps just over the horizon, they’re chasms that won’t be crossed.
So far no one has been able to answer the fundamental question of “Why?”
So take a deep breath … and focus on the incredible progress we’ve been making, progress that will give us an unbelievable new world in which to live as we move into the future. And yes, we can enjoy AI-themed science fiction without assuming the story is inevitable. It’s possible to believe that technology will continue to rapidly transform our lives without buying into the apocalyptic visions of technological singularity and transhumanism.
Since a singularity is the point where the rules of science stop working, like the gravitational singularity of a black hole, there are no good models for predicting what comes out the other side.
It would be a new beginning, not a future.
Thank you for the skepticism, particularly the comment on the absurdity of extrapolating the past into the future without regard for the non-linear feedback behavior of complex systems. This is a widespread fallacy in pop-futurism. Singularities, transhumanism, spacefaring societies, advanced AI, fusion, colonizing Mars — these are all romantic ideas about the future that are unfortunately not grounded in any measurable reality.

They rely on an even more absurd premise: the magical thinking that we can have limitless growth on a finite planet. They overlook the fundamental understanding that economic growth and technological development are functions of surplus net energy, and net energy, on the whole, has been declining over the past few decades and will likely continue to decline as the few remaining easily obtainable fossil fuel resources are used up, and lower-EROEI “green” energy technologies replace fossil energy systems at greater scale. With this decline in net energy will come very hard limits on growth, technology usage, and technological development, particularly within the energy-intensive digital tech ecosystem that underpins most of the “predictions” of our wondrous techno-optimist future.

For more on this, see the work of UCSD physics professor Tom Murphy on his appropriately titled “Do The Math” blog, and Dr. Dan O’Neill, Professor in Ecological Economics at the University of Leeds.
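The net-energy argument in this comment can be made concrete with a small sketch (my own illustration with hypothetical EROEI values, not the commenter’s figures): for a source with a given EROEI (energy returned on energy invested), the fraction of gross output left over for the rest of society is 1 − 1/EROEI, and that fraction collapses quickly once EROEI drops toward the low single digits.

```python
# Illustrative sketch (hypothetical EROEI values): net energy available
# to society from a source, as a fraction of its gross output.

def net_energy_fraction(eroei: float) -> float:
    """Fraction of gross energy output not consumed by extraction itself."""
    if eroei <= 0:
        raise ValueError("EROEI must be positive")
    return 1.0 - 1.0 / eroei

for eroei in (50, 20, 10, 5, 2):
    share = net_energy_fraction(eroei)
    print(f"EROEI {eroei:>2}: {share:.0%} of gross output is net energy")
```

Note the non-linearity: falling from EROEI 50 to 10 only trims the net share from 98% to 90%, but falling from 10 to 2 cuts it from 90% to 50% – which is why declining EROEI imposes the hard limits the comment describes.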
Ever since OpenAI released ChatGPT, the idea of the singularity has loomed larger. Although it is highly unlikely, the singularity might be near. With tools like ChatGPT we don’t need to use our brains, and if technology figures that out, it could remove itself from our reach, and the world would crumble.
The article discusses how technological singularity is the point where machines can design and produce improved versions of themselves and other machines without human intervention. It also mentions how transhumanism could reach a point where the line between humans and machines is erased. The article presents both sides of the argument: some believe it will happen while others are skeptical. The writer argues that a tipping point where we lose ownership of progress is unlikely, because the remaining challenges that must be solved before machines can become autonomous keep getting more complex. The writer believes that societies will form institutions to ensure that AI advancements are constrained by human values, and that the technological advancements leading up to singularity will displace so many workers that society will lose all desire to continue down that path. While technological progress has been rapid, it is unlikely that we will reach a point where machines dominate humans.