The Perfection Quandary
I often wake up in the middle of the night with a big idea, something I’ve dubbed the grand epiphany. But as it turns out, very few actually fit into the “grand” category.
Whenever they do, big ideas carry with them a heavy responsibility: the responsibility of either moving them forward or allowing them to die in the silent echo chambers of our own grey matter.
For this reason, I’ve often likened my eureka moments to being tortured by my own ideas. Yes, grand ideas are a wonderful playground where you can dream about starting a new company, solving some of the world’s biggest problems, and constructing visions of wealth and influence, all in the time it takes most people to get ready for work.
However, what I’ve described is far from a rare condition. Millions, perhaps even billions, of people are equally tortured by their own epiphanies on a daily basis. In fact, every new product, book, movie, and mobile app has been born out of one of these lightning-strike moments.
Without epiphanies, life would be a very monochrome experience. No swashes to add color to our dreams, no voices of urgency calling out in the middle of the night, and no moments of anticipation to cause our mind’s fertile proving grounds to blossom. Instead, every silent box we open will be just that… silent.
I often weep for the great ones that have been lost. Every grand problem humanity faces today has been solved a million times over inside the minds of people unprepared to move those solutions forward.
That’s right, every major problem plaguing the world today, ranging from human trafficking and water shortages to pollution, poverty, and even war, has been solved again and again in personal epiphanies that never came with the means to implement them.
But that’s about to change.
Over the coming decades, machine intelligence will continue to hockey-stick its way up the exponential growth curve, creating digital mechanisms for capturing personal epiphanies, with all the right tools and incentives for idea-holders to attach their solutions to.
At this point we will begin harvesting pinnacle thought moments from our best and brightest, paving the way for more continuous idea-feeds, and a crude form of the global brain, as imagined by countless science fiction writers in the past, will begin to take shape.
However, no breakthrough technology is without its unintended consequences, and this one is no exception.
In our rush to solve all of life’s major problems (and we each have our own utopian image of the good life), our drive for solutions will leapfrog us directly onto the lily pad of perfection. And ironic as it may sound, it will be this drive for perfection that will be our undoing.
Early Days of Artificial Intelligence
In 1950 Alan Turing published his groundbreaking paper “Computing Machinery and Intelligence,” in which he contemplated the possibility of building a thinking machine. Since he concluded that “thinking” was difficult to define, he devised his famous Turing Test as a way to know when we’ve achieved it.
According to Turing, if a machine could carry on a conversation (over a teleprinter, the most advanced technology of the day) that was indistinguishable from a conversation with a human being, then it was reasonable to assume that the machine was “thinking” and building a “thinking machine” was at least plausible.
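To make Turing’s protocol concrete, here is a minimal sketch of the imitation game in Python. Everything in it is a hypothetical stand-in: machine_reply is a placeholder for whatever conversational program is under test, and the judge is any function that reads both transcripts and guesses which seat holds the human.

```python
import random

def machine_reply(prompt: str) -> str:
    # Placeholder for the conversational program under test
    # (an illustrative assumption, not any real system).
    return "That's an interesting question. What do you think?"

def human_reply(prompt: str) -> str:
    # The human confederate answers from the keyboard.
    return input(f"[question] {prompt}\n> ")

def run_turing_test(questions, judge) -> bool:
    """Return True if the judge mistakes the machine for the human."""
    # Randomly seat the machine at terminal A or B, as in Turing's game,
    # so the judge cannot rely on position.
    seats = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        seats = {"A": human_reply, "B": machine_reply}
    # Collect each seat's answers to the same questions.
    transcripts = {seat: [(q, respond(q)) for q in questions]
                   for seat, respond in seats.items()}
    guess = judge(transcripts)  # the seat the judge believes is human
    return seats[guess] is machine_reply
```

On this framing, the machine “passes” whenever the seat the judge picks as human is actually the machine, which is exactly the indistinguishability Turing had in mind.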
The field of artificial intelligence officially began at Dartmouth College in 1956 with a summer workshop organized by John McCarthy and Marvin Minsky. Many of the early artificial intelligence researchers predicted that a machine with human-like intelligence was no more than a generation away, and they spoke so convincingly that investors ponied up millions to make the vision come true.
In the early 1970s, investors became disillusioned with the slow progress and began to back away. The period between 1974 and 1980 became known as the first AI winter.
In the early 1980s, a move by the Japanese government to fund AI research through its Fifth Generation Computer project caused a number of other governments to follow suit, and the millions invested in earlier decades quickly mushroomed into billions. A form of AI program called “expert systems” became popular and was adopted by corporations around the world, and knowledge became the primary focus of mainstream AI research.
However, that wasn’t enough to sustain interest and the second AI winter happened between 1987 and 1993 when investors once again pulled away from funding any new projects.
That all changed when a number of AI success stories began to surface:
- 1997 – IBM’s Deep Blue defeated the reigning world chess champion Garry Kasparov.
- 2005 – DARPA’s Grand Challenge, which involved driving an unmanned vehicle over an unrehearsed 131-mile desert trail, was won by the Stanford team.
- 2007 – DARPA’s Urban Challenge, which involved autonomously navigating a 55-mile, high-traffic urban course, was won by the Carnegie Mellon team.
- 2011 – IBM’s Watson defeated two of the top champions, Ken Jennings and Brad Rutter, on the quiz show Jeopardy!
We are now overdue for another AI matchup, one that I’ve speculated will involve pairing up a driverless car against a recent winner of the Indianapolis 500.
In Search of the Singularity
The person credited with coining the term “singularity” in this sense was mathematician John von Neumann. In a 1958 tribute, Stanislaw Ulam recalled von Neumann describing the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
Since that first cryptic mention more than half a century ago, futurist thinkers like Vernor Vinge and Ray Kurzweil have focused on the exponential growth of artificial intelligence. Riding a Moore’s Law type of advancement, they argue, we will develop superintelligent entities with decision-making abilities far beyond our capacity to understand.
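To see why a Moore’s Law trajectory invites such dramatic conclusions, consider a back-of-the-envelope sketch in Python. The two-year doubling period below is an illustrative assumption borrowed from Moore’s observation about transistor counts, not a measured property of machine intelligence.

```python
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Total capability multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for horizon in (10, 20, 40):
    print(f"{horizon} years -> {growth_factor(horizon):,.0f}x")

# Output:
# 10 years -> 32x
# 20 years -> 1,024x
# 40 years -> 1,048,576x
```

It is that last line, a million-fold gain within a single lifetime, that drives the claim that such entities would move beyond our ability to understand them.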
Cloaked in this air of malleable mystery, Hollywood has begun to cast the singularity as everything from the ultimate boogeyman to the savior of humanity.
A number of fascinating trend lines lend credence to these predictions. In addition to our ever-growing awareness of the world around us, brought on by social media and escalating rates of digital innovation, measured human intelligence has risen every decade since IQ tests were first introduced in the 1930s, a phenomenon known as the Flynn Effect.
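For a sense of scale: the Flynn Effect is commonly cited as a gain of roughly three IQ points per decade, though the exact rate varies by country and study, so the arithmetic below rests on an assumed average rather than a precise measurement.

```python
POINTS_PER_DECADE = 3                     # commonly cited average gain (assumption)
decades_of_testing = (2015 - 1935) / 10   # roughly eight decades of IQ testing
rise = POINTS_PER_DECADE * decades_of_testing
print(f"Cumulative rise: ~{rise:.0f} IQ points")
# prints: Cumulative rise: ~24 IQ points
```

A shift of that size means an average test-taker from the 1930s would score well below average against today’s norms, which is part of why futurists fold the trend into their forecasts.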
The Problem with Perfection
Ironic as it may sound, perfection is an imperfect concept.
Each of us has been born and raised with all of the foibles and limitations of being human. A typical day involves forgetting where we’ve put our keys, stubbing our toe, getting angry at the wrong person, and dropping a plate full of food. And those are just the little things.
We are indeed intelligent beings, but for all of our limitations, the intelligence we possess hardly seems enough.
Even after a full night’s sleep we still wake up tired, we crave food that isn’t good for us, and our pets end up being a poor substitute for the kids we didn’t have.
Most importantly, we have a never-ending need for social interaction. The proliferation of social networks has given many the illusion of being surrounded by people who love them, but the reality is just the opposite. A 2012 article in The Atlantic cited research concluding that one in four Americans had no one with whom they could discuss important matters, compared with one in ten thirty years earlier.
Similarly, a 2013 survey conducted by Lifeboat found that the average American has only one real friend, concluding that we are in a friendship crisis. This prompted The Guardian to declare that we are entering the “age of loneliness.”
The opposite of loneliness is not togetherness, it’s intimacy. We need to be needed, and that’s where perfection comes in.
Our human needs are what create our economy. Without needs, we have no economy.
It’s easy to imagine a perfect person as self-fulfilled. But the more flawless our lives become, the more self-sufficient, self-reliant, self-absorbed, and isolated we become, and our need for control leaves us exactly there: in control… of a universe of one.
Nothing exciting ever happens in a vacuum. Well, it does, but the vacuum doesn’t care. We need someone who cares.
That’s where artificial or machine intelligence comes into play. The improvements we seek with this technology are rarely going to be the improvements we need.
Our striving to make a better world is a superficial goal, much like winning the lottery, buying expensive jewelry, or eating three pounds of chocolate. The instant high we get from indulging our immediate gratification only sets us up for a second stage of emptiness and the crashing realization that even when we have it all, it’s never enough.
Final Thoughts
Most of us today are remarkably sedentary, sitting an average of 9.3 hours a day. Sitting has become the smoking of the Millennial generation.
But from our personal command center, wherever it is that we may be sitting, we’re able to control most of what we want in our lives, right?
So what is it that we want to control? On-demand entertainment, on-demand answers, food, healthcare, sex, transportation, news, or something else?
If we set perfection as our goal, we need to begin by defining it. Does perfection mean we’ve optimized our efficiency, our purpose, our income, our accomplishments, our relationships, our happiness, or something else?
The balancing act of life was never intended for someone to win in every category, and even if that were possible, without needs we’d somehow become devoid of purpose.
This entire discussion has left me in a bit of a quandary, or better put, a perfection quandary. Why is there an exception to every rule? Perhaps we first need to solve the law of unintended consequences.
Try as we may, we seem destined to struggle.
For this reason I’ve concluded that broad forms of AI will not live up to expectations, and the singularity will not unleash the utopia many are predicting. But as with all mysteries of science, we will never really know for sure until we reach the other side.
Author of “Communicating with the Future” – the book that changes everything