Life begins at a billion moments
When does artificial intelligence become real intelligence? And how do we define real intelligence?
This is a question that AI experts and people in the tech community have been wrestling with for some time, a question that has led to “the billion moment theory.” The theory goes something like this.
Life-changing moments are happening every second of every day. That means every minute another 60 moments happen, and every hour another 3,600 occur. On a larger scale, every day is formed around 86,400 moments.
As the metronome of time continues to tick, it takes 31.7 years to accumulate a billion seconds.
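The arithmetic behind these figures is easy to verify. The sketch below recomputes them from first principles; the 365.25-day year is an assumption to account for leap years.

```python
# Back-of-the-envelope check of the figures above.
SECONDS_PER_MINUTE = 60
SECONDS_PER_HOUR = 60 * SECONDS_PER_MINUTE   # 3,600
SECONDS_PER_DAY = 24 * SECONDS_PER_HOUR      # 86,400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY  # assumes a 365.25-day year

years_to_a_billion = 1_000_000_000 / SECONDS_PER_YEAR
print(f"{years_to_a_billion:.1f} years")     # → 31.7 years
```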
While we reach the legal age of adulthood at 21, we reach a new level of maturity in our 30s.
Similar in some respects to Malcolm Gladwell’s theory in “Outliers,” where people invest 10,000 hours (36 million seconds) to become an expert, a billion moments is the number of learning cycles necessary to transition from a machine brain to something else.
So while the organic human side of the equation progresses in a somewhat methodical manner, machine learning can compress learning cycles into a fraction of that time.
As a point of comparison, when we look closely at the epic battle between Go master Lee Sedol (age 33) and Google DeepMind’s program AlphaGo, we can begin to understand the massive speed advantage AI has over the human mind.
AlphaGo studied positions from 30 million human games and played more than 30 million practice games with itself. This is in stark contrast to Lee Sedol who began serious training when he was 8 years old, and worked at it for 12 hours a day for the next 25 years. That means AlphaGo received at least 500 times as much practice as Lee to achieve a comparable level of skill.
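A rough calculation shows how a ratio on that order falls out of the numbers above. The games-per-day figure for Lee Sedol is purely an assumption for illustration; only the 30 million self-play games and the 25 years of training come from the text.

```python
# Rough sanity check on the practice-ratio claim.
lee_training_years = 25
games_per_day = 7  # assumption: games played or studied per day
lee_games = lee_training_years * 365 * games_per_day  # ≈ 64,000 games

alphago_games = 30_000_000  # self-play games cited above

ratio = alphago_games / lee_games
print(f"ratio ≈ {ratio:.0f}x")  # on the order of the "500 times" cited above
```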
We tend to lose perspective on topics like this because a second seems very short, and a billion is an unfathomably large number. But it’s also not the whole story.
As we peel back the layers of the onion, AlphaGo is only good at one thing. It was not trained to drive a car, cook a meal, hold an intelligent conversation, write a book, or know the difference between right and wrong. Perhaps it could learn those things, but each additional skill would require an additional concentrated effort.
Human Intelligence vs. Artificial Intelligence
As humans, we’re the product of a billion learning moments, but it’s not just one thing. We learn how to walk, talk, feed ourselves, how to avoid pain and discomfort, how to find companionship, food, shelter, and thousands of nuanced skills we instinctively learn over time.
No single skill comprises a billion moments, but our human abilities contain billions of intertwined learning fragments that make up who we are, and we have the ability to rethink, shift gears, modify our approach, and improvise at a moment’s notice.
Much of our ‘human’ learning comes from physically doing something. The act of running, putting puzzle pieces into place, smelling a well-cooked meal, matching our wardrobe, having a friendly conversation, or doing constructive work are all examples of combining muscle memory with cognitive processing to form a new skill.
A machine’s ability to do one thing a billion times and get it perfect is far superior to a human’s, because we don’t have the luxury of turning off the rest of our lives to do just one thing.
The physical world is also far different from the digital world. Many of us remember the videos of a robot opening and shutting the door on a Ford vehicle to test the durability of all the mechanisms involved. But it’s not possible to open and close a door a billion times to get it right. Since each open/close routine takes several seconds, a billion repetitions would require well over a hundred years to complete, and the mechanical pieces would start to fail long before it was over.
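The hundred-year figure checks out with a little arithmetic. The 4-second cycle time below is an assumption within the “several seconds” the text describes:

```python
# Why a billion physical open/close cycles is impractical.
cycles = 1_000_000_000
seconds_per_cycle = 4  # assumption: "several seconds" per open/close
total_seconds = cycles * seconds_per_cycle

SECONDS_PER_YEAR = 365.25 * 86_400
years = total_seconds / SECONDS_PER_YEAR
print(f"{years:.0f} years of nonstop cycling")  # → 127 years
```

Even at 3 seconds per cycle the total is roughly 95 years, so the conclusion holds across the plausible range.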
In this respect, AI is like every other machine. Given enough repetitions, AI will always fail. Or will it?
Can artificial intelligence beat human intelligence?
We all know that artificial intelligence is still in its infancy. However, as we think through some of the next steps in its likely evolution we begin to get a glimpse of how it will advance over time.
However, it’s hard to state what AI’s limitations are with any degree of certainty. Virtually every technological limitation has workarounds and AI has a way of rewriting our current “laws of physics.”
When it comes to understanding the future, an effective way of finding answers is to parse the problem into a series of well-crafted questions. Here are eight currently unanswerable questions that will hopefully point us in the right direction.
1. Can artificial intelligence improve to a point where it rivals or exceeds human intelligence?
We’ve already seen AI exceed human intelligence in specific niche areas like playing games, operating airplanes, and drivering cars, but will we see a comparable level of AI showing signs of empathy, creating value judgments based on human compassion, learning to craft a compelling argument, or forming the basis for an original thought?
2. Can AI be instilled with a human-like purpose?
We all start our days with a set of goals and ambitions, but what is our overarching “human purpose”? Why are we here, and what is the overarching goal of humanity? Borrowing a phrase from Star Trek, what is humanity’s prime directive? Can a machine also be given a set of value equations that defines its own morality, set of ethics, and overarching purpose?
3. Can AI cross the boundaries and transition from its current digital form of machine intelligence to a living organic life form?
Over time, our thinking about mechanical machines will evolve from purely mechanical devices, to hybrid mechanical-organic contraptions, to mostly living machines, to pure synthetic life forms, and the process of building machines will be replaced by growing them. During this same time, artificial intelligence will likely be replaced by degrees of synthetic intelligence, followed by what many will consider a superior form of “real” intelligence. Is this a realistic possibility?
4. Can AI be taught to reproduce?
A few months ago, I wrote a column titled, “Will Future Robots be able to give Birth to Their Own Children?” At first blush, the notion of a mechanical robot giving birth to a baby robot sounds preposterous. But many of the technologies we use today started out as preposterous ideas at one time or another.
5. At what point will AI be considered an entirely new species?
As we begin to experiment with CRISPR technology, we may very well see people with six fingers on each hand, four legs, and three arms. At what point do we stop being human and start being something else? Can programmable life forms be far behind?
6. What are the critical inventions or advances that will turn AI into our rivals instead of allies?
Even a self-aware, self-directed, self-reproducing, synthetic-organic life form with survival instincts and an emotional desire to climb its way up Maslow’s Hierarchy of Needs may still not be enough to create a sustainable life form with sustainable intelligence. How will we know when its complementary skills and talents become adversarial?
7. Will AI ever get to the point of not needing humans?
In much the same way that children eventually turn their backs on their parents, carefully monitoring the declining “need quotient” of a programmable life form may give us the answer. But if an AI is taught to mask its own level of self-sufficiency, we’ll never know for sure.
8. Is it possible to know when AI crosses the threshold of being harmless to being dangerous?
As with humans, deception is a learned skill. Similar to the human trait of always wanting to show the world a positive face, synthetic life forms may well disguise their true intentions until it’s too late.
The word “biot,” a clever descriptor meaning “biological robot,” was originally coined by Arthur C. Clarke in his 1973 novel “Rendezvous with Rama.” In the novel, biots are depicted as artificial biological organisms created to perform specific tasks in space.
We are seeing a number of emerging fields that bridge the boundaries of biology and robotics. These range from cybernetics and bionics to biomimicry and synthetic biology.
I won’t go into all the nuances that differentiate each of these fields, only that the hard, fast boundaries between organic and inorganic, biological engineering and biomechanical engineering, and artificial life and real life are all beginning to blur, and AI is leading the charge.
Many of our advancements over the coming years will challenge our sensibilities. They will challenge our understanding of what constitutes life, our rights as humans, our moral compass, our sense of authority, and especially the ethical limits of science.