“As soon as it works, no one calls it AI anymore.” —John McCarthy
John McCarthy coined the term Artificial Intelligence in 1956. And he rightly complained about it. The idea of computers becoming more powerful than humans is not new; it has been with us since the inception of computers themselves. Yet every time computers start to do what one generation considered sci-fi, the next generation considers it anything but AI. So maybe AlphaGo beating the world's number one Go player is just a natural progression of technology, and not an act of superintelligence as we think of it. But maybe it is. Nobody has the right answer, because no one can see the future. What's important is the question of where we are in a historical context.
So, where exactly are we?
Tim Urban of the "Wait But Why" blog wrote a seminal piece on AI. It's a two-part blog post about the length of a short book. In it, Tim divides AI into three categories (originally proposed by James Barrat in his book Our Final Invention):
1. Artificial Narrow Intelligence (ANI)
2. Artificial General Intelligence (AGI)
3. Artificial Super Intelligence (ASI)
ANI is what surrounds you nowadays: Amazon suggesting new books and paper towels based on your previous purchases, Facebook suggesting new friends, and Google predicting what you might be looking for as soon as you start typing. Beyond that, it's also your smartphone making thousands of decisions every day on your behalf based on what it already knows about you. ANI is specific and narrow in scope. You provide the context of the problem and a vast amount of data, and it takes it from there. Over time it becomes more efficient at that task than any human can possibly be. But the intelligence is the result of sheer computing power rather than any contextual understanding. You still need a human brain for context, cognition and emotional intelligence.
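To make the "narrow" part concrete, here is a minimal sketch of the kind of logic behind "customers who bought this also bought that" suggestions. The purchase data and item names are made up for illustration; real recommender systems are far more sophisticated, but the narrowness is the same: it counts co-purchases, and nothing else.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories, one set of items per order.
orders = [
    {"book", "paper towels"},
    {"book", "lamp"},
    {"book", "paper towels", "pen"},
    {"lamp", "pen"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def recommend(item):
    """Suggest items most often bought alongside `item`, best first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common()]

print(recommend("book"))  # "paper towels" ranks first (bought together twice)
```

The system has no idea what a book or a paper towel *is*; give it a task outside co-purchase counting and it is useless. That is the "narrow" in Artificial Narrow Intelligence.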
In case you are wondering, IBM's Deep Blue, IBM Watson and DeepMind's AlphaGo are all still examples of ANI.
AGI is essentially human-level intelligence: the point at which a computer can figure out a problem on its own, without any external input. A computer at the AGI level will have the context, cognition and emotional intelligence to understand and solve any problem a human can, and at a speed at least on par with that of an ordinary human being. This is what we are working on right now: bringing computers to the AGI level. There are three ways we could go from ANI to AGI.
1. The first is to let evolution take its course. We slowly feed things to computers over time, and eventually they will know what we know. But this approach is very slow; it took us millions of years to learn what we know today.
2. The second is to slice the brain and mimic its functionality layer by layer. This, of course, has biochemical limitations, among others.
3. The third is to enable computers to gain intelligence and build their own on top of it. This is perhaps our best option.
ASI is a level of intelligence that surpasses the human brain. And while we need concerted efforts to make AGI happen, ASI will happen as a consequence of AGI. When a computer reaches AGI, it will be able to figure out a solution to any problem on its own, without human intervention. What that entails is that if the computer decides to build another computer to solve a problem, it will simply do so. For all of our history, we created machines. For the first time, if ASI happens, machines will create machines. The Gateses, Musks and Hawkings of the world worry that this might happen sooner than we expect.
In his book Sapiens, Yuval Noah Harari argues that what made humans such a dominant species on Earth is their ability to gain and share knowledge. If a lion knows something, it only makes that lion more knowledgeable and powerful; there is no way for the lion to communicate this knowledge and power to other lions. But if a human knows something, it's only a matter of time before other humans know the same. Modern-day computers share this trait. So if one computer gains the ability to understand context, the rest will too. At the end of the day, it's software. Easily copyable software.
But isn't computer interaction and software sharing something we have had for the past three decades? Yes, but now we have two additional things that we never had before:
1. A vast amount of data aka Big Data.
2. A seemingly unlimited computing power.
Computing power is doubling roughly every 18 months. Soon this accumulated power will surpass that of the human brain, because each doubling raises the baseline for the next. And Big Data gives computers a strong foundation of knowledge to begin with. Tell a computer "this is a cat image" enough times, and it will identify the next cat image without any intervention, even if the image is rather foggy. Machine learning, empowered by these two fundamental technologies, enables computers to start learning new things on their own.
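The "show it enough cat images" idea can be sketched in a few lines with a nearest-neighbor rule, one of the simplest forms of learning from labeled examples. The three-number feature vectors below are made-up stand-ins for real image features; the point is only that a new, noisy ("foggy") example gets labeled by its resemblance to examples seen before, with no hand-written cat-detection rule anywhere.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Training" here is just remembering labeled examples
# (hypothetical feature vectors standing in for images).
training = [
    ((0.9, 0.1, 0.8), "cat"),
    ((0.8, 0.2, 0.9), "cat"),
    ((0.1, 0.9, 0.2), "dog"),
    ((0.2, 0.8, 0.1), "dog"),
]

def classify(features):
    """Label a new example by its closest remembered example."""
    _, label = min(training, key=lambda ex: distance(ex[0], features))
    return label

# A noisy, never-before-seen example: still closer to the cat examples.
print(classify((0.7, 0.3, 0.6)))  # → "cat"
```

Scale the same principle up to millions of labeled images and far richer models, and you have the kind of image recognition described above.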
Think of computers with machine learning as growing children. You tell them everything in the beginning, but soon they outgrow you. That can be either a good thing or a bad one.