The future will be far more surprising than most observers realize: few have truly internalized the implications of the fact that the rate of change itself is accelerating. - Ray Kurzweil, “The Law of Accelerating Returns”
It’s only one man talking, making projections about the future of technology and not coincidentally the future of the human race. Yet many of Ray Kurzweil’s predictions have hit the mark. In 2009, he analyzed 108 of them and found 89 entirely correct and another 13 “essentially” correct. “Another 3 are partially correct, 2 look like they are about 10 years off, and 1, which was tongue in cheek anyway, was just wrong,” he added. If he can maintain this rate of success, many of his other predictions will happen within the lifetime of most people alive today. And almost no one is prepared for them.
Author, inventor, successful entrepreneur, futurist, and currently head of Google’s engineering department, Kurzweil is enthusiastic about the technology explosion that’s coming. Here are a few predictions he’s made over the years:
In The Age of Intelligent Machines (1990) he said that by the early 2000s computers would be transcribing speech into computer text, telephone calls would be routinely screened by intelligent answering machines, and classrooms would be dominated by computers. He also said by 2020 there would be a world government, though I suspect he’s backed off from that view. (See his comment to Gorbachev in 2005 that technology promotes decentralization which ultimately works against tyranny.)
In The Age of Spiritual Machines (1999) he predicted that by 2009 most books would be read on screens rather than paper, people would be giving commands to computers by voice, and they would use small wearable computers to monitor body functions and get directions for navigation.
Some of the milder predictions in The Singularity Is Near (2005) include $1,000 computers having the memory capacity of one human brain (10 TB, or 10^13 bytes) by 2018, the application of nanoscale robots (called nanobots) to medical diagnosis and treatment in the 2020s, and the development of a computer sophisticated enough to pass a stringent version of the Turing test — a computer smart enough to fool a human interrogator into thinking it was human — no later than 2029.
Soon after that, we can expect a rupture of reality called the Singularity.
The Technological Singularity
As used by mathematicians, a singularity denotes “a value that transcends any finite limitation,” such as the value of y in the function y = 1/x. As x approaches zero, “y exceeds any possible finite limit (approaches infinity).” Astrophysicists also use the term to refer to the infinite density of a black hole.
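To make the blow-up concrete, here is a minimal Python sketch (the sample points are my own choice) evaluating y = 1/x as x shrinks toward zero:

```python
# y = 1/x grows without bound as x approaches zero from the right --
# the mathematician's sense of a "singularity".
for x in [1.0, 0.1, 0.001, 0.000001]:
    print(f"x = {x:<8}  y = 1/x = {1 / x:,.0f}")
```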
In Artificial Intelligence (AI), the Singularity refers to an impending event generated by entities with greater than human intelligence. From Kurzweil’s perspective, “the Singularity has many faces. It represents the nearly vertical phase of exponential growth that occurs when the rate is so extreme that technology appears to be expanding at infinite speed. . . We will become vastly smarter as we merge with our technology.”
And by “merge” he means (from The Singularity is Near):
Biology has inherent limitations. For example, every living organism must be built from proteins that are folded from one-dimensional strings of amino acids. Protein-based mechanisms are lacking in strength and speed. We will be able to reengineer all of the organs and systems in our biological bodies and brains to be vastly more capable.
The Singularity, in other words, involves Intelligence Amplification (IA) in humans. We will, on a voluntary basis, become infused with nanobots: “robots designed at the molecular level, measured in microns.” Nanobots will play multiple roles within the body, including maintaining health and vastly extending human intelligence.
Once nonbiological intelligence gets a foothold in the human brain (this has already started with computerized neural implants), the machine intelligence in our brains will grow exponentially (as it has been doing all along), at least doubling in power each year. In contrast, biological intelligence is effectively of fixed capacity.
As molecular nanotechnology involves the manipulation of matter on atomic or molecular levels, it will be possible to infuse everything on planet earth with nonbiological intelligence. Potentially, the whole universe could be saturated with intelligence.
What will the post-Singularity world look like?
Most of the intelligence of our civilization will ultimately be nonbiological. By the end of this century, it will be trillions of trillions of times more powerful than human intelligence. However, to address often-expressed concerns, this does not imply the end of biological intelligence, even if it is thrown from its perch of evolutionary superiority. Even the nonbiological forms will be derived from biological design. Our civilization will remain human — indeed, in many ways it will be more exemplary of what we regard as human than it is today . . .
The trend tells the story
Kurzweil builds his case on historical trends, as we see in these charts:
Both charts show the same progression, but on different scales. Life arrives roughly 3.7 billion years ago in the form of biogenic graphite, followed by the appearance of cells two billion years later. From there both biological evolution and human technology pick up speed. On the linear plot, everything seems to happen in one day. Though 14 years passed between the introduction of the personal computer and the World Wide Web (from the MITS Altair 8800 in 1975 to Tim Berners-Lee’s proposal in March 1989), on the overall timeline it happened almost instantaneously. The second chart lays it out for us dramatically.
Exponential trends are seductive, he says: until we get far enough along the curve, they look linear. Only past the “knee” does the trend start to become clear. Or it should.
Mother Jones ran an article a year ago that illustrates how deceptive exponential trends can be. Imagine if Lake Michigan were drained in 1940, and your task was to fill it by doubling the amount of water you add every 18 months, beginning with one ounce. So, after 18 months you add two ounces, 18 months later you add four ounces, and so on. Coincidentally, as you were adding your first ounce to the dry lake, the first programmable computer made its debut.
You continue. By 1960 you’ve added 150 gallons. By 1970, 16,000 gallons. You’re getting nowhere. Even if you stay with it to 2010, all you can see is a bit of water here and there. In the 47 18-month periods that have passed since 1940, you’ve added about 140.7 trillion ounces of water. You’ve done a lot of work but made almost no progress. You break out a calculator and find that you need 144 quadrillion more ounces to fill the lake.
You’ll never finish, right? Wrong. You keep filling it as you always have, doubling the amount you add every 18 months, and by 2025 the lake is full.
In the first 70 years, almost nothing. Then 15 years later the job is finished.
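The arithmetic is easy to verify. A short Python sketch (using a lake capacity of about 1.44 × 10^17 fluid ounces, taken from the figures above) runs the doubling schedule to completion:

```python
# Fill a drained Lake Michigan starting with 1 oz in 1940,
# doubling the amount added every 18 months.
LAKE_CAPACITY_OZ = 1.44e17   # ~144 quadrillion oz, per the figures above

total = 0    # ounces in the lake so far
period = 0   # number of 18-month periods completed
while total < LAKE_CAPACITY_OZ:
    total += 2 ** period     # 1 oz, then 2, then 4, ...
    period += 1

print(f"Full after {period} periods, around {int(1940 + period * 1.5)}")
# -> Full after 57 periods, around 2025
```

After 47 periods the running total is only about 1.4 × 10^14 ounces, matching the “almost no progress” stage of the story; ten more doublings finish the job.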
Lake Michigan was chosen because its capacity in fluid ounces is roughly equal to the computing power of the human brain measured in calculations per second. Eighteen months served as the time interval because it corresponds to Moore’s Law (Intel’s David House modified Moore’s 2-year estimate in the 1970s, saying computer performance would double every 18 months; as of 2003, it was doubling every 20 months). As Kurzweil notes,
We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence . . . .
The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though.
Even as AI progresses, the achievements are often discounted. In The Age of Intelligent Machines (1990) Kurzweil predicted a computer would beat the world chess champion by 1998. While musing about this prediction in January, 2011 he said, “I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess. [IBM’s] Deep Blue defeated Garry Kasparov in 1997, and indeed we were immediately treated to rationalizations that chess was not really exemplary of human thinking after all.”
What was missing? The ability to handle the “subtleties and unpredictable complexities of human language.” Computers, the skeptics said, could never do this; such skills were forever unique to humans.
Men in Jeopardy!
Then along came Watson.
The victory of the Watson Supercomputer over two Jeopardy! champions is one small step for IBM, one giant leap for computerkind, [Kurzweil proclaimed].
Watson had a three-day match with the champions in February 2011. In a warm-up match, one of the categories was rhymes. The host read the clue to the contestants: “A long tiresome speech given by a frothy pie topping.” Watson quickly replied, “What is a meringue harangue?” The humans didn’t get it.
How did Watson acquire such encyclopedic knowledge? Did IBM engineers hand-feed it information? No. Like a person, Watson read voluminously. Unlike a person, it read some 200 million pages of content, including the full text of Wikipedia.
But there’s more. According to IBM, “Through repeated use, Watson literally gets smarter by tracking feedback from its users and learning from both successes and failures.” [Emphasis added] IBM also claims “Watson's servers can handle processing 500 gigabytes of information a second, the equivalent of 1 million books, with its shared computer memory” that totals 8 terabytes.
At Google, Kurzweil’s ambition is to do more than train a computer to read Wikipedia.
We want [computers] to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.
When Kurzweil says “everything on the web,” he means everything — including “every email you've ever written, every document, every idle thought you've ever tapped into a search-engine box.”
Some will find comfort at this point contemplating the beauty and majesty of nature. Perhaps they will find inspiration in trees. K. Eric Drexler has been inspired by trees and pays tribute to them in Unbounding the Future:
[Trees] gather solar energy using molecular electronic devices, the photosynthetic reaction centers of chloroplasts. They use that energy to drive molecular machines—active devices with moving parts of precise, molecular structure—which process carbon dioxide and water into oxygen and molecular building blocks. They use other molecular machines to join these molecular building blocks to form roots, trunks, branches, twigs, solar collectors, and more molecular machinery. Every tree makes leaves, and each leaf is more sophisticated than a spacecraft, more finely patterned than the latest chip from Silicon Valley. They do all this without noise, heat, toxic fumes, or human labor, and they consume pollutants as they go. Viewed this way, trees are high technology. Chips and rockets aren't.
Trees give a hint of what molecular nanotechnology will be like.
And molecular technology gives a hint of what our future will be like.