
Our iPhones will soon be more intelligent than we are

Image Credit: Bruce Rolff / Shutterstock

Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. He also predicted that Moore’s Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years — until 2025 — giving way then to new paradigms of technological change.
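
To get a feel for what that rate implies, here is a minimal sketch of the arithmetic; the 1.5-year doubling period is the prediction's own premise, not a measured constant:

```python
# Back-of-the-envelope arithmetic for an 18-month doubling period.
# Over the 60-year span the prediction covers, capability doubles
# 40 times, a roughly trillion-fold improvement.

DOUBLING_PERIOD_YEARS = 1.5  # Moore's Law as stated in the article

def improvement_factor(years: float) -> float:
    """Cumulative performance multiple after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(f"Doublings in 60 years: {60 / DOUBLING_PERIOD_YEARS:.0f}")  # 40
print(f"Improvement factor:    {improvement_factor(60):.2e}")      # 1.10e+12
```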

Kurzweil, a renowned futurist and the director of engineering at Google, told me a few days ago that the hardware needed to emulate the human brain may be ready even sooner than he predicted — in around 2020 — using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling. If Kurzweil’s right, then within 14 years, the smartphones in our pockets will be as intelligent as we are. In fact, if we focus just on computational intelligence (leaving aside emotional and other forms of intelligence), our phones will match us in just seven years — about when the iPhone 11 is likely to be released. It doesn’t stop there, though. These devices will continue to advance, exponentially, until they exceed the combined intelligence of the human race. Already, our computers have a big advantage over us: They are connected via the Internet and share information with each other billions of times faster than we can. It is hard to even imagine what becomes possible with these advances and what the implications are.
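
To put "exceed the combined intelligence of the human race" into rough numbers, here is a minimal sketch; the population figure and the simplistic assumption that human intelligence adds up linearly are mine, not the article's:

```python
import math

# Once a device matches one human brain, how long until steady
# 18-month doubling carries it past all of humanity combined?
# "Combined intelligence" is crudely modeled as population x one brain.

DOUBLING_PERIOD_YEARS = 1.5
WORLD_POPULATION = 7.3e9  # assumed mid-2010s figure

doublings_needed = math.log2(WORLD_POPULATION)
years_needed = doublings_needed * DOUBLING_PERIOD_YEARS

print(f"Doublings needed: {doublings_needed:.1f}")                 # ~32.8
print(f"Years at one doubling per 18 months: {years_needed:.0f}")  # ~49
```

Under those assumptions, a device that reaches single-brain parity would pass the whole species only about five decades later.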

Doubts about the longevity of Moore's Law and the practicability of these advances are understandable. There are limits, after all, to how far transistors can be shrunk: nothing can be smaller than an atom. Even short of this physical limit, there will be many other technological hurdles. Intel acknowledges these limits but suggests that Moore's Law can keep going for another five to 10 years. So the silicon-based chips in our laptops will likely sputter their way to matching the power of a human brain.

In our recent discussion, Kurzweil said Moore's Law isn't the be-all and end-all of computing and that the advances will continue regardless of what Intel can do with silicon. Moore's Law is just the latest of five computing paradigms: electromechanical, relay, vacuum tube, discrete transistor, and integrated circuit. In his 1999 "Law of Accelerating Returns," Kurzweil explained that technology has been advancing exponentially since the advent of evolution on Earth and that computing power has been rising exponentially: from the mechanical calculating devices used in the 1890 U.S. Census, through the machines that cracked the Nazi Enigma code, the CBS vacuum-tube computer, and the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

With exponentially advancing technologies, things move very slowly at first and then advance dramatically. Each new technology advances along an S-curve — an exponential beginning, flattening out as the technology reaches its limits. As one technology ends, the next paradigm takes over. That is what has been happening, and why there will be new computing paradigms after Moore’s Law.
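
Here is a minimal sketch of that picture; the logistic parameters are purely illustrative, chosen only to show the shape:

```python
import math

# Stacked S-curves: each paradigm follows a logistic curve that
# flattens at its ceiling, and the next paradigm picks up with a
# ceiling 100x higher, so the overall envelope stays exponential.

def logistic(t: float, ceiling: float, midpoint: float, rate: float = 1.5) -> float:
    """One paradigm's capability over time: slow start, rapid rise, plateau."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def capability(t: float) -> float:
    """Three successive paradigms summed together."""
    return sum(logistic(t, ceiling=100 ** i, midpoint=8 * i) for i in range(1, 4))

for t in range(0, 25, 8):
    print(f"t={t:2d}  capability={capability(t):10.1f}")
# Roughly 100x every 8 time steps: ~0, ~50, ~5.1e3, ~5.1e5
```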

Already, there are significant advances on the horizon, such as the GPU, which uses parallel computing to deliver massive increases in performance, not only for graphics but also for the neural networks that mimic the architecture of the human brain. There are 3D chips in development that can pack circuits in layers. IBM and the Defense Advanced Research Projects Agency are developing cognitive-computing chips. And new materials, such as gallium arsenide, carbon nanotubes, and graphene, are showing huge promise as replacements for silicon.
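
As a minimal sketch of why neural networks suit GPUs so well (the layer sizes below are arbitrary placeholders): a layer's forward pass is essentially one large matrix multiplication, and every output element can be computed independently, which is exactly the workload GPUs parallelize:

```python
import numpy as np

# A dense neural-network layer is one big matrix multiply.
# Each of the 64 x 512 outputs is an independent dot product,
# so a GPU can compute thousands of them simultaneously.

rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 1024))    # batch of 64 examples
weights = rng.standard_normal((1024, 512))  # one dense layer

activations = np.maximum(inputs @ weights, 0.0)  # ReLU nonlinearity
print(activations.shape)  # (64, 512)
```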

And then there is the most interesting, and scariest, technology of all: quantum computing. Instead of encoding information as either a zero or a one, as today's computers do, quantum computers will use quantum bits, or qubits, whose states encode an entire range of possibilities by capitalizing on the quantum phenomena of superposition and entanglement. Computations that would take today's computers thousands of years will take minutes on these machines.
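
Here is a toy sketch of those two phenomena using ordinary linear algebra; it is a classical simulation, so it shows the bookkeeping rather than the speedup:

```python
import numpy as np

# n classical bits hold one of 2**n states; n qubits are described by
# 2**n complex amplitudes at once. Two gates turn |00> into the
# entangled Bell state (|00> + |11>) / sqrt(2).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])               # flips qubit 2 if qubit 1 is set

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, np.eye(2)) @ state          # superpose the first qubit
state = CNOT @ state                           # entangle it with the second
print(state.real)  # [0.707 0. 0. 0.707] -> weight on |00> and |11> only
```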

Add artificial intelligence to the advances in hardware, and you begin to realize why luminaries such as Elon Musk, Stephen Hawking, and Bill Gates are worried about the creation of a "super intelligence." Musk fears that "we are summoning the demon." Hawking says it "could spell the end of the human race." And Gates wrote: "I don't understand why some people are not concerned."

Kurzweil tells me he is not worried. He believes we will create a benevolent intelligence and use it to enhance ourselves. He sees technology as a double-edged sword, just like fire, which has kept us warm but has also burned down our villages. He believes that technology will enable us to address the problems that have long plagued human civilization — such as disease, hunger, energy, education, and clean water — and that we can use it for good.

These advances in technology are a near certainty. The question is whether humanity will rise to the occasion and use them in a beneficial way. We can either build a Star Trek future, in which our civilization rises to new heights, or descend into a Mad Max world. It is up to us.

Vivek Wadhwa is a fellow at the Rock Center for Corporate Governance at Stanford University, director of research at the Center for Entrepreneurship and Research Commercialization at Duke University, and a distinguished fellow at Singularity University. His past appointments include Harvard Law School, the University of California, Berkeley, and Emory University.
