Prominent investors have recently poured money into deep learning, raising hopes that we’ll see AI products with human-like performance in the near future.
For example, Dharmendra Modha, who leads IBM’s SyNAPSE “neuromorphic” chip project, claims that IBM “will deliver the computer equivalent of the human brain” by 2018. I have heard echoes of this claim in statements from virtually all recently funded AI and deep learning companies.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":1568716,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"D"}']The press accepts these claims with the same gullibility it displayed during Apple’s Siri launch, hailing the arrival of “brain like” computing as a fait accompli. I believe this is very far from the truth. While we’ll see some progress in speech processing and image recognition, it will not be sufficient to justify the lofty valuations of recent funding events.
The investments are real. Google hired Geoffrey Hinton’s University of Toronto team as well as Ray Kurzweil, whose primary motivation for joining Google Brain seems to be the opportunity to upload his brain into the vast Google supercomputer. Baidu invested $300 million in a deep learning lab led by Andrew Ng of Stanford University’s Artificial Intelligence Lab. Facebook’s Mark Zuckerberg participated in a $40 million round in AI company Vicarious and hired Yann LeCun, the “other” deep learning guru. Samsung and Intel invested in Expect Labs and Reactor Labs. And Qualcomm made a sizable investment in BrainCorp.
Billionaire venture capitalist Peter Thiel’s assertion that “Vicarious is bringing us closer to a future where computers perceive, imagine, and reason” is a good example of the excitement many VCs are caught up in, an excitement that is little more than wishful thinking.
While my background is in AI, I’ve worked closely for the last few years with the preeminent neuroscientist Walter Freeman at Berkeley. During this time, I’ve come to the conclusion that the assumptions most people working in AI hold about the human brain are wrong. “The brain is a CPU” is the standard metaphor of all AI and cognitive science companies, including the deep learning guys.
They all work with symbols (numbers, letters, pictures, time stamps, etc.), and they assume that “spiking neurons” “code” these symbols via a pulse train of action potentials. But that’s wrong. Neurons do not “send messages” to each other; they resonate as a population and thus form large-scale, continuous, amplitude-modulated (AM) electric fields that can be described by chaotic attractors. There is energy transfer in the brain but no information transfer. This is not a simple picture, but the empirical evidence from neuroscience experiments is overwhelming. AI guys, who are all Cartesian dualists, do not think they need to study this evidence. Just as Noam Chomsky concluded that there must be a language organ in the brain but did not think he needed to wait for neuroscience to find it, AI guys cannot even conceive that the brain could be anything other than what they know: a computer.
Here are just three points that support the argument that the human brain cannot be replicated by symbol-based technologies:
1. Every single innovation in the evolution of vertebrate brains was driven by advances in organism locomotion, and there is no evidence of symbol processing in any part of the brain. It may surprise AI experts that all sensory cells, including those for vision and hearing, evolved from motor cells (cilia and flagella, which are also the predecessors of muscles).
2. Human intelligence is a product of the interaction of massive populations of neurons, synapses, and ion channels in the cortex, resulting in dynamic, continuous, amplitude-modulated waves in the gamma and beta ranges, not just static point-to-point neural networks. Action potentials and Hebbian learning are not the whole story by any means; rather, they are a gross oversimplification of how the brain works.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":1568716,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"D"}']
3. Human memories are formed in the hippocampus via the “phase precession” of theta waves, which transforms events in time directly into a spatial domain, without the need to “time stamp” them with numerical symbols. So-called “place cells” do not form cognitive maps (i.e., representations of the world) but multimodal stories of an individual’s experience.
Each of these three empirical findings invalidates AI’s symbolic, computational approach. I could provide more, but it is hard to fight prevalent cultural myths.
The industry seems determined to believe we’re fast approaching brain-like computing. NBC introduced Vicarious’ Dileep George as a “man who builds thinking computers.” CNET describes IBM’s SyNAPSE as a chip that “mimics [the] human brain.” Press references to “brain-like” computing are everywhere. Vicarious’ website states, “We are building a unified algorithmic architecture to achieve human-level intelligence in vision, language, and motor control.” Expect Labs’ Timothy Tuttle puts the date for human-like computers (conservatively, compared to IBM) at 6 to 10 years from now.
Why would smart VCs like the ones named above believe these claims? The answer is that they, just like Ray Kurzweil, really want to believe. I love metaphors myself, but the “brain-like computing” metaphor will cost these VCs a lot of money.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":1568716,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"D"}']
At the beginning of the movie Transcendence, Johnny Depp’s character, an AI researcher (from Berkeley), makes the bold claim that “just one AI will be smarter than the entire population of humans that ever lived on earth.” By my calculation he’s off by almost 20 orders of magnitude: one brain is roughly 10 orders of magnitude more complex than one computer, and a conservative estimate puts the number of people who have ever lived at 10 billion, another 10 orders of magnitude. So it will take more than a few years to bridge this gap. Unlike Johnny Depp, who pocketed $20 million for his role in Transcendence, individual investors who have succumbed to the siren call of deep learning and AI companies may not do as well.
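To spell out that back-of-envelope arithmetic: the two factors multiply, so their exponents add. Here is a minimal sketch using the round numbers above (both are my assumptions, not measured quantities):

```python
# Back-of-envelope sketch: the two factors multiply, so their exponents add.
# Both numbers are rough assumptions from the estimate above, not measurements.
brain_vs_computer_exp = 10   # one brain assumed ~10 orders of magnitude more complex than one computer
population_exp = 10          # ~10 billion (10^10) people assumed to have ever lived

total_gap_exp = brain_vs_computer_exp + population_exp
print(f"claimed AI vs. all humans who ever lived: off by ~10^{total_gap_exp}")  # ~20 orders of magnitude
```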
“Artificially intelligent” computers will certainly not appear in the next 20 to 30 years. That does not mean, however, that the future of mobile and wearable technology will not be exciting. In fact, I believe we will see wearable innovations exceeding even our wildest dreams.
Roman Ormandy is founder of Embody Corp., a startup developing software for embodied, mobile personal assistants based on multi-modal sensor streams. He previously founded Caligari (trueSpace) and sold it to Microsoft in 2008. You can reach him at roman@embodycorp.com.