Scientists exploring computers that can learn and adapt

For six or seven decades, computers have been based on processing electronic 1s and 0s.

Now, computer scientists are breaking out of that paradigm in strange new directions, as they seek new ways to tackle problems that digital computers can’t easily solve.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":877182,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"big-data,business,mobile,","session":"D"}']

One approach is quantum computing, in which the computer takes advantage of the ambiguous, “quantum” states of matter. After years of being mostly hypothetical, quantum computing is edging into the real world: One startup, D-Wave, is building quantum computers that, it hopes, can explore many possible solutions simultaneously, speeding up computation by many orders of magnitude and saving energy in the process. That could be useful for cryptography, among other applications.
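
To get a concrete feel for what a “quantum” state is, here is a minimal classical sketch of a single qubit in Python. It is purely illustrative and says nothing about how D-Wave’s hardware actually works: a Hadamard gate puts the qubit into an equal superposition, the property that lets a quantum machine represent many possibilities at once.

```python
import numpy as np

# Toy classical simulation of one qubit as a two-entry state vector
# over the basis states |0> and |1>. (Illustrative only; D-Wave's
# machines work very differently.)
ket0 = np.array([1.0, 0.0])                   # qubit prepared in |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                    # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2  # Born rule: odds of each outcome

print(state)          # [0.7071..., 0.7071...]
print(probabilities)  # [0.5, 0.5]: equal chance of reading 0 or 1
```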

Another approach is neuromorphic processing, in which circuits are wired together in a manner similar to the way neurons in the human brain connect to each other. As a neuromorphic processor evaluates a problem, it weights connections based on the results of its analysis, enabling the whole complex to “learn” in the same way human brains learn. This technology, and the Stanford researchers exploring it, is the subject of a New York Times story today.
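
The flavor of that kind of learning can be sketched in a few lines of code. The snippet below uses a textbook Hebbian rule, chosen here for illustration and not taken from the Stanford group’s actual circuit design: connections between units that activate together are strengthened, so the network’s wiring comes to reflect correlations in what it has seen.

```python
import numpy as np

# Illustrative Hebbian-style learning (a common textbook model, not
# the Stanford researchers' design): links between co-active units
# are strengthened over time.
rng = np.random.default_rng(0)

n_units = 8
weights = np.zeros((n_units, n_units))  # connection strengths
learning_rate = 0.01

for _ in range(1000):
    activity = rng.random(n_units)  # stand-in for neuron firing levels
    # Hebb's rule: strengthen the connection between co-active units.
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # no self-connections

print(weights.round(2))
```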

The promise is that computers will some day be able to use this technology to get better at tasks like speech and image recognition, which humans excel at but which computers, until recently, have handled poorly.

Computer scientists have worked with “neural networks,” which are written in software and run on traditional silicon chips, for decades. The technology has advanced in recent years, and last year a Google neural network scanned a database of 10 million images and taught itself to recognize cats.
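
For contrast with the hardware approach, here is a minimal sketch of that kind of software neural network: two layers of weights trained by backpropagation to learn the XOR function. It only shows the basic mechanics; the Google system was vastly larger.

```python
import numpy as np

# A tiny software neural network of the kind that runs on ordinary
# silicon chips: two weight layers trained by backpropagation on XOR.
rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)       # forward pass
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the error and nudge every weight.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(output.round(2))  # converges toward [[0], [1], [1], [0]]
```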

Neuromorphic processors take a similar approach, but move a level down into the wiring of the computer itself, which has some promise for more efficient learning algorithms.

Neuromorphic processors still use silicon chips, and the technology is not yet advanced enough to replace traditional CPUs. But it shows some promise for augmenting traditional chips in situations where adaptability, error tolerance, and low power are priorities.

Another area where machine learning has tremendous relevance: “Big data,” or the emerging field of finding patterns among enormous and often heterogeneous data sets.

Students seem alert to the possibilities of machine learning. According to the Times, the most popular Stanford class this past fall was a graduate machine-learning class that attracted 760 students.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":877182,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"big-data,business,mobile,","session":"D"}']

“Everyone knows there is something big happening, and they’re trying to find out what it is,” the Times quotes computational neuroscientist Terry Sejnowski as saying.