
Intel unveils next-generation Xeon Phi chips for A.I.

Diane Bryant, executive vice president of Intel's data center group.

Image Credit: Dean Takahashi

Silicon Valley is full of chatter about artificial intelligence, deep learning neural networks, and machine learning. And Intel, the world’s biggest chip maker, is becoming a lot more conversant in that chatter today.

Intel executive Diane Bryant announced today that the company is working on a next-generation version of its high-end server chip, the Xeon Phi, for A.I. applications.


Baidu will use the upcoming Xeon Phi chips in the data centers it is building for its Deep Speech platform, where its networks will be able to parse natural language speech as quickly and accurately as possible.

By 2020, there will be more servers handling data analytics than any other workload, Bryant said.


Intel’s chips have long been speedy number crunchers. But in recent years, Nvidia’s graphics chips have become far more useful in servers dedicated to neural networks, which can process unstructured data such as video or speech and recognize patterns more easily.

To respond, Intel has started focusing more resources on central processing units (CPUs) that can handle more deep learning tasks. And Intel is betting that an improved CPU, and Xeon Phi in particular, is the answer. The new chips, code-named Knights Mill, will arrive in 2017.

Above: Diane Bryant of Intel with Jing Wang of Baidu.

Intel also acquired Nervana, a San Diego, California-based deep learning startup, for more than $350 million last week. That team will help Intel on multiple levels with deep-learning cloud applications and a development framework. Jason Waxman, corporate vice president for cloud computing at Intel, said in an interview with VentureBeat that the Nervana team will be broadly useful for Intel’s A.I. efforts.

Intel argues that its Xeon Phi chips will run at “comparable levels of performance” to Nvidia’s graphics processing units. Of course, Nvidia begs to differ, and it said so in a blog post yesterday.

Bryant said that CPU-based processing delivers a big performance improvement because the processor can access memory much faster, which matters more and more as the size of the task scales up.

Intel is also partnering with the National Energy Research Scientific Computing Center to optimize machine learning at huge scales.


Jing Wang, senior vice president at Baidu, said, “The next era is the era of artificial intelligence. It is technology that changes people’s lives.” He added that Baidu is very excited about using A.I. for speech and natural language processing.

Above: The new Intel Xeon Phi

Image Credit: Intel

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.