Artificial intelligence (A.I.) dates back to the 1950s, when the term was coined. But the way Intel sees it, the field is not old — it represents a growth opportunity for the chipmaker.

Last week, Intel made a bold move and acquired Nervana, one of the preeminent startups in deep learning, a type of A.I. that involves training artificial neural networks on data and then getting them to make inferences on new data. Today, when Intel announced a new generation of Xeon Phi server chips, the emphasis was on their ability to handle A.I. workloads. In previous years, Xeon Phi was geared toward the high-performance computing market.
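For readers new to the distinction: the split between training and inference is the heart of the workload question running through this story. Here is a toy sketch of both phases in Python, using a single linear neuron fit by gradient descent. It is far simpler than the multi-layer networks Nervana targets, but it has the same two-step shape.

```python
# Toy illustration of the two phases of deep learning work:
# training (fit weights to data) and inference (apply the fitted
# weights to data the model has never seen). A single neuron, not
# anything resembling a production network.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: label is 1 when the features sum to a positive number.
X_train = rng.normal(size=(200, 3))
y_train = (X_train.sum(axis=1) > 0).astype(float)

w = np.zeros(3)  # weights to be learned
b = 0.0          # bias to be learned
lr = 0.1         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly nudge the weights to reduce prediction error.
for _ in range(500):
    p = sigmoid(X_train @ w + b)
    grad_w = X_train.T @ (p - y_train) / len(y_train)
    grad_b = (p - y_train).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# Inference: the trained model makes predictions on brand-new data.
X_new = rng.normal(size=(5, 3))
print(sigmoid(X_new @ w + b) > 0.5)
```

Training is the compute-hungry loop; inference is a single cheap pass. That asymmetry is why the two phases can land on different hardware, which is exactly the terrain Intel, Nvidia, and Google are fighting over.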

Clearly, the company’s interest in the area is on the upswing, following a considerable A.I. push from graphics card maker Nvidia.

“The world of A.I. is still rather nascent,” Diane Bryant, executive vice president and general manager of Intel’s Data Center Group, told VentureBeat in an interview today at the company’s Intel Developer Forum conference in San Francisco. “A lot of exploration, a lot of research, a lot of — well, the academic community is investing heavily. So there’s still a lot of research going on in the world of A.I.”


In other words, Intel thinks it’s not too late for the company to become closely associated with A.I. In the specific realm of deep learning, Nvidia is considered the go-to arms dealer for graphics processing units (GPUs) that can be attached to servers. Web companies like Baidu, Facebook, and Google, as well as cloud providers like Amazon Web Services, IBM SoftLayer, and Microsoft Azure, all rely on Nvidia GPUs.

“I do obviously struggle with what the default architecture is,” Bryant said. She pointed to the results of a study Intel conducted of how servers shipped last year were used for machine learning and deep learning.

Machine learning vs. deep learning

Of all the servers shipped in 2015, she said, 7 percent were handling machine learning or deep learning. Of that 7 percent, 95 percent ran on Intel chips alone, 2.4 percent ran on Intel chips paired with a general-purpose GPU, and 2.6 percent did not use Intel chips at all (think Power or SPARC servers). Just 0.1 percent of the servers shipped in 2015 were doing deep learning model training, and of that 0.1 percent, 73 percent ran on Intel-based servers alone, while the rest ran on Intel-based servers with general-purpose GPUs.

“You are talking about a sliver of a sliver that is actually using a GPU accelerator,” said Bryant, who has been at Intel since 1985 and previously worked as the company’s chief information officer. Of servers doing machine learning or deep learning, “the vast, vast majority of workloads are machine learning. So deep learning comprised 0.1 percent of all servers deployed last year.”
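Taken at face value, the study supports her point. A quick back-of-the-envelope check, assuming the percentages nest the way Bryant describes them:

```python
# Back-of-the-envelope check of Bryant's "sliver of a sliver,"
# assuming the study's percentages nest as she describes them.
ml_share = 0.07          # share of 2015 servers doing machine/deep learning
intel_plus_gpu = 0.024   # of those, share on Intel chips plus a GPU
print(f"{ml_share * intel_plus_gpu:.4%} of all servers")  # ~0.17%

dl_training = 0.001      # share of 2015 servers doing deep learning training
gpu_fraction = 1 - 0.73  # the rest of that 0.1% used GPUs
print(f"{dl_training * gpu_fraction:.4%} of all servers")  # ~0.03%
```

By that math, GPU-accelerated machine learning ran on well under 1 percent of the servers shipped in 2015, and GPU-accelerated deep learning training on a few hundredths of a percent.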

If the data is correct, that would mean Intel has not been left out of the A.I. business at all. It is only in the past five years that researchers have realized it can be economical to train deep learning systems on GPUs, the niche where Nvidia has taken the lead. But maybe that’s OK for Intel.

“To invent and to build deep learning solutions, obviously it’s still small, and … a nascent market, but it’s a market that we believe is going to explode,” Bryant said. And Xeon Phi will be the primary product Intel will promote for that market.

Back to the future

In the 1980s, when plenty of startups were offering A.I. technology to businesses, Intel was in fact developing A.I. products of its own, but they never made it out of the company’s labs, Bryant said. “They simply invented it too early and walked away from it,” she said.

In the 2000s, Intel had a project codenamed Larrabee that was intended to produce discrete, standalone graphics accelerators, the kind that has in more recent years made Nvidia so popular for deep learning. But Larrabee GPUs never saw the light of day. The company changed course: it focused on selling integrated graphics for desktop computers, and later, in 2012, it “renamed Larrabee Xeon Phi,” Bryant said.

But now there’s something new from Google: tensor processing units (TPUs), which offer “advanced acceleration capabilities” for workloads like Google’s TensorFlow deep learning framework, the company has said.

“I think that’s fabulous,” Bryant said. The TPUs are meant to improve the inference stage of deep learning, she said.

As fascinating as it is that Google is now engaged in this activity (likely using TSMC as a fab, she said), TPUs are carefully tuned to specific workloads, unlike the off-the-shelf server chips Intel offers to all types of companies.

“A large cloud provides hundreds of thousands of servers in a data center,” she said. “You want all those servers to look the same. You can’t [say], ‘I can’t load TensorFlow inference [on this server] because it doesn’t have an accelerator; I need to load it here’ (in one area of the data center). Consistency in the data center is very important to them, so our job is to look at what’s being accelerated in TensorFlow and integrate that into Xeon processors.”
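In other words, Intel’s bet is that TensorFlow inference should run acceptably on a plain CPU, so that every server in a fleet can stay identical. As a rough illustration of what CPU-only inference looks like (a toy model standing in for a real trained network, not Intel’s actual optimization work):

```python
# A minimal sketch of CPU-only TensorFlow inference, the workload Bryant
# describes wanting to run identically on every Xeon server in a fleet.
import numpy as np
import tensorflow as tf

# Stand-in model: in practice this would be a trained network loaded
# from disk, not a freshly built toy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Pin execution to the CPU: no GPU or TPU accelerator is assumed, so the
# same code runs unchanged on any commodity server.
with tf.device("/CPU:0"):
    batch = np.random.rand(4, 8).astype("float32")  # fake input data
    predictions = model(batch)

print(predictions.numpy())
```

The accelerator question only arises when code like this is too slow on the CPU, which is precisely the gap Intel says it wants to close inside Xeon itself.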

Meanwhile, Intel will also be integrating the technology it picked up through the Nervana acquisition: the company will push the design through its manufacturing process, get silicon back, and then benchmark it, Bryant said. The deal has not yet closed, she said.

Update on August 18: Clarified statistics from the 2015 server study that Bryant mentioned.
