Intel microservers

VB: There has been concern that this category might cannibalize the more expensive server chips. That percentage suggests otherwise, though; it's not really happening.

Waxman: You never know what the volume ramp looks like. Some people think it's a hockey stick and some people paint it as a flat line. From a strategy perspective, because we don't know, we decided it's important for us to lead. When customers want something, we want to make sure we're delivering the best product.

Part of the reason that we came to market with the Atom processor C2000 was because customers in that space told us very clearly what they wanted. They wanted 64-bit capability. They wanted competitive performance. They wanted energy efficiency. They wanted all the cloud data center class features, like ECC and virtualization technology. And so the product that we developed wasn't so much predicated on whether the market was one or three or five percent, but more about making sure that if there is an emerging segment here, we have a leadership-class product. We're proud of the fact that we've delivered our second-generation 64-bit Atom SoC, which is best in class for these workloads, before competitors have come out with their first generation.
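As a rough illustration of two of the features in that list, the sketch below reads the CPU flags Linux exposes in /proc/cpuinfo: the lm flag corresponds to 64-bit long mode and vmx to Intel's virtualization technology. This is only a generic check, not an Intel tool from the interview, and ECC support isn't visible this way.

```python
# Minimal sketch (assumed, not an Intel tool): check /proc/cpuinfo on Linux for
# the CPU flags behind two of the features mentioned above -- "lm" (long mode,
# i.e. 64-bit capability) and "vmx" (Intel virtualization technology, VT-x).
# ECC support is a platform/memory feature and is not reported here.

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of feature flags reported for the first CPU."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    print("64-bit (lm):", "lm" in flags)
    print("VT-x (vmx):", "vmx" in flags)
```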


VB: ARM is active in the space as well. How do you look at that competitive threat?

Waxman: We always take the competition very seriously. The reality is that it's not ARM, but the cadre of different SoC vendors that are looking at their own approach to the market. Our goal is to make sure that we're providing the best products and derivatives to address the span of the market. Customers, whether they're looking at microservers or cold storage or networking, buy based on their total cost of ownership. What's going to give them the best solution? That's a combination of performance, power, density, and the features that matter.

We feel confident that what we're delivering is not only best in class for those segments, but we're doing it in a way that allows people to retain their software investment. That's important. Software compatibility has been part of Intel's value since the early days of x86. It's part of what's led us to success in the server market. We want to deliver that leadership product while maintaining the user's investment.

Above: Server farm with cooling (Image Credit: Dean Takahashi)

VB: These guys have not really executed so far. They seem to be putting things off until next year almost every year. The competition seems to be moving slowly in this sector, even though it’s been talked about a lot.

Waxman: That's one of the things that's challenging about the server market, particularly when you look at some of the growth segments in the data center. If you look at cloud computing or high-performance computing, customers are telling us that they still want to see a fast performance growth rate. They want to see repeatable improvements.

VB: What sort of view does this look like to you? It seems like there’s a lot of competitors battling for what is now a very small piece of the market.

Waxman: It’s hard to contest that view. [chuckles]

VB: AMD announced their deal with Verizon recently. They said that their Opteron-based microserver chips are going to be getting into Verizon's cloud infrastructure. I think you guys contend that a lot of that cloud infrastructure – especially with a lot of the SeaMicro installations – is already based on Intel.

Waxman: I always like to let the customers disclose what architecture they’re using. It’s their call. But I do think Andrew did seem to acknowledge that a sizable portion was based on Intel architecture.

VB: What about the highest end of the market? How would you describe the dynamics there?

Waxman: Talking about data centers, there are two trends that are important. One is, again, a continued demand for more performance and capability. The rationale is simple. A lot of hyperscale data centers, whether cloud or high-performance computing, are looking at the sheer number of systems they want to deploy. If, as an example, you could get 10 or 15 percent more performance, that might not seem like much, but in the context of 100,000 servers, avoiding the overhead of 10,000 to 15,000 servers is a sizable amount. We continue to get requests from customers to find new ways to get even more performance. One of the things we've talked about a bit is that we're getting more requests for customization and optimization by customers for their particular workload so they can eke a little more value out of our servers.
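To make that back-of-the-envelope math explicit, here is a small sketch using the illustrative figures from the quote (100,000 servers, a 10 to 15 percent per-server performance gain); these are the conversation's hypotheticals, not measured data.

```python
# Fleet-level effect of a per-server performance gain, using the illustrative
# figures from the quote (100,000 servers, 10-15% more performance per server).

fleet = 100_000

for gain in (0.10, 0.15):
    rough_savings = fleet * gain                 # the quick "10-15% of 100k" framing
    exact_savings = fleet - fleet / (1 + gain)   # servers avoided at equal total throughput
    print(f"{gain:.0%} faster: roughly {rough_savings:,.0f} servers avoided "
          f"({exact_savings:,.0f} at strictly equal throughput)")
```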

Splitting the enterprise market from the scale-out market, one of the things that differentiates the high end is the desire for reliability features and things such as shared memory. We have the E7 product lines for large mission-critical databases, with higher levels of reliability and error-correcting circuitry. Other segments of the market don't necessarily value the shared memory; they just want more performance.

Above: Server farm (Image Credit: Intel)

VB: What do you think about GPU computing and how it’s making its way into data centers?

Waxman: We talk about this a lot. Every time we look at an innovation in our silicon, if it requires software modification, there's always this escape velocity that's required. Meaning that the general-purpose Xeon covers so much and can do so much that it's hard to come up with something that's differentiated for a particular segment. That's the borderline where GPUs tend to reside. They're difficult, in many cases, for people to program. They need to make investments in it. By the time they tap that potential, they could get almost equivalent performance out of the Moore's Law gains we deliver with the next generation of Xeon processors. There's a constant treadmill that is difficult to escape from.
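A toy model of that treadmill argument, with purely hypothetical numbers (an assumed one-time GPU speedup after porting, an assumed per-generation Xeon gain, and an assumed porting time), just to show the shape of the comparison:

```python
# Toy model of the "treadmill" argument. All numbers are hypothetical and only
# illustrate the shape of the comparison, not any measured Intel or GPU data.

gpu_speedup = 3.0         # assumed one-time speedup after porting to the GPU
cpu_gain_per_gen = 1.35   # assumed per-generation gain on the general-purpose CPU
porting_time_gens = 2     # assumed CPU generations that pass while porting/tuning

cpu_speedup_by_then = cpu_gain_per_gen ** porting_time_gens
print(f"GPU after porting effort: {gpu_speedup:.2f}x")
print(f"CPU over the same period, no code changes: {cpu_speedup_by_then:.2f}x")
```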

VB: If you’re looking out a few years, what are the trends that you see and that you’ll have to adapt to over time?

Waxman: I think there are a number of them. One is, we'll continue to see hyperscale and large-scale computing become a bigger portion of the business. For us, we have to think about how our technologies deploy at scale, and how we're allowing users with these large application-centric data centers to optimize and get the most capability out of them.

Another trend that I see is just the sheer desire to simplify and make each of the platforms more programmable. If you think about compute and network and storage, each of them has largely been vertical stacks at various points in time. Now servers have migrated to a horizontal solution.

I think people are looking to bring that same type of mentality into networking, as one example. That's part of the reason we see the desire for software-defined networking and network function virtualization. With network function virtualization in particular, where you had special-purpose hardware, you'd now like to be able to run some of those applications and workloads on a standard Intel-based platform.

The third one is around big data. I'm a believer that, as much hype and growth as there has been around cloud, the potential for big data is substantially larger. It goes back to the economic drivers. If you look at simplifying IT – say, a half-trillion-dollar-per-year industry – or at solving major problems in health care or manufacturing or government, or opportunities for more optimized marketing and supply chains, you're talking about trillions of dollars' worth of economic value. It's going to be a big growth driver, particularly as we connect the internet of things to the analytics in the data center.