Above: Server farm side

Image Credit: Intel


VB: How do you convey some of this to regular folks? Paul Otellini (former CEO of Intel) used to say that for every 50 cell phones sold, somebody has to deploy a server. What are some of the things you like to draw out that way?

Waxman: We haven’t quite gotten it down to a science yet, what the ratio is going to be for the internet of things. We’ve seen forecasts of anywhere from 15 billion to 50 billion connected devices that are going to be doing computing. These range from sensors embedded in homes managing energy networks to smarter manufacturing equipment to wearables that might be monitoring people’s vital signs. You think about this huge proliferation of devices that will be computing. They need to be connected. They’re all going to be giving off data. You need a place to store that data and to analyze that data.

It’s too early to tell exactly whether one server for every 200 devices will hold, but there’s little doubt in my mind that once people get a taste of the insights and information coming off of those 15 billion plus devices, it’s going to represent a great driver for the data center business.
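To put rough numbers on that, here is an illustrative back-of-the-envelope calculation using the ratio and device forecasts mentioned above (the result is our arithmetic, not an Intel figure):

```python
# Back-of-the-envelope: implied server demand if a one-server-per-200-devices
# ratio were to hold across the forecast range discussed in the interview.
DEVICES_LOW = 15_000_000_000    # low end of the connected-device forecasts
DEVICES_HIGH = 50_000_000_000   # high end of the forecasts
DEVICES_PER_SERVER = 200        # ratio discussed above

for devices in (DEVICES_LOW, DEVICES_HIGH):
    servers = devices // DEVICES_PER_SERVER
    print(f"{devices:,} devices -> roughly {servers:,} servers")

# 15,000,000,000 devices -> roughly 75,000,000 servers
# 50,000,000,000 devices -> roughly 250,000,000 servers
```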

VB: What’s going to be interesting in the news in the next six months or a year, regarding the server market?

Waxman: The increased focus on network transformation is probably going to be one of the big headlines over the next six months. The need is clearly there to go simplify the networks, particularly as there’s more complexity. And I say “more complexity” meaning that the data centers themselves are getting bigger and need to manage that traffic. You also have companies looking to have their own private clouds combined with public clouds. How they manage their network across different organizational barriers: all of those things are driving transformation.

The two outcomes of that are going to be, first, network function virtualization, where more people are going to want to run those workloads virtualized on general purpose switches with Intel-based control planes rather than on special purpose hardware. Second, how that’s all managed through a software defined network. If the forthcoming announcements we see from all the industry leaders weren’t interesting enough, the announcements around OpenDaylight, VMware’s acquisitions, and what some of the folks in the startup community are doing are going to be even more interesting in the next six months. People will have to figure out the standards, and how software defined networks will plug in with different types of orchestration solutions, whether that be VMware, Microsoft, or OpenStack. There’s a tremendous amount of change. I would expect that we’ll see a lot of news from leading vendors around that change.
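For readers less familiar with network function virtualization, here is a toy sketch of the idea (hypothetical Python, not how any real virtual switch, OpenDaylight, or orchestration stack is implemented): functions that once required special purpose appliances become ordinary software chained together on general purpose hardware.

```python
from typing import Optional

# Toy service chain: each network "function" that might once have been a
# dedicated appliance (firewall, load balancer, NAT) is just software that
# can be deployed, reordered, and scaled on general purpose servers.
def firewall(packet: dict) -> Optional[dict]:
    blocked_ports = {23, 445}
    return None if packet["dst_port"] in blocked_ports else packet

def load_balancer(packet: dict) -> dict:
    backends = ["10.0.0.2", "10.0.0.3"]
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

SERVICE_CHAIN = [firewall, load_balancer]   # the chain is defined in software

def process(packet: dict) -> Optional[dict]:
    for fn in SERVICE_CHAIN:
        packet = fn(packet)
        if packet is None:                  # a function dropped the packet
            return None
    return packet

print(process({"src_ip": "192.168.1.5", "dst_ip": "203.0.113.7", "dst_port": 443}))
```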

Above: Intel servers

VB: What about the notion of where processing is going to happen? Whether that’s local or in the cloud. What’s the latest thinking on that?

Waxman: It will continue to ebb and flow based on connectivity. It’ll test some of the norms of what people think. It’s funny, because we interact with our devices. We think about gesture or voice as something that’s a client side activity. But to drive gesture recognition or voice recognition, a lot of the way it works is that it’s constantly taking samples of something, a picture or a pattern, and comparing that against a huge database. If I say a certain word, how do I know what the best match for that is? It’s hard, at least today, to get the type of database that can provide great speech recognition in a single client device. Things that require huge amounts of compute or breakthrough capabilities may lean more heavily on the data center for compute.
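A rough sketch of that pattern (hypothetical code, not any vendor’s actual recognition pipeline): the device captures a small sample and sends a compact feature vector to the data center, where the heavy matching against a large reference database happens.

```python
import numpy as np

# Hypothetical server-side matcher. The reference database is far too large
# and changes too often to ship to every client device, so the lookup runs
# in the data center.
reference_db = {                      # word -> stored feature vector
    "hello": np.random.rand(128),
    "weather": np.random.rand(128),
}

def best_match(sample_features: np.ndarray) -> str:
    """Return the reference word whose stored features are closest to the sample."""
    return min(reference_db,
               key=lambda word: np.linalg.norm(reference_db[word] - sample_features))

# Client side: record audio, reduce it to a small feature vector, upload it.
# Data center side: compare against the full database and return the match.
uploaded_sample = np.random.rand(128)
print(best_match(uploaded_sample))
```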

On the flip side, there are things that have been traditional data center activities, but due to bandwidth and connectivity, they’re moving more toward the client. Back to big data for a moment, the whole notion behind some of the frameworks being produced is that you move the compute to the data rather than moving the data all the time. If you think about a network of surveillance cameras, you could take all the video and pictures from those cameras and try to consolidate them into the data center. But that becomes pretty costly from a bandwidth and storage perspective. What you’d rather do is have a camera signal when it sees a certain event, and that means the compute goes from the data center out to the edge, the device itself. The device identifies an object or an event and sends the relevant data back to the data center when it’s needed. The traditional view of where compute is being done can change depending on how much data is required. The rule becomes that the compute moves to where the data resides.
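A minimal sketch of that edge pattern (hypothetical function names and thresholds, not an Intel reference design): the camera runs a cheap local check on every frame and only sends data upstream when it detects something worth reporting.

```python
import random
import time

EVENT_THRESHOLD = 0.8  # hypothetical confidence cutoff for "something happened"

def detect_event(frame) -> float:
    """Cheap check that runs on the camera itself.

    Stand-in for a real motion detector or small object classifier;
    here it just returns a random confidence score for illustration.
    """
    return random.random()

def upload_to_datacenter(frame, score: float) -> None:
    """Send only the interesting frame (or its metadata) upstream."""
    print(f"uploading event with confidence {score:.2f}")

def camera_loop(capture_frame, iterations: int = 50) -> None:
    """Compute stays at the edge; only relevant data crosses the network."""
    for _ in range(iterations):
        frame = capture_frame()
        score = detect_event(frame)
        if score >= EVENT_THRESHOLD:
            upload_to_datacenter(frame, score)
        time.sleep(0.1)  # sample ~10 times per second

# Example: a dummy capture function standing in for the camera hardware.
camera_loop(capture_frame=lambda: b"raw-frame-bytes")
```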

VB: The internet of things, does that cause you guys to make some changes? Did you start thinking about it a long time ago, or is it more recent? It reminds me of the transition from PCs to mobile, how long that took, and the different calculations and bets that companies made to get ready for that. I wonder what the internet of things causes you guys to do now that you’re talking about it a lot more this year.

Waxman: You could say that there’s an inflection of a number of different things that come together. Our roots in the space can be traced back at least a decade to some of the things we’ve done for the embedded market. A lot of the intelligence in the internet of things is reserved for the smartest, if you will, of machines. That could be computer numerically controlled factory equipment, as an example.

But like a lot of things, as compute becomes more powerful and cheaper, people find new uses for it. One of the things we’re seeing is that a natural evolution is occurring. Now you can get a 10X level of capability in a certain type of silicon package compared to what you had just three or four years ago. That allows people to embed intelligence in devices where it really wasn’t feasible previously.

While we’ve been continuing to look at applications in retail or manufacturing or health care or energy all along, the thing we’ve noticed is that there’s this huge pyramid of devices. At the base of that pyramid, where there’s a lot of volume, you’ve got what most would consider pretty dumb, rudimentary sensors out there. It opens up an opportunity. How can someone harvest all of those sensors that are embedded in all of these devices and start to add intelligence?

You’ve got the convergence of all these different things. That led us, a couple of years ago, to start looking at how we can produce Intel silicon that hits the right price point, performance, and power to address that new range of capabilities. That’s what led us to announce Quark. You combine that with having a chief executive who knows fabs better than anybody in the industry, and it lends itself to a powerful strategy for how we can make Intel architecture ubiquitous through a new architecture such as Quark.

Above: Intel Xeon Ivy Town chip

Image Credit: Intel

VB: Do you think all of the servers in the world are going to wind up in Iceland?

Waxman: [laughs] I don’t think they have to. It’s interesting. One would think that locating a server where it’s cold all the time would certainly reduce the cooling costs. But one of the things that’s funny is that a lot of data center location is more tied to latency and the speed of light. Number two, it’s humidity, or cheap power.

Certainly if Iceland has cheap power and it’s easy to cool, those are big drivers. But we’ve also found that you can do free cooling in even hot temperatures. There are companies doing it in Arizona and Las Vegas. You wouldn’t think that would be an easy place to run a free cooling data center, but it has a lot more to do with humidity as a key driver than the actual temperature. Availability of power and proximity to population will continue to be the two primary drivers. But it’s interesting to see the different approaches that people take to free cooling.

VB: What do you notice about the behavior of some of the folks who buy most of these things – Google, Facebook, Apple, Amazon. Are there any points you’d like to make about the behavior of the biggest server customers?

Waxman: One of the things that’s been interesting for me in learning from them and their requirements is this: if you think about it from their perspective, they have giant data centers designed to deliver an application. Traditional data centers were designed to offer a few servers here, a few servers there, with many different types of applications. The desire to optimize for that scale brings in a whole new perspective on some of the challenges they have.

One of them is consistency. They like repeatable. They like consistent. The more that you do things that are one-off or variable, the more it creates that fly in the ointment effect. What we want to go do is provide them with that reliability and consistency. That’s part of the reason that having a consistent instruction set and compatibility is important.

The other element is that they’re always looking for competitive advantage. They’re looking to deploy the technology sooner. We had a number of our largest cloud customers deploying the latest-generation Xeon E5-2600s before they were publicly launched, because there’s so much demand to get a better time to market. It’s all about economics and competitive advantage.

That also leads me to the conventional wisdom, which is that sometimes we hear our competition talk about how they’re designing for the large cloud service provider design point and touting power efficiency. That certainly is of interest to them. But by far the biggest driver is getting more performance and more capacity out of these massive data centers. It separates the truth of what people are deploying from what’s sometimes just the industry hype.
