Micro server pioneer Andrew Feldman sees huge growth in power efficient computing (interview)

Andrew Feldman of AMD

Image Credit: AMD

Andrew Feldman is one of the pioneers of micro servers. His company, SeaMicro, used low-cost Intel Atom chips to create a category of servers that packed lots of processors into a single machine with exceptionally low power consumption for a traditional x86 (Intel-based) system. When the SeaMicro micro servers debuted in 2010, it was like an Atom bomb dropped on the server business, where high-performance, heat-producing chips were the norm.

Advanced Micro Devices bought SeaMicro early last year for $334 million to gain entry into the energy-efficient server market, and that has helped the company gain new insights into server customers. Feldman is now corporate vice president and general manager of the server group at AMD.

He believes that the micro server revolution is just starting in corporate data centers as the drive for low-cost, high-efficiency servers increases. And startups that latch onto the “micro server ecosystem” are going to grow with it. We caught up with Feldman recently. Here’s an edited transcript of our interview.

Above: Andrew Feldman of AMD with SeaMicro board

Image Credit: AMD

VentureBeat: What’s on your agenda these days?

Andrew Feldman: We’re doing well. It’s an exciting time to be in the data center. The number of startups aiming their business there, the rate of growth among the storage guys, the interesting networking that’s going on, whether it’s software-defined networking or network function virtualization: it’s just a tremendous amount of exciting stuff, all driven by this unprecedented growth of computing in the data center. We’re participating in that in a number of different ways.

VB: The flash memory startups are getting a lot of attention. A billion dollars’ worth of deals has happened recently.

Feldman: Yeah. Virident was sold for $645 million. Cisco bought Whiptail for $415 million. Pure Storage might have raised $125 million at a valuation of a billion. I think Nimble and Nutanix are doing well. There’s a whole generation behind those guys that are doing well. It’s interesting.

VB: Is there a micro server economy, then, or an ecosystem?

Feldman: For sure. This is all part of a push to build the new data center. The new data center has different combinations of compute and disk and networking. It has different workloads. It has different demands. The traffic patterns are different. It’s a very different thing from what we were building 10 years ago. That plays into not just the way SeaMicro builds servers, but also, at AMD, our choice to add ARM to the portfolio. We believe that will be an exciting and important component of the server market going forward. We think that by 2016, 2017, we’ll see double-digit share for ARM servers.

We see a continued demand on traditional x86 as well, but even there we see some changes. We see the importance of things like the Freedom Fabric that SeaMicro invented. We see actions like HP building Moonshot. They’re stepping into the fabric-based micro server market and trying to take a step toward some of the innovations that SeaMicro has made.

Above: Andrew Feldman with original SeaMicro box

Image Credit: Dean Takahashi

VB: How have things changed inside AMD? You mentioned ARM. What is new?

Feldman: I have two teams. We built the SeaMicro business and we have the server business. We are informing our CPU design with system expertise and direct contact with the customer. We are trying to match workload to processor design. On the SeaMicro side, we’ve doubled the size of our engineering team and increased the size of the sales organization. We’re accelerating our work.

As we announced a little while ago, on the ARM side, we’re knitting the SeaMicro technology and the Freedom Fabric into the ARM CPU. These are substantial steps.

VB: And the market size—Is micro server the fastest-growing category?

Feldman: It is the fastest-growing category, for sure. If you look carefully at not just what we sell or what HP sells, but at some of the Dell systems being sold out of their DCS group – which sells to the largest 10 or 12 players in the world – they look like micro servers as well. They have many of the characteristics of shared infrastructure and being tied together. They don’t talk about those because those customers think that’s providing a source of advantage, but I think you’ll see a significant proportion of—In 2010, Diane Bryant assured me that the Atom would never be a server part. Now they have 13 server SKUs.

VB: Intel came up with their own history of micro servers, kind of.

Feldman: They did! [laughs] Isn’t that remarkable?

VB: They didn’t really mention SeaMicro that much, just how one of the Intel fellows was bashing his head against a wall for a bit and then finally got listened to.

Feldman: Let me tell you. When we had a box up and running, he still had a PowerPoint. Their weaknesses in the Atom reflect a schizophrenia in that company. They don’t want ARM to come up from below. That’s an important thing, because in the history of compute, everyone gets beaten from below. So they want to have parts there – they want to say they’re a player and a visionary – and yet they really want you to buy a Xeon. That’s why their Atom parts are weak.

VB: What I don’t quite understand with computing in general is how it seems to bounce back and forth between centralized and decentralized.

Feldman: Why is that hard to understand? We do the same thing in networking.

Above: Andrew Feldman and Rory Read of AMD

Image Credit: AMD

VB: Well, not hard to understand, but why don’t you just make up your mind?

Feldman: Are you a football fan? The same thing happens in many competitive arenas. Offensive linemen get big, so defensive linemen get fast. That game happens. You have these bigger and bigger guys on the offensive line, so all of a sudden the speed rusher came in. He could get outside the big guys fast.

So you had centralization and more centralization. Now the networking, the speed of connectivity, got to be such that you could have decentralization. You could go to the cloud. Originally you had centralization because the cost of compute was so high. That was the shared compute of the early ’70s. You had your modem. The real limited resource was the compute cycle.

Now we’re centralizing compute at Amazon or Verizon or the other major cloud companies because communication is fast. That’s very interesting. You see the same thing in networking. We used to have decentralized routing. That’s what routers do. Now we’re going to an SDN model where it’s centralized. You have a big view of what’s happening in the network, rather than everybody having two hops of view.

VB: With the way it’s going now, how does that help you, say, beat the traditional approach?

Feldman: In a number of ways. The big driver in the market today – and what’s so different today – is the power and the rate of growth of the mega data center and the big data center guys. Just to give you an idea, I think it took JPMorgan Chase 100 years to be one of the largest consumers of compute. It took Facebook four years. We’re talking about crazy new business models that produce unbelievable demand for compute.

That demand for compute isn’t the same as the old demand for compute. They have had to rethink their software. That means the workload is different. That means the underlying machine, the CPU, is different. In the cloud, when you go to an Amazon AWS or to Verizon, you don’t know what CPU you’re using. The brand has been disintermediated. It’s been removed. You just get a slice. That’s true in the private cloud, too. If you’re an engineer, you don’t care. What you want is eight gigs of DRAM and some compute. Those are tremendous changes, which in our view work very much against Intel.

There are some fundamental changes as well — the rise of the ARM ecosystem, the fact that these very large demanders of compute would like customization. That is done extremely easily in an ARM ecosystem and very painfully in the traditional Intel approach. In the x86 world it takes three or four years and $400 million to build a part. In the ARM world it takes 18 months and $30 million. You can do a custom part for a very different type of customer.

How that relates to VentureBeat is, when was the last time there was a startup doing an x86 part? Montalvo? They raised $150 million and blew up. There’s no innovation there, no startups doing it, because it’s too expensive. There are many startups doing ARM parts of one type or another, because you can do it for a reasonable amount of money.

Above: Andrew Feldman of AMD

Image Credit: AMD

VB: There’s also the GPU compute wave.

Feldman: There is. That’s part of the same general thrust, where you can specialize your compute for a particular type of work. That’s a great example of a type of work that’s better done with a slightly different type of core, a graphics core, than is done with a traditional processor core. That’s very much the same notion. There is so much work now, and the distribution of work is such that you can use a particular type of engine, a graphics engine, to do that type of work. You can use an ARM engine to do this type of work. You can use an x86 engine to do this type of work.

VB: So micro servers and GPU compute are benefiting each other? Or are they competing in some way?

Feldman: I’d say they’re benefiting from the same underlying trend. One size doesn’t fit all in compute. Not in form factor, not in processor. That’s really what’s happening.

VB: How do you guys stack up now against Intel? They seem to have another refresh wave coming here.

Feldman: We’re slightly smaller. [laughs] I think we stack up really well. Our platforms, ranging from the client side all the way through servers, are extremely strong right now. We’ve made super progress. Our small-core parts, which are in the most interesting part of the server market right now, are better not just than Centerton, which they’re shipping now, but also than Avoton, which they will be shipping soon. We’re pleased with how we stack up right now.

VB: The percentage of the server business that shifts to micro servers, are you getting a better sense—

Feldman: More than 20 percent. That’s our estimate.

VB: Last year, Intel was talking about how it would be 10 percent. It’s already over 20 percent? Or is that projected?

Feldman: Projected, for 2016. Some of the largest micro server customers don’t report. Google doesn’t report to IDC. But if you look carefully, I think it’s already in the four to six percent range, and we’ll continue with steady growth.

VB: Who are some other big names that have endorsed it?

Feldman: We have a big announcement at the end of the month, a really big one. Stay tuned.

VB: What does Facebook use right now?

Feldman: Their open compute rack looks very much like a rack of micro servers. It’s shared infrastructure, which was a tenet. It’s efficient processors.

VB: You’re making progress on the ARM design, then?

Feldman: We announced that we’d be sampling parts in Q1. Eight-core and 16-core, Cortex-A57-based. It’ll be a phenomenal part.

VB: The 64-bit ARM movement, is it—Apple just announced their first part in that space.

Feldman: How cool is that? For servers, you need 64-bit.

Above: Andrew Feldman of AMD

Image Credit: AMD

VB: Is the rest of the ecosystem coming along for now, as far as 64-bit ARM?

Feldman: The rest of the ecosystem is doing really well. It has a ways to go, but whether it’s OS guys like Fedora or Ubuntu or Red Hat, we’re going to have OSes. We’re going to have a Java JIT. We’re going to have everything you need, a tool chain. By the middle of next year, you’ll see amazing things.

VB: You could put micro servers in everyone’s home, I guess?

Feldman: I’m okay with that. [laughs] How do you make a micro server a consumer electronic device? Now we’re talking. That is a weird one. There are a lot of startups in storage, a lot of startups in different flavors of flash, a lot of startups in network function virtualization.

VB: Is Nebula’s mission consistent with yours as well?

Feldman: Absolutely. Nebula is doing the OpenStack appliance. These are all tributaries off the main stream. The fundamental tectonic shift is the transformation to the big data center. That means storage is different. That means software. That means virtualization. That means which CPUs you use and how you package them. All of these are part of the same notion, that these new facilities and these new customers don’t want the same shit they had 10 years ago.

I’m a big supporter of Nebula, because we believe very strongly in OpenStack. We think that, in the cloud, nobody uses VMware. KVM, Xen, that’s what’s used. The OpenStack approach is first-rate. We think that, just like DataStax makes it easier to adopt Cassandra, guys like that will make it easier to adopt OpenStack.

VB: Are you waiting for something to physically play out, then? It seems like that movement started years ago, now. You’re waiting for a big milestone to happen, a trend or a crusade here.

Feldman: The reason I’ve worked at startups—You don’t work at a company called Trailing Edge Technology. [laughs] I like to be there early. There’s an unfolding. We saw a fundamental change in 2007. At Google they probably saw it in 2004. They didn’t tell anybody. They began making changes inside their business. We saw it across their entire landscape. We built a company. We changed the way servers are made. We brought to the fore the notion of workloads and matching work to compute. We brought to the fore the notion of a fabric. That’s going to play out over the next 10 years.

It’s a big wave. One of the fun parts about your job and mine is that when you see a big wave and you get out on it, it goes thundering past. You want to be riding that. Sometimes it dumps you and you drown. Other times you get the ride of your life. That’s why we do this, to see waves like that and ride them as far as they can be ridden.

Above: Andrew Feldman of AMD

Image Credit: AMD

VB: Do you feel that AMD is strong enough at this point to ride that wave, as you put it?

Feldman: We are. It’s an interesting question – whether, in very big waves, you’d rather be in a big boat or a small one. It’s not clear. I think we’re clearly more nimble, more agile. The fact that we don’t have fabs allows us tremendous flexibility. Intel has to pay $5, $7, $10, $12 billion a year to maintain a performance advantage. That advantage is shrinking with each fab geometry level. That’s an extraordinary strength they have, and a weakness on the flip side.

At some point, when you’re very large, your strength becomes your weakness and you get beaten quickly. In all of technology, nothing destroys capital faster than an empty fab. They cost $7 billion to make and have 1,500 or 1,700 days of life. If you have six or 10 or 12 hours a day of open fab time, you never get that back. That’s tens of millions of dollars a day of fab capacity. It’s a source of advantage, and then bang! When will that be? It’s tough to say.

We like our odds. We like the flexibility we have. We like the fact that we can see a trend like ARM or micro servers and we can be in it. We don’t have politics and this lumbering-ness that you see in a company of that size. Now, there are things we wish we’d done better and things we are doing better than we’ve done in the past. There are opportunities to improve and pursue greatness. But that’s fun too. Those are the micro-adjustments you make when you’re on the big wave.