VB: I understand that’s a big business in Iceland now.
Patel: Right. The motivation for Iceland was low-cost geothermal electricity and cooling from the outside air. But I looked at three things: not just cooling, but power, and also ping, the networking element. I felt that data center costs needed to be reduced significantly to connect the entire world.
We worked on what I call the first and second generations. Then I went on to drive the third-generation design, where the data center should be completely off the grid, for example run with micro breeder power supplies. It could be solar. But one of my favorites was that data centers should be integrated with dairy farms. A big commercial dairy farm in the U.S. might have 50,000 cows, and just 2,000 cows can generate a megawatt of electricity. The anaerobic digestion of manure generates methane, and you use that to run a standard engine. G.E. makes one. It’s a very good engine. Use that to generate electricity. It’s systemic innovation. You just put things together.
Then we considered that the heat from the IT equipment is low grade. Generally we just dump it. Why not take the hot water and put it through the manure in those big digesters? Farmers call it soup. If you dump the heat into the soup, you enhance methane generation. Then use the methane to generate electricity. Even the waste heat from the generator is high grade. You could use that to drive an absorption refrigeration cycle. Our contention was that IT and manure have a symbiotic relationship. [laughs]
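A rough sketch of that arithmetic in Python, taking the interview’s 2,000-cows-per-megawatt figure at face value; the heat-boost factor standing in for the digester effect is purely illustrative:

```python
# Back-of-the-envelope check of the dairy-farm power arithmetic.
# COWS_PER_MEGAWATT comes from the interview; heat_boost is a
# hypothetical stand-in for the waste-heat-into-the-soup effect.

COWS_PER_MEGAWATT = 2_000

def farm_capacity_mw(cows: int, heat_boost: float = 1.0) -> float:
    """Electric capacity from anaerobic digestion, in megawatts."""
    return cows / COWS_PER_MEGAWATT * heat_boost

print(farm_capacity_mw(50_000))         # 25.0 MW for a big commercial farm
print(farm_capacity_mw(50_000, 1.10))   # 27.5 MW with a hypothetical 10% boost
```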
VB: Cows and data centers.
Patel: Right! At the time we wrote a paper. It didn’t get much interest. But when we said that IT and manure were symbiotic, it took off. We got a lot of press. Farmers invited us out. So we wrote another paper and contended that when you build a data center, you must think of your own power station. You must make sure your demand side matches the supply side. Not all workloads are equal. If someone is going to pay more to execute their workload right away, execute their job before the one who can wait. We built software around dynamically provisioning to balance the demand side. I felt that design was needed. The industry doesn’t seem to have gone forward with that.
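A minimal sketch of that kind of priority dispatch, with hypothetical job names, bids, and a supply-side power cap; it illustrates the demand-matching idea, not HP’s actual software:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    neg_price: float                      # negative so the highest bid pops first
    name: str = field(compare=False)
    load_kw: float = field(compare=False)

def dispatch(jobs: list[Job], supply_kw: float) -> list[str]:
    """Run the highest-paying jobs first, within the supply-side power budget."""
    heapq.heapify(jobs)
    scheduled, used = [], 0.0
    while jobs and used < supply_kw:
        job = heapq.heappop(jobs)
        if used + job.load_kw <= supply_kw:   # skip jobs the supply can't carry now
            scheduled.append(job.name)
            used += job.load_kw
    return scheduled

jobs = [Job(-0.30, "urgent-analytics", 400.0),
        Job(-0.05, "batch-backup", 700.0),
        Job(-0.20, "render", 500.0)]
print(dispatch(jobs, supply_kw=1000.0))   # ['urgent-analytics', 'render']
```

The job that pays most runs first; the patient, low-bid job waits until the on-site supply can carry it.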
Data centers, to a large extent, have not succeeded in connecting the world. 60 percent of the world is not connected. There’s talk of using satellites and drones, but I don’t think that will work. I felt it was time to move again. When the split happened and I had a choice between HPE and HPI, I said, “You know, data centers will remain. I know it. We’ll have a hybrid opportunity. But I should go back to the edge.” The edge devices, in a peer-to-peer network, will connect the rest of the world. It will happen south of the internet. Maybe even at cell towers.
The reason I think connectivity will happen at cell towers, why we’re going back to computing at the source, is twofold. One, people are not connected. Two, in the places where they’re not connected, telco is prevalent. When I go to India, in my wife’s village where the leopards still roam, I get a better connection than at Hanover and Page Mill. They have great voice. Cell towers are everywhere. But only 250 to 300 million people have data services. A billion people don’t. And yet those are the people who need it most.
When I go to India, I get a train reservation because I’m connected. The guy who makes five dollars a day has to go stand in line for the same seat that I take. It’s a digital divide. The prime minister wants to connect them all. My contention is that a hybrid model will evolve. There will be data centers, but the action is going to be at the edge.
One of my goals at HP is, what does edge computing look like? The other thing I think about on the edge is, what are the cyber-physical applications? We have failed, in many cases, to address those. In the 21st century, those of us who were connected followed tweets from Ashton Kutcher or Beyoncé, but we failed to follow tweets from airplanes. We lost airplanes. We couldn’t follow them. And yet an airplane generates a terabyte of data. We should be analyzing it, tracking it, helping the pilot. When the pilot fell from 35,000 feet because his pitot tube was frozen, we could have helped him.
To me, a whole class of physical applications hasn’t been addressed. The 21st century is about the integration of cyber and physical. If I want to analyze physical systems like airplanes, I’m going to have to do the analysis on the airplane and provide real-time insights.
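As a sketch of what such an on-board check could look like: with redundant pitot probes, a frozen tube shows up as one airspeed reading diverging from the others. The probe count, threshold, and readings below are illustrative assumptions:

```python
# Hypothetical on-aircraft check: flag disagreement among redundant
# airspeed sensors in real time, the kind of insight a frozen pitot
# tube would otherwise hide until too late.

def pitot_disagreement(readings_kt: list[float], max_spread_kt: float = 20.0) -> bool:
    """Flag when redundant airspeed readings (in knots) disagree beyond a threshold."""
    return max(readings_kt) - min(readings_kt) > max_spread_kt

# Two probes healthy, one iced over and under-reading:
print(pitot_disagreement([272.0, 270.0, 185.0]))   # True -> alert the pilot
```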
Events might go back to the data center. We may collect data for six months to do historical analysis of the entire fleet. A hybrid model will evolve. To me, computing at the edge is very important.
VB: So there’s an awful lot of computing that happens in a local space, but you don’t always need to go back to the main internet?
Patel: No. Robots today traverse pipes. They take video data. That video is sent to Bangalore, and people sit in a room looking for cracks in the pipe. Shouldn’t we be more sophisticated than that? I contend that tomorrow, the robots will go in the pipe like they do today. We have an aging infrastructure. Water mains break in the Bay Area all the time. That inspection will happen, but the video data, which is terabyte scale, will have to be analyzed on the robot itself. Maybe we’ll also close the loop with autonomous systems, where the robot traverses the pipe and fixes it as well.
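A minimal sketch of that pattern: analyze frames locally on the robot and send only flagged events upstream. The detect_crack() function is a hypothetical stand-in for whatever on-board model does the real work:

```python
from typing import Iterable, Iterator

def detect_crack(frame: bytes) -> float:
    """Hypothetical on-board model; returns a crack-likelihood score."""
    return 1.0 if b"crack" in frame else 0.0   # toy logic for illustration

def inspect(frames: Iterable[bytes], threshold: float = 0.9) -> Iterator[dict]:
    """Process video locally; yield only the events worth uploading."""
    for position, frame in enumerate(frames):
        score = detect_crack(frame)
        if score >= threshold:
            yield {"position": position, "score": score}   # events cross the network, not raw video

print(list(inspect([b"ok", b"hairline crack", b"ok"])))
# [{'position': 1, 'score': 1.0}]
```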
That’s the world we’re getting into. If we dream big with autonomous systems, there will be a lot of local computing and local action. Then there will be data centers that collect the information for historical data mining purposes. That’s one big driver.
VB: Back to the peer-to-peer model, it seems like you’d put a lot more computing into the edge to make up for the lack of bandwidth there. It reminds me of that network computer model Larry Ellison used to talk about. Put a small computer here, connect to the network, and the network provides the computing. As long as the connection is good, you don’t lose anything. But if you’re talking about the edge, where you don’t have the bandwidth, does what computing you put at the edge matter a lot more?
Patel: It does. Here’s the trend I’m seeing. I’m a mechanical engineer. I did CAD, CAE, CFD. Better and better workstations were built to do that, and then we had a general-purpose computer that worked. In verticals like transportation and biomedical, if they do lensless microscopy for imaging blood samples or something, they’re hacking together computers. But that doesn’t scale. We must think of ways we can scale.
As a computing company we have to ask what the architecture looks like. Maybe it’s a performance machine tailored to a pig in a pipe, or to an airplane. But what does it look like? Even though we’re addressing one vertical, how do I make it flexible across many verticals? Is there an FPGA?
When we did dynamic smart cooling, we ran the control system on MATLAB and Excel. We would do the calculations, go to Excel, change a cell, and then the cell would change the blower speed. The whole data center ran like that. If I can write MATLAB code and say, “I’m going to put it on that computer in the field,” can I have an FPGA there so I can use my code? In a world with more and more Teslas and the like, where you’re updating physical systems, can we go beyond updating firmware to creating completely new logic?
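A sketch of one such control step, a simple proportional loop; the gain, setpoint, and speeds are illustrative assumptions, not values from HP’s dynamic smart cooling:

```python
# One step of a proportional controller mapping rack inlet temperature
# to blower speed, the role the MATLAB/Excel prototype once played.
# SETPOINT_C and GAIN are made-up illustrative values.

SETPOINT_C = 25.0    # target rack inlet temperature, degrees C
GAIN = 0.05          # fraction of full blower speed per degree of error

def next_blower_speed(current_speed: float, inlet_temp_c: float) -> float:
    """Adjust blower speed toward the setpoint, clamped to [0, 1]."""
    error = inlet_temp_c - SETPOINT_C
    return min(1.0, max(0.0, current_speed + GAIN * error))

speed = 0.5
for temp in (27.0, 26.0, 25.5, 25.0):    # inlet temperature falling toward target
    speed = next_blower_speed(speed, temp)
    print(f"inlet {temp:.1f} C -> blower at {speed:.0%}")
```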
It’s high time that Silicon Valley came back to the cyber-physical world. This hurts me, because when things blow up, whether it’s in a factory or elsewhere, the pressure was building for days. If only we followed that the way we follow the frivolous stuff that happens in our world. It’s what I call operating at the crossroads of people, profit, and planet.