Back in 1965, Gordon Moore, now chairman emeritus of Intel, predicted that the number of components on a chip would double roughly every two years. His prediction proved remarkably prescient, and Moore's Law, as his observation came to be known, has set the pace for technological progress. Gaming has grown up with Moore's Law, and after 50 years, it's easy to be lulled into thinking that computers are good enough. After all, computers are now millions of times more powerful than they were back then.
You may think you don't need all of the power of Nvidia's Titan X, a graphics chip announced a few weeks ago. It has 8 billion transistors, or about 3.5 million times as many as the 2,300 transistors on Intel's first microprocessor, the 4004, back in 1971. A gaming PC with a Titan X graphics card can run 2K's latest game, Evolve, at 74 frames per second in 4K resolution. Most games look good at 30 frames per second, or maybe 60. And a high-definition TV has only a quarter of the pixels of a 4K TV. Who needs that?
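For what it's worth, those two transistor counts line up well with a doubling every two years. Here's a quick back-of-the-envelope check in Python (my own arithmetic, purely for illustration):

```python
# Back-of-the-envelope check of the transistor counts above against
# Moore's Law (a doubling roughly every two years).

INTEL_4004_TRANSISTORS = 2_300        # 1971
TITAN_X_TRANSISTORS = 8_000_000_000   # 2015

ratio = TITAN_X_TRANSISTORS / INTEL_4004_TRANSISTORS
print(f"Ratio: about {ratio / 1e6:.1f} million times as many transistors")

doublings = (2015 - 1971) / 2  # 22 doublings in 44 years
predicted = INTEL_4004_TRANSISTORS * 2 ** doublings
print(f"Moore's Law alone would predict roughly {predicted / 1e9:.1f} billion transistors")

# Output:
# Ratio: about 3.5 million times as many transistors
# Moore's Law alone would predict roughly 9.6 billion transistors
```

The extrapolation overshoots a bit, which is forgivable over 44 years, but the order of magnitude lands right where the real hardware did.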
Maybe that game doesn't need it. But I guarantee you that the games of the future, the ones still in the imaginations of game developers, will need every bit of that processing power and more. We have to remember that imagination powers games, and it moves the industry and technology forward. It almost always reaches farther than what we can comfortably accomplish today. Our GamesBeat Summit won't focus exclusively on gaming tech, but I'm hoping we'll get into this discussion at the event, coming up May 5-6 at Cavallo Point in Sausalito, Calif.
We have to remember that games are not expanding along some linear path, where graphics are the only thing that gets better. Games have expanded in all directions. They still drive technology forward, and they take advantage of everything hardware engineers can deliver.
In the not so distant future, Facebook’s Oculus VR division is expected to deliver virtual reality goggles that will tax the power of the best computers. Mike Abrash, chief scientist at Oculus VR, said at Facebook’s recent F8 conference that his company’s Crescent Bay prototype can deliver images at 90 hertz, or 90 times a second. It has a 90-degree field of view and a limited color gamut. By comparison, he noted that the real world has continuous illumination, a full color gamut, and a 280-degree field of view. No one can do that yet.
I was waiting for Abrash to say what would be good enough. He did say that if the images are presented correctly, the brain will perceive them as real. But he didn't venture to guess how soon technology will deliver realistic VR that mimics the real world. I'll assume that realistic VR will require more technological muscle than Oculus can deliver right now. That's one reason the engineers still have to push ahead.
John Carmack, chief technology officer at Oculus and another game graphics wizard, said in his talk at the Game Developers Conference that he thinks virtual reality will be bigger than gaming. It's a new platform, and gaming will pave the way for it. You'll be able to watch 360-degree movies and teleport yourself to places you could never travel to. But VR games will likely become a huge market earlier than many of those other uses. Carmack believes a billion people will use VR.
But first, it has to be an experience that doesn't make people sick. If the hardware delivers something like 75 frames per second, the rate of Oculus' earlier Development Kit 2 prototype, the images can't stay completely in sync with your movements. When you turn your head, the images smear, because the rendered view takes a moment to catch up. And when the images and your expectations are out of sync, you can get sick. Some people are very sensitive to this, and the solution is to drive the technology toward 120 frames per second, so that there is no perceptible delay between when you expect to see something and when the image appears.
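To see why the refresh rate matters so much, here's a minimal sketch in Python (my own illustration, not anything from Oculus) of how little time the renderer gets to produce each frame at the rates mentioned above:

```python
# Rough frame-time budgets at the refresh rates discussed above. At higher
# rates the renderer has less time per frame, but the image on screen lags
# your head motion by less, which is what keeps your stomach happy.

REFRESH_RATES_HZ = [75, 90, 120]  # DK2, Crescent Bay, and the hoped-for target

for hz in REFRESH_RATES_HZ:
    budget_ms = 1000.0 / hz
    print(f"{hz} Hz -> about {budget_ms:.1f} ms to render each frame")

# Output:
# 75 Hz -> about 13.3 ms to render each frame
# 90 Hz -> about 11.1 ms to render each frame
# 120 Hz -> about 8.3 ms to render each frame
```

Every frame the hardware misses inside those budgets shows up as the smearing and lag described above.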
“Oculus has had a lot of fear of poisoning the well,” Carmack said. “If a really bad VR product goes out, it could set the industry back to the 1990s.”
That was when VR failed the first time. We don't want to go back to the days when 80 percent of the people who tried a product got sick from it. But a turn or two more of Moore's Law, and we'll be there. Oculus also can't ship a headset for $1,000 and require you to use a $3,000 PC. So the component costs have to drop at the same time the components become more powerful. Oculus wants to deliver a low-cost consumer product. Somebody else may shoot for the high end. Either way, we need Moore's Law to deliver that for us.
We also have other needs. These devices can't run so hot that they burn our foreheads, which means they have to be more power efficient. Batteries, unfortunately, haven't improved at anywhere near the same rate as processing power. And the more processing power you concentrate in a tiny space, the worse the heat problem gets. If you want to run VR software on a device such as the Samsung Gear VR, which uses the Samsung Galaxy Note 4 smartphone as its display, the challenge becomes that much harder.
The Samsung Galaxy Note 4 has a 2,560 x 1,440-pixel organic light-emitting diode display, 5.7 inches on the diagonal, driven by a multicore Snapdragon 805 processor. But it's not quite good enough. Not only does the smartphone need a lot of processing power, it also can't run so hot that it cooks the components packed into its tiny frame. We still need better power efficiency.
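For a rough sense of the load, here's a small sketch (my own figures for illustration; the Gear VR's actual pipeline adds stereo rendering and lens-correction overhead on top of this) of the raw pixel throughput that display demands at 60 Hz versus a 90 Hz target:

```python
# Raw pixels per second the Note 4 panel asks of the GPU, before any
# stereo rendering or lens-distortion correction is counted.

WIDTH, HEIGHT = 2560, 1440  # Galaxy Note 4 panel resolution

for hz in (60, 90):
    pixels_per_second = WIDTH * HEIGHT * hz
    print(f"{hz} Hz: about {pixels_per_second / 1e6:.0f} million pixels per second")

# Output:
# 60 Hz: about 221 million pixels per second
# 90 Hz: about 332 million pixels per second
```

And all of that has to happen inside a phone's thermal and battery budget, which is exactly where the power-efficiency problem bites.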
I've tried out the Samsung Gear VR Innovator Edition, and it doesn't run hot. It works, and there are some fun apps to go with it. But it falls short of what Oculus VR can deliver with a more powerful gaming computer driving the display through a wire. Of course, it would be great if Oculus VR could send the imagery from the gaming PC to the headset wirelessly, so you could get up and walk around while wearing it. Delivering a great VR gaming experience on a mobile device, with sufficient battery life, without making anyone sick, and doing it all on a budget: we're still not there yet.
Some other ideas besides virtual reality will demand the best technology. Shinra Technologies, a startup that spun out of Japan's Square Enix, and Improbable, a London startup backed by Marc Andreessen's Andreessen Horowitz venture firm, are envisioning games that are massive virtual worlds: gigantic simulations where thousands of characters interact in real time in the same persistent virtual world. These worlds are hosted in the cloud, in Internet-connected data centers packed with powerful servers. Those servers can use Nvidia's Titan X graphics cards, but it would be even better if a single machine could serve many users.
In the early days of cloud gaming, it took one graphics card in a server to deliver a high-end game to a single user on a simple client device. That was too expensive to be cost-effective, and it was one of the problems that dragged down cloud gaming pioneer OnLive; its service never became viable, and Sony purchased its assets for an undisclosed price this week. But the Titan X now lets a single graphics card serve as many as 64 users.
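The economics of that shift are easy to sketch. Assuming a card price of around $1,000 (roughly what a Titan X sells for; the exact figure is my assumption for illustration), the hardware cost per concurrent player changes dramatically:

```python
# Rough hardware cost per concurrent cloud gaming user, comparing the old
# one-card-per-user model with a card that can serve up to 64 users.
# The $1,000 card price is an assumption for illustration.

CARD_PRICE_DOLLARS = 1_000

for users_per_card in (1, 64):
    cost_per_user = CARD_PRICE_DOLLARS / users_per_card
    print(f"{users_per_card} user(s) per card: about ${cost_per_user:,.2f} per concurrent user")

# Output:
# 1 user(s) per card: about $1,000.00 per concurrent user
# 64 user(s) per card: about $15.62 per concurrent user
```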
That bodes well for Nvidia's Grid cloud gaming service, which it will deliver via the $200 Nvidia Shield set-top box, shipping in May. But Nvidia will still need help from broadband providers. The company suggests using the Shield set-top box with a connection that can deliver 30 megabits per second. With less than that, you may see occasional flaws in the high-end games streaming to your big-screen TV.
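That 30-megabit recommendation makes sense once you consider what a streamed game's video actually involves. Here's a rough sketch, assuming a 1080p stream at 60 frames per second (my assumption, not Nvidia's published spec), of the raw data rate before video compression:

```python
# Uncompressed bitrate of an assumed 1080p, 60 fps game stream, to show why
# heavy video compression is what makes a 30 Mbps connection workable.

WIDTH, HEIGHT = 1920, 1080
FPS = 60
BITS_PER_PIXEL = 24  # 8 bits each for red, green, and blue

raw_bits_per_second = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"Uncompressed: about {raw_bits_per_second / 1e9:.1f} gigabits per second")
print(f"Compression needed to fit in 30 Mbps: roughly {raw_bits_per_second / 30e6:.0f}:1")

# Output:
# Uncompressed: about 3.0 gigabits per second
# Compression needed to fit in 30 Mbps: roughly 100:1
```

Squeezing the stream that hard is why a slower connection shows up as visible artifacts on your TV.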
Nvidia needs Moore's Law so that it can keep packing better hardware into its data centers and deliver cheaper and cheaper cloud gaming services to users.
Then we have sensors and input systems. We need Moore's Law to deliver better ways for us to interact with games. Game controllers are getting a little boring, and Microsoft's Kinect gesture-control systems for the Xbox 360 and Xbox One still fall short of delivering truly seamless, effortless gesture control.
I’m hopeful that there’s a solution for that. Alex Lidow, chief executive of Efficient Power Conversion, believes that using high-performance, low-cost, and power-efficient gallium nitride chips will enable some cool gaming tech.
As I noted in my story on Lidow earlier this week, EPC's eGaN chips are used in light detection and ranging (LiDAR) systems, remote sensors that map the environment around them, somewhat like radar. A LiDAR unit illuminates an area with a laser and analyzes the reflected light to map out a 3D space with resolution as fine as 2 inches. Google uses LiDAR in the spinning sensor units atop its self-driving cars.
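A quick calculation shows why LiDAR pushes chips so hard, and why fast-switching gallium nitride parts are attractive for it. Distance comes from the round-trip time of a light pulse, so resolving features 2 inches apart means resolving timing differences of a fraction of a nanosecond (my own illustrative math, not EPC's figures):

```python
# Time-of-flight LiDAR: distance = (speed of light * round-trip time) / 2.
# Telling apart two surfaces 2 inches different in range means resolving
# the corresponding difference in round-trip time.

SPEED_OF_LIGHT_M_PER_S = 299_792_458
RESOLUTION_M = 2 * 0.0254  # 2 inches, in meters

# Extra round-trip time added by 2 inches of extra range (out and back).
delta_t_s = 2 * RESOLUTION_M / SPEED_OF_LIGHT_M_PER_S
print(f"Timing resolution needed: about {delta_t_s * 1e9:.2f} nanoseconds")

# Output: Timing resolution needed: about 0.34 nanoseconds
```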
These gallium nitride chips will power the LiDAR systems that can be used in virtual reality goggles and augmented reality glasses, Lidow said. The LiDAR system can map out your room, identify the obstacles you can trip over, and detect your body movements precisely. I’m guessing that Valve and HTC are using this LiDAR technology in the latest Vive virtual reality headset that the companies announced at the GDC in San Francisco.
Gallium nitride still needs the benefit of Moore’s Law to live up to its full promise. I’m excited to see what it delivers. And even if it fails to live up to its promises, it will provide the competition needed to drive silicon chips forward faster than they would otherwise evolve.
So there you have it. We still need Moore's Law to deliver better 3D graphics. We need it for better virtual reality, inexpensive gaming systems, better cloud gaming, fantastic sensors, mobile systems with great power efficiency, and magical gesture-detection systems. We need it to deliver that Star Trek Holodeck experience, where you can't tell reality from illusion. The next logical company to take advantage of the latest advances is Nintendo, which plans to reveal more about its NX dedicated game system in 2016. I can't wait to see how it will take advantage of Moore's Law in designing its next big machine.