The promise of virtual-reality gaming is almost here. But because the technology is so new, developers haven’t figured out the best way to make games for it yet.
Graphics chips manufacturer Nvidia believes it has something that’ll help. I met with the company in Los Angeles ahead of its appearance at this week’s Computex trade show in Taipei, Taiwan. Using an Oculus Rift headset, Nvidia engineer Tom Petersen walked me through the benefits of a new piece of VR tech called multi-resolution shading.
The demo was running in real time and took place in a pretty hallway inside of a castle. Petersen flicked multi-res on and off while I was wearing the headset. When I looked around, I couldn’t tell the difference — and that’s the point.
Multi-resolution shading is a more efficient way of making VR games without sacrificing performance or image quality. Specifically, it tweaks the way images from your PC become the images you see in the headset. The images start off looking straight (as they do when you’re playing on a regular monitor or TV), but when they reach the headset, the internal screen displays them as warped images (one for each eye) to simulate depth in the virtual world. The headset’s pair of lenses acts like an “anti-warp,” distorting the image again so that by the time you see it, it looks straight.
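To make the warp step concrete, here is a minimal sketch of the kind of radial “barrel” distortion a VR compositor applies before the lenses un-warp it. The coefficients `k1` and `k2` are illustrative placeholders, not values from any actual headset:

```python
def barrel_warp(x, y, k1=0.22, k2=0.24):
    """Radially distort a point in lens-centered coordinates.

    x, y are normalized coordinates relative to the lens center.
    k1 and k2 are made-up distortion coefficients for illustration;
    real headsets ship calibrated values per lens.
    """
    r2 = x * x + y * y                      # squared distance from center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # grows toward the edges
    return x * scale, y * scale

# A point near the lens center barely moves,
# while a point near the edge is pushed outward noticeably.
print(barrel_warp(0.1, 0.0))
print(barrel_warp(0.9, 0.0))
```

The key property is that the distortion is strongest at the edges of the image, which is exactly where pixels get squeezed together on their way to your eye.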
Without multi-res shading, developers lose precious resources during the conversion. The problem, Petersen said, is that the warped image isn’t natively rendered on the PC’s graphics processing unit. Instead, the GPU renders the image normally (straight lines and all), and then it goes through an additional processing pass before it becomes warped and distorted. But not all of the image’s pixels survive this transition. By the time you see the image, you’re only seeing about half of the pixels. Developers waste time and energy generating a bunch of pixels no one will ever see.
Multi-res shading prevents that loss from happening in the first place. It divides the image into a three-by-three grid of nine “viewports,” and only the center viewport is rendered at full resolution. Since the pixels in the outer viewports will shrink and merge during the warping process, multi-res renders them at smaller resolutions. By rendering fewer pixels, developers can improve the performance of their VR games.
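The savings from that grid are easy to estimate with some back-of-the-envelope arithmetic. The sketch below assumes a hypothetical split where the center viewport covers 60 percent of each axis and the eight outer viewports render at half resolution per axis; these numbers are illustrative, not Nvidia’s actual defaults:

```python
def multi_res_pixels(width, height, center_frac=0.6, edge_scale=0.5):
    """Estimate pixels rendered with a 3x3 multi-res viewport grid.

    center_frac: fraction of each axis covered by the center viewport.
    edge_scale: per-axis resolution scale applied to the outer viewports.
    Both defaults are illustrative assumptions, not real driver settings.
    """
    edge_frac = (1.0 - center_frac) / 2.0  # each outer band of the grid
    widths = [width * edge_frac, width * center_frac, width * edge_frac]
    heights = [height * edge_frac, height * center_frac, height * edge_frac]
    total = 0.0
    for i, w in enumerate(widths):
        for j, h in enumerate(heights):
            # Only the middle cell of the 3x3 grid keeps full resolution.
            scale = 1.0 if (i == 1 and j == 1) else edge_scale
            total += (w * scale) * (h * scale)
    return total

full = 1080 * 1200  # roughly the per-eye resolution of early consumer headsets
reduced = multi_res_pixels(1080, 1200)
print(f"rendering {reduced / full:.0%} of the pixels")  # prints "rendering 52% of the pixels"
```

With these assumed numbers, the GPU shades only about half the pixels of a naive full-resolution render, which lines up with Petersen’s point that roughly half of a conventionally rendered image never survives the warp.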
“Either they’re gonna add more stuff to their games to make them look even better, or they’re going to allow the games to run on more hardware platforms,” Petersen said.
Both scenarios would be good news for players. The Facebook-owned Oculus VR recently estimated that it’d cost less than $1,500 to buy a Rift headset and a good PC that can run VR games. But if enough developers use multi-res shading, you might be able to get by with a cheaper PC configuration. And if you already have a great GPU, you can enjoy all the little details that developers now have room to cram into their games.
Nvidia’s announcement comes just months before the first wave of VR devices hits store shelves. The HTC Vive is releasing before the holidays, and both the Oculus Rift and Sony’s Project Morpheus are arriving sometime in 2016.