Above: Tim Kipp of Oxide Games

Image Credit: Oxide Games

It took a new startup game studio with a big vision to create something that real-time strategy fans will salivate over. At the 2014 International CES tech trade show in Las Vegas earlier this month, the Star Swarm demo from Oxide Games showed 3,000 to 5,000 starships fighting in a massive battle on PCs running on the latest Advanced Micro Devices Kaveri processors.

The demo was all in the name of pushing innovation in 3D graphics for games to new levels in a way that is unconstrained by the limits of today’s platforms. The Hunt Valley, Md.-based game company used its next-generation 64-bit game engine, Nitrous, and AMD’s Mantle application programming interface to create the demo, which will remind you of the huge space battles from the Star Wars: Return of the Jedi movie and the Homeworld game.

Tim Kipp, a co-founder of Oxide Games, talked with us in detail about how his small team created the demo with funding from game publisher Stardock. They’re going to use their new engine to create their own game, and they’re also licensing it broadly to other game developers. We decided it was worth a deeper dive to describe how Oxide pulled off the demo, which is embedded below.

Oxide will release a version of Star Swarm for modders in the first quarter. Here’s an edited transcript of our interview.

GamesBeat: Tell me about Oxide and how you started working on that project.

Above: Star Swarm demo lone spaceship

Image Credit: Oxide

Tim Kipp: Oxide was formed last year. The four of us – Dan Baker, Brian Wade, Marc Meyer, and I – wanted to see what we could do as far as pushing RTS and strategy games forward. We felt like everyone had fallen into a place that wasn’t necessarily stagnant, but everyone had gotten very comfortable with the performance levels we were getting. We felt like there was a lot of performance on the table that we could still capture, and if we were going to be able to capture that, the goal was to enable next-generation RTS games that we hadn’t seen before.

From our point of view, we felt like there was a lot of opportunity to change the dynamics of what current games were like, by focusing on making sure that all of our software, our entire engine, was built from the ground up to take advantage of the hardware that’s out there today.

GamesBeat: Did you start with any particular help or goals in mind? Were you syncing up with AMD or anyone like that when you started?

Kipp: We’ve had a good relationship with not just AMD, but also Nvidia and Intel. We’ve worked with all three of them in the past. They’ve all been excellent partners. We’re still working with all three going forward. Stardock has provided a lot of the seed capital to get us off the ground. They’ve been a tremendous help, not only in terms of capital, but also in terms of business finesse and support. What’s been fantastic about Stardock is that they’ve allowed us to focus on the technology. We don’t have to work quite as much on the business end, the day-to-day things.

GamesBeat: What’s your team’s background in making games? Did you have a lot of technology skills already?

Kipp: We’ve been doing this for a while now. I probably have the longest MobyGames page, but we’ve all got a slightly different background. Marc Meyer, Brian Wade, and I all worked at BreakAway Games together, probably 10 years ago at this point. Dan Baker has worked with us at Firaxis. We’ve been making strategy games for the last 12 or 13 years now. We’ve gotten a lot of expertise in terms of what the problems are in that space, and a good sense of ways to solve those problems.

We’ve worked on everything from Command & Conquer expansions to—I don’t know if you played Rise of the Witch-King. It was one of my favorite EA expansions that we worked on, for Battle for Middle-Earth. We added a bunch of new units and did a lot of work on the A.I. systems.

Some of our games have fallen a bit under the radar. Marc and I worked on a large-scale sensor simulation for some government agencies. Brian Wade has also worked on simulations in the past. A lot of our background spans this large arena of the simulation and game space, mostly in the RTS genre.

I suppose the most popular game that you guys would know at this point would be Civilization V. We were among the architects that designed and put that system together.

Above: Star Swarm demo battle

Image Credit: Oxide

GamesBeat: Talk about some of these pain points. The Civ games are always painful for me every time I hit the turn button, and it takes forever to calculate what’s going on.

Kipp: [Laughs] It was very interesting for us. There are a couple of different disciplines at Firaxis. Dan and I, and Marc and Brian, were more on the engine side of things. A lot of the end-turn times come down to—It’s one of those things where I don’t want to point fingers, but I’d say that we did not have a lot of visibility into how that stuff worked. That wasn’t the area we focused on.

GamesBeat: But basically, there’s a lot going on under the hood there.

Kipp: Strategy games, especially when it comes to A.I., can be phenomenally complex. When you’re designing an engine, part of the way we’ve designed ours is that we’ve tried to make it as easy as possible for the designer to take advantage of parallel cores and everything else. We’re building a lot of supporting systems in, which is part of the reason why Star Swarm runs so well. There’s a tremendous amount of A.I. and logic going on there to make that happen. We’ve built a lot of facility to allow the gameplay programmers to spread that logic across multiple cores and do it asynchronously. The effect is very dramatic. Unfortunately that’s one of the things we didn’t necessarily have time to do for Civilization V.

With the Star Swarm demonstration, we tried to make it as much of a game simulation as possible. This is the code that we use to test out the systems we’re building and so on. Our gameplay wizard, Brian, is putting all that stuff in because he wants to stress it out and make sure that when we and our licensees are making games, all the things we’re doing are fast and lean and scalable. There’s not a restriction put on the game developer as far as what they can think about doing or not doing.

GamesBeat: What’s the basic difference between something like making a game with DirectX and making it with Mantle? My rough understanding is that you can write closer to the metal and get around a bunch of bottlenecks, but can you describe that more for me?

Kipp: At a high level, from a game developer’s standpoint, if you’re working on something like the Nitrous engine, you’ll never know whether you’re running on Mantle or DirectX. We have an abstraction layer. That’s never anything that the designer knows about. The licensees we currently have, they don’t do anything special to take advantage of it. The engine handles all that stuff for us.

At a lower level, when we’re writing the engine, we’ve created a layer on top of the graphics API that we talk to that’s very efficient. It allows us to take advantage of the CPU to then drive the GPU. This layer is where we translate and either directly talk to Mantle or talk to DirectX.
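The abstraction layer Kipp describes can be pictured roughly like this. The sketch below is purely illustrative — the class and function names are invented for this example, not taken from the Nitrous engine — but it shows the shape of the idea: game code records generic draw commands, and a backend translates them, with an explicit Mantle-style backend doing a thin, predictable pass while an implicit DirectX-style backend has to track state per call.

```python
# Hypothetical sketch of a backend-agnostic rendering layer.
# All names are illustrative assumptions, not Oxide's actual API.
from dataclasses import dataclass


@dataclass
class DrawCommand:
    mesh_id: int
    material_id: int


class MantleStyleBackend:
    """Explicit API: the caller supplies all state up front, so
    translation is a thin pass with no driver-side guessing."""

    def submit(self, commands):
        return [f"draw mesh={c.mesh_id} mat={c.material_id}"
                for c in commands]


class DirectXStyleBackend:
    """Implicit API: the driver must track state and infer what to
    bind on each call, which adds per-call overhead."""

    def submit(self, commands):
        out = []
        bound_material = None
        for c in commands:
            if c.material_id != bound_material:  # driver-side state tracking
                out.append(f"bind mat={c.material_id}")
                bound_material = c.material_id
            out.append(f"draw mesh={c.mesh_id}")
        return out


def render_frame(backend, commands):
    # Game code never knows which backend it is running on.
    return backend.submit(commands)


frame = [DrawCommand(1, 10), DrawCommand(2, 10), DrawCommand(3, 11)]
print(render_frame(MantleStyleBackend(), frame))
```

Because `render_frame` only sees the backend interface, swapping Mantle for DirectX underneath it requires no changes in game code — which is the point Kipp makes about designers and licensees never knowing which API they are running on.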

Now, in Mantle, when we’re starting to build at that layer, because Mantle looks like a much more modern API as far as what’s going on under the hood — I suppose the best way to say it is that the Mantle API is designed in such a way that the Mantle driver does not have to try to second-guess what the developer is doing and optimize for them. When you’re working on top of Mantle, you’re providing all the context that driver needs in order to operate efficiently.

That’s the key difference. In DirectX, the way the API works is that you’ll make a series of calls into it. Under the hood, once those calls are finished, the driver is then going to try to interpret those commands and make guesses about what’s the best thing to do. In Mantle, it doesn’t have to worry about doing that, because we’ve already delivered all the information it needs to make an optimal pass.

GamesBeat: Does some of this also have to do with making proper use of the cores available, whether they’re CPU cores or GPU cores?

Kipp: We haven’t actually worked on utilizing the GPU cores independently yet, but in terms of CPU cores, the nice thing about Mantle is that any time we call into the Mantle API, we can always configure that to any thread we want to run on. In DirectX, the call goes into the API, and then the driver itself has additional threads working around the clock, waiting for information to come in, and those will spin up and try to do the work in an asynchronous manner.

The difficulty with that, from my point of view, is that we have a very efficient way of scheduling the GPU commands. When you say, “Send these commands off to DirectX,” to a service that’s going to try and thread them out, you don’t have the best view on how to operate. There’s going to be a conflict between the application threads and the driver threads. In the case of Mantle, we don’t have a conflict, because we can schedule out to as many cores as we want and optimize that.
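The scheduling advantage Kipp describes can be sketched in miniature. In this hedged example (the function names are invented for illustration), each application thread records draw commands into its own buffer on whichever core the engine chooses, and the buffers are then concatenated in a deterministic order for submission — no hidden driver threads competing with the application's own:

```python
# Illustrative sketch of per-thread command recording with an
# explicit API. Names are assumptions made for this example.
import threading


def record_commands(object_ids, buffer):
    # Each worker records into its own buffer, so there is no
    # contention between application threads and driver threads.
    for oid in object_ids:
        buffer.append(f"draw {oid}")


def build_frame(object_ids, num_threads=4):
    # Split the objects across worker threads round-robin.
    chunks = [object_ids[i::num_threads] for i in range(num_threads)]
    buffers = [[] for _ in range(num_threads)]
    threads = [threading.Thread(target=record_commands, args=(c, b))
               for c, b in zip(chunks, buffers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Deterministic submit order: concatenate per-thread buffers.
    frame = []
    for b in buffers:
        frame.extend(b)
    return frame


# One command per ship, in the spirit of Star Swarm's thousands of units.
print(len(build_frame(list(range(5000)))))
```

The key property is that the engine, not the driver, decides which threads do the recording and in what order the results are submitted, which is what lets it schedule out to as many cores as it wants.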

GamesBeat: What are some of the effects this has on visuals — the impact on the speed of the game?

Kipp: That depends on what the developer is doing, what they’re trying out. The main impact for us with Mantle is that we don’t have to be as restrictive with a lot of the visuals that are on screen. A lot of games will optimize either the camera view or the level or the characters to have a certain number of components, so that they don’t exceed a draw call limit. You wind up developing within a series of constraints.

When you’re developing with Mantle, because the draw calls or the graphics commands are much less expensive, the designers and the artists have a lot more freedom as far as what they want to try and do and how they can capture that. What you see is that you wind up with richer, more interesting worlds. Not only can you display more objects on the screen at once, but you can also move the camera around.

We’ve probably barely scratched the surface on different things we can try. One of the things we did in the Star Swarm demo, we implemented a film-quality motion blur. There are two reasons for that. One, we were doing a fast-paced game, so we wanted to make sure that when objects were in motion, they felt like they were in motion. For some games, that’s not necessarily appropriate, but for this demo, it made a big difference. The other reason we do that, we also get something called temporal anti-aliasing out of it. That allows us to get better-quality anti-aliasing than what you typically see with low levels of MSAA. You get this neat double bonus.

Not only does it reduce the jagged edges, it also reduces the shimmering. You’ll notice, in a lot of games, when there’s a level of complexity in an object, it’ll tend to shimmer. When you try to rasterize a triangle on the screen, you can only pick one pixel. What we do is spread that pixel out a bit. While it may give a slightly softer look, from our view it’s much cleaner. While you may not notice it as much in a first-person game, except for objects that are off in the distance, when you look at something like an RTS, you have tons of stuff on the screen, all at varying frequencies and levels. Part of the reason why we started researching that was because we wanted a cleaner look to strategy and RTS. There’s going to be a lot of impact there in terms of visual quality.
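The mechanism behind the temporal anti-aliasing Kipp describes — spreading a pixel's sample out over time — can be shown with a toy one-dimensional example. This is a didactic sketch, not Oxide's implementation: each frame, the sample position is jittered by a subpixel offset, and averaging the results over frames turns a hard binary edge into a partial coverage value, which is what softens jaggies and shimmer.

```python
# Toy 1-D illustration of temporal anti-aliasing via subpixel jitter.
# Purely didactic; offsets and names are assumptions for this example.

def sample_edge(x, edge=0.5):
    # Hard rasterization: a sample is either past the edge (1) or not (0).
    return 1.0 if x >= edge else 0.0


def taa_coverage(pixel_x, jitters, edge=0.5):
    # Average the binary samples across jittered positions, as if
    # accumulated over successive frames.
    return sum(sample_edge(pixel_x + j, edge) for j in jitters) / len(jitters)


# One subpixel offset per frame in a repeating sequence.
jitters = [-0.375, -0.125, 0.125, 0.375]

# Without jitter, the pixel at 0.4 snaps to fully "outside" the shape;
# with temporal accumulation it resolves to partial coverage.
print(sample_edge(0.4))            # hard edge: 0.0
print(taa_coverage(0.4, jitters))  # softened: 0.5
```

That partial coverage is the "slightly softer look" Kipp mentions: the edge no longer flips between 0 and 1 as geometry moves, so high-frequency detail stops shimmering frame to frame.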