It took a new startup game studio with a big vision to create something that real-time strategy fans will salivate over. At the 2014 International CES tech trade show in Las Vegas earlier this month, the Star Swarm demo from Oxide Games showed 3,000 to 5,000 starships fighting in a massive battle on PCs running the latest Advanced Micro Devices Kaveri processors.
The demo was all in the name of pushing innovation in 3D graphics for games to new levels in a way that is unconstrained by the limits of today’s platforms. The Hunt Valley, Md.-based game company used its next-generation 64-bit game engine, Nitrous, and AMD’s Mantle application programming interface to create the demo, which will remind you of the huge space battles from the Star Wars: Return of the Jedi movie and the Homeworld game.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":885006,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"B"}']Tim Kipp, a co-founder of Oxide Games, talked with us in detail about how his small team created the demo with funding from game publisher Stardock. They’re going to use their new engine to create their own game, and they’re also licensing it broadly to other game developers. We decided it was worth a deeper dive to describe how Oxide pulled off the demo, which is embedded below.
Oxide will release a version of Star Swarm for modders in the first quarter. Here’s an edited transcript of our interview.
GamesBeat: Tell me about Oxide and how you started working on that project.
Tim Kipp: Oxide was formed last year. The four of us – Dan Baker, Brian Wade, Marc Meyer, and I – wanted to see what we could do as far as pushing RTS and strategy games forward. We felt like everyone had fallen into a place that wasn’t necessarily stagnant, but everyone had gotten very comfortable with the performance levels we were getting. We felt like there was a lot of performance on the table that we could still capture, and if we were going to be able to capture that, the goal was to enable next-generation RTS games that we hadn’t seen before.
From our point of view, we felt like there was a lot of opportunity to change the dynamics of what current games were like, by focusing on making sure that all of our software, our entire engine, was built from the ground up to take advantage of the hardware that’s out there today.
GamesBeat: Did you start with any particular help or goals in mind? Were you syncing up with AMD or anyone like that when you started?
Kipp: We’ve had a good relationship with not just AMD, but also Nvidia and Intel. We’ve worked with all three of them in the past. They’ve all been excellent partners. We’re still working with all three going forward. Stardock has provided a lot of the seed capital to get us off the ground. They’ve been a tremendous help, not only in terms of capital, but also in terms of business finesse and support. What’s been fantastic about Stardock is that they’ve allowed us to focus on the technology. We don’t have to work quite as much on the business end, the day-to-day things.
GamesBeat: What’s your team’s background in making games? Did you have a lot of technology skills already?
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":885006,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"B"}']
Kipp: We’ve been doing this for a while now. I probably have the longest MobyGames page, but we’ve all got a slightly different background. Marc Meyer, Brian Wade, and I all worked at BreakAway Games together, probably 10 years ago at this point. Dan Baker worked with us at Firaxis. We’ve been making strategy games for the last 12 or 13 years now. We’ve gotten a lot of expertise in terms of what the problems are in that space, and a good sense of ways to solve those problems.
We’ve worked on everything from Command & Conquer expansions to—I don’t know if you played Rise of the Witch-King. It was one of my favorite EA expansions that we worked on, for Battle for Middle-Earth. We added a bunch of new units and did a lot of work on the A.I. systems.
Some of our games have fallen a bit under the radar. Marc and I worked on a large-scale sensor simulation for some government agencies. Brian Wade has also worked on simulations in the past. A lot of our background spans this large arena of the simulation and game space, mostly in the RTS genre.
I suppose the most popular game that you guys would know at this point would be Civilization V. We were among the architects that designed and put that system together.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":885006,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"B"}']
GamesBeat: Talk about some of these pain points. The Civ games are always painful for me every time I hit the turn button, and it takes forever to calculate what’s going on.
Kipp: [Laughs] It was very interesting for us. There are a couple of different disciplines at Firaxis. Dan and I, and Marc and Brian, were more on the engine side of things. A lot of the end-turn times come down to—It’s one of those things where I don’t want to point fingers, but I’d say that we did not have a lot of visibility into how that stuff worked. That wasn’t the area we focused on.
GamesBeat: But basically, there’s a lot going on under the hood there.
Kipp: Strategy games, especially when it comes to A.I., can be phenomenally complex. When you’re designing an engine, part of the way we’ve designed ours is that we’ve tried to make it as easy as possible for the designer to take advantage of parallel cores and everything else. We’re building a lot of supporting systems in, which is part of the reason why Star Swarm runs so well. There’s a tremendous amount of A.I. and logic going on there to make that happen. We’ve built a lot of facility to allow the gameplay programmers to spread that logic across multiple cores and do it asynchronously. The effect is very dramatic. Unfortunately that’s one of the things we didn’t necessarily have time to do for Civilization V.
[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":885006,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"B"}']
With the Star Swarm demonstration, we tried to make it as much of a game simulation as possible. This is the code that we use to test out the systems we’re building and so on. Our gameplay wizard, Brian, is putting all that stuff in because he wants to stress it out and make sure that when we and our licensees are making games, all the things we’re doing are fast and lean and scalable. There’s not a restriction put on the game developer as far as what they can think about doing or not doing.
GamesBeat: What’s the basic difference between something like making a game with DirectX and making it with Mantle? My rough understanding is that you can write closer to the metal and get around a bunch of bottlenecks, but can you describe that more for me?
Kipp: At a high level, from a game developer’s standpoint, if you’re working on something like the Nitrous engine, you’ll never know whether you’re running on Mantle or DirectX. We have an abstraction layer. That’s never anything that the designer knows about. The licensees we currently have, they don’t do anything special to take advantage of it. The engine handles all that stuff for us.
At a lower level, when we’re writing the engine, we’ve created a layer on top of the graphics API that we talk to that’s very efficient. It allows us to take advantage of the CPU to then drive the GPU. This layer is where we translate and either directly talk to Mantle or talk to DirectX.
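To make that structure a bit more concrete, here is a minimal C++ sketch of the kind of abstraction layer Kipp is describing. The class names and interface below are our own illustration, not Nitrous code; the point is only that game code talks to one interface, and the backend chosen at startup handles the API-specific translation.

```cpp
// Minimal sketch of an engine-side rendering abstraction, assuming a design
// like the one described above. RenderBackend, MantleBackend, DirectXBackend,
// and DrawCommand are illustrative names, not Nitrous classes.
#include <memory>
#include <vector>

struct DrawCommand {
    unsigned meshId;
    unsigned materialId;
    unsigned instanceCount;
};

class RenderBackend {
public:
    virtual ~RenderBackend() = default;
    // Translate a batch of engine-level commands into native API calls.
    virtual void Submit(const std::vector<DrawCommand>& commands) = 0;
};

class MantleBackend : public RenderBackend {
public:
    void Submit(const std::vector<DrawCommand>& commands) override {
        (void)commands;  // would build explicit Mantle command buffers here
    }
};

class DirectXBackend : public RenderBackend {
public:
    void Submit(const std::vector<DrawCommand>& commands) override {
        (void)commands;  // would issue DirectX draw calls here
    }
};

// The choice of API is made once, at startup; gameplay and designer-facing
// code only ever sees the RenderBackend interface.
std::unique_ptr<RenderBackend> CreateBackend(bool mantleAvailable) {
    if (mantleAvailable)
        return std::make_unique<MantleBackend>();
    return std::make_unique<DirectXBackend>();
}

int main() {
    auto backend = CreateBackend(/*mantleAvailable=*/true);
    backend->Submit({{0, 0, 1}});  // same call regardless of the backend
}
```

In an arrangement like this, the "translate and either directly talk to Mantle or talk to DirectX" step lives entirely inside the backend’s Submit path.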
[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":885006,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"B"}']
Now, in Mantle, when we’re starting to build at that layer, because Mantle looks like a much more modern API as far as what’s going on under the hood — I suppose the best way to say it is that the Mantle API is designed in such a way that the Mantle driver does not have to try to second-guess what the developer is doing and optimize for them. When you’re working on top of Mantle, you’re providing all the context that driver needs in order to operate efficiently.
That’s the key difference. In DirectX, the way the API works is that you’ll make a series of calls into it. Under the hood, once those calls are finished, the driver is then going to try to interpret those commands and make guesses about what’s the best thing to do. In Mantle, it doesn’t have to worry about doing that, because we’ve already delivered all the information it needs to make an optimal pass.
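That key difference can also be shown schematically. In the sketch below, none of the types or calls are real Mantle or DirectX functions; they are stand-ins meant only to show who is responsible for spotting a resource hazard, the driver or the application.

```cpp
// Conceptual contrast only; these types and calls are illustrative, not the
// real Mantle or DirectX APIs.
#include <cstdio>
#include <string>
#include <vector>

enum class Cmd { BindAsRenderTarget, Draw, BindAsTexture, Transition };

struct Command { Cmd type; std::string resource; };

// Implicit model (DirectX style): the driver receives only binds and draws,
// so it must scan the stream itself to discover that a render target is later
// read as a texture, then insert the right synchronization.
void ImplicitDriverSubmit(const std::vector<Command>& stream) {
    std::string lastRenderTarget;
    for (const auto& c : stream) {
        if (c.type == Cmd::BindAsRenderTarget) lastRenderTarget = c.resource;
        if (c.type == Cmd::BindAsTexture && c.resource == lastRenderTarget)
            std::printf("driver guessed a hazard on %s and flushed\n", c.resource.c_str());
    }
}

// Explicit model (Mantle style): the application records the transition
// itself, so the driver simply executes what it is told.
void ExplicitDriverSubmit(const std::vector<Command>& stream) {
    for (const auto& c : stream)
        if (c.type == Cmd::Transition)
            std::printf("application declared a transition on %s\n", c.resource.c_str());
}

int main() {
    std::vector<Command> implicitStream = {
        {Cmd::BindAsRenderTarget, "gbuffer"}, {Cmd::Draw, "scene"},
        {Cmd::BindAsTexture, "gbuffer"},      {Cmd::Draw, "lighting"}};
    std::vector<Command> explicitStream = {
        {Cmd::BindAsRenderTarget, "gbuffer"}, {Cmd::Draw, "scene"},
        {Cmd::Transition, "gbuffer"},
        {Cmd::BindAsTexture, "gbuffer"},      {Cmd::Draw, "lighting"}};
    ImplicitDriverSubmit(implicitStream);
    ExplicitDriverSubmit(explicitStream);
}
```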
GamesBeat: Does some of this also have to do with making proper use of the cores available, whether they’re CPU cores or GPU cores?
Kipp: We haven’t actually worked on utilizing the GPU cores independently yet, but in terms of CPU cores, the nice thing about Mantle is that any time we call into the Mantle API, we can always configure that to any thread we want to run on. In DirectX, the call goes into the API, and then the driver itself has additional threads working around the clock, waiting for information to come in, and those will spin up and try to do the work in an asynchronous manner.
[aditude-amp id="medium5" targeting='{"env":"staging","page_type":"article","post_id":885006,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,games,","session":"B"}']
The difficulty with that, from my point of view, is that we have a very efficient way of scheduling the GPU commands. When you say, “Send these commands off to DirectX,” to a service that’s going to try and thread them out, you don’t have the best view on how to operate. There’s going to be a conflict between the application threads and the driver threads. In the case of Mantle, we don’t have a conflict, because we can schedule out to as many cores as we want and optimize that.
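Here is a rough sketch of that scheduling model under an explicit API, again an assumption-laden illustration rather than Nitrous code: each application worker thread records its own command list, and only the final submission stays on one thread. The CommandList type and the even split of objects are made up for the example.

```cpp
// Sketch: record GPU command lists on as many application threads as the
// machine has cores, then submit from one place.
#include <algorithm>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {
    std::vector<std::string> commands;  // stand-in for recorded GPU commands
};

// Each worker records draws for one slice of the scene on its own core.
void RecordSlice(CommandList& list, int firstObject, int lastObject) {
    for (int i = firstObject; i < lastObject; ++i)
        list.commands.push_back("draw object " + std::to_string(i));
}

int main() {
    const int objectCount = 10000;
    const unsigned workerCount = std::max(1u, std::thread::hardware_concurrency());

    std::vector<CommandList> lists(workerCount);
    std::vector<std::thread> workers;

    // Scale the recording work to however many cores the machine has.
    const int slice = objectCount / static_cast<int>(workerCount);
    for (unsigned w = 0; w < workerCount; ++w) {
        int first = static_cast<int>(w) * slice;
        int last = (w + 1 == workerCount) ? objectCount : first + slice;
        workers.emplace_back(RecordSlice, std::ref(lists[w]), first, last);
    }
    for (auto& t : workers) t.join();

    // Submission stays on one thread, but recording was fully parallel.
    size_t total = 0;
    for (const auto& l : lists) total += l.commands.size();
    std::printf("recorded %zu commands on %u threads\n", total, workerCount);
}
```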
GamesBeat: What are some of the effects this has on visuals — the impact on the speed of the game?
Kipp: That depends on what the developer is doing, what they’re trying out. The main impact for us with Mantle is that we don’t have to be as restrictive with a lot of the visuals that are on screen. A lot of games will optimize either the camera view or the level or the characters to have a certain number of components, so that they don’t exceed a draw call limit. You wind up developing within a series of constraints.
When you’re developing with Mantle, because the draw calls or the graphics commands are much less expensive, the designers and the artists have a lot more freedom as far as what they want to try and do and how they can capture that. What you see is that you wind up with richer, more interesting worlds. Not only can you display more objects on the screen at once, but you can also move the camera around.
We’ve probably barely scratched the surface on different things we can try. One of the things we did in the Star Swarm demo is implement a film-quality motion blur. There are two reasons for that. One, we were doing a fast-paced game, so we wanted to make sure that when objects were in motion, they felt like they were in motion. For some games, that’s not necessarily appropriate, but for this demo, it made a big difference. The other reason we do that is that we also get something called temporal anti-aliasing out of it. That allows us to get better-quality anti-aliasing than what you typically see with low levels of MSAA. You get this neat double bonus.
Not only does it reduce the jagged edges, it also reduces the shimmering. You’ll notice, in a lot of games, when there’s a level of complexity in an object, it’ll tend to shimmer. When you try to rasterize a triangle on the screen, you can only pick one pixel. What we do is spread that pixel out a bit. While it may give a slightly softer look, from our view it’s much cleaner. While you may not notice it as much in a first-person game, except for objects that are off in the distance, when you look at something like an RTS, you have tons of stuff on the screen, all at varying frequencies and levels. Part of the reason why we started researching that was because we wanted a cleaner look to strategy and RTS. There’s going to be a lot of impact there in terms of visual quality.
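A common way a renderer "spreads that pixel out" over time is to jitter the camera by a sub-pixel offset each frame, drawn from a low-discrepancy sequence, and then blend frames together. The sketch below shows that standard jitter pattern for temporal anti-aliasing; it is a generic illustration, not Oxide’s implementation.

```cpp
// Per-frame sub-pixel jitter from a Halton sequence, the usual starting point
// for temporal anti-aliasing. The resolution and frame count are arbitrary.
#include <cstdio>

// Radical-inverse (Halton) sequence in a given base; successive values are
// well spread over [0, 1), which keeps the jitter pattern from clustering.
double Halton(int index, int base) {
    double result = 0.0;
    double f = 1.0;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

int main() {
    const int width = 1920, height = 1080;
    // Jitter in the range [-0.5, 0.5) pixels, converted to the offset that
    // would be folded into the projection matrix each frame.
    for (int frame = 1; frame <= 8; ++frame) {
        double jitterX = Halton(frame, 2) - 0.5;
        double jitterY = Halton(frame, 3) - 0.5;
        double projOffsetX = 2.0 * jitterX / width;   // NDC-space offset
        double projOffsetY = 2.0 * jitterY / height;
        std::printf("frame %d: jitter (%.3f, %.3f) px, proj offset (%.6f, %.6f)\n",
                    frame, jitterX, jitterY, projOffsetX, projOffsetY);
    }
}
```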
GamesBeat: With Mantle, can you talk about what kind of efficiency you get there? Normally, it would seem like you would gain efficiencies just from a faster graphics chip coming along. You wait for the move from 1GHz to 1.2GHz and get faster performance. It sounds like Mantle is giving you better performance for a given graphics chip.
Kipp: Mantle is interesting in a couple of ways in that regard. There are two sides to the performance equation. One is the GPU side and one is the CPU side. If your bottleneck is on the CPU side – typically the driver or something else – if you get a faster graphics card, you’re not going to be able to utilize it to its full potential. On the GPU side, if you can’t do things like asynchronous compute, you can’t necessarily keep the graphics card full the entire time.
Mantle works in two different ways to address those problems. On the CPU side, Mantle has much lower overhead in terms of the way it’s set up, the processing of its commands, the way we send it through. The Mantle driver doesn’t have to work as hard trying to second-guess and potentially make mistakes as the DirectX or OpenGL driver does at this point. The other thing it does, because we can configure Mantle to run on as many threads as we want, is that it’s scalable. If you have a two-core processor, we can design our layer so it effectively works with only that dual core. When you go up to four or six cores, we can capitalize on that and get more benefit from more cores, which has been difficult to capture with DirectX or OpenGL.
On the GPU side, the graphics core runs through a list of commands, but what will happen is that you can get bubbles within those lists, in terms of whether the rasterizer is busy or the ALUs are busy. There’s a bunch of compute units working within that graphics card. Mantle opens up and allows us to submit commands into that asynchronously, so the GPU can keep itself full the whole time. You don’t have to wait for those in a serial fashion.
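Schematically, the asynchronous-compute model looks something like the sketch below: graphics and compute work are submitted to separate queues so compute passes can fill those bubbles, with a fence ordering only the pass that consumes the results. The Queue type and the pass names are stand-ins, not a real GPU API.

```cpp
// Conceptual two-queue submission: compute work overlaps graphics work instead
// of waiting behind it in a single serial stream.
#include <cstdio>
#include <string>
#include <vector>

struct Queue {
    const char* name;
    std::vector<std::string> submitted;
    void Submit(const std::string& work) {
        submitted.push_back(work);
        std::printf("[%s] %s\n", name, work.c_str());
    }
};

int main() {
    Queue graphics{"graphics"};
    Queue compute{"compute"};

    // In a serial model, the compute passes would sit behind the shadow and
    // g-buffer passes even while the ALUs are idle during rasterization.
    // With a separate compute queue, they run alongside, and only the pass
    // that consumes their output waits on a fence.
    graphics.Submit("shadow pass");
    compute.Submit("particle simulation");  // overlaps with rasterization
    graphics.Submit("g-buffer pass");
    compute.Submit("light culling");
    graphics.Submit("wait on compute fence");
    graphics.Submit("lighting pass (consumes light-culling results)");
}
```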
GamesBeat: How long ago did you start working on the Star Swarm demo?
Kipp: Star Swarm started out as a benchmark test internally. Then there’s the application that we used to port everything over to Mantle when we began. We’ve been talking to AMD about Mantle for the last year — giving them our input, looking at the documentation. We started implementing the Mantle port in August. It’s been a very quick and wild ride for us. The genesis of the Nitrous engine was DirectX first, trying to get as much out of DirectX as we possibly could. Then we were waiting for Mantle to come along.
We’ve only been working on the Star Swarm demo, in my opinion, for a very short period of time. The main goal of the demo was, from our point of view, to create a technical demonstration, more than an artistic demonstration. It’s to stress-test all the code bits and stress-test Mantle and DirectX. It’s for us to push those APIs and the hardware to what we believe are the limits and see how they respond. Then we can figure out what we need to do next.
When we started working with Mantle, we didn’t know what to expect in terms of performance gains. It wasn’t until we started that we saw how exciting it could be. We didn’t know we would get a 3X or 4X improvement until we tried it.
GamesBeat: The interesting issue is that you now have more moving objects available on the screen than you actually might want to even try to use in a game.
Kipp: That’s always a possibility. In some sense the demo is a little unfair in that the objects we’re drawing are the most expensive objects for us. Say, for instance, each one of the fighters that we had on screen actually had three or four independent wing flaps. Perhaps they had detachable missile pods. That would be less expensive for us to do. Those are all things that tend to add up in terms of draw calls. This is really a stress test. We could take that same amount of complexity and convert that into fewer objects on screen and actually make them more complex.
That’s the goal of Nitrous. You could go for massive amounts of simple things, or you could go for a significant number of things that are highly complex.
GamesBeat: What was some of the difference there as far as how many units you could have on the screen with Mantle and without it?
Kipp: It comes down to the number of batches we need to draw. Off screen, we’re comfortable with 10,000 or 15,000 units. It’s one of those things where, because that’s all CPU-side stuff operating asynchronously and it’s very fast for us, between the number of units in the world and the number of units in view, there’s a possibility for a large disconnect. That’s where part of the difficulty between DirectX and Mantle lies. In a DirectX game, you have to be much more careful about what’s going on there.
In terms of an RTS game or a strategy game, since the player is the level designer, you can’t guarantee how much complexity is going to be on-camera at any given time. In a first-person shooter, it’s much more straightforward to restrict what’s going to happen and to give a planned-out map and play style. With RTS and strategy, the player can wreak havoc on an engine if it’s not flexible. That’s part of where Mantle shines. Since we can’t guarantee what the user’s going to do, it’s hard for us to figure out, “OK, well, we’ll make sure there’s never more than 5,000 draw calls there.”
GamesBeat: What did you wind up with as far as how many ships were on screen?
Kipp: That’s hard to tell. [Laughs] At points in time, it’s easy to get 3,000, 4,000, 5,000 ships on screen.
GamesBeat: As you say, these are not just dots. They’re something that you could zoom in on and see a lot of detail.
Kipp: They have varying levels of detail. It depends on the spaceship and how much time the artists have. I believe there are two cases for LOD. One is the mesh LOD. That happens as they start to drop out. We also have a shading LOD. Because we do what we’re calling object space lighting, we calculate the projected size of each of those objects on screen, and based on that we shade it in a priority manner based on how large they are. We can scale the shading quality at a different frequency than we scale the geometry level or something else.
The big space ships only have one level of detail, just because we didn’t have time to get to that, but all the fighters have at least two, if not three. I’m not sure how many the bombers have. The bombers may have three levels of detail. They’re another example of a more complex object. One of the bombers, the long red one, has a turret on top that activates. The other bulky one has three independent turrets on the back, and those are all calculating their own firing solutions and tracking enemy targets.
Every space ship there has its own unique target list, its own weapon refresh rate, and so on. Those are all data-driven controls on our end. It’s meant to stress every portion of what our engine does, not just the graphics portion.
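The shading LOD Kipp mentions above, where shading quality is prioritized by each object’s projected size on screen, can be sketched roughly as follows. The projection math is the standard perspective estimate; the thresholds and sample numbers are illustrative, not Oxide’s.

```cpp
// Pick a shading level from an object's projected size on screen.
#include <cmath>
#include <cstdio>

// Approximate projected radius, in pixels, of a bounding sphere seen through
// a perspective camera with the given vertical field of view.
double ProjectedRadiusPixels(double worldRadius, double distance,
                             double verticalFovRadians, int screenHeight) {
    double pixelsPerWorldUnit =
        screenHeight / (2.0 * distance * std::tan(verticalFovRadians / 2.0));
    return worldRadius * pixelsPerWorldUnit;
}

int ShadingLod(double projectedRadius) {
    if (projectedRadius > 200.0) return 0;  // full-rate shading
    if (projectedRadius > 50.0)  return 1;  // reduced rate
    if (projectedRadius > 10.0)  return 2;  // coarse
    return 3;                               // minimal shading for distant ships
}

int main() {
    const double kPi = 3.14159265358979323846;
    const double fov = 60.0 * kPi / 180.0;
    const int screenHeight = 1080;
    // A ship with a 10-unit bounding radius at three distances.
    for (double distance : {50.0, 500.0, 5000.0}) {
        double r = ProjectedRadiusPixels(10.0, distance, fov, screenHeight);
        std::printf("distance %.0f: %.1f px -> shading LOD %d\n",
                    distance, r, ShadingLod(r));
    }
}
```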
GamesBeat: What does this remind people of? Is it like a big Star Wars battle or RTS games like Homeworld?
Kipp: A lot of people have mentioned that it reminds them of Homeworld. That was neat. There are certainly a bunch of touchstones for us as far as what we’ve seen in the large-scale Star Wars battles. Certainly it would be neat to do some Battlestar Galactica stuff as well. We can’t do anything like that due to licensing issues. [Laughs] Or necessarily encourage that. I wish we had more time to work on the art, though, because we would love to throw a bunch more spaceships in there.
GamesBeat: Now that you’ve been able to do this demo, what are you thinking about for the future?
Kipp: We’re interested in releasing the demo to the public. That’s going to be a lot of fun. All the data files we’re going to ship with should allow people to mod it. We’re accepting .SBX files. Part of what we’re looking at doing right now is some basic documentation for modders if they want to play with it. Going forward, we’re trying to consider what we’d like to do to Star Swarm next.
At the same time, we’re not developing the Nitrous engine in a vacuum. We have two licensees that we’re currently working with, and we’re also working on our own internal project at Oxide. We haven’t announced that yet, but we’ll release more about it some time in the future. We have a lot of wheels in motion.
GamesBeat: What is this engine good for, besides a great big space battle? What else could you envision as far as different genres where it could make a difference?
Kipp: We’ve only shown off a lot of the space stuff right now, but honestly, any sort of open-world game or sandbox game — in an RPG, item count is one of those things that comes up a lot. Look at Skyrim. If you go into the options panel, you can set things like the object draw distance and tailor it to your system. Essentially, you’re trying to scale back the number of things that have to be drawn in the world. Nitrous opens up the idea that if you’re building an intricate RPG world, all of a sudden you don’t have to have all those sliders for, say, grass distance or object distance or character distance. There’s a lot of neat stuff there.
Right now we’re building three different games, and they’re all going to be very different — anything from a traditional strategy game, like a Civilization, to an RTS game like Company of Heroes, or a StarCraft even. Those would be very exciting games to make with Nitrous. You get a ton more flexibility as far as how you want to build those characters.
GamesBeat: Do you have any timing in mind for any of these things, like when you might release it for modding?
Kipp: We’re still targeting the first quarter of this year. We’re waiting on a few logistical elements to finish up. The demo is pretty stable, or I should say standard. We’re not working on making many changes at this point. We’re just waiting for our partners to finish up and give us the go-ahead to release that. As always, it’s a mix between the developer and, in this case, everyone else that’s working together on this.
GamesBeat: Do you face any challenges developing for Mantle and developing for Nvidia and Intel platforms as well? Is the cross-platform issue a challenge?
Kipp: What’s interesting is that all three of them are making great hardware. [Laughs] I don’t mean that to sound like a cop-out. From what we’ve seen, the big difficulties tend to be the software layers between us and the hardware. But no, Nvidia’s been great about their DirectX support. Intel, in the same way, their DirectX support has been good. There hasn’t been much of a challenge there.
What becomes challenging is when hardware feature sets aren’t consistent, or performance feature sets. That’s where it starts to be problematic. But we haven’t had any trouble at all. I would expect that a DirectX implementation – or if Nvidia ever decides to support Mantle – would be on par.
GamesBeat: If you get some next-generation chips – things that might hit in the next year or so — along with software improvements like Mantle, what do you predict might be possible?
Kipp: One thing that we see as unexplored territory, which a lot of people are just getting into, is the asynchronous compute that’s been talked about. That’s where not only are you sending graphics commands to the GPU in a listed-command fashion, but you can also submit other compute work to be done by the GPU at the same time. Then the GPU will schedule that in there.
One of the things we think is neat is that both AMD and Intel are now shipping a GPU alongside their CPUs. Part of what I think is going to be interesting for the future is that game developers will start looking at, “Well, if I have a discrete card, and I’ve also got this integrated GPU compute, how can I use both of them at the same time?” That’s going to be interesting. The integrated GPU could become a new math coprocessor or something else. We could run physics simulations. We could run particle fluid simulations. A lot of things could be done with that added chip. It’s going to be an interesting time for gamers.