
Google and Blizzard are opening up StarCraft II to anyone who wants to teach artificial intelligence systems how to conduct warfare, because apparently I’m the only one who has ever seen The Terminator.

Researchers can now use Google’s DeepMind A.I. to test theories about how machines can learn to make sense of complicated systems, in this case Blizzard’s beloved real-time strategy game. In StarCraft II, players gather resources to build defensive and offensive units and battle one another for control of the map. The game has a healthy competitive community known for a ludicrously high skill level. But considering that DeepMind A.I. has previously conquered the complicated turn-based game of go, a real-time strategy game makes sense as the next frontier.

The companies announced the collaboration today at the BlizzCon fan event in Anaheim, California, and Google’s DeepMind A.I. division posted a blog about the partnership and why StarCraft II is so ideal for machine-learning research.

“StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world,” reads Google’s blog. “The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks.”


Most notably, StarCraft requires players to send out scouts to gather information about the opponent. To succeed, a player then needs to retain and act on that information over a long period of time as the situation constantly changes.

“This makes for an even more complex challenge as the environment becomes partially observable,” Google’s blog explains. “[That’s] an interesting contrast to perfect information games such as chess or go. And this is a real-time strategy game where both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”
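To make that contrast concrete, here’s a toy sketch of what fog of war does to an agent’s view of the map. Everything in it, from the array shapes to the vision radius, is invented for illustration and has nothing to do with the actual StarCraft II interface:

```python
import numpy as np

def observed_map(full_map, unit_positions, vision_radius=3):
    """Mask everything outside the agent's units' vision with -1 (fog of war).

    full_map: 2D array of terrain/enemy info the game engine knows.
    unit_positions: list of (row, col) tuples for the agent's own units.
    """
    fog = np.full_like(full_map, -1)  # -1 marks unseen tiles
    rows, cols = full_map.shape
    for r, c in unit_positions:
        r0, r1 = max(0, r - vision_radius), min(rows, r + vision_radius + 1)
        c0, c1 = max(0, c - vision_radius), min(cols, c + vision_radius + 1)
        fog[r0:r1, c0:c1] = full_map[r0:r1, c0:c1]  # reveal tiles in range
    return fog

# In chess or go, the agent would receive full_map directly every turn;
# here it only ever sees the fogged view, which is why scouting and
# remembering what it saw matter so much.
game_state = np.random.randint(0, 5, size=(16, 16))
print(observed_map(game_state, [(4, 4), (10, 12)]))
```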

If you’re wondering how much humans will have to teach the A.I. about how to play and win at StarCraft, the answer is: very little. DeepMind learned to beat the best go players in the world by teaching itself through trial and error. All the researchers have to do is define what counts as success; the A.I. can then play games against itself on a loop, reinforcing any strategies that lead to more wins.
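In caricature, that self-play loop looks something like the sketch below. This is nothing like DeepMind’s actual training code, which adjusts neural-network weights by gradient descent; the “strength” score and every function here are invented stand-ins:

```python
import random

def play_match(policy_a, policy_b):
    """Simulate a match and return the winning policy: a coin flip
    weighted by a crude 'strength' score, purely for illustration."""
    total = policy_a["strength"] + policy_b["strength"]
    return policy_a if random.random() < policy_a["strength"] / total else policy_b

def mutate(policy):
    """Produce a slightly perturbed copy of a policy."""
    return {"strength": max(0.1, policy["strength"] + random.uniform(-0.15, 0.15))}

best = {"strength": 1.0}
for generation in range(1000):
    challenger = mutate(best)           # try a new variation of the strategy
    best = play_match(best, challenger)  # keep whichever version wins
print(best)
```

The important part is the shape of the loop: generate a variation, pit it against the current best, and keep whatever wins.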

For StarCraft, that will likely mean asking the A.I. to prioritize how long it survives and/or how much damage it does to the enemy’s primary base. Or maybe researchers will find that defining success in a more abstract way leads to better results. Discovering the answers to questions like these is the entire point of Google and Blizzard teaming up.
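As a hedged sketch, a success signal along those lines might combine the two measurements like this. The field names are invented; they don’t come from any real StarCraft II interface:

```python
def reward(game_state, survival_weight=0.001, damage_weight=1.0):
    """Hypothetical reward combining the two signals mentioned above:
    how long the agent survives and how much damage it deals to the
    enemy's primary base. Tuning these weights, or replacing the whole
    function with something more abstract, is exactly the kind of
    question this research is meant to explore."""
    survival_bonus = survival_weight * game_state["frames_alive"]
    damage_bonus = damage_weight * game_state["damage_to_enemy_base"]
    return survival_bonus + damage_bonus

# Example: a long, passive game vs. a short, aggressive one.
print(reward({"frames_alive": 20000, "damage_to_enemy_base": 5}))
print(reward({"frames_alive": 4000, "damage_to_enemy_base": 40}))
```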

And, of course, once we’re dealing with the fallout of the A.I. realizing that its best strategy for winning is to commandeer every computer on the internet as one massive, worldwide cloud-based brain, just steer clear of my completely analog bunker out in the Rocky Mountains.
