Google has achieved something major in artificial intelligence (AI) research. A computer system it has built to play the ancient Chinese board game Go has managed to win a match against a professional Go player: the European champion Fan Hui. The research is documented in a paper in this week’s issue of the journal Nature.
The Google system, named AlphaGo, swept Fan Hui of France, a 2-dan professional, in a five-game match at the Google DeepMind office in London in October. AlphaGo played against Fan on a full 19-by-19 Go board and received no handicap. Now Google is preparing to put AlphaGo up against one of the world's top players, South Korea's Lee Sedol, in a match in Seoul in March.
“If we win the match in March, then that’s sort of the equivalent of beating [Garry] Kasparov in chess,” said Demis Hassabis, cofounder of Google-owned DeepMind, during a press briefing on the research earlier this week. “Lee Sedol is the greatest player of the past decade. I think that would mean AlphaGo would be better than any human at playing Go.”
Go ain’t easy
With its black and white stones that players place on the intersections of a grid, Go bears some resemblance to chess. And chess has been a focus of AI research for decades. But even though Go's rules are simpler than chess's, Go poses more difficult challenges for intelligence both artificial and human: there are many more possible moves at any given turn (roughly 250 on average, versus about 35 in chess), and as a result vastly more ways a game can unfold.
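A back-of-the-envelope calculation shows the scale of the problem. Using the approximate branching factors and game lengths cited in the Nature paper (about 35 moves per position over roughly 80 moves for chess, versus about 250 over roughly 150 for Go), the game trees compare like this. The numbers are rough illustrative estimates, not exact counts:

```python
import math

# Approximate branching factor (b) and typical game length (d),
# as cited in the AlphaGo Nature paper.
chess_b, chess_d = 35, 80
go_b, go_d = 250, 150

# Game-tree size grows roughly as b^d; compare orders of magnitude.
chess_log10 = chess_d * math.log10(chess_b)  # ~123
go_log10 = go_d * math.log10(go_b)           # ~360

print(f"Chess game tree: ~10^{chess_log10:.0f} positions")
print(f"Go game tree:    ~10^{go_log10:.0f} positions")
```

Exhaustively searching a tree of roughly 10^360 positions is hopeless, which is why Go programs need something smarter than raw lookahead.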
A brute-force search system like IBM's Deep Blue computer, which beat chess grandmaster Kasparov in 1997, simply won't scale to this problem. So Google's brightest minds have brought together an ensemble of AI techniques in order to succeed in this domain.
Overall, Google's DeepMind is calling on a type of AI called deep learning, which involves training artificial neural networks on data, such as photos, and then getting them to make inferences about new data. Google has picked up plenty of talent in the area through acquisitions, including DNNresearch, Jetpac, and of course DeepMind itself. Deep learning already works inside several Google services, from Google Photos to Google Translate.
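In miniature, that train-then-infer loop looks like the following toy example (purely illustrative, with random stand-in data rather than photos):

```python
import torch
import torch.nn as nn

# Tiny neural network: 4 input features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# "Training": fit the network to labeled examples.
x_train, y_train = torch.randn(64, 4), torch.randint(0, 2, (64,))
for _ in range(100):
    loss = nn.functional.cross_entropy(model(x_train), y_train)
    opt.zero_grad(); loss.backward(); opt.step()

# "Inference": make a prediction about data the network has never seen.
x_new = torch.randn(1, 4)
print(model(x_new).argmax(dim=1))  # predicted class for the new example
```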
Baidu, Facebook, and Microsoft all conduct research on deep learning and use it in their own products as well. Coincidentally (or not), Facebook yesterday published a paper on its own progress in using AI to play Go.
But unlike Google, Facebook has not succeeded in having its AI win a Go match against a professional player.
How it works
Google's system combines several components: two deep neural networks plus a tree-search algorithm.
First, to predict which move to play next, Google trains a 13-layer policy network on records of expert Go players' moves. And it's data at scale: 30 million positions from the widely used KGS Go Server. Google then enhances this policy network with reinforcement learning, a trial-and-error process in which the network effectively gets smarter by playing against itself.
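As a rough sketch of what such a policy network might look like, here is an illustrative 13-layer convolutional network in PyTorch, trained with cross-entropy to predict the expert's move. The layer widths, feature-plane count, and training details are stand-ins, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

BOARD = 19  # full-size Go board

class PolicyNet(nn.Module):
    """Illustrative 13-layer convolutional policy network:
    board features in, a probability over the 361 points out."""
    def __init__(self, in_planes=48, width=192):
        super().__init__()
        layers = [nn.Conv2d(in_planes, width, 5, padding=2), nn.ReLU()]
        for _ in range(11):  # 11 hidden 3x3 conv layers
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        # Final 1x1 conv maps to one logit per board point (layer 13).
        layers += [nn.Conv2d(width, 1, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).flatten(1)  # (batch, 361) move logits

# One supervised training step on expert moves (cross-entropy).
model = PolicyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.003)
features = torch.randn(8, 48, BOARD, BOARD)        # stand-in positions
expert_moves = torch.randint(0, BOARD * BOARD, (8,))
loss = nn.functional.cross_entropy(model(features), expert_moves)
opt.zero_grad(); loss.backward(); opt.step()
```

The reinforcement-learning stage the paper describes then continues from these weights, improving the network through self-play; the snippet above covers only the supervised step.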
Then, Google trains a value network that can predict which side will win a game.
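The value network can be sketched in the same illustrative spirit (again PyTorch, with made-up layer sizes; the real network's architecture differs in detail). It maps a position to a single number in (-1, 1) estimating who will win:

```python
import torch
import torch.nn as nn

BOARD = 19

class ValueNet(nn.Module):
    """Illustrative value network: board features -> win/loss estimate."""
    def __init__(self, in_planes=48, width=192):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, width, 5, padding=2), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(BOARD * BOARD, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh(),  # -1 = certain loss, +1 = certain win
        )

    def forward(self, x):
        return self.head(self.trunk(x)).squeeze(-1)

# Trained by regression (e.g. MSE) toward actual game outcomes (+1 or -1).
net = ValueNet()
pos = torch.randn(2, 48, BOARD, BOARD)
print(net(pos))  # one win/loss estimate per position
```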
Google takes these two deep neural networks and brings them together with Monte Carlo tree search, a technique commonly used in Go-playing programs. From there, AlphaGo is ready to play.
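The search itself can be sketched in a few dozen lines. Below is a compressed, illustrative version of the PUCT-style selection and backup this kind of search performs; `state.play`, `policy_fn`, and `value_fn` stand in for a real game interface and the two networks, terminal positions are not handled, and the `c_puct` constant is arbitrary here:

```python
import math

class Node:
    """One search-tree node; children keyed by move."""
    def __init__(self, prior):
        self.prior = prior     # P(move) from the policy network
        self.visits = 0        # how often the search has passed through
        self.value_sum = 0.0   # accumulated value estimates
        self.children = {}     # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=5.0):
    """PUCT rule: favor high-value children, but also
    high-prior children that have few visits so far."""
    total = math.sqrt(node.visits)
    return max(
        node.children.items(),
        key=lambda kv: kv[1].q() + c_puct * kv[1].prior * total / (1 + kv[1].visits),
    )

def simulate(root, state, policy_fn, value_fn):
    """One simulation: walk down by PUCT, expand a leaf with
    policy-network priors, back up the value network's estimate."""
    node, path = root, [root]
    while node.children:
        move, node = select_child(node)
        state = state.play(move)          # assumed game interface
        path.append(node)
    for move, p in policy_fn(state):      # expand leaf with priors
        node.children[move] = Node(p)
    v = value_fn(state)                   # value net's win estimate
    for n in reversed(path):              # back up along the path
        n.visits += 1
        n.value_sum += v
        v = -v                            # flip perspective each ply
```

After many such simulations, the engine plays the root move with the most visits. The full AlphaGo search also blends fast-rollout results into the leaf evaluation alongside the value network's estimate, which this sketch omits.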
Testing the system
The DeepMind researchers additionally had AlphaGo play against other Go AI programs, including Crazy Stone, Fuego, Pachi, and Zen. Out of 495 games, AlphaGo won 494. Even when the researchers gave the other programs a handicap, AlphaGo generally managed to win.
Unsurprisingly, the program became even stronger when the researchers ran AlphaGo across multiple servers in a distributed configuration.
And at least one of the Google DeepMind researchers also tried his hand at beating the system.
“In the early days of AlphaGo I played against it, but it was quickly apparent that AlphaGo was way beyond my skill level,” DeepMind’s David Silver said in an interview for a video Nature produced in association with the paper.
What comes next
Google can do a lot with the AlphaGo technology. Perhaps it could be used to help amateur Go players improve their skills, especially in places where a Go teacher is hard to find.
Practically speaking, within a year or two, AlphaGo's core capabilities could be brought to bear inside Google services, Hassabis said. But he's thinking beyond that.
“Ultimately we want to apply these techniques in important real-world problems,” such as climate modeling or medical diagnostics, Hassabis said. But in keeping with the agreement DeepMind made with Google during the 2014 acquisition, the AI technology will never be used for military purposes, he said.
As for what game the DeepMind team might take on after Go (and the Atari 2600), no-limit poker may be an option, Hassabis said.
See the Nature paper and Hassabis and Silver’s post on the Google Research blog for more detail on the Go work.