The number of games at which a human can reasonably be expected to beat a computer is dropping rapidly. The latest to fall? The ancient board game Go, which some people call “the Eastern version of chess.”

Google announced in late January that its Go-playing program, AlphaGo, had beaten reigning three-time European Go champion Fan Hui in five out of five games, according to a blog post by Demis Hassabis of Google’s DeepMind artificial intelligence (AI) group, which Google purchased in 2014 for $400 million. The company also published a paper detailing its method.

(Google isn’t alone—Facebook reported that same week that it, too, was working on an AI-based Go player, and that it had published its own paper in November 2015.)

As recently as 2014, scientists expected that it would be another ten years before this could happen, according to Alan Levinovitz in Wired. “In 1994, machines took the checkers crown, when a program called Chinook beat the top human,” he writes. “Then, three years later, they topped the chess world, IBM’s Deep Blue supercomputer besting world champion Garry Kasparov. Now, computers match or surpass top humans in a wide variety of games: Othello, Scrabble, backgammon, poker, even Jeopardy. But not Go. It’s the one classic game where wetware still dominates hardware.”

“Go was the last bastion of human superiority at what’s historically been viewed as quintessentially intellectual,” MIT AI researcher Bob Hearn told blogger Gary Antonick in the New York Times. “This is it. We’re out of games now.”

What makes Go particularly challenging—compared with, say, tic-tac-toe or chess—is the number of options available at each move, IBM chip designer Rodrigo Alvarez explains to Antonick. “The best chess-playing programs could quickly search up to about 14 moves ahead, and at this point they start beating grand masters,” he says. “The problem is that for chess the tree grows very quickly, and for Go it grows even faster because there are fewer constraints on each move and the board is larger.”

“As simple as the rules are, Go is a game of profound complexity,” writes Hassabis. “There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions—that’s more than the number of atoms in the universe, and more than a googol times larger than chess.”
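To get a feel for those numbers, here is a short Python sketch. It is not from the paper: the branching factors (roughly 35 legal moves per chess position, roughly 250 per Go position) and the 10^47 estimate for chess positions are commonly cited approximations, used here only for illustration.

```python
# Back-of-the-envelope comparison of chess and Go search spaces.
# All constants are rough, commonly cited approximations.

CHESS_BRANCHING = 35   # typical legal moves in a chess position
GO_BRANCHING = 250     # typical legal moves in a Go position
DEPTH = 14             # the ~14-move lookahead Alvarez mentions

chess_tree = CHESS_BRANCHING ** DEPTH
go_tree = GO_BRANCHING ** DEPTH

print(f"chess tree at depth {DEPTH}: about 10^{len(str(chess_tree)) - 1}")
print(f"go tree at depth {DEPTH}:    about 10^{len(str(go_tree)) - 1}")

# Hassabis's count of legal Go positions is about 10^170; chess is
# often put at roughly 10^47, so the ratio is about 10^123 -- more
# than a googol (10^100), as the quote says.
print(f"go / chess positions: about 10^{170 - 47}")
```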

Every few years, Hearn explains, a breakthrough in computing lets computers play Go markedly better. About ten years ago, computers began using “Monte Carlo” tree search (MCTS). “The idea here is to play out hundreds of thousands of random games from the current position all the way to the end, and accumulate statistics on wins and losses,” he says. “If you put the statistics together the right way, this actually works.”
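What Hearn describes is sometimes called “flat” Monte Carlo evaluation: score each candidate move by the win rate of random games played out from it. The Python sketch below illustrates the idea on tic-tac-toe rather than Go, purely to stay self-contained; the board encoding and playout count are our own choices for illustration, not anything from AlphaGo.

```python
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play uniformly random moves to the end; return the winner (or None)."""
    board = board[:]
    while True:
        w = winner(board)
        if w or all(board):
            return w
        move = random.choice([i for i, v in enumerate(board) if not v])
        board[move] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, playouts=1000):
    """Rank each legal move by its win rate over many random playouts."""
    opponent = 'O' if player == 'X' else 'X'
    scores = {}
    for move in [i for i, v in enumerate(board) if not v]:
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

print(best_move([None] * 9, 'X'))  # the center square (4) usually wins out
```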

Google’s AlphaGo takes this a step further by applying “deep learning” with neural networks, Hearn explains. “The insight here was to not play games out to the end, but cut them off at 20 moves or so, then use a deep-learning based evaluator,” he says. “So they don’t have to play out as many games as traditional MCTS.” While computers don’t have the same perceptual insight that champion Go players do, their raw computing power helps make up the difference, he says.
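A minimal sketch of that truncation idea follows. The `value_network` function is a placeholder standing in for AlphaGo’s trained evaluator, and the `legal_moves` and `apply_move` hooks are hypothetical; only the shape of the algorithm is the point here.

```python
import random

CUTOFF = 20  # Hearn's "20 moves or so"

def value_network(position, player):
    """Placeholder: a real system would return a learned win probability."""
    return 0.5  # a trained network would output a position-specific value

def truncated_playout(position, player, legal_moves, apply_move):
    """Roll out random moves for at most CUTOFF plies, then evaluate.

    Compared with a full playout, this trades an exact end-of-game
    result for a cheap learned estimate, so far fewer simulations
    are needed.
    """
    for _ in range(CUTOFF):
        moves = legal_moves(position)
        if not moves:          # the game ended before the cutoff
            break
        position = apply_move(position, random.choice(moves))
    return value_network(position, player)
```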

Google trained the neural networks on 30 million moves from games played by human experts, until they could predict the human move 57 percent of the time (the previous record was 44 percent), Hassabis writes. Beyond that, “AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.”
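As a toy illustration of the supervised stage only, the PyTorch sketch below trains a small policy network to predict “expert” moves. The network size, the random stand-in data, and the training loop are all invented for illustration; AlphaGo used a much deeper convolutional network and, as Hassabis notes, followed this with a reinforcement-learning stage of self-play.

```python
import torch
import torch.nn as nn

BOARD_POINTS = 19 * 19  # one output per point on a 19x19 Go board

# A tiny stand-in for AlphaGo's policy network.
policy = nn.Sequential(
    nn.Linear(BOARD_POINTS, 256),
    nn.ReLU(),
    nn.Linear(256, BOARD_POINTS),  # one logit per candidate move
)
optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for expert positions and their moves.
positions = torch.randn(32, BOARD_POINTS)
expert_moves = torch.randint(0, BOARD_POINTS, (32,))

for _ in range(100):
    logits = policy(positions)
    loss = loss_fn(logits, expert_moves)  # push logits toward the human move
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Move-prediction accuracy on this toy batch: the analogue of the
# 57 percent figure Hassabis cites for the real network.
accuracy = (policy(positions).argmax(dim=1) == expert_moves).float().mean()
print(f"move-prediction accuracy: {accuracy.item():.0%}")
```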

Next month, the Google AlphaGo program is scheduled to play the world’s top player, Lee Sedol, who is ranked “9 dan” in the Go world, compared with Fan Hui’s “2 dan.” Whether or not AlphaGo wins this particular match, experts say it won’t be long before Lee, too, falls.

You may wonder why it matters that a computer can play, let alone win, a game. But the AI behind the ability to play—and beat—humans at their own games can eventually be put to other uses. Techniques used in the Go program, such as deep learning, can also be applied to business. Deep learning has “already proven indispensable for training robots to understand the contents of images, videos and audio. Some companies now aim to use the approach to train robots how to see, grasp and reason,” writes Will Knight in MIT Technology Review. And that’s no game.
