Technological history saw a new advancement in a match of Go between an artificial intelligence developed by Google and 18-time world champion Lee Sedol in Seoul, South Korea. Five games were held over the span of a week, starting on March 9 and ending on March 15.
“Yesterday, I was surprised,” said Sedol after his defeat in game two, “but today, more than that, I’m speechless…there was not a moment in time when I felt that I was leading the game.”
AlphaGo, the AI system, claimed a 4–1 victory over Sedol. The win came as a shock: experts had predicted that Go would not be conquered by a machine for another decade.
“This is definitely the moment when we realized computers could beat us at this game,” said Andrew Okun, president of the American Go Association.
AlphaGo was created by a team of researchers at DeepMind, an AI lab based in London. DeepMind was co-founded by Demis Hassabis, Shane Legg and Mustafa Suleyman, and was acquired by Google in 2014 in what was then Google's largest European acquisition. With its historic victory over a legendary figure in the world of Go, AlphaGo has proven to be an artificial intelligence system at the forefront of its field. Hassabis has said the main reason for organizing such a historic match was to test AlphaGo's capabilities and push it to its limits.
Go may be an unfamiliar game to many in the Western Hemisphere, despite its roughly 4,000-year history. The game originated in ancient China, and the philosophical and moral thinking that goes into it has captivated players for generations.
There are an estimated 40 million Go players in the world today. The game is now played as a mind sport on a 19-by-19 board. One player takes the white stones and the other the black; each aims to secure the most "territory" on the board by surrounding the opponent's stones with one's own. The number of possible configurations of a Go board is said to exceed the number of atoms in the observable universe. Players must therefore rely not only on quick calculation, but also on intuition built from years of experience.
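A quick back-of-the-envelope calculation (an illustration added here, not from the match coverage) shows why that claim is plausible: counting each of the 361 intersections as empty, black, or white gives 3^361 arrangements, dwarfing the commonly cited rough estimate of 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check: each of the 361 intersections on a 19x19
# board can be empty, black, or white, so 3^361 is an upper bound on
# board arrangements (most are not legal positions, but the legal count
# is known to be of a similar order of magnitude).
import math

intersections = 19 * 19                 # 361
arrangements = 3 ** intersections
atoms_estimate = 10 ** 80               # common rough estimate

print(len(str(arrangements)))           # number of digits: 173
print(arrangements > atoms_estimate)    # True
```

Even this loose upper bound has roughly 90 more digits than the atom estimate, which is why brute-force search alone cannot master Go.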
AlphaGo was originally trained using a deep neural network, a layered arrangement of hardware and software loosely modeled on the human brain. The same technology helps online services like Google, Facebook and Twitter identify faces in photos and recognize voice commands on smartphones. AlphaGo learns from a continuous feed of information: by studying 30 million moves from expert players, it learned to play the game.
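The flavor of that first training stage can be sketched in miniature. The following is an illustration only, not DeepMind's code: where AlphaGo used a deep convolutional network over 19-by-19 boards and some 30 million recorded expert moves, this toy uses a single softmax layer over a 3-by-3 board and a made-up "expert" rule, but the training loop (predict the expert's move, nudge the weights toward it) follows the same supervised-learning idea.

```python
# Toy sketch of supervised move prediction (not DeepMind's code):
# a single softmax layer learns to imitate a pretend "expert".
import math
import random

random.seed(0)
POINTS = 9  # 3x3 toy board instead of 19x19

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def make_example():
    # Toy data: one "urgent" point (+1) among ordinary points (-1);
    # our pretend expert always plays the urgent point.
    move = random.randrange(POINTS)
    pos = [-1.0] * POINTS
    pos[move] = 1.0
    return pos, move

W = [[0.0] * POINTS for _ in range(POINTS)]  # weights: one row per move

def predict(pos):
    """Probability distribution over the 9 possible moves."""
    return softmax([sum(W[m][i] * pos[i] for i in range(POINTS))
                    for m in range(POINTS)])

lr = 0.2
for _ in range(2000):  # stochastic gradient descent on cross-entropy loss
    pos, move = make_example()
    p = predict(pos)
    for m in range(POINTS):
        grad = p[m] - (1.0 if m == move else 0.0)
        for i in range(POINTS):
            W[m][i] -= lr * grad * pos[i]

# Agreement with the toy expert on fresh positions.
samples = [make_example() for _ in range(200)]
accuracy = sum(predict(pos).index(max(predict(pos))) == move
               for pos, move in samples) / len(samples)
print(accuracy)
```

After a few thousand updates the toy model imitates its expert almost perfectly; AlphaGo's version of this stage simply scaled the same recipe up to millions of real positions.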
With each move AlphaGo makes, it estimates its probability of winning the game. AlphaGo can not only mimic how humans play, but also see beyond that to moves humans overlook. This is how it was able to play move 37 in game two, a move that took observers by surprise, including Lee Sedol himself.
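AlphaGo paired its networks with Monte Carlo tree search, and the core idea of "probability of winning from here" can be illustrated by simulating random games from a position. The sketch below is my illustration, not DeepMind's method, and uses a simple Nim-like take-away game rather than Go.

```python
# Toy illustration of win-probability estimation by random rollouts.
# Game: players alternate removing 1-3 stones; whoever takes the last
# stone wins. This is not Go, just the simplest game to simulate.
import random

random.seed(1)

def rollout(stones, my_turn):
    """Play random moves to the end; return 1 if 'we' win."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        my_turn = not my_turn
    # The player who removed the last stone moved just before the flip.
    return 1 if not my_turn else 0

def win_probability(stones, trials=20000):
    """Fraction of random playouts won from this position, on our turn."""
    return sum(rollout(stones, my_turn=True) for _ in range(trials)) / trials

# Under perfect play, piles that are multiples of 4 are lost positions;
# even blind random rollouts rate them as weaker than their neighbors.
for s in (4, 5, 6, 7):
    print(s, round(win_probability(s), 2))
```

AlphaGo's innovation was to guide such search with its neural networks instead of playing out moves blindly, letting it judge positions far too deep for exhaustive calculation.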
“What is important is that I really enjoyed the game, I have not felt this kind of excitement in a long time,” Sedol said after the first machine-versus-man match was over.
Change is imminent in the world of Go. With technology, more Go players have access to master-level training. AlphaGo's moves have also inspired many high-level Go players and broadened their understanding of the game.
It is not new for artificial intelligence to conquer some of the most treasured games humans play. Chess and Jeopardy! had already seen their best players defeated by machines: IBM's Deep Blue beat world chess champion Garry Kasparov in 1997, and IBM's Watson beat top Jeopardy! champions in 2011.