Computer AI researchers hit the headlines earlier this year when a Go program, Zen19, won a game with a 4-stone handicap against Takemiya Masaki 9p (ninth dan professional). Zen’s positional awareness is good; it’s the least bot-like AI to watch. There are many proprietary Go programs, but much cutting-edge AI research can be found in open source Go programs – and there’s an opportunity to get involved at many levels.
What is Go?
Go is a board game developed in China – where it is called Wei Qi – around 2,500 years ago. It combines the simplest of rules with incredibly complex strategy. Two players alternately place black and white stones on a grid of 19×19 lines (smaller boards are used by beginners, and often by Go programs), starting with black. The weaker player may have several stones’ head start (handicap), in which case these black stones are placed on special strategic points.
The game is won by controlling territory. Adjacent stones of the same colour form strings, expanding or joining until they surround empty areas. A string’s adjacent empty points are its liberties; if the other side occupies every liberty, the whole string is captured and removed from the board. Otherwise the stones stay on the board until the end, when surrounded territory is counted up.
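In programming terms, the capture rule reduces to a flood-fill: collect a string’s stones and count its adjacent empty points. A minimal Python sketch – the dict-based board and function names here are illustrative, not taken from any real Go program:

```python
# Board representation: stone positions map to 'B' or 'W';
# empty points are simply absent from the dict.

def neighbours(point, size):
    """The up-to-four orthogonally adjacent points on a size x size board."""
    x, y = point
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < size and 0 <= ny < size]

def string_and_liberties(board, point, size):
    """Flood-fill the string containing `point`; return (stones, liberties)."""
    colour = board[point]
    stones, liberties, frontier = set(), set(), [point]
    while frontier:
        p = frontier.pop()
        if p in stones:
            continue
        stones.add(p)
        for n in neighbours(p, size):
            if board.get(n) is None:
                liberties.add(n)        # an empty adjacent point: a liberty
            elif board[n] == colour:
                frontier.append(n)      # same colour: part of the same string
    return stones, liberties

# A black stone trapped in the corner by two white stones:
board = {(0, 0): 'B', (0, 1): 'W', (1, 0): 'W'}
stones, libs = string_and_liberties(board, (0, 0), 19)
print(len(libs))   # 0 -> no liberties left, so the string is captured
```

A real engine does exactly this after every move, removing any opposing string whose liberty count has dropped to zero.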
The vast majority of the world’s 40 million Go players are in East Asia, but Go features regularly at Mind Sports Olympiads, and Go clubs can be found across the UK and elsewhere. However, for many people in Europe and America, regular games can only be played thanks to Go servers running on the internet – where, as well as human opponents of all levels, examples of all the current AIs can be found playing.
Get started with Go on Linux
UNIX users are used to modular programs plugging together. In Go the AI and board are usually separate. The board often works with human and AI players and internet Go servers.
Ubuntu users can open a terminal and type:
sudo apt-get install gnugo cgoban
Alternatively, the Fuego tarball is on SourceForge, along with the GoGUI board.
All modern Go programs can communicate via the Go Text Protocol, and as well as playing against an AI, you can pit two AIs against each other. This can be done either through the GoGUI or the KGS Go Server.
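GTP itself is a simple line-based text protocol: the controller sends commands such as `boardsize 19`, `play black D4` or `genmove white`, and the engine replies with a line beginning `=` (success) or `?` (failure), an optional command id, the result, and a blank line. A hedged Python sketch of that response framing – the function is illustrative, not from any GTP library:

```python
# Parse one raw GTP response into (ok, command id, payload).
# Responses look like "=1 D4\n\n" or "? unknown command\n\n".

def parse_gtp_response(text):
    line, _, rest = text.partition('\n')
    status, body = line[0], line[1:]
    ok = status == '='                   # '=' success, '?' failure
    cmd_id = None
    head, _, tail = body.strip().partition(' ')
    if head.isdigit():                   # optional numeric command id
        cmd_id = int(head)
        body = tail
    else:
        body = body.strip()
    payload = (body + '\n' + rest).strip()
    return ok, cmd_id, payload

# e.g. an engine's answer to "1 genmove black":
print(parse_gtp_response("=1 D4\n\n"))   # (True, 1, 'D4')
```

You can also explore the protocol by hand: run `gnugo --mode gtp` in a terminal and type commands directly – this is exactly how GoGUI and the servers talk to the engines.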
If you want to play against human beings, and not just over the internet, the British Go Association has a list of UK clubs – most big cities and towns have one.
While the rules may be simple, the strategy is extremely difficult to quantify. Indeed, it’s not really possible for a computer to reliably assess whether a given position is a winning one or not – although stones may have been captured, counting them gives nothing like the points-based material tally that works so well in chess.
The challenge is to approach the human ability in pattern-matching. Go is a game of balance. Libraries of known shapes enable AIs to make local judgements, but assessing the influence of many nearby weak groups on a string involves matching and discarding too many possible moves.
To put this in perspective, there are 361 points on a Go board – 361 different places where the first stone may be placed; by the fourth move in, there are 16 billion possible board positions, which makes brute force – perfectly suited to chess’s limited board positions – an inadequate tool even with today’s supercomputers. In fact, a 1000 teraflop computer would require more than five days to assess all possible combinations of the next eight moves in order to make a single play [see the ‘Number Crunching’ section below for more details].
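The arithmetic behind those figures is easy to check: the number of distinct four-move openings is the falling factorial 361 × 360 × 359 × 358, which Python can confirm directly:

```python
import math

points = 19 * 19
print(points)                           # 361 possible first moves
print(math.perm(points, 4))             # 16702719120 – the article's '16 billion'
print(f"{math.perm(points, 8):.1e}")    # ~2.7e+20 eight-move sequences
```

(`math.perm` needs Python 3.8 or later.) Even ignoring captures and illegal moves, an eight-move lookahead is already hundreds of quintillions of sequences – the scale that defeats brute force.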
PSPACE or EXP-TIME?
The only choice an AI must make is where to place a stone, and yet… Just over 2×10^170 legal board positions is staggering enough (there are estimated to be around 10^80 protons in the entire universe), but the number of possible games – found by an expansion of possible moves over the length of the game – is often given as 10^768.
Computer chess AI has to look at perhaps 40 moves, often far fewer, and give each a score to select the best option. Looking at possible moves in a game of Go on an n×n board is EXPTIME-complete (which is how complexity theorists describe something that will give them a severe headache).
Deciding whether a ladder (a common series of moves, where strings of stones zigzag up the board, alternating direction to escape capture) works is PSPACE-hard – a result Marcel Crasmaru, a Romanian 5-dan player, proved in his master’s thesis – and the full game under Japanese ko rules has been shown to be EXPTIME-complete. However, “the complexity of Go with superko rules remains an open question.”
In other words, straight number-crunching is not a realistic AI strategy for Go.
Accomplished Go players spend years studying known good series of moves. Human strengths – including pattern recognition, strategic thinking, decision making and even intuition – emerge from this experience. The challenge for AI researchers is how to match such intuition, given a machine has no ‘understanding’ of what it’s doing. John McCarthy – the AI pioneer and father of Lisp who died last year – described the problem thus: “A good Go player could make a move and other players say, ‘Yes, that’s a good move,’ but they can’t explain to you why it’s a good move, or how they even know it’s a good move.”
Go is a strategic game. You can lose skirmishes but still win the war: a small mistake merely costs you a few points off your score, whereas in chess a small mistake is usually disastrous. In other words, Go is forgiving of humans, who make mistakes.
By 1995, when IBM was well on its way to creating a chess grandmaster-beating machine, the best Go program was defeated by a professional player, Janice Kim, despite the program receiving a 25-stone handicap – an imbalance almost unimaginable in a game between two human players. Moore’s law and gradual refinements saw incremental improvements in knowledge-based systems and minimax tree search AIs, at least on 9 by 9 boards – but 15 years ago, the best AIs were being regularly beaten by children. What makes Go so difficult for computers?
In a state
Even representing the current state of the game is a significant challenge. Given how many possible moves there are, just holding positional information, along with which stones are under threat, involves trade-offs between memory and processing power. The next preoccupation is ‘life and death’ – a Go term for deciding whether a group of stones will stay alive indefinitely, or be captured before the end of the game. This involves a resource-hungry tree-search, although Benson’s algorithm allows unconditionally alive chains to be identified.
Early successes in knowledge-based Go programs relied on the programmer’s Go skill in identifying local patterns in several dozen areas of play, and applying pattern matching and recognition algorithms to find the most apposite match for each move, often choosing among conflicting close matches. The first program capable of completing a game of Go was written in 1968 by Albert Zobrist, as part of a PhD thesis on pattern recognition. It introduced the influence function, still used in Go programs today, to score the influence of shapes on various parts of the board.
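The influence-function idea can be sketched briefly: stones radiate a value that decays with distance, and summing both colours’ fields gives a crude territory estimate. The seed value of 64 and the halving per step below are illustrative choices, not Zobrist’s exact constants:

```python
# board maps (x, y) -> +1 (black) or -1 (white); empty points are absent.

def influence_map(board, size, steps=4):
    """Diffuse each stone's influence outwards for `steps` rounds."""
    field = [[64 * board.get((x, y), 0) for y in range(size)] for x in range(size)]
    for _ in range(steps):
        new = [row[:] for row in field]
        for x in range(size):
            for y in range(size):
                for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if 0 <= nx < size and 0 <= ny < size:
                        new[x][y] += field[nx][ny] // 2   # decayed neighbour influence
        field = new
    return field

# A lone black stone in the centre of a 5x5 board dominates the whole area,
# but its influence fades towards the corner:
field = influence_map({(2, 2): 1}, 5)
print(field[2][2] > field[0][0] > 0)   # True
```

A real program would combine maps like this with its pattern matches to decide which local fight matters most.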
As computer hardware gained power, tree searches also had success, provided Go’s high branching factor could be tamed with suitable pruning techniques, as well as caching and hashing. GNU Go is one example of a program that gradually improved in this way, yet its selective pruning, eg ignoring certain classes of position, meant that human opponents identified weaknesses and ruthlessly exploited them to win above their level.
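The standard hashing trick for Go (and chess) transposition tables is Zobrist hashing, devised by the same Albert Zobrist: each (point, colour) pair gets a fixed random bitstring, and a position’s hash is the XOR of the strings for its stones – so placing or removing a stone updates the hash with a single XOR. A minimal sketch, with illustrative sizes:

```python
import random

random.seed(0)                          # fixed table, for reproducibility
SIZE = 19
TABLE = {(p, c): random.getrandbits(64)
         for p in range(SIZE * SIZE) for c in ('B', 'W')}

def position_hash(stones):
    """stones: dict mapping point index -> 'B' or 'W'."""
    h = 0
    for point, colour in stones.items():
        h ^= TABLE[(point, colour)]
    return h

empty = position_hash({})
after = position_hash({72: 'B'})        # one black stone added
# Incremental update: XORing the same key back out restores the old hash.
assert after ^ TABLE[(72, 'B')] == empty
print(hex(after))
```

Cached results for previously seen positions can then be looked up in constant time, no matter which move order reached them.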
The next avenue of research was neural networks: adaptive systems of artificial neurons whose interconnections are modelled on certain parameters. The neurons are collected in layers, and different models have different connection patterns between the layers. What all have in common is that the connections are weighted in importance, and these weights can be adjusted through machine learning, allowing the network to infer functions from its observations.
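The simplest possible illustration of weighted connections adjusted by a learning rule is a single perceptron – far more modest than the multi-layer networks used for Go, and purely illustrative:

```python
# A lone artificial neuron: weighted sum of inputs, thresholded at zero.

def predict(weights, inputs):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def train(samples, epochs=10, rate=0.1):
    """Nudge each weight towards reducing the prediction error."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Learn a trivial 'both inputs on' rule (first input is a constant bias of 1):
samples = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = train(samples)
print([predict(w, s) for s, _ in samples])   # [0, 0, 0, 1]
```

Go networks work the same way in principle – weights tuned from observed positions – just with vastly more neurons, layers and training data.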
While Go research in neural networks has helped cognitive science, and made better Go AIs, it hasn’t yet been the most successful avenue of investigation.
Monte Carlo or bust
Monte Carlo Tree Search (MCTS) has transformed computer Go, bringing several programs up to serious dan level, though still some way below the top professional players. Devised during the Manhattan Project to develop the atomic bomb, Monte Carlo methods rely on random sampling: numerous random games are played out, and the move leading to the set of playouts with the best outcome is chosen. Strategically this is good, although it does mean that numerous good moves are discarded simply by not being among the few thousand possibilities which are randomly sampled. MCTS has also been very successfully applied to playing Battleships.
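The core idea fits in a few lines. Since a full Go rules engine would swamp the example, this hedged sketch substitutes a trivial one-heap Nim game (take 1–3 stones; taking the last stone wins) for Go – the move-selection logic is the same:

```python
import random

def playout(heap, to_move):
    """Play random moves until the heap is empty; the player who takes
    the last stone wins. Returns the winning player (0 or 1)."""
    while True:
        heap -= random.randint(1, min(3, heap))
        if heap == 0:
            return to_move
        to_move = 1 - to_move

def best_move(heap, playouts=2000):
    """Pure Monte Carlo: rate each legal move by its random-playout win rate."""
    rates = {}
    for take in range(1, min(3, heap) + 1):
        if take == heap:
            rates[take] = 1.0          # taking the last stone wins outright
            continue
        wins = sum(playout(heap - take, 1) == 0 for _ in range(playouts))
        rates[take] = wins / playouts
    return max(rates, key=rates.get), rates

random.seed(42)
move, rates = best_move(5)
print(move)   # 1 -> leaving 4 stones is the known winning reply
```

In a Go program the playout function is a fast random (or lightly pattern-guided) game to the end of the board, but the principle – sample, score, pick the best win rate – is identical.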
UCT (Upper Confidence bounds applied to Trees) extends MCTS by guiding the tree search down already successful paths for 50 per cent of searches, using a rating to keep track of the relative successes of different branches, and maintaining random sampling for the rest. David Fotland’s proprietary Many Faces of Go, which dates back to the HP-1000, VAX and PA-RISC, is a knowledge-based AI that has incorporated MCTS and UCT so successfully that it has become one of the best Go-playing programs available, along with Crazy Stone, MoGo and Fuego.
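The rating UCT uses is usually the UCB1 formula: a branch’s observed win rate plus an exploration bonus that shrinks the more often the branch has been visited. A sketch – the constant √2 is a common textbook choice, not fixed by UCT:

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """Win rate (exploitation) plus an exploration bonus."""
    if visits == 0:
        return float('inf')            # unvisited branches are tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# A well-explored, mediocre branch vs a promising one vs an untried one:
children = {'a': (50, 100), 'b': (6, 10), 'c': (0, 0)}
parent = sum(v for _, v in children.values())
scores = {name: ucb1(w, v, parent) for name, (w, v) in children.items()}
best = max(scores, key=scores.get)
print(best)   # 'c' -> the unvisited branch is selected first
```

Once every branch has been sampled, the bonus term keeps occasionally revisiting under-explored moves instead of committing blindly to the current best win rate.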
MoGo, based on MCTS, is currently a binary-only download, but runs well on most Linux distros, and there’s even an Emacs mode available for it. It’s a French project (with some Taiwanese contributions). For important competitions, it runs on the 5,000-core, Debian-based ‘Grid5000’, which was built for the study of large-scale parallel and distributed systems. Sylvain Gelly’s PhD thesis, the basis for much of MoGo’s design, is available online.
Another gratis proprietary program is the Java-based MadLab Tesuji Solver. Tesuji are skilful tactical plays, often leading to the capture of stones. MadLab analyses “a given problem until a solution is found, and the underlying algorithm guarantees that if a solution is found, it is 100 per cent definitive. This makes the program well suited for learning purposes, and to get you started the program comes with 997 tesuji problems extracted from real high-level games for you to solve”.
The Computer Go group at the University of Alberta, Canada, has been using Go and other “hard” games to push the development of AI software for decades. They were pioneers with neural-net software, but results were disappointing in Go with their NeuroGo program. Alberta also produced Fuego, a “collection of C++ libraries for developing software for the game of Go” released under the GNU Lesser General Public License. Alberta also maintains the Computer Go Test Collection, with realistic full-board test problems taken from computer Go games.
The strongest open source Go program is Pachi, based on Petr Baudiš’s postgraduate thesis: “On 19×19, it might be about KGS 1-kyu [highest non-dan rank], assuming reasonable hardware, eg two-core Athlon 64 machine. On a higher-end (eg four-way Intel i7) machine, it can hold a solid KGS 2-dan rank. When using a large cluster (64 machines, 20 cores each), it maintains KGS 4-dan and has won, eg a 7-stone handicap game against Zhou Junxun 9p [9-dan professional rank]”. As well as the tarball and a board such as GoGUI, you should, if you have the memory to spare, get the extra pattern files to further boost its strength. Pachi has been beating the proprietary Zen in matches.
The traditional choice for Go AI on GNU/Linux is the long-established GNU Go. This mixes various strategies, and Monte Carlo, in a tunable platform that makes it an ideal test bed for research. It also comes with great documentation: the manual is a good start for anyone looking to understand some of the programming challenges involved in computer Go.
Give it a Go
There are some good ideas for improvements to GNU Go on the task list page of the development site, but development is not as active as it once was. If you’ve recently taken the free, online Machine Learning course from Stanford, say, and are looking for a challenge, this may be worth a look.
So, where is this going now? Monte Carlo Tree Search continues to be refined, and AIs which combine this with Go knowledge are now in the middle dan ranks, but can the pace of improvement be maintained? If it can, the work will be useful in many other areas, though perhaps not as many as neural networks might have reached. What is clear is that those glorified adding machines, our supercomputers, are still a huge leap away from the efficiency – and even the mystery – of the human brain.
For anybody looking for a field with breakthroughs waiting to be made, AI remains in relative infancy. Anyone looking for a fun game – as well as feeling humans still have the edge over machines, thanks to our 100 billion neurons – should go to the British Go Association website and find out how to get started with playing Go.