AI vs. Human: Playing Kasparov
In February 1996, Deep Blue I played a six-game match against Russian chess Grandmaster Garry Kasparov, and Kasparov won 4-2.
In May 1997, however, Deep Blue II played Kasparov again and beat the Grandmaster 3.5-2.5. This was the first time a computer had defeated a reigning world champion under standard tournament time controls.
There was some ill feeling after the match, with Kasparov claiming that there had been human intervention during the games. The Deep Blue team denied this, saying they had only consulted with humans and made changes between games, which was allowed under the rules.
Kasparov requested a rematch, but IBM rejected his request and dismantled Deep Blue.
Methodologies used by Deep Blue
Deep Blue's custom chess chips were central to its success: they allowed the machine to analyze far more positions in the time available, an important advantage since chess is played against the clock.
Deep Blue also drew upon, and further developed, existing techniques, such as quiescence search and transposition tables.
- Quiescence search explores interesting positions to a deeper level than uninteresting ones. This attempts to overcome the horizon effect, where a program stops searching at a fixed depth and concludes it has found a good move, unaware that it is running into trouble just a few moves beyond that horizon.
- Deep Blue also used transposition tables. These reduce the amount of searching that has to be done by exploiting the fact that the same position can be reached through different sequences of moves. If Deep Blue arrived at a position it had already analyzed, but via a different set of moves, it could save a great deal of time by simply recalling the outcome of the earlier analysis rather than repeating it. Transposition tables are designed to store this information so that it can be efficiently recalled when needed.
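The two techniques above can be sketched together in a minimal search routine. The toy game below stands in for a real chess position (an actual engine would hash positions with Zobrist keys and generate chess moves); names such as key(), legal_moves() and capture_moves() are illustrative, not Deep Blue's.

```python
# Toy stand-in for a chess position: a subtraction game in which players
# alternately remove 1 or 2 stones, and taking the last stone wins.
class Position:
    def __init__(self, stones):
        self.stones = stones

    def key(self):
        return self.stones             # hashable key for the transposition table

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def capture_moves(self):
        return []                      # the toy game has no forcing moves

    def play(self, move):
        return Position(self.stones - move)

def evaluate(pos):
    # From the side to move: no stones left means the previous player took
    # the last stone, so this position is lost.
    return -1000 if pos.stones == 0 else 0

def quiescence(pos, alpha, beta):
    """Keep searching forcing moves so the static evaluation is only
    trusted once the position is 'quiet' (counters the horizon effect)."""
    stand_pat = evaluate(pos)
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in pos.capture_moves():   # only explore forcing moves
        score = -quiescence(pos.play(move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

transposition_table = {}               # position key -> (depth searched, score)

def search(pos, depth):
    cached = transposition_table.get(pos.key())
    if cached is not None and cached[0] >= depth:
        return cached[1]               # position seen before: reuse the analysis
    moves = pos.legal_moves()
    if depth == 0 or not moves:
        return quiescence(pos, -float('inf'), float('inf'))
    best = max(-search(pos.play(m), depth - 1) for m in moves)
    transposition_table[pos.key()] = (depth, best)
    return best
```

Here search(Position(4), 10) returns 1000, a win for the side to move: removing one stone leaves the losing count of three. Any later search that reaches a position already in the table at sufficient depth reuses the stored score instead of re-searching it.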
The evaluation function is a critical element of many, if not all, game-playing programs. It returns a numerical value representing the strength of the position being considered, making it possible to compare two positions and say which is better. The design of an evaluation function is often more art than science and, in Deep Blue's case, it was constantly being tweaked, often under the guidance of chess Grandmasters.
Because the evaluation function is computationally expensive, consuming a lot of resources, it has to be as efficient as possible. Deep Blue therefore had two: a fast evaluation function for when just an approximation was required, and a slow evaluation function for when an accurate evaluation was needed.
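One way the fast/slow split can work is sketched below. The feature set, weights, and margin are illustrative, not Deep Blue's actual values, and the gating scheme shown (only paying for the slow evaluation when the cheap score lands near the search window) is a common pattern in software engines rather than a description of Deep Blue's hardware.

```python
# Illustrative piece values; not Deep Blue's actual weights.
PIECE_VALUES = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900}

def fast_evaluate(pieces):
    """Cheap approximation: material balance only (positive favors White)."""
    return sum(PIECE_VALUES[kind] if side == 'white' else -PIECE_VALUES[kind]
               for side, kind in pieces)

def slow_evaluate(pieces, mobility_white, mobility_black):
    """More accurate and more expensive: material plus positional terms.
    A real program adds many more features (king safety, pawn structure, ...)."""
    return fast_evaluate(pieces) + 2 * (mobility_white - mobility_black)

def evaluate(pieces, alpha, beta, mobility_white=0, mobility_black=0, margin=200):
    """Pay for the slow evaluation only when it could matter: if the fast
    score is far outside the (alpha, beta) search window, the extra
    positional terms cannot change the outcome, so the approximation
    suffices."""
    approx = fast_evaluate(pieces)
    if approx + margin <= alpha or approx - margin >= beta:
        return approx                  # the approximation is decisive
    return slow_evaluate(pieces, mobility_white, mobility_black)
```

For example, with a queen against a lone pawn the fast material score is already far above a narrow window around zero, so the slow evaluation is skipped; only when the approximate score falls near the window does the expensive function run.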