The second era of computer chess also saw
more emphasis placed on the development of
dedicated chess hardware. Typical of such
developments was Ken Thompson’s chess
machine Belle, which won the ACM North
American Computer Chess Championship in
1978. The special purpose hardware increased
the speed of Belle, enabling it to analyse 30
million positions in 3 minutes and search
exhaustively to 8 or 9 ply in middlegame
positions. It also had an extensive opening book.
Belle won the 3rd World Computer Chess
Championship in 1983 and, having achieved a
rating of over 2200 in 1980, was the first program
to receive a Master rating. It was for Belle that
Thompson devised his first endgame database,
solving the KQKR problem and hence partially
removing what was then perceived as a weakness
of computers in endgame positions.
In 1975 Hyatt commenced work on Blitz,
which was entered in the 1976 ACM North
American Computer Chess Championship.
Initially Blitz was selective and relied on a local
evaluation function to discard moves. However,
the availability of the world’s fastest
supercomputer, the Cray, enabled the program,
appropriately renamed Cray Blitz, to use brute
force; by 1981 it was searching approximately
3,000 nodes per second and consistently
achieving six-ply searches. This rate of analysis
was improved by the use of assembly language
and the availability of the Cray X-MP computer
with multiprocessing facilities, allowing 20,000 to
30,000 nodes to be searched per second in 1983.
The third stage, aptly named algorithmic by
Donskoy and Schaeffer (1990), extends from the
mid 1980s to the current time. It has seen
considerable activity in the refinement of the
basic tree searching algorithms used (see later),
the emergence of personal computers on a scale
originally unenvisaged, and the widespread
practice of incorporating vast opening books and,
latterly, CD-ROMs of endgame databases.
This current phase has been most fruitful and has
seen a considerable increase in the playing
strength of chess programs, to the point where the
top commercial programs are generally
recognised as superior to humans at speed
chess and in sharp tactical positions. Under strict
tournament conditions the most highly rated
player of all time, Kasparov, has now lost a game,
but not the match, against Deep Blue.
Move Ordering Techniques
The previous section has outlined the historical
development of computer chess. The transition
from Shannon B type programs to Shannon A is
solely attributable to increased computing power.
It also partly arose out of increased
understanding of the alpha-beta algorithm which
was subjected to deep analysis by Knuth and
Moore (1975). Other techniques for increasing
the efficiency of the search and pruning
mechanisms also became more prominent at the
beginning of the second era.
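For reference, the following is a minimal sketch of the alpha-beta algorithm in its negamax formulation, as analysed by Knuth and Moore; Position, Move, gen_moves, make_move, undo_move and evaluate are hypothetical placeholders rather than any particular program's routines.

    /* Negamax alpha-beta: a minimal sketch, not any specific program's code. */
    int alphabeta(Position *pos, int depth, int alpha, int beta)
    {
        if (depth == 0)
            return evaluate(pos);        /* static evaluation at the horizon */

        Move moves[MAX_MOVES];
        int n = gen_moves(pos, moves);   /* better-ordered moves cut off sooner */

        for (int i = 0; i < n; i++) {
            make_move(pos, &moves[i]);
            int score = -alphabeta(pos, depth - 1, -beta, -alpha);
            undo_move(pos, &moves[i]);

            if (score >= beta)
                return beta;             /* cutoff: remaining moves need not be searched */
            if (score > alpha)
                alpha = score;           /* new best score for the side to move */
        }
        return alpha;
    }

The earlier in the move list a cutoff-producing move appears, the fewer of its siblings need to be examined, which is the point of the move ordering techniques discussed below.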
Producing a cutoff as soon as possible with the
alpha-beta algorithm considerably reduces the
size of the search tree. Consequently, move
ordering is an important aspect of achieving
maximum speed: if we know the best move in a
given position, producing it early rather than
late will have beneficial results. The worst
case is where the moves are examined in such an
order as to produce no cutoffs, generating the
maximum number of nodes to be analysed. This
is the maximal tree which grows exponentially
with depth, d, so that the number of nodes
examined assuming a uniform branching factor,
b, is given by b^d.
The best situation is where the
moves are all well-ordered and provide
immediate cutoffs, producing the minimal tree
which, although it also grows exponentially with
depth, is very much reduced in size by the
cutoffs. In between these two extremes we have
game trees generated by chess programs which
are initially unordered but become progressively
more ordered as the depth of search increases.
Algorithmic developments and various
heuristics are moving the programs closer to the
minimal tree. This progressive convergence on
the minimal tree is borne out by Belle, which is
estimated to have come
within a factor of 2.2 of the minimal tree,
Phoenix within a factor of 1.4 and Zugzwang
within an impressive factor of 1.2 (Plaat 1996).
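As an illustration of the gap between the two extremes, Knuth and Moore's (1975) analysis shows that, under the uniform branching factor assumed above, the minimal tree examines approximately

    b^(ceil(d/2)) + b^(floor(d/2)) - 1

terminal nodes; with b = 35 and d = 8, for example, this is roughly 3 x 10^6 nodes against about 2.3 x 10^12 for the maximal tree.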
Iterative deepening is a useful device for
allowing moves to be re-ordered prior to the
depth being increased as well as providing a
method for limiting search depth in response to
time constraints. Originally introduced in 1969,
it has become standard practice in brute force
programs.
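A minimal sketch of such an iterative deepening driver follows; root_search, time_remaining and NO_MOVE are hypothetical placeholders.

    /* Iterative deepening: a minimal sketch. Each completed iteration
       provides move ordering information (its best move and any stored
       scores) for the next, deeper search. */
    Move iterative_deepening(Position *pos, int max_depth)
    {
        Move best = NO_MOVE;
        for (int depth = 1; depth <= max_depth; depth++) {
            if (!time_remaining())
                break;               /* out of time: the move from the last
                                        completed iteration remains valid */
            best = root_search(pos, depth);
        }
        return best;
    }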
The killer heuristic is another move ordering
technique, using information supplied by the
alpha-beta pruning algorithm itself. Whenever a
certain move causes a cutoff in response to