
Aberg variation of Capablanca's Chess. Different setup and castling rules. (10x8, Cells: 80)
H.G.Muller wrote on Tue, Apr 29, 2008 08:07 AM UTC:
I asked because these standard meanings did not seem to make sense in your
statement. None of what you are saying has anything to do with piece values
or my statistical method for determining them. You are now wondering why
Shannon-type programs can compete with Human experts, which is a
completely different topic.

The statistical method of determining piece values does _not_ specify whether
the entities playing the games from which we take the statistics are Human
or computer or whatever. The only thing that matters is that the play is of
sufficiently high quality that the scores have attained their asymptotic
values (w.r.t. play quality). And indeed, Larry Kaufman has applied the
method to (pre-existing) Human Grand-Master games to determine piece
values for 8x8 Chess.
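For concreteness, the statistic in question is just the average game score per material imbalance. A minimal sketch of that bookkeeping in Python (the imbalance labels and results are made-up examples):

```python
from collections import defaultdict

def score_by_imbalance(games):
    """Average score (1 = win, 0.5 = draw, 0 = loss) per material
    imbalance class.  games: iterable of (imbalance_label, result)."""
    totals = defaultdict(lambda: [0.0, 0])
    for imbalance, result in games:
        totals[imbalance][0] += result
        totals[imbalance][1] += 1
    return {imb: s / n for imb, (s, n) in totals.items()}

# Hypothetical sample of games tagged by their material imbalance:
sample = [("B-vs-N", 0.5), ("B-vs-N", 1.0), ("R-vs-B+P", 0.0), ("R-vs-B+P", 0.5)]
print(score_by_imbalance(sample))  # each class maps to its mean score
```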

Piece values themselves are an abstract game-theoretical concept: they are
parameters of a certain class of approximate strategies to play the game
by. In these strategies the players do not optimize their Distance To Mate
(as it is beyond their abilities to determine it), but instead some other
function of the position (the 'evaluation').

In general both the approximate strategy and perfect play (according to a
DTM tablebase) do not uniquely specify play, as in most positions there
are equivalent moves. So the games produced by the strategy from a given
position are stochastic quantities. The difference between perfect play
and approximate strategies is that in the latter the game result need not
be conserved over moves: the set of positions with a certain evaluation
(e.g. piece makeup) contains both won and lost positions. Every move by
the approximate player is thus implicitly a gamble. The only thing he
can _prove_ is that he is going for the best evaluation. He can only
_hope_ that this evaluation corresponds to a won position.
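Such an evaluation can be as simple as a sum of piece values. A toy sketch (the values shown are the classical 8x8 ones, purely for illustration):

```python
# Classical 8x8 piece values, used purely to illustrate an 'evaluation'
# that approximates winning chances instead of Distance To Mate.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white_pieces, black_pieces):
    """Material balance from White's point of view, in Pawn units."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

print(evaluate(["Q", "P"], ["R", "N"]))  # Q+P vs R+N -> +2 for White
```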

So far this applies to Humans and computers alike. And so does the
following trick: most heuristics for statically evaluating a position
(point systems, including piece values) are notoriously unreliable,
because the quantities they depend on are volatile, and can drastically
change from move to move (e.g. when you capture a Queen). So players
'purify' the evaluation from errors by 'thinking ahead' a few moves,
using a minimax algorithm. That is enough to get rid of the volatility,
and be sure the final evaluation of the root position only takes into
account the more permanent characteristics (present in all relevant end
leaves of the minimax tree). This allows you to evaluate all positions
as if they were 'quiet', as the minimax will discover which positions are
non-quiet, and ignore their evaluation.
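This 'purification' by thinking ahead can be sketched as a negamax over an abstract game tree (the tree and scores below are hypothetical; `evaluate` scores a node from the side-to-move's viewpoint):

```python
def purified_eval(node, depth, evaluate, moves):
    """Negamax: back up static evaluations `depth` plies, so the root
    score only keeps features that survive best play by both sides.
    `evaluate(node)` scores from the side-to-move's point of view;
    `moves(node)` lists the successor positions."""
    succ = moves(node)
    if depth == 0 or not succ:
        return evaluate(node)
    return max(-purified_eval(s, depth - 1, evaluate, moves) for s in succ)

# Hypothetical 2-ply tree: a successor's static score is what the
# *opponent* sees there, hence the sign flip in the recursion.
tree = {"root": ["a", "b"], "a": [], "b": []}
score = {"a": -3, "b": 1}  # made-up side-to-move evaluations
print(purified_eval("root", 2, score.get, tree.get))
```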

So far no difference between Humans and Shannon-type computer programs.
Both will benefit if their evaluation function (in quiet positions)
correlates as well as possible with the probability that the position
they strive for can be won.

The only difference between Humans and computers is that the former use
very narrow, selective search trees, guided by abstract reasoning in their
choice of cut-moves of the alpha-beta algorithm and Late-Move Reductions.
Computers are very bad at this, but nowadays fast enough to obviate the
need for selectivity. They can afford to search full width, with only very
limited LMR, and on the average still reach the same depth as Human
experts. So in the end they have the same success in discriminating quiet
from volatile positions. But they are still equally sensitive to the
quality of their evaluation in the quiet positions.

But, like I said, that has nothing to do with the statistical method for
piece-value determination. The piece values are defined as the parameters
that give the best-fit to the winning probabilities of each equivalence
class of positions (i.e. the set of positions that have equal evaluation).
These winning probabilities are not strictly the fraction of won positions
in the equivalence class: in the first place non-quiet positions are not
important, and in the second place they will have to be weighted by their
likelihood of occurring in games. For example, in a KPPPPPKPPPPP end-game,
positions where all 5 white Pawns are on 7th rank, and all black Pawns are
on 2nd rank are obvious nonsense positions, as there is no way to reach
such positions without a decisive promotion being possible many moves
earlier. Similarly, 30-men positions with all 15 white men on ranks 5-8
and all black men on ranks 1-4 will not occur in games (or in the search
tree for any position in any game), and their (mis)evaluation will not
have the slightest effect on performance of the approximate strategy.
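The best-fit itself can be sketched as a least-squares fit of a logistic score model to the observed score of each equivalence class. Everything here is an illustrative assumption: the slope k of the score curve, the learning rate, and the sample data:

```python
import math

def fit_piece_values(samples, pieces, k=0.7, lr=0.5, epochs=5000):
    """Gradient-descent least-squares fit: find per-piece values v so
    that sigmoid(k * material_advantage) matches the observed score of
    each equivalence class.  All constants are assumptions.
    samples: list of (piece_count_difference dict, observed score)."""
    v = {p: 1.0 for p in pieces}              # start every piece at 1 Pawn
    for _ in range(epochs):
        for diff, score in samples:
            adv = sum(v[p] * d for p, d in diff.items())
            pred = 1.0 / (1.0 + math.exp(-k * adv))
            grad = (pred - score) * pred * (1.0 - pred) * k
            for p, d in diff.items():
                v[p] -= lr * grad * d         # one gradient step
    return v

# Made-up statistic: being a Queen up scores 95%.
values = fit_piece_values([({"Q": +1}, 0.95), ({"Q": -1}, 0.05)], ["Q"])
print(values["Q"])  # should land near log(19)/0.7, i.e. about 4.2
```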

My method of determining piece values takes care of that: by playing real
games, I sample the equivalence class in proportion to the relevant
weights. The opening position of a certain material composition will quickly
diffuse through game state space to cover a representative fraction of the
equivalence class, with only little 'leakage' to nearby classes because
of playing errors (as the level of play is high, and blunders in the first
10 opening moves are rare). The actual games played are a sample of this,
and their result statistics are a measure of the probability that a relevant
position in this equivalence class will be won with perfect play (the
playing errors introduced by actually playing the games out with slightly
imperfect play work in either direction, and therefore cancel out).