Comments/Ratings for a Single Item

Aberg variation of Capablanca's Chess. Different setup and castling rules. (10x8, Cells: 80)
Hans Aberg wrote on Sat, May 3, 2008 04:43 PM UTC:
H.G.Muller:
| But the point is that this does not alter the piece values.

Right, though that might just be a preferred way to structure the theory, because it suits human thinking. Essentially, one defines contexts and attaches values to them. First define piece values in neutral settings. Then observe that the bishop pair gets an added value. Then try to figure out values for good and poor bishops. And so on. By contrast, computers tend to be very poor at handling such contexts, so other methods might be more suitable for programs.

Derek Nalls wrote on Sat, May 3, 2008 03:26 PM UTC:
Muller:

You have my best regards toward your worthwhile effort to publish your
empirical, statistical method for obtaining the material values of pieces
in the ICGA Journal.  My assessment is that it will surely be a much
better paper than the junk [name removed] published in the same journal
regarding piece values.

[The above has been edited to remove a name and/or site reference. It is
the policy of cv.org to avoid mention of that particular name and site to
remove any threat of lawsuits. Sorry to have to do that, but we must
protect ourselves. -D. Howe]

H. G. Muller wrote on Sat, May 3, 2008 03:22 PM UTC:
Sure, this is what people do and have done for ages. It is well known that the advantage of having the move is worth 1/6 of a Pawn (corresponding in normal Chess to a white score of 53-54%), and that, by inference, wasting a full move is equivalent to 1/3 of a Pawn.

But the point is that this does not alter the piece values. It just adds to them, like every positional advantage adds to them. In my tests the advantage of having the move is neutralized by playing every position both with White to move and with Black to move.
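
For concreteness, a minimal Python sketch of that pairing scheme (the game-playing function is a hypothetical stub standing in for a real engine-versus-engine match):

  import random

  def play_game(position, white, black):
      # Hypothetical stub for one engine-vs-engine game; returns the
      # result from White's point of view (1, 0.5 or 0), with White
      # enjoying roughly the 1/6-Pawn (~3.5%) first-move edge.
      edge = 0.035
      return random.choices([1, 0.5, 0],
                            weights=[0.37 + edge, 0.26, 0.37 - edge])[0]

  def paired_match(positions, engine_a, engine_b):
      # Each test position is played twice, once with each engine as
      # White, so the first-move advantage cancels from the total.
      score_a = 0.0
      for pos in positions:
          score_a += play_game(pos, engine_a, engine_b)      # A as White
          score_a += 1 - play_game(pos, engine_b, engine_a)  # A as Black
      return score_a / (2 * len(positions))

  print(paired_match(range(1000), "A", "B"))   # ~0.500 for equal engines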

Hans Aberg wrote on Sat, May 3, 2008 11:50 AM UTC:
H.G.Muller:
| As piece values are only useful as strategic guidelines for quiet
| positions, they cannot be sensitive to who has the move.

At the beginning of the game, White is thought to have a slight advantage, and the first task of Black will be attempting to neutralize it. And it might be possible to set a piece value on that positional advantage, just as when reasoning in terms of getting positional compensation for a sacrifice. Somewhat less than a pawn, perhaps. If one knows the black/white winning statistics, one might be able to set a value on it that way. It may not be usable for a computer program, as such a program does not change sides but only computes the relative values of moves.

H. G. Muller wrote on Sat, May 3, 2008 10:34 AM UTC:
As piece values are only useful as strategic guidelines for quiet positions, they cannot be sensitive to who has the move. A position where it matters who has the move is by definition not quiet, as one ply later that characteristic will have essentially changed. So at the level of piece-value strategies, Chess is a perfectly symmetric game.

Hans Aberg wrote on Sat, May 3, 2008 09:46 AM UTC:
H.G.Muller:
| Note that a Nash equilibrium in a symmetric zero-sum game must be the
| globally optimum strategy.

Chess isn't entirely symmetric, since there is in general a small advantage in making the first move. But for players (or games) adhering to a piece value theory throughout as a main deciding factor, perhaps such a balance may occur. The only world champion who was able to do that, winning by playing with very small positional and material advantages, was perhaps Karpov. Kasparov learned to break through that heavily positional playing, in part by training against the Swedish GM Andersson, who specialized in a similar hyper-defensive style. A more normal way of winning is at some point making material sacrifices in exchange for a strong initiative, particularly combined with a mating attack, and then winning either by achieving a mate or via some material gains, neutralizing to a winning end-game. Perhaps when determining piece values, such games should be excepted. And since computers are not very good at such strategies, perhaps such game exclusion occurs naturally when letting computers play against themselves.

H. G. Muller wrote on Sat, May 3, 2008 09:15 AM UTC:
Note that a Nash equilibrium in a symmetric zero-sum game must be the globally optimal strategy. If it weren't, the player scoring negatively could unilaterally change his strategy to the one his opponent applies, and by symmetry then raise his score to 0, showing that the earlier situation could not have been a Nash equilibrium.
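
Spelled out, writing u(σ, τ) for the expected score of strategy σ against τ:

  Zero-sum symmetry gives u(τ, τ) = 0 for every strategy τ.
  Suppose (σ, τ) is a Nash equilibrium with u(σ, τ) < 0. Then the
  first player can deviate from σ to τ and obtain
      u(τ, τ) = 0 > u(σ, τ),
  contradicting the equilibrium property. Hence the equilibrium value
  is 0, no strategy scores positively against the equilibrium
  strategy, and that strategy is globally (maximin) optimal.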

Hans Aberg wrote on Fri, May 2, 2008 09:42 PM UTC:
H.G.Muller:
| Indeed, I plan to submit a paper to the ICGA Journal discussing the
| piece values and the empirical statistical method used to obtain them.

You might have a look at things like:
  http://en.wikipedia.org/wiki/Perfect_information
  http://en.wikipedia.org/wiki/Complete_information
  http://en.wikipedia.org/wiki/Nash_equilibrium
  http://en.wikipedia.org/wiki/Prisoner's_dilemma
Your claims are similar to the idea that chess players under some circumstances reach a Nash equilibrium. This might happen, say, if the players focus on only simple playing strategies in which piece values have an important role, and are unable to switch to a different one. Note that the prisoner's dilemma leads to such an equilibrium when repeated, because players can punish past defections. In chess, this might happen if chess players are unable to develop a more powerful playing theory, say due to its complexity. Just an input, to give an idea of what reasoning one might expect to support claims of predictions.

H.G.Muller wrote on Fri, May 2, 2008 05:55 PM UTC:
Indeed, I plan to submit a paper to the ICGA Journal discussing the piece values and the empirical statistical method used to obtain them.

Hans Aberg wrote on Fri, May 2, 2008 05:21 PM UTC:
H.G.Muller:
| Fairy-Max is already able to play most Chess variants, and WinBoard
| protocol already supports those variants.

I just found Jose-Chess, which supports both XBoard and UCI protocols; being open source, it may get worked up in the future (right now it is somewhat buggy).

| Many engines are now able to play Capablanca-type variants under
| WinBoard protocol, some of them quite strong.

Perhaps the Dragon Knight D = K+N and what you call the Amazon M = Q+N should be included. I am thinking about a 12x9 variant
  R D N B A Q K M B N C R
which has the property that all pawns are protected, and which tries to keep a material balance on both king sides. On a 12x10 board, one might use a rule that pawns can move 2 or 3 steps, provided that does not make them cross the middle line.

| I have no interest in convincing anyone to use my empirically derived
| piece values.

The normal thing would be that the values are just published, with indications of how they were derived. Different authors may have different values if they use different methods to derive them.
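
The 'all pawns protected' property is easy to check mechanically. A Python sketch (my assumptions, not stated above: A and C are Capablanca's Archbishop = B+N and Chancellor = R+N, and since the defenders stand one rank behind the pawns, only the first step of a rider move matters):

  SETUP = "RDNBAQKMBNCR"          # rank 1, files a..l

  KING   = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]
  KNIGHT = [(1, 2), (2, 1), (-1, 2), (-2, 1),
            (1, -2), (2, -1), (-1, -2), (-2, -1)]
  ORTHO  = [(1, 0), (-1, 0), (0, 1), (0, -1)]
  DIAG   = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

  # piece -> (leaper steps, first steps of rider moves)
  MOVES = {
      "R": ([], ORTHO),
      "B": ([], DIAG),
      "Q": ([], ORTHO + DIAG),
      "N": (KNIGHT, []),
      "K": (KING, []),
      "D": (KING + KNIGHT, []),    # Dragon Knight = K + N
      "A": (KNIGHT, DIAG),         # Archbishop   = B + N (assumed)
      "C": (KNIGHT, ORTHO),        # Chancellor   = R + N (assumed)
      "M": (KNIGHT, ORTHO + DIAG), # Amazon       = Q + N
  }

  def defends(piece, piece_file, pawn_file):
      # Piece on (piece_file, rank 1) defending pawn on (pawn_file, rank 2)?
      leaps, rides = MOVES[piece]
      step = (pawn_file - piece_file, 1)
      return step in leaps or step in rides

  files = "abcdefghijkl"
  for f in range(len(SETUP)):
      defenders = [SETUP[g] + files[g] + "1"
                   for g in range(len(SETUP)) if defends(SETUP[g], g, f)]
      print("pawn %s2:" % files[f], defenders or "UNDEFENDED")

Running it lists at least one defender for every pawn, confirming the claim under those piece identities.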

H.G.Muller wrote on Fri, May 2, 2008 01:31 PM UTC:
Fairy-Max is already able to play most Chess variants, and WinBoard
protocol already supports those variants. Many engines are now able to
play Capablanca-type variants under WinBoard protocol, some of them quite
strong. But as I already have accurate piece values, and my engines seem
to be significantly stronger than the competition in any Chess variant I
have bothered to configure them for, there is no incentive whatsoever to
do as you say. First build an engine that beats mine, then I might worry
when apparent misevaluations of a position correlate with the presence or
absence of certain pieces.

I have no interest in convincing anyone to use my empirically derived piece
values. On the contrary, if engine builders want to insist on using guessed
piece values that make their engines play losing Chess, I only applaud it,
as it means my engines will remain the best forever. If someone wants to
prove that there exists a set of piece values that works better, I
encourage them to do it (and then I mean by play-testing, rather than idle
talk or fanciful numerology). I am not going to waste my time on such a
wild goose chase.

Hans Aberg wrote on Fri, May 2, 2008 01:08 PM UTC:
H.G.Muller:
| If piece values cannot be used to predict outcomes of games, they
| would be useless in the first place.

If one is materially behind, one knows one had better win the middle game, or win material back, before coming into the end-game, unless the latter is a special case.

| Why would you want to be an exchange or a piece ahead, if it might
| as frequently mean you are losing as that you are winning?

This is indeed what happens with programs focusing too much on material, or with weak players starting a piece ahead.

| Precisely knowing the limitations of your opponent allows
| you to play a theoretically losing strategy (e.g. doing bad trades) in
| order to set a trap.

Sure, and this seems essentially to be the effect of a brute-force search on a very fast computer.

| In general, this is a losing strategy, as in practice
| one cannot be sufficiently sure about where the opponent's horizon will
| be.

In human tournament play, a top player either plays against opponents with a lower horizon, or against well-known opponents whose playing style has been well analyzed. In the first case, there is not much need to adapt one's playing, as one can see deeper; but in the latter case, one certainly does choose strategies adapted to the opponent. Now, with computer programs, at least in the past, the GMs were pitted against programs they did not know that well, which ran in special versions and on very fast computers when tried against humans. So humans did not get much of a chance to develop better strategies. But this may not matter if their strategy does not allow them to handle the theoretically faulty combinations the computer plays by relying on a somewhat deeper search.

| Fact is that I OBSERVE that the piece values I have given below
| do statistically predict the outcome of games with good precision.

You only observe past events, not the future, and a statistical prediction is only valid for a true stochastic variable, or in situations that continue to behave as such. But don't worry about this:

If you find methods to duplicate the analysis by Larry Kaufman, and use them to compute values for various pieces on boards like 8x8, 10x8 and 12x8, then it seems simple enough to modify engines to play different chess variants (if protocols like UCI are extended to cope with them).

I think, though, that the real test will be when humans play against those programs.

H.G.Muller wrote on Thu, May 1, 2008 11:53 AM UTC:
I still can't see the point you are trying to make. If piece values cannot
be used to predict outcomes of games, they would be useless in the first
place. Why would you want to be an exchange or a piece ahead, if it might
as frequently mean you are losing as that you are winning? Fact is that I
OBSERVE that the piece values I have given below do statistically predict
the outcome of games with good precision.

If, according to you, that should not be the case, then apparently you are
wrong. If, according to you, that is not what 'good' piece values should
do, then that satisfactorily explains how a set of piece values that would
cause whatever entity plays by them to lose consistently can, in your
eyes, still be 'good', and further discussion could add nothing to that.

What you say about opponent modelling doesn't have anything to do with
piece values. Precisely knowing the limitations of your opponent allows
you to play a theoretically losing strategy (e.g. doing bad trades) in
order to set a trap. In general, this is a losing strategy, as in practice
one cannot be sufficiently sure about where the opponent's horizon will
be. So it will backfire more often than not.

And the datasets that are analyzed by me or Kaufman are not dominated by
players engaging in such opponent modelling. I can be certain of that for
my own data, as I wrote the engine(s) generating it myself. So I know they
only attempt theoretically optimal play that would work against any
opponent.

Hans Aberg wrote on Wed, Apr 30, 2008 08:24 PM UTC:
H.G.Muller:
| Chess as we play it is a game of chance...

The main point is that such a statistical analysis is only valid with respect to a certain group of games, as long as the players stick to a similar strategy. The situation is like that of pseudo-random numbers, where in one case it was discovered that if the successive numbers generated were plotted as triples, they fell onto a series of sloped planes. Such a thing can be exploited. So there results a circle of making better pseudo-random generators and better methods to detect their flaws, without ever arriving at true random numbers. A similar situation obtains in cryptography.
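
That pseudo-random example is easy to reproduce. Assuming the generator alluded to is IBM's RANDU (the textbook case), successive values satisfy a fixed linear relation, so all triples fall onto a handful of parallel planes in space:

  M = 2 ** 31

  def randu(seed, n):
      xs, x = [], seed
      for _ in range(n):
          x = (65539 * x) % M
          xs.append(x)
      return xs

  xs = randu(1, 10000)
  # 65539**2 == 6*65539 - 9 (mod 2**31), hence for every k:
  assert all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % M == 0
             for k in range(len(xs) - 2))
  print("every RANDU triple obeys x2 = 6*x1 - 9*x0 (mod 2**31)")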

In chess, the best strategy is trying to beat whatever underlying statistical theory the opponent is playing by. When playing against computer programs this is not so difficult, because one tries to figure out what material exchanges the opposing program favors and shuns, and then tries to play into situations where those preferences are not valid. Now, this requires that the human player gets the chance to fiddle around with the program interactively for some time in order to discover such flaws - learning this through tournament practice is a slow process - plus a way to beat the computer's superior combinatorial skills if the latter is allowed to do a deeper search by brute force.

| Anyway, you cannot know what Kaufman thinks or doesn't.

His stuff looks like all the other chess theory I have seen, except that he uses a statistical analysis as an input, attempting to fine-tune it. By contrast, you are the only person I have seen who thinks of it as a method for predicting the average outcome of games. You might benefit from asking him, or others, about their theories - this is how it looks to me.

You might still compute values and percentages and display them as your analysis of past games in a certain category, but there is a gap in the reasoning when claiming this will hold as a prediction for games in general.

H.G.Muller wrote on Wed, Apr 30, 2008 06:30 PM UTC:
To Rich:
For balancing armies, piece values the way I derive them are exactly what
one needs. The piece values represent winning probabilities, and balancing
them will create a game where the winning chances are equal.

H.G.Muller wrote on Wed, Apr 30, 2008 06:24 PM UTC:
Hans Aberg:
| You do not get a theory that predicts winning chances, as chess 
| isn't random. If the assumption is that opponents will have a 
| random style similar in nature to the analyzed data, then it might 
| be used for predictions.
This is where we fundamentally differ, and it makes it completely
pointless to discuss anything in detail as long as we argue based on such
mutually exclusive axioms. Chess as we play it is a game of chance, as
players don't have perfect knowledge, and thus randomly choose between
positions they cannot distinguish based on the knowledge they have. And
there is no logical necessity for the condition of similar randomness that
you impose. It is well known that the eventual distribution of a random
walker does not depend on the details of the steps he can make, only on
the variance (the central limit theorem of probability theory!). In
particular, it is an empirical fact that statistical analysis of
computer-computer games, as I did, produces the same winning probabilities
as analyzing GM games (as Kaufman did). So even if you were in principle
right (which I doubt) that a sufficiently different nature of the randomness
would produce different overall statistics, observations then apparently
show that computers and Human GMs are not 'sufficiently different'.
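
For the record, the theorem invoked: if the steps X_1, X_2, ... are i.i.d. with mean 0 and variance σ² < ∞, then

  (X_1 + ... + X_n) / (σ √n)  →  N(0, 1)   as n → ∞,

whatever the further details of the step distribution.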

| It is clear that Larry Kaufman does not think of his theory in 
| terms of 'x pawns ahead leads to a winning chance p'. You can 
| analyze your data and make such statements, but it is an incorrect 
| conclusion that this will be a valid chess theory predicting future
| games - it only refers to the data of the past games you have analyzed.
For one, this is not clear at all, and very unlikely to be true. Anyway,
you cannot know what Kaufman thinks or doesn't. Fact is that he
translates the statistics into piece values in exactly the same way I do.
The rest of your statement is totally at odds with standard statistical
theory, which maintains that probabilities of stochastic events can be
measured to any desired accuracy / confidence by sampling. This is really
too ridiculous for words.
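
The standard sampling result behind this: a score probability p estimated from n independent games has standard error

  SE = √( p(1 − p) / n ),

so near p = 0.5 one gets SE ≈ 1% at n = 2500 games and SE ≈ 0.5% at n = 10000 - comfortably enough to resolve an effect the size of the 1/6-Pawn move advantage (about 3.5%) mentioned elsewhere in this thread.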

Hans Aberg wrote on Wed, Apr 30, 2008 03:02 PM UTC:
Rich Hutnik:
| 2. When pitting one side against another, if the sides are unbalanced,
| this system should allow balancing the forces in points, for
| handicapping reasons.

Games where both sides have equal material are also unbalanced, as in general there is an advantage in playing the first move.

Rich Hutnik wrote on Wed, Apr 30, 2008 02:11 PM UTC:
My take on the value of the pieces is for the following purposes:
1. If you want to do a universal build-your-own-army variant, this allows you to see if the sides would be balanced.
2. When pitting one side against another, if the sides are unbalanced, this system should allow balancing the forces in points, for handicapping reasons.

Hans Aberg wrote on Wed, Apr 30, 2008 02:07 PM UTC:
H.G.Muller:
| Define 'suggestions'. What I get are a set of piece values from which
| you can accurately predict how good your winning chances are, all other
| things being equal or unknown.

You do not get a theory that predicts winning chances, as chess isn't random. If the assumption is that opponents will have a random style similar in nature to the analyzed data, then it might be used for predictions.

It is clear that Larry Kaufman does not think of his theory in terms of 'x pawns ahead leads to a winning chance p'. You can analyze your data and make such statements, but it is an incorrect conclusion that this will be a valid chess theory predicting future games - it only refers to the data of the past games you have analyzed.

H.G.Muller wrote on Wed, Apr 30, 2008 05:53 AM UTC:
Hans Aberg:
| You can sync your method against his values, to get piece value 
| suggestions. But that is just about what you get out from it.

Define 'suggestions'. What I get are a set of piece values from which
you can accurately predict how good your winning chances are, all other
things being equal or unknown. If other (positional) features are known,
you can factor those in in the same way.

This is by definition what piece values are meant to do. If the
'classical' system does 'something else', because it was obtained in
another way, then they are simply not piece values.

Hans Aberg wrote on Tue, Apr 29, 2008 09:49 PM UTC:
H.G.Muller:
| If you read Larry Kaufman's paper, you see that he continues
| quantifying the principal positional term ...

This is nothing new: this was done in classical theory. He just uses statistical input in an attempt to refine the classical theory.

| The piece values Kaufman gets are very good. And, in so far I tested
| them, they correspond exactly to what I get when I run the piece
| combinations from shuffled openings.

You can sync your method against his values, to get piece value suggestions. But that is just about what you get out from it.

H.G.Muller wrote on Tue, Apr 29, 2008 07:17 PM UTC:
Hans Aberg:
| In other words, he is using a statistical approach merely as a point 
| of departure for developing a theory which combines point values with 
| other reasoning, such as positional judgement.
Of course. PIECE VALUES are only a point of departure. If you do a
principal-component analysis of the score percentage as a function of the
various evaluation characteristics, the piece values are the most
important components. But by no means the only ones.

If you read Larry Kaufman's paper, you see that he continues quantifying
the principal positional term responsible for the B vs N value, namely the
number of Pawns left on the board. And again, he uses the statistical
method to derive the numerical value, concluding that each own Pawn
affects the B-N difference by 1/16 Pawn. So his statistical analysis
doesn't stop at piece values. He also applies it to get the value of
positional terms.
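
As a formula, the rule just described (a sketch; the sign convention - bishops gaining as their own pawns leave the board - is my reading of Kaufman, not stated above):

  def bishop_minus_knight(own_pawns):
      # Baseline B = N at the conventional neutral point of 5 own
      # pawns; each own pawn shifts the difference by 1/16 Pawn.
      # Sign assumed: fewer own pawns favor the bishop.
      return (5 - own_pawns) / 16.0

  print(bishop_minus_knight(8))   # -0.1875: knight slightly better
  print(bishop_minus_knight(2))   # +0.1875: bishop slightly better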

The piece values Kaufman gets are very good. And, insofar as I tested them,
they correspond exactly to what I get when I run the piece combinations
from shuffled openings. For instance, I also find B=N and BB=0.5. Note that
Kaufman's Rook value is also dependent on the number of Pawns on the
board.

Hans Aberg wrote on Tue, Apr 29, 2008 12:58 PM UTC:
H.G.Muller:
| Larry Kaufman has applied the method on (pre-existing) Human
| Grand-Master games, to determine piece values for 8x8 Chess.

If I look at:
http://home.comcast.net/~danheisman/Articles/evaluation_of_material_imbalance.htm
he says things like:
  [...] an unpaired bishop and knight are of equal value [...], so
  positional considerations [...] will decide which piece is better.
and also see the section 'Applications'.

In other words, he is using a statistical approach merely as a point of departure for developing a theory which combines point values with other reasoning, such as positional judgement.

The values he gives, though, are interesting:
  P=1, N=3¼, B=3¼, BB=+½, R=5, Q=9¾
where BB is the bishop pair.
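
Applied in code, with the pair bonus counted only while both bishops remain (a quick sketch):

  VALUES = {"P": 1.0, "N": 3.25, "B": 3.25, "R": 5.0, "Q": 9.75}
  PAIR_BONUS = 0.5   # BB, the bishop pair

  def material(men):
      # men: one side's remaining pieces, e.g. "QRRBBNNPPPPPPPP"
      total = sum(VALUES[m] for m in men)
      if men.count("B") >= 2:
          total += PAIR_BONUS
      return total

  full = "QRRBBNNPPPPPPPP"
  print(material(full))                      # 41.25, including the pair
  print(material(full.replace("B", "", 1)))  # 37.50, lone bishop, no bonus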

H.G.Muller wrote on Tue, Apr 29, 2008 08:07 AM UTC:
I asked, because these standard meanings did not seem to make sense in your
statement. None of what you are saying has anything to do with piece values
or my statistical method for determining them. You are wondering now why
Shannon-type programs can compete with Human experts, which is a
completely different topic.

The statistical method of determining piece values does _not_ specify if
the entities playing the games from which we take the statistics are Human
or computer or whatever. The only thing that matters is that the play is of
sufficiently high quality that the scores have attained their asymptotic
values (w.r.t. play quality). And indeed, Larry Kaufman has applied the
method on (pre-existing) Human Grand-Master games, to determine piece
values for 8x8 Chess.

Piece values themselves are an abstract game-theoretical concept: they are
parameters of a certain class of approximate strategies to play the game
by. In these strategies the players do not optimize their Distance To Mate
(as it is beyond their abilities to determine it), but instead some other
function of the position (the 'evaluation').

In general both the approximate strategy and perfect play (according to a
DTM tablebase) do not uniquely specify play, as in most positions there
are equivalent moves. So the games produced by the strategy from a given
position are stochastic quantities. The difference between perfect play
and approximate strategies is that in the latter the game result need not
be conserved over moves: the set of positions with a certain evaluation
(e.g. piece makeup) contains both won and lost positions. Implicitly,
every move for the approximate player thus is a gamble. The only thing he
can _prove_ is that he is going for the best evaluation. He can only
_hope_ that this evaluation corresponds to a won position.

So far this applies to Humans and computers alike. And so does the
following trick: most heuristics for statically evaluating a position
(point systems, including piece values) are notoriously unreliable,
because the quantities they depend on are volatile, and can drastically
change from move to move (e.g. when you capture a Queen). So the players
'purify' the evaluation from errors by 'thinking ahead' a few moves,
using a minimax algorithm. That is enough to get rid of the volatility,
and be sure the final evaluation of the root position only takes into
account the more permanent characteristics (present in all relevant end
leaves of the minimax tree). This allows you to evaluate all positions
as if they were 'quiet', since the minimax will discover which positions are
non-quiet, and ignore their evaluation.
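
In code, that purification trick is the quiescence search of Shannon-type programs. A bare-bones sketch (the board methods are hypothetical stubs):

  INF = 10 ** 9

  def search(pos, depth, alpha=-INF, beta=INF):
      # Plain negamax with alpha-beta; at the horizon we do not trust
      # the volatile static evaluation, but call quiesce() instead.
      if depth == 0:
          return quiesce(pos, alpha, beta)
      for move in pos.legal_moves():
          score = -search(pos.make(move), depth - 1, -beta, -alpha)
          if score >= beta:
              return beta                    # cutoff
          alpha = max(alpha, score)
      return alpha

  def quiesce(pos, alpha, beta):
      # 'Stand pat' on the static evaluation (piece values plus
      # positional terms), but keep resolving captures until the
      # position is quiet, so only permanent features get scored.
      stand_pat = pos.evaluate()
      if stand_pat >= beta:
          return beta
      alpha = max(alpha, stand_pat)
      for move in pos.captures():
          score = -quiesce(pos.make(move), -beta, -alpha)
          if score >= beta:
              return beta
          alpha = max(alpha, score)
      return alpha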

So far no difference between Humans and Shannon-type computer programs.
Both will benefit if their evaluation function (in quiet positions)
correlates as well as it can with the probability that the position they
strive for can be won.

The only difference between Humans and computers is that the former use
very narrow, selective search trees, guided by abstract reasoning in their
choice of cut-moves of the alpha-beta algorithm and Late-Move Reductions.
Computers are very bad at this, but nowadays fast enough to obviate the
need for selectivity. They can afford to search full width, with only very
limited LMR, and on the average still reach the same depth as Human
experts. So in the end they have the same success in discriminating quiet
from volatile positions. But they still are equally sensitive to the
quality of their evaluation in the quiet position.

But, like I said, that has nothing to do with the statistical method for
piece-value determination. The piece values are defined as the parameters
that give the best-fit to the winning probabilities of each equivalence
class of positions (i.e. the set of positions that have equal evaluation).
These winning probabilities are not strictly the fraction of won positions
in the equivalence class: in the first place non-quiet positions are not
important, in the second place, they will have to be weighted as to their
likelihood of occurring in games. For example, in a KPPPPPKPPPPP end-game,
positions where all 5 white Pawns are on 7th rank, and all black Pawns are
on 2nd rank are obvious nonsense positions, as there is no way to reach
such positions without a decisive promotion being possible many moves
earlier. Similarly, 30-men positions with all 15 white men on ranks 5-8
and all black men on ranks 1-4 will not occur in games (or in the search
tree for any position in any game), and their (mis)evaluation will not
have the slightest effect on performance of the approximate strategy.

My method of determining piece values takes care of that: by playing real
games, I sample the equivalence class in proportion to the relevant
weights. The opening position of a certain material composition will quickly
diffuse in game state space to cover a representative fraction of the
equivalence class, with only little 'leakage' to nearby classes because
of playing errors (as the level of play is high, and blunders in the first
10 opening moves are rare). The actual games played are a sample of this,
and their result statistics are a measure of the probability that a relevant
position in this equivalence class will be won with perfect play (the
playing errors due to the fact that they were actually played out with
slightly imperfect play working in either direction, and therefore
cancelling out).
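
The best-fit idea can be made concrete with a toy calculation: encode each game as a material-imbalance vector plus a result, and recover the piece values as the logistic-regression weights that best predict the score. A sketch on synthetic data (the scale factor and the fake-game generator are stand-ins for a real game sample):

  import math, random

  PIECES = ["P", "N", "B", "R", "Q"]
  TRUE   = [1.0, 3.25, 3.25, 5.0, 9.75]   # used only to fake the games
  SCALE  = 0.4                            # Pawns -> log-odds, assumed

  def fake_game():
      d = [random.randint(-1, 1) for _ in PIECES]   # White minus Black
      p = 1 / (1 + math.exp(-SCALE * sum(v * x for v, x in zip(TRUE, d))))
      return d, 1.0 if random.random() < p else 0.0

  data = [fake_game() for _ in range(5000)]
  w = [0.0] * len(PIECES)
  for _ in range(300):                    # batch gradient ascent
      grad = [0.0] * len(PIECES)
      for d, result in data:
          p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, d))))
          for i, xi in enumerate(d):
              grad[i] += (result - p) * xi
      w = [wi + 2.0 * g / len(data) for wi, g in zip(w, grad)]

  # Express in Pawn units by dividing by the fitted Pawn weight; the
  # ratios drift toward roughly (1, 3.25, 3.25, 5, 9.75).
  print([round(wi / w[0], 2) for wi in w])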

Hans Aberg wrote on Mon, Apr 28, 2008 08:35 PM UTC:
H.G.Muller:
| I am not sure what 'brute force' you are referring to.

See
  http://en.wikipedia.org/wiki/Computer_chess

| What do you mean by 'classical theory'?

What was used before the days of computers. A better term might be 'deductive theory' as opposed to 'statistical theory', i.e., one which aims at finding the best play by reasoning, though limited in scope due to the complexity of the problem.

| What does it matter anyway how the piece-value system for normal Chess
| was historically constructed?

It is designed to merge with the other empirical reasoning developed for use by humans.

You might have a look at a program like Eliza:
  http://en.wikipedia.org/wiki/ELIZA
It does not have any true understanding, but it takes a while for humans to discover that. The computer chess programs are similar in nature, but they can outdo a human by searching through many more positions. If a human is compensated for that somehow (perhaps: allowed to use computers, to take back moves at will, or to make a new variant), then I think it will not be so difficult for humans to beat the programs. In such a setting, a statistical approach will fail sorely, since the human will merely play towards special cases not covered by the statistical analysis. The deductive theory would be successively strengthened until in principle the ideal best theory emerges. This latter approach seems to have been cut short, though, by the emergence of very fast computers that can exploit the weakness of human thinking, which is the inability to make large numbers of very fast but simple computations.
