Sac Chess. Game with 60 pieces. (10x10, Cells: 100)
H. G. Muller wrote on Sat, Dec 19, 2015 08:05 AM UTC:
> For example, if one incorrectly sets the value of a rook (or, I would opine for argument's sake, even a bishop) exactly equal to the value of a knight, I'd imagine in a number of playtest games the side with an extra rook would erroneously trade it for the extra knight of the opposing side, say when thinking the position was approximately equal in all respects.

Indeed, this is exactly what happens. The Rook side will needlessly squander its Rook for a Knight, and because the initial setup likely already gave the Knight side two Pawns' worth of compensation, it will badly lose after that. So making the programs erroneously believe that a certain trade is equal is one of the major pitfalls of this method. This especially holds for 1-on-1 trades, as opportunities for concerted multiple trades do not occur very frequently. The worst case is when two values differ by exactly a Pawn: Pawns are abundant, so X + Pawn for Y opportunities are not as rare as the others.

But you know which values have been given to the pieces in the opponent armies, so you know which of those are close to the initial estimate of the value, or one Pawn above/below it. I usually try to stay ~20 centiPawn (cP) away from those points. If needed you can make two test runs, one with a programmed value 20cP above what you think it should be, and one 20cP below it. If the results are nearly equal, there is no reason to distrust them. If they differ, the run that used the programmed value closest to what the score outcome suggests will obviously be the more reliable, and if the programmed value differs from the score-outcome value, you repeat the test with the latter. If the value suggested by the score is very close to that of a piece in the opponent army, you can repeat the test against armies of another composition, lacking the offending piece. It is always better to base the value assignment not on a single imbalance, but on a variety of imbalances anyway. (E.g. not only play A vs Q and A+P vs Q, but also A+P vs 2R, A+P vs 2N+B and A vs R+N+P.)

> Regarding when a preliminary value is assigned to an Archbishop that is at least slightly different than that of a Queen when pitting the two pieces against each other in playtesting (other material being equal at the start), ...

Well, obviously predicting the desirability of some trades the wrong way around will lead to unnatural play, which might affect the statistics of other trading opportunities compared to 'natural' games, which then affects the outcome. This is all possible in theory. In practice, however, you will be able to see it when it happens. This is why pondering about it is not the same as actually doing it. You can repeat the test with all kinds of different programmed values for A, and see how the resulting score varies with them. If it doesn't vary at all, apparently the problem does not occur in practice. In the worst case the value implied by the scores does depend significantly on the programmed value, and you then have to search for self-consistency, i.e. the value you have to program to get the same value out of the score.

With the usual resolution I am aiming for (~0.2 Pawn), however, I have never seen that happen. Trying to get more precise values is probably meaningless, as you would start to resolve all the higher-order corrections to the model of additive piece values. E.g. some pieces might do better against Knights, while others might do better against Bishops.
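To make the translation from match score to piece value concrete, here is a minimal Python sketch. The logistic Elo model and the calibration of one Pawn at roughly 100 Elo are illustrative assumptions, not something stated in the comment above.

    import math

    def score_to_advantage(score, elo_per_pawn=100.0):
        """Estimate the material advantage (in Pawns) behind a match
        score fraction, via the logistic Elo model. elo_per_pawn is an
        assumed calibration constant, not a measured one."""
        score = min(max(score, 1e-6), 1.0 - 1e-6)  # guard against 0% / 100%
        elo = -400.0 * math.log10(1.0 / score - 1.0)
        return elo / elo_per_pawn

    print(round(score_to_advantage(0.62), 2))  # a 62% score -> about +0.85 Pawn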
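The self-consistency search described above can be written as a simple fixed-point iteration. Here run_match is a hypothetical test harness, assumed to play a gauntlet with the piece programmed at a given value and return the score fraction for the side owning it; the same assumed logistic calibration as in the previous sketch is used.

    import math

    def implied_value(score, opponent_value, elo_per_pawn=100.0):
        """Value in Pawns implied by a match score, given the value of
        the material the piece was balanced against."""
        score = min(max(score, 1e-6), 1.0 - 1e-6)
        return opponent_value - 400.0 * math.log10(1.0 / score - 1.0) / elo_per_pawn

    def self_consistent_value(guess, opponent_value, run_match,
                              tol=0.05, max_iter=8):
        """Repeat the test with the score-outcome value until the value
        programmed into the engines reproduces itself in the result."""
        v = guess
        for _ in range(max_iter):
            measured = implied_value(run_match(v), opponent_value)
            if abs(measured - v) < tol:  # programmed and measured agree
                return measured
            v = measured                 # otherwise retest with the measured value
        return v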
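Basing the assignment on a variety of imbalances, as suggested above, amounts to a small least-squares fit; with a single unknown it reduces to an average. All numbers below are illustrative placeholders, not measured results, and the reference values for P, N, B, R and Q are assumptions.

    # Assumed reference values in Pawns (illustrative only).
    P, N, B, R, Q = 1.0, 3.25, 3.25, 5.0, 9.5

    # Each test: (balance of the known material, seen from the A side,
    #             measured advantage in Pawns for the A side).
    tests = [
        (-Q,           -0.20),  # A vs Q
        (P - Q,         0.70),  # A+P vs Q
        (P - 2*R,       0.30),  # A+P vs 2R
        (P - 2*N - B,   0.45),  # A+P vs 2N+B
        (-R - N - P,    0.05),  # A vs R+N+P
    ]

    # advantage = value(A) + balance, so each test estimates A directly;
    # combining them is just the mean (weight by games played if unequal).
    estimates = [adv - bal for bal, adv in tests]
    print(f"A ~ {sum(estimates) / len(estimates):.2f} Pawns")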
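As for the ~0.2 Pawn resolution: under the same assumed calibration one can estimate how many games a match needs before its score can resolve a difference that small, which illustrates why chasing more precision quickly becomes pointless.

    import math

    def games_needed(resolution_pawns, elo_per_pawn=100.0, sigmas=2.0):
        """Rough number of games before the match score resolves a value
        difference of resolution_pawns at the given significance level.
        Uses the worst-case per-game score variance of 0.25 (draws help)."""
        # Slope of the logistic curve at 50%: ln(10)/1600 score per Elo.
        d_score = math.log(10) / 1600.0 * resolution_pawns * elo_per_pawn
        return math.ceil(0.25 / (d_score / sigmas) ** 2)

    print(games_needed(0.2))  # ~1200 games for a 2-sigma distinction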