# Comments/Ratings for a Single Item

I like the dummy rating idea too.

The computation seems to skip some ratings; see, e.g., Omega Chess, which has 8 ratings but only 2 scores on this page.

If you intend for ratings now to only apply to game quality, perhaps you should remove the ability to give a rating in a comment on a non-Game page?

I tried it out, then updated this page to sort first by an adjusted mean. To calculate it, I just added 15 to the numerator and 5 to the denominator. The sorting seems to be more accurate than it used to be.
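In code, that adjustment amounts to folding five dummy ratings of Average (3) into the mean. Here is a minimal Python sketch of the idea; the function name and the plain 1-to-5 rating lists are my own, not taken from the actual script:

```python
def adjusted_mean(ratings):
    """Mean with 5 dummy ratings of Average (3) mixed in:
    5 * 3 = 15 added to the numerator, 5 to the denominator."""
    return (sum(ratings) + 15) / (len(ratings) + 5)

# A lone Excellent rating no longer jumps to the top:
adjusted_mean([5])        # (5 + 15) / 6  = 3.33...
adjusted_mean([5, 5, 4])  # (14 + 15) / 8 = 3.625
```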

I like the idea of including dummy ratings. This is analogous to starting an Elo (or GCR) rating at 1500 and gradually changing it as games are played. This prevents someone from becoming a highly-rated Chess champion just by beating Bobby Fischer once, for example. Given that the largest sample size is 17, most are 10 or below, and about half are just two or one, 20 seems to be too many. So I'm thinking 5 dummy ratings of Average might work. Also, 5*3 is 15, which is a hundredth of 1500.

Of all the options you listed, I think the last one (making the scale -2 to 2, then summing all the scores) is the best one. But for another option, here's how they do it on BoardGameGeek:

People can rate games on a scale from 1 to 10. Games are then ranked based on the average score they receive. But to avoid having a game with a single score suddenly becoming the top-ranked game, each game also receives a set of "dummy scores", which are included in the average.

Here's how it could look here: our rating scale is from 1 to 5. Let's say each game starts off with 20 "dummy scores" of 3. (I picked 20 off the top of my head; I have no idea what number would be the best choice.) Then if one person rates a game 5 (excellent), that gives an average of (20*3+1*5)/21=3.095. If another game receives 10 ratings of 4 (good), that gives an average of (20*3+10*4)/30=3.333.
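As a sanity check on those figures, here's a small Python sketch of the dummy-score average (the names are mine, and BGG's actual formula may differ in its details):

```python
def dummy_average(ratings, dummies=20, dummy_value=3):
    # Average over the real ratings plus `dummies` dummy
    # scores of `dummy_value` (here: 20 dummy scores of 3).
    return (dummies * dummy_value + sum(ratings)) / (dummies + len(ratings))

round(dummy_average([5]), 3)       # one Excellent: (20*3 + 5) / 21  = 3.095
round(dummy_average([4] * 10), 3)  # ten Good:      (20*3 + 40) / 30 = 3.333
```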

What do you think?

Since some people here are mathematicians, I thought I would share my thoughts on sorting the results and ask for advice on how to do it better. I chose not to sort by mean first, because a game with a single excellent rating could end up with a higher mean than a game that got several excellent ratings and one lower rating. Given that the latter game has attracted more attention and popularity, it doesn't seem fair to count the game with one rating (that happens to be excellent) higher than it. Mode and median don't have this problem so much. I chose to go with mode first, because the same mode can be distinguished by size, and this gives an indication of popularity. If I went with median first and tried to distinguish the same medians with sample size, it wouldn't have the same effect, since a larger sample size increases the chance of there being ratings below the median.

The main problem with using the mode is that a sample may have no mode or multiple modes, in which case a single mode cannot be calculated. Currently, the script's methods for handling multiple modes are inconsistent. For two modes, it favors the lower mode if there are more Poor and BelowAverage ratings than Good and Excellent ratings, or the higher mode if there are more of the latter. If these are equal, it returns a value of Average. For three modes and for five modes (five tied values amounts to having no mode at all), it returns the median of the modes. But for four modes, it returns the mean of the modes. One thought I have is to make this all consistent by always returning the median of the modes. For one mode, this would be the mode itself; for two, this would be the mean of the two modes (the median of two values is their mean); for three and five, this would be the middle mode; and for four, this would be the mean of the two middle modes. Looking over the raw scores column, I see that most games do have single modes, a small number have two modes, and I didn't notice any with three or more.
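The "always return the median of the modes" rule is short to express in Python using `statistics.multimode`, which returns every value tied for most frequent. This is a sketch of the proposal, not the script's actual code:

```python
from statistics import median, multimode

def median_of_modes(ratings):
    # multimode() returns all values tied for the highest count,
    # so one mode yields itself, two modes yield their mean,
    # and three or five modes yield the middle one.
    return median(multimode(ratings))

median_of_modes([4, 4, 5])        # one mode: 4
median_of_modes([1, 1, 4, 4, 5])  # two modes, 1 and 4: 2.5
median_of_modes([1, 2, 3, 4, 5])  # five modes (i.e. no mode): 3
```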

One drawback to using the mode first is that not every rating has an effect on determining the mode. For example, the results 1, 1, 1, 1, 4, 4, 4, 5, 5, 5 would have a mode of 1 even though the 4s and 5s are greater in number together. If I solve this problem by using the mode only when it covers a majority of the ratings, that gives the same value as using the median: whenever the mode accounts for over 50% of the sample, the median equals the mode. So I have thought of dropping the mode and using the median instead, or perhaps of sorting by median before mode. A median is affected by all ratings, but not with the fine-grained precision of a mean. For the small samples of ratings each game has, this could be more suitable than relying on mode or mean first.
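That 1-through-5 example can be checked directly with Python's `statistics` module:

```python
from statistics import median, mode

ratings = [1, 1, 1, 1, 4, 4, 4, 5, 5, 5]
mode(ratings)    # 1   -- the four Poor ratings set the mode
median(ratings)  # 4.0 -- the 4s and 5s together outweigh them
```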

In general, I want the order to reflect both average rating and popularity. One thought was to total up the scores and sort by those totals, but this won't work well while Poor and BelowAverage count as 1 and 2 points, since even low ratings would then raise a game's total. Alternately, I could shift the points so that Poor is -2, BelowAverage is -1, Average is 0, Good is 1, and Excellent is 2, then sort the totals.
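A sketch of that shifted-points total (the mapping follows the point values above; the function name is mine):

```python
SHIFT = {1: -2, 2: -1, 3: 0, 4: 1, 5: 2}  # Poor .. Excellent

def shifted_total(ratings):
    # Low ratings now subtract from the total, so a pile of
    # Poor ratings can't outrank a few genuinely good ones.
    return sum(SHIFT[r] for r in ratings)

shifted_total([5, 5, 5, 2])  # 2 + 2 + 2 - 1 = 5
shifted_total([1, 1, 1])     # -6
```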


I see only six ratings for Omega Chess. Three of these are clearly from people not signed in as members, and these are not being counted. The one from David Short gives a link to his profile page, but that link is actually where the error lies. In the database entry for this comment, his name is given as David Short, not as DavidShort, which is his PersonID. If you look at the icon beside his name, it is a question mark in a circle, the icon used for non-members. So there are only two ratings from members signed in as members, and the script accurately counts them.

Since ratings on non-game pages don't count towards game ratings, and we have already had them for years, I figured there is no harm in keeping them.