Hoo boy. Another Ludum Dare coming up, and I am going to exacerbate
an existing problem by also talking about the rating system. But first,
I have a confession.
I like the rating system. Screw accuracy: it encourages people to
play your game. We’re all hungry to see who made the best games this
time around, so we play each other’s games, rate them, and give
feedback. The rating system creates more feedback, because it
encourages play. And I also like seeing the top games: the top games
are pretty solid games these days, even if (especially if) they’re
short. And on top of that, I love having the opportunity to get
quantitative feedback on my game!
So I came here to talk about improving the system's accuracy, because I
dug up a pretty simple statistical analysis of the voting system. It
shows the likelihood distribution of the top three overall-ranked games
from LD #23, and the picture is not pretty. (From: Let's make a voting system)
Well, it is kind of pretty, but the implications aren’t. It kind of
implies that the ranking (not the ratings! the ranking!) is a bit of a
farce. I provide the mathematical model, a discussion of community
goals, and a possible fix in my article.
Article: Let’s make a voting system
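The linked article has the actual analysis. As a purely illustrative toy, here is a short Monte Carlo sketch of the same phenomenon: when each game gets only a handful of noisy ratings, the truly best game often does not land at #1. Every parameter here (number of games, votes per game, noise level, the uniform quality model) is made up for illustration and is not taken from the article or from Ludum Dare's real data.

```python
import random
import statistics

def top_rank_accuracy(n_games=100, n_ratings=20, noise=1.5,
                      trials=500, seed=1):
    """Toy model: how often does the truly best game end up ranked #1?

    Hypothetical setup, not the real LD voting system: each game has a
    fixed 'true' quality drawn uniformly from [1, 5], and its observed
    score is the mean of n_ratings votes, each perturbed by Gaussian
    noise with standard deviation `noise`.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        true_quality = [rng.uniform(1, 5) for _ in range(n_games)]
        best = max(range(n_games), key=true_quality.__getitem__)
        # Observed score: mean of a few noisy votes per game.
        observed = [
            statistics.mean(rng.gauss(q, noise) for _ in range(n_ratings))
            for q in true_quality
        ]
        if max(range(n_games), key=observed.__getitem__) == best:
            wins += 1
    return wins / trials

print(top_rank_accuracy())                            # 20 noisy votes per game
print(top_rank_accuracy(n_ratings=200, noise=0.5))    # more, cleaner votes
```

The point of the sketch is only that the #1 slot is a high-variance statistic: with few votes per game, the observed means of the top contenders overlap heavily, so the *ranking* can shuffle even when the *ratings* are individually reasonable.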