I think the rating system in Ludum Dare (and on many other websites) is flawed. I don’t have a better solution, but I think the problem is worth thinking about.
Let’s say there are 1000 submissions and 1000 reviewers. If each submission receives a lot of reviews, then the average of the reviews will give a good estimation of the quality of the submissions.
However, if each reviewer only reviews a few submissions, each submission will only get a few reviews, which means that the average will yield a very poor and “noisy” estimation that doesn’t represent the opinion of the community as a whole.
This means that simply asking people to rate and averaging the ratings is not a good idea in this case.
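To make the noise problem concrete, here is a small simulation. It is only an illustration, not a proposal from the post: the quality values, noise model, and the "Bayesian average" remedy (shrinking each submission's score toward the global mean, a technique used by several rating sites) are my own assumptions.

```python
import random

def bayesian_average(ratings, prior_mean, prior_weight):
    # Shrink the raw average toward a global prior mean. Submissions
    # with few ratings stay near the prior; heavily reviewed ones
    # converge to their own raw average. prior_weight acts like a
    # number of "virtual" ratings at the prior mean.
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

def noisy_rating(true_quality):
    # Each reviewer reports the true quality plus Gaussian noise,
    # clamped to a 1-5 scale (assumed rating scale).
    return min(5.0, max(1.0, random.gauss(true_quality, 1.0)))

random.seed(0)
TRUE_QUALITY = 4.0  # hidden "true" quality of one submission
GLOBAL_MEAN = 3.0   # assumed average quality across all submissions

few = [noisy_rating(TRUE_QUALITY) for _ in range(3)]
many = [noisy_rating(TRUE_QUALITY) for _ in range(300)]

print(f"raw average over   3 ratings: {sum(few) / len(few):.2f}")
print(f"raw average over 300 ratings: {sum(many) / len(many):.2f}")
print(f"shrunk average over 3 ratings: {bayesian_average(few, GLOBAL_MEAN, 10):.2f}")
```

With only 3 ratings, the raw average can land far from the true quality from run to run, while the 300-rating average is stable. The shrunk estimate trades that variance for bias toward the global mean, which tends to rank sparsely reviewed submissions more fairly.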
I suggest that we invite our mathematician friends to come up with a smarter and more elegant solution for this. Wouldn’t that be awesome?