Chess has long been a popular game for all ages, attracting some of the most celebrated and brilliant minds in history. But every era has produced legendary players who simply cannot be compared with players from other periods. Elo ratings are subject to inflation, and one can never know how a game might have turned out. Each game of chess is a battle, and the outcome depends solely on who plays better on the day. There is also a lot of grey area in what we are choosing: are we looking for the best, the greatest, the most talented, or the most influential players? Most of these judgments are subjective, and it would be hard to pin down the top players in any one of these categories, let alone all of them. So, to show you just how subjective player comparisons can be, let's take a look at some of the common methods used to compare them.
The Elo Rating System
The Elo rating system is one of the most common systems used to estimate the relative strength of chess players before they sit down at the board. It was created by physicist Arpad Elo, and it is the system by which player strength is measured to this day. Despite its popularity, it is by no means a perfect system. In fact, Arpad Elo himself said that it could only provide an estimate of skill; he once compared it to "the measurement of the position of a cork bobbing up and down on the surface of agitated water with a yardstick tied to a rope and which is swaying in the wind."
These days the Elo system is rather more accurate than that, but it still has its drawbacks. For instance, it is not suitable for measuring the relative strength of players from different eras who never played against one another; it can only offer accurate ratings for contemporary players. Furthermore, because of a system artifact known as rating inflation, the average Elo score of chess players has been gradually rising over the years. That may be why Magnus Carlsen, the most recent World Chess Champion, was able to achieve the record-breaking Elo score of 2882, while Garry Kasparov had peaked at 2851 some 15 years earlier; it is possible Kasparov was still the stronger player. It's impossible to say for sure, though Carlsen did famously hold Kasparov to a draw in a game played when Carlsen was only 13 years old, which may mean that the ratings are spot on.
The basics of how the Elo rating system works are simple: each player has an Elo rating represented by a number, which increases or decreases after every match played against another rated player. In essence, the winner takes points away from the Elo score of the losing player and adds them to his or her own. The relative difference in strength between the two players determines the magnitude of the exchange. For example, if a highly rated player wins a match against a player who is rated vastly lower, the transfer of points will be minimal. However, if the lower-rated player wins that match, they will earn themselves a large boost in rating. Because of this, the system is largely self-correcting: a player who is rated too low for his or her actual skill level will consistently outperform expectations and quickly make up the point difference.
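The exchange described above can be sketched in a few lines of Python. This is a simplified illustration of the standard Elo formulas, not FIDE's exact implementation; the K-factor of 20 is just one common choice (FIDE uses different K-factors depending on a player's rating and experience).

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A scores against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 20):
    """Return both players' new ratings after one game.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for an A loss.
    The points A gains are exactly the points B loses.
    """
    change = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + change, rating_b - change

# A 2400 player is expected to score about 91% against a 2000 player,
# so beating them earns less than 2 points -- but losing costs over 18.
print(update_elo(2400, 2000, 1.0))  # small gain for the favorite
print(update_elo(2400, 2000, 0.0))  # large loss after an upset
```

Note how the asymmetry falls straight out of the expected-score term: the favorite has almost nothing to gain and a lot to lose.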
FIDE classifies tournaments according to the average rating of the players. Each category is 25 rating points wide. Category 1 is for an average rating of 2251 to 2275, category 2 is 2276 to 2300, and so on. For women's tournaments, the categories are 200 rating points lower, so a Category 1 is an average rating of 2051 to 2075, etc.
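Since each category is a fixed 25-point band, the category number can be computed directly from the average rating. The helper below is a hypothetical illustration of the banding described above, not an official FIDE formula:

```python
import math

def fide_category(average_rating: float, womens: bool = False) -> int:
    """Tournament category from average rating (category 1 = 2251-2275,
    or 2051-2075 for women's events, which sit 200 points lower)."""
    base = 2050 if womens else 2250  # category 1 starts just above this
    return math.ceil((average_rating - base) / 25)

print(fide_category(2251))               # category 1
print(fide_category(2300))               # category 2
print(fide_category(2051, womens=True))  # women's category 1
```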
Chessmetrics
Chessmetrics is a comparison system created by statistician Jeff Sonas. It is loosely based on the Elo rating system, as most modern chess rating systems are, but it claims to have corrected for the rating inflation that plagues the Elo system and makes accurate cross-era comparison impossible. However, the Chessmetrics system doesn't do a good job of that either, considering that it takes the frequency of play into account: go a month without playing a rated game and your Chessmetrics rating begins to drop. Sonas himself had this to say on the subject: "Of course, a rating always indicates the level of dominance of a particular player against contemporary peers; it says nothing about whether the player is stronger/weaker in their actual technical chess skill than a player far removed from them in time."
The Glicko Rating System
The Glicko system was invented by Mark E. Glickman as an improvement on the Elo system. The Glicko-2 system is a further refinement and is used by the Australian Chess Federation and some online playing sites.
The key addition is the rating deviation (RD), which measures the accuracy of a player's rating, with one RD being equal to one standard deviation. For example, a player with a rating of 1500 and an RD of 50 has a real strength between 1400 and 1600 (two standard deviations from 1500) with 95% confidence: twice the RD is added to and subtracted from the rating to get this range. After a game, the amount the rating changes depends on the RD: the change is smaller when the player's RD is low (since their rating is already considered accurate), and also when their opponent's RD is high (since the opponent's true rating is not well known, so little information is gained). The RD itself decreases after playing a game, but it slowly increases over periods of inactivity.
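The confidence interval above is just the rating plus or minus two rating deviations. A minimal sketch of that calculation (the interval only, not the full Glicko update rules):

```python
def glicko_confidence_interval(rating: float, rd: float) -> tuple:
    """Approximate 95% interval for a player's true strength:
    rating +/- 2 rating deviations (two standard deviations)."""
    return rating - 2 * rd, rating + 2 * rd

# The example from the text: rating 1500, RD 50.
print(glicko_confidence_interval(1500, 50))  # (1400, 1600)
```

A newcomer with a high RD (say 350) gets a very wide interval, which is exactly why the system lets their rating move quickly at first.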
The Universal Rating System
The Universal Rating System (URS) is a system for rating chess players devised by Jeff Sonas, Mark Glickman, J. Isaac Miller, and Maxime Rischard. It was introduced to determine seedings and qualification for the 2017 Grand Chess Tour. The main difference from FIDE's Elo rating system is that it combines all three time controls (classical, rapid, and blitz) into a single rating list, whereas FIDE maintains three separate rating lists.
Warriors of the Mind
In the book Warriors of the Mind, authors Raymond Keene and Nathan Divinsky present what they claim is the only rating system able to directly compare the strength of chess players across different eras. The ratings assigned to players under this system are referred to as Divinsky numbers, and they don't correspond to Elo ratings at all. Based on their calculations, the authors came up with their list of the top 10 chess players.
This system is not widely accepted within the chess community, and critics claim that the ratings were assigned arbitrarily, with a considerable bias toward more modern chess players. There is not a lot of solid evidence to suggest that this is a credible rating system rather than just someone's published list of their top ten favorite chess players.
So, there you have it: now you know a few methods that can be used to rate players. What do you think is the best way to compare your favorite chess players? Do you have your own list of the greatest players, or do you think it is best to rate the most influential players, the ones who have had the biggest impact on chess? If so, stay tuned; we just might have a list soon that would make a great read for you!
You May Also Like: Women In Chess