This is a quick post visualizing my new Zenon Rankings for faceoffs. For an introduction to my Zenon Rankings and Elo-type models see this post.

The plot below shows 8 players throughout the 2015-16 season. As mentioned in the introductory post, my model assumes that players start with an initial ranking of 1500 and a deviation of 50. As players play more and more games, their deviation parameters drop because we deem their rankings to be more and more reliable. A large deviation also means that we adjust a player's ranking more aggressively in response to results.
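As a rough sketch of those mechanics, here is a Glicko-1 style single-result update. This is my illustrative reconstruction, not necessarily the Zenon model's exact implementation; the constant `Q` and the function names are the standard Glicko-1 ones, and the deviation both shrinks after each result and scales how far the ranking moves.

```python
import math

Q = math.log(10) / 400  # standard Glicko-1 scaling constant

def g(rd):
    """Dampens the impact of an opponent whose own rating is uncertain."""
    return 1 / math.sqrt(1 + 3 * (Q * rd / math.pi) ** 2)

def expected(r, r_opp, rd_opp):
    """Expected probability of winning a faceoff against this opponent."""
    return 1 / (1 + 10 ** (-g(rd_opp) * (r - r_opp) / 400))

def glicko_update(r, rd, r_opp, rd_opp, score):
    """One-faceoff Glicko-1 update; score is 1 for a win, 0 for a loss."""
    e = expected(r, r_opp, rd_opp)
    d2 = 1 / (Q ** 2 * g(rd_opp) ** 2 * e * (1 - e))
    # Deviation shrinks: more faceoffs -> a more reliable ranking.
    new_rd = math.sqrt(1 / (1 / rd ** 2 + 1 / d2))
    # The rating step is proportional to the (new) deviation squared,
    # so uncertain players move further on the same result.
    new_r = r + Q * new_rd ** 2 * g(rd_opp) * (score - e)
    return new_r, new_rd
```

Under these settings, a player at the initial deviation of 50 gains several points from a single won draw against an even opponent, while a player at deviation 25 gains only a fraction of that from the identical result.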

Winning faceoffs, particularly against tough opponents, increases a player's rating, and losing lots of faceoffs, especially to weak opponents, decreases it. In my other posts I focused on displaying the final Zenon Rankings, i.e. each player's ranking after he took his last faceoff in the time frame of the model. This captures only some of the information that an Elo-type model can give us. A well-specified Elo-type model can detect improvements and declines during the season, and sometimes blending these rankings over a time period can be more informative about actual ability than the final rankings alone.
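The opponent-strength effect falls straight out of the standard Elo expected-score formula: beating a favourite produces a larger "surprise" (score minus expectation), and therefore a larger rating gain, than beating an underdog. A minimal sketch, using the usual logistic curve with the conventional 400-point scale (illustrative numbers, not the Zenon model's actual parameters):

```python
def expected_score(r, r_opp):
    """Expected win probability for a player rated r against r_opp."""
    return 1 / (1 + 10 ** ((r_opp - r) / 400))

# A 1500-rated player is expected to win ~36% of draws against a
# 1600-rated opponent but ~64% against a 1400-rated one, so a win
# over the stronger opponent moves the rating further.
e_vs_strong = expected_score(1500, 1600)
e_vs_weak = expected_score(1500, 1400)
```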

*The bolded line represents the weighted average Zenon Ranking of 1507. Players who ended the season above the line are considered by the model to be above-average drawmen, expected to win more than 50% of their draws against neutral competition. The opposite is true of those below the line. The numbers on the x-axis represent days of the 2015-16 season. There were 226 days in the 2015-16 season, including the post-season, on which games were played. As you can see by looking at the Nuge or McDavid, who played on teams that missed the playoffs, the post-season began on approximately day 180.*

All of the players in the graph start at the same point, but they quickly diverge based on their initial results because of the deviance parameter. Although these initial divergences are drastic, they are not condemning: it is quite possible to recover from a low ranking early in the season. Just look at the green line depicting Tyler Seguin for an example. On average, however, players who diverge downwards quickly do not end up above average by the end of the season. This is part of the reason the Glicko model outperformed a standard Elo model in this application: Elo models do not have deviance parameters, so all players are adjusted equally regardless of the number of faceoffs they have taken.
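The contrast can be sketched as follows. In a plain Elo model every player moves by the same fixed K-factor per unit of surprise; in Glicko the effective step size scales with the player's deviation (RD), so established players move less than newcomers. The constants here are illustrative, not the Zenon model's actual settings:

```python
import math

K_ELO = 16  # hypothetical fixed Elo K-factor

def elo_step(rating, expected, score, k=K_ELO):
    """Standard Elo update: the same K for every player."""
    return rating + k * (score - expected)

def glicko_effective_k(rd, q=math.log(10) / 400):
    """Rough Glicko step size, ~ q * RD^2 (ignoring opponent terms)."""
    return q * rd ** 2
```

With these numbers, a newcomer at RD 50 moves roughly four times as far per result as a veteran at RD 25, while the Elo step is identical for both.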

Looking at the purple line representing Evgeni Malkin, we see that his ranking stays constant over the two stretches when he was hurt: when a player is not playing, our model doesn't update his ranking or deviation parameter. We can also notice that he seemed to improve somewhat during the playoffs, as represented by the uptick at the end of the season. It is too early to tell exactly how sensitive this model is to real improvements, but assuming the model is a decent measure of actual ability (it does have some predictive power, as outlined in the introductory post), we can say that Malkin was a better drawman in the playoffs than in the period just before he was injured. Whether this is because he was taking draws more seriously or playing injury-free is another matter.

*The red line represents Leon Draisaitl's ranking after each day of the 2015-16 season, including playoffs. The dark grey error bars are 95% confidence regions based on Draisaitl's deviance parameter and ranking. Here we can see that Draisaitl is approximately league average at taking faceoffs.*

Here is a visualization of how the deviation parameter works. At the beginning we assume that Leon Draisaitl's true value is highly likely to lie between 1400 and 1600. As he actually takes faceoffs, his rating adjusts and his deviation shrinks. The error bars around the red line represent 95% confidence regions based on the deviance parameter. The dotted line on the graph roughly marks the beginning of the playoffs. Since the Oilers didn't make the playoffs, Draisaitl never took any playoff faceoffs, and accordingly neither his ranking nor his deviation changed after that point. But even before the dotted line we can see that his deviance became less volatile. One feature of the Glicko model that Mark Glickman originally recommended is some sort of lower bound or slowing mechanism on the deviance parameter, so that the model doesn't reach a state of stasis. We can see the consequences of this specification visually in the graph above.
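Both pieces are simple to compute. The confidence region is just the ranking plus or minus 1.96 deviations (the initial 1500 ± 1.96 × 50 gives roughly the 1400–1600 band above), and a floor on the deviation keeps the ratings from ever freezing completely. A hedged sketch, where the floor value `RD_FLOOR` is hypothetical rather than the Zenon model's actual setting:

```python
import math

def confidence_interval(r, rd, z=1.96):
    """95% interval for the true rating given ranking r and deviation rd."""
    return (r - z * rd, r + z * rd)

RD_FLOOR = 25  # hypothetical lower bound so the model never reaches stasis

def shrink_rd(rd, d2):
    """Shrink the deviation after a result, but never below the floor."""
    new_rd = math.sqrt(1 / (1 / rd ** 2 + 1 / d2))
    return max(new_rd, RD_FLOOR)
```

A floored deviation means even a veteran who has taken thousands of draws keeps a minimum step size, so a genuine mid-season improvement can still pull his ranking upward.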

All of the 2015-16 Zenon Rankings are here.

For the complete rankings for 2008-2016, check here.

And for a refresher on the Zenon Model check here.

If there are any specific players or teams that you would like me to look into, just let me know and I'll get back to you. I'm always looking for feedback on what readers find useful.