In my last post, I talked about grading the rankings of fantasy football experts using the root mean square error (RMSE), and listed some results for Week 9. This time, I took the data from ten analysts and examined their performance over Weeks 1-9. The numbers seem to imply a certain amount of groupthink:
Analyst | QB | RB | WR | TE | K | DST |
---|---|---|---|---|---|---|
FFToolBox.com | 8.0 | 14.5 | 21.1 | 6.7 | 7.0 | 6.9 |
FFToday.com | 7.8 | 15.2 | 20.8 | 7.3 | 7.0 | n/a |
(Yahoo) B. Funston | 7.7 | 14.9 | 21.4 | 7.2 | 7.3 | 6.6 |
(Yahoo) A. Behrens | 7.5 | 14.6 | 20.9 | 7.1 | 7.6 | 6.6 |
(Yahoo) B. Evans | 7.8 | 14.6 | 21.0 | 7.0 | 7.4 | 6.5 |
(Yahoo) S. Pianowski | 7.8 | 14.6 | 20.7 | 6.9 | 6.8 | 6.5 |
(ESPN) M. Berry | 7.9 | 14.8 | 21.8 | 7.6 | 7.6 | 6.4 |
(ESPN) C. Harris | 7.7 | 15.2 | 21.5 | 7.0 | 7.5 | 6.8 |
(ESPN) E. Karabell | 8.0 | 14.9 | 21.2 | 6.8 | 7.7 | 6.5 |
(ESPN) E. Kuselias | 7.9 | 15.0 | 21.2 | 7.1 | 6.8 | 6.3 |
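For anyone curious how a table like this gets built, here is a minimal sketch of the RMSE calculation. The function name and data structures (`rank_rmse`, a predicted ordering plus a player-to-actual-rank dict) are my own illustrative choices, not the post's actual code; the cap at N+1 follows the methodology notes below.

```python
import math

def rank_rmse(predicted, actual_rank, n):
    """RMSE between an analyst's predicted ranks and actual finishes.

    predicted: list of player names in the analyst's predicted order.
    actual_rank: dict mapping player name -> actual finishing rank.
    n: number of ranked slots at this position (e.g. 50 for WR).
    Finishes outside the top N (or missing entirely) are capped at N+1.
    """
    errors = []
    for i, player in enumerate(predicted[:n], start=1):
        actual = min(actual_rank.get(player, n + 1), n + 1)
        errors.append((i - actual) ** 2)
    return math.sqrt(sum(errors) / len(errors))
```

So a WR RMSE of 21 means the analyst's predicted ranks missed the actual finishes by about 21 spots in the root-mean-square sense.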
The results are pretty discouraging: all the analysts are basically the same! The biggest difference is 1.1, between the WR performance of ESPN's Matthew Berry (21.8) and Yahoo's Scott Pianowski (20.7). Roughly speaking, if Pianowski tells you that Randy Moss is the #1 WR for the week, Moss will typically finish somewhere in the top 21; if Berry tells you that Andre Johnson is the #1 WR for the week, Johnson will typically fall somewhere in the top 22. Is that a big enough difference to say that Pianowski is a better analyst than Berry?
Amusingly, if you sum up the errors for each analyst, you’ll find that Matthew Berry, face of ESPN’s fantasy football team, is the worst. Still, they are all so close that being best or worst doesn’t have much meaning.
All this homogeneity had me wondering if there was a bug somewhere in my code. As a sanity check, I decided to see how different the rankings of the analysts were from one another. To do this, I took the average of the standard deviation of the rankings for each player. The result:
Analysts | Avg(StdDev) |
---|---|
All | 3.91 |
ESPN | 3.00 |
Yahoo | 2.97 |
ESPN+Yahoo | 3.45 |
So you end up with an average standard deviation of about 4 for each player's ranking. That is a pretty small number when you consider that each of these analysts is ranking 40 running backs and 50 wide receivers. If you include only rankings in the top 10, the number plummets to about 1.43.
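As a sketch of this sanity check, here is one way the average-of-standard-deviations number could be computed. The function name and input format are my own assumptions (each analyst's rankings as a player-to-rank dict); how the original handled players missing from some analysts' lists is unknown, so this version only uses players ranked by every analyst.

```python
import statistics

def avg_rank_stddev(rankings):
    """rankings: dict of analyst name -> {player name: rank}.

    For each player ranked by every analyst, compute the standard
    deviation of his ranks across analysts, then average those
    standard deviations over all such players.
    """
    analysts = list(rankings.values())
    # Only consider players that appear in every analyst's list.
    common = set(analysts[0]).intersection(*analysts[1:])
    stddevs = [statistics.stdev([a[p] for a in analysts]) for p in common]
    return sum(stddevs) / len(stddevs)
```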
Even more interesting is the very clear illustration of groupthink. The ESPN guys have an average deviation of about 3 amongst themselves, as do the Yahoo guys. But combine the two groups and the number increases by about 15%. It's all very fascinating; the conclusion I draw is that these analysts are not significantly different from one another.
Notes about methodology
When calculating the RMSE table, I leveled the playing field by using the top N rankings for each position; see the table below:
Position | Players ranked |
---|---|
QB | top 20 |
RB | top 40 |
WR | top 50 |
TE | top 15 |
K | top 15 |
DST | top 15 |
The other thing to note about the scoring is that I capped the maximum error for situations where a player's actual output put him outside the top N rankings. If a player finished outside the top N, I squashed his actual rank to N+1. This avoids the situation where a player posts a goose egg and winds up effectively tied with the hundreds of other players who scored zero, which would produce an enormous and mostly meaningless error.
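The cap itself is a one-liner; this tiny sketch (my own naming, not the post's code) shows the idea:

```python
def cap_rank(actual_rank, n):
    """Squash any finish outside the top N down to N+1, so a
    zero-point week doesn't blow up the squared error."""
    return actual_rank if actual_rank <= n else n + 1
```

For example, with N = 50 for wide receivers, a WR who finished 137th is scored as if he finished 51st.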
Was it worth it?
I had fun doing this as a hobby project, but I was disappointed not to uncover a “super analyst”. All these guys are doing better than if you were to draw names out of a hat, but I wonder how much better they are than the average Joe Football. I am half-tempted to try ranking players myself and see if an amateur can compete with the experts.
I like your idea; I've often thought about doing the same, but honestly I balked at collecting the data to shove into Stata.
The experts are garbage and rarely deviate much from their pre-season rankings.
Is this still an interest of yours? I've been trying to come up with a predictive methodology based on weekly matchups and a given player's historical deviation from expected results. It would look at each player and each defense versus the position. If you have any interest in working something up, email me and I'll be more descriptive... if not, no worries.