Week 14: Spread Picks, Results and Analysis

by thesanction1

It was a moderately successful week. The new algorithm got off to a decent start – looking forward to next week's picks. This is the kind of week that doesn't warrant much commentary (57% wins), so let's get on with it.

Overall performance – week: 8 of 14 (57%), season: 27 of 55 (49%) 

[Chart: Performance_OverallPicks_Season_2013_Week_14]

SCI greater than 1.0 – week: 7 of 13 (54%), season: 22 of 43 (51%)

[Chart: Performance_HighConf1_Season_2013_Week_14]

SCI greater than 2.0 – week: 5 of 10 (50%), season: 11 of 25 (44%) 

[Chart: Performance_HighConf2_Season_2013_Week_14]

SCI greater than 3.0 – week: 4 of 7 (57%), season: 8 of 13 (62%)

[Chart: Performance_HighConf3_Season_2013_Week_14]

With the new algorithm, the picks come out with larger SCI values on average. I’m adding two more levels of SCI confidence here (4.0 and 5.0) to track how these higher SCI picks perform.

SCI greater than 4.0 – week: 3 of 5 (60%), season: 3 of 5 (60%)

[Chart: Performance_HighConf4_Season_2013_Week_14]

SCI greater than 5.0 – week: 2 of 3 (67%), season: 2 of 3 (67%)

[Chart: Performance_HighConf5_Season_2013_Week_14]
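For anyone curious how these buckets get produced, here's a minimal sketch of the bookkeeping in Python. The pick data and the record_at_threshold helper are made-up placeholders for illustration – not the actual Week 14 picks or the algorithm itself:

```python
# Minimal sketch: win/loss record for picks above an SCI threshold.
# The picks below are invented; each entry is (sci, covered).

def record_at_threshold(picks, min_sci):
    """Return (wins, total) over all picks whose SCI exceeds min_sci."""
    qualifying = [covered for sci, covered in picks if sci > min_sci]
    return sum(qualifying), len(qualifying)

picks = [(0.8, True), (1.4, False), (2.1, True), (3.3, True),
         (4.2, False), (5.6, True)]

for threshold in (1.0, 2.0, 3.0, 4.0, 5.0):
    wins, total = record_at_threshold(picks, threshold)
    pct = 100.0 * wins / total if total else 0.0
    print(f"SCI > {threshold}: {wins} of {total} ({pct:.0f}%)")
```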

One of the toughest things about judging whether you've got a tool with real predictive power is the small sample size of games to predict. In an effort to get more diagnostic information about the algorithm's performance, I'm producing the following grid, which describes, at each SCI threshold, the size of the margin by which picks covered or missed the spread. What we hope to see in this grid is that the games picked correctly are picked correctly by a wider margin than the margin by which the missed games are missed.

Week margin of victory statistics:

[Chart: Performance_WeekWinMarginGrid_Season_2013_Week_14]

This grid might be a bit confusing, so let me break it down. A larger “Win.Margin” implies that picks have covered the spread by a larger margin. Conversely, a smaller “Loss.Margin” implies that picks have failed to cover the spread by a larger margin. “Margin.Difference” is then the difference between these two numbers – a positive “Margin.Difference” means the games that were picked correctly were, on average, picked correctly by a wider margin than the margin by which the incorrect picks missed. A negative “Margin.Difference” means the opposite: the correct picks covered by a smaller average margin than the average margin by which the incorrect picks missed.
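To make those column definitions concrete, here's a hypothetical sketch of how one row of the grid could be computed. I'm assuming signed cover margins (positive means the pick covered by that many points, negative means it missed), under which convention the “difference” works out to the sum of the two signed averages; the numbers are invented:

```python
# Hypothetical margin-grid row. Each pick is (sci, cover_margin), where
# cover_margin is signed: +7.5 covered by 7.5, -3.0 missed by 3.0.

def margin_grid_row(picks, min_sci):
    """Win.Margin, Loss.Margin, Margin.Difference for picks with SCI > min_sci."""
    margins = [m for sci, m in picks if sci > min_sci]
    wins = [m for m in margins if m > 0]
    losses = [m for m in margins if m <= 0]   # pushes lumped with misses here
    win_margin = sum(wins) / len(wins) if wins else 0.0
    loss_margin = sum(losses) / len(losses) if losses else 0.0
    # With a signed Loss.Margin, win_margin - abs(loss_margin)
    # is the same as win_margin + loss_margin.
    return win_margin, loss_margin, win_margin + loss_margin

picks = [(1.2, 7.5), (2.4, -3.0), (3.1, 10.0), (4.5, -12.5), (5.2, 4.0)]
for threshold in (1.0, 2.0, 3.0, 4.0, 5.0):
    w, l, d = margin_grid_row(picks, threshold)
    print(f"SCI > {threshold}: Win.Margin={w:+.1f}, "
          f"Loss.Margin={l:+.1f}, Margin.Difference={d:+.1f}")
```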

Here is the same grid for the picks this season (tracked from Week 11):

Season margin of victory statistics:

[Chart: Performance_SeasonWinMarginGrid_Season_2013_Week_14]

What does this information really mean? We shouldn't make too much of it – the most important thing is whether we're picking games correctly. If we covered the spread by 1 point in every game, we'd still be 100% for the week. But if the algorithm is picking up on information relevant to how teams actually match up against one another, we'd expect that even when we lose picks, we lose them by a smaller margin than the margin by which we win the picks we win. It looks a bit worse when the algorithm misses on a game that is an absolute blowout relative to the spread, because one key thing contributing to a blowout is, theoretically, a tangible advantage of one team over the other (as opposed to something more random and less predictable, like winning the turnover battle or getting key calls from the referees).

To the extent that a tangible advantage exists, we'd like the algorithm to pick up on it. Every game contains a large element of randomness, but on average we expect this randomness to have a neutral effect on our pick accuracy – looking at the longer-term win/loss margins can give us a sense of just how lucky or unlucky we've been.
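As a rough back-of-the-envelope illustration of that last point (just an aside, not part of the picking algorithm), here's how far the season record sits from a pure coin flip:

```python
# How many binomial standard deviations a record sits from a 50% coin flip.
import math

def sd_from_coin_flip(wins, games):
    expected = games / 2.0
    sd = math.sqrt(games * 0.25)   # binomial SD at p = 0.5
    return (wins - expected) / sd

# Season record through Week 14: 27 of 55.
print(f"{sd_from_coin_flip(27, 55):+.2f} SD from a coin flip")
```

At 27 of 55, we're about 0.13 standard deviations below even – well inside the noise, which is exactly why the margin grids above are worth tracking alongside the raw win percentage.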

That’s all for now – hope everyone has a great week!