In my first post on this topic, I examined whether a team’s performance against preseason predictions affected its success against the spread. I found some encouraging results from that analysis (specifically, a positive correlation between a team’s record versus its predicted record and its number of wins against the spread), but it left me with a few additional questions.
In the first analysis, I looked at four measurement points when testing my hypothesis: 1/4 of the way through the season, 1/2 of the way through, 3/4 of the way through, and the season’s end. However, I noticed a potential problem with this approach: since the 3/4 measurement also includes all the data from the first and second quarters of the season, any correlation observed at that point may really reflect those earlier quarters. It is possible that the correlation fades as the season progresses (this was my hypothesis, or at least that the strength of the effect subsides as bettors react to new information). As a result, I decided to test my hypothesis on each quarter by itself, in isolation.
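To make the quarter-by-quarter idea concrete, here is a minimal sketch in Python with pandas of tallying WATS within each isolated quarter rather than cumulatively. The column names and data are my own invention (the post doesn’t describe its data format), with placeholder values for a single team:

```python
import pandas as pd

# Hypothetical per-game data: one row per team-game, with a flag for
# whether the team covered the spread. Column names and values are
# placeholders, not the post's actual data.
games = pd.DataFrame({
    "team": ["ATL"] * 82,
    "game_num": range(1, 83),
    "covered_spread": [g % 2 for g in range(82)],  # alternating placeholder
})

# Assign each game to one quarter of the 82-game season
# (games 1-20, 21-41, 42-61, 62-82).
games["quarter"] = pd.cut(games["game_num"], bins=[0, 20, 41, 61, 82],
                          labels=[1, 2, 3, 4])

# WATS within each isolated quarter, rather than cumulatively.
wats_by_quarter = (games.groupby(["team", "quarter"], observed=True)
                        ["covered_spread"].sum())
print(wats_by_quarter)
```

The key point is that each quarter’s tally contains only that quarter’s games, so a correlation measured in quarter 3 can’t be inflated by what happened in quarters 1 and 2.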
In addition to testing the strength of the above-mentioned correlation in each isolated quarter of the season, I also wanted to see if there were any confounding variables to worry about. That is, I wanted to make sure that the increase I observed in WATS (Wins Against The Spread) was actually due to teams outperforming their preseason predictions, and not something else unaccounted for. Namely, in a previous analysis I observed a correlation between straight-up wins and WATS. It makes sense logically that teams beating their preseason predictions generally have more wins, so I saw this as a potential problem area. As a result, I decided to run a multiple regression to see the combined effects of straight-up wins and “wins minus projected wins” on WATS.
Results: I found that within each isolated quarter of the 2014-2015 NBA season, teams’ performances against their predicted win totals were positively correlated (all significant at p < .001) with the number of times they beat the spread. I also found that, true to my hypothesis, this effect lessened as the season progressed and the betting public slowly acclimated to new information about the true quality of each team: in quarter 1, the correlation between “wins minus predicted wins” and WATS was .75. In quarter 2 it fell to .73, in quarter 3 to .60, and in quarter 4 to .59.
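As a sanity check on what such a correlation looks like in code, here is a quick sketch using scipy’s `pearsonr` on made-up team-level numbers (the values below are illustrative only, not the actual 2014-2015 data):

```python
from scipy.stats import pearsonr

# Hypothetical per-team numbers for one quarter of the season:
# (wins minus predicted wins) and wins against the spread.
# Values are invented for illustration; the real analysis
# would use all 30 NBA teams.
wins_minus_predicted = [3.5, 1.0, -2.0, 0.5, -4.0, 2.0]
wats = [14, 11, 8, 10, 6, 12]

# Pearson correlation coefficient and its two-tailed p-value.
r, p_value = pearsonr(wins_minus_predicted, wats)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```

Running this once per quarter, on that quarter’s games only, gives the sequence of correlations reported above.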
As for the multiple regression, I found that at each point throughout the season (1/4, 1/2, 3/4, and full), “wins minus predicted wins” was positively correlated with WATS when controlling for straight-up wins, but straight-up wins were not significantly correlated with WATS when controlling for “wins minus predicted wins”. The coefficients for both independent variables are as follows:
- 1/4 Season
  - Straight-up wins coefficient: -.05 (p = .581, not significant)
  - Wins minus predicted wins coefficient: .51 (p < .001)
- 1/2 Season
  - Straight-up wins coefficient: -.02 (p = .808, not significant)
  - Wins minus predicted wins coefficient: .54 (p < .001)
- 3/4 Season
  - Straight-up wins coefficient: -.03 (p = .596, not significant)
  - Wins minus predicted wins coefficient: .47 (p < .001)
- Full Season
  - Straight-up wins coefficient: -.01 (p = .855, not significant)
  - Wins minus predicted wins coefficient: .37 (p < .001)
Analysis: The results from analyzing each quarter in isolation were pretty much in line with what I expected, as I stated above. It makes sense that bettors would slowly start to adjust their expectations as the season progresses, and preseason predictions would start to factor less in their decision-making. However, I find it interesting that preseason predictions still seemed to factor into teams’ performance against the spread, even in the final quarter of the season. It seems that prejudices and stigmas die hard in the minds of the betting public, and teams labeled as either “good” or “bad” will have a hard time shedding those reputations over the course of the season.
The most surprising part of this analysis to me was that there seemed to be no positive effect of straight-up wins on WATS when controlling for a team’s performance against expectations. In fact, in all quarters there was actually a slight negative effect (although one too small to be significant). This goes against a previous analysis I performed in which I found a strong effect of straight-up wins on WATS. However, in that analysis I did not perform a multiple regression or include “wins minus predicted wins” as a control variable. Ironically, I was expecting some of the effect of “wins minus predicted wins” to be diluted by straight-up wins, but the reverse turned out to be true. This just goes to show that I should never be too confident in my findings, and should always think about confounding variables.