The conference formally known as the Big 10 seemingly had one of the worst weekends it could imagine, with most of its football teams either losing on the national stage or struggling in contests against perceived lower-level opponents.

So how bad was the B1G’s day?

To start, the 13 teams playing (Indiana was off) finished 2-11 against the Las Vegas point spread, with several teams falling well short of the game’s closing number.

Here’s a dot-chart of how each team did, relative to game point spreads. For example, Nebraska, which was favored by 35.5 points over McNeese State but only won by 7, had the conference’s worst day relative to the point spread expectation (-28.5).

On the whole, a conference finishing 2-11 ATS is bad; due to chance, and assuming each game’s ATS result is a coin flip, a sample of 13 games would only produce 2 wins or fewer about 1 in 100 times.
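That 1-in-100 figure comes straight from the binomial distribution. Here's a quick sketch in Python (the post's own code below is in R) of the coin-flip calculation:

```python
from math import comb

# Probability of covering at most 2 of 13 games, treating each
# ATS result as a fair coin flip: P(X <= 2) for X ~ Binomial(13, 0.5)
p_two_or_fewer = sum(comb(13, k) for k in range(3)) / 2**13
print(p_two_or_fewer)  # ~0.0112, i.e. roughly 1 in 90
```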

What made the conference’s bad day even worse is that so many of its results did not seem like coin flips; Nebraska, Michigan, Ohio State, Rutgers, and Purdue all finished more than 20 points worse than Las Vegas expected them to. Overall, the conference was about two touchdowns worse per game than expected (172.5 points total, or 13.2 points per game).

Fortunately, it’s relatively straightforward to quantify this ineptitude. Football game margins, relative to the point spread, follow a Normal distribution with mean 0 and a standard deviation of **σ**. In this case, **σ** represents the typical distance between a game’s eventual margin of victory and the margin of victory predicted by the point spread.

For NFL games, it’s been suggested that **σ** ≈ 13. Given that there’s likely more variability in NCAA game outcomes, let’s allow for **σ** ≈ 15, which, given the results shown here, seems fair. One option is to compute the probability of 13 games finishing, on average, 13.2 points worse than 0 using properties of the Normal distribution.
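That closed-form option follows from the fact that the sum of 13 independent N(0, **σ**) margins is itself Normal with mean 0 and standard deviation **σ**√13. A hedged Python sketch of the calculation (the post's own code is in R):

```python
from math import erf, sqrt

sigma = 15.0    # assumed per-game SD of margin vs. the spread
n = 13          # games played
total = -172.5  # the B1G's combined margin against the spread

# Sum of n independent N(0, sigma) margins is N(0, sigma * sqrt(n))
sd_total = sigma * sqrt(n)

def normal_cdf(x, mu=0.0, sd=1.0):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

p = normal_cdf(total, 0.0, sd_total)
print(p)  # ~0.0007, i.e. on the order of 1 in 1,400
```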

Instead, for reasons that we’ll see later, I chose to simulate game outcomes.

Using **σ** = 15 points, 13 teams would finish with results like the B1G’s (172.5 total points worse than the spread) about 1 in 1500 times.
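For what it's worth, that simulation is easy to reproduce outside of R. Here's a Python version under the same **σ** = 15 assumption:

```python
import random

random.seed(1)  # reproducible
sigma, n_games, target = 15.0, 13, -172.5
n_sims = 100_000

# Count simulated 13-game weekends at least as bad as the B1G's
hits = 0
for _ in range(n_sims):
    total = sum(random.gauss(0, sigma) for _ in range(n_games))
    if total <= target:
        hits += 1

print(hits / n_sims)  # on the order of 1 in 1,400
```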

Here’s a plot of the total point spread margin over 100,000 simulations of 13 games (such a large number of simulations is needed with rare outcomes).

So, it’s pretty clear that the B1G had a historically bad weekend.

That’s not all, however. The above graph looks *only* at total margin against the spread, and doesn’t account for the fact that the B1G also covered (i.e., finished with a positive ATS margin) in just 2 of its 13 games. Those two results are highly correlated, but I figured it was worth checking.

Running the same simulations, I looked for scenarios where 13 games produced just 2 positive numbers ATS and finished with a total margin of 172.5 points or worse. This happened in 241 of 500,000 simulations, or about 1 in every 2,000.
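A Python sketch of this joint check, again assuming **σ** = 15 (I use 200,000 simulations here rather than the 500,000 in the original, which changes nothing but runtime):

```python
import random

random.seed(7)  # reproducible
sigma, n_games, target = 15.0, 13, -172.5
n_sims = 200_000

# Count weekends where at most 2 of 13 teams cover AND the
# total margin vs. the spread is at least as bad as the B1G's
joint = 0
for _ in range(n_sims):
    margins = [random.gauss(0, sigma) for _ in range(n_games)]
    covers = sum(m > 0 for m in margins)  # games with a positive ATS margin
    if covers <= 2 and sum(margins) <= target:
        joint += 1

print(joint / n_sims)  # roughly 1 in 2,000
```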

Overall, I estimate the probability of the B1G’s weekend as follows:

**Event A** = Covering 2 of 13 games (or fewer): about **1 in 100**

**Event B** = Being outscored by point-spread expectations by 172.5 points or more in 13 games: **about 1 in 1,500**

**A ∩ B** = Covering 2 games or fewer, and being outscored by expectations by 172.5 points or more in 13 games: **1 in 2,000**

Given that there are about 12 weekends a year with the conference playing this many games, we’d expect the B1G to only have a weekend this bad every 165 seasons or so.

So, yes, it was a bad weekend for B1G football.

Like once a century or two bad.

*Notes*:

1) Considering that there are 10 conferences in the FBS, we’d only expect *any* conference to have a weekend this bad about once every 15 seasons.

2) The correlation between the standard deviation of the ATS margin and the game’s total (the expected total number of points scored in the game) is higher than you might expect, and definitely non-zero (0.35). Given the low totals in B1G games, using **σ** = 15 likely overestimates the true **σ** for the conference’s games this past weekend. Using a **σ** lower than 15 would actually decrease the likelihood of observing this past weekend’s results.

In this sense, these results are likely being slightly generous to the conference.

3) Here’s my *R* code. Thanks for reading!

# Performance against the spread, by team
Teams <- c("Illinois", "Nebraska", "Penn State", "Purdue", "Rutgers",
           "Wisconsin", "Northwestern", "Minnesota", "Iowa", "Maryland",
           "Michigan State", "Michigan", "Ohio State")
CDiff <- c(2, -28.5, 3.5, -24, -25, -7, -15, -3, -14, -4, -5.5, -27, -25)

Games <- data.frame(Teams, CDiff)
Games <- Games[order(-Games$CDiff), ]

# Dot chart of each team's result relative to the spread
par(mar = c(4, 5, 1, 1))
dotchart(Games$CDiff, labels = Games$Teams, cex = 1.2, pch = 8,
         xlim = c(-30, 30), xlab = "Performance against spread")
abline(v = 0, col = "red", lty = 2)

mean(CDiff)
sumBig <- sum(CDiff)

# Simulate 500,000 weekends of 13 games, with margins ~ N(0, 15)
b <- NULL; c <- NULL; d <- NULL; e <- NULL
sd <- 15
for (i in 1:500000) {
  a <- rnorm(13, 0, sd)
  b[i] <- sum(a)                                 # total margin vs. spread
  c[i] <- (sum(a) < sumBig)                      # worse than the B1G's total
  d[i] <- (sum(a < 0) >= 11)                     # 11+ teams fail to cover
  e[i] <- (sum(a < 0) >= 11 & sum(a) < sumBig)   # both at once
}
mean(c); mean(d); mean(e)

# Histogram of simulated totals, with the B1G's weekend marked
par(mar = c(2, 3, 3, 1))
hist(b, xlab = "", main = "Total points, relative to point spread")
points(sumBig, 2, pch = 16, col = "red", cex = 2.2)
legend(-200, 13000, "Big 1G", col = "red", cex = 1.3, pch = 16)
