# So you want a graduate degree in statistics?

After six years of graduate school – two at UMass-Amherst (MS, statistics), and four more at Brown University (PhD, biostatistics) – I am finally done (I think). At this point, I have a few weeks left until my next challenge awaits, when I start a position at Skidmore College as an assistant professor in statistics this fall. While my memories are fresh, I figured it might be a useful task to share some advice that I have picked up over the last several years. Thus, here’s a multi-part series on the lessons, trials, and tribulations of statistics graduate programs, from an n = 1 (or 2) perspective.

Part IV: What I wish I had learned in my graduate program in statistics (with Greg Matthews)

The point of this series is to be as helpful as possible to students considering statistics graduate programs now or at some point later in their lives. As a result, if you have any comments, please feel free to share below. Also, I encourage anyone interested in this series to read two related pieces:

# Regression or Reversion? It’s likely the latter

With interest in statistical applications to sports creeping from the blogosphere to the mainstream, more writers than ever are interested in metrics that can more accurately summarize or predict player and team skill.

This is, by and large, a good thing. Smarter writing is better writing. A downside, however, is that writers without formal training in statistics are forced to discuss concepts that can take more than a semester’s worth of undergraduate or graduate coursework to flesh out. That’s difficult, if not impossible, and arguably unfair.

One such topic that comes up across sports is the concept of regression toward the mean. Here are a few examples of headlines:

Regression to the mean can be a bitch! (soccer)

Clutch NFL teams regress to the mean (football)

Beware the regression to the mean (basketball)

30 MLB players due for regression to the mean (baseball)

Avalanche trying to stave off regression and history (hockey)

In each case, the regression (i) sounds scary, (ii) applies to over-performance, not under-performance, and (iii) is striving really hard to reach an exact target, in these examples a vaguely specified ‘mean.’

From a statistical perspective, however, regression toward the mean requires strict assumptions and precise conditions, which are almost never discussed in practice. As a result, examples that refer to a regression to the mean may be ill-informed, and are often better described by a similar-sounding but more relaxed alternative.

Using the notation and descriptions in Myra Samuels’ 1991 paper in the American Statistician, “Statistical Reversion toward the mean: More universal than regression toward the mean,” here’s a quick primer through the context of sports.

What is regression towards the mean?

Let X and Y be a pair of random variables with the same marginal distribution and common mean µ. Most often in sports, X and Y are simply team/individual outcomes that we are interested in measuring and describing. For example, X could be the batting average of a baseball player through July 1, and Y his average from July 2 through the end of the season. In this example, µ represents that player’s probability of getting a hit.

The definition of regression toward the mean is based on the regression function, E[Y|X = x]. That is, conditional on knowing one variable (X = x), what can we say about the other? Formally, regression toward the mean exists if, for all x > µ,

µ < E[Y|X = x] < x,

with the reverse holding when x < µ.

This is a fairly strict requirement. For an outcome above a player or team’s true talent, we can expect that the ensuing outcome, on average, will lie in between µ and the original outcome. Linking to linear regression, for any initial observation x, the point prediction of y is regressed towards an overall mean representative of that subject. However, y will still exhibit some natural variation above and below the regression line; some points will fall closer to the mean, and others further away.
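To make the inequality concrete, here’s a minimal simulation sketch, with all numbers invented: players’ true talent varies around a league mean of 0.260, and each half-season batting average is talent plus independent noise. Conditioning on a hot first half, the second-half average lands strictly between the league mean and the hot start.

```python
import random

random.seed(1)

# A population of hitters: true talent varies around a league mean mu,
# and each half-season average is talent plus independent noise.
# All of these numbers are made up for illustration.
mu, talent_sd, noise_sd, n = 0.260, 0.020, 0.030, 200_000
pairs = []
for _ in range(n):
    talent = random.gauss(mu, talent_sd)
    x = random.gauss(talent, noise_sd)  # first-half average
    y = random.gauss(talent, noise_sd)  # second-half average
    pairs.append((x, y))

# Condition on a hot first half: X near 0.320, well above mu.
hot = [y for x, y in pairs if 0.315 < x < 0.325]
avg_y = sum(hot) / len(hot)
print(round(avg_y, 3))  # lands strictly between mu (0.260) and x (~0.320)
```

The second-half averages still scatter above and below this conditional mean, just as points scatter around a regression line.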

There are easy pitfalls when it comes to applying regression toward the mean in practice. The most common one is assuming that what goes up must come down. For example, assuming that players or teams become more and more average over time is not regression toward the mean. A second misinterpretation is linking regression toward the mean with the gambler’s fallacy, which entails assuming that a team or player that was initially lucky is then going to get less lucky. This is also not true. The probability of a fair coin landing heads, given that it landed heads five, ten, or even fifteen times in a row, remains at 0.5.  Such misinterpretations are frequent in sports, particularly when describing team performance with respect to point spreads or performance in close games.
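The coin claim is easy to verify by simulation; this toy sketch conditions on five straight heads and checks the sixth flip.

```python
import random

random.seed(7)

# Flip sequences of six fair coins; among sequences that open with five
# straight heads, what fraction land heads on the sixth flip?
streaks, heads_next = 0, 0
for _ in range(1_000_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):        # five heads in a row
        streaks += 1
        heads_next += flips[5]

print(round(heads_next / streaks, 2))  # about 0.50 -- the streak is irrelevant
```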

While it’s easy to confuse regression toward the mean with such scenarios, there’s good news, in the form of some easy-to-understand alternatives.

What’s the better alternative?

To start, replacing ‘regression’ with ‘reversion’ relaxes the assumptions presented above while still implying that extreme observations are more likely to be followed by less extreme ones. More often than not, when writers speak of regression to the mean, using reversion is sufficient and accurate. Furthermore, Samuels proves mathematically that ‘regression toward the mean implies reversion toward the mean, but not vice versa.’ Namely, reversion is a more relaxed alternative; the conditional mean of the upper or lower portion of a distribution shifts, or reverts, toward an unconditional mean µ.

For example, in the headlines listed above, good soccer teams, MLB players hitting for high numbers, and the Colorado Avalanche were all more likely to revert to a more standard form that was indicative of their true talent. No regression equation is necessary.

In addition to generally being the more appropriate term, the word reversion has a side benefit: it is more interpretable when applied to outcomes that initially fell short of expectations. It sounds natural, for example, to expect an MLB batter hitting 0.150 to revert to form; meanwhile, it doesn’t make sense to claim that the same batter will regress, given the negative connotations of that word.

And is it regression/reversion ‘to’ or ‘toward’ the mean?

Well, it depends. While increased use of the word reversion is part of the solution, more precise writing should also consider both the outcome of interest and that outcome’s expected value. For example, here are two examples of over-performance:

Mike Trout hits 0.500 in his first ten games of the season.

Mike Trout tosses a coin 10 times, landing nine heads.

And here’s the same sentence to describe our future expectations – can you tell which one is accurate?

Mike Trout’s batting average will revert to the mean.

Mike Trout’s ability to land heads will revert to the mean.

In the first example, the outcome of interest is Mike Trout’s probability of getting a hit. Because we can comfortably say that Mike Trout is better than the league average hitter, while his batting average is going to come down, it is reverting towards an overall average, but not to the overall average.

Meanwhile, unless Trout can outduel the Law of Large Numbers, I can comfortably say that in the long term, his observed ability to land heads will revert to a probability of 0.5. In this silly example, the second statement is the more precise one.

Anything else worth discussing?

Well, maybe. In searching for some of the examples used above, I found it strange how little was written of the one word that tends to encompass much of a player’s performance above or below his or her true talent.

Luck.

The obvious aspect linking, say, the Colorado Avalanche winning games while being outshot and Mike Trout tossing coins and landing heads, is that each was on the receiving end of some lucky breaks. So while we expect some type of reversion to or towards a more typical performance, that’s no fault of the Avalanche or Trout. With outcomes that are mostly (or entirely) random, variability above or below the league average is simply luck. As a result, there’s nothing for the Avalanche, Trout, or even us to be scared of. We wouldn’t tell Trout to fear a balanced coin, nor should we tell Avalanche fans to beware of reversion towards a more reasonable performance given their team’s shot distribution.

The issue here lies not in a distinction between regression and reversion, but in a deeper and more serious problem: humans have a poor grasp of probability. In sports (and likely in other areas of life), lucky outcomes are all too often touted as clutch, while unlucky players or teams are labeled chokers. It’s standard practice to use terms like savvy to describe the Patriots’ win over Seattle, for example. A more careful writer, however, would perhaps recognize that the Patriots were on the better end of a 50-50 coin toss, from more or less the start of the game all the way until the end (in more ways than one; the game closed as a near pick’em at sportsbooks. Even bettors couldn’t nail down a winner).

Writes Leonard Mlodinow in The Drunkard’s Walk,

“the human mind is built to identify for each event a definite cause and can therefore have a hard time accepting the influence of unrelated or random factors.”

It’s difficult and counterintuitive to describe an outcome in sports as lucky. However, that’s what many of them are.

So while it may sound trendy to toss around terms like ‘regress to the mean,’ it is often more accurate, and certainly simpler, to propose that some luck was involved in the initial outcome. As a result, a decline from overperformance is nothing more than a player or team, much like a coin tosser whose run of heads has ended, not getting as lucky as they initially had been.

# JSM 2015

There’s a fun session at JSM 2015 on referee decision making in sports, held Wednesday at 10:30.

I’m presenting some new work on sideline pressure in the NFL that appears to impact referee behavior. For defensive judgment penalties, including pass interference and aggressive calls like unsportsmanlike conduct and personal fouls, we find statistically and practically significant differences in call rates based on which sideline the play occurred in front of. There are also significant differences in the rate of holding calls on outside run plays.

Here are my slides, and here’s a more technical paper. I encourage feedback!

# One soccer ref makes every judgement decision. Is that absurd?

In last night’s Gold Cup semi-final between Mexico and Panama, Mexico escaped with a 2-1 extra-time victory. Like many recent CONCACAF games, a few judgment calls more or less decided the outcome. This game included an early red card to a Panama player, and a late penalty kick awarded to Mexico. See Deadspin for highlights here.

Much of the commentary after the game ripped the game’s referee, American Mark Geiger. However, I’m not quite sure Geiger’s to blame. Specifically, while I’m not smart enough to get into the technicalities of any soccer call, I did notice that Geiger was forced to make a red card decision from at least 40 yards away. This seems absurd.

Using the dimensions of the fields/rinks in the NFL, NBA, NHL, and FIFA, as well as each organization’s respective number of officials, I estimated the amount of square footage that each ref is responsible for. For example, three NBA referees cover 4,700 square feet, or about 1,600 per ref.

Here’s a chart with the estimated square-footage (in thousands) covered by each ref.

It’s no contest. A soccer referee covers about 7 times as much ground in a game as NFL and NHL ones, and about 50 times as much ground as NBA refs.
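For anyone who wants to reproduce the chart, here’s a back-of-the-envelope sketch. The dimensions (in feet) and crew sizes below are my own rough assumptions: FIFA pitch sizes vary by venue, and I count only the officials making primary calls, so NHL linesmen and soccer assistant referees are excluded.

```python
# Rough playing-surface coverage per official. All dimensions and crew
# sizes are approximate assumptions for illustration.
surfaces = {
    "NBA":  (94,  50,  3),   # court length, width, number of referees
    "NHL":  (200, 85,  2),   # the two referees who call penalties
    "NFL":  (360, 160, 7),   # on-field officials; end zones included
    "FIFA": (345, 223, 1),   # ~105m x 68m pitch, one center referee
}

per_ref = {lg: l * w / crew for lg, (l, w, crew) in surfaces.items()}
for lg in sorted(per_ref, key=per_ref.get):
    print(f"{lg}: ~{per_ref[lg] / 1000:.1f}k sq ft per official")
```

Under these assumptions, the soccer referee’s coverage dwarfs everyone else’s, which is the point of the chart.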

There are obviously several caveats with such a simple chart. NBA officials have to call a much larger assortment of violations than FIFA ones, and NFL plays stop and start from one spot on the field, making it easier for the group of referees to reset. Further, it’s silly to think that putting a half-dozen more refs out there would make soccer games more equitable. Finally, I’m obviously aware that soccer has assistant referees; from my perspective, like NHL linesmen, the role of assistant referees is secondary on the game’s most important decisions.

However, it’s patently absurd to blame referees for wrong calls when they are making decisions from half the field away. Would we expect an NBA ref to assess a flagrant foul from the opposite end of the court? Or an NFL official to whistle pass interference from the opposite sideline?

Of course not. That’d be crazy.  But it seems just as crazy to blame soccer refs for failing a test that they never had a chance to pass.

From a relative outsider’s perspective, an extra ref would yield more accurate calls and could help curtail flopping. There’s probably a good reason why soccer has only one referee, but a quick Google search didn’t help. What am I missing?

Finally, here’s Noah’s take:

I think part of it is that play is more wide open than most sports, so it’s a bit easier to spot fouls. Which always seemed like sort of a dumb argument to me (the same thing with kickoffs in the NFL) but it does make some sense. And I think they’ve tried to empower the assistants to make more calls, but there’s always a strange balance of power issue because the assistants are just assistants. so yeah, having a second ref would make sense. If anything, it’s probably a man-power issue. There are so many terrible refs already that i can’t imagine having to double that number worldwide.

# MLB win percentage versus salary – a follow up

Noah and I had heard and read a bunch of discussion about the rise of small budget teams in baseball. When we set out to prove it, we actually found the opposite to be true. Here’s our article for 538, titled “Don’t be fooled by baseball’s small budget success stories.”

There were several interesting follow-up questions, as well as some anecdotes that didn’t quite fit in the article. I encourage you to read Tango’s blog for interesting comments on our article, as well as general thoughts on salary and winning in baseball.

Anyways, I’ll answer a few of the questions here (comments in bold).

1 – Can you rank teams over 30 years by area between team regression line and mlb regression line (via @beerback)?

Sure.

Given that some franchises (Montreal, Tampa, Washington) have only played in a portion of the seasons we covered, I looked at the average yearly residual between each franchise’s win percentage and its expected win percentage, given its relative payroll. Here’s a barplot.

No surprises here. Relative to their payroll, the Cubs have been about 5 annual wins worse than expectation, with Oakland about 6 wins better. Montreal, St. Louis, and Atlanta all stand out as teams that have spent wisely over the last 30 years, on average. By and large, these results match our intuition.

Also, it’s worth pointing out that Montreal’s run in the 90’s nearly matches Oakland’s in the 2000’s as far as small-budget teams spending wisely. In three of the four seasons between 1993 and 1996, the Expos finished with a win percentage above 0.540 while spending less than \$20 million. In relative salaries, that’d be equivalent to spending \$42 million in 2015…which is about a third less than the Astros’ current payroll.

2- I don’t like the idea of creating a best-fit curve, if a best-fit line will do.  And we can see for the overall 30-year league average, it IS practically a straight-line.  That it doesn’t look like a straight line at the team level simply means “small sample size” (@tangotiger).

In our article, I used smoothed lines to express the relationship between winning and spending for each team. However, by and large, the plot for all teams together is nearly linear. Are the funky team-specific curves just due to chance?

As one way of considering this question, I calculated a residual for each team in each season, which represents the distance above or below the line of best fit for that year’s winning percentage. As an example, positive residuals represent teams that outperformed expectations.

Next, I used bootstrap resampling, taking each fitted value from the line-of-best-fit and adding random noise, where the error was sampled (with replacement) from the observed set of residuals. This gives a set of imputed winning percentages, representing a sample of seasons that could have occurred if there was simply noise above or below a straight line.
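Here’s a minimal sketch of that residual bootstrap on simulated data; the salaries and win percentages below are made up, whereas the real analysis used the observed team-season values.

```python
import random

random.seed(42)

# Toy residual bootstrap: fit a straight line of win percentage on
# standardized salary, then rebuild fake seasons by adding resampled
# residuals back onto the fitted values. Data here are simulated.
n = 60
salary = [random.gauss(0, 1) for _ in range(n)]
winpct = [0.500 + 0.030 * s + random.gauss(0, 0.04) for s in salary]

# Least-squares slope and intercept of win percentage on salary.
mx, my = sum(salary) / n, sum(winpct) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(salary, winpct))
         / sum((x - mx) ** 2 for x in salary))
intercept = my - slope * mx

fitted = [intercept + slope * x for x in salary]
residuals = [y - f for y, f in zip(winpct, fitted)]

# One bootstrap iteration: imputed win percentages under a purely linear
# relationship, with noise drawn (with replacement) from the residuals.
imputed = [f + random.choice(residuals) for f in fitted]
print(len(imputed), round(min(imputed), 3), round(max(imputed), 3))
```

Smoothing each imputed sample and repeating many times gives the comparison curves described below.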

We can compare the imputed curves to the observed ones to answer a few questions. First, are there as many curved lines when we bootstrap? If so, the curved relationships that we observed are likely explained by chance. Another question  – are there as many teams that are consistently above or below their payroll expectation?

Here’s the first simulation. Click for a second iteration, if you are interested.

As a reminder, here’s what we are comparing to – the observed curves. And as an example, here is the set of NL East curves. The x-axis is standardized salary, and the y-axis is win percentage.

Few, if any, of the smoothed curves that were simulated using an underlying linear association were able to match either the (i) impressive performance (relative to salary) of the Braves or (ii) the Mets’ bizarre u-shape.

This exercise tends to support a few conclusions.

First, results like those of the Braves and the A’s, which, on average, outperformed their expectations, were likely not due to chance. None of the simulated curves sat consistently above or below the line the way Atlanta’s and Oakland’s curves did.

Second, while most teams can be fit using a straight line, the relationship may not have been linear for all teams. No franchise in the simulated iterations seems to match the Mets’ u-shape (or a few similar ones from other teams).

3- How strong did you find the correlation to be? It seemed like most points were clustered along the wins (y) axis and not necessarily following the average curve.

The average yearly correlation between winning percentage and standardized salary has been between 0.30 and 0.65 during each season between 1993 and 2014. In all but four seasons, the correlation is significantly different from 0.

Also, it’s worth pointing out that Tango used a similar strategy, aggregating salary and win percentage across a decade’s worth of seasons. He found the correlation between winning and a salary index to be about 0.70, using the seasons 2002-2011.

# Thoughts on the Sloan research paper contest

Folks who have submitted abstracts over the past two years to the Sloan Sports Analytics Conference research paper contest were recently surveyed as to their thoughts on the contest.

Here are my (expanded) answers to the open-ended question “Do you have any other suggestions or comments that will help us improve the research papers competition?”

1- Maintain a strong prize pool, but eliminate the crazy discrepancy between 1st and, say, 5th place.

In the current set-up, first place is \$20k, second place \$10k, and third place onwards is nothing. This structure incentivizes researchers to oversell their findings, because admitting that your work is simply building on the research of others is not nearly as sexy as claiming to be the first in your field to find something.

What’s a more equitable system? One that encourages good content, appropriate citation of sources, and makes it clear why each paper is relevant to advancing sports analytics.

From a prize perspective, this makes it less of a crapshoot. Financially, each finalist gets \$2k and a free ticket. The winner gets \$10k. Boom, done.

2- Reward participants whose submission is reproducible.

I cannot remember a single finalist paper that has either included (i) its data or (ii) its source code. This is not good (note: I’m also guilty. I didn’t submit code or data two years ago). Given that the majority of findings in professional research are not reproducible, it is difficult – perhaps impossible – to know if each paper truly got things right. Rewarding papers that include data and source code would be a major step in promoting reproducible research.

Of course, work is only reproducible if the data set is public.  A more aggressive but related idea would be to use separate tracks for both proprietary and public data. This was suggested a year ago by analyst Christopher Long (and perhaps by others).  Such a distinction levels the playing field among researchers who have good work to share but are working with standard data, where it is becoming more and more difficult to make novel discoveries each year.

3- Implement a conference proceedings section.

For many people in academics, there is less incentive to submit to Sloan given that, unless your paper finishes as one of the finalists, all of your work is for naught. Having a conference proceedings would likely encourage more submissions in this regard. If you are worried about the cost, publish online only and charge anyone who wants a hard copy. This would be very cheap.

4- Also allow submissions in TeX.

For years, the conference has used the same Microsoft Word template for participants. But many analytics researchers use TeX and only TeX for their work, as the formatting, particularly for mathematical notation, is substantially easier and more readable in TeX than in Word, and the output is more visually appealing.

Allowing submissions in both TeX and Word seems like an easy compromise.

**********

Happy to hear other takes as well. I appreciate that SSAC has upgraded the rewards for poster presenters over the past few years. Further, the fact that SSAC has implemented a survey in the first place is hopefully a promising sign of changes to come.

# Two reasons the future two-point conversion rate might be higher than current estimations

The NFL recently updated its extra point rules, moving the yard line for extra points from the 2 to the 15.

In the wake of the change, one topic of conversation is whether offensive teams will choose to go for two more often. The thought is that if extra points become more difficult, perhaps it is worth the risk of going for two points.

Critical to the conversation is the idea of expected points, which weighs the points and probabilities of conversions against those of extra points.* However, a basic expected points analysis requires, among other assumptions, both that we have reliable data and that all two-point conversions are created equal.

That may not be the case. Here are related reasons that the conversion rate might be higher than it’s currently being estimated (in most places, around 48%).

1 – Data issues

Using data from Armchair Analysis (AA), I was able to confirm that teams have converted 48% of their two-point conversions since the 2000 season, a number that has been reported in several outlets. But I also found that the primary rusher or passer was missing on 40 of those plays, and that a punter or kicker rushed or passed the ball on another 26. Here are some of the players listed with conversion attempts: C. Kluwe, B. Moorman, K. Walter, S. Koch, T. Sauerbrun. We can’t expect that crew to be leading real conversion attempts from the two-yard line in 2015 and beyond.

Overall, offensive teams converted just 6% of such attempts (4 for 26 on plays handled by the kicker or punter, 0 of 40 otherwise). More likely than not, these 66 plays were fumbled snaps, fake extra points by design, or muffed somethings.

If you remove the unknowns and kickers/punters from AA’s conversion data, things look a bit different, with teams converting at 51% since 2000.

2 – Teams that have gone for two have been generally playing from behind

Here’s a chart of the scoring margin at the time in which a team is attempting a conversion (I focused on point differentials between -20 and 20, which ignores a few games on the outside).

Treating all conversions as identical misses the fact that under the previous system, most teams going for it had to do so given the score differential. Moreover, 60% of the teams that went for two were trailing at the time of the attempt. And if those teams were trailing, you could make the argument that they were likely worse than their opponent in terms of overall talent.

Comparing the success rates of these teams yields a small difference: teams leading converted 53% of the time, compared to 49% among trailing teams.

It seems reasonable to think that even the 51% is a slight underestimate of what the success rate would be if attempts were more evenly distributed by team talent. Further, there’s also somewhat of an association between conversion rates and a team’s offensive proficiency: see the next section for a chart.

*******

Other notes:

-There are likely more issues remaining with the data. For example, a botched snap featuring a pass from TE Jay Riemersma shows up in our data (the play is listed here, many thanks to a loyal reader for finding this stuff). But there are also purposeful conversions from non-kickers, including Antwaan Randle El, who had three of them. Further work is needed.

-Teams passed on 71% of their conversion attempts. Strange, given that passing attempts were only 48% successful, compared to 59% for rushes.

Update: A few smart folks pointed out that many of the rushes might be QB scrambles. Here are the success percentages and counts by play type:

Rushes: RB’s 109 of 190 (57%), QB’s 47 of 72 (65%), WR’s 4 of 6 (66%)

Passes: QB’s 313 of 659 (47%), WR’s 5 of 7 (71%), RB/DB/TE 1 of 4 (25%)

-There did not seem to be any differences in conversion rates by weather or surface.

-Here’s a plot of success rate and conversion attempts for each team since 2000. Jets and Cardinals doing their thing.

********

*Here’s a primer on expected points:

Teams have successfully converted 48% of their two-point conversions over the past 15 years – it’s at 49% over the last three years – making the expected number of points on a two-point conversion attempt approximately 0.49 x 2 = 0.98. Alternatively, given that teams make between 90 and 95% of their field goals from near the 32-yard line, and that the extra point remains worth a single point, we can assume that the expected value of a longer extra point is somewhere between 0.90 and 0.95. Game, score, and coaching conditions aside, on average, it is evident that there’s now a slight advantage to going for two.  Benjamin Morris makes the excellent point that we should also expect the number of expected points on XP’s to rise, too, given how awesome kickers have become.
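As a sanity check, the primer’s arithmetic in a few lines:

```python
# Expected points under the new rule, using the rates quoted above.
two_pt_rate = 0.49            # recent two-point conversion rate
xp_low, xp_high = 0.90, 0.95  # field-goal range from near the 32-yard line

ev_two = two_pt_rate * 2
print(f"two-point EV: {ev_two:.2f}")           # 0.98
print(f"extra-point EV: {xp_low:.2f}-{xp_high:.2f}")
```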

# 85% is a unicorn – on predictions in the National Hockey League postseason

On February 20, the National Hockey League announced a partnership with software company SAP. The alliance’s primary purpose was to bring a new enhanced stats section to NHL.com, built in the shadows of popular analytics sites like war-on-ice and the now dormant extra-skater.

It was, it seemed, a partial admission from the league that its best metrics were hosted elsewhere.

“The stats landscape in the NHL is kind of all over the place,” suggested Chris Foster, Director of the NHL’s Digital Business Development, at the time. “One of the goals is to make sure that all of the tools that fans need are on NHL.com.”

One tool presented in February was SAP’s Matchup Analysis, designed to predict the league’s postseason play. The tool claimed 85% accuracy, which Yahoo’s Puck Daddy boasted was good enough to make “TV executives nervous and sports [bettors] rather happy.”

There’s just one problem.

85% is way too high in the NHL.

Specifically, at the series level, 85% accuracy is a crazy good number for the short term, and likely impossible to achieve long-term. And at the game level, 85% is reachable only in a world with unicorns and tooth fairies. More reasonable upper bounds for game and series predictions, in fact, lie around 60% and 70%, respectively.

So what in the name of Lord Stanley is going on?

To start, the model began with 240 variables, eventually settling on the 37 determined to have the best predictive power. Two sources (one, two) indicate the tool used 15 seasons of playoff outcomes, although an SAP representative is also quoted as saying that the model in fact used four or five years worth of data. This is a big difference, as using 240 variables is a risky idea for 15 seasons (225 playoff series), much less five.

But it’s also unclear if the model was predicting playoff games or playoff series. Puck Daddy, like most others, indicated that it was meant for predicting playoff series, but in its own press release, SAP indicated that the 85% actually applied to game-level data.

So as the details of the algorithm remain spotty, here are two guesses at what happened.

1- SAP’s 85% is an in-sample prediction, not an out-of-sample one.

Let’s come up with a silly strategy, which is to always pick the Kings, Bruins, and Blackhawks to win. In all other series, or in ones where two of those teams faced one another, we’ll pick the team whose city is listed second alphabetically.

This algorithm – using just four variables – wins at a 68% rate over the 2010-2014 postseasons.  But note that the 68-percent is measured in-sample, where I designed it. Predictions are useful not in how they perform in-sample, but in how they do out-of-sample –  that is, when they are applied to a data set other than the one in which they were generated.

SAP’s 85 percent seems feasible as an in-sample prediction only, and it’s overzealous to use in-sample accuracy to reflect out-of-sample performance. Our toy approach with four variables, for example, hits at a not-so-impressive 47% clip between 2006 and 2009.
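To see how quickly in-sample accuracy inflates, here’s a toy experiment, with everything invented: generate 75 coin-flip ‘series’ outcomes, score 240 pure-noise predictors against them, keep the best one, and then test it on fresh flips.

```python
import random

random.seed(3)

# 75 "series outcomes" that are pure coin flips, and 240 candidate
# predictors that are themselves pure noise. Pick the predictor with the
# best in-sample accuracy, then score it on fresh data.
n_series, n_predictors = 75, 240
outcomes = [random.random() < 0.5 for _ in range(n_series)]
predictors = [[random.random() < 0.5 for _ in range(n_series)]
              for _ in range(n_predictors)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

best = max(predictors, key=lambda p: accuracy(p, outcomes))
print(round(accuracy(best, outcomes), 2))   # in-sample: well above 0.5

# The same "best" predictor against a new set of coin flips:
new_outcomes = [random.random() < 0.5 for _ in range(n_series)]
print(round(accuracy(best, new_outcomes), 2))  # out-of-sample: back near 0.5
```

Even though every predictor is noise, the winner of the in-sample contest looks impressive until it meets new data.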

2- SAP’s model included overfitting and multicollinearity, a dangerous strategy.

It’s easy to assume that using 240 variables is a good thing. It can be, but including so many variables with small sample sizes runs the risk of overfitting, where a statistical model includes too many predictors and ends up describing random noise instead of the underlying relationship.

And with so many possible predictors, it shouldn’t be surprising that at least one will be surprisingly accurate in a sample of games. For example, shorthanded goal ratio, a somewhat superfluous metric, predicted more playoff series winners between 1984 and 1990 than both goals for and goals against.

Further, while it is tempting just to combine several such predictors together, many of these variables are likely correlated. Including highly correlated variables in the same model is known as multicollinearity, which can make predictions sensitive to small changes in the data, including when applied to out of sample data.

Fortunately for skeptics of SAP’s model, the 2015 postseason provides our first example of out-of-sample data with which to judge SAP’s predictions.

Through rounds one and two, the Matchup Analysis tool looks not much different than a balanced coin, correctly pegging seven of the twelve series winners. But a tool with 85% accuracy would pick 7 of 12 winners or worse only about 1 in 50 times (2%). In other words, unless the 2015 tournament is a 1-in-50 type of outlier, we can be confident that the model’s true accuracy lies below the 85% threshold. Finally, keep in mind these are rounds 1 and 2, which should be the easiest rounds to predict, given that they tend to feature the largest gaps in team strength.
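That 1-in-50 figure follows from a binomial tail probability, which is easy to verify:

```python
from math import comb

# Probability a true 85% predictor gets 7 or fewer of 12 series right.
p, n = 0.85, 12
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(8))
print(round(tail, 3))  # about 0.024, i.e. roughly 1 in 50
```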

The Matchup Analysis tool might be awesome, and perhaps it is more accurate than using any of the NHL’s enhanced stats alone. However, it appears likely that the algorithm will fail to meet its own high standards; even if it accurately predicts each of the final three NHL series, the tool won’t crack 70%.

It has been said that sports organizations have a cold start problem with analytics. Writes Trey Causey, “How does an organization with no analytics talent successfully evaluate candidates for an analytics position?”

In such situations, it is easy to fall prey to sexy numbers like 85%. But like unicorns and tooth fairies, such predictive capabilities in the NHL are likely too good to be true.

# Penalties and the NHL

Noah and I wrote an article about penalty patterns in the NHL. It’s over on FiveThirtyEight – many thanks to the folks over there for their efforts in helping us put it together.

I wanted to share two plots that I thought were interesting, and related to our study.

First, here’s a comparison of the probability of a home-team penalty by game type (regular season, postseason).

Home teams are unlikely to be called for penalties when they are owed penalties, and they are even less likely to be called for them in postseason play.

Also, Sam and Micah suggested that it would be worth looking at the type of penalty called given each scenario. While some of the penalties are of unknown types (strange data entry), here’s a mosaic plot of the types that are known, using the first four letters in the columns as the penalty type, along with the penalty differential.

I might be missing something, but there don’t seem to be huge variations in the frequency of each penalty call given the previous penalty differential. This would support an argument that penalties are not substantially driven by retaliation.

# A generalized linear mixed model approach to estimating fumble frequencies in the National Football League

I told myself I was done with Deflategate – and really, I was – that is, until I read this.

“Now I actually have some validation in the field,” Sharp said. “Hey, this guy was right all along.”

Wait, what?

Forget the data twisting and statistical errors of the original analysis. The author claims to be vindicated by the fact that the Wells report found Patriots quarterback Tom Brady to be ‘more likely than not’ to have been involved with the deflation of footballs.

Okay then.*

*******

But despite my skepticism regarding Sharp’s analysis, two of the brightest minds in football analytics have also taken the time to look at Patriots fumble rates, eventually concluding that the Patriots were indeed outliers.

First, after comparing Sharp’s critics to Nabisco running a study on snack cookies**, Brian Burke used multiple linear regression to model the number of fumbles in each NFL game since 2000, finding that the Pats posted much lower rates than the rest of the league in the years following 2007. Next, Benjamin Morris argued that the likelihood of a team’s fumbling rates being at the Patriots’ levels or lower was about 1 in 10,000. Linking low fumble rates and the Deflategate findings, Morris writes that it “makes it more likely that the relationship between inflation levels and fumbling is real.”

One thing that Morris argues for – which I agree with – is that “there’s definitely more to be done on the Patriots fumbling to isolate for the fact that they were the most consistently winning team, the types of plays they ran.”

As Morris indicates, and as Burke hints at, modeling fumble rates is not straightforward, nor close to it. Because NFL teams aren’t randomized to run the same plays with the same time on the clock and from the same spot on the field, any finding to this point has been evidence in the aggregate, averaged over games, plays, or perhaps a few in-game variables.

A play-by-play analysis, however, is missing.

And while it doesn’t ‘vindicate’ any particular finding, nor leave the Patriots free from suspicion, I found the task of looking at NFL play-by-play data to determine fumble rates quite interesting.

*******

I took the last 15 years of play-by-play data from Armchair Analysis (AA). All the code is linked here: the data costs \$35, so I can’t provide that, unfortunately. However, if you have AA’s data, feel free to play around. Also, I’m going to focus on data from 2007 onwards. If you are interested in testing whether or not Patriots fumble rates changed substantially at any point over the last 15 years, I’d recommend a change-point analysis.

Point 1: Teams are less likely to fumble on QB kneel downs.

It’s an easy one to begin with.

In fact, you are probably laughing right now, and you should be. There have been 5284 NFL kneel-downs since 2000, and not a single one resulted in a fumble using AA’s data. So who cares?

Here’s a plot of the teams who have taken the most kneel downs since 2007.

More than 25 snaps ahead of the second-place team, the Patriots have the most kneel downs.

Mentioning kneel downs seems silly, but this matters. Including kneel downs in an analysis of fumbles per play inflates the denominator (number of total plays) among teams more likely to be taking a knee, as the Patriots apparently were. In fact, the correlation between fumbles per play and kneel downs is -0.6. Here’s the relationship between the two variables. Teams with lower fumble rates tend to take more kneel downs (for one of several reasons).

After making the graph above, I deleted these plays. I also deleted QB spikes (Patriots had more of these than the average team) and any pass that was intercepted (Patriots had fewer than average). It’s hard for the offensive team to fumble on these plays. It’s even harder to fumble on kneel downs.

Point 2: Teams are less likely to fumble when they have the lead.

This was a bit surprising to me. For the regression model, I characterized each play based on the possession lead (3+, 2+, 1+, 0, 1-, 2-, or 3-) of the team with the ball. For example, an offensive team leading by more than 16 points would be up by three or more possessions.
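One way to implement this binning is sketched below. The helper name and the eight-points-per-possession convention are my assumptions, though the convention matches the example above (a lead of more than 16 points is three or more possessions, since a single possession is worth at most a touchdown plus a two-point conversion).

```python
from math import ceil

def possession_lead(point_diff: int) -> int:
    """Map a point differential to a possession lead, capped at +/-3.

    A possession is worth at most 8 points (touchdown plus two-point
    conversion), so a 17-point lead is three possessions: the trailing
    team needs at least three scores to catch up.
    """
    sign = 1 if point_diff >= 0 else -1
    return sign * min(3, ceil(abs(point_diff) / 8))

print(possession_lead(17))   # up by more than 16 points -> 3 possessions
print(possession_lead(16))   # still reachable in two scores -> 2
print(possession_lead(-5))   # trailing by one possession -> -1
```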

Like kneel downs, scoring differential matters. Teams with the ball up by three possessions or more fumble more than 20% less often than other teams with the ball. So let’s see which teams have run the most offensive plays while up by three touchdowns.

Again, the Patriots show up, with nearly three times the median number of plays when holding a three possession lead. Again, this matters. To generally contrast New England’s fumble rates with Cleveland’s, when the Patriots have run more than 11x as many plays with a 3+ possession lead as the Browns, is silly. Teams fumble with the ball less when they are leading on the scoreboard.

Point 3: Yard line matters

Given the tighter window with which to run a successful play, it stands to reason that teams would fumble less on plays close to their opponent’s end zone. So, similar to points 1 and 2, any aggregated analysis of fumble rates could unduly penalize teams that run a disproportionate number of plays in this area. Here’s the number of goal-to-go plays for each team since 2007.

The Patriots have run nearly 200 more goal-to-go plays than any other NFL team since 2007.

*******

Hopefully we can agree that not all plays are created equal. So how can we account for all of these factors?

Using hierarchical generalized linear mixed models (GLMM) of binary data via the lme4 package in R, I modeled the log-odds of a fumble occurring (Fumble = Yes/No) as a function of several play and game specific factors that are conceivably associated with fumble likelihood.

A hierarchical mixed model is advantageous for a few reasons. First, we can account for game conditions (such as the weather), play conditions (like down, distance, and yard line), and play characteristics (run left or pass deep right, for example) that may dictate fumble rates. Next, instead of a model with several dozen fixed effects for each team’s offense and defense, we’ll use random intercepts for both the offensive and defensive units. Of particular interest will be the random intercept for New England; if this intercept is extremely low, it would provide evidence that, after accounting for all the game- and play-specific variables, the Patriots’ fumble rate remains mysteriously lower than other teams’. We can also test the significance of the random intercept for each team: if its variance term is significantly different from 0, it would provide evidence that there remains substantial variation in fumble rates driven by the team with the ball or the team on defense.

Please note that some of these results mirror a live-tweet version of the model that I ran in late January, but please check out the R code for how I decided to categorize things like down & distance, etc. These decisions were not easy, but were made with the intent of identifying what characteristics of each play might determine fumble outcomes. Here are the fixed effects included in the GLMM:

Score, Play direction, Final Minutes (Y/N), Playoffs (Y/N), Weather/Surface, Goal to Go (Y/N), Home team on Offense (Y/N), Down/Distance, No huddle (Y/N), Shotgun (Y/N), over/under, and spread.

And here are the random intercepts***:

Offensive Unit, Defensive Unit

And here’s the code. Model results are here:

```
# Logistic GLMMs for fumble outcomes, fit separately to rushes and passes.
# The original formulas were truncated after OffHome; the covariate and
# grouping names below (DownDist through (1|def)) are placeholders that
# complete the fixed-effects and random-intercept lists given above.
fit.rush <- glmer(Fumble10 ~ Score + playcall + FinalMins + Playoffs + Weather +
                    GoaltoGo + OffHome + DownDist + NoHuddle + Shotgun +
                    OverUnder + Spread + (1 | off) + (1 | def),
                  data = filter(pbp, type == "RUSH"),
                  control = glmerControl(optCtrl = list(maxfun = 300)),
                  verbose = TRUE, family = binomial())
summary(fit.rush)

fit.pass <- glmer(Fumble10 ~ Score + playcall + FinalMins + Playoffs + Weather +
                    GoaltoGo + OffHome + DownDist + NoHuddle + Shotgun +
                    OverUnder + Spread + (1 | off) + (1 | def),
                  data = filter(pbp, type == "PASS"),
                  control = glmerControl(optCtrl = list(maxfun = 300)),
                  verbose = TRUE, family = binomial())
summary(fit.pass)
```

*******

The first thing we’ll look at is a plot of random effects for each of the GLMM fits. On the left is passing plays, on the right, running plays.

Once you account for play and game characteristics, it is really difficult to distinguish between the fumble rates of NFL teams.

In looking at passing plays, the random intercept terms for each offensive team are not significant predictors of fumble rates. The Patriots ranked third among the teams least likely to fumble, given our model’s parameters. No team’s intercept is noticeably different from 0.

There’s slightly more descriptive ability in using random intercepts with rushing plays. The Patriots’ intercept lies furthest from 0, but it is not noticeably different from those of teams like Indy, Jacksonville, and Atlanta, which also boast lower rates of fumbling on running plays.

Interestingly, Washington has the highest intercept on both rushes and passes.****

*******

If you are still reading, it is greatly appreciated. Mixed models have been used in awesome ways to answer really good questions in sports (see catcher framing and deserved run average for recent examples).***** This is not one such awesome application.

However, we learn in Introduction to Statistics that two variables are often associated for reasons beyond a causal mechanism. Given the results here, it seems safe to say that part of the link between the Patriots and low fumble rates was driven by game- and play-specific conditions associated with both. Further, it’s easy to forget about funny data quirks in nearly all applied work, as we noticed with kneel downs and spikes in the football play-by-play data.

*******

Footnotes:

*There are a few other issues to consider. First, the Wells report also proposed that the Patriots started purposely deflating footballs in mid-2014. So, any lower fumble rates prior to this would have been, in relative terms, within league rules. Further, there’s also the issue of whether or not the Patriots ‘deflater’ travelled with the team, which unfortunately goes against the author’s inclusion of all games simultaneously. I can’t believe I just wrote the word ‘deflater.’

**This comparison seems ironic looking back, given that the NFL hired Exponent for its Wells report. Exponent was once paid to argue that secondhand smoke did not cause cancer, among other suspicious claims.

***You may be asking yourself whether we should include effects (intercepts) for each running back. This is a fair question; if we include running backs as intercepts in the model, all team intercepts go to essentially 0. Given that RBs are not randomized to carries, any team that purposely avoids playing running backs with high fumble rates would be penalized in our current fitting strategy.

****As a final step, I looked at the significance of the random intercepts, given that from a model-building standpoint, it’s generally preferred to use a model as parsimonious as possible. Including the random intercepts for both the offensive and defensive units significantly improves the model of fumbles on running plays, as judged by comparing the BIC of models with each random intercept to those without. On passing plays, the intercepts should be dropped from the model; there’s no evidence that, after accounting for game- and play-specific covariates, teams’ fumbling rates differ from one another on passes.

*****A Bayesian strategy is also easy to implement. My guess is that a prior on team by team intercepts would only work to drag each team closer to 0.

*******

UPDATE: Scott from Football Outsiders requested that I include year effects as opposed to aggregating data across every season. I nested intercepts for each year within each team; this would account for seasonal trends for each unit, incorporated within some larger team effect. The yearly effects were indeed significant, both for offensive and defensive units.

Here’s the plot of the random intercepts for each team, after accounting for seasonal trends.

# NBA win totals, 2014-15 edition

We’re back again to judge preseason predictions, this time looking at NBA prognosticators. Our outcome of interest is the number of wins for each NBA team, and we’ll compare predictions to the totals set by sportsbooks in late October.

Despite my best efforts on social media, I could only find three competitors: Team Rankings, the Basketball Distribution, and Seth Burn. I also merged the predictions from those three sites, in what I’ll call the Statheads Aggregate.

Our first criterion is Mean Absolute Error (MAE), which represents the average deviation between the predicted win total from each site and the observed total. In addition to the prediction sites, I also calculated the MAE using last year’s win totals, as well as a constant prediction of every team finishing 41-41.

For the 2014-15 season, statheads reigned supreme.

The Basketball Distribution and the aggregated picks from the three sites were both noticeably better than the totals set by sportsbooks. This matched results from last year for the Basketball Distribution. (Note: Nathan from the Basketball Distribution sent along a table with similar results to these).

Of course, it’s easy to wonder whether these results could be due to chance. To check this, I simulated 1000 seasons by using the sportsbook totals as the true mean team totals (slightly scaled, to account for the fact that the sportsbook totals average 41.5 wins per team), while also assuming that the 2014-15 season standard deviation of 9.3 marks the actual standard deviation of win totals. Here’s a plot of the simulated MAE for both the Basketball Distribution and the Aggregate picks.
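The simulation itself is simple enough to sketch. The win totals below are hypothetical stand-ins (the actual sportsbook lines aren’t reproduced here), scaled so the league averages 41 wins, since every game has exactly one winner.

```python
import random
from statistics import mean

random.seed(2)

# Hypothetical sportsbook win totals for a 30-team league; the real lines
# averaged 41.5, so we scale to the league-feasible average of 41 wins.
book = [random.uniform(20, 62) for _ in range(30)]
scale = 41 / mean(book)
book = [b * scale for b in book]

SD = 9.3  # observed 2014-15 standard deviation of win totals

def simulate_mae(totals, sd, n_sims=1000):
    """Simulate seasons with each team's true mean at its (scaled) sportsbook
    total, and return the MAE of the sportsbook picks in each season."""
    maes = []
    for _ in range(n_sims):
        wins = [random.gauss(t, sd) for t in totals]
        maes.append(mean(abs(w - t) for w, t in zip(wins, totals)))
    return maes

maes = simulate_mae(book, SD)
print(f"average simulated MAE: {mean(maes):.1f}")
```

To judge a forecaster, you’d compute its observed MAE against the real standings and find its percentile among the simulated MAEs: the share of simulations in which chance alone did better.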

In only about 1% of simulations was the MAE for Basketball Distribution lower than the observed total. That’s a pretty solid effort. The aggregated picks from the three sites finished in about the 4th percentile, another solid effort.

And while it was a good year for the statheads, it wasn’t quite that for the Knicks. At 17 wins, New York finished more than 20 wins below its expected total of 40.5.

Here’s a team-by-team plot of observed and predicted totals, arranged in order of projected total (the V). The calculator image refers to the stathead projection.