Featured post

So you want a graduate degree in statistics?


After six years of graduate school – two at UMass-Amherst (MS, statistics), and four more at Brown University (PhD, biostatistics) – I am finally done (I think).

At this point, I have a few weeks left until my next challenge awaits, when I start a position at Skidmore College as an assistant professor in statistics this fall.

While my memories are fresh, I figured it might be a useful task to share some advice that I have picked up over the last several years. Thus, here’s a multi-part series on the lessons, trials, and tribulations of statistics graduate programs, from an n = 1 (or 2) perspective.

Part I: Deciding on a graduate program in statistics

Part II: Thriving in a graduate program in statistics

Part III: What I wish I had known before I started a graduate program in statistics  (with Greg Matthews)

Part IV: What I wish I had learned in my graduate program in statistics (with Greg Matthews)

The point of this series is to be as helpful as possible to students considering statistics graduate programs now or at some point later in their lives. As a result, if you have any comments, please feel free to share below.

Also, I encourage anyone interested in this series to read two related pieces:

Cheers, and thanks for reading.

NHL game outcomes using R and Hockey Reference

I’m always impressed with the content and accessibility of the Baseball with R website (here), which features a great cast of statisticians writing about everything from Hall of Fame entry to umpire bias.

In a similar vein, I highly recommend Sam and AC’s nhlscrapr package in R. I’ve used it extensively to analyze play-by-play data from past seasons (for example, this post on momentum in hockey).

However, I have a soft spot for overtime outcomes in the NHL, and while the nhlscrapr package has game-by-game results, there isn’t a straightforward mechanism for identifying whether or not a given game went to overtime. Further, data in the nhlscrapr package only goes back about a decade or so.

Thankfully, Hockey Reference has easily accessible (and scrapable) tables for us to use. Given that I am doing some updated analyses of NHL overtime rates, and that I wanted an easier method than copying and pasting .csv files from nhl.com, I figured I would post the code that I used to scrape NHL game outcomes. The code that follows extracts each game’s outcome for the last five years; if you are interested in other years, it’s easy enough to change the URLs.

Feel free to use, and hope you enjoy!


library(XML)

# Results pages for the last five seasons (2010-11 through 2014-15)
urls <- paste0("http://www.hockey-reference.com/leagues/NHL_",
               2011:2015, "_games.html")

nhl <- NULL
for (i in 1:length(urls)){
  tables <- readHTMLTable(urls[i])
  n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))  # find the big schedule table
  nhl <- rbind(nhl, tables[[which.max(n.rows)]])
}

names(nhl)[7] <- "OTCat"  # the unnamed OT/SO column (its position may vary by season)
nhl <- nhl[nhl$OTCat != "Get Tickets", ]  # drop the ad rows mixed into the table

Voila! That easy. We are in business with a few lines of code.

Here’s the output:

[Screenshot: the first rows of the scraped NHL game outcomes]
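From there, flagging which games went past regulation is a one-liner. Here’s a sketch with a toy stand-in for the scraped table (the “OTCat” name and its coding come from my own cleaning step, not from Hockey Reference itself):

```r
# Toy stand-in for the scraped table; the real one comes from readHTMLTable
nhl <- data.frame(OTCat = c("", "OT", "", "SO", "OT"))

# A game went past regulation if the note column reads "OT" or "SO"
nhl$went.ot <- nhl$OTCat %in% c("OT", "SO")
mean(nhl$went.ot)   # share of games reaching overtime; 0.6 for this toy table
```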

A snapshot of math, computer science, and statistics enrollments at a liberal arts institution

Like it probably did at many institutions, student registration opened this past week for the Spring semester at Skidmore College.

As a statistics professor in the Department of Mathematics and Computer Science, I was struck by how quickly the courses in our department filled up. Was it like that elsewhere at Skidmore?

Using public data available from the registrar (here), I extracted the course enrollments for each of the school’s departments. Next, after dropping independent studies and a few similarly designed courses, I categorized each course as either “Closed” (waitlisted at 5 students or more), “Filled” (spots remaining on the waitlist only), or “Open” (enrollment less than capacity).

In a few cases, I aggregated departments that were very similar (say, foreign languages, or math, computer science, and statistics) to simplify things. Further, due to the small number of courses offered in a few departments (e.g., Asian studies), I also had to restrict my analysis to departments offering at least 8 courses in the Spring.
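As a sketch of that categorization (the function and its inputs are hypothetical stand-ins, not the registrar’s actual fields):

```r
# Classify a course from its enrollment, capacity, and waitlist length:
# "Closed" = waitlisted at 5+, "Filled" = at capacity with a shorter
# waitlist, "Open" = enrollment below capacity
course.status <- function(enrolled, capacity, waitlist) {
  ifelse(waitlist >= 5, "Closed",
         ifelse(enrolled >= capacity, "Filled", "Open"))
}

course.status(enrolled = c(30, 30, 18),
              capacity = c(30, 30, 25),
              waitlist = c(6, 2, 0))
# "Closed" "Filled" "Open"
```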

In any case, here’s a mosaic plot featuring the course status (y-axis) by department (x-axis). The width of each department’s column is proportional to the number of courses offered by that department in the Spring. For example, the English department and the Art department (which is a conglomerate of art history and arts administration) offer more courses than any other department.
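For anyone curious, a plot like this takes one line of base R once the counts sit in a two-way table. A sketch with made-up counts (not the actual registrar data):

```r
# Courses by department and status; these counts are invented for illustration
tab <- as.table(rbind(MA = c(10, 5, 4), EN = c(4, 6, 12)))
colnames(tab) <- c("Closed", "Filled", "Open")

# Column widths are automatically proportional to each department's total
mosaicplot(tab, color = c("red", "yellow", "green"),
           main = "Course status by department")
```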

[Screenshot: mosaic plot of course status by department]

The Department of Math & Computer Science is abbreviated MA, and, as indicated above, boasts among the largest percentages of courses that are closed (red) or filled (yellow) this Spring. The “PA” department, if you are curious, is also doing well; it’s the Department of Physical Activity.

So, apparently, Skidmore students are desperate to differentiate some integrals but it makes them want to exercise afterwards.

Most department abbreviations are what you expect them to be – the first two letters of their name – but because my interest was mostly in comparing mathematics to other departments as a whole, I was purposely vague with the labeling.

So, overall, it appears enrollments in my department’s courses are doing well.

Other notes:

-Restricting to courses designed for larger enrollments (in general, these are 100- and 200-level courses) amplifies the enthusiasm students are showing for courses in the math department. Here’s that plot.

[Screenshot: the same mosaic plot, restricted to 100- and 200-level courses]

-It might be unfair to treat all course offerings in the same manner. This only makes for one quick look at the data.

-If you are curious, statistics courses did well enough that there is a chance another one might be added!

-Perhaps this shouldn’t be surprising; math is a sexy choice for the future, as per Jacob Rosen, who writes, “A math major – or at least several courses in math – can be the differentiation point to lift your resume to the top of the pile.”

Analyzing the Super Contest, Part II

A few months ago, I wrote about how looking at the Westgate Hotel’s Super Contest, in which contestants pony up $1500 to pick five NFL games per week against the spread, could make for some interesting data analyses.

In this post, I’ll dig into the data to try and answer a few interesting questions.

1) What teams have been picked most often? And do bettors ride the ‘hot hand’?

Economist Brad Humphreys and colleagues have an interesting paper finding that NFL bettors believe in the hot hand; that is, there is a significant increase in bets on teams entering games on winning streaks. If that holds here, we would expect the teams backed in the Super Contest to be ones that have recently been successful.

Here’s a plot of the weekly support for each NFL franchise, organized by division (West to East) and team. The size of each circle is proportional to the number of bettors in the Super Contest who backed that team, and the color of the circle depicts whether or not that team covered the spread that week (green is a win, red is a loss; click to enlarge).

[Screenshot: weekly Super Contest backing, by team and week]

A few interesting aspects of the chart: (i) Kansas City is on a seven-game win streak against the spread (ATS), (ii) the number of bettors backing Arizona and Indianapolis has increased over the course of the season, and (iii) few bettors have liked St. Louis, Oakland, or Cleveland at any point thus far.

At some point, I might try to come up with a strategy for testing bettors’ hot-hand beliefs in this data set, but for now, I don’t see much of an obvious pattern with respect to sequential circle sizes.

2) Do bettors back teams coming off a bye week?

To the best of my knowledge, the point spread given in the Westgate contest incorporates the possible advantage that teams coming off of a bye would have.

However, the size of circles in the figure above among teams playing after a bye week appears larger than the typical circle size (to look for bye weeks, look for the blank spaces).

To look at this idea, I isolated weeks 5 through 10, the stretch in which teams come off their bye weeks. Teams coming off of a bye week were picked, on average, by about 320 bettors. Teams not coming off of a bye week were picked, on average, by 240 bettors. That seems like a massive difference to me, particularly considering that the point spread, in principle, should already incorporate the fact that a team is coming off of a bye week.
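That comparison is just a grouped mean. A sketch, with a hypothetical data frame standing in for the real picks (the column names and counts are invented to mirror the 320 vs. 240 averages above):

```r
# Hypothetical picks data: one row per team-game in weeks 5-10
picks <- data.frame(
  off.bye   = c(TRUE, TRUE, FALSE, FALSE, FALSE, FALSE),
  n.backers = c(350, 290, 250, 230, 240, 240))

# Average number of Super Contest backers, by bye status
tapply(picks$n.backers, picks$off.bye, mean)
# FALSE  TRUE
#   240   320
```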

3) Are bettors consistent?

This is a question that I was really excited to look at – are the bettors who do well early in the year also the ones that do well later on? Here’s a plot of win percentages for each participant, splitting the season into two halves (Weeks 1-5, Weeks 6-10). Points are jittered to account for overlap in the scatter plot.

[Screenshot: scatter plot of first-half vs. second-half win percentages]

The correlation (0.07) is small but significant (p-value < 0.005), implying that there could be some level of underlying skill (or lack of skill) to picking these contests.
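The correlation and its p-value come straight from cor.test. A sketch with simulated bettors in place of the real standings, which shows what a pure-chance baseline looks like:

```r
set.seed(1)
n <- 1000   # simulated bettors, each making 5 picks/week for 5 weeks

# Win percentages over each half, if every pick were a coin flip
first.half  <- rbinom(n, 25, 0.5) / 25
second.half <- rbinom(n, 25, 0.5) / 25

# Under pure chance, the estimated correlation should sit near 0
cor.test(first.half, second.half)
```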

However, comparing all participants has two flaws. First, bettors that do poorly may change their strategy in an attempt to catch the players in front of them. Worse, bettors with a bad start might just give up; indeed, there are some participants who are no longer submitting picks.

I looked at the same graph, isolating those participants who posted win percentages above 48% in the first five weeks. The correlation remains small (0.05), but it’s positive. Indeed, bettors that are superior in the early weeks could conceivably be superior in later weeks. That said, most of a bettor’s performance appears due to chance.

[Screenshot: the same scatter plot, restricted to bettors above 48% through five weeks]

What’s next?

At some point, I want to try and look at the following ideas. Let me know if you can think of others:

Idea 1: Do bettors back teams playing in Sunday night and Monday night games more often than those in Sunday day games?

Idea 2: Do bettors back teams that they have had success backing before? In other words, if Mike picks the Patriots successfully three weeks in a row, is he more likely to pick the Patriots again?

What else is there to know?

I did not enter the Super Contest this year (or any year, for that matter).

I did, however, enter a contest (purely for fun) with the same rules, run by affiliates of USC’s Jeremy Abramson. While the prize money won’t compare to the Super Contest’s, I did get this email last week: “just remember that good teams are good and bad teams suck.”

Can’t put a price tag on that advice.

Finally, the data for the Super Contest isn’t perfect, with some participants changing their entry names mid-season. This created a slight problem when looking at success levels over time. As a result, I cannot guarantee 100% accuracy regarding the correlations, although my quick cross-check suggested that most season picks matched the website’s standings.

Favoritism under social pressure, this time on a personal level

About a year ago, I wrote a piece for Deadspin that linked anecdotal evidence with peer-reviewed literature on how referee behavior is impacted by a home bias.

To sum: referees are susceptible to the exact same fears and pressures that the rest of us are, and as a result, the overwhelming majority of the time, judgment calls favor the home team when the game is on the line.

Fast forward to this past Friday night, when I received an unpleasant reminder about how painful but predictable this behavior can be. For the last decade or so, I’ve moonlighted as an assistant football coach (with my father Tom, the head coach) at Lincoln-Sudbury Regional High School (L-S). On Friday, I had the opportunity to try and help out during the team’s Division 2 North semifinal playoff contest on the road at North Andover.

Before getting to the moment in question, let’s start with a play early in the contest. In statistics, it is always nice to have a control group with which we can compare the effects of a treatment or intervention of interest. Luckily enough, in the L-S North Andover game, a play from early in the first half provides a perfect comparison for a play that occurs later.

Midway through the first quarter, North Andover’s quarterback lofted a sideways pass, which, after being batted down, was whistled dead as an incomplete pass. Notice both the audible whistle and the official at the bottom of the screen who runs in to signal the pass incomplete.

Alternatively, the official could have ruled that the pass was actually a lateral, in which case the ball and play would still be live.

Keep this in mind when the action moves to the final seconds of the third quarter, with North Andover clinging to a 6-0 lead. At this point, it’s also worth noting that after the first-half shutout, the hosts had now gone more than 14 consecutive quarters without allowing a point. Points in this game were critical and hard to come by, even more so considering the seemingly gale-force winds at field level.

On a third and long, North Andover goes back to nearly the identical play you saw earlier. This time, the result is different, as shown below. Notice the lack of any whistle, no signal of incompletion, and the L-S player that picks up an apparent lateral and turns it into a touchdown.

One aspect of the video that was cut off is that the referees on the field actually signaled for the touchdown, to the point that L-S’s extra point team came onto the field.

Not so fast, however!

After some commentary from the sidelines, the officials decided to huddle up in the middle of the field. The decision loomed large; the call on the field would have given L-S the chance for a 7-6 advantage, and it had also drawn a lengthy chorus of boos from the hometown North Andover crowd and sideline. After three or four minutes of debate, the group decided to overturn the touchdown!

According to the head official who spoke with the L-S coaches, including myself, on the sideline, the basis of the decision to rule an incomplete pass was that the referee on the North Andover sideline blew his whistle and signaled an incomplete pass. Even if that call was incorrect and the pass was actually a lateral, the whistle meant that the action on the field had to end immediately.

Given the video evidence, such an explanation is difficult to believe.  On the video above, it’s obvious that no whistle had been blown. Further, here’s a second clip (no audio) that includes the sideline official on North Andover’s sideline, who is clearly not making an incomplete pass motion.

At this point, think back to the same type of play from earlier in the game. The same official, given a nearly identical play, blew his whistle and gave the signal for an incomplete pass!

After making the opposite call later in the game, however, the official reversed course. What happened?

Indeed, it appears the social pressure of being the primary person responsible for awarding the away team a crucial touchdown was simply too overwhelming. Had the touchdown stood, that same official would no doubt have spent the remaining 11 minutes of the contest on the North Andover sideline hearing unending grief from the crowd and coaches.

In the Deadspin piece, I asked: had pushgate happened at Gillette Stadium, would the referees have had the guts to call a penalty on the Patriots?

Relatedly, if this contest had been at Lincoln-Sudbury, would the referees have had the gumption to overturn an L-S touchdown?

No chance.

For the same reason that officials almost never make such controversial calls against the home team, the alternative decision, in this case to overturn the touchdown and appease the crowd, was all too easy and predictable an out.

And as usual, it was the road that was taken.


-Despite two trips inside the red zone from L-S, North Andover completed the shutout, holding on for a 12-0 win.

-It’s easy – and tempting – to pass blame to the officiating crew. That’s not the point of this. Like I wrote at the end of the Deadspin article, the officials are behaving exactly as we expect them to, which is the same as most of us would act in that situation. Things like instant replay, however, have helped tremendously, at least in professional sports, as far as reducing bias in favor of the home team (e.g., the Scorecasting chapter on fumble recovery rates in the NFL, or the PitchFx developments in MLB).

-I’ve watched the play in question several times. It’s 50-50 whether it’s a lateral or a forward pass. For what it’s worth, at the end of the game, the North Andover chain crew confided that from their direct and optimal viewpoint, it was definitively a lateral. Hearing this, the sideline judge on the L-S sideline told the L-S coaches, ‘Oops. I guess it should have been 6-6.’

-L-S plays in the Dual County League, and North Andover plays in the Merrimack Valley Conference (MVC). Per state rules, the referees for the L-S contest with North Andover were officials from the MVC. There’s the real conflict of interest and an actual conspiracy theory, if you are looking for one.

-Hard not to think of this great commentary on a related play (more EMass officials!).

-If you were wondering why there was a flag on the play, North Andover was penalized for an illegal formation.

-Lastly, I stole the headline from Garicano et al’s article on refereeing in British soccer. It’s one of my favorites – give it a read!

On Thursday Night Football outcomes

The competitiveness of Thursday Night NFL football (TNF) contests has been discussed practically everywhere this past week. Bleacher Report, Pro Football Talk, and USA Today, for example, all got in on the action, deriding the NFL for the number of blowouts in its Thursday contests.

Over on Forbes, Jim Pagels offers an interesting and more quantitative take, showing that, on the whole, Thursday night outcomes are not much different from the average outcomes from weekend contests. Pagels provides this table:

[Screenshot: Pagels’ table of average margins of victory by game day]

These results mirror the work of Grantland columnist Bill Barnwell, whose research found no evidence that TNF games were any sloppier than the rest of the league’s games, as judged by, among other metrics, the number of turnovers and dropped passes.

One aspect missing from much of the discourse on TNF contests is whether or not those games were supposed to be close to begin with. This is important because the NFL, and its television networks, get to pick who plays on Thursday night!

For example, if TNF contests were played between teams that were more evenly matched in talent, we would expect those games to be closer. Under such a scenario, if the average margin of victory for each contest (Thursday night vs. other days) was identical, that would actually imply that TNF contests were yielding more blowouts than we expected.

Fortunately, there is an easy way to judge the “closeness” that we expect in an NFL game: the game’s point spread! Using Sunshine’s data system (with a major h/t to the “weekdays” function in R), here’s the average absolute spread for each NFL game, by year and weekday, since 1978.


From the plot above, it’s fairly clear that the average point spread of TNF contests (in black) is not much different than that of other NFL games (in red or grey). If anything, over the past few years, TNF contests have had larger absolute point spreads than games played on weekends (the data goes through 2013).

Of course, it’s also straightforward to compare each game’s point spread to its final margin of victory. To do so, I used the root mean square error (RMSE), as FiveThirtyEight does here. The RMSE represents the square root of the average squared difference between the point spread and the margin of victory.
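In R, that calculation is a one-liner (here “spread” and “margin” are stand-ins for each game’s point spread and final margin of victory, with a toy three-game example):

```r
# Root mean square error between the point spread and the final margin
rmse <- function(spread, margin) sqrt(mean((margin - spread)^2))

# Toy example: three games
rmse(spread = c(-3, 7, 2.5), margin = c(-10, 3, 14))
# about 8.1 points
```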

Here’s a similar plot; this one shows the RMSE by year and game day, for all NFL games since 1978.


On the whole, the accuracy of the point spread does not appear to vary by the day of the week on which the game was played. If anything, Thursday night contests have actually been closer than the point spread anticipated over the last decade or so (2014, of course, aside). Also, it’s interesting how consistent the RMSE for all NFL games has been over time.

Overall, given these results, we can be more confident that the findings of Pagels, Barnwell, and many others are not an artifact of TNF contests being expected to be closer. That makes it more reasonable to conclude that the blowouts of the past few weeks are not explained by the day on which the games were played, and are instead simply due to chance.

It’s that time of year again! Play along with #PlayforOT

The National Hockey League is beginning its 98th season, and the 2014-2015 campaign marks the 10th consecutive season that the league will continue using the point system that it initially implemented after the 2004 lockout.

To review, here’s the current point system:

Win (regulation, overtime, shootout): 2 points
Loss (overtime, shootout): 1 point
Loss (regulation): 0 points

And here’s the expected point total for each team:

Overtime game: 1.5 points/team
Regulation: 1 point/team
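Those expected totals follow directly from the point system: an overtime or shootout game hands out three points between the two teams, while a regulation game hands out two.

```r
# Points handed out per game, split evenly across the two teams
regulation <- (2 + 0) / 2   # winner gets 2, loser 0 -> 1 point per team
overtime   <- (2 + 1) / 2   # winner gets 2, loser 1 -> 1.5 points per team
c(regulation = regulation, overtime = overtime)
# regulation   overtime
#        1.0        1.5
```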

For any of this blog’s newer readers, the increased incentives for overtime games have had three primary effects on game outcomes. They are as follows:

1) Teams play more overtime games than they used to.

Exactly 1 in 4 games went to overtime last year, compared to 1 in 5 games from past point systems. While that doesn’t sound like a big difference, that’s between 60 and 70 additional overtime games per season in the new points system.

Or, 60 to 70 extra points floating around the league’s standings.
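A back-of-the-envelope check on that count:

```r
games <- 30 * 82 / 2          # 1230 regular-season games per year
(1/4 - 1/5) * games           # 61.5 extra overtime games per season
```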

2) Teams play more overtime games towards the end of the regular season. 

As the playoff chase heats up, so too do the rates of overtime games. About 30% of March and April games have reached overtime over the past few years, an increase over the 20% of games that reached OT between October and December. The implication is that teams play for overtime when the pressure to improve in the standings is higher, because overtime guarantees each participating team at least one point.

Last April was a great example: 6 of the league’s final 11 games went to overtime!

As a point of reference, in April of past point systems (for example, 1997-1999, when there was no point for overtime losers), only about 15% of games went to overtime.

3) Teams play overtime games more frequently against nonconference opponents. 

In my opinion, this is the most damning issue. If you are the Bruins, conceding a point to Vancouver is much preferred over conceding a point to Montreal, because the Canucks, unlike the Canadiens, are not a threat when it comes to postseason qualification. As a result, we would expect teams to be more apt to play OT against nonconference opponents. Sure enough, in previous research (see link here), I estimated about a 15-20% increase in the odds of overtime against nonconference opponents, relative to conference ones.
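One way to estimate an effect like that is a logistic regression of an overtime indicator on opponent type. Here’s a sketch on simulated games (the simulation bakes in a roughly 17.5% odds bump; the real estimate comes from models fit to actual NHL schedules):

```r
set.seed(2)
n <- 5000

# Simulate games: ~30% nonconference, with a built-in bump in OT odds
nonconf <- rbinom(n, 1, 0.3)
p.ot    <- plogis(qlogis(0.22) + log(1.175) * nonconf)
ot      <- rbinom(n, 1, p.ot)

# Logistic regression recovers the odds ratio for nonconference games
fit <- glm(ot ~ nonconf, family = binomial)
exp(coef(fit)["nonconf"])   # estimated odds ratio, near the true 1.175
```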

Further, certain teams appeared to have identified this inefficiency more than others. From the Sloan Conference last year, here’s my poster with team specific odds of overtime, comparing nonconference to conference games in different point systems.

You can also read more about this issue in this article that I wrote for The Hockey News. 

What happened last year?

Beginning last winter, I started to monitor teams playing for overtime using the twitter hashtag #playforOT. While many of those tweets have been lost in the archives, here are a few examples of boxscores that I uncovered last year.

Ex. 1: Columbus and Calgary posted just two shots on goal in the final 4.5 minutes, none from inside 60 feet, to play for OT.

Ex. 2: In a tilt between Chicago and Florida in October, neither team recorded a shot within 36 feet in the game’s final eight minutes.

Ex. 3: In Chicago’s contest with Carolina, there were no shots in the game’s final 120 seconds (counting missed, blocked, or on-goal shots). In hockey, that’s pretty hard to do.

Ex. 4: It must have been fun to watch the last three minutes of this one between Anaheim and Tampa Bay!

[Screenshot: box score of the Anaheim-Tampa Bay game]

Overall, given the league’s updated realignment (now, teams are judged relative to only their divisional opponents), we also observed a higher proportion of overtime in non-divisional, within-conference games (26%) than ever before.

So what about the 2014-2015 season?

If the league’s point system inefficiencies (or its loser point, or the 19 columns that the league standings require) bother you too, then play along.

Tweet (using #playforOT, or to @StatsbyLopez) box scores or anecdotes from games where teams stop trying to score in the waning minutes of a tie game. While many OT games are likely due to chance, I feel strongly that, because the league’s incentives for teams to play for OT are as strong as ever, we will continue to see several games in which teams stop trying to score in order to reach OT.

Thanks for reading, and I’ll see you in the extra session.

This might be the best thing I did in graduate school

I was looking for a few old Intro Stat activities today, and came across this gem.

It’s a mathematical statistics cheat-sheet!

Specifically, it’s a .pdf file containing the 15 most common distributions in statistics, their density or mass functions, support, first and second moments, and additional notes (for example, did you know that the uniform distribution is a special case of the beta distribution?).

If you want the Word file, feel free to email me and I’ll send it over. I used this when studying for inference exams and then eventually qualifying exams.

Here’s a screenshot: