Several people expressed interest in Friday’s post about the Sloan Sports Analytics Conference (click here to read Part I).
Thanks for reading. Here are a few follow-up notes that I found interesting.
1) Over on his website The Spread, Trey Causey wrote a great piece on reproducibility, part of which I’ll share below. I encourage you to read the whole thing, as he makes my point better than I could.
When research is not reproducible, it is difficult to verify its veracity. How many models did the authors estimate before arriving at the one in the paper? Are the results robust to different model specifications? What happens when you include or exclude various variables? These are unanswerable questions. Sports organizations want advanced analytics capacity to make better decisions and get an edge over their competition. The way to get an edge is not to have someone write a proprietary paper and then say ‘trust me on the findings.’ That’s how bad decisions are made.
2a) While initially no representatives from Sloan or MIT reached out (not that I expected them to), poster presenters received an email on Tuesday afternoon informing us that we would be allowed to attend, with one free ticket per poster. Great news!
2b) Also, I spoke with both Kirk Goldsberry (phone) and John Ezekowitz (email, twitter). I appreciated both of them taking the time to share their views.
Looking back, my intent was to share my story about not getting to present a poster in person, and that story, for the most part, had nothing to do with Kirk or John or their research. Still, it would have been prudent to contact both of them before writing about their roles, and their input would have been valuable to the post. Both had a right to hear me out and, where appropriate, defend themselves, and I regret not giving them that opportunity. Lesson learned.
3) A few points still stand:
-The Sloan RP process was, at best, unprofessional
-Valuing reproducible research would greatly improve the RP contest and help the conference’s overall academic growth (but does the conference care?)
-RP submissions should be blinded*, and field experts should be used where appropriate
*That being said, I don’t believe that the ‘connections,’ as I called them in my blog, influenced Kirk’s successful paper in 2014 or his follow-up post on Grantland. This type of praise reinforced that belief:
4) Over e-mail, John and his co-authors apologized for their literature review “not being as complete as it could’ve been.” The group also touched base with previous hot-hand authors regarding the oversight. John and his co-authors agreed to cite those papers in future publications, and indicated that they were already citing at least one of them in a current, longer draft.
5) Lastly, allowing for a large amount of selection bias here – the people who contacted me may differ from those who didn’t with respect to their opinions on my post – here’s a summary of what others had to say.
From current college professors:
From members of the media: