The 2013 Atlantic Causal Inference Conference (ACIC) was held this past Monday and Tuesday at Harvard. It’s an annual meeting that brings together biostatisticians and epidemiologists interested in causal theory and applications, and I was part of a small but proud contingent from Brown that attended. It was my first ACIC, and I hope to attend next year. Here’s a list of several of my take-home lessons.
Lesson 1: Conservative intervals aren’t always a good thing.
An old mentor once told me that if you can pick up just one new thing from each talk you attend, you are doing well. One new thing I picked up at ACIC came during a talk by Cassandra Wolos Pattanayak, a college fellow in Harvard’s Department of Statistics. Cassandra’s research looked at post-rollout drug testing, in which drugs that had already been approved are analyzed to determine whether, once in public use, they cause harm.
Cassandra’s data are one example where a “conservative” confidence interval (CI) – one that errs on the side of including the null more often – could be a bad thing, something I hadn’t considered before. A CI that is too wide increases the chance that the null value is included, and in testing for harmful drug side effects, the null is that there are no side effects. As a result, intervals that are too wide can yield a hypothesis test that fails to reject a false null hypothesis. In other words, this pitfall could mean concluding that a drug has no harmful side effects when it actually does. Cassandra and her advisers have worked on developing test statistics that, for certain types of causal data, are less prone to producing CIs that are too wide.
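To make the point concrete, here is a minimal simulation sketch (in Python, and not Cassandra’s actual method): when there really is a harmful effect, a “conservative” interval that has been arbitrarily widened includes zero far more often than a standard interval, so the corresponding test misses the harm more often. The effect size, sample sizes, and inflation factor below are all made up for illustration.

```python
# Minimal sketch (assumed numbers, not Pattanayak et al.'s method): shows how an
# overly conservative (widened) confidence interval can mask a real side effect.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.5      # hypothetical true increase in the adverse outcome (assumed)
n, sims = 100, 5_000   # sample size per arm and number of simulated studies
inflate = 1.5          # factor by which the "conservative" interval is widened (assumed)

missed_standard = missed_conservative = 0
for _ in range(sims):
    treated = rng.normal(true_effect, 1.0, n)   # outcomes under the drug
    control = rng.normal(0.0, 1.0, n)           # outcomes under control
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    # Standard 95% CI vs. a deliberately widened ("conservative") CI:
    # an interval containing 0 means the test fails to flag the side effect.
    if diff - 1.96 * se <= 0 <= diff + 1.96 * se:
        missed_standard += 1
    if diff - 1.96 * inflate * se <= 0 <= diff + 1.96 * inflate * se:
        missed_conservative += 1

print(f"standard CI misses the harm in {missed_standard / sims:.1%} of studies")
print(f"conservative CI misses the harm in {missed_conservative / sims:.1%} of studies")
```

Under these made-up settings the widened interval fails to reject the (false) null of no side effect noticeably more often, which is exactly the pitfall described above.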
Lesson 2: If you are presenting at a poster session, bring gum.
This one should’ve seemed pretty obvious to me beforehand anyway, but during my poster session, conversations were animated and held at close range. Having no gum myself, I felt quite guilty that I had just consumed coffee and an everything bagel.
Lesson 3a: If your method bests another one, be prepared to answer a question about why.
Lesson 3b: If your method bests another one and the author of the original method is sitting in the second row, be very prepared to answer a question about why.
This type of preparation is likely needed in most fields of statistics. Not surprisingly, there were some good conversations between causal inference big-shots regarding preferences among methods and research ideas. For those who feel uncomfortable justifying why their method is better, an obvious option is to simply develop bad methods that don’t improve on anything.
Lesson 4: Nail down the time of your talk.
I think 4 out of every 5 ACIC talks went a little over time. In many cases, this wasn’t a bad thing, but for a few talks, the speakers missed getting to their main points because they were short on time.
Lesson 5: Twitter & Verizon were not ready for ACIC
I used my Twitter account (@StatsbyLopez) to live-tweet ACIC, tagging several of the posts with the hashtag #acic. It looks like I was alone on Twitter in this endeavor, which I don’t mind. Of course, my Verizon phone wasn’t ready for ACIC either; every #acic entry was autocorrected to #acid.
Lesson 6: Causal inference is growing, but it needs help
The strongest applications of causal work presented at the 2013 meeting came via talks on progress in measuring teacher and student outcomes. Raj Chetty of Harvard and NBER presented a strong talk on the long-term impacts of having a good teacher, and Dan McCaffrey of Educational Testing Service presented his work on how bonuses have (mostly) not impacted teacher performance. I’m hoping to summarize their findings in a separate blog post.
In any case, several speakers also focused on how challenging it is to implement causal theory in departments, organizations, or companies that are resistant to this type of thinking. For all in attendance, it presented an interesting quandary: while the work we do to further the field of causal inference is important, the effect of our research is only felt when we spend time teaching and implementing the existing causal tools. So which should be prioritized…our research or our teaching/implementation?