I am a data scientist at Uptake in Chicago. This website is primarily about the research I carried out during my academic career.
I have a Ph.D. in History and Philosophy of Science and an MA in Statistics, both from the University of Pittsburgh. My academic research addressed questions about the proper use and interpretation of statistical methods.
One question I addressed is whether our statistical methods should take into account facts about why an experiment ended when it did. Consider, for instance, a case in which a pharmaceutical company watches the data from a drug trial as it comes in and ends the trial as soon as it produces a nominally statistically significant result in favor of the claim that the drug is effective. It might seem that we should be more reluctant to accept the claim that the drug is effective on the basis of this trial than in a parallel case in which the same data are produced but the sample size is fixed in advance. On the other hand, there is something odd about this idea. I have described two hypothetical scenarios that produce exactly the same data. The only difference between them is a difference in the drug company's intentions about when to end the trial, which turns out to make no difference to when the trial actually ends. It is hard to see how this ineffectual difference in intentions could make a difference to the evidential import of the data; and if it does not, then it is hard to see why it should affect our willingness to accept the conclusion that the drug is effective.
Frequentist statistical methods accord with the first view, according to which a difference in an experiment's "stopping rule" can make a difference to the conclusions it warrants. Bayesian statistical methods are generally regarded as according with the second view, according to which stopping rules cannot make such a difference under typical circumstances (including those described above). The view that stopping rules are typically irrelevant to evidential import also follows from a Bayesian claim called the Likelihood Principle, which I have written about here (preprint). My contribution to this debate is to show that frequentists are often right in practice even if Bayesians are right in principle: stopping rules typically do not make a difference to Bayesian posterior probabilities, but a Bayesian decision-maker who engages in repeated interactions with scientists may need to take stopping rules into account in order to maximize expected utility, on pain of incentivizing experimental designs that he or she regards as suboptimal. This conclusion is a victory for frequentists in that it shows that they are often right in practice, and a victory for Bayesians in that it goes some way toward reconciling their principles with widely held intuitions about scenarios like the two described above. My paper about this topic is available here.
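The frequentist side of this disagreement is easy to illustrate by simulation. The sketch below (a minimal illustration, not drawn from my paper; all names and parameter choices are my own) simulates many trials in which the drug has no effect, with an analyst who peeks at the running z-statistic at regular intervals and stops as soon as it crosses the conventional 1.96 threshold. The nominal false-positive rate of a fixed-sample test is 5%, but under this optional-stopping rule the realized rate is substantially higher. By contrast, the likelihood of the observed data, and hence a Bayesian posterior, depends only on the observations themselves, not on the rule that ended the trial.

```python
import math
import random

random.seed(0)


def optional_stopping_trial(n_max=100, look_every=10, z_crit=1.96):
    """Simulate one trial under the null hypothesis (the drug has no
    effect): observations are standard normal draws. The analyst checks
    the running z-statistic every `look_every` observations and stops
    the trial as soon as it is nominally significant."""
    total = 0.0
    for n in range(1, n_max + 1):
        total += random.gauss(0.0, 1.0)
        if n % look_every == 0:
            z = total / math.sqrt(n)
            if abs(z) > z_crit:
                return True  # "significant" result; trial stops early
    return False  # reached the maximum sample size without rejecting


# A fixed-n test at z_crit = 1.96 rejects a true null about 5% of the
# time; repeated peeking gives the analyst many chances to reject, so
# the overall false-positive rate comes out well above 5%.
trials = 20000
false_positives = sum(optional_stopping_trial() for _ in range(trials))
rate = false_positives / trials
print(f"false-positive rate under optional stopping: {rate:.3f}")
```

With ten interim looks, the simulated false-positive rate typically lands somewhere in the 15–20% range rather than the nominal 5%, which is the frequentist's reason for insisting that the stopping rule matters to what the data warrant.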