I met with my adviser yesterday to discuss a draft of my prospectus that I completed over the Thanksgiving break. He agreed that it’s ready to share with potential committee members to get their feedback and move toward scheduling a defense, which is great news.
My adviser had a nice way of situating my overall project in relation to competing views in the philosophy of statistics that I’d like to share here.
There are two dominant views in the philosophy of statistics (as philosophers practice it), which can be roughly summarized as follows:
1. Statistical methods should be grounded in relations of evidential support. Relations of evidential support conform to the Likelihood Principle. Thus, statistical methods should conform to the Likelihood Principle. The kinds of methods currently on offer that conform to the Likelihood Principle are Bayesian methods and pure likelihood methods. Thus, statisticians should in principle use those kinds of methods.
2. Statistical methods should be grounded in relations of evidential support. Relations of evidential support do not conform to the Likelihood Principle. Instead, they are best understood in terms of the notion of a severe test. The kinds of methods currently on offer that best capture the idea of a severe test are frequentist methods. Thus, statisticians should use frequentist methods.
The alternative view I want to develop is something like the following:
3. The notion of “evidence” plays an important role in a folk epistemology that serves us well in everyday life and certain professional settings (e.g. the law, and even science to some extent), but it turns out not to be a good guide for serious methodology. Birnbaum showed that compelling intuitions about evidence lead to the Likelihood Principle, but in some cases we have good reasons to use methods that violate the Likelihood Principle. Statisticians should use the methods that they and other stakeholders have reason to believe will yield the best results in the case at hand, which will sometimes be Bayesian, sometimes frequentist, and perhaps sometimes both or neither. At present one can provide some rough guidelines for when a Bayesian approach is likely to work better than a frequentist approach and vice versa, but no more than that. In practice, one must typically rely on a hodgepodge of considerations (such as those described here) to decide what kind of method is likely to work best in the problem at hand.
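To make the conflict between frequentist methods and the Likelihood Principle concrete, here is a quick sketch (not from the post itself) of the classic stopping-rule example often attributed to Lindley and Phillips: twelve coin tosses yield three heads, and the likelihood function is the same whether the number of tosses was fixed in advance or tossing continued until the third head, yet the two designs give different p-values.

```python
# Toy stopping-rule example (Lindley & Phillips style): 12 tosses, 3 heads.
# The likelihood is proportional to theta^3 * (1 - theta)^9 under BOTH
# designs below, so any method obeying the Likelihood Principle must treat
# them identically -- but the frequentist p-values differ.
from math import comb

theta0 = 0.5  # null hypothesis: fair coin; one-sided alternative theta < 0.5
n, k = 12, 3  # 12 tosses observed, 3 of them heads

# Binomial design (n = 12 fixed in advance):
# p-value = P(X <= 3 | n = 12, theta = 0.5)
p_binom = sum(comb(n, x) * theta0**x * (1 - theta0)**(n - x)
              for x in range(k + 1))

# Negative binomial design (toss until the 3rd head appears):
# p-value = P(N >= 12) = P(at most 2 heads in the first 11 tosses)
p_negbin = sum(comb(n - 1, x) * theta0**x * (1 - theta0)**(n - 1 - x)
               for x in range(k))

print(f"binomial p-value:          {p_binom:.4f}")   # 0.0730
print(f"negative binomial p-value: {p_negbin:.4f}")  # 0.0327
```

At the conventional 0.05 level, one design rejects the null and the other does not, even though a Bayesian or pure likelihood analysis cannot distinguish them. Whether this counts against frequentist methods or against the Likelihood Principle itself is, of course, exactly what views 1–3 disagree about.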
This viewpoint fits well with the “toolkit” mentality that I described in my previous post, and I think that many statisticians would find it congenial. It will be a harder sell in philosophy, where the claim that methods of inference should be grounded in relations of evidential support is deeply ingrained. I need to develop an argument against this claim that does not rely on technical concepts from statistics outside most philosophers’ training. I can at least say that the assumption that statistics should be beholden to our intuitions about the concept of evidence seems to rest on a naïve form of Platonism that most philosophers reject.