Many statisticians today consider themselves “toolkit statisticians.” That is, they do not think that there is one correct statistical method for all problems. Instead, there are many methods each of which is useful in some settings but not others. A good statistician, like a good carpenter, is one who is familiar with the strengths and weaknesses of many tools and is skilled at using the right tool in the right way to address the job at hand.

Many toolkit statisticians believe that some jobs call for frequentist tools, while others call for Bayesian tools. As a result, much of the heat has gone out of Bayesian/frequentist debates in statistics. Few statisticians today are frequentist enough to claim that Bayes’ theorem should only be applied to known frequencies or Bayesian enough to claim that using Bayes’ theorem to update probability distributions is the only legitimate method of inference. Some use frequentist methods in most of their applied work, while others use primarily Bayesian methods, but these differences are regarded as a matter of individual preference, like one carpenter’s preference for screws over nails. One can give reasons for holding a certain preference, but those reasons will always be inconclusive because there is no single, authoritative measure of goodness according to which one kind of tool is better than the other.
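The contrast between the two kinds of tools can be made concrete with a toy example. The sketch below applies a frequentist estimator and a Bayes'-theorem update to the same data; the coin-flip setup, the numbers, and the uniform-prior choice are illustrative assumptions of mine, not anything from the discussion above.

```python
# Toy problem (assumed for illustration): 7 heads in 10 coin flips;
# the parameter of interest is p = P(heads).

heads, flips = 7, 10
tails = flips - heads

# Frequentist tool: the maximum-likelihood estimate of p,
# i.e. the observed relative frequency.
p_mle = heads / flips

# Bayesian tool: update a Beta(1, 1) (uniform) prior via Bayes' theorem.
# With a Beta prior and binomial data, the posterior is
# Beta(1 + heads, 1 + tails); we report its mean as a point estimate.
alpha, beta = 1 + heads, 1 + tails
p_posterior_mean = alpha / (alpha + beta)

print(f"MLE: {p_mle:.3f}, posterior mean: {p_posterior_mean:.3f}")
```

Neither answer is wrong; they quantify different things (the likelihood-maximizing value versus an average over a posterior distribution), which is precisely why a toolkit statistician may want both in the kit.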

Such ecumenical versions of the toolkit perspective give up on the idea that debates about the foundations of statistics will end with either Bayesian methods or frequentist methods standing triumphant. Thus, many of those who hold such a perspective doubt whether there is any point in pursuing such debates. But this response is too hasty. A toolkit statistician needs to decide which tools to include in his or her kit and when and how to employ each of these tools. If foundational disputes yielded clear and simple verdicts, then these decisions would be easy. As matters stand, they can be quite hard. Making them intelligently requires not only knowing a great deal about the properties of particular procedures, but also thinking carefully about what those properties mean and why they matter. Thus, it requires thinking about foundations.

To be an ecumenical toolkit statistician is to take a stance on foundational issues. Many statisticians and philosophers believe that allowing Bayesian methods in science is dangerous. Others believe that frequentist methods have been thoroughly discredited. An ecumenical toolkit is viable only if those views are overblown. Deciding responsibly whether or not they are overblown requires examining the relevant arguments.

By claiming that foundational issues are relevant to a toolkit statistician, I am not claiming that toolkit statistics needs foundations. It would be nice to have a few simple, exceptionless principles to guide practice, but it is not clear that good statistics requires such principles any more than good carpentry does. Whether any such principles exist or not is a subject that requires detailed investigation.
