After I finished my previous post, it occurred to me that there is a kind of truth-valued assumption to which one can appeal in order to give a performance-based justification for a Bayesian analysis: the assumption that the true hypothesis lies within the set of hypotheses on which the method performs well relative to a set of relevant alternatives. Roughly speaking, one can assume that one has chosen good priors for the problem at hand.
I suggest that, with reference to Bayesian statistics, the term “objective” would be better applied not to views that prescribe rules for choosing priors, but to the view that a Bayesian analysis, insofar as it is justified at all, is justified not by the Likelihood Principle or the usual coherence arguments, but by the assumption that the priors used are well chosen relative to the true state of affairs. In accordance with the views of Bayesian statisticians who regard a prior as “just another assumption,” whether the prior represents anyone’s degrees of belief, and whether it is “informative,” are irrelevant from this point of view.
There are results from the field of “comparative statistics” which indicate roughly that choosing good priors is easy in problems with low-dimensional hypothesis spaces and hard in problems with high-dimensional hypothesis spaces (see e.g. Samaniego 2010). Such results are of great interest for those who are objectivists about Bayesian methods in the proposed sense. They are quite different in kind from robust Bayesian results, which concern not the degree to which a given prior performs well on various plausible states of affairs in repeated applications with varying data, but rather the degree to which various priors lead to the same conclusion in a given application with given data. Whereas comparative statistics seeks to evaluate Bayesian methods objectively, robust Bayesianism seeks to evaluate particular Bayesian conclusions intersubjectively.
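The contrast with robust Bayesianism can be made concrete with a toy example of my own construction (the numbers and priors are illustrative, not drawn from Samaniego or any other source): a robust Bayesian check holds one data set fixed and asks whether several plausible priors lead to roughly the same posterior conclusion.

```python
# Hypothetical robust-Bayes sensitivity check: one fixed data set, varying priors.
# With x successes in n Bernoulli trials and a Beta(a, b) prior, the posterior
# mean of the success probability is (x + a) / (n + a + b)  (Beta-binomial conjugacy).

x, n = 7, 10  # one fixed, illustrative data set

priors = {
    "Beta(0.5, 0.5)": (0.5, 0.5),  # Jeffreys prior
    "Beta(1, 1)":     (1.0, 1.0),  # uniform prior
    "Beta(2, 2)":     (2.0, 2.0),  # mildly informative, centered at 0.5
}

posterior_means = {name: (x + a) / (n + a + b) for name, (a, b) in priors.items()}

for name, m in posterior_means.items():
    print(f"{name}: posterior mean = {m:.3f}")

# A small spread across plausible priors is what makes the conclusion
# "robust" in the intersubjective sense: the same given data, many priors.
spread = max(posterior_means.values()) - min(posterior_means.values())
print(f"spread across priors = {spread:.3f}")
```

Note that nothing here evaluates how the method would perform over repeated applications under various true states of affairs; that is precisely the question comparative statistics asks and this check does not.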
Added on 2/22: One attractive feature of frequentist methods is that they provide some kind of guarantee about performance given only the assumption that the true hypothesis lies in a given set. Bayesian methods offer better expected performance from the perspective of an agent whose priors are being used, but they do not control worst-case performance.
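This trade-off can be illustrated with a small example of my own (a sketch under assumed conditions: a binomial model, squared-error loss, and an informative prior I chose for effect). The posterior-mean estimator under a Beta(20, 20) prior beats the MLE badly when the true success probability is near 0.5, where the prior concentrates, but its worst-case risk over the whole parameter space is far worse than the MLE's.

```python
# Exact frequentist risk (MSE) of the estimator (x + a) / (n + a + b)
# for binomial data, as a function of the true success probability p.
# Setting a = b = 0 recovers the MLE x / n; a = b = 20 is the posterior
# mean under an (illustrative) informative Beta(20, 20) prior.

def mse(p, n, a, b):
    """MSE = sampling variance + squared bias, computed in closed form."""
    est_mean = (n * p + a) / (n + a + b)           # expectation of the estimator
    est_var = n * p * (1 - p) / (n + a + b) ** 2   # its sampling variance
    return est_var + (est_mean - p) ** 2

n = 10
grid = [i / 100 for i in range(101)]  # grid of possible true values of p

mle_risk   = [mse(p, n, 0, 0) for p in grid]
bayes_risk = [mse(p, n, 20, 20) for p in grid]

# Near p = 0.5 the informative prior helps enormously:
print(mse(0.5, n, 0, 0), mse(0.5, n, 20, 20))    # MLE risk vs. Bayes risk at 0.5

# ...but the Bayes estimator's worst-case risk is much larger than the MLE's:
print(max(mle_risk), max(bayes_risk))
```

The worst case for the Beta(20, 20) estimator occurs at the extremes p = 0 and p = 1, exactly the states of affairs the prior effectively rules out; this is the sense in which a Bayesian method's good expected performance under its own prior buys no worst-case guarantee.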