The Law of Likelihood states that the outcome E of an observational situation favors hypothesis H_{1} over hypothesis H_{2} if and only if Pr(E|H_{1}) > Pr(E|H_{2}), and that the degree to which E favors H_{1} over H_{2} is given by the likelihood ratio Pr(E|H_{1})/Pr(E|H_{2}). The Law of Likelihood is controversial. It has been championed by Hacking (1965), Edwards (1972), and Royall (1997). Bayesian methods are consistent with it, but many Bayesians take no interest in it because they believe that the posterior distribution is all one needs for inference and decision making. Frequentist methods can violate it in various senses; for instance, there are uniformly most powerful level α hypothesis tests of a null hypothesis H_{0} against an alternative H_{a} that reject H_{0} if and only if an outcome occurs that, according to the Law of Likelihood, favors H_{0} over H_{a}, and moreover does so more strongly than any other possible outcome (see my previous post).
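To make the definition concrete, here is a minimal numerical sketch. The binomial setup and the specific numbers are my own illustrative choices, not from the post: two simple hypotheses about a coin's heads probability are compared on a single observed outcome via the likelihood ratio Pr(E|H_{1})/Pr(E|H_{2}).

```python
from math import comb


def binomial_likelihood(k: int, n: int, p: float) -> float:
    """Pr(E | p): probability of exactly k heads in n independent tosses."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)


# Outcome E: 7 heads in 10 tosses (illustrative data).
n, k = 10, 7
p1, p2 = 0.7, 0.5  # H1: heads probability 0.7; H2: heads probability 0.5

ratio = binomial_likelihood(k, n, p1) / binomial_likelihood(k, n, p2)
print(f"Likelihood ratio Pr(E|H1)/Pr(E|H2) = {ratio:.3f}")
# The ratio exceeds 1, so by the Law of Likelihood E favors H1 over H2,
# and the degree of favoring is the ratio itself (about 2.28).
```

Note that the binomial coefficient cancels in the ratio, so the comparison depends only on the two likelihoods of the observed outcome, which is exactly the point of the Law.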

My view regarding principles such as the Law of Likelihood is that no matter how well they might accord with our intuitions, they need to earn their keep by somehow helping us achieve our epistemic and/or practical goals. In this post, I will show that there is a way in which following the Law of Likelihood can help us achieve such goals: a hypothesis test that violates it fails to maximize expected utility.