The Law of Likelihood states that the outcome E of an observational situation favors hypothesis H1 over hypothesis H2 if and only if Pr(E|H1) > Pr(E|H2), and that the degree to which E favors H1 over H2 is given by the likelihood ratio Pr(E|H1)/Pr(E|H2). The Law of Likelihood is controversial. It has been championed by Hacking (1965), Edwards (1972), and Royall (1997). Bayesian methods are consistent with it, but many Bayesians have no interest in it because they believe that the posterior distribution is all one needs for inference and decision making. Frequentist methods can violate it in various senses; for instance, there are uniformly most powerful level α hypothesis tests of a null hypothesis H0 against an alternative Ha that reject H0 if and only if an outcome occurs that according to the Law of Likelihood favors H0 over Ha, and moreover does so more than any other outcome (see my previous post).
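To make the favoring relation concrete, here is a minimal sketch in Python using a hypothetical binomial example (the hypotheses, sample size, and outcome are my own illustrative choices, not from the post):

```python
# Illustrative sketch of the Law of Likelihood's favoring relation.
# Hypothetical setup: E is "8 successes in 10 Bernoulli trials";
# H1 says the success probability is 0.7, H2 says it is 0.3.
from math import comb

def binom_pmf(k, n, p):
    """Pr(k successes in n trials | success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 10, 8        # observed outcome E
p1, p2 = 0.7, 0.3   # H1 and H2

ratio = binom_pmf(k, n, p1) / binom_pmf(k, n, p2)
# ratio > 1, so by the Law of Likelihood E favors H1 over H2,
# and the likelihood ratio itself measures the degree of favoring.
print(f"Pr(E|H1)/Pr(E|H2) = {ratio:.1f}")
```

Here the binomial coefficients cancel, so the ratio reduces to (0.7/0.3)^8 · (0.3/0.7)^2 ≈ 161, a strong degree of favoring of H1 over H2.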
My view regarding principles such as the Law of Likelihood is that no matter how well they might accord with our intuitions, they need to earn their keep by somehow helping us achieve our epistemic and/or practical goals. In this post, I will show that there is a way in which following the Law of Likelihood can help us achieve such goals: a hypothesis test that violates it fails to maximize expected utility.