### Where We’ve Been

I have *argued for* the **Likelihood Principle**, which says that the evidential meaning of a datum with respect to a partition depends only on the probabilities that the elements of that partition ascribe to that datum, up to a constant of proportionality. (Here)

From the Likelihood Principle, I have *argued for* the **Law of Likelihood**, which says that the degree to which a datum favors one element of a partition over another is given by the ratio of the probabilities that those hypotheses ascribe to that datum. (Here and here)

I have *argued against* **methodological likelihoodism**, which says that characterizing data as evidence in accordance with the Law of Likelihood is an adequate self-contained methodology for science (at least as a fallback option in cases in which prior probabilities are “not available”). (Here)

### Where We’re Going: The Methodological Likelihood Principle

The next claim I want to consider is the **Methodological Likelihood Principle**, which says that an adequate methodology for science respects evidential equivalence as characterized by the Likelihood Principle. That is, if the elements of a partition ascribe the same probabilities to $x_1$ and $x_2$ up to a constant of proportionality, then an adequate methodology for science yields the same conclusions over that partition given either $x_1$ or $x_2$ (holding fixed prior beliefs and other relevant evidence about that partition; the set of decisions to which one’s beliefs about that partition are relevant; and the utility functions for those decisions).
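To make the principle's antecedent concrete, here is a minimal Python sketch using the classic binomial/negative binomial example (the specific numbers are my illustration, not drawn from the text above): the outcome "9 heads, 3 tails" can arise from flipping a coin 12 times or from flipping until the 3rd tail, and the two experiments ascribe probabilities to that outcome that differ only by a constant factor. The Methodological Likelihood Principle then requires the same conclusions about the coin's bias either way.

```python
from math import comb

# Illustrative sketch (classic example, not from the text above): 9 heads and
# 3 tails observed under two sampling plans.  The two likelihood functions for
# the bias theta differ only by a constant factor, so the ratio below is the
# same for every theta.

def binomial_lik(theta, n=12, k=9):
    # P(k heads in n flips) under bias theta
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def neg_binomial_lik(theta, r=3, k=9):
    # P(k heads observed before the r-th tail) under bias theta
    return comb(k + r - 1, k) * theta**k * (1 - theta)**r

ratios = [binomial_lik(t) / neg_binomial_lik(t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
print(ratios)  # approximately 4.0 for every theta: comb(12, 9) / comb(11, 9)
```

Because the $\theta^9(1-\theta)^3$ factor cancels, the ratio reduces to the constant $\binom{12}{9}/\binom{11}{9} = 4$, which is exactly the situation the principle's antecedent describes.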

If the Methodological Likelihood Principle is true, then the use of a methodology that violates the Likelihood Principle (such as a standard frequentist methodology) is **impermissible** (and *a fortiori* non-obligatory) except insofar as the use of an inadequate methodology is permissible (e.g. perhaps as a computational or rhetorical expedient).
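The conflict with standard frequentist methodology can be seen in the same coin-flipping example (again my illustration, using standard numbers): a one-sided test of $H_0: \theta = 0.5$ against $\theta > 0.5$ yields different p-values for the same data depending on the sampling plan, so at the conventional 0.05 level one rejects under one plan but not the other.

```python
from math import comb

# Illustrative sketch (standard example, not from the text): same data
# (9 heads, 3 tails), one-sided test of H0: theta = 0.5 vs theta > 0.5.
# The p-value depends on the sampling plan, so a standard frequentist
# methodology draws different conclusions from outcomes the Likelihood
# Principle deems evidentially equivalent.

# Binomial plan (flip 12 times): P(at least 9 heads | theta = 0.5)
p_binom = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Negative binomial plan (flip until the 3rd tail): 9 or more heads occur
# before the 3rd tail iff at most 2 tails occur in the first 11 flips.
p_negbinom = sum(comb(11, j) for j in range(3)) / 2**11

print(round(p_binom, 4), round(p_negbinom, 4))  # 0.073 0.0327
```

The negative binomial plan rejects at the 0.05 level while the binomial plan does not, even though the two outcomes have proportional likelihood functions.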

Roughly speaking, methodological likelihoodism is the conjunction of the Law of Likelihood with the evidentialist thesis that proper characterizations of data as evidence are **sufficient** for doing science, while the Methodological Likelihood Principle is the conjunction of the Likelihood Principle with the evidentialist thesis that respecting proper characterizations of data as evidence is **necessary** for doing science.

One might think that there’s not much to discuss here. After all, it seems obvious and perhaps even analytically true that one’s conclusions about a partition in light of some datum should depend on that datum only through its evidential meaning with respect to that partition—for ideally rational agents, evidential meaning “screens off” the relationship between data and beliefs about hypotheses. Moreover, one could just reformulate proofs of the Likelihood Principle expressed in terms of evidential meaning in terms of the features of an adequate methodology, thereby eliminating the need to argue from the Likelihood Principle to the Methodological Likelihood Principle. In this post I argue that there is in fact something to discuss here because **both of these easy attempts to bridge the gap between the Likelihood Principle and the Methodological Likelihood Principle fail**.

### Why one cannot take for granted that evidential meaning screens off the relationship between data and beliefs about hypotheses for ideally rational agents

One might think that given the Likelihood Principle, the Methodological Likelihood Principle is analytically true. After all, “the evidential meaning of datum $x$ with respect to partition **H**” just *means* “how datum $x$ ought to bear on one’s beliefs about **H**.”

The problem with this idea is that **if one regards the claim that “evidential meaning” means “import for one’s beliefs” as analytically true, then one should examine proofs of the Likelihood Principle with that claim in mind**. When I do so, I find that **those proofs no longer seem so compelling**. Indeed, they seem to beg the question against reliabilist views, including the views of frequentists such as Jerzy Neyman who claim that we should think about scientific methodology not in terms of learning about a particular partition from a particular datum, but in terms of the long-run operating characteristics of our methods.

My approach has been to treat the premises that lead to the Likelihood Principle as attempts to precisify a nebulous intuitive notion of evidential meaning. When taken in that way, they do seem to me quite compelling. But this approach requires treating the idea that evidential meaning captures “import for one’s beliefs” as an additional, substantive claim.

One can argue about whether one should take as primary (1) the premises that lead to the Likelihood Principle or (2) the idea that “evidential meaning” means “import for one’s beliefs.” The approach I take is not mandatory, but it does have advantages in terms of continuity with the statistical literature on proofs of the Likelihood Principle and with philosophical debates between evidentialists and reliabilists. Ultimately, which approach one takes should not make any *substantive* difference in terms of one’s judgments about methodology, although it could make a difference in how one uses the word “evidence” and its cognates.

**What one must not do is take proofs of the Likelihood Principle such as mine or Birnbaum’s, formulated in terms of a nebulous notion of evidential meaning, as direct arguments for the Methodological Likelihood Principle. There is an evidentialist/reliabilist dispute that those proofs fail to address**.

### Why one should not reformulate proofs of the Likelihood Principle in terms of what methods one should use

Birnbaum formulates his proof of the Likelihood Principle essentially as follows (see Birnbaum 1962, Gandenberger 2014).

**The Sufficiency Principle:** Let $T(X)$ be a sufficient statistic for the index set {$\theta$} of the partition under consideration in experiment $E$ with sample space $\mathcal{X}$. For any $x_1, x_2 \in \mathcal{X}$, if $T(x_1)=T(x_2)$ then $\mbox{Ev}(E,x_1)=\mbox{Ev}(E,x_2)$ (that is, outcomes $x_1$ and $x_2$ of $E$ are evidentially equivalent with respect to {$\theta$}).

**The Weak Conditionality Principle:** For any outcome $x$ of any component $E'$ of any simple mixture experiment $E$, $\mbox{Ev}(E,(E',x))=\mbox{Ev}(E', x)$.
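As a concrete sketch of the Sufficiency Principle's antecedent (my illustration, not Birnbaum's): for Bernoulli trials, $T(x) =$ the number of successes is a sufficient statistic for the bias $\theta$, and any two outcome sequences with the same $T$-value ascribe identical probabilities to the data under every $\theta$.

```python
from fractions import Fraction

# Illustrative sketch (not Birnbaum's own example): two Bernoulli outcome
# sequences with the same number of successes have identical likelihood
# functions, so the Sufficiency Principle deems them evidentially equivalent.
# Fraction arithmetic keeps the comparison exact.

def likelihood(theta, x):
    """Exact probability of the 0/1 sequence x under success probability theta."""
    p = Fraction(1)
    for xi in x:
        p *= theta if xi == 1 else 1 - theta
    return p

x1 = [1, 1, 0, 1]  # T(x1) = 3 successes
x2 = [0, 1, 1, 1]  # T(x2) = 3 successes, different order
thetas = [Fraction(1, 5), Fraction(1, 2), Fraction(4, 5)]
print(all(likelihood(t, x1) == likelihood(t, x2) for t in thetas))  # True
```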

**The Likelihood Principle:** Let $x$ and $y$ be possible outcomes of $E_1$ and $E_2$, respectively, such that $P_\theta(x) = cP_\theta(y)$ for all elements of the partition {$\theta$} and some positive $c$ that is constant in $\theta$. Then $\mbox{Ev}(E_1, x) = \mbox{Ev}(E_2, y)$.

Under this formulation, the Likelihood Principle implies the Methodological Likelihood Principle given the following further premise:

**The Respect-for-Evidential-Equivalence Principle:** An adequate methodology for science would never lead one to draw different conclusions about a partition {$\theta$} depending upon whether one learns $(E_1, x)$ or $(E_2, y)$ where $\mbox{Ev}(E_1, x) = \mbox{Ev}(E_2, y)$ (holding fixed prior beliefs and other relevant evidence about {$\theta$}; the set of decisions to which one’s beliefs about {$\theta$} are relevant; and the utility functions for those decisions).

Alternatively, one could argue for the Methodological Likelihood Principle more directly by formulating the Sufficiency and Weak Conditionality Principles in terms of the features of an adequate methodology, as follows (holding fixed throughout prior beliefs and other relevant evidence about {$\theta$}; the set of decisions to which one’s beliefs about {$\theta$} are relevant; and the utility functions for those decisions).

**The Methodological Sufficiency Principle:** Let $T(X)$ be a sufficient statistic for the index set {$\theta$} of the partition under consideration in experiment $E$ with sample space $\mathcal{X}$. For any $x_1, x_2 \in \mathcal{X}$, if $T(x_1)=T(x_2)$ then an adequate methodology for science would not lead one to draw different conclusions about {$\theta$} depending upon whether one learns $(E, x_1)$ or $(E, x_2)$.

**The Methodological Weak Conditionality Principle:** For any outcome $x$ of any component $E'$ of any simple mixture experiment $E$, an adequate methodology for science would not lead one to draw different conclusions about {$\theta$} depending upon whether one learns $(E, (E', x))$ or $(E', x)$.

**The Methodological Likelihood Principle:** Let $x$ and $y$ be possible outcomes of $E_1$ and $E_2$, respectively, such that $P_\theta(x) = cP_\theta(y)$ for all elements of the partition {$\theta$} and some positive $c$ that is constant in $\theta$. Then an adequate methodology for science would not lead one to draw different conclusions about {$\theta$} depending upon whether one learns $(E_1, x)$ or $(E_2, y)$.

**Both versions of the argument face the objection that the Likelihood Principle conflicts with the frequentist-reliabilist position that one should use methods that provide certain kinds of guarantees about long-run operating characteristics.** In the first version of the argument, this objection can be localized to the Respect-for-Evidential-Equivalence Principle. In the second version, it gets smeared across the Methodological Sufficiency and Weak Conditionality Principles. **One should in principle reach the same conclusion whichever version of the argument one considers, but the first seems to me more perspicuous.**

### Conclusions

**There is no quick and easy way to bridge the gap between the Likelihood Principle and the Methodological Likelihood Principle.** Making the leap from the former to the latter requires addressing the frequentist-reliabilist position that one should choose methods on the basis of their long-run operating characteristics rather than on the basis of their conformity to intuitively plausible claims about evidential equivalence.

To share your thoughts about this post, comment below or send me an email.
