### How My Work Provides an Argument for Subjective Bayesianism

Subjective Bayesianism seems to be the only serious alternative to methodological likelihoodism that conforms to the Likelihood Principle. (Objective Bayesian methods violate the Likelihood Principle in their rules for selecting priors.) **My arguments for the Likelihood Principle and against methodological likelihoodism can thus be used as an argument for subjective Bayesianism.**

### Common Criticisms of Subjective Bayesianism

Subjective Bayesianism is often criticized for being too subjective and too permissive. It’s about what you believe rather than, say, what your evidence supports. On standard formulations, it requires only that one’s beliefs be synchronically and diachronically coherent; that is, that they conform to the axioms of probability at a time and be updated by Bayesian conditioning over time. It doesn’t tell you what you ought to believe about a given hypothesis on the basis of your evidence except as a function of what you believed about it before and what you believe about other related hypotheses.

**I don’t find these objections persuasive.** Subjective opinions are important. What am I to use in making decisions if not my own values and opinions (appropriately influenced by my evidence and my interactions with others, of course)? I’d like to be able to give an account of “what one’s evidence supports” in an objective sense that provides good norms of belief or action, but I’m not aware of any such account (likelihoodism does not provide it), and skeptical arguments such as Hume’s problem of induction lead me to doubt that such an account is possible.

### My Main Concern About Subjective Bayesianism

The main in-principle problem with subjective Bayesianism, it seems to me, is **not that it uses subjective beliefs** when the agent in question has them, but that **it doesn’t give much guidance to the agent who doesn’t have them**. (It faces many additional problems in practice, but I will leave those for another day.)

I’m inclined to be skeptical about my opinions, but I have them nevertheless. (I recognize the force of Hume’s argument that any inference from past experiences to future events requires assumptions that are not justified by my evidence. I nevertheless expect the sun to rise tomorrow.) Subjective Bayesianism tells me how to work with those opinions. It does not care about where they came from or about my second-order reflections on them.

There are even subjective Bayesian accounts of how to work with certain kinds of semi-definite opinions. **If I’m presented with an urn and told by a source that I (somehow) know to be completely reliable that the fraction of balls in the urn that are black is between 1/4 and 3/4 (inclusive), for instance, then given no further information I would be inclined to assign to the proposition that a ball drawn at random from the urn will be black the interval-valued probability $[1/4,3/4]$.** I would then update that opinion on new data by updating each probability in that interval by Bayesian conditioning and letting my new belief state be the union of the resulting probabilities. This procedure will converge to the truth with probability one given good data, e.g. the results of a sequence of independent random draws from the urn.
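The post does not say exactly how this interval updating is carried out. One standard way to make it concrete (an assumption of mine, not something the text commits to) is to represent the belief state by a family of Beta priors over the urn’s black-ball fraction, indexed by a prior mean $t$ that ranges over the interval and a fixed strength $s$. A minimal sketch:

```python
def predictive_interval(k, n, s=2.0, t_lo=0.25, t_hi=0.75):
    """Posterior predictive interval for 'the next ball is black'.

    Belief state: the set of Beta(s*t, s*(1-t)) priors over the urn's
    black-ball fraction, for t in [t_lo, t_hi].  Each prior's predictive
    probability after k black balls in n draws is (k + s*t) / (n + s),
    which is monotone in t, so the updated belief state is the interval
    between the two endpoint values.
    """
    return (k + s * t_lo) / (n + s), (k + s * t_hi) / (n + s)

# Prior interval [1/4, 3/4]; after 600 black balls in 1000 draws the
# interval has contracted around the observed frequency 0.6.
print(predictive_interval(0, 0))        # (0.25, 0.75)
print(predictive_interval(600, 1000))
```

Each prior in the family concentrates as data accumulate, so the interval of predictive probabilities contracts around the observed frequency, which matches the convergence claim above.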

**But suppose I’m told nothing about the urn.**

**Then I would be inclined to assign to the proposition that a ball drawn at random from the urn will be black the interval-valued probability $[0,1]$. That opinion seems apt in light of what I know, but it gives rise to a problem: it does not allow one to update one’s way to a more definite opinion by the procedure described above**. That procedure would continue to yield the entire $[0,1]$ interval even after one has seen a million black balls in a row, for instance. Actually, that’s not quite right: the value 0 has been conclusively refuted, so the procedure would yield the half-open interval $(0,1]$. But it would not on any possible data yield a proper subset of the open $(0,1)$ interval. One seemingly bad consequence of this result is that on standard assumptions about the relationship between beliefs and betting behavior, this procedure would not lead one to accept a bet that the next ball will be black on any odds whatsoever no matter how many black balls in a row one had seen.
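One way to see why the vacuous starting point cannot learn (my sketch; the post describes the phenomenon informally) is to model the belief state as a set of point-mass priors on the chance that a draw comes up black. Conditioning a point mass leaves it unchanged unless the data have zero likelihood under it:

```python
def survives(theta, k, n):
    """Does the point-mass prior at chance `theta` survive conditioning
    on k black balls in n draws?  The likelihood theta^k * (1-theta)^(n-k)
    is positive unless theta == 0 with k > 0, or theta == 1 with k < n."""
    if theta == 0 and k > 0:
        return False
    if theta == 1 and k < n:
        return False
    return True

# A million black balls in a row refute only theta = 0; every chance in
# (0, 1], however small, remains, so the belief state stays (0, 1].
k = n = 10**6
print(survives(0, k, n))     # False
print(survives(1e-9, k, n))  # True
print(survives(1.0, k, n))   # True
```

No finite run of black balls shrinks the surviving set below $(0,1]$, which is exactly the problem described above.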

### Possible Responses

There are several possible responses to this result. **One option is to bite the bullet.** On this view, it is appropriate to start with the entire $[0,1]$ interval and to update it in the way described. So much the worse for our inclination to think that the proportion of black balls in the urn is quite high after seeing one million black balls in a row.

This line is hard to accept. It is in a way even stricter than Pyrrhonian skepticism, which directs one to “act according to appearances” despite refraining from belief.

**A second option is to deny that the entire $[0,1]$ interval was an appropriate starting point**. The challenge for this proposal is to specify an alternative. The open interval $(0,1)$ fares no better, and the endpoints of any proper subset of that interval will be arbitrary. One possible approach is to consider what one’s opinion would be after seeing some hypothetical data stream and to reverse-engineer a prior opinion. For instance, suppose that after seeing one million black balls in a row, my opinion about the proposition that the next ball will be black would be appropriately represented by the probability interval $[1-10^{-5}, 1]$. It would be a simple matter to work backwards to the initial interval that would yield this result by Bayesian updating.

Unfortunately, this procedure too requires arbitrarily adopting a more definite opinion than one would naturally have. I would have a hard time saying exactly what probability interval would appropriately represent my belief in the proposition that the next ball will be black after seeing one million black balls in a row. It would certainly be a proper subset of $[1/2,1]$, but I would not be able to specify the exact proper subset in a non-arbitrary way.
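The mechanical part of this proposal is easy; the arbitrariness lies entirely in choosing the target. If one assumes, purely for illustration (the post leaves the updating model unspecified), that the belief state is a family of $\mathrm{Beta}(s\,t,\,s\,(1-t))$ priors over the chance of black, with posterior predictive probability $(k + s\,t)/(n + s)$, then working backwards just means solving that formula for the strength $s$:

```python
def prior_strength(n, target_lo, t_lo=0.0):
    """Solve (n + s*t_lo) / (n + s) == target_lo for s, assuming all n
    observed balls were black and the belief state is the Beta family
    {Beta(s*t, s*(1-t)) : t_lo <= t <= 1} (an illustrative assumption)."""
    return n * (1 - target_lo) / (target_lo - t_lo)

# Target lower probability 1 - 1e-5 after a million black balls,
# starting from the idealized lower endpoint t_lo = 0:
s = prior_strength(10**6, 1 - 1e-5)
print(round(s, 4))  # 10.0001
```

The arbitrariness of the target interval reappears as an arbitrary choice of roughly ten “virtual” prior observations.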

**A third option is to give up diachronic coherence**. For my initial opinion about the proposition that the next ball will be black to be appropriately represented by the entire $[0,1]$ interval and my opinion about it after seeing one million black balls to be appropriately represented by a proper subset of the interval $[1/2,1]$, I would have to violate diachronic coherence at some point. **That violation seems to me not too high a price to pay**. There is no Dutch book argument against the kind of diachronic incoherence I have in mind, which would involve moving to a more definite belief state over time by pruning the relevant set of probability distributions. The worst that could happen is that one would turn down a bet at time $t_1$ that one would at a later time $t_2$ regard as having been favorable.

**The question, then, is how to violate diachronic coherence.** I’m inclined to say that any “pruning” strategy is permissible but to look to objective Bayesianism and perhaps also to frequentism for strategies that have some kind of principled basis. Those approaches violate the Likelihood Principle, and thus on my view fail to respect evidential equivalence. But so what? We are already *choosing* on *pragmatic grounds* to adopt more definite beliefs than we would get by updating on our evidence in a diachronically coherent way. Why should the way in which we make this decidedly non-evidentialist move be constrained by considerations of evidential equivalence?
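One candidate pruning rule with a principled flavor (my illustration; the post does not endorse any particular rule) is a likelihood threshold: discard chance hypotheses whose likelihood falls below a fixed fraction of the maximum likelihood. A sketch over a grid of chance hypotheses:

```python
import math

def pruned_interval(k, n, alpha=0.01, grid_size=1_000_000):
    """Keep only chances theta whose likelihood theta^k * (1-theta)^(n-k)
    is at least alpha times the maximum over the grid; return the
    surviving interval.  Log space avoids underflow for large n."""
    def loglik(th):
        if (th == 0.0 and k > 0) or (th == 1.0 and k < n):
            return -math.inf
        ll = 0.0
        if k > 0:
            ll += k * math.log(th)
        if k < n:
            ll += (n - k) * math.log(1.0 - th)
        return ll

    lls = [loglik(i / grid_size) for i in range(grid_size + 1)]
    cut = max(lls) + math.log(alpha)
    kept = [i / grid_size for i, ll in enumerate(lls) if ll >= cut]
    return kept[0], kept[-1]

# A million black balls in a row: the pruned belief state is a tight
# interval just below 1 -- a proper subset of [1/2, 1].
print(pruned_interval(10**6, 10**6))
```

After a million straight black balls this keeps only chances within a hair of 1, at the cost of exactly the kind of diachronic incoherence discussed above.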

To share your thoughts about this post, comment below or send me an email.


Thanks to Eric Brown, James Joyce, John Norton, Susanna Rinard, Teddy Seidenfeld, and Andrew Wong for conversations that contributed to the development of the ideas presented here.

Image by Matt Buck is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. Its use does not indicate that its creator endorses any of the views expressed here.

lotharson says

Simply amazing Greg!

As far as I am (subjectively) concerned, this is the most insightful and profound post you have ever written (your other ones are good too, but I prefer this one).

This raises several questions.

1) ” (Objective Bayesian methods violate the Likelihood Principle in their rules for selecting priors.) ”

My main concern about objective Bayesianism is that the principle of indifference turns utter ignorance into knowledge in a way very similar to magic.

Could you please sum up the manner in which the likelihood principle is violated by OB?

You should publish a paper *specifically* dealing with this argument against OB.

2) On my blog I explained how a frequentist can consider the probabilities of unique events.

This post ONLY deals with the *ontological* nature of such probabilities and not with their practical determination.

What is your take on my own frequentist ontology?

3) Let us consider a completely indeterminate process, such as a coin whose landing frequency *keeps oscillating between 0 and 1* as the number of tosses “approaches” infinity. In other words, there exists no limit.

Do you agree with me that under such circumstances, there ALSO exists no degree of belief every rational agent should have?

How would a subjective Bayesian view a degree of belief for a purely unpredictable event?

I personally would utterly refuse to bet in such a situation.

So I’d be glad to read your answers since I do want my thoughts to evolve a bit 🙂

Friendly greetings from Europe.

Greg Gandenberger says

Thanks for this comment! I haven’t had time to respond yet, but I will.

Greg Gandenberger says

1)

Objective Bayesian methods violate the Likelihood Principle by choosing priors by a rule that is sensitive to either the parameterization chosen (e.g. the principle of indifference) or the sampling distribution of the experiment (e.g. reference Bayesianism). As a result, they can yield different conclusions depending on whether Datum A or Datum B is fed into them, for some Datum A and Datum B that the Likelihood Principle entails are evidentially equivalent with respect to the set of hypotheses in question.
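A standard numerical illustration of the sampling-distribution case (my sketch, not part of the original reply): the Jeffreys priors for binomial and negative-binomial sampling differ, so the same data, say 6 successes in 10 trials, yield different posterior means even though the two likelihood functions are proportional:

```python
from fractions import Fraction

def post_mean_binomial(k, n):
    """Posterior mean of the success chance under the Jeffreys prior for
    binomial sampling, Beta(1/2, 1/2): mean = (k + 1/2) / (n + 1)."""
    return Fraction(2 * k + 1, 2 * n + 2)

def post_mean_neg_binomial(k, n):
    """Posterior mean under the Jeffreys prior for negative-binomial
    sampling, the improper Beta(0, 1/2): mean = k / (n + 1/2)."""
    return Fraction(2 * k, 2 * n + 1)

# Same data, proportional likelihoods, different conclusions:
print(post_mean_binomial(6, 10))       # 13/22 ~ 0.591
print(post_mean_neg_binomial(6, 10))   # 4/7  ~ 0.571
```

The Likelihood Principle says these two data sets are evidentially equivalent, yet the rule-based priors deliver different posteriors.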

This issue comes up in the objective Bayesian literature from time to time, e.g. here. I think it does deserve more attention. In particular, the common claim that objective Bayesian methods violate the Likelihood Principle only slightly is in need of analysis.

2) I am sympathetic to your frequentist ontology. It seems to be similar to a half-baked idea I have had that frequentist probabilities need to be understood relative to a set of conditions that are to be held fixed and a set of conditions that are to be allowed to vary in some kind of “natural” way.

3) One could of course try to model the way the coin’s “propensity” to land heads varies over time. But perhaps it does so in a completely unsystematic way. Then any kind of inductive inference we might try to perform is bound to fail. Some problems just are not solvable.

Of course, for all we can tell from our evidence, our universe could be like that over long time scales. I don’t see any way to address this problem except by assuming it away.