Neyman and Pearson (e.g. 1933) treat the problem of choosing the best rejection region for a simple-vs.-simple hypothesis test as what computer scientists call a 0/1 knapsack problem. **Standard examples of 0/1 knapsack problems are easier to grasp than hypothesis testing problems, so thinking about Neyman-Pearson test construction by analogy with those examples helps develop intuitions. It is also illuminating to consider points of disanalogy between those scenarios and hypothesis testing scenarios, which give rise to possible objections to the Neyman-Pearson approach.**
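The analogy can be made concrete with a toy computation (the distributions and error budget below are made up for illustration; they do not come from the post). Each outcome's probability under the null plays the role of a knapsack item's weight, its probability under the alternative plays the role of its value, and the significance level is the capacity:

```python
from fractions import Fraction

# Toy discrete sample space with two simple hypotheses (numbers invented
# for illustration). In the knapsack analogy, an outcome's "weight" is its
# probability under H0 (it spends the type I error budget) and its "value"
# is its probability under H1 (it buys power).
p0 = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 8), "d": Fraction(1, 8)}
p1 = {"a": Fraction(1, 8), "b": Fraction(1, 8), "c": Fraction(1, 4), "d": Fraction(1, 2)}
alpha = Fraction(1, 4)  # type I error budget = knapsack capacity

# Neyman-Pearson ordering: add outcomes to the rejection region in
# decreasing order of likelihood ratio p1/p0, as long as the budget allows.
region, size = [], Fraction(0)
for x in sorted(p0, key=lambda x: p1[x] / p0[x], reverse=True):
    if size + p0[x] <= alpha:
        region.append(x)
        size += p0[x]

power = sum(p1[x] for x in region)
print(region, size, power)  # ['d', 'c'] 1/4 3/4
```

In this example the greedy likelihood-ratio ordering fills the budget exactly, so it coincides with the optimal 0/1 knapsack solution; in general the two can come apart when the budget cannot be spent exactly, which is one place the analogy gets interesting.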

# Abstract for a Paper on Stopping Rules

My conversations at the Munich Center for Mathematical Philosophy keep coming back to stopping rules, so I’ve decided to write a paper on the topic. Here is the general line that I plan to develop.

[Read more…]

# An Introduction to Bayesian, Likelihoodist, and Frequentist Methods (Encore)

# Introduction

My goal in this series of posts is to provide a short, self-contained introduction to likelihoodist, Bayesian, and frequentist methods that is readily available online and accessible to someone with no special training who wants to know what all the fuss is about. [Read more…]

# An Introduction to Likelihoodist, Bayesian, and Frequentist Methods (2/2)

### Welcome, Daily Nous readers!

# Introduction

My goal in this post and the previous one is to provide a short, self-contained introduction to likelihoodist, Bayesian, and frequentist methods that is readily available online and accessible to someone with no special training who wants to know what all the fuss is about.

In the previous post, I gave a motivating example that illustrates the enormous costs of the failure of philosophers, statisticians, and scientists to reach consensus on a reasonable, workable approach to statistical inference. I then used a fictitious variant on that example to illustrate how likelihoodist, Bayesian, and frequentist methods work in a simple case. **In this post, I discuss a stranger case that better illustrates how likelihoodist, Bayesian, and frequentist methods come apart.** [Read more…]

# An Introduction to Likelihoodist, Bayesian, and Frequentist Methods (1/2)

### Welcome, Daily Nous readers!

# Introduction

I have been recommending the first chapter of Elliott Sober’s Evidence and Evolution to those who ask for a good introduction to debates about statistical inference. That chapter is excellent, but it would be nice to be able to recommend something shorter that is readily available online. Here is my attempt to provide a suitable source. [Read more…]

# Why I Don’t Find Titelbaum’s Example Persuasive (2/2)

In my previous post I presented some reasons to resist a clever counterexample to the Law of Likelihood developed by Mike Titelbaum. In that post I chose to stay at the level of intuitions about the example and about what kinds of features we might want a measure of evidential favoring to have. **In this post I go deeper by examining Mike’s example in light of the purpose of the Law of Likelihood.** [Read more…]

# Why I Don’t Find Titelbaum’s Example Persuasive (1/2)

Last week’s post in which I presented a purported counterexample to the Law of Likelihood (due to Mike Titelbaum) generated a lot of interest. In this post I comment on the counterexample. Next week I plan to “go meta” by commenting on the purpose of the Law of Likelihood and of counterexamples such as Mike’s.

# Titelbaum’s Counterexample to the Law of Likelihood

A talk I recently gave prompted Mike Titelbaum to develop a purported counterexample to the Law of Likelihood. In this post I present the counterexample with minimal comments for the sake of eliciting reactions to it that are not influenced by my thoughts. I plan to follow up with my comments next week. [Read more…]

# Why It May Be Permissible to Violate the Likelihood Principle–Even if It’s True

### Where We’ve Been

I have *argued for* the **Likelihood Principle**, which says that the evidential meaning of a datum with respect to a partition depends only on the probabilities that the elements of that partition ascribe to that datum, up to a constant of proportionality. (Here)
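A standard illustration (not taken from the posts themselves, but a textbook example) makes the proportionality claim concrete: observing 9 heads in 12 tosses under a fixed-sample-size design and under a toss-until-the-3rd-tail design yields likelihood functions that differ only by a constant factor, so the Likelihood Principle deems the two data evidentially equivalent:

```python
from math import comb

# Datum: 9 heads in 12 tosses, where p is the probability of heads.
# Binomial design: n = 12 tosses fixed in advance.
def binom_lik(p):
    return comb(12, 9) * p**9 * (1 - p)**3

# Negative binomial design: toss until the 3rd tail appears
# (so the 12th toss is a tail, and the first 11 contain 2 tails).
def neg_binom_lik(p):
    return comb(11, 2) * p**9 * (1 - p)**3

# The ratio of the two likelihood functions is constant in p:
ratios = [binom_lik(p) / neg_binom_lik(p) for p in (0.1, 0.5, 0.9)]
print(ratios)  # constant: comb(12, 9) / comb(11, 2) = 4.0
```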

From the Likelihood Principle, I have *argued for* the **Law of Likelihood**, which says that the degree to which a datum favors one element of a partition over another is given by the ratio of the probabilities that those hypotheses ascribe to that datum. (Here and here)
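As a concrete instance of the Law of Likelihood (the numbers here are my own illustrative choices, not from the post): suppose the datum is 7 heads in 10 tosses, and the two hypotheses ascribe heads probabilities 0.5 and 0.7. The degree of favoring is then the ratio of the binomial probabilities each hypothesis assigns to the datum:

```python
from math import comb

# Hypothetical datum: 7 heads in 10 independent tosses.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

lik_half  = binom_pmf(7, 10, 0.5)  # probability of the datum if p = 0.5
lik_seven = binom_pmf(7, 10, 0.7)  # probability of the datum if p = 0.7

# Law of Likelihood: the datum favors p = 0.7 over p = 0.5 to the degree
ratio = lik_seven / lik_half
print(ratio)  # about 2.28, so modest favoring of p = 0.7
```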

I have *argued against* **methodological likelihoodism**, which says that characterizing data as evidence in accordance with the Law of Likelihood is an adequate self-contained methodology for science (at least as a fallback option in cases in which prior probabilities are “not available”). (Here)

### Where We’re Going: The Methodological Likelihood Principle

The next claim I want to consider is the **Methodological Likelihood Principle**, which says that an adequate methodology for science respects evidential equivalence as characterized by the Likelihood Principle. [Read more…]

# If Likelihoodism Can Provide Good Norms of Commitment, Then the Rule Discussed Here is Among Them

My argument against methodological likelihoodism has three steps:

- Argue that an adequate self-contained methodology for science provides good norms of commitment vis-à-vis hypotheses.
- Argue that if there are any good purely likelihood-based norms of commitment, then they have a particular form.
- Show that any norm of that form has undesirable consequences.

**I focus in this post on Step 2.** [Read more…]
