Neyman and Pearson (e.g., 1933) in effect treat the problem of choosing the best rejection region for a simple-vs.-simple hypothesis test as what computer scientists now call a 0/1 knapsack problem. Standard examples of 0/1 knapsack problems are easier to grasp than hypothesis testing problems, so thinking about Neyman-Pearson test construction by analogy with those examples helps develop intuitions. It is also illuminating to consider points of disanalogy between those scenarios and hypothesis testing scenarios, which give rise to possible objections to the Neyman-Pearson approach.
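To make the analogy concrete, here is a minimal sketch in Python on a made-up five-point sample space (all of the numbers are illustrative, not Neyman and Pearson's). Each sample point is an "item" whose "weight" is its probability under the null hypothesis and whose "value" is its probability under the alternative; the rejection region is the knapsack, the significance level α is the capacity, and power is the total value to be maximized.

```python
from itertools import combinations

# Toy sample space with probabilities under the null (H0) and alternative (H1).
# These numbers are purely illustrative.
points = ["x1", "x2", "x3", "x4", "x5"]
p0 = {"x1": 0.50, "x2": 0.25, "x3": 0.15, "x4": 0.07, "x5": 0.03}  # "weights"
p1 = {"x1": 0.10, "x2": 0.15, "x3": 0.25, "x4": 0.20, "x5": 0.30}  # "values"
alpha = 0.10  # knapsack capacity: maximum allowed probability of Type I error

# Brute-force 0/1 knapsack: choose the subset of sample points (the rejection
# region) whose total H0-probability is at most alpha and whose total
# H1-probability (the power) is as large as possible.
best_region, best_power = set(), 0.0
for r in range(len(points) + 1):
    for region in combinations(points, r):
        size = sum(p0[x] for x in region)   # total "weight"
        power = sum(p1[x] for x in region)  # total "value"
        if size <= alpha and power > best_power:
            best_region, best_power = set(region), power
print(best_region, best_power)

# Neyman-Pearson-style construction: add points in order of decreasing
# likelihood ratio p1(x)/p0(x), skipping any point that would push the
# size of the test above alpha.
region, size = set(), 0.0
for x in sorted(points, key=lambda x: p1[x] / p0[x], reverse=True):
    if size + p0[x] <= alpha:
        region.add(x)
        size += p0[x]
print(region, sum(p1[x] for x in region))
```

Ordering points by likelihood ratio is the analogue of the value-per-unit-weight heuristic for knapsack problems; in this toy case it recovers the brute-force optimum, though with discrete sample spaces it cannot in general achieve size exactly α unless randomized tests are allowed.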
My conversations at the Munich Center for Mathematical Philosophy keep coming back to stopping rules, so I’ve decided to write a paper on the topic. Here is the general line that I plan to develop.
Welcome, Daily Nous readers!
My goal in this post and the previous one is to provide a short, self-contained introduction to likelihoodist, Bayesian, and frequentist methods that is readily available online and accessible to someone with no special training who wants to know what all the fuss is about.
In the previous post, I gave a motivating example that illustrates the enormous costs of the failure of philosophers, statisticians, and scientists to reach consensus on a reasonable, workable approach to statistical inference. I then used a fictitious variant on that example to illustrate how likelihoodist, Bayesian, and frequentist methods work in a simple case. In this post, I discuss a stranger case that better illustrates how likelihoodist, Bayesian, and frequentist methods come apart.
I have been recommending the first chapter of Elliott Sober's Evidence and Evolution to those who ask for a good introduction to debates about statistical inference. That chapter is excellent, but it would be nice to be able to recommend something shorter that is readily available online. Here is my attempt to provide a suitable source.
In my previous post I presented some reasons to resist a clever counterexample to the Law of Likelihood developed by Mike Titelbaum. In that post I chose to stay at the level of intuitions about the example and about what kinds of features we might want a measure of evidential favoring to have. In this post I go deeper by examining Mike’s example in light of the purpose of the Law of Likelihood.
Last week’s post in which I presented a purported counterexample to the Law of Likelihood (due to Mike Titelbaum) generated a lot of interest. In this post I comment on the counterexample. Next week I plan to “go meta” by commenting on the purpose of the Law of Likelihood and of counterexamples such as Mike’s.
A talk I recently gave prompted Mike Titelbaum to develop a purported counterexample to the Law of Likelihood. In this post I present the counterexample with minimal comments for the sake of eliciting reactions to it that are not influenced by my thoughts. I plan to follow up with my comments next week.
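For reference, the standard formulation of the Law of Likelihood (due to Hacking and defended by Royall) says that evidence $E$ favors hypothesis $H_1$ over hypothesis $H_2$ if and only if

$$\Pr(E \mid H_1) > \Pr(E \mid H_2),$$

with the likelihood ratio $\Pr(E \mid H_1)/\Pr(E \mid H_2)$ measuring the degree of favoring.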
The Sense in Which Frequentist Methods Violate the Likelihood Principle
It is widely accepted that frequentist methods violate the Likelihood Principle. After all, there can be two pieces of data A and B such that the Likelihood Principle implies that A and B are evidentially equivalent with respect to the set of hypotheses H, yet frequentist methods will yield different conclusions about H depending on whether A or B is fed into them.
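As a concrete illustration (a standard textbook case, not necessarily the one this post goes on to discuss): suppose we observe 9 successes in 12 Bernoulli trials. Whether the experimenter fixed the number of trials at 12 (binomial sampling) or sampled until the 3rd failure (negative binomial sampling), the likelihood functions for the success probability θ are proportional, so the Likelihood Principle treats the two outcomes as evidentially equivalent; yet the one-sided p-values for testing θ = 1/2 differ, and at the 0.05 level one sampling plan licenses rejection while the other does not. A quick check in Python (the numbers are mine, chosen for illustration):

```python
from math import comb

n, k = 12, 9   # 12 trials, 9 successes (so 3 failures)
theta0 = 0.5   # null hypothesis value of the success probability

# Binomial sampling plan: n = 12 trials fixed in advance.
# One-sided p-value: probability of 9 or more successes under the null.
p_binom = sum(comb(n, j) for j in range(k, n + 1)) * theta0 ** n
print(f"binomial p-value:          {p_binom:.4f}")    # about 0.073

# Negative binomial sampling plan: sample until the 3rd failure, which
# happened to occur on trial 12.
# One-sided p-value: probability that the 3rd failure takes 12 or more
# trials, i.e. that at most 2 failures occur in the first 11 trials.
p_negbinom = sum(comb(11, j) for j in range(0, 3)) * theta0 ** 11
print(f"negative binomial p-value: {p_negbinom:.4f}")  # about 0.033
```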
But What About Using Frequentist Considerations to Choose Among Priors?
There is, however, a sense in which many frequentist methods do not violate the Likelihood Principle. A frequentist method is often equivalent (in a sense) to a Bayesian method with a particular prior probability distribution. From a Bayesian perspective, such frequentist methods involve updating a prior probability distribution in a way that does conform to the Likelihood Principle. They violate the Likelihood Principle only by using “implied priors” that vary with the sampling distribution of the experiment to be performed.
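To illustrate the kind of equivalence at issue, here is a minimal sketch of a textbook case (the specific numbers and the numpy/scipy usage are my own illustration, not from the post): for a normal mean with known variance, the usual 95% confidence interval coincides numerically with the 95% credible interval obtained by updating a flat (improper) prior on the mean, so from a Bayesian perspective the frequentist procedure behaves like conditioning on the data with that particular implied prior.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 2.0                                   # known standard deviation
data = rng.normal(loc=5.0, scale=sigma, size=30)
n, xbar = len(data), data.mean()
z = stats.norm.ppf(0.975)                     # about 1.96

# Frequentist 95% confidence interval for the mean (sigma known).
ci = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# Bayesian 95% credible interval under a flat (improper) prior on the mean:
# the posterior is Normal(xbar, sigma^2 / n), so the interval is the same.
posterior = stats.norm(loc=xbar, scale=sigma / np.sqrt(n))
cred = (posterior.ppf(0.025), posterior.ppf(0.975))

print(ci)
print(cred)   # numerically identical to the confidence interval
```

In this simple case the implied prior does not depend on the sampling plan, but for other frequentist procedures the prior needed to reproduce them can change with the sampling distribution of the experiment to be performed, which is the sense of violation described above.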