
Small-Scale Question Sunday for June 16, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Michael Huemer has published an article on Substack criticizing "pure empirical reasoning". His thoughts on the matter:

Say you have a hypothesis H and evidence E. Bayes’ Theorem tells us:

P(H|E) = P(H)*P(E|H) / [P(H)*P(E|H) + P(~H)*P(E|~H)]

To determine the probability of the hypothesis in the light of the evidence, you need to first know the prior probability of the hypothesis, P(H), plus the conditional probabilities, P(E|H) and P(E|~H). Note a few things about this:

This is substantive (non-analytic) information. There will in general (except in a measure-zero class of cases) be coherent probability distributions that assign any values between 0 and 1 to each of these probabilities.

This information is not observational. You cannot see a probability with your eyes.

These probabilities cannot, on pain of infinite regress, always be arrived at by empirical reasoning.

So you need substantive, non-empirical information in order to do empirical reasoning.

This argument doesn’t have any unreasonable assumptions. I’m not assuming that probability theory tells us everything about evidential support, nor that there are always perfectly precise probabilities for everything. I’m only assuming that, when a hypothesis is adequately justified by some evidence, there is an objective fact that that hypothesis isn’t improbable on that evidence.

He later goes on to criticize "subjective Bayesianism":

Subjective Bayesians think that it’s rationally permissible to start with any coherent set of initial probabilities, and then just update your beliefs by conditionalizing on whatever evidence you get. (To conditionalize, when you receive evidence E, you have to set your new P(H) to what was previously your P(H|E).) On this view, people can have very different degrees of belief, given the same evidence, and yet all be perfectly rational.

Subjective Bayesians sometimes try to make this sound better by appealing to convergence theorems. These show, roughly, that as you get more evidence, the effect of differing prior probabilities tends to wash out. I.e., with enough evidence, people with different priors will still tend to converge on the correct beliefs.

The problem is that there is no amount of evidence that, on the subjective Bayesian view, would make all rational observers converge. No matter how much evidence you have for a theory at any given time, there are still prior probabilities that would result in someone continuing to reject the theory in the light of that evidence. So subjectivists cannot account for the fact that, e.g., it would be definitely irrational, given our current evidence, for someone to believe that the Earth rests on the back of a giant turtle.

The thread's OP asks if there's a question that I'm kinda embarrassed to ask. Well, I'm completely embarrassed to say that I understand very few (if any) of the arguments posited here by Huemer regarding Bayesian probability, because I know little of it besides the very basics (make statements in terms of likelihood, not absolutes). I don't fully understand Bayes' Theorem and I'm not quite sure what math skills are required to know it. My question (not embarrassed to ask it) is: where is a good place to start learning Bayesian probability and how to use it? Apart from what's mentioned in the LW Sequences, is there a beginner's book anyone can recommend?
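Not a book recommendation, but as a taste of how little machinery is involved: Bayes' Theorem itself is just arithmetic. A minimal sketch in Python, with made-up numbers for a hypothetical medical test (the prevalence and test rates are illustrative, not from any real test):

```python
# Bayes' Theorem: P(H|E) = P(H)*P(E|H) / [P(H)*P(E|H) + P(~H)*P(E|~H)]
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after seeing evidence E."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Hypothetical numbers: a condition with 1% prevalence, a test with
# 90% sensitivity and a 5% false-positive rate.
p = posterior(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(round(p, 3))  # -> 0.154: even after a positive test, H is still unlikely
```

The punchline of examples like this is that the prior matters: a strong test result doesn't overwhelm a low base rate on its own.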

Are you familiar with the calculus concept of a limit? I will explain it, in case you aren't.

If you start with the number 1 and divide by 2, we get one-half. If we divide by 2 again, we get one-fourth. If we divide by 2 again, we get one-eighth, and so on. The following facts should become apparent:

  1. No matter how many times we divide by two, the number will always be greater than 0.
  2. No matter how small a number you give me, e.g. 0.0000001, there is some number of divisions that gets us below it.
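Both facts are easy to check directly. A quick sketch in Python, using the 0.0000001 threshold from the example:

```python
# Repeatedly halve 1.0: the value stays strictly positive (fact 1),
# but eventually drops below any threshold you name (fact 2).
x = 1.0
steps = 0
while x >= 0.0000001:
    x /= 2
    steps += 1

print(steps, x)  # 24 halvings: 1/2**24 ~ 5.96e-08, below 1e-07 but above 0
```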

On the subjective Bayesian view, collecting evidence is kind of like "dividing by two," and the resulting number is kind of like "the probability that I am wrong."

  1. No matter how much evidence I collect, there is always the possibility that I am wrong.
  2. No matter how high a confidence someone demands of me, there is some amount of evidence I could collect to justify it.
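The analogy can be made concrete with repeated Bayesian updates. A sketch, assuming (purely for illustration) that each piece of evidence is three times as likely if H is true than if it's false: the probability of being wrong shrinks toward zero without ever reaching it, just like the halving sequence.

```python
def update(prior, likelihood_ratio):
    """One Bayesian update via odds: new odds = old odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5  # start agnostic about H
for _ in range(20):
    p = update(p, 3.0)  # each observation favors H 3:1

# After 20 pieces of evidence, P(wrong) = 1 - p is tiny but still nonzero.
print(1 - p)
```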

The blogpost seems to think (1) is a weakness. The standard LW Sequences reply would be "0 and 1 Are Not Probabilities".

(Oh and to go back to calculus, we would say "the limit equals zero")

In my experience "Bayesian inferences" are just "biases and preconceptions" that the speaker wants to distinguish from those of their interlocutor.

I.e., you are biased, whereas I am just being rational.