Culture War Roundup for the week of June 24, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I actually don't hold to (the standard formulation of) the axiom of completeness. It doesn't work with infinities.

But there are reformulations for infinities that still let you use all the same theorems. You're right, though: even there, it's a modelling convenience; it's possible that your preferences are stronger than the ratio between 1 and every infinity.

But then it would seem like you could dismiss the smaller ones, and only care about the commensurable ones in the largest class? (That is, with nothing incommensurably larger than them?)

I'd also start wondering whether it's possible to take this and model it with infinities anyway, but there probably wouldn't be a unique way to do that.

But it sounds like you're more technically informed on these matters. What do you mean by a sigma-algebra with regard to deductive beliefs? It seems reasonable enough to me to assign probability to some set of incoherent beliefs. Like, it might make sense to guess how subjectively likely it is that some math problem works out one way or the other—I'm certainly entitled to be surprised by it.

Could you elaborate on determining a partition? My thought would be that it would be impossible to actually do things like that for everything in practice, and that generating precise probabilities in general is hard, but in theory, it would be correct if an agent acted that way? (See the page on Solomonoff induction)

But then it would seem like you could dismiss the smaller ones, and only care about the commensurable ones in the largest class? (That is, with nothing incommensurably larger than them?)

It's not so much a question of caring about the importance, but rather whether one is rationally obliged to have a preference over all of the options.

What do you mean by a sigma-algebra with regard to deductive beliefs?

A sigma-algebra is a collection of sets that is closed under complement, (countable) intersections, and (countable) unions. For example, if A is in the algebra and B is in the algebra, then A ∪ B is in the algebra.
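
A tiny finite example of my own, just to illustrate the closure conditions (in the finite case "countable" closure is just ordinary closure):

```python
# A sigma-algebra on the sample space {1, 2, 3}: the empty set, {1},
# its complement {2, 3}, and the whole space.
omega = frozenset({1, 2, 3})
algebra = {frozenset(), frozenset({1}), frozenset({2, 3}), omega}

# Closure checks: complements, unions, and intersections all stay inside.
assert all(omega - A in algebra for A in algebra)
assert all(A | B in algebra and A & B in algebra
           for A in algebra for B in algebra)
```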

Deductive closure is the requirement that a set of propositions contains every proposition implied by conjunctions of its members. This is also called the logical omniscience requirement of Bayesianism: it assumes you know all the logical relations and have updated accordingly.

It seems reasonable enough to me to assign probability to some set of incoherent beliefs.

Not sure what you mean here?

Like, it might make sense to guess how subjectively likely it is that some math problem works out one way or the other—I'm certainly entitled to be surprised by it.

Agreed, but then you're going beyond the Bayesian model of belief.

Could you elaborate on determining a partition? My thought would be that it would be impossible to actually do things like that for everything in practice, and that generating precise probabilities in general is hard, but in theory, it would be correct if an agent acted that way? (See the page on Solomonoff induction)

There are quite a few things going on with partitions in Bayesianism, but for example, the Law of Total Probability gives P(H) = P(H | A1) P(A1) + ... + P(H | An) P(An), where {A1, ..., An} is a partition of propositions (mutually exclusive and exhaustive). The probabilities of the elements of such a partition must add up to one.
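
Here's a toy numerical illustration (the numbers are made up, just to show the bookkeeping):

```python
# Law of Total Probability with a three-cell partition {A1, A2, A3}.
p_A = [0.5, 0.3, 0.2]          # P(A1), P(A2), P(A3): mutually exclusive, exhaustive
p_H_given_A = [0.9, 0.4, 0.1]  # P(H | A1), P(H | A2), P(H | A3)

assert abs(sum(p_A) - 1.0) < 1e-9  # partition probabilities sum to one

# P(H) = P(H | A1) P(A1) + P(H | A2) P(A2) + P(H | A3) P(A3)
p_H = sum(ph * pa for ph, pa in zip(p_H_given_A, p_A))
print(p_H)  # 0.45 + 0.12 + 0.02 = 0.59
```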

To create such partitions, Bayesian epistemologists use "catch-all" hypotheses, meaning basically "The disjunction of all the possibilities that I haven't considered." Problem: how do you determine P(H | Ac), where Ac is a catch-all hypothesis? If you can't do this, then you don't know whether your probability distribution is coherent.

Bayesian decision theorists and statisticians stare at me blankly when I bring this up, because they don't do Bayesianism the way that philosophers do it. They assume that the probability distribution is over what Savage called a "small world", with a nice simple and manageable set of events (they almost all prefer that domain rather than propositions, AFAIK) that is an idealised model of some portion of the real world. That's definitely a great way to reason if you're making some practical decision or making an inference within a simplified model of some phenomenon, but it's incompatible with the high aspirations of Bayesian epistemologists, who are interested in a rational agent's reasoning, and agents don't just reason about small worlds.

Solomonoff induction is popular among some rationalists, but it has no particular status within Bayesianism: http://philsci-archive.pitt.edu/12429/

It's also controversial within Bayesianism (and even more so in statistics/decision theory/philosophy) whether people's beliefs should be representable as precise probabilities over a sigma-algebra, but that's a huge topic beyond the scope of what I have time to discuss here.

It's not so much a question of caring about the importance, but rather whether one is rationally obliged to have a preference over all of the options.

Ah, that was off the top of my head. I actually was referring to the Archimedean property, not completeness, so I didn't respond to you properly.

Since completeness is defined, at least per wikipedia, with a ≤ instead of a <, it would seem relatively hard to deny? The others are less obviously necessary.
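
For reference, the weak-preference form I have in mind (my own paraphrase, not a quotation from the article):

```latex
% Completeness: for any two options A and B,
\forall A, B:\quad A \preceq B \ \text{ or } \ B \preceq A
% i.e. only a weak preference one way or the other is required, not a strict one.
```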

Not sure what you mean here?

What followed: there are inconsistent, deductively false beliefs that nevertheless need subjective credences.

Agreed, but then you're going beyond the Bayesian model of belief.

Fair enough—well, not necessarily in the sense that you're not performing updates, but in the sense that you have no universal probability function.

partitions

Ah, yes, that is a serious problem.

Nice paper, as well.

I've definitely had conversations with people—or, well, more like rants on my part—over these problems, though put in a far less precise manner. Yes, these are serious issues.

I guess I just don't have any better, clearer way to handle things.

When we are considering any actual possibility, we move it out of the catch-all part of the partition, where it can behave a lot better, I think. So I don't know how much this messes things up, though I imagine still enough that there might be intractable problems.

Thanks for the precision, and the reminder that all this is a just-so story covering over a sea of infinite complexity and Humean doubt.

Anywhere I should direct myself for that last paragraph?

Since completeness is defined, at least per wikipedia, with a ≤ instead of a <, it would seem relatively hard to deny? The others are less obviously necessary.

Suppose you have a revealed preference analysis of preference. Note that I do NOT NOT NOT mean a revealed preference theory of evidence about preferences, but the idea that observed choice behaviour is what preference is. In that case, Completeness holds trivially, provided the choices in question are in fact made.

However, if you understand preferences as mental attitudes, then it is perfectly possible that someone does not have an attitude such that (1) they prefer A to B, (2) they prefer B to A, or (3) they are indifferent (in the technical sense) between A and B. For example, Duncan Luce did experiments that found that, under some conditions, people's choices in apparently repeatable choice-situations fluctuated probabilistically. IIRC, they preferred A or B to a random choice between the two, indicating that this was not indifference. Now, it's possible that they were interpreting those choice-situations as non-repeatable, but it could also be that their preferences with respect to A and B don't form a strict ordering.
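
For a concrete sense of what "fluctuated probabilistically" can look like, here is the kind of model Luce is best known for (the Luce choice rule); I'm not claiming this is the exact model fitted in those experiments:

```python
# Luce choice rule: choice probabilities are proportional to option "values".
def luce_choice_probs(values):
    total = sum(values.values())
    return {option: v / total for option, v in values.items()}

# With v(A) = 3 and v(B) = 2, A is chosen 60% of the time and B 40% of the
# time -- neither a deterministic strict preference nor indifference.
print(luce_choice_probs({"A": 3, "B": 2}))  # {'A': 0.6, 'B': 0.4}
```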

What followed: there are inconsistent, deductively false beliefs that nevertheless need subjective credences.

There's no basis in decision theory or mathematics for that claim, AFAIK.

Fair enough—well, not necessarily in the sense that you're not performing updates, but in the sense that you have no universal probability function.

There is a cool literature on imprecise probabilities you might like to look at:

https://plato.stanford.edu/entries/imprecise-probabilities/

I haven't read any applications of this approach to Pascal's Wager, but since IP is arguably a more realistic model of human psychology than maximising expected utility (which assumes unique additive probabilities), someone should definitely do that.
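
To give a flavour of what that might look like, here's a toy sketch with made-up numbers, a hypothetical credal set, and deliberately finite payoffs, using Γ-maximin (just one of several decision rules in the IP literature):

```python
# Toy sketch: imprecise probabilities applied to a Pascal-style choice.
# Numbers are invented; payoffs are finite to dodge the infinite-utility issues.

# Credal set: a few representative values for P(God exists),
# standing in for a whole interval of admissible probabilities.
credal_set = [0.001, 0.05, 0.2]

# (utility if God exists, utility if not)
acts = {
    "believe":     (1000, -1),
    "not_believe": (-1000, 1),
}

def expected_utility(act, p_god):
    u_god, u_no_god = acts[act]
    return p_god * u_god + (1 - p_god) * u_no_god

# Lower and upper expected utilities over the credal set.
for act in acts:
    eus = [expected_utility(act, p) for p in credal_set]
    print(act, "lower EU:", min(eus), "upper EU:", max(eus))

# Gamma-maximin: choose the act whose worst-case expected utility is best.
best = max(acts, key=lambda a: min(expected_utility(a, p) for p in credal_set))
print("Gamma-maximin choice:", best)  # "believe", with these made-up numbers
```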

I guess I just don't have any better, clearer way to handle things.

Me too! I don't want to dox myself, but I think that Bayesian decision theory is similar to things like General Equilibrium Theory, neoclassical capital, and other concepts in economics, in that they can be useful tools to make decisions given idealised assumptions, but they shouldn't be taken too literally. Like any scientific model, their value comes not from their approximation of truth, but from the empirical and formal properties they possess, e.g. track-records and approximations of relevant features in the world (empirical) and tractability/computability properties (formal).

For more about the topics raised in the last paragraph in my comment above, the Stanford Encyclopedia page is pretty good - written by a very good young philosopher, Seamus Bradley, who has done other work on the topic worth reading. Much of the great work in this literature, e.g. by John Maynard Keynes, Teddy Seidenfeld, Peter Walley, Clark Glymour, Henry E. Kyburg, and Isaac Levi, is extremely technical, even for decision theorists. The SEP page covers most of their ideas at a more accessible level, sometimes in its appendix. The best young guy in the field is probably Richard Pettigrew, who has done some magnificent work that has still not been incorporated into the broader Bayesian consciousness, e.g. https://philarchive.org/rec/PETWIC-2 (see this video for a relatively easy introduction to that paper - https://youtube.com/watch?v=1W_wgQpZF2A ).

I find this topic very interesting, because (like you, I think?) I see Pascal's Wager (or something like it) as the best current case for religious belief. I actually like Arthur Balfour's variation of this type of reasoning, which avoids some of the features of Pascal's arguments that are awkward, e.g. regarding infinite expected payoffs:

https://archive.org/details/adefencephiloso01balfgoog/page/n12/mode/2up?view=theater

Here's John Passmore's summary of Balfour's position, from One Hundred Years of Philosophy (1968):

In his A Defence of Philosophic Doubt, being an Essay on the Foundations of Belief (1879), Balfour set out to show that the naturalism of nineteenth-century science rests on principles – the principle of the Uniformity of Nature, for example – which cannot possibly be derived demonstratively. This negative conclusion is the starting-point of The Foundations of Belief (1895). Naturalism, Balfour argues, conflicts with our moral and aesthetic sentiments, whereas theism satisfies them. If naturalism were demonstrable, he admits, it ought for all its distastefulness to be preferred to theism; but since it is not, our feelings should carry the day. He denies that there is any impropriety in thus bowing to our feelings: our beliefs, he says, are always determined to a large extent by non-rational factors.

Basically, the idea would be that, at least assuming a common human nature, it is prudentially rational to believe in God, because one is permitted to do so in the absence of a refutation; there is no refutation of God's existence; and one can expect better consequences from such belief.

Whether that reasoning is sound is one of the most important questions of philosophy, in my view, and it's brought me very deep into epistemology/decision theory. Balfour's starting place is Hume, but Subjective Bayesianism (either with precise or imprecise probabilities) seems very apt for such reasoning. Indeed, on a Subjective Bayesian view, I don't think there is any reason to think that theism is less rational than belief in even our most supported scientific theories.