This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Sorry for the slow reply, there's a bit to address.
Yeah, I like to think about this too. My impression is that there are two main ways that people come to form beliefs, in the sense of models of the world that produce predictions. Some people may lean more towards one way or the other, but most people are capable of changing their mind in either way in certain circumstances.
The first is through direct experience. For example, most people are not born knowing that if you pour a cup of liquid from a short fat glass into a tall skinny glass, the amount of liquid stays the same, even though the tall skinny glass looks like it holds more. (Strictly that's Piaget's conservation of quantity rather than object permanence, but it's the same flavor of belief.) The way people become convinced of this kind of thing is just by playing with liquids until they develop an intuitive understanding of the dynamics involved.
The second is by developing a model of other people's models, and querying that model to generate predictions as needed. This is how you end up with people who think things like "investing in real estate is the path to a prosperous life" despite not being particularly financially literate, nor having any personal experience with investing in real estate -- successful people invest in real estate and talk about their successes, so the financially illiterate person predicts good outcomes from that strategy despite being unable to say by what concrete mechanism it should be expected to succeed. As a side note, expect it to be super frustrating to argue with someone about a belief they picked up this way -- you can argue till the cows come home that some specific mechanism doesn't apply, but they weren't convinced by that mechanism; they were convinced by that one smart person they know believing something like this.
For the first type of belief, I definitely don't consider there to be any element of choice: what you expect your future observations to be just falls out of your intuited understanding of the dynamics of the system. I cannot consciously decide not to believe in object permanence. For the second type, I could see a case being made that you can decide which people's models to download into your brain, and which ones to trust. To an extent I think this is an accurate model, but if you trust the predictions generated by (your model of) someone else's model and are burned by that decision enough times, you will stop trusting the predictions of that model, same as you would if it were your own.
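Since I'm a programmer, a toy sketch might say this better than prose. Everything below is invented for illustration (the names, the track records, the halve-on-failure rule); it's just multiplicative-weights-style trust decay, not a claim about how brains actually implement it:

```python
# Toy model: borrowed models start fully trusted, and every time one of
# their predictions burns you, its trust weight shrinks multiplicatively.
ETA = 0.5  # how harshly a failed prediction is punished (arbitrary)

# 1 = the borrowed model's prediction panned out, 0 = you got burned
track_record = {
    "real-estate guru":  [1, 0, 0, 1, 0, 0, 0],
    "index-fund friend": [1, 1, 0, 1, 1, 1, 1],
}

trust = {name: 1.0 for name in track_record}
for name, outcomes in track_record.items():
    for ok in outcomes:
        if not ok:
            trust[name] *= 1 - ETA  # each burn halves the remaining trust

for name, weight in sorted(trust.items(), key=lambda kv: -kv[1]):
    print(f"{name}: trust {weight:.3f}")
# index-fund friend: trust 0.500
# real-estate guru: trust 0.031  <- enough burns and you stop querying it
```

The point of the sketch is just that the trust update is automatic given the track record; no step in it looks like "choosing" what to believe.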
There are intermediate cases, and perhaps it's better to treat this as a spectrum rather than a binary classification, and perhaps there are additional axes that would capture even more of the variation. But that's basically how I think about the acquisition of beliefs.
Incidentally I think "logical deduction generally works as a strategy for predicting stuff in the real world" tends to be a belief of the first type, generated by trying that strategy a bunch and having it work. It will only work in specific situations, and people who hold that kind of belief will have some pretty complex and nuanced ideas of when exactly that strategy will and won't work, in much the same way that embodied humans actually have some pretty complex and nuanced ideas about what exactly it means for objects to be permanent. I notice "trust logical deduction and math" tends to be a more widespread belief among mathematicians and physicists, and a much less widespread belief among biologists and doctors, so I think the usefulness of that heuristic varies a lot based on your context.
Interesting. That's not really how I would describe my internal experience. Mine is something more like: "when I take data in, I note what I'm seeing. I maybe form some weak rudimentary model of what might have caused me to observe it; if I'm in peak form I might produce more than one (i.e. two, it's never more than two in practice) competing models that could each explain the observation. If a model does badly I don't trust it much, whereas if it does well over time I adopt 'this model is true' as a belief."
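To make that concrete, here's a sketch of the same procedure in Python -- my framing, with an invented coin example, nothing rigorous. "Trust" here is just each model's running log-likelihood on the data so far:

```python
import math

observations = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # invented data stream

# Two competing models of what generated the data: each maps an
# observation to the probability the model assigned to it.
models = {
    "fair coin":   lambda x: 0.5,
    "biased coin": lambda x: 0.8 if x == 1 else 0.2,
}

trust = {name: 0.0 for name in models}
for x in observations:
    for name, prob in models.items():
        trust[name] += math.log(prob(x))  # reward models that predicted well

for name, score in sorted(trust.items(), key=lambda kv: -kv[1]):
    print(f"{name}: log-likelihood {score:.2f}")
# biased coin: log-likelihood -5.00  <- keeps doing well, gets believed
# fair coin: log-likelihood -6.93    <- does worse, loses trust
```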
But anyway, this might all be esoteric bullshit. I'm a programmer, not a philosopher. Let's move back to the object level.
Ehhh. Mostly true. True at least in cases where there's an arrow of time pointing from low-entropy systems to high-entropy ones, which describes the world we live in and so is probably good enough for the conversation at hand (see this excellent Wolfram article for nuance, if you're interested in such things -- look particularly at the section titled "Reversibility, Irreversibility and Equilibrium" for a demonstration that "the direction of causality" is the direction pointing from low entropy to high entropy, even in systems that are reversible).
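If you'd rather poke at this than read the article, here's a small Python sketch in the spirit of the rule 122R examples that article uses (my implementation and my deliberately crude entropy estimate, not Wolfram's code): a second-order cellular automaton whose dynamics are exactly reversible, yet whose coarse-grained entropy climbs anyway.

```python
import numpy as np

RULE = 122  # the elementary rule; "122R" is its reversible second-order form

def apply_rule(row, rule=RULE):
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = 4 * left + 2 * row + right  # neighborhood -> 0..7
    return ((rule >> idx) & 1).astype(np.uint8)

def step(prev, cur):
    # Second-order ("R") construction: XORing with the previous row
    # makes the update exactly reversible.
    return cur, apply_rule(cur) ^ prev

def block_entropy(row, k=8):
    # Crude coarse-graining: Shannon entropy of the k-cell block distribution.
    blocks = np.array([row[i:i + k] for i in range(0, len(row) - k, k)])
    _, counts = np.unique(blocks, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

n = 256
prev = np.zeros(n, dtype=np.uint8)
cur = np.zeros(n, dtype=np.uint8)
cur[n // 2 - 10 : n // 2 + 10] = 1  # simple low-entropy initial block

start_cur = cur.copy()
for _ in range(200):
    prev, cur = step(prev, cur)

print("entropy at t=0:  ", block_entropy(start_cur))
print("entropy at t=200:", block_entropy(cur))

# Reversibility check: swap the last two rows and run the SAME rule
# "backwards" -- the low-entropy initial state comes right back.
back_prev, back_cur = cur, prev
for _ in range(200):
    back_prev, back_cur = step(back_prev, back_cur)
print("initial state recovered:", bool((back_prev == start_cur).all()))
```

Nothing in the rule picks out a time direction; the asymmetry comes entirely from starting it in a special low-entropy state, which is the article's point.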
Seems likely to me, at least in the sense of "the entropy at the moment of the Big Bang was not literally zero, nor was it maximal, so there was likely some other comprehensible thing going on".
I think if we managed to trace things back to either zero entropy or maximal entropy we wouldn't need to keep regressing. But as far as I know we haven't actually gotten there with anything resembling a solid theory.
I'd nominate a fourth hypothesis: "the Big Bang is the point where, if you trace the chains of causality back past it, entropy starts going back up instead of down; time is defined as the direction away from the Big Bang" (see the Wolfram article above). In any case, the question "can we chase the chain of causality back further somehow -- what imbues some mathematical object with the fire of existence?" still feels salient (though maybe it's just a nonsense question?).
In any case, I am with you that none of these hypotheses make particularly useful or testable predictions.
But yeah, anyone claiming that materialism is complete in the way you are looking for is, I think, wrong. For that matter, I think anyone claiming the same of deism is wrong.
I think those people are wrong. I think free will is what making a decision feels like from the inside -- the fact that some omniscient entity could in theory predict your decision before you know it yourself doesn't mean you knew it ahead of time. If predictive ML models get really good, and EEGs get really good, and we set up an experiment where you choose when to press a button and a computer can reliably predict, 500ms before you know it, that you will press the button, I don't think that experiment would disprove free will. If you closed the loop and turned on a light whenever the machine predicted a press, a person could just be contrary: don't press when the light is on, press when it's off (possible because the human reaction time of ~200ms is less than the 500ms head start we're holding the machine to). I think that's a pretty reasonable operationalization of the "I could have chosen otherwise" observation that underlies our conviction that we have free will. IIRC this is a fairly standard position called "compatibilism", though I don't think I've ever read any of the officially endorsed literature.
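The contrarian move is easy enough to operationalize in code. A toy simulation (the 500ms/200ms numbers are the ones from the thought experiment above; the "predictor" is a stand-in that can be as good as you like without it mattering):

```python
import random

PREDICTOR_LEAD_MS = 500  # how far in advance the machine must commit
REACTION_TIME_MS = 200   # how fast the human can react to the light

def contrarian(light_on: bool) -> bool:
    # The human's policy: press the button iff the light (i.e. the
    # machine's revealed prediction) says they won't.
    return not light_on

def run_trial(predictor) -> bool:
    prediction = predictor()  # the machine commits at t - 500ms
    light_on = prediction     # closing the loop reveals the prediction
    # 200ms < 500ms: the human has time to act on the light before the
    # predicted moment arrives, so their choice can depend on it.
    assert REACTION_TIME_MS < PREDICTOR_LEAD_MS
    pressed = contrarian(light_on)
    return prediction == pressed  # was the machine right?

hits = sum(run_trial(lambda: random.random() < 0.5) for _ in range(10_000))
print(f"predictor accuracy against a contrarian: {hits / 10_000:.1%}")
# -> 0.0%. Any prediction the machine commits to and reveals gets
#    negated inside the timing window, however good its model is.
```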
That said, in my personal experience "internally predict that this outcome will be the one I observe" does not feel like a "choice" in the way that "press the button" vs "don't press the button" feels like a choice. And it's that observation that I keep coming back to.
This might just be a difference in vocabulary -- what you're calling "axioms" I'm calling "models" or "hypotheses", because "axiom" implies to me that it's the sort of thing where if you get conflicting evidence you have to throw away the evidence, rather than having the option of throwing away the "axiom". Maybe you mean something different by "choice" than I do as well.
If we're going by "stated beliefs" rather than "anticipatory beliefs" I just flatly agree with this.
That pattern of misbehavior happened before the Enlightenment too, though. And on balance I think the Enlightenment in general, and the scientific way of thinking in particular, left us with a world I'd much rather live in than the pre-Enlightenment one. I will end with this graph of life expectancy at birth over time.