
Culture War Roundup for the week of December 11, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?

Edit: also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology. Is this a valid translation, or is there some situation where someone might believe a true theory that would nevertheless lead them to make less accurate predictions about their future observations?

I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?

I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice. To support that thesis, he's claiming that the math determines that one of those is less complex than the other, and therefore the math determines that the less complex one is more likely, and therefore he did not choose to adopt it, but rather was compelled to adopt it by deterministic rules. If in fact he's mistaken about the rules, then they can't be the source of his certainty, which means it has to come from somewhere else. I think it can be demonstrated that it's derived from an axiom, not a conclusion forced by evidence.

also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology.

Close enough, I think? The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction, and that one can directly observe the holes in this fiction by closely examining how one reasons oneself. You drew a distinction between "beliefs as expected consequences" and "beliefs as models determining action". I would argue that our expectations of consequences are quite malleable, and that the beliefs we choose decisively shape both the experiences we have and how we experience them.

[EDIT] - Sorry if these responses seem a bit perfunctory. I always feel a bit weird about pulling people into the middle of one of these back-and-forths, and it feels discourteous to immediately unload on them, so I try to keep answers short to give them an easy out.

I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term. I'm not aware of that term having any particular meaning in any philosophical tradition I know about, but I also don't know much about philosophy.

He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you? Is this another one of those less-than-universal human experiences similar to how some people just don't have mental imagery?

The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction

I don't think I would classify probabilistic approaches like that as "deterministic models of reason".

But yeah I'm starting to lean towards "there's literally some bit of mental machinery for intentionally believing something that some people have".

The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term.

My opposite above pointed out that some people have beliefs induced by severe mental illness, and that these beliefs are not chosen. It's a fair point, and those certainly aren't the type of belief I'm talking about. Likewise, 1+1=2 or a belief in gravity are self-reinforcing to a degree that shifting them is probably not practical, and may not be possible at all. Most beliefs are not caused by mental illness, though, and are not as simple as 1+1=2. We have to reason about them to arrive at an answer, so "reasoned beliefs" seems like a more precise term for them.

That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you?

In terms of 1+1=2 or gravity, no. I think this might be because they're too self-reinforcing, or because there's no incentive to doubt them, or both, but they seem pretty stable.

I don't think I would classify probabilistic approaches like that as "deterministic models of reason".

People talk about reasoning as though it's a deterministic process. They say that evidence has weight, that evidence can force conclusions. They often talk about how their beliefs aren't chosen, they just followed where the evidence led. They expect evidence to work on other people deterministically as well: when they present what they think is a weighty piece of evidence, and the other person weighs it lightly, they often assume the other person is acting in bad faith. People often expect a well-crafted argument to force someone on the other side to agree with them.

I used to believe all these things. I saw logic and argumentation as something approximating math, as 1+1=2. I thought if I could craft a good enough argument, summon good enough evidence, people on the other side would be forced to agree with me. And likewise, I thought I believed things because the evidence had broken that way.

Having spent a couple decades debating with people, I think that model is fatally flawed, and I think believing it makes people less rational, not more. Worse, I think it interferes with people's ability to communicate effectively with each other, especially across a large values divide. Further, I think it's pretty busted even from its own frame of reference; while evidence cannot compel agreement, it can encourage it, and there is a lot of very strong, immediately available evidence that people do not actually reason the way the common narrative says they should.

I think that's a very pragmatic and reasonable position, at least in the abstract. You're in great intellectual company, holding that set of beliefs. Just look at all of the sayings that agree!

  • You can't reason someone out of something they didn't reason themselves into
  • It is difficult to get a man to understand something, when his salary depends on his not understanding it
  • We don't see things as they are, we see them as we are
  • It's easier to fool people than to convince them that they have been fooled

And yet! Some people do change their mind in response to evidence. It's not everyone, it might not even be most people, but it is a thing that happens. Clearly something is going on there.

We are in the culture war thread, so let's wage some culture war. Very early in this thread, you made the argument

What does replacing the Big Bang with God lose out on? Both of them share the attribute of serving as a termination point for materialistic explanations. Anything posited past that point is unfalsifiable by definition, unless something pretty significant changes in terms of our understanding of physics.

What does replacing the Big Bang with God lose out on? I think the answer is "the entire idea that you can have a comprehensible, gears-level model of how the universe works". A "gears-level" model should at least satisfy something like the following:

  1. If the model were falsified, there should be specific changes to what future experiences you anticipate (or at the very least, you should lose confidence in some specific predictions you had before)
  2. Take the components of your model: if you make some large arbitrary change to one of those parts, the model should now make completely different (and probably wrong, and maybe inconsistent) predictions.
  3. If you forgot a piece of your model, could you rederive it based on the other pieces of the model?

So I think the standard model of physics mostly satisfies the above. Working through:

  1. If general relativity were falsified, we'd expect that e.g. the predictions it makes about the precession of Mercury would be inaccurate enough that we would notice.
  2. Take the cosmological constant Λ in the Einstein field equations (written out just below this list), which represents the energy density of the vacuum and means that, on large enough scales, there is a repulsive force that overpowers the attractive force of gravity. If we were to flip its sign, we would expect the universe's expansion to be decelerating rather than accelerating (affecting e.g. how redshifted/blueshifted distant standard candles were).
  3. If you forget one physics equation, but remember all the others, it's pretty easy to rederive the missing one. Source: I have done that on exams when I forgot an equation.
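
For concreteness, here are those field equations with the Λ term written out (the standard form; R_μν is the Ricci tensor, R the Ricci scalar, g_μν the metric, T_μν the stress-energy tensor):

```latex
% Einstein field equations with cosmological constant \Lambda
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu}
    = \frac{8 \pi G}{c^{4}} \, T_{\mu\nu}
```

A positive Λ acts as a repulsive vacuum term at large scales; flipping its sign makes that term attractive, which is what changes the expansion prediction in point 2.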

Side note: the Big Bang does not really occupy a God-shaped space in the materialist ontology. I can see where there would be a temptation to view it that way - the Big Bang was the earliest observable event in our universe, and therefore can be viewed as the cause of everything else, just like God - but the Big Bang is a prediction (retrodiction?) that is generated by using the standard model to make sense of our observations (e.g. the redshifting of standard candles, the cosmic microwave background). The question isn't "what if we replace the Big Bang with God", but rather "what if we replace the entire materialist edifice with God".

In any case, let's apply the above tests to the "God" hypothesis.

  1. What would it even mean for the hypothesis "we exist because an omnipotent, omniscient, omnibenevolent God willed it" to be falsified? What differences would you expect to observe (even in principle)?
  2. Let's say we flip around the "omniscient" part of the above - God is now omnipotent and omnibenevolent. What changes?
  3. Oops, you forgot something about God. Can you rederive it based on what you already know?

My point here isn't really "religion bad" so much as "you genuinely do lose something valuable if you try to use God as an explanation".

And yet! Some people do change their mind in response to evidence. It's not everyone, it might not even be most people, but it is a thing that happens. Clearly something is going on there.

Exactly. My goal is to investigate how exactly that happens. How we reason, how evidence works on us, how we draw conclusions and form beliefs.

What does replacing the Big Bang with God lose out on? I think the answer is "the entire idea that you can have a comprehensible, gears-level model of how the universe works".

...Well, crap. Poor articulation on my part spoils everything. Well, let's try to fix this.

Side note: the Big Bang does not really occupy a God-shaped space in the materialist ontology.

I agree, and with all the points you make above [edit: and below!] this as well. The Big Bang is observable, falsifiable (and has been confirmed a lot of different ways), fits neatly into the standard model, allows people to make predictions about other things, and so on. It's solid, reliable knowledge. I see no reason to question it. I even agree that using God as an explanation is a bad idea.

The reference above, as you might see in some of the rest of the exchanges, is supposed to be to the cause of the Big Bang, not the Big Bang itself. The Big Bang is observable. The cause, as I understand it, is not.

Before we get into the following, I want to reiterate that this entire conversation about the origins of the universe is not actually about the origins of the universe. It is about how we form beliefs. Specific models of the origins of the universe are beliefs that people here reliably hold, so they're useful for examining how people came to hold them: specifically, whether they are forced by the evidence to hold those beliefs, or whether they have consciously chosen to hold them by adopting specific axioms, not themselves dependent on evidence.

So with that disclaimer, let's begin.

One of the bedrock parts of Materialism is that effects have causes. Therefore, under Materialist assumptions, the Big Bang has a cause. We have no way of observing that cause, nor of testing theories about it. If we did, we'd need a cause for that cause, and so on, in a potentially-infinite regress. One way to solve this would be a model of physics that causes the universe to loop infinitely, but we haven't managed to find that within the data we can access. We have a hard wall, and more or less a certainty that there's something unobservable on the other side of it.

So, one might nominate three competing models:

  • The cause is a seamless physics loop, part of which is hidden behind the back wall.

  • The universe is actually a simulation, and the real universe it's being simulated in is behind the back wall.

  • One or more of the deists are right, and it's some creator divinity behind the back wall.

My claim is that we cannot analyze the relative probabilities of these three options in any meaningful sense, because we cannot observe or rigorously define them in any meaningful sense. To the extent that any theory we might have is both largely undefined and entirely devoid of supporting evidence, we cannot draw evidence-based conclusions from it. Because of this, none of these three explanations are meaningfully more or less "materialistic" than the others, in the sense people commonly use the term. Further, none of these can be said to be a "simpler" explanation, in an information theory sense. You can't compare their Kolmogorov complexity, or Minimum Message Length, or employ any other test to determine which of them is more likely than the other, any more than you can calculate out a high-def audio file of a Beatles album from the text string "Sergeant Pepper's Lonely Hearts Club Band". This fact seems both obvious and quite inescapable to me, and yet I've argued the point at length and my opposite remains certain that I'm wrong.
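
To illustrate that narrower point with a toy (my own, and only a loose analogy, since true Kolmogorov complexity is uncomputable anyway): any computable stand-in for description length, such as compressed size, only applies to a hypothesis that has actually been written out in full. Compressing a label measures the length of the name, not of the thing named:

```python
import zlib

def description_length_bytes(spec: str) -> int:
    """Crude proxy for description length: the compressed size of a
    specification. Only meaningful if `spec` actually contains the
    whole hypothesis, rather than a name pointing at it."""
    return len(zlib.compress(spec.encode("utf-8")))

# A fully specified toy hypothesis can be measured (however crudely):
print(description_length_bytes("x(t+1) = 3.9 * x(t) * (1 - x(t)); x(0) = 0.5"))

# These numbers measure the length of the *names*, and say nothing
# about whatever is behind the back wall, just as compressing the
# string "Sergeant Pepper's Lonely Hearts Club Band" tells you
# nothing about the audio it names.
print(description_length_bytes("a seamless physics loop"))
print(description_length_bytes("a simulation in a base universe"))
print(description_length_bytes("a creator divinity"))
```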

Likewise, the claim I've run across that Simulationism is a Materialist theory because it assumes the base universe is Materialist is false for the same reason: once you've appealed to the entirely unobservable and unfalsifiable, you are outside the bounds of Materialism. If we are in a simulation, we have no grounds to presume anything about the base reality at all, because all our data is from inside a system we know to be artificial. Even a rigorous chain of entirely material causes and effects is not Materialist if it is entirely unobservable and unfalsifiable.

These two claims are the core of the discussion above. What follows is why I find them interesting.

If the above two claims are correct, then it seems to me that a critique of Materialism as it is commonly understood and practiced is necessary.

In the first place, we know for a fact that Materialism is incomplete. We know that there is a Back Wall, and everything we have learned about physics says we can't look behind it. Despite this, many Materialists make affirmative claims about what is behind it, and attempt to defend those claims as Materialistic in nature, the same as their claims about the observable universe. If Materialism is valuable because it confines itself to the observable and falsifiable, it has to actually confine itself to the observable and falsifiable. Losing track of this principle seems to me to be a pretty serious problem, especially because history shows me that this sort of losing track is something of a habit for Materialist groups and ideologies.

In the second place, many proponents of Materialism reject large amounts of highly significant evidence that we do have access to. It is common here to encounter people who claim the human mind is something akin to deterministic clockwork, and therefore free will can't exist. They claim that this position is necessitated by their commitment to Materialism. But we can observe our own Free Will directly, and our observations are pretty nearly as unambiguous as "1+1=2" and "gravity" and "fire burns". The evidence for human free will appears to me to be overwhelmingly strong, and if it must be rejected because it contradicts Materialism, that means that it contradicts Materialism. Worse, multiple previous generations of Materialists claimed that the determinism of the mind could be demonstrated, attempted to do so, and uniformly failed. Current generations have retreated to a "determinism of the gaps", where they admit that determinism cannot be demonstrated, that it makes zero testable predictions, and that the only sensible option is to act as though free will exists, but they nonetheless insist that it doesn't actually exist because conceding otherwise would break Materialism.

So by all the rules of Materialism, we are sure that we have at least one very large hole in our understanding of the chain of cause and effect. We have strong evidence that free will exists, to the point that even those insisting it doesn't exist are forced by practicality to endorse acting as though it did. And the kicker is that the people doing this insist that none of this is a choice, but that they're simply compelled by the evidence.

Allow me to present a competing model.

We reason based on data.

When we take data in, we can accept it uncritically, and promptly form a belief. This is a choice.

Alternatively, we can interrogate the data, check it for validity, and search for connections and correlations between it and other datapoints. There are an infinite number of datapoints. There are an infinite number of false datapoints. There are an infinite number of valid correlations and connections between both the true and false datapoints. Further, there are an infinite number of methods by which to weight a given piece of evidence relative to other pieces. Because of these facts, it is impossible to ever conclude the interrogation in any objective sense; we follow the chain of evidence as far as we want, down the branches we want, measure it according to the weights and standards we want, and then, at some point, we make an entirely subjective decision to stop and to form a belief off the mass of evidence we've mapped. Every step of this process is a choice. (And as an aside, it's worth pointing out another thing we can conclude here: all reasoning is motivated reasoning.)

Finally, we can adopt an axiom. Axioms are not evidence, and they are not supported by evidence; rather, evidence either fits into them or it doesn't. We use axioms to group and collate evidence. Axioms are beliefs, and they cannot be forced, only chosen, though evidence we've accepted as valid that doesn't fit into them must be discarded or otherwise handled in some other way. This, again, is a choice.

It seems to me that all beliefs we acquire through reason are acquired in one of these three ways. Therefore, all our reasoned beliefs are beliefs we've chosen.

Under this model, the above example of Materialist beliefs is no longer mysterious: The specific variety of Materialism described above arises from an axiom, chosen because people prefer the set of data that fit within it to the set of data that do not fit within it. Free Will is part of the data that doesn't fit, and so it is discarded, not by contrary evidence, but by an appeal to the axiom.

It seems to me that such axiomatic thinking is not only fair, but necessary. I can see no other way for human reason to operate, and we need reason to function. The problem, as I see it, is that people do not seem to understand the nature of the choices they are making, which gives rise to a number of pernicious outcomes.

Primarily, the belief that one's other beliefs are not chosen but forced seems to make them more susceptible to accepting other beliefs uncritically, resulting in our history of "scientific" movements and ideologies that were not in any meaningful sense scientific, but which were very good at assembling huge piles of human skulls. Other implications branch out into politics, the nature of liberty and democracy, the proper understanding of values, how we should approach conflict, and so on, but these are beyond the scope of this margin. I've just hit 10k characters and have already had to rewrite half this post once, so I'll leave it here.

In conclusion, I'm pretty sure this is all the Enlightenment's fault.

Sorry for the slow reply, there's a bit to address.

Exactly. My goal is to investigate how exactly that happens. How we reason, how evidence works on us, how we draw conclusions and form beliefs.

Yeah, I like to think about this too. My impression is that there are two main ways that people come to form beliefs, in the sense of models of the world that produce predictions. Some people may lean more towards one way or the other, but most people are capable of changing their mind in either way in certain circumstances.

The first is through direct experience. For example, most people are not born knowing that if you take a cup of liquid in a short fat glass and pour it into a tall skinny glass, the amount of liquid remains the same despite the tall skinny glass looking like it has more liquid. The way people become convinced of this kind of conservation is just by playing with liquids until they develop an intuitive understanding of the dynamics involved.

The second is by developing a model of other people's models, and querying that model to generate predictions as needed. This is how you end up with people who think things like "investing in real estate is the path to a prosperous life" despite not being particularly financially literate, nor having any personal experience with investing in real estate -- the successful people invest in real estate and talk about their successes, and so the financially illiterate person will predict good outcomes of pursuing that strategy despite not being able to give any specifics in terms of by what concrete mechanism that strategy should be expected to be successful. As a side note, expect it to be super frustrating to argue with someone about a belief they have picked up in this way -- you can argue till the cows come home about how some specific mechanism doesn't apply, but they weren't convinced by that mechanism, they were convinced by that one smart person they know believing something like this.

For the first type of belief, I definitely don't consider there to be any element of choice in what you expect your future observations to be based on your intuited understanding of the dynamics of the system. I cannot consciously decide not to believe in object permanence. For the second type of belief, I could see a case being made that you can decide which people's models to download into your brain, and which ones to trust. To an extent I think this is an accurate model, but I think if you trust the predictions generated by (your model of) someone else's model and are burned by that decision enough times, you will stop trusting the predictions of that model, same as you would if it was your own model.

There are intermediate cases, and perhaps it's better to treat this as a spectrum rather than a binary classification, and perhaps there are additional axes that would capture even more of the variation. But that's basically how I think about the acquisition of beliefs.

Incidentally I think "logical deduction generally works as a strategy for predicting stuff in the real world" tends to be a belief of the first type, generated by trying that strategy a bunch and having it work. It will only work in specific situations, and people who hold that kind of belief will have some pretty complex and nuanced ideas of when exactly that strategy will and won't work, in much the same way that embodied humans actually have some pretty complex and nuanced ideas about what exactly it means for objects to be permanent. I notice "trust logical deduction and math" tends to be a more widespread belief among mathematicians and physicists, and a much less widespread belief among biologists and doctors, so I think the usefulness of that heuristic varies a lot based on your context.

We reason based on data.

When we take data in, we can accept it uncritically, and promptly form a belief. This is a choice.

Interesting. This is not really how I would describe my internal experience. I would describe my experience as something more like "when I take data in, I note the data that I am seeing. I maybe form some weak rudimentary model of what might have caused me to observe the thing I saw; if I'm in peak form I might produce more than one (i.e. two, it's never more than two in practice) competing models that might each explain that observation. If my model does badly, I don't trust it very much, whereas if it does well over time I adopt the idea that the model is true as a belief".
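
If I had to formalize that loop (and this is just my own gloss, a toy and nothing more), it would be something like Bayesian bookkeeping over competing models: multiply each model's trust by how well it predicted the new data, renormalize, repeat:

```python
import random

def update_trust(trust, likelihoods):
    """One round of 'did the model do well?': multiply each model's
    current trust by how likely it said the new observation was,
    then renormalize. Just Bayes' rule with toy numbers."""
    posterior = {m: trust[m] * likelihoods[m] for m in trust}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

# Two hypothetical competing models of a coin I keep observing.
trust = {"fair coin": 0.5, "heads-biased coin": 0.5}

random.seed(0)
for _ in range(20):
    flip = "H" if random.random() < 0.8 else "T"  # the coin is, in fact, biased
    likelihoods = {
        "fair coin": 0.5,  # says 50% heads no matter what
        "heads-biased coin": 0.8 if flip == "H" else 0.2,
    }
    trust = update_trust(trust, likelihoods)

print(trust)  # trust has migrated toward the model that kept predicting well
```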

But anyway, this might all be esoteric bullshit. I'm a programmer, not a philosopher. Let's move back to the object level.

One of the bedrock parts of Materialism is that effects have causes.

Ehhh. Mostly true, at least. True in cases where there's an arrow of time that points from low-entropy systems to high-entropy systems, at least, which describes the world we live in and as such is probably good enough for the conversation at hand (see this excellent Wolfram article for nuance, though, if you're interested in such things -- look particularly at the section titled "Reversibility, Irreversibility and Equilibrium" for a demonstration that "the direction of causality" is "the direction pointing from low entropy to high entropy, even in systems that are reversible").
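
A toy illustration of that direction-of-causality claim (my own sketch, not one of the article's actual examples): take dynamics that are deterministic and fully reversible, start them in a low-entropy state, and watch a coarse-grained entropy climb anyway:

```python
import math
import random

def coarse_entropy(xs, bins=10):
    """Shannon entropy of bin occupancies: a coarse-grained entropy."""
    counts = [0] * bins
    for x in xs:
        counts[min(int(x * bins), bins - 1)] += 1
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def step(xs, vs, dt=0.01):
    """Free streaming with reflecting walls on [0, 1): deterministic,
    and reversible (negate all velocities to run history backward)."""
    new_xs, new_vs = [], []
    for x, v in zip(xs, vs):
        x += v * dt
        while not (0.0 <= x < 1.0):  # bounce off the walls
            if x < 0.0:
                x, v = -x, -v
            else:
                x, v = 2.0 - x - 1e-12, -v
        new_xs.append(x)
        new_vs.append(v)
    return new_xs, new_vs

random.seed(1)
# Low-entropy start: every particle crammed into the left tenth of the box.
xs = [random.uniform(0.0, 0.1) for _ in range(2000)]
vs = [random.uniform(-1.0, 1.0) for _ in range(2000)]

for t in range(201):
    if t % 50 == 0:
        print(f"t={t:3d}  coarse entropy={coarse_entropy(xs):.3f}")
    xs, vs = step(xs, vs)
# The entropy climbs toward its maximum (log 10 here). Negate every
# velocity at any point and it marches back down to the clustered
# state, then rises again: "forward in time" is just the direction
# in which a low-entropy start spreads out.
```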

Therefore, under Materialist assumptions, the Big Bang has a cause.

Seems likely to me, at least in the sense of "the entropy at the moment of the Big Bang was not literally zero, nor was it maximal, so there was likely some other comprehensible thing going on".

We have no way of observing that cause, nor of testing theories about it. If we did, we'd need a cause for that cause, and so on, in a potentially-infinite regress

I think if we managed to get back to either zero entropy or infinite entropy we wouldn't need to keep regressing. But as far as I know we haven't actually gotten there with anything resembling a solid theory.

So, one might nominate three competing models:

  • The cause is a seamless physics loop, part of which is hidden behind the back wall.

  • The universe is actually a simulation, and the real universe it's being simulated in is behind the back wall.

  • One or more of the deists are right, and it's some creator divinity behind the back wall.

I'd nominate a fourth hypothesis: "the Big Bang is the point where, if you trace the chains of causality back past it, entropy starts going back up instead of down; time is defined as the direction away from the Big Bang" (see the above Wolfram article). In any case, the question "but can we chase the chain of causality back further somehow, what imbues some mathematical object with the fire of existence?" still feels salient, at least (though maybe it's just a nonsense question?)

In any case, I am with you that none of these hypotheses make particularly useful or testable predictions.

But yeah, anyone claiming that materialism is complete in the way you are looking for is, I think, wrong. For that matter, I think anyone claiming the same of deism is wrong.

It is common here to encounter people who claim the human mind is something akin to deterministic clockwork, and therefore free will can't exist

I think those people are wrong. I think free will is what making a decision feels like from the inside -- just because some theoretical omniscient entity could in theory predict what your decision will be before you know what your decision is doesn't mean you know what that decision will be ahead of time. If predictive ML models get really good, and also EEGs get really good, and we set up an experiment wherein you choose when to press a button, and a computer can reliably predict that you will press the button 500ms before you know it yourself, I don't think that experiment would disprove free will. If you were to close the loop and light up a light whenever the machine predicts the button is about to be pressed, a person could just be contrary: not press the button when the light turns on, and press it when the light is off (because the human reaction time of 200ms is less than the 500ms standard we're holding the machine to). I think that's a pretty reasonable operationalization of the "I could choose otherwise" observation that underlies our conviction that we have free will. IIRC this is a fairly standard position called "compatibilism", though I don't think I've ever read any of the officially endorsed literature.
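
To make the contrarian setup concrete, here's a toy simulation (entirely hypothetical; the predictor below just stands in for the imagined EEG+ML rig): once the prediction is announced via the light before the subject acts, the subject can always defy it, and closed-loop accuracy collapses no matter how good the predictor was open-loop:

```python
import random

def predictor(history):
    """Hypothetical stand-in for the EEG+ML rig: predicts 'press'
    if the subject pressed on most of the last ten trials."""
    recent = history[-10:]
    if not recent:
        return random.choice([True, False])
    return recent.count(True) >= len(recent) / 2

def contrary_subject(light_on):
    """The subject sees the announced prediction (the light) before
    acting and simply does the opposite; a 200ms reaction time
    comfortably beats the machine's 500ms head start."""
    return not light_on

random.seed(2)
history, correct = [], 0
for _ in range(1000):
    light_on = predictor(history)         # prediction announced early...
    pressed = contrary_subject(light_on)  # ...so the subject can defy it
    correct += light_on == pressed
    history.append(pressed)

print(f"closed-loop accuracy: {correct / 1000:.1%}")  # 0.0%
```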

That said, in my personal experience "internally predict that this outcome will be the one I observe" does not feel like a "choice" in the way that "press the button" vs "don't press the button" feels like a choice. And it's that observation that I keep coming back to.

Finally, we can adopt an axiom. Axioms are not evidence, and they are not supported by evidence; rather, evidence either fits into them or it doesn't. We use axioms to group and collate evidence. Axioms are beliefs, and they cannot be forced, only chosen, though evidence we've accepted as valid that doesn't fit into them must be discarded or otherwise handled in some other way. This, again, is a choice.

This might just be a difference in vocabulary -- what you're calling "axioms" I'm calling "models" or "hypotheses", because "axiom" implies to me that it's the sort of thing where if you get conflicting evidence you have to throw away the evidence, rather than having the option of throwing away the "axiom". Maybe you mean something different by "choice" than I do as well.

Primarily, the belief that one's other beliefs are not chosen but forced seems to make them more susceptible to accepting other beliefs uncritically, resulting in our history of "scientific" movements and ideologies that were not in any meaningful sense scientific, but which were very good at assembling huge piles of human skulls. Other implications branch out into politics, the nature of liberty and democracy, the proper understanding of values, how we should approach conflict, and so on, but these are beyond the scope of this margin. I've just hit 10k characters and have already had to rewrite half this post once, so I'll leave it here.

If we're going by "stated beliefs" rather than "anticipatory beliefs" I just flatly agree with this.

In conclusion, I'm pretty sure this is all the Enlightenment's fault.

That pattern of misbehavior happened before the Enlightenment too, though. And, on balance, I think the Enlightenment in general, and the scientific way of thinking in particular, left us with a world I'd much rather live in than the pre-Enlightenment world. I will end with this graph of life expectancy at birth over time.