This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Step 1 seems very shaky to me, as it assumes the reward structure of real, Earthly theologies. These gods are likely to involve something like "infinite reward for belief; infinite punishment for disbelief."
If we assume God operates on the opposite payout, then Pascal's Wager clearly implies we need to be atheists!
Then your problem isn't with assuming that the things everything should be oriented around are infinites; in what you just said, you seem to concede that. It's with step 5, maybe, as you think we can't know anything about which payout structure holds.
Alright, so now, what are you going to do?
It seems fairly unlikely to me that those people claiming divine revelation and eternal rewards would do precisely nothing to affect the probabilities involved, and, if you have no competition in mind that gives you a more likely source of infinite gain/loss, then you should go wholeheartedly after that small chance.
That is, God could have the opposite payout, but revelation, in my opinion, makes it slightly more likely that he has the policies conveyed than that he has the opposite, and any slight edge in likelihood will dominate over the rest of your options. It would be weird if the two cancelled out exactly to zero, so if you really think the other way is more compelling, then you should act fanatically in that direction. If you're really not sure, well, this is literally the most important thing, so you should think extremely hard, like, lifetime of effort hard, in order to discern any minute difference in probability, so that you can figure out what to orient yourself around. Under no circumstances should you be ignoring all this.
I originally engaged because your step 1 name drops Pascal's Wager. Pascal's Wager assumes the reward structure (God wants to be believed).
It seems the phrasing of your step 1 should be more like "We should avoid infinite punishments and seek out infinite rewards." Then, you introduce the reward structure all the way down in step 5 or 6, where it is awarded the position of Null Hypothesis on account of the scriptures.
This argument seems to me like a rhetorical device, and not reasoning. Nobody decides to think about infinite rewards and punishments, and then stumbles upon sacred texts. People read the sacred texts and then start thinking about the expected utility of infinite rewards and punishments. Someone doing reasoning would notice if the texts are just an incentive structure, and if so, discard the whole infinite reward business.
I guess this makes me not on board with 1, as this is clearly a rigged game with a pre-written Bottom Line.
You're right, I only actually bring up God later. The whole argument here is a more fleshed out version of Pascal's wager that doesn't assume Christianity is the only possible such wager. I didn't mean to include the whole thing in step one.
I guess I don't quite follow why you're rejecting this. You're saying that this isn't usually the way that people approach things. Sure. Does that mean it's wrong?
I don't understand what you're saying in this sentence: "Someone doing reasoning would notice if the texts are just an incentive structure, and if so, discard the whole infinite reward business." Could you elaborate?
I don't think that's a good reason to reject step 1. You seem to be saying that these are specious arguments trying to trick you into being religious, and therefore can be dismissed. But that's not actually any reason to think that 1 (we'll follow your phrasing of it) is wrong.
I find 1 extremely compelling, and it should be true just as a matter of general principles, before we consider any implications: it's worth pursuing better things and avoiding bad things in general; this is just more of that.
I'm glad I didn't misread your points; indeed, I felt pretty good about my comprehension once I saw another of your replies. (An earlier draft of my post included language like "It seems I am stuck believing in infinite rewards and punishments," in reference to step 5 invoking scripture while step 1 merely invokes infinite reward and punishment. It seems the trap I fell into was intended!)
The impression I get from Pascal's Wager: an a priori argument for God, for those who think it distasteful to apply that "empiricism" business to the beautiful question of theism. When deployed in that manner, it is open to the non-empirical attack of "the Atheist's God." The Theist's retort, "that seems unlikely!", amounts to cherry-picking evidence.
Your more fleshed-out version of Pascal's Wager appears to be in the business of evaluating evidence. Of course, one would need evidence in order to even consider the hypothesis about infinite rewards and punishments, given that, empirically, there doesn't seem to be an infinite amount of anything around us! The police do not open a phonebook and randomly pick a suspect to investigate when they hear of a new crime. The laws of probability and what we might call "reasonable thought" obligate them to possess evidence before considering any suspect in the first place. It would be even more disturbing to learn the accused is a rival of the sheriff!
Your focus on infinite rewards and punishments is not separate from sacred texts. The reason anyone discusses Pascal's Wager and infinite rewards and punishments is because of the sacred texts. So this business of "deciding what the infinite rewards and punishments are" is of course a strategic choice of starting point. It seems to me we should start with the evidence in front of us: the sacred texts. Maybe I chose that strategically? I don't have perfect access to my mind's internals. The sacred texts seem to me quite easily explainable as a lie to steer people's behavior by giving them incentives. (That's what I meant by "incentive structure.")
Thanks, that clarifies.
Ah, I see we relate to epistemology slightly differently. Let me argue that mine is better and more rigorous.
Have you ever read Eliezer Yudkowsky's The Sequences? I imagine, given that you're in this space, that there's some slight chance. Not that I recommend spending that time, but what follows will have some of the same ideas (though he rejects Pascal's wager, in a somewhat unprincipled manner).
Generally speaking, everything you know has a probability attached to it, according to how likely it is, from your perspective, to be true. That I'm typing into a computer right now? I'm quite certain of that, but there's always the possibility that Cartesian doubt is right and I'm under some variety of extreme delusion. That you're not in this room right now? I'm quite sure of that as well, though perhaps there's some remote chance that you happen to be in the area and crept in. In these examples it's kind of silly to pay attention to the tiny chances that my evaluation is wrong. There are some cases where it's more useful: if I am expecting someone to arrive soon, there's some subjective probability that someone will arrive in the next five minutes, which might be pretty relevant to how much I need to be rushing to prepare.

I said "subjective probability" there. I want to emphasize that what we are talking about is not what the probability is from some neutral world observer. I am talking about what the probability is to you. This isn't any different from what we ordinarily mean by probability: when you roll a die, you could hypothetically apply the laws of physics and work out exactly how it will land. But we still say the probability is one in six, because that is the probability according to the knowledge of the players involved. Alright, everything has probabilities. It is important to keep in mind that in the more extreme examples, you cannot dismiss that. There's no clear boundary line between a 1-in-Graham's-number chance of being right and near certainty, only a sloping gradation. Everything that you can think of has a probability of being the case, somewhere between 0 and 1.
When we learn things, a key part of what is going on is that we think some facts about the world have become more or less likely. This happens according to Bayesian updating (or at least, would happen if we were perfectly rational and had unlimited computation at our disposal; but it's a useful concept anyway). That is, you start with some probabilities over hypotheses about the world. You come across evidence. This evidence is more likely under some hypotheses than under others. Following Bayes' rule (yes, the basic probability rule), you revise your probability of the former hypotheses up, and of the latter ones down. Hooray; you've now taken that piece of evidence into account, placing just the right amount of weight on it, and have new, more accurate probabilities. One useful concept, then, is of subjective likelihoods attached to every hypothesis, and a universal prior: that is, some probability assigned to every world state or possible hypothesis, from which, throughout our lives, with every piece of evidence, we adjust all the probabilities accordingly, giving the probabilities that a perfectly rational agent would hold. (This is known as Solomonoff induction.)
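(To make the update step concrete, here is a minimal sketch of a single Bayesian update in Python. The hypothesis, the evidence, and all the numbers are illustrative assumptions of mine, not anything from the argument itself.)

```python
# One Bayesian update with made-up numbers. H is some hypothesis,
# E is some piece of evidence bearing on it.

prior_h = 0.01          # P(H): prior credence in the hypothesis
p_e_given_h = 0.90      # P(E|H): chance of seeing the evidence if H is true
p_e_given_not_h = 0.05  # P(E|~H): chance of seeing it if H is false

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
posterior_h = p_e_given_h * prior_h / p_e

print(round(posterior_h, 3))  # 0.154 -- the evidence moved P(H) from 1% to ~15%
```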
Such a construct, of course, does not exist. Various parts of that aren't true. We don't have probabilities at hand for every possible hypothesis; most ideas we haven't even thought of. There are serious questions about how you would even generate the probabilities, and whether there is some objective way to do so (Kolmogorov complexity, that is, one measure of the amount of information in a hypothesis, has been suggested, but I don't think that can apply to everything, and there is no clear way to define it neutrally, either). And we couldn't calculate it even if it did exist, as it is provably noncomputable. Rather, we come up with ideas, assign likelihoods to them by who knows what rule (though it has to be a somewhat reasonable one, since we're right a lot of the time), pay attention to some things and not others, and often just notice how the likelihoods of particular things change, rather than recomputing everything after every piece of evidence. Nevertheless, it still is a useful construct, as it shows how a perfect reasoner might work, and it is something we approximate by our own reasoning. If we build our ideas off of that better form of reasoning, they'll remain theoretically correct and rational, even if what we do only approximates it.
Enough background. Let's go through your comment. I'll skip the first paragraph.
I'll note that Pascal's Wager isn't really an argument that God exists, it's an argument that it is instrumentally (but not necessarily epistemically) rational to wager for God. It's an argument for a course of action. That said, I don't have a problem with non-empirical arguments. There is no reason why evidence that adjusts our probabilities (as discussed above) has to be real world data; both that and realizations in our ideas will do so.
The argument isn't opposed to empiricism in general, or even in any specific instance. Apply all the empirical evidence you like; it'll only make your picture of the world better. I think my first example clearly involved empiricism, looking at the actual revealed religions. It would be absurd to argue for a religion without at least some empiricism. Arguing about how unlikely it is is precisely the relevant question (well, along with how large the benefit/harm is). The wager dismisses as comparatively irrelevant possibilities that do not offer any infinite benefits or harms, but it still cares about empiricism.
Why, then, reject "the Atheist's God"? I don't, actually, reject it in the same way as I ignore the finite benefits. Rather, I compare the probability of that versus the probability of the other options, consider the rewards and penalties of the possible courses of action, and go with the one with the best expected value. I'm just convinced that it's less likely, comparatively, than a God of one of the various large revealed religions happening to be true, and so it makes more sense to follow the latter rather than the former.
This was the main reason that I gave all that background above.
In this case, then, you talk of bringing up the hypothesis, and argue that even mentioning the possibility is something that needs justification. In general, this isn't necessary. You're always free to think up ideas, just often the probability will be low. There's nothing wrong with me considering the idea that the moon is made of cheese, and that they discovered it during the landings, but didn't reveal it after financial pressure from lobbyists in Big Cheese to prevent cheese mining. I'll just reject it out of hand as technically possible but extremely improbable, under my ordinary, somewhat inscrutable, probability assigning rules.
So it is false to say that you need evidence merely to consider the hypothesis. It is fine to consider the hypothesis that there are infinite rewards and punishments. In fact, this is an entirely rational thing to do, as discussed before: it has some probability. Feel free to think the probability low. But the argument I articulated before does not care if the probability of infinite benefits and harms is low. When the payoff is infinite, that outweighs everything else.
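(A toy illustration of that last sentence, with numbers that are entirely my own invention: in a naive expected-value comparison, any nonzero probability of an infinite payoff swamps any finite option.)

```python
# Naive expected values: a near-certain large finite reward versus an
# arbitrarily improbable infinite one.

p_infinite = 1e-12                       # tiny (but nonzero) credence in the infinite payoff
infinite_payoff = float("inf")

finite_ev = 0.99 * 1_000_000             # near-certain large finite reward: 990000.0
wager_ev = p_infinite * infinite_payoff  # inf: any nonzero p times inf is inf

print(finite_ev, wager_ev)               # 990000.0 inf -- the wager dominates
```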
I think what you were saying is that you need a reason to take it seriously. Usually, things are only taken seriously when there's a reasonable likelihood of them happening, because extreme improbability usually outweighs whatever finite considerations are at stake. But here, that doesn't matter, as the infinite payoff will overcome whatever finite improbability we are talking about. (Side note: the actual reason police can't start investigating random people is labor costs (it's just not efficient) and rules requiring reasonable cause, because we protect citizens, not that it would be impossible to assign probabilities legitimately.)
Sure, sacred texts were what first led me to look into this. But that doesn't mean that the basic Pascalian concerns would not be right, even were the sacred texts never written. I'm still convinced that, were the sacred texts never to have existed, it would still be right to realize that infinites are what matter, try to figure out what's more or less likely (in that case a considerably harder task) and devote one's life to it.
Sacred texts are first in the actual facts of my thinking about it, but that does not mean that there is not independent motivation—indeed, the most extreme possible motivation—to do so.
That is, arguments do not gain their legitimacy from whatever led one to look at them. They have their legitimacy in their own right, by their own merits. And in this case, the merits of the argument are good. Nor does the need to seek infinites depend on any sacred-text-reliant premises.
In the sense that I'm bringing this up to try to present an argument for religiosity, sure, it's strategic. But in terms of whether you should do this, no, that's just what you should do. In every choice you make, whatever effect it has on the infinite dominates everything else. It would be extremely silly not to look at the thing in comparison to which everything you're ordinarily thinking about is of infinitesimal value.
I think the authors of the scriptures believed them. Several of them endured physical suffering for it. But that aside, okay, that's possible, and would decrease how likely you are to think each of the sacred texts we're talking about are telling the truth. Fair enough. But that doesn't adjust the overall fact that it is infinites you are to look to and evaluate. That doesn't get you out of the overall question. (And if you can't find anything more reliable, you might turn to the scriptures anyway, on the off-chance that they are what they say they are, but that isn't at all necessary to the initial steps of the argument—seek out infinites, with all your might—which it sounds like is a big departure from how you've looked at life up to this point.)
Sorry to write at such length, but I thought giving a better background on epistemology would help. Don't feel the need to respond to each detail.
Thanks for the reminder that Pascal's Wager is about instrumental rationality and not epistemology. I realized that sometime in between posting this and reading your reply...
I'm not even sure I "should" think according to any mechanistic rules (everyone notes we don't actually compute Bayes in our heads), at least not at the high level of thoughts. Just as ethics is more about systematizing what we feel in our guts, I navel-gaze because I think systematizing is fun; systematizing what we actually do, for example. I get the impression your argument is prescriptive (not that you personally are evangelizing anyone), so I would like to be up-front and honest that absolutely nothing you say would ever change how I act, except maybe cause me to think of a reply.
It's difficult for me to decouple 1) and 5). The mugging implications seem too real to me. Isn't accepting this just a vulnerability to be mugged by anyone? Upon further reflection, I don't think we even need to bring up infinities to realize that expected value has mugging problems. The mugger will just tell me that there is some amount of reward, not infinite, that I should accept, since I don't assign anything a probability of zero. As the mugger names higher and higher values, it's true the probability doesn't (seem to) drop comparably. Even without bringing infinity into the mix, expected value seems to have some issues! So I'm not sure a hyperreal (or whatever) analog to expected value would help me feel any better. You seem smarter than me, though, so I'm assuming you already know about this.
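(A toy model to make the finite-mugging worry concrete; the payoff schedule and the credence function below are invented for illustration, not anything either of us proposed. The point: if credence in the mugger's promise shrinks more slowly than the promised payoff grows, the naive expected value of paying grows without bound.)

```python
# Pascal's mugging without infinities: expected value diverges whenever
# credence(payoff) * payoff keeps increasing as the payoff grows.

def credence(payoff):
    # Illustrative assumption: disbelief grows slightly slower than the
    # payoff (exponent 0.9), so credence * payoff still increases without bound.
    return 1.0 / (0.001 * payoff + 1000.0) ** 0.9

for payoff in [10**6, 10**9, 10**12, 10**15]:
    ev = credence(payoff) * payoff
    print(f"promised {payoff:.0e}: EV of paying ~ {ev:,.0f}")
    # EVs rise: ~1,069 -> ~3,977 -> ~7,943 -> ~15,849 and keep growing
```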
I get that this isn't going to convince you. My goal is mostly just to make you go, "Oh. That's a good argument. I don't really have any answer to that." Planting seeds, etc. Thanks for the honesty, though.
Yeah, I agree that at some point the probability seems to drop less than the value grows. No idea whether I'm smarter, but I've probably thought about this set of issues way more. My answer to the "what about muggings" question is just that that's way less of an issue when you're already centering your actions around an infinite: at that point, it's not just a finite loss to the mugger, but a risk of losing some infinite amount.
I don't think it makes sense to reject expected value, because the von Neumann-Morgenstern utility theorem says that, to be rational (under a seemingly reasonable definition of rational), your preferences must be representable by a utility function, so you end up having to act as if you are maximizing expected value.
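(For reference, a compressed statement of the theorem being invoked, in its standard textbook form rather than either of our wordings: if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function $u$ on outcomes such that

$$L \succeq M \iff \sum_i p^L_i \, u(x_i) \ge \sum_i p^M_i \, u(x_i),$$

that is, one lottery is preferred to another exactly when its expected utility is at least as high.)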
Why do you think that the Completeness Axiom is an axiom of rationality, rather than a modelling convenience? I once checked through the great Bayesian decision theorists, e.g. Savage and Morgenstern for an argument for this axiom, but they ALSO seem to view it as a modelling convenience. As I recall, Savage explained the axiom as, "No, this isn't a requirement of rationality, but I can't do the maths in a simple [by HIS standards!] way otherwise." When I ask living great Bayesian philosophers, decision theorists, or statisticians, they ALSO view it as a modelling convenience, or change the subject from representation theorems to Dutch Book Arguments, epistemic accuracy arguments, and so on.
This isn't just a technical point, since it's not clear to me that a rational agent must assign an additive probability to their belief in Mysteries, such as the Trinity, because in a Bayesian model this also requires determining likelihoods of the deductive closure of your beliefs, over a sigma-algebra of propositions, under the assumption of the Trinity. (Otherwise you don't know whether your credences are coherent.) However, this is a problem for Trinitarian Christianity, rather than unitarian monotheisms. Again, this seems to be another case where your reasoning seems to favour Islam, rather than standard Christianity.
(By the way, I recently talked to a large number of Bayesian statisticians, all of whom were literally laughing out loud when they learned that people like Yudkowsky think that you can determine credences in hypotheses like "God exists" or "This interpretation of quantum mechanics is true." That is not how someone who understood Bayesian mathematics would speak, in their view. For one thing, they brought up the problem of determining a partition.)