Small-Scale Question Sunday for October 29, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I've never understood why some rationalists act like "faith" is irrational

Because faith is defined as believing something without having a good reason to believe it. If you have a good reason to believe it, then you'd just appeal to the reason and have no need to bring faith into it.

as if you're only permitted to believe in things that are epistemically certain.

In my experience, atheists/rationalists don't claim that certainty is required to be justified in believing something. As you correctly point out, that would be an absurdly high standard that would commit you to a useless stance of Cartesian doubt.

Beyond "cogito ergo sum," there's not much knowledge available to us that's not ultimately based on pragmatic leaps of logic. I can't prove that the world outside my head really exists, or that the past and future really exist, or that causation is real. I don't pretend to understand Gödel's incompleteness theorems, but my layman's understanding of them is that even math relies on unprovable assumptions to work. And most of what we call "scientific knowledge" is far more tenuous than these propositions: we say that we know, for example, that an oxygen atom has eight protons, but I've never actually checked.

Not all leaps of logic are equally justified. You may not know anything about the original research that demonstrates how an oxygen atom has eight protons, but you know that scientists have developed systems (that you can distill down to "the scientific method", if you like) to test and discover what things happen to be true about the world we live in and what hypotheses happen not to be true. Planes fly, magic carpets don't. You also know that in general scientists are open about their methods and others who are knowledgeable about the subject matter have the opportunity to replicate and, if appropriate, refute previous findings. If you challenged a scientist of the relevant specialty about whether an oxygen atom has eight protons, you'd know that they'd have the receipts to back it up.

At this point you may be waiting to blurt out "but the replication crisis and the politicization of science!" And you're absolutely correct. But our confidence in any given proposition that comes out of science is proportional to, among other things, how reliable we consider that subfield to be. If we have good reasons to distrust scientists in a particular field of study or doubt a particular finding -- whether because the scientists are politicized (social science) or because figuring out a way to tease out what's actually true is fucking hard (again, social science) -- then we modulate our confidence in any given proposition coming out of that field ("such-and-such remains unclear, more research is needed" is a cliche for a reason.)

The only reliable alternative to bad science is better science. What else could there even be? Holy books? Podcasters and substackers trying to work it out from first principles? Vibes?

Epistemic certainty has to yield to pragmatic utility. Therefore, as long as my religious beliefs aren't provably false (which would be utility-decreasing, because it would cause me to make predictions that turn out to be incorrect, to my detriment), and if those beliefs make me better off (consensus seems to be that religious people tend to be happier and more mentally healthy than nonbelievers), I don't see why it's "irrational" to continue being religious.

It's irrational if you don't have a good reason to believe that it's actually true. It may be that believing in a falsehood can be beneficial, but that's a separate argument from whether it's true. If you want to argue that people should believe falsehoods because they're beneficial, you can make that argument (and in this paragraph you seem to be), but be very aware that that's a separate argument from its truth and therefore from whether it's rational to believe that it's true.

And, as an aside, I can't fathom how it could even be possible to believe something that you recognize you have no good reason to think is true merely because you think it's beneficial. Belief is an uncontrollable state of being convinced of the actual truth of something, so I can't imagine how belief could even be possible without being convinced that it's true.

Finally, plenty of prominent rationalists have beliefs that seem just as strange and unfalsifiable as my own religious beliefs

Yes, they do. So don't add to the list.

It's not often that I find myself retreading ground from the Great Atheism War of the Aughts in this era where wokeism has become such a threat that I gleefully find myself allied with evangelicals and even married one and moved to the heart of evangelicalstan to get away from it. But man, I still can't let this shit stand unchallenged.

Because faith is defined as believing something without having a good reason to believe it.

No. What is faith?

"Now faith is the assurance of things hoped for, the conviction of things not seen."

Do I have faith in Jim? Have I good reason to trust him? Why do I have confidence that he will do what he promises?

Do you have faith in reason? What is the basis of your confidence and belief in its efficacy and veracity?

Thanks for this thorough response. Just to clarify, I don't think "people should believe falsehoods because they're beneficial"--people should aspire to have correct beliefs, even if they get warm, fuzzy feelings from having incorrect beliefs. I think arguments about ideas should be focused on whether the ideas are true, without worrying about the collateral concern of whether they are "beneficial" in some other way. What I do think is that, in areas where "such-and-such remains unclear, more research is needed" (which covers an enormous amount of the space of possible truth), it's not an irrational heuristic to select among available truth claims the one that adds the most meaning to your life.

I apologize for my flippant "oxygen" example--it was the best I could think of at the time--since I am absolutely happy to defer to scientific consensus (in proportion to the reliability of the subfield) in all matters. I don't believe in young-earth creationism, for instance, even though a lot of Christians do believe in it and have advanced some conveniently non-falsifiable theories explaining away the physical evidence of fossils, radiocarbon dating, etc. The consensus of lots of reliable subfields--geology, biology, astrophysics, etc.--would need to be wrong in order for young-earth creationism to be right. So--like most Christians who aren't fundamentalist Protestants--I'm happy to accept the mainstream scientific view on that question.

But there are some very important questions where there is no scientific consensus: why is there something instead of nothing? What is consciousness? (Incidentally, I'm often confused by the confidence with which atheists reject the possibility of any sort of "afterlife"--they may not know what consciousness is or how it works, but they're positive it disappears when you die! But that's another discussion.) Is morality even real, and if so, how ought we to act? In my view (you may disagree) these questions have resisted scientific explanation since the dawn of time, and they don't seem likely to be scientifically settled anytime soon. I don't want to get too into the weeds of these particular questions, unless you want me to. Suffice it to say that, if we have to wait for "better science" to explain these things, we may be waiting a long time. What should we believe in the meantime? It's not like we can just brush these questions off; they seem super important to any kind of complete worldview! I can't wait for science to catch up; I need to live now!

Finally, I don't know that "being convinced of the truth value" of something is necessary to belief. Being convinced of the falsity of an idea is, of course, fatal to belief--but as long as something could be true, and isn't patently less probable than other competing ideas, I don't see why one couldn't believe it. I think everyone relies on heuristics like "meaning" to select their most important beliefs from among several more-or-less-as-likely ideas.

But there are some very important questions where there is no scientific consensus: why is there something instead of nothing? What is consciousness? (Incidentally, I'm often confused by the confidence with which atheists reject the possibility of any sort of "afterlife"--they may not know what consciousness is or how it works, but they're positive it disappears when you die! But that's another discussion.) Is morality even real, and if so, how ought we to act? In my view (you may disagree) these questions have resisted scientific explanation since the dawn of time, and they don't seem likely to be scientifically settled anytime soon. I don't want to get too into the weeds of these particular questions, unless you want me to. Suffice it to say that, if we have to wait for "better science" to explain these things, we may be waiting a long time. What should we believe in the meantime? It's not like we can just brush these questions off; they seem super important to any kind of complete worldview! I can't wait for science to catch up; I need to live now!

Why do you feel the need to believe in some explanation for most of these questions? Why is it a problem to simply state that you don't know why there is something rather than nothing or what consciousness is, and thus don't have a belief on the matter?

You're right that these questions are difficult and any solutions/explanations are elusive or woefully incomplete. But it seems to me the only way we're going to solve them, if they are even solvable at all, is by making empirical discoveries about our universe (science) and by applying our capacity to reason. The religious alternative is to believe in explanations given by holy books whose author(s) we have no good reason to believe knew anything more than we do (and usually a lot less). To the extent these holy books have good explanations for any of these questions, we can justify our belief in their explanations by appealing directly to the reasoning and skipping the middle man.

Finally, I don't know that "being convinced of the truth value" of something is necessary to belief. Being convinced of the falsity of an idea is, of course, fatal to belief--but as long as something could be true, and isn't patently less probable than other competing ideas, I don't see why one couldn't believe it. I think everyone relies on heuristics like "meaning" to select their most important beliefs from among several more-or-less-as-likely ideas.

The first problem with that reasoning is that it's not enough for a proposition to be more probable than other competing explanations. Something being 2% probable and all alternatives being <2% probable doesn't mean it's justified to believe it. Which leads to the second problem, which I already mentioned: you're neglecting the possibility of simply not believing any proposition yet offered by anyone.

I could of course quibble over the suggestion that science doesn't have compelling explanations for some or all of the questions you mentioned. But you seem to agree with me that that's a bit of a distraction from the underlying dispute.

(This is tangential to my main point, but just for fun: Is there a probability where it becomes justified to believe something? 2% is too low, but 100% is too high--that would "commit you to a useless stance of Cartesian doubt." Is there a cutoff? If so, where is it and why? Even if you only believe ideas at 99% probability or above, you're still accepting up to a 1% chance that your belief is false. Wouldn't it be safer to say that you simply "don't have a belief on the matter?" On the other hand, if you can believe something at 99%, why not at 80%, or 51%? Why not at, say, 30%, if all the alternatives are even less likely?)

You say "Why is it a problem to simply state that you don't know why there is something rather than nothing or what consciousness is, and thus don't have a belief on the matter?" Good question, and I can't think of a good answer except that it seems painfully unsatisfactory to me, like asking someone starving in the desert "why can't you simply enjoy being hungry?" But I can't help but notice you didn't apply that reasoning to the next big question I mentioned: "how ought we to act?" The is/ought gap can't be bridged empirically. But it has to be bridged somehow--before you can act, you need to know how you ought to act. You can't just throw up your hands and say, "I don't know"; every deliberate action implies a value judgment.

If science is silent on the "ought," then we either need to look outside of science for our values or else give up on objective values altogether. If, as you argue, all beliefs should be scientifically justifiable, then we can't look outside science for our values; therefore, we have no alternative but to abandon the idea of objective values, and with it any ideas about how we "ought" to act.

If this premise: "All beliefs ought to be based on empirical discoveries about the universe"

leads to this conclusion: "Beliefs about what 'ought' to be are baseless and unjustifiable"

then the premise seems to refute itself.

I'm interested to know if you consider yourself a moral realist or not; if you do, how do you respond to this? Apologies if I've grossly misunderstood your position.

(This is tangential to my main point, but just for fun: Is there a probability where it becomes justified to believe something? 2% is too low, but 100% is too high--that would "commit you to a useless stance of Cartesian doubt." Is there a cutoff? If so, where is it and why? Even if you only believe ideas at 99% probability or above, you're still accepting up to a 1% chance that your belief is false. Wouldn't it be safer to say that you simply "don't have a belief on the matter?" On the other hand, if you can believe something at 99%, why not at 80%, or 51%? Why not at, say, 30%, if all the alternatives are even less likely?)

We speak of belief as a binary matter - you either believe something or you don't - but in practice it's a matter of degrees of confidence. For any given proposition, you have some degree of confidence in its truth (even if it's near-zero) and at a certain threshold it's high enough that you say you believe it. But it's just semantics.

You say "Why is it a problem to simply state that you don't know why there is something rather than nothing or what consciousness is, and thus don't have a belief on the matter?" Good question, and I can't think of a good answer except that it seems painfully unsatisfactory to me, like asking someone starving in the desert "why can't you simply enjoy being hungry?"

Well, I'm sorry, but that's just not a good reason to believe something. That doesn't negate the real feelings you describe and the challenge of dealing with them, but it's not going to be convincing to anyone else as a justification for believing what you believe, nor will anyone else have any reason to think that you're justified in believing it yourself.

But I can't help but notice you didn't apply that reasoning to the next big question I mentioned: "how ought we to act?" The is/ought gap can't be bridged empirically. But it has to be bridged somehow--before you can act, you need to know how you ought to act. You can't just throw up your hands and say, "I don't know"; every deliberate action implies a value judgment.

It depends on the action. Sometimes our actions are justified based on information we have good reason to believe about the physical world (e.g., floors hold our body's weight, and putting one foot in front of the other repeatedly on this floor will soon take you to your kitchen), or about our minds (e.g., you want to walk to the kitchen because you're hungry).

But your later remarks make me think that what you're really trying to get at is essentially "how do we know how to treat other people", i.e., morality. Well, I think you already know that that's a deeply controversial and unsolved topic at an abstract level. Let's consider the approaches on offer.

Consider the religious approach to morality: that God tells us right from wrong. I think the best rebuttal to that has remained unchanged for the couple thousand years since it was introduced by Plato, if I'm not mistaken. It runs as follows. Suppose God says killing is wrong. Did he have some reason to say that it's wrong? Or could he have just as easily said that it's always right to kill anybody else (in which case it would be right because he said it's right)? If you say either that it would still be wrong to kill even if God said it was right, or that God wouldn't/couldn't say killing is right because he had a reason for saying killing is wrong, well then we can appeal directly to the reason and skip the middle man.

Now consider the non-religious approach to morality, which uses science and reason. Let's start with science. Science can provide us information about the world and the predictable consequences of certain actions. Why is this important? Well, take witchcraft for example. Hunting witches and punishing them isn't actually irrational - if there really was a witch casting spells to harm other people, she really should be punished, or even killed! It only doesn't make sense if witchcraft isn't actually a thing. But belief in witches is nearly a cultural universal among primitive humans because the default operating system of Homo sapiens does not allow much room for the intuition that random bad shit sometimes happens. Rather, if a person you care about gets sick or your crop fails, the primitive human believes there must have been a witch that cast a spell to cause it. Today, science has afforded us actually correct explanations for events that used to be explained by witchcraft. That helps shape our morality - i.e., how we "should" act - in an instance like this.

And science's role in morality is far more extensive than finding better explanations for calamities than witchcraft. Again, it provides a more informed understanding of the physical world, and a large part of determining what actions are moral is going to be contingent upon facts about the world that we just don't know without science. A lot of that will come down to scientific knowledge about the state of brains and the fact that brain states constitute experiences like pain (and thus whether a certain action will predictably cause pain), but it can also include things like understanding the effects of certain pollutants on our bodies and ecosystems (and thus whether dumping certain waste will harm others).

But science can't bridge the is-ought gap. It can tell us "this action causes another person pain", but not "you therefore shouldn't take this action". That's where reason comes in.

Suppose someone were to say, "Why should I care if I cause you pain or kill you? Your pain isn't my pain, and besides, I'd like to take your possessions after I kill you." Well, he won't convince anyone else that only his suffering matters and no one else's, so he is in no position to object if others were to treat him that way. Since no one wants to be treated that way, and since one's power over others is uncertain (tomorrow you might be in a position to be killed by a bigger man or a larger mob), it's in everyone's interest to collectively agree that randomly killing and pillaging is wrong.

Or suppose someone says, "I don't think it's immoral to inflict cruel and torturous punishment on this bread thief because we need to deter criminals. The harm caused by inflicting pain on him is less than the harm caused by undeterred criminals." Indeed, criminal deterrence is a defensible rationale for causing pain. But if the goal is deterrence, then any harm inflicted in excess of that which is necessary to deter criminals is arguably pointless harm and should be avoided. And surely short imprisonment is enough deterrence for theft. Furthermore, there's a problem of perverse incentives: if a man knows he'll be tortured and executed for stealing a loaf of bread, well then he might as well kill the shopkeeper while he's at it. Since there can be no greater punishment than what is already expected for the theft, he is incentivized to maximize his chances of getting away with it by killing the witness. Therefore, it makes more sense to have a sliding scale of punishment for criminal activity.

Those aren't scientific or religious arguments, but the use of such reason together with the better understanding of the world that we get from science provides us the building blocks for morality. Now, people who have read way too much Hume might object that it's still smuggling in certain first principles like "all else being equal, pain is bad". But you can play that game with anything. How do we know that the law of noncontradiction is compelling - that A cannot equal not-A? Well, it just... sorta... is. You have to pull yourself up by your bootstraps at some point and stop searching for a deeper proposition that isn't self-justifying. And if someone is unconvinced by the starting point that "all else being equal, pain is worse than no pain", then I think that person is either someone with way too much education who likes playing games, or they're not an honest interlocutor.

I'm interested to know if you consider yourself a moral realist or not

Not really. I think we all just sort of woke up on this backwater planet in this mysterious universe and are just collectively fumbling our way towards making life better for ourselves using the crude cognitive toolkits we evolved with. That includes figuring out facts about ourselves and the world and using reason to try and persuade each other of the best state of affairs to strive towards.

I do think we have an evolved sense of morality. It seems obvious to me that moral intuitions are innate, and they're certainly a human universal. That doesn't mean those evolved intuitions are actually defensible, though, or provide a good basis for morality. Sometimes they are (e.g., indignation at unfairness) and sometimes they're not (e.g., the lives of that other tribe have no value because they're Others).

Sorry for the late reply; I've had a busy couple of days. Thanks for the thorough response!

Consider the religious approach to morality: that God tells us right from wrong. I think the best rebuttal to that has remained unchanged for a couple thousand years when it was introduced by Plato, if I'm not mistaken. It runs as follows. Suppose God says killing is wrong. Did he have some reason to say that it's wrong? Or could he have just as easily said that it's always right to kill anybody else (in which case it would be right because he said it's right)? If you say either that it would still be wrong to kill even if God said it was right, or that God wouldn't/couldn't say killing is right because he had a reason for saying killing is wrong, well then we can appeal directly to the reason and skip the middle man.

You're right, of course, that if morality had some basis more authoritative than God, then God would be a mere "middle man" and would not be necessary to the determination of moral truths. But I don't agree that "it would still be wrong to kill even if God said it was right, or that God wouldn't/couldn't say killing is right because he had a reason for saying killing is wrong." I believe God's nature is the source of goodness; you can't appeal to some standard of goodness higher than God. But it also isn't true to say that God could arbitrarily change good to evil or vice versa; God--being perfect--has no reason to change his nature, and--being omnipotent--his nature can't be changed by anything else. An action is "good" insofar as it conforms to the immutable will of God.

Your "witchcraft" example conflates a factual dispute with a moral dispute: science can tell us whether or not the village witch is guilty of destroying the crops (a factual question), but it can't tell us whether people who destroy crops deserve to be punished (a moral question). I think you acknowledge this, since you agree that science can't derive an "ought" from an "is."

Reason can justify an "ought" statement, but only by presupposing a condition: "you ought to exercise if you want to be healthy; you ought to punish criminals if you want to deter crime" etc. So I don't think your examples work:

Suppose someone were to say, "Why should I care if I cause you pain or kill you? Your pain isn't my pain, and besides, I'd like to take your possessions after I kill you." Well, he won't convince anyone else that only his suffering matters and no one else's, so he is in no position to object if others were to treat him that way. Since no one wants to be treated that way, and since one's power over others is uncertain (tomorrow you might be in a position to be killed by a bigger man or a larger mob), it's in everyone's interest to collectively agree that randomly killing and pillaging is wrong.

Plenty of powerful people can say, with a high degree of confidence, that they will *not* be killed tomorrow by a bigger man or a larger mob. Genghis Khan killed and pillaged to his heart's content, and he lived well into his sixties and, by most accounts, died by falling off his horse and/or contracting an illness. Meanwhile, plenty of moral people end up getting killed or pillaged *in spite of* always behaving as if killing and pillaging are wrong. If morality has no better basis than this sort of social-contract theory, then the Genghis Khans of the world have no use for it.

Earlier, you (correctly) pointed out that, if God is a middle man between humans and morality, we can just skip God and go straight to morality. But your own view of morality seems to treat it as a "middle man" for rational self-interest. If Genghis Khan says, "Why don't I skip the morality, and go straight for my own rational self-interest (i.e. killing and pillaging with impunity, because I enjoy it and I'm powerful enough to get away with it)?", how could you dissuade him?

Similarly, while I agree humans generally have evolved a "moral intuition," I don't agree with you that it's "universal." Psychopaths seem to be lacking the moral compunctions that are innate in ordinary humans. And while plenty of psychopaths end up dead or in prison, intelligent and capable psychopaths often become wildly successful. It seems like, above a certain level of intelligence, psychopathy is a very useful trait (which might explain why it hasn't been selected out of existence). So, if you can't appeal to Genghis Khan's moral intuitions, because he wasn't born with them--and if you can't appeal to his rational or game-theoretic self-interest--how do you convince him not to kill and pillage?

The only way I can think of is to convince him that killing and pillaging are not desirable because they are not good. And we know they are not good, because God is good and God is opposed to killing and pillaging. If Genghis Khan continues to kill and pillage, his life will be unfulfilling because he has not followed what is good, and after his death he will be punished by God for disobeying his will.

Now, you may not believe these things, and Genghis Khan may not believe them either. In that case, we're no better off than we would be under your system. But we're no worse off, either. And, at the margins, there are some rare instances where religious appeals appear to have moved otherwise implacable pillagers and conquerors; we'll never know what Pope Leo said during his meeting with Attila the Hun, but we do know the latter subsequently called off the invasion of Rome.

But my arguments about the religious basis of moral truths are, obviously, less relevant to moral non-realists like you than to, say, atheists who still believe in objective morality, like a lot of utilitarians (Scott Alexander's Utilitarian FAQ, for example, never actually explains why anyone should assign value to other people; this seems like it's kind of the entire crux of utilitarianism, but Scott brushes it off as a "basic moral intuition" (section 3.1)). If you're willing to bite the bullet that morality is just a spook, then you have no reason to be troubled by materialism's failure to establish an objective basis for morality. But you also don't have much room to criticize people who are convinced of objective morality, if their convictions turn them away from a materialism that's inadequate to justify moral truths.

Thank you for the comment, it makes me a little more confident that the sanity waterline rises as well as falls when I see other people articulate much the same arguments as I would have made myself.