
The scientific method rests on faith in God and Man.

The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:

Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.

Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:

  1. Do you believe Newton's Law of Universal Gravitation is true?
  2. If so, how sure are you that it is true?
  3. Why do you believe it, with that degree of certainty?

The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think have been done, to qualify as "extensive experimental verification"? I would ask that you, the reader, actually pick a number as a rough, round guess.

Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the claim that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
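The point can be made concrete with a standard exact binomial bound (the sketch and its numbers are mine, not the essay's). If all N observations confirm the law, the one-sided lower confidence bound z_L on the true proportion z solves z_L**N = alpha, since that is the chance of N straight confirmations if the true proportion were only z_L:

```python
def lower_bound(n: int, alpha: float = 0.05) -> float:
    """One-sided (1 - alpha) lower confidence bound on the true
    proportion z after n out of n confirming observations.
    Solves z**n = alpha: the probability of n straight confirmations
    if the true proportion were only z."""
    return alpha ** (1.0 / n)

# The bound creeps toward 1 as n grows, but never reaches it:
for n in (100, 10_000, 1_000_000):
    print(n, lower_bound(n))
```

However large N grows, the bound approaches 1 without ever reaching it; confidence in z = 1 exactly remains zero, which is precisely the gap described above.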

Let me repeat myself for clarity: I am not merely saying that there is no statistical law that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the laws of gravity, Ohm's law, etc. -- are not based on statistical reasoning and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.

So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the

Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that have been exercised in attempting to disprove it.

In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).

Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?

For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely, the right-hand side of the number line.

In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard they can't be solved, not so easy that the players can't take pride in solving them.

There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.


This is a reply jointly to several comments so I will put it as a new semi-top level post. Several of the responses, including such as (what I consider) the most thoughtful ones of @sqeecoo and @Gillitrut, point in the direction that the mission of science is not to discover natural laws that are literally true, but to produce useful fictions -- stories about the world that we are better off believing and acting on. That position, if you really believe it, is immune from my argument. But if you take that position, and at the same time embrace the study of science, then you cannot, at the same time, argue against theism on the grounds that it is literally false.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on? In fact, this seems like a natural extension of the "science discovers things that are literally true" act. Sure, this line of argument presupposes that there is some "out-of-character" meta level of cognition on which you perform this cost-benefit analysis and are essentially a radical agnostic, but that doesn't mean you have to drop into OOC every time some theist comes along and demands that you explain yourself, any more than a good theatre actor would stop acting and break into a rant about why he needed the job every time someone in the audience indicated they were unhappy with the play.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on?

As far as I'm concerned, you are welcome to make that argument; be my guest. I just said that, under certain premises, you couldn't rationally make the other one.

Why can't I let the other shoe drop and say that "theism is literally false" is also a story about the world that we are better off believing and acting on?

Because that would be entirely arbitrary. And thus useless as an argument.

If you want to do things, do things. But legitimacy requires justification.

I'll just poke in to say that I think that the mission of science is to discover the actual, literal truth. I've hopefully made this clearer in my response in our conversation below, so I'll just refer to that instead of repeating myself here.

To add content to this post, I'd say that many epistemological perspectives do indeed give up on truth in favor of usefulness or, in some variants of Bayesianism, in favor of our probability estimates. I don't care whether a scientific hypothesis is probably true, I care whether it is actually true -- and if it is true, it will also be useful.

What is truth in the sense you mean here? I wager it isn't the same as what OP means.

In the strict sense, truth is literally inaccessible to any a posteriori method. Error and the senses cannot be absolutely mitigated.

Truth in the classical sense of correspondence to reality. If I say aliens exist and you say they don't, one of us has hit upon the truth despite both of us guessing. We won't know which of the two claims is true, but one of them is true, i.e. it corresponds to reality.

What would be the truth in the "strict sense", as you put it?

Truth in the classical sense of correspondence to reality.

So not the classical, but the non-classical modern sense.

Classicism in truth theories usually refers to the division between theories that rely on criteria and procedures and theories that do not.

Evidence theory (A is true if A is evident), coherence theory (A is true if it can be embedded in a coherent system without destroying its coherence), common agreement theory (A is true if specialists agree about its correctness), utilitarian theory (A is true if A is useful); these are all non-classical theories because they appeal to a mechanism to obtain truth.

Classical theories do not do this, and consider true what is necessarily so without appealing to criteria.

For instance, Tarski's STT, which works through a relation of satisfaction and operates solely on formal languages, is a classical theory of truth in the line of the Aristotelian syllogisms that inspired it.

"aliens exist" is a classically meaningless statement because you neither defined what an alien is nor the totality of an existence relationship.

What would be the truth in the "strict sense", as you put it?

Logical necessities. Anything that isn't contingent on evidence and stands by itself. Things that are so by virtue of pure reason, before evaluation of the senses. Things that are true a priori.

Most of mathematics is true in this strict sense, none of science is.

Ok, I was going for a plain language simple answer, but you obviously know your stuff. Tarski's STT in the Popper/Miller interpretation is the theory of truth I adhere to, then.

I see. OP seems to be arguing absolutes, so probabilistic epistemologies are going to be hard to reconcile, but I think I understand your point better with that added context.

I think you're right to say that it's not necessary that theories of our observations that don't assume a metaphysics are fictitious. And Propensity is a good example of this.

But one can probably retort that in application even such theories have to make the assumption that the universe is meaningfully describable, a fortiori probabilistically, if they want to make a claim at Truth. Which, as I understand it, is the whole debate around inductive skepticism.

I've never found Popper's arguments for the capabilities of pure deductivism entirely convincing myself. Even he has to appeal to one hypothesis being better or worse "corroborated" by the evidence, which drags him back into method. Hence the unfortunate fate of logical positivism.

I'm not sure I can follow everything you're saying here, but I'm interested in what you find unconvincing about Popper, if you feel like expounding on it. I hope you're not implying Popper was a logical positivist :)

It would be a bit silly to say that about one of its most tenacious critics. I'm merely saying his own criticisms of the problems with induction apply to his own ideas when scrutinized. He's a deductivist in the same way Marx is a materialist: only in theory.

I really have two problems with Popper.

First, the aforementioned issue with deductivism requiring some ranking of theories through experimentation.

I think this reintroduces the problems he sees in positivism.

The way he tries to get away with it is, as you know, by refraining from claiming truth and instead having science go for truthlikeness and verisimilitude.

This is all well and good and a more honest account of the scientific process, but his definition of truthlikeness is incoherent (by his own estimation) because it can't rank false theories. We may yet find a satisfactory solution for this but none of the attempts I've seen were very convincing.

Second is the more mundane criticism that his views don't manage to characterize a lot of behavior that we do regard as scientific. There is a lot wrong with Kuhn, but on this I wager he is correct.


To be honest, I am a little put-off by your phrasing that science is what we are better off "believing."

When I think of "things we are better off believing," I think of a case where believing and not-believing make a difference. For example, maybe there is a self-fulfilling prophecy involving the prescription "You should be confident." In that case, I might say we are better off believing "I am confident." Science is not a self-fulfilling prophecy, because the results of experiments do not depend on beliefs.

Science is stories about the world that we are better off acting on. This phrasing seems better to me. In this way, can't I argue against theism (whatever you mean by that) by saying "acting on theism doesn't make us better off"?

Actually, similarly to the old adage that theism is Not Even Wrong, in this new formulation of "true," theism is Not Even Actionable. I don't think this parallel is a coincidence.

Science is stories about the world that we are better off acting on. This phrasing seems better to me. In this way, can't I argue against theism (whatever you mean by that) by saying "acting on theism doesn't make us better off"?

Yes, feel free. But not (under the premises I described) on the grounds that there is no objective evidence that God actually exists (since that is also true of universal gravitation).

I'd like to hear more about why we can't argue in that direction. Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true? Or is this more that the acknowledged limits of scientific inquiry do not permit disproving theism?

I am content with believing that the particular empirical claims theists make seem to all have non-theistic explanations. If there is some causally inert god or gods out there, who do not interact with our reality in an empirically testable way, I am not that concerned with their existence.

Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true?

Yes, that's what I'm saying.

If there is some causally inert god or gods out there, who do not interact with our reality in an empirically testable way, I am not that concerned with their existence.

God's pronouns are He/Him. (For the sarcasm-impaired, that's a joke)

Yes, that's what I'm saying.

Okay well in that case it's also hypocritical to criticize Cthulhu and Star Wars lore for not being literally true. Hooray, solipsism. This entire line of argument advances absolutely nothing.

It essentially amounts to a theist's special request for their beliefs to be treated as intellectually serious even though they can't point to any justification for them that exists outside of their own skull, because hey after all, nothing is really certain, right?

Bluntly, request denied until one of these arguments successfully and meaningfully distinguishes Christianity, theism, whatever, from an infinite number of bullshit things I could make up on the spot.

I have lurked this group since long before it had a site or even a name, and throughout the years, I have almost never commented. So I apologize for being critical of someone like yourself who actually does post and contribute, as that is a bit hypocritical of me.

I think you are pattern-matching OP as a member of your outgroup — that is to say, a theist — then skipping past his argument about the inferential basis of the scientific method to attack him because his post appears to give aid and comfort to the enemy.

I am familiar with some physics, some math, and some epistemology — not enough to be an expert, but enough to where I think OP’s argument is reasonably well-defined and not equivalent to extreme skepticism about everything. I can’t find a clear basis to dismiss OP’s argument (at least as to the inferential limitations of science) as bullshit. (I don’t think OP has established at all the truth of theism or anything like that, but he only appears to make a single terse allusion to it at the end of his argument, and perhaps was using it rhetorically as bait.)

As someone whose intuition is that we do have objective evidence for scientific laws, I’ve been hoping that some mathematician or philosopher would eventually pop in here and formally demolish OP’s argument in a direction pleasing to my sensibilities, and I subscribed to the thread to wait for that to happen, but it hasn’t happened yet. @self_made_human, who will often deliver an incisive and sometimes brutal presentation of the traditional rationalist materialist viewpoint, appears to have lost interest. And of the two posters in this thread, @sqeecoo and @IGI-111, who appear most literate in the epistemology of science (or at least, more literate than me), neither has outright dismissed OP’s major premises as nonsensical or solipsistic, and it appears that I will have to read some Popper if I want to get to the bottom of it for myself.

My questions for you, or anyone, are:

(1) What is the first premise or step in OP’s argument that is clearly unreasonable / irrational?

(2) Is OP’s “principle of abductive inference” truly the inferential basis of the scientific method, and if not, what is, and how does it work?

(3) Is it impossible to infer universal physical laws with greater than 0% confidence, as OP claims?

(4) For OP: you suggest downthread that we should be inclined to trust models like Newtonian or Einsteinian physics. Why should we trust them (if we cannot infer universal physical laws with nonzero confidence) and how much should we trust them?

(4) For OP: you suggest downthread that we should be inclined to trust models like Newtonian or Einsteinian physics. Why should we trust them (if we cannot infer universal physical laws with nonzero confidence) and how much should we trust them?

We should trust them for two reasons. First, we do not need nonzero confidence in full generality to trust them for practical purposes. Being 99% sure the technology works 99% of the time is good enough -- or something like that, depending on the application. Second, I didn't say we cannot infer universal physical laws with nonzero confidence, just that we can't do it without believing in one more miracle, viz. that we are blessed with just enough intelligence, and a simple enough universe, that abductive reasoning is reliable (on top of the miracle that certain equations are physically instantiated in the form of physical systems and consciousness, that those systems continue persistently to be governed by those laws, that the parameters of those laws fall into the narrow range required for stars to form, etc.).

and how much should we trust them?

That depends on how many miracles you believe in.

Thank you.

In return, I'll save you some effort about getting to grips with Popperian notions of falsifiability by pointing out that they're obsolete.

Popper claimed that a single contradictory finding is sufficient to sink a hypothesis, whereas no amount of evidence can ever prove it with 100% confidence. In other words, you can't prove anything, only disprove it.

Bayesianism goes even further. It asserts, with the maths to prove it, that it is impossible to achieve either 0% or 100% confidence in a hypothesis without starting there, at which point literally no finite amount of evidence will sway you.

Starting anywhere in between, it would take an infinite amount of evidence to raise confidence in a hypothesis to 100%, or to reduce it to 0%. And if you start with 100% credence or 0% levels of disbelief, nothing anyone can do to you short of invasive neurosurgery (or maybe a shit ton of LSD) can change it.
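The fixed points at 0 and 1 can be checked mechanically. Here is a minimal sketch of mine (not from the post) of Bayes' rule in odds form, using exact rational arithmetic so that floating-point rounding can't fake certainty:

```python
from fractions import Fraction

def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    if prior == 0 or prior == 1:
        return prior  # 0 and 1 are fixed points: no finite evidence moves them
    odds = prior / (1 - prior)
    odds = odds * likelihood_ratio
    return odds / (1 + odds)

p = Fraction(1, 2)            # an agnostic prior
for _ in range(100):          # 100 pieces of strong evidence, each with LR = 10
    p = update(p, 10)

assert p == Fraction(10**100, 10**100 + 1)  # arbitrarily close to 1, never 1
assert update(Fraction(1, 1), 10) == 1      # a dogmatic prior never budges
```

Each piece of confirming evidence multiplies the odds by a finite factor, so any finite run of evidence leaves the posterior strictly between 0 and 1; only a prior already at an endpoint stays there.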

What are the practical ramifications? Well, here, what Nelson is trying to argue is a waste of time. If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close, and the fact that modern technology works is testament to the fact that we can be goddamn bloody confident in them. And the worst part is that it's not the poor laws of physics at stake here, it's everything you don't hold axiomatic.

Skip Popper. Get on the Bayes Boat, baby, it's all you need.

Here's a few links if you're curious:

0 And 1 Are Not Probabilities (at least in the Bayesian sense)

And Scott on the Predictive Processing theory of cognition which holds that all human cognition is fundamentally Bayesian, even when it breaks.

And if you start with 100% credence or 0% levels of disbelief, nothing anyone can do to you short of invasive neurosurgery (or maybe a shit ton of LSD) can change it.... What are the practical ramifications? Well, here, what Nelson is trying to argue is a waste of time. If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close

This is mistaken. There are two quantifiers in assertions about laws of nature: one might be called generality, which refers to the uniformity with which the law is believed to hold, and the other might be called confidence, which refers to the degree of belief that the law holds with the given generality. For example, if I say I firmly believe that at least 1% of crows are black, this statement would have high confidence and low generality -- whereas if I said it is plausible that at least 99% of crows are black, that statement would have lower confidence and higher generality. Nothing in any of my posts mentioned 100% confidence; my thesis is about nonzero confidence in 100% generality.
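The two quantifiers can be pulled apart numerically. Assuming a uniform prior purely for illustration, the posterior over the crow-blackness proportion after n all-black observations is Beta(n+1, 1), and "confidence in generality z" is the posterior mass at or above z. A Monte Carlo sketch (the function name and numbers are mine, not from the thread):

```python
import random

def confidence_in_generality(n_obs, n_black, z, trials=100_000, seed=0):
    """Estimate P(true proportion >= z) under a
    Beta(n_black + 1, n_obs - n_black + 1) posterior,
    i.e. a uniform prior updated on the observations."""
    rng = random.Random(seed)
    hits = sum(
        rng.betavariate(n_black + 1, n_obs - n_black + 1) >= z
        for _ in range(trials)
    )
    return hits / trials

# 1,000 crows observed, all black:
print(confidence_in_generality(1000, 1000, 0.99))  # very high confidence in 99% generality
print(confidence_in_generality(1000, 1000, 1.0))   # essentially zero confidence in 100% generality
```

Confidence in 99% generality is nearly certain after a thousand uniform observations, while confidence in 100% generality stays at zero: the posterior puts no mass on the exact point z = 1.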

Skip Popper. Get on the Bayes Boat, baby, it's all you need.

Funny thing: everybody loves Bayes rule; but they never state their priors. To that extent they never consciously use it. Nor is there any evidence that it models the unconscious process of real life rational cognition. The evidence to support that would need to be quantitative; not just "Hey I believed something, then I saw something, and I altered my degree of belief. Must have been using Bayes!"

Funny thing: everybody loves Bayes rule; but they never state their priors. To that extent they never consciously use it. Nor is there any evidence that it models the unconscious process of real life rational cognition. The evidence to support that would need to be quantitative; not just "Hey I believed something, then I saw something, and I altered my degree of belief. Must have been using Bayes!"

You evidently don't hang around LessWrong enough.

While Predictive Processing theory, which posits that human cognition is inherently Bayesian, has not been established to the point of being nigh incontrovertible, it elegantly explains many otherwise baffling things about human cognition, including how it breaks down in mental illnesses like depression, autism, OCD, and schizophrenia. I've linked to Scott on it before. I think it's more likely to be true than not, even if I can't say with a straight face that it's gospel truth. It is almost certainly incomplete.

In other words, humans are being imperfect Bayesians all the time, and you don't need to explicitly whip out the formula on encountering evidence to get by, but in situations where the expected value of doing so in a rigorous fashion is worth it, you should. The rest of the time, evolution has got you covered.

Besides, the best, most accurate superforecasters and people like quants absolutely pull it out and do explicit work. In their case, the effort really is worth it. You can't beat them without doing the same.

Besides, the best, most accurate superforecasters and people like quants absolutely pull it out and do explicit work. In their case, the effort really is worth it. You can't beat them without doing the same.

I know quants do this, but I think it is a special case. Show me a hundred randomly selected people who are making predictions they suffer consequences for getting wrong, and are succeeding, and I will show you maybe 10 (and I think that's generous) who are writing down priors and using Bayes' rule. Medical research, for example, uses parametric stats overwhelmingly more than Bayes (remember all those p-values you were tripping over?), as do the physical sciences.

If the effective altruism (EA) crowd are in the habit of regularly writing down priors (not just "there exist cases"), then I must be mistaken in the spirit of my descriptive claim that nobody writes them down. On the other hand, I would not count EA as people who pay the consequences of being wrong, or who are doing a demonstrably good job of anything. If they aren't doing controlled experiments (which would absolutely be possible in the domain of altruism), they are just navel-gazing -- and making it look like something else by throwing numbers around. I have a low opinion of EA in the first place; in fact, in the few cases where I looked at the details of the quantitative reasoning on sites like LessWrong, it was so amateurish that I wasn't sure whether to laugh or cry. So an appeal to the authority of LessWrong doesn't cut much ice with me.

I should give an example of this. Here is an EA article on the benefits of mosquito nets from Givewell.org. It is one of their leading projects. (https://www.givewell.org/international/technical/programs/insecticide-treated-nets#How_cost-effective_is_it). At a glance, to an untrained eye, it looks like an impressive, rigorous study. To a trained eye, the first thing that jumps out is that it is highly misleading. The talk about "averting deaths" would make an untrained reader think that they are counting the number of "lives saved". But this is not how experts think about "saving lives", and there is a good reason for it. Suppose that at 9 AM our project saves a certain child from a fatal incident; at 10 AM another; at 11 AM another; but at noon he dies from exactly the peril our program is designed to prevent. Yay, we just averted three deaths! That is the stat that Givewell is showing you. Did we save three lives? No, we saved three hours of life.

This is the way anyone with a smidgen of actuarial expertise thinks about "saving lives" -- in terms of saving days of life, not "averting deaths" -- and the Givewell and LessWrong people either know that or ought to know it. If they don't know it, they are incompetent; and if they know it, then talking about "averting deaths" in their public-facing literature is deliberately deceptive, because it strongly suggests "saving lives", meaning whole lives, in the mind of the average reader. To be fair to Givewell, their method of analyzing deaths averted applies to saving someone from malaria for a full year (not just an hour), but (1) that would not be apparent to a typical donor who is not versed in actuarial science, and (2) the fact remains that you could "avert the death" of the same person nine times while they still died of malaria (the peril the program is supposed to prevent) at the age of 10. The analysis and the language around it is either incompetent or deceptive -- contrary to one word or the other in the name of the endeavor, effective altruism.
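To make the arithmetic concrete, here is the toy scenario above as code, with every number hypothetical:

```python
# Hypothetical toy scenario: the same child's death from malaria is
# "averted" once a year for nine years, and the child then dies of
# malaria at age 10.
deaths_averted = 9       # the headline statistic a report can truthfully claim
life_years_saved = 9     # roughly: each aversion bought about one more year
whole_lives_saved = 0    # the child still died of the very peril targeted

# The headline metric and the actuarial metric (life-years saved)
# diverge completely once one person can be "saved" repeatedly.
print(deaths_averted, life_years_saved, whole_lives_saved)
```

The point of the sketch: "deaths averted" counts events, while actuarial accounting counts time, and nothing forces the two to track each other.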

That's not a cherry picked example; it was the first thing I saw in my first five minutes of investigating "effective altruism". It soured me and I didn't look much further, but maybe I'm mistaken. Maybe you can point me to some EA projects that are truly well reasoned, that are also on the top of the heap for the EA community.


While Predictive Processing theory, which posits that human cognition is inherently Bayesian,

I'm skeptical of this. I think predictive processing theory posits a model with certain qualitative features that Bayesian updating would also have, but there are scads of non-Bayesian approaches that would also have those qualitative properties. They would only look Bayesian from the point of view of someone who doesn't know any other theories of belief updating. Does PPT posit a model that has the quantitative properties of Bayesian updating in particular, and experimentally validate those? That would be a very interesting find. If you know of a source, I'd be curious to look at it.

More comments

If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close, and the fact that modern technology works is testament to the fact that we can be goddamn bloody confident in them.

How can we approach arbitrarily close? As stated, this does nothing to address Hume's argument against induction, which holds equally whether you are aiming for probability or for certainty, and it does not address the regress skeptical argument that every reason you can give is either based on something else or based on nothing, leading to infinite regress. I don't see how Bayesianism helps with this. Justification is not to be had, at any level of confidence or probability. Which is why you need Popper, who explained how you can maintain the use of logic and reason, and maintain truth as the aim of science, while also accepting Hume's and the skeptical arguments as correct and consequently discarding justification altogether.

Another issue Bayesianism often runs into is that many variants of Bayesianism give up on truth. I'm not interested in the confidence we can assign to a theory given our priors and the evidence; I'm interested in whether the theory in question is actually true. Even if we could be justified in Bayesian calculations of probability/confidence (which we can't be), this would tell us exactly nothing about whether this probable theory is actually true, which is what we are really interested in. There is no logical connection between probable truth and truth (just because something is probably true, it need not be true), and Bayesianism often focuses on subjective calculations of probable truth and abandons actual truth as the goal of science. But if Bayesianism aims at truth rather than solely at subjective calculations of confidence unmoored from reality -- if it is interested in what is true rather than just what we can be confident in -- it is in no better a position to provide justification than any other epistemology.

How can we approach arbitrarily close?

By amassing more evidence from observations and updating accordingly. Physics demands 5 sigmas of confidence in experimental results before accepting an experiment as valid. For most purposes, you can get away with a lot less.
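For reference, "5 sigmas" corresponds to a minuscule tail probability under a standard normal distribution. A quick sketch of the conversion (using the one-sided convention usually quoted in particle physics):

```python
# Convert an "N sigma" threshold into a one-sided tail probability
# P(Z > n_sigma) for Z ~ Normal(0, 1), using the error function.
from math import erf, sqrt

def one_sided_p_value(n_sigma: float) -> float:
    """One-sided p-value corresponding to an n-sigma deviation."""
    return 0.5 * (1 - erf(n_sigma / sqrt(2)))

print(one_sided_p_value(5))  # about 2.9e-7, roughly 1 in 3.5 million
print(one_sided_p_value(2))  # about 0.023 -- "a lot less" stringent
```

So the physics convention demands odds of roughly one in 3.5 million that the result arose by chance under the null, while everyday purposes often settle for two sigma or less.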

As stated, this does nothing to address Hume's argument against induction, which holds equally whether you are aiming for probability or for certainty, and does not address the retro skeptical argument that every reason you can give is either based on something else or based on nothing, leading to infinite regress.

Nobody has a solution to infinite regress, barring "I said so". As far as I can tell, you've got to start somewhere, and Bayesianism leads to more sensible decision theories and is clean and simple.

Another issue Bayesianism often runs into is that many variants of Bayesianism give up on truth - I'm not interested in the confidence we can assign to a theory given our priors and the evidence, I'm interested in whether the theory in question is actually true.

"The next sentence is false. The previous sentence is true." Good luck.

Given that English is an imprecise language, feel free to interpret my 99.9999% confidence that the Sun will rise tomorrow as being equivalent to "it's true the Sun will rise tomorrow".

But if Bayesianism aims at truth rather than solely at subjective calculations of confidence unmoored from reality, if it is interested in what is true rather than just what we can be confident in, it is in no better a position to provide justification than any other epistemology.

The universe we live in does not provide us the luxury of not being "subjective" observers. Bayesianism happens to be entirely cool with that.

Nobody has a solution to infinite regress, barring "I said so". As far as I can tell, you've got to start somewhere, and Bayesianism leads to more sensible decision theories and is clean and simple.

I have no problem with starting somewhere, but I don't claim our theories can ever be anything more than a guess, since, as you seem to have agreed, they are ultimately baseless due to infinite regress. In the context of this discussion on justification and the basis of science, I'm OK with a Bayesianism that only claims to be decision theory: a formalized account of how we try to temper our guesses by reason and experience, with no justification or basis ever being provided -- which is also the Popperian view of the epistemic status of science. Bayesianism would then be a methodology to help in our conjectural decision-making, but would never elevate our theories beyond the status of a guess, in the sense of giving them some sort of justification or basis. Do we disagree here?

Given that English is an imprecise language, feel free to interpret my 99.9999% confidence that the Sun will rise tomorrow as being equivalent to "it's true the Sun will rise tomorrow".

Ok, so if I'm understanding you right, you do care about the truth of your beliefs, not just about your confidence in them. So what's the logical relationship between your calculation of confidence in a theory and the truth of that theory? What is the epistemic benefit of confidence calculation, as opposed to a Popperian conjecture? It seems to me that if you are mistaken about the truth of the belief in question (as you would be with regard to the sun rising tomorrow if you went to, say, Iceland in winter), your high calculated confidence does nothing to mitigate your mistake. You are equally wrong as a Popperian who would just say he guessed wrong, despite your high confidence. And if the belief in question is true, it's just as true for the Popperian who only claims it to be a guess, regardless of confidence calculation. So what is the epistemic benefit of the confidence calculation?

To clarify a bit more, I see two questions we are discussing. First, whether Popper's falsificationist "logic of science" is a better description/methodology of science than Bayesianism. We can set that aside for now, as it is not the focus of the topic. The second question that's relevant to the topic at hand is whether you think Bayesianism can provide some sort of justification or rational basis for claims about the truth of our beliefs that elevates them to something more than a guess. We certainly seem to agree that we can temper our guesses using logic and reason and experience, but in the Popperian view all of this is still guesswork, and never elevates the epistemic status of a theory beyond that of a guess. So tell me if and where we disagree on this :)

More comments

Okay well in that case it's also hypocritical to criticize Cthulhu and Star Wars lore for not being literally true. Hooray, solipsism. This entire line of argument advances absolutely nothing.

If someone just jumped into this thread without reading the history, they might gather that I (or someone else) had criticized Cthulhu on the grounds of not being literally true. So for anyone who is jumping in in the middle, nothing of the sort happened.

Moreover, I would never detract from the merit of Shakespeare or Homer on the grounds that there is no evidence for the literal truth of their writings. Nor would I detract from the merit of a physics text on the grounds that there is no objective evidence that its contents are literally true. I do not think I am asking for special status for anything. I am arguing against a special status for the physical sciences, that I believe is widely attributed to them.

It essentially amounts to a theist's special request for their beliefs to be treated as intellectually serious even though they can't point to any justification... request denied until one of these arguments successfully and meaningfully distinguishes Christianity, theism, whatever, from an infinite number of bullshit things I could make up on the spot.

I agree that you should deny that request if somebody made it -- but I don't think I did (unless "whatever" casts a very wide net).

My thesis is that (1) if you hold nonzero confidence in the literal truth of a universal physical law, then you should be able to give reasons for your belief, and (2) the only rule of evidence I know of that would justify such a conclusion (abductive inference) -- and the one that is actually used in the physical sciences to establish credibility of physical theories -- rests on premises that are infinitesimally unlikely to hold in the absence of a miracle.

Tagging @marten too so I don't have to post twice.

Look, I'll be honest: If you're not playing some kind of game that amounts to wanting people to stop snorting when someone brings up god in an intellectual context? If this isn't the usual goofy theist sophistry and you're actually just parsing the differences between degrees of philosophical certainty that no one out in the world ever thinks about when making decisions?

Then I'll leave you to your hobby and continue to be puzzled as to the appeal. Back in the world where people make decisions, the fact that science does in fact produce functional results obliterates every other consideration anyway.

If you're not playing some kind of game that amounts to wanting people to stop snorting when someone brings up god in an intellectual context?

I'm glad you mentioned that. I am actually not interested in the reactions of people who scoff (or "snort") when someone brings up God in an intellectual context. The readers who interest me for this argument are people like political scientist Charles Murray and historian Tom Holland, who do not scoff, and who are even sympathetic to the idea, but who are not believers because they cannot find reasons to believe.

just parsing the differences between degrees of philosophical certainty that no one out in the world ever thinks about when making decisions?

My argument isn't about parsing degrees of certainty

Then I'll leave you to your hobby and continue to be puzzled as to the appeal. Back in the world where people make decisions, the fact that science does in fact produce functional results obliterates every other consideration anyway.

Look, I'll be honest:...

I'm glad you are being honest. In that same spirit, I think it is Philistine to separate the effort to reveal the true laws of nature from "the world where people make decisions". Science, conceived as the effort to reveal the laws of nature, involves making many decisions; I believe it is what many scientists perceive themselves as doing, and I believe it is a worthwhile pursuit for its own sake -- independently of its applications to such things as bread and circuses.

My argument isn't about parsing degrees of certainty

No? Because it sort of sounds like it to my Philistine ears.

Is this like a hypocrisy claim? That since science isn't literally true it would be hypocritical to criticize theism for not being literally true?

Yes, that's what I'm saying.

Except one of these things can produce consistent on-demand results that wouldn't be possible if its claims were false, while the other cannot. By any standard of truth-seeking that doesn't succumb to solipsism and ludicrously rule out observation of the world as a means of understanding it, the former is obviously much more true than the latter.

Ah but while science may contain observable truth, it doesn't meet Nelson Rushton's standard for being "the source code of the universe" and that's important... why exactly? Telling me you have a standard of truth under which apparently absolutely nothing is "literally true" isn't actually interesting.

The reason I keep thinking this is about getting atheists to stop snorting is because I can't think of any other purposes for this whole argument, charitable or otherwise. Like okay, nothing in the universe meets the Rushton Source Code Standard of Literal Truth. Neat, why should anyone care? What decision should anyone make differently now that they've heard this?