
The scientific method rests on faith in God and Man.

The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:

Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.

Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:

  1. Do you believe Newton's Law of Universal Gravitation is true?
  2. If so, how sure are you that it is true?
  3. Why do you believe it, with that degree of certainty?

The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think would have been done to qualify as "extensive experimental verification"? I would ask that you, the reader, actually pick a number as a rough, round guess.

Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the claim that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
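
To make the point concrete, here is a minimal numeric sketch (my illustration, not part of the original argument), using a standard Bayesian treatment with a uniform prior over the true rate of positive cases: no finite run of confirmations ever pushes the predicted rate to 1, and the exact claim "the law holds without exception" carries zero posterior probability.

```python
# Illustrative sketch, assuming a uniform Beta(1, 1) prior over the
# true proportion p of positive cases. After N confirmations and no
# exceptions, Laplace's rule of succession gives the probability that
# the next observation also confirms: (N + 1) / (N + 2) < 1.
from fractions import Fraction

for n in (100, 10_000, 1_000_000):
    p_next = Fraction(n + 1, n + 2)
    print(f"N = {n:>9,}: P(next case confirms) = {float(p_next):.8f}")

# The posterior Beta(N + 1, 1) is a continuous density, so the exact
# point p = 1 ("the law holds universally") has zero probability mass
# for every finite N -- matching the claim that z can never be 1.
```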

Let me repeat myself for clarity: I am not merely saying that there is no statistical rule that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical rule that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the laws of gravity, Ohm's law, etc. -- are not based on statistical reasoning and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.

So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the

Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that have been exercised in attempting to disprove it.

In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).

Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?

For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and even that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely the right-hand side of the number line.
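
As a rough quantitative gloss on the testing analogy (my own back-of-the-envelope numbers, not the author's), consider how little of a realistic input space even heroic random testing covers:

```python
# Back-of-the-envelope sketch with assumed numbers: a single 64-byte
# input field already has 2**512 possible values; even a trillion
# random tests exercise a vanishing fraction of that space.
input_bits = 64 * 8          # one 64-byte input field
tests_run = 10**12           # a trillion random tests
fraction = tests_run / 2**input_bits
print(f"fraction of input space exercised: {fraction:.3e}")  # ~7.5e-143
```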

In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard they can't be solved, not so easy that the players can't take pride in solving them.

There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.


Bayesianism would then be a methodology to help in our conjectural decision-making, but would never elevate our theories beyond the status of a guess. Do we disagree here?

We don't. I only wish to emphasize that, since we have no reason to think anything else can, this is not grounds to knock Bayesianism.

So what's the logical relationship between your calculation of confidence in a theory and the truth of that theory?

Bayesian updates are, as I understand it, the optimal way to update on new evidence in the light of your existing priors, and with sufficient evidence, two Bayesians who start out with very different credences can converge to the same place. The Aumann Agreement Theorem proves that two perfect Bayesians who are able to fully share all their information and priors must converge to unanimity, though don't ask me how practical that happens to be.
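
As a toy sketch of that convergence claim (entirely my illustration; the coin, priors, and numbers are made up), two agents with wildly different starting credences update on the same shared flips and end up in nearly the same place:

```python
# Toy sketch: two hypotheses about a coin, "biased (p=0.8)" vs. "fair
# (p=0.5)". The coin really is biased; both agents apply Bayes' rule
# to the same observed flips despite very different priors.
import random

random.seed(0)
TRUE_P = 0.8

def update(prior_biased, heads):
    # Posterior odds = prior odds * likelihood ratio for this flip.
    like_biased = TRUE_P if heads else 1 - TRUE_P
    like_fair = 0.5
    numer = prior_biased * like_biased
    return numer / (numer + (1 - prior_biased) * like_fair)

credence_a, credence_b = 0.95, 0.05  # wildly different starting points
for _ in range(200):
    heads = random.random() < TRUE_P
    credence_a = update(credence_a, heads)
    credence_b = update(credence_b, heads)

print(f"A: {credence_a:.6f}  B: {credence_b:.6f}")  # both near 1.0
```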

Logical truths are just propositions we have very high confidence in, downstream of axioms or valid arguments in which we also vest high confidence. Is it possible, as in, is there a non-zero chance, that you or I could be hallucinating and imagining obvious truth where there is none? Well, the base rate of insanity or hallucination is greater than 1 in 8 billion. In practice, we can ignore this, and people who wish to function will summarily do so.

As I've demonstrated, the binary notion of truth you hold so dear is simply unattainable, so you should take what you can get in that regard.

Popperian falsifiability is simply dysfunctional when taken at face value. Recall those experiments that suggested neutrinos move faster than light? That is evidence that neutrinos move faster than light.

A serious Popperian would immediately give up on the idea that nothing can exceed the speed of light in a vacuum. A sensible Bayesian, which humans (and thus physicists) are naturally inclined to be, would note that this evidence is severely outweighed by all the other observations we have, and while adjusting very slightly in favor of it being possible to exceed the speed of light with a massive object, still sensibly choose to devote the majority of their energy to finding flaws in the experiment.

Which did in fact turn out to exist.

Bayesianism applied rigorously is simply far more robust than Popperian notions can be, and to the extent the latter was deemed to be workable, it was only by (pragmatically) ignoring minor bits of evidence against one's hypothesis.

Looks like we're on the same page on the overall epistemic status of scientific theories, namely that they are not justified by the evidence and always remain conjectural. That's not a knock against Bayesianism, I agree!

Bayesian updates are, as I understand it, the optimal way to update on new evidence in the light of your existing priors

The optimal way in order to do what? What would you say is the aim of science?

For me, it's the commonsense notion of truth as correspondence to reality. Of course, we cannot know or be justified in believing that our theories are true, but they can still be true guesses if they correspond to reality, and it is this reality that we are interested in.

You say that this binary notion of truth is unattainable, so what do you replace it with? Probability calculations? What do those achieve? What is their relationship to reality? There are many variants of Bayesianism, and they are often very fuzzy about this point, so I'm trying to pinpoint your position.

Popperian falsifiability is simply dysfunctional when taken at face value. Recall those experiments that suggested neutrinos move faster than light? That is evidence that neutrinos move faster than light.

A serious Popperian would immediately give up on the idea that nothing can exceed the speed of light in a vacuum. A sensible Bayesian, which humans (and thus physicists) are naturally inclined to be, would note that this evidence is severely outweighed by all the other observations we have, and while adjusting very slightly in favor of it being possible to exceed the speed of light with a massive object, still sensibly choose to devote the majority of their energy to finding flaws in the experiment.

This is not quite right. For a Popperian, accepting the results of a test is a conjecture just like anything else. We are searching for errors in our guesses by testing them against reality - if we suspect a test is wrong, we are very welcome to criticize the test, devise another test, etc. It is only if we accept a negative test result that we have to consider the theory being tested to be false, by plain old deductive logic, since it is contradicted by the test result. But a serious Popperian is quite capable of being suspicious of an experiment, and looking for flaws in it.

For me, it's the commonsense notion of truth as correspondence to reality. Of course, we cannot know or be justified in believing that our theories are true, but they can still be true guesses if they correspond to reality, and it is this reality that we are interested in.

You say that this binary notion of truth is unattainable, so what do you replace it with? Probability calculations? What do those achieve? What is their relationship to reality? There are many variants of Bayesianism, and they are often very fuzzy about this point, so I'm trying to pinpoint your position.

This might sound tautological, but it makes sense (to me). When I express high certainty in a hypothesis, that is tantamount to, and fully interchangeable with, me expressing that, in expectation, I do not expect to see evidence that contradicts the hypothesis.

When someone uses "true" in the usual sense, they are expressing that they have very minimal expectation of being contradicted by future observation (they might even think this is precisely zero, which is not true, except by fiat or axiom, but I'm not going to quibble). When truth is used in a logical or mathematical sense, it is assumed that each valid intermediate step starting from axioms loses so little probability mass that you can have almost the same level of certainty in the compound hypothesis. The loss is unavoidable, but once again, in practice we can ignore it.
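
A quick bit of arithmetic (my illustration, with made-up per-step numbers) shows how that loss compounds: negligible for ordinary derivations, ruinous only at absurd depths.

```python
# Toy arithmetic, assuming each valid derivation step independently
# retains 99.99% of the probability mass: a 1,000-step derivation
# still keeps roughly 90% of the certainty you started with.
per_step = 0.9999
for steps in (10, 100, 1_000, 100_000):
    print(f"{steps:>7} steps: compound certainty ~= {per_step ** steps:.4f}")
```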

You use truth in the sense that something is consistently and robustly reproducible in the observed environment or universe (or more abstract spaces). Sure. I have no beef with that.

Things can be true and truer. "Men are usually taller than women" can be true even if I pull out the Amazonian chick I just matched with on Bumble to humble you (well, she's only 6', she'd outstrip me in slippers let alone heels). But the more reliably men are taller than women, the truer this gets, and this is reflected in the probability of any given comparison between a random man and woman supporting it.

"Men are taller than women", when stated baldly and without regard to how real humans speak, can be taken to mean that there exists no woman taller than a man. In other words, the probability of finding a woman taller than a man is 0% (or 0+epsilon% if you want more rigor).

Probability and truth can be converted (I'm not sure if they're perfectly interconvertible, but when you say that 3>2 is true, there is a tiny chance you are actually wrong and crazy, or a gamma ray or electron tunneling has fucked up even the ECC memory in your computer, and this can never be eliminated, only reduced to tolerable levels such that with sufficient engineering you can keep on making such true statements till the Heat Death, which is good enough for me).

What is clear to me is that noise and error are unavoidable, and hence perfect credence is out of bounds for us poor computationally bounded entities. But we're used to fighting back.

This is not quite right. For a Popperian, accepting the results of a test is a conjecture just like anything else. We are searching for errors in our guesses by testing them against reality - if we suspect a test is wrong, we are very welcome to criticize the test, devise another test, etc. It is only if we accept a negative test result that have to consider the theory being tested to be false, by plain old deductive logic since it is contradicted by the test result. But a serious Popperian is quite capable of being suspicious of an experiment, and looking for flaws in it.

I am confused. What is the criterion that a Popperian uses to determine what error even is, without smuggling in a bit of Bayesianism?

A Bayesian would say that even an (in hindsight) erroneous result is still evidence. But with our previous priors, we would (hopefully) decide that the majority of the probability mass is in the result being in error rather than in the hypothesis.

The only criterion for something to count as evidence for a hypothesis is that it is more likely to be seen if the hypothesis is true, and vice versa.

In other words, how are you becoming suspicious without Bayesian reasoning (however subconscious it is)?
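
Here's a minimal sketch of that criterion in action, using the earlier neutrino example (all probabilities are invented for illustration): the anomalous result is evidence for faster-than-light travel, yet most of the posterior mass still lands on experimental error.

```python
# Hedged sketch with invented numbers: an anomalous timing result is
# evidence for FTL neutrinos (it's likelier if FTL is real), yet the
# posterior stays tiny, so hunting for experimental flaws is rational.
prior_ftl = 1e-9            # assumed prior that neutrinos exceed c
p_anom_given_ftl = 0.9      # chance of seeing the anomaly if FTL is real
p_anom_given_error = 1e-4   # chance of seeing it via mundane error

num = prior_ftl * p_anom_given_ftl
posterior_ftl = num / (num + (1 - prior_ftl) * p_anom_given_error)
print(f"P(FTL | anomaly) = {posterior_ftl:.2e}")  # ~9e-6: raised, still tiny
```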

I like this discussion, it feels like we're really doing our best to understand each other's position better, as it should be.

Let's start with the easy bit.

I am confused. What is the criterion that a Popperian uses to determine what error even is, without smuggling in a bit of Bayesianism?

A Bayesian would say that even an (in hindsight) erroneous result is still evidence. But with our previous priors, we would (hopefully) decide that the majority of the probability mass is in the result being in error rather than in the hypothesis.

The only criterion for something to count as evidence for a hypothesis is that it is more likely to be seen if the hypothesis is true, and vice versa.

In other words, how are you becoming suspicious without Bayesian reasoning (however subconscious it is)?

That's right! Bayesianism is, on one level, an attempt to formalize exactly this kind of thinking. I'd call it "critical discussion" or "conjecture" or just "reasoning", you can call it Bayesian reasoning, I'm totally fine with that. I think this process is messier than Bayesianism claims, but I also think that plenty of Popperian concepts can do with some formalizing, like "degree of corroboration", "state of the critical discussion", "severity of testing in light of our background knowledge", etc. So on this level I'd say Bayesianism is a worthwhile pursuit of formalizing this kind of thinking, and we can set aside the question of how well it does this for now. I say "on this level" because there are many variants of Bayesianism, and some claim Bayesianism does much more, such as Bayesianism providing a (partial) justification and rational basis for scientific theories instead of induction, which we have agreed it cannot do, i.e. it cannot elevate our theories beyond the status of a guess.

It is with regards to truth and the relationship of Bayesian calculations to external reality that I suspect we disagree, although I'm not quite sure about this. There's definitely still some stuff to clear up here.

You use truth in the sense that something is consistently and robustly reproducible in the observed environment or universe (or more abstract spaces). Sure. I have no beef with that.

No, this is definitely not the way in which I use truth. Like I said, I use truth in the sense of correspondence to reality. Our subjective expectations, reproducibility, and observations are totally irrelevant to truth. The only thing that matters with regard to the truth of a claim is whether our claim corresponds to reality, i.e. whether things really are the way we claim they are. So the claim that moving faster than light is not possible is true only if this corresponds to reality, i.e. if it is really, objectively, impossible to move faster than light in this universe. We can do all kinds of experiments on this, but the only thing that matters with regard to the truth of this statement is whether it is actually possible to move faster than light in reality or not. Of course, we cannot know what is true, but that does not stop our claim from being true if it does in fact correspond to reality, despite it being just a guess. The experiments are attempts to eliminate our errors in our guesses about reality. And while we can never be sure (or be partially justified) in thinking what we believe is true, it can still be true, objectively, if we have guessed right, if it really is impossible to move faster than light. So we cannot be justified (fully or partially) in thinking what we believe is true, but our theories can still be true, objectively, if they correspond to reality whether we know it or not.

In my view, this is what science is interested in: how the universe really works regardless of what we think and the evidence we have. And the best we can do in figuring this out is to guess, and temper our guesses by critical reasoning and empirical testing (which is also guesswork) in order to eliminate errors in our guesses.

Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, I'm not interested in our subjective calculations of probability (other than if they can perhaps help in the former).

Would this describe your position though? That we cannot rationally say anything about the external, objective reality, but can calculate our subjective expectations/probabilities using Bayesian reasoning? I'm not quite sure this is your position, although there have been hints that it is and it is common among Bayesians, which is why I was asking questions about the aim of science and the relationship between Bayesian calculations and reality/truth.

Probability and truth can be converted

And I suspect this is another point of disagreement - I don't think this is right. Having certain truth would be great - if we could be certain about something being true, we could infallibly classify it as true. Awesome! However, we have agreed that this is not possible. But say we can calculate something is probable using Bayesianism. This tells us exactly nothing about whether it is actually true or not, about whether reality really works that way. Certain truth implies truth, yes, but probable truth does not imply truth. Like I said, if you are mistaken about the truth of the belief in question, i.e. if reality does not work that way, your belief being probable does nothing to mitigate your mistake. You still believe something false. If moving faster than light is possible in reality, you are wrong in believing it is not, no matter how probable your belief may be. You are equally wrong as a Popperian who would just say he guessed wrong, despite your probability calculation. And if the belief in question is true, if reality is such that moving faster than light is not possible, this belief is just as true for the Popperian who only claims it to be a guess and does not reference probability, regardless of whether your belief is probable or not according to Bayesianism. I see absolutely no logical connection, i.e. possibility of conversion, between probability and truth. Reality does not care about our probability calculations.

To sum up, I have no issues with Bayesianism as a theory of decision-making. I see science as guesswork tempered by reason and experience, aiming at objectively true theories about reality - although we can never know or be (partially or fully) justified in believing them to be true: we have to guess, and eliminate our errors by experiments, which are also guesswork. I think we may disagree with regard to truth and the aim of science, but I'm not totally clear on your position here. I also think we may disagree on the connection between probability and truth/external reality.

So on this level I'd say Bayesianism is a worthwhile pursuit of formalizing this kind of thinking, and we can set aside the question of how well it does this for now. I say "on this level" because there are many variants of Bayesianism, and some claim Bayesianism does much more, such as Bayesianism providing a (partial) justification and rational basis for scientific theories instead of induction, which we have agreed it cannot do, i.e. it cannot elevate our theories beyond the status of a guess.

I certainly agree that Bayesianism can't solve the problem of regress or induction. But since nothing can, and we have made absolutely no progress in solving it (despite many very intelligent people trying), my strong suspicion is that it's an impossible problem to solve, and so we might as well just make do.

Can you call everything a guess? Yes.

Yet guesses are certainly not made equal. We can be much more certain of some guesses than others, and while we can never get to perfect certainty in anything, we can often get close enough for government work, or even to establish extremely powerful laws of physics.

When a guess becomes so strong that we would be utterly flabbergasted if we suddenly saw evidence it was false, then we can dispense with the formalities and call it a fact, or true.

Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, I'm not interested in our subjective calculations of probability (other than if they can perhaps help in the former).

My stance is that this isn't a form of unwarranted cowardice or retreat, but a pragmatic acknowledgement that we are subjective observers forced to grapple with evidence acquired through sensoria. There is no scope for an actual bird's eye view, we must trudge through the territory and map it as we go.

Our probability theories break down utterly, or become garbled, when confronted with things like the possibility of the Simulation Hypothesis being true, Boltzmann brains and so on. I can't rule out with any real degree of confidence that this is a simulation, and the laws of physics as we best understand them imply Boltzmann brains are inevitable, let alone the implications if the Universe (and not just the observable part) is truly infinite, at which point every possible arrangement of matter and energy will be forced to tile over and repeat somewhere.

Or I could be crazy. I could be making mistakes.

We cannot eliminate this, and on top of that, we are computationally bounded entities.

Could this be a simulation? Could I be a Boltzmann brain about to vanish into a fizzle? Am I crazy? Regardless of my credence in any of them, I simply choose to operate as if it's not true, since that is the course of action with the highest payoff if they're not true, and if they are, nothing I do matters (we can't even usefully speculate as to what a potential simulation is for, or if the laws of physics at that level correspond to ours).

Thus, if objective truth exists, we can never know it with perfect certainty (as always, without assuming the conclusion in advance), and science can only constrain our expectations and improve our ability to model things at the level of our observations of our environment.

Thus, is science useful? Absolutely. It can never provide perfect certainty, but we can certainly operate without that, and in practice we do. We can converge ever closer to 100% confidence that our map faithfully represents the territory, and that's good enough for me.

Reality does not care about our probability calculations.

And yet we are not Reality itself, or able to stand above it. Hence we should care about our probability calculations.

The closest we have to perfect certainty is in things like logic and math, but those do not actually exist in a vacuum to be poked and prodded, they're instantiated within our brains and other computers, and thus even what you (and I, with usual caveats that I omit when the discussion does not go this deep) might consider self evident truths, like 2=2 being true, could be the product of error in our cognition. Let's just be glad that we think this is extremely, extremely unlikely, and carry on with our lives nonetheless.

Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, I'm not interested in our subjective calculations of probability (other than if they can perhaps help in the former).

My stance is that this isn't a form of unwarranted cowardice or retreat, but a pragmatic acknowledgement that we are subjective observers forced to grapple with evidence acquired through sensoria. There is no scope for an actual bird's eye view, we must trudge through the territory and map it as we go.

I think this is our first central disagreement, then, if you've accepted what I've said is for me an "unacceptable" retreat from talking about objective reality. In my view, yes, anything we say about objective reality is guesswork, as we have agreed, but it is this objective reality that interests us - we are trying to figure out the bird's eye view, although we can never be sure (or partially justified) that we have guessed it correctly. While you, if I'm reading you right, confine yourself to talking about our subjective expectations/calculations of probability.

Thus, is science useful? Absolutely. It can never provide perfect certainty, but we can certainly operate without that, and in practice we do. We can converge ever closer to 100% confidence that our map faithfully represents the territory, and that's good enough for me.

But what does this confidence achieve - what is its relation to external reality? Our subjective calculation of probability and our confidence do not imply the objective truth of the belief in question - there is no logical connection between the two. I'm interested in whether what I believe is objectively true - whether the map actually does match the territory despite being just a guess - and not in our subjective calculation of probability, which tells us nothing about actual, objective truth. I fail to see how to convert claims of subjectively probable truth into claims about objective truth, in other words.

EDIT: Let me put it this way: A Popperian would say "I believe this is the objective truth, but I am only guessing - help me by criticizing my guess and testing it as best we can so we can reject it if it is objectively false, although this rejection will also be just a guess." A Bayesian would say "Based on the supporting evidence and my priors, I calculate a high probability for this hypothesis". At that point, they will either say nothing about the objective truth of that belief, which for me is an unacceptable retreat from talking about objective truth and reality, or they will say "therefore, this belief is objectively true". In the latter case, it is this "therefore" that I object to - I don't think it holds as it then runs into Humean objections, and thus the Bayesian calculation has not added anything to the Popperian's claim.

And this is our second major disagreement, I believe - Popper and I think that the role of evidence is to contradict scientific theories, while you think its role is to support them with regard to our subjective probability calculations. I fail to see the connection between these subjective probability calculations and the external, objective reality which I am interested in. I'm not interested in maximizing our subjective probability, or maximizing our expected utility. I'm interested in correctly guessing the objective truth, and the actual utility of our beliefs rather than their expected utility. In this, evidence plays a negative role, in my view, i.e. one of assisting in error elimination (which is of course also guesswork). Positive evidence does nothing to adjust our guesses, but negative evidence does adjust them by contradicting and thus falsifying them (if we accept the evidence as true, of course - it may well be that the error resides in a flaw in the experiment as in the neutrino example, rather than in the theory being tested; this is also guesswork).

Overall though, I think we are in agreement about quite a lot. Science is guesswork, rational in the sense of using reason and experience, but not justified or rational in the sense of being rationally established/warranted/justified, whether partially or fully. Where we disagree is that you confine yourself to talking about our subjective calculations of probability, whereas I explicitly aim at objective truth through guesswork. You think the role of evidence in this is positive - it supports the theory in question, thus raising its probability. I think this kind of support has no bearing on the objective truth I am interested in and think the role of evidence is negative - it can contradict our theories, and thus lead us (hopefully correctly, but with no guarantees, partial or otherwise) to reject those that are objectively false, retaining those that are objectively true.

If there's anything left to clarify, it may be your position on converting claims about the high Bayesian probability of a belief into claims about the objective truth of that belief, where I fail to see any connection between the two. My position is that we simply make conjectural claims about the objective truth and try to eliminate errors in them, all of which is guesswork, with the role of evidence being only to possibly contradict our guesses. In this view, Bayesian calculations may be correct under Bayesian rules, but they are superfluous in our search for objective truth, other than perhaps as an attempt at formalizing our reasoning - where they, in my opinion of course, miss the mark by failing to talk about the external objective reality we are interested in, instead focusing on our subjective confidence.

Is this a fair summary of our positions? Feel free to correct me where I have misunderstood you. If I have understood you well enough, this might be a decent place to stop, and I'll let you have the last word. Either way, I've thoroughly enjoyed our talk, and learned a lot about Bayesianism and reaching mutual understanding with a Bayesian, clarifying my understanding of my own position in the process as well. I'm of course open to further questions or criticism from you or readers like @marten if you have any.

See, the main crux of it is that if there's an objective reality out there (and there are a lot of very complex ideas hiding in those two simple words), we are simply powerless to directly perceive it.

In practice, I choose to act as if there is one. I have niggling doubts in my mind that the world as we see it could be fake, but I will throw up my hands if asked how likely that is to be the case. Our decision theories really don't like infinities, and we need better ones if they're even possible.

So I certainly act as if my observations of reality or the advancements of science are evidence that my subjective reality aligns with (hypothetical) objective reality. Given that I don't have the mental bandwidth to hold that many layers of abstraction in my head at once all the time (and the expected return on doing so is minimal), I will happily say that Science provides "objective knowledge about the world" in the sense that it helps us establish or discover facts that are independent of observers (for all that I can't actually put myself in the shoes of another person and check).

In this view, Bayesian calculations may be correct under Bayesian rules, but they are superfluous in our search for objective truth, other than perhaps as an attempt at formalizing our reasoning - where they, in my opinion of course, miss the mark by failing to talk about the external objective reality we are interested in, instead focusing on our subjective confidence.

You take it for granted that an objective reality exists. Which is fine, but I have more subtle theoretical concerns that prevent me from accepting that as axiomatic. I still think, as I've said, that this is potentially true and more importantly I can't show otherwise, so our behaviors do not diverge until the point the Creators/God show up or the butterfly stops dreaming of the human. I don't think those are likely to happen, so we act the same way and trust in empirical analysis in everything but metaphysical debate on a niche online underwater basket weaving forum.

But what this does mean is that I vigorously disagree that this is a failure of Bayesianism. We simply can't do better without taking as axiomatic that our observations are observations of an objective reality.* If you wish to do so, well you won't be harmed by it, but I act the same as you do without holding that belief.

I appreciate the nuanced debate, it's a shame it's buried so deep and we can't get more people into it, but feel free to ping anyone you think might have a useful opinion to offer.

*In the same vein that a theist might claim that Science is flawed because it cannot prove the existence of God, which becomes a pointless criticism for someone who finds the latter's existence dubious.

I'll do another reply since I think we're still talking past each other a bit.

And yeah, it's a shame our talk is buried so deep nobody is likely to read it :D Still, I found it really fun and useful!

First, let me say I don't take it for granted that objective reality exists - I believe it does, which is a conjecture like anything else, and open to criticism and revision. Objective truth, however, would exist even if there is no objective reality: in that case, the statement "there is no objective reality" would be objectively true, and this is what I would like to believe if it is true. Popperianism (or, as it's less cultishly called, critical rationalism) requires no assumptions that are not in principle open to critical discussion and rejection, which is in this view the main method of rational inquiry.

And, if I haven't made it clear enough, I'm actually a big fan of Bayesianism. If I weren't a Popperian, I'd be a Bayesian! I'd even say it could add a lot to Popperianism: although I think the basic Popperian picture of rational inquiry is correct, the formalization of the process of critical analysis that Bayesianism could add to Popperianism could definitely be useful (although I'm not smart enough and too confused by the many, many variants of Bayesianism to attempt such a project myself). Overall though, some variants of Bayesianism, yours I believe included, are right about almost everything important to Popperians, especially the central point: accepting the skeptical and Humean objections to rational justification, while retaining the use of reason and evidence as the method of science. Popperians would add "and objective truth as the aim of science", on which I'm still not quite sure where you stand. The main disagreement, as I see it, is on the role of evidence, which is negative for Popper - evidence can only contradict theories - and positive for Bayesians - evidence can support theories, raising their subjective probability.

I think the discussion of whether objective reality exists and whether we can be certain of it is a bit of a sidetrack here - I completely agree with everything you said on it: we can never have direct access to objective reality (Popper would say that all our observations are "theory-laden"), and we cannot be sure that it exists, and I'm not saying I require you to demonstrate that it does to practice Bayesianism. My main point is that Bayesian calculations are unmoored from objective reality (say nothing about it), unless you smuggle in additional induction-like assumptions that allow you to make inferences from Bayesian calculations to objective truth, in which case you run into Humean objections. And this is where I'm still uncertain of your position. You say:

So I certainly act as if my observations of reality or the advancements of science are evidence that my subjective reality aligns with (hypothetical) objective reality.

But do you think your observations are evidence that your subjective reality aligns with objective reality? If yes, how does this relationship work, and how does it avoid Humean objections? If no, like I said, that'd be for me an unacceptable retreat from talking about what we are actually interested in, namely objective truth, not subjective probability. We can agree to disagree on that, that's not a problem, but I'm not totally clear what your position is on this, given that you have said things like the quote above, but have also talked about being able to convert subjective probability into truth. I'd like to understand how you think this works, from a logical standpoint. Or is it perhaps that your position is something analogous to Hume's solution to the problem of induction (which I also disagree with) - namely that we act as if induction is rational although we are irrational in doing so, for we have no other choice? This would be saying that while strictly speaking Bayesian calculations have no direct relationship to objective truth, we act as if they do. This would be what I gather from the above quote, but you've also talked about probability-to-truth conversion, so I'm still unclear on that point.

Let me attempt an analogy using the map and territory metaphor to describe how I see our positions. It's a spur-of-the-moment thing, so I apologize in advance if it misses the mark, but in that case you explaining how it does so will likely be illuminating for me.

So we are blind men in a maze (the "territory"), and trying to map it out. We are blind because we can never directly see the maze, let alone get a bird's eye view of it. Now many people, the majority even, think that we are not blind and convince themselves that they can see the maze (that we can have justified true beliefs directly about objective reality). You and I agree that this is not possible, that our mapping of the maze is ultimately guesswork. We can't be sure there even is a maze! But we're trying to figure out how to act and what to believe. Now I think the best way to go about mapping the maze is to propose conjectures on the layout of various parts of the maze (i.e. scientific hypotheses), which will always be guesswork, and then test them out: if this map section I've proposed is correct, for instance, we should be able to walk 36 steps in this direction, and then turn left. If I attempt this and run into a wall, then my proposed map section isn't right - I gotta reject it (the hypothesis is falsified). Of course, I might have miscounted the steps, a wall may have collapsed, or any number of things might have messed up my experiment - like in the neutrino example, the experiment might be wrong, and falsification is guesswork too. But this is the role played by evidence: attempting to walk the maze, i.e. confronting our hypotheses with reality, and seeing where they clash, albeit blindly and without any justification. If my conjectural map seems to work out, if it passes the test, this says nothing additional about it corresponding to the maze. Evidence is used to contradict our guesses, not support them, in my view.

And this is where we start to disagree. You think that every step you take that doesn't contradict your proposed map (all supporting evidence for the hypothesis) raises your subjective probability/expected utility/confidence in your proposed map of the labyrinth. To which I say ok, your confidence is increased by Bayesian calculation, but what does that tell us about the labyrinth? To me it seems you are calculating your confidence in the map, but it's the labyrinth we are interested in, and I'm not sure if and how you translate your confidence in the map into claims about the labyrinth. I just directly make claims about the labyrinth, which are guesses, and my subjective confidence in them is irrelevant - the correspondence of my guesses to the labyrinth is what matters and what I'm trying to guess correctly. If you don't claim anything about the labyrinth at all and are only talking about your confidence in the map, then I think you're missing the mark - it's the labyrinth that we are interested in.