The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:
Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.
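In the familiar symbolic form (adding the standard notation, which is not part of the statement above):

$$F = G\,\frac{m_1 m_2}{r^2}$$

where $m_1$ and $m_2$ are the two masses, $r$ is the distance between their centers of mass, and $G$ is the gravitational constant.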
Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:
- Do you believe Newton's Law of Universal Gravitation is true?
- If so, how sure are you that it is true?
- Why do you believe it, with that degree of certainty?
The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think would have to be done to qualify as "extensive experimental verification"? I would ask that you, the reader, actually pick a number as a rough, round guess.
Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your statistics book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is at least z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the conclusion that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
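To make this concrete, here is the kind of calculation your stats book does license (a standard illustration, sometimes called the "rule of three"; the numbers are mine). Suppose all N experiments confirm the law, and let p be the true proportion of counterexamples. Then

$$\Pr(N \text{ uniform confirmations} \mid p) = (1-p)^N,$$

so the one-sided upper confidence bound on p at confidence c is the value satisfying $(1-p)^N = 1 - c$, that is, $p = 1 - (1-c)^{1/N} \approx \frac{-\ln(1-c)}{N} \approx \frac{3}{N}$ for c = 0.95. For every finite N that bound is strictly positive: the data never license the conclusion p = 0 (equivalently, z = 1).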
Let me repeat myself for clarity: I am not merely saying that there is no statistical rule that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical rule that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the laws of gravity, Ohm's law, etc. -- are not based on statistical reasoning, and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.
So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the
Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that has been exercised in attempting to disprove it.
In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).
Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?
For an illuminating analogy, consider that software testing is a process of experimentation closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now, do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and even that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely, the right-hand side of the number line.
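To see how hopeless random probing is against even a single planted defect, here is a minimal sketch in Python (my own illustration, not anyone's actual test harness; the defect and the input range are made up):

```python
import random

def buggy_abs(x: int) -> int:
    """Absolute value with one deliberately planted defect (hypothetical)."""
    if x == -2**31:
        return x  # the single bad input: the result stays negative
    return abs(x)

def weasel_test(trials: int) -> bool:
    """Random ('weasel on the keyboard') testing of the universal claim
    'buggy_abs(x) >= 0 for all x'. Returns True iff a counterexample is found."""
    for _ in range(trials):
        x = random.randint(-2**31, 2**31 - 1)
        if buggy_abs(x) < 0:
            return True
    return False

# A million uniform confirming observations, yet the chance of having hit the
# one bad input among 2**32 is only about 1e6 / 2**32, i.e. roughly 0.02%.
# The universal hypothesis is false, and random probing almost never learns it.
print(weasel_test(1_000_000))
```

And that is for one planted bug in one trivially simple function, probed by a tester who at least knows the input space.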
In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, that would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard that they can't be solved, not so easy that the players can't take pride in solving them.
There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again, under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.
Notes -
What humans are doing is Bayesian reasoning, at least if you subscribe to the Predictive Processing model of cognition, which I (and Scott, amongst others) tentatively endorse.
People, in general, are perfectly capable of modifying their beliefs in a Bayesian manner without remotely holding the idealized version of Bayes' rule in their skulls, at least above the level of the groups of neurons for which it comes naturally.
In physics, it is implicit. You will find all kinds of mentions of how unlikely certain varieties of observations would be were it not for model X, which pre-emptively expects them to be the case (or at least doesn't conflict with them in hindsight).
You do not need them to write it out, any more than they need to invoke the tenets of ZFC when they add two numbers together. But then again:
What is a p-value? It's not a urine dipstick test, I can tell you. Still can't read most papers without tripping over one. Don't ask me if they're using it in frequentist or Bayesian terms, but there's a conditional probability for you.
(Now, I can assure you that while Bayesian reasoning is not foreign to medical doctors, at least the ones who do actual research, the majority of doctors on the ground would recoil from that simple equation, or simply be confused at first sight. Doesn't mean they aren't using it, either explicitly or implicitly. The same goes for physics.)
It is not generally true that "you can't read most papers without tripping over one [p-value]". There is a grain of truth to this in medicine and the social sciences, but not in the physical sciences. More importantly, I think the duality you are looking for is parametric vs. Bayesian, not frequentist vs. Bayesian. The tool of p-values is part of parametric statistics, which is the main alternative to Bayesian statistics. If you see a paper with a p-value, it means they are not using Bayesian updating -- so the thing you keep tripping over is evidence against your thesis.
Finally, p-values, while they may not be urine dipsticks, are also not conditional probabilities. A conditional probability is the probability of A given B, where A and B are events in a probability space. A p-value, on the other hand, is the probability of an event A in a probability space whose distribution is indexed by a fixed-but-unknown parameter; since the parameter is not itself an event with a probability distribution, this is not a conditional probability. That is why parametric statistics does not use Bayes' rule.
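To make the contrast concrete, here is a minimal sketch (my own illustration, assuming SciPy; the data are made up) of the two quantities computed from the same observations:

```python
from scipy.stats import binom, beta

k, n = 60, 100  # say, 60 successes in 100 trials (hypothetical data)

# Parametric p-value: the probability of data at least this extreme,
# computed under an assumed value theta = 0.5 of the fixed-but-unknown
# parameter. No prior over theta, no Bayes' rule anywhere.
p_value = 2 * binom.sf(k - 1, n, 0.5)  # two-sided, doubling the upper tail

# Bayesian posterior: treat theta as a random variable with a uniform
# Beta(1, 1) prior; the posterior is Beta(1 + k, 1 + n - k), and we can
# ask for a genuine conditional probability P(theta > 0.5 | data).
posterior = beta.sf(0.5, 1 + k, 1 + (n - k))

print(f"p-value under theta = 0.5:       {p_value:.4f}")
print(f"P(theta > 0.5 | data), Bayesian: {posterior:.4f}")
```

The two numbers answer different questions, and only the second is a conditional probability over the parameter.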
Correct me if I am wrong, self_made_human, but it seems to me that the unstated premise of your position is this: if someone holds an uncertain belief, and then they see something, and they revise their degree of certainty based on what they saw, and if they are acting rationally, then they must be doing Bayesian updating. Do you affirm that?
I think a lot of people fall into the trap of thinking probabilities are the only rational way of representing and reasoning with uncertain information because, unless they take an AI class, it is the only method covered in a typical undergraduate curriculum. This leaves them with the impression that "probability" means degree of belief, that "probability theory" means the logic of reasoning about degrees of belief, and that the problem of the right way to do such reasoning has been settled. If all of that were true, and if Bayes' rule were the only way to update beliefs using probability theory, then the unstated premise above would be correct. The problems are that (1) none of that is true, and (2) even when we use probability theory to update our beliefs, we are not always using Bayes' rule.
Probability theory is actually a specific set of axioms that constitutes one particular way of reasoning about degrees of belief. There are well-developed alternatives to probability theory -- including certainty factors (as used in Mycin: https://en.wikipedia.org/wiki/Mycin), Dempster-Shafer evidence theory, backpropagation (as used in large language models such as ChatGPT), and many others, which are often more effective than probability theory for particular applications -- none of which use Bayes' formula or can even be incidentally described by it. Moreover, even among belief-updating methods that do use probability theory, the most frequently used approach in the scientific literature is parametric statistics -- which (as I point out in a separate reply) does not use Bayesian updating.
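For instance, here is a minimal sketch (my own illustration) of the Mycin-style certainty-factor combination rule, an update rule that involves no prior, no likelihood, and no Bayes' formula:

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Mycin-style combination of two certainty factors, each in [-1, 1].

    The combined values need not obey the probability axioms -- this is a
    different calculus of belief altogether, not a disguised Bayes update.
    """
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two independent pieces of confirming evidence, each with CF 0.6,
# combine to 0.84 -- belief strengthens with evidence, non-Bayesianly.
print(combine_cf(0.6, 0.6))  # 0.84
```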
If you claim that physicists, for example, routinely use Bayesian updating, and you claim to hold that belief for a good reason, then you should be able to give evidence that they are thinking in terms of conditional probabilities (satisfying the axioms of probability) and updating them by Bayes' equation -- which is a much more specific claim than that they merely change their degrees of belief after making observations in an effective manner.