The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:
Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.
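In the standard textbook notation (with G the gravitational constant, m_1 and m_2 the two masses, and r the distance between their centers of mass), the law reads:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```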
Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:
- Do you believe Newton's Law of Universal Gravitation is true?
- If so, how sure are you that it is true?
- Why do you believe it, with that degree of certainty?
The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think have been done, to qualify as "extensive experimental verification"? I would ask that you, the reader, actually pick a number as a rough, round guess.
Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the conclusion that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
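To make that concrete, here is a minimal sketch (my own illustration, using nothing beyond the exact one-sided binomial bound) of what N uniformly positive observations actually license: a lower bound z on the proportion of positive cases that creeps toward 1 as N grows, but never reaches it.

```python
# Minimal sketch (illustration only): what N all-positive observations license.
# Exact one-sided binomial bound: if all N independent trials confirm the
# hypothesis, then with confidence c the true proportion of positive cases
# is at least z = (1 - c) ** (1 / N), which is strictly less than 1.

def lower_bound_on_positive_rate(n_observations: int, confidence: float) -> float:
    """Largest z we may assert at the given confidence after N straight positives."""
    alpha = 1.0 - confidence          # allowed probability of being wrong
    return alpha ** (1.0 / n_observations)

for n in (100, 10_000, 1_000_000):
    z = lower_bound_on_positive_rate(n, confidence=0.95)
    print(f"N = {n:>9,}: proportion of positive cases >= {z:.8f} (95% confidence)")

# No finite N ever yields z = 1: universality is never statistically certified.
```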
Let me repeat myself for clarity: I am not saying that there is no statistical law that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the law of gravity, Ohm's law, etc. -- are not based on statistical reasoning, and could never be based on statistical reasoning, if they are supposed, with any confidence whatsoever, to hold universally.
So, if the scientific method is not based on the laws of statistics, what is it based on? In fact, it is based on the
Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that has been exercised in attempting to disprove it.
In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).
Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?
For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely the right-hand side of the number line.
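To spell the analogy out in code (a toy, hypothetical program and spec invented for this sketch, nothing to do with MS Office), here is what weasel-grade random testing looks like: the hypothesis "this program meets its spec for all inputs" can survive a million trials while a single counterexample lurks undetected.

```python
import random

# Toy illustration: testing as attempted disconfirmation.
# Hypothesis under test: my_abs(x) == abs(x) for ALL integers x.
def my_abs(x: int) -> int:
    if x == -123_456_789:   # deliberately planted, rarely-hit bug
        return x
    return -x if x < 0 else x

def try_to_disconfirm(trials: int):
    """Random testing: hunt for a counterexample to the hypothesis."""
    for _ in range(trials):
        x = random.randint(-10**9, 10**9)
        if my_abs(x) != abs(x):
            return x        # hypothesis falsified by this input
    return None             # survived every attempt: corroborated, not proven

print(try_to_disconfirm(1_000_000))  # almost certainly None -- yet the bug is there
```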
In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard they can't be solved, not so easy that the players can't take pride in solving them.
There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.
Notes -
The difference I was trying to elucidate with the missile defense system example was a difference in the degree of confidence you would have between two theories A and B, both of which have been tested, neither of which has been disconfirmed, but one of which has been tested more thoroughly (or, for whatever reason, you have more confidence in). The crucial issue is a difference in degrees of confidence (or what Popper called degree of corroboration) between two hypotheses, neither of which has been falsified.
This is not the situation I was describing. In the hypothetical, the two laws are in different domains (gravity vs. quantum computing), possibly for different purposes (say, missile defense vs. airplane autopilot), and one is better established (or better corroborated) than the other.
Like I said, if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system. That's fine, but that is the part where I conjecture/guess at the truth. We don't disagree about my mental process; it's just that I think it's conjectural and not warranted by the evidence. The evidence can't tell me what to think, which bet to make, or which hypothesis to prefer; the evidence can only contradict a hypothesis and thus force me to reject it if I accept the evidence as true. Everything else is me making my best guess. I'm free to describe my mental state as "very confident" in that process, but that describes my state of mind, not the state of the evidence.
I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference? Like, some people like chocolate; some people like vanilla; different strokes? I assert that it is only rational to be more willing to act on a better-tested theory.
When did anybody ever have to accept a theory? By "have to" do you mean "rationally ought to"? If "rationally ought to" is what you mean, then, as I said, I disagree.
Questions of subjective/objective are always tricky, and I can answer this question on several different levels. Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory, as you say. Popper and I reject this. Theories (or beliefs in general) cannot be justified. At all. However, if we are interested in finding the truth (and this is also a subjective goal, one might be more interested in, say, propaganda), we should try to eliminate any erroneous beliefs that we have, and our tool for this is rational criticism and experiments. So we should try to deploy these tools as much as we can if we are interested in the truth, and we thus want our theories to be as severely tested as possible. No matter how well-tested, however, our theories remain conjectures tempered by rational criticism.
We are also not mandated by reason (in Popper's view of science) to prefer the better-tested theory. It's not the case that we rationally ought to accept the better tested theory. We could for example be super stoked about a poorly tested theory in preference to a better tested one - but the thing to do then is to try and come up with stronger tests of our preferred poorly tested theory, since in the search for truth we should try to test our theories as strongly as possible in order to eliminate error. This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor - we deploy rational evaluation and empirical experiments to the best of our ability in order to try to guess at the truth and eliminate errors, which we do not do in our ice cream preferences. This use of the rational method of criticism in the search for truth is what makes the difference and what makes our decision rational in the sense of using critical reasoning, although this provides no objective justification for our decision and it does not tell us what we rationally ought to believe.
There is a nuance to my position that this glosses over. In my view, scientific epistemology is not just a matter of ought vs. ought not; it is a matter of rationally obligatory degrees of preference for better-tested theories, on a continuum. However, when one theory is better tested than another on this continuum, and on some occasion we have to choose between the two, then we rationally ought to trust the better-tested theory on that occasion.
If I understand your position correctly, it is an awful lot like the preference among ice cream flavors. Let's say you have to choose from chocolate, vanilla, and strawberry -- but you know the strawberry is poisoned. So strawberry is not a viable choice, but the choice between chocolate and vanilla remains wholly subjective. Similarly (in your view as I understand it), when choosing among alternative theories to act on, the choice among those theories that have not been disconfirmed is a subjective preference as much as chocolate vs. vanilla.
For example, suppose a person has a choice between action A and action B, and that their goal in making that choice is to maximize the likelihood that they will continue living. Action A maximizes their chance of surviving if a certain viable (tested, not disconfirmed) theory is true, and B maximizes their chance of surviving if a certain other viable theory, in another domain, is true. They know one of those theories is substantially better confirmed than the other by every relevant criterion (say, the law of gravity vs. the most recent discovery in quantum computing). I say there is only one rational action in that scenario (trust the better tested theory). Do you say the same or different?
My position is that no actions or beliefs are "rational" in this sense, of being justified or mandated by reason. Actions or beliefs can be rational in the sense that we have deployed the method of rational criticism (and, if possible, empirical testing) in order to eliminate errors, with no justification/warrant/likelihood/etc. being involved at any point. So the contents of a belief don't determine its rationality (reason doesn't tell you what to believe), but the methods we have used in order to try to find errors in that belief can be rational. A choice can be rational if we've employed critical thinking in making it, and this is the only sense in which decisions can be rational, since justification is not possible.
In comparison to ice cream preference: yes, both are arbitrary in the sense that we have to judge for ourselves (we are the arbiters of) what to believe/which ice cream to like. But we generally don't employ critical discussion and experimentation in our ice cream choices, although we certainly can. Again, it's the methods of critical analysis and experimentation that are rational, and a decision can be made with deliberation and with the use of reason, in contrast to a preference for ice cream, which usually does not involve this. But the beliefs or actions themselves can never be rational in the sense of justified, warranted, mandated by reason, etc.
As for your "law of gravity vs. the most recent discovery in quantum computing" example, it's slightly confusing to me. Does option B that uses quantum computing go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but deciding whether to additionally use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.
In general, my view of the preference for the better-tested theory (and my reading of Popper's opinion here) is that this is soft, rule-of-thumb methodological advice, but not a "rationally ought" rule. Since we want to test our theories as severely as possible in order to hopefully eliminate error, all else being equal we should prefer the better-tested theory - but not in the sense of "rationally ought," rather in the sense of "let's test as much as possible." But all else is rarely equal, and "better tested" is not an exact calculation. So it's sort of like the advice "it's a good idea to castle your king in chess": yes, that's good advice, but it's not necessarily always the best choice, and you are not "irrational" for deciding not to castle. A clearer formulation of this advice has been advanced by Miller, Popper's former student, who formulates this stuff much more dryly than Popper but in a way more suited to the style of modern analytical philosophy (Out of Error, p. 124):
I meant something like this: the safety of A rests on the law of gravity but not the law of quantum computing; the safety of B rests on the law of quantum computing but not the law of gravity. To make the example a little more concrete (but science-fictional, requiring some suspension of disbelief), your choices are to take (1) a self-flying plane that is programmed with a model using the law of gravity, but no laws of quantum computing, and that has been operating safely for thirty years, or (2) the new teleporter -- which has been tested without disconfirmation, and has been proven safe contingent on the latest law of quantum computing, but not the law of gravity. Your goal in the selection is to maximize the probability of your survival.
Right. Well, I'd definitely be interested in testing the teleporter, but I wouldn't risk my safety in a first test of something, so I'd choose the plane, which I believe is safe (tentatively, as my best guess upon rational deliberation that produces no justification but may eliminate errors). Like I said, choices and beliefs can only be rational in the sense of using deliberation and reason to make our best guess, and are never rational in the sense of being justified, warranted, reliable, established, or anything of that sort, as this is not possible.
Remember, I stipulated in the hypothetical that the goal of the reasoner in the story is to maximize their probability of survival. The intent is not to ask what you would do; if curiosity trumps safety for you as an ultimate value, so be it. The question is, given that the reasoner's goal is to maximize his or her safety, would it be rational for them to take the teleporter?