The so-called "scientific method" is, I think, rather poorly understood. For example, let us consider one of the best-known laws of nature, often simply referred to as the Law of Gravity:
Newton's Law of Universal Gravitation: Every object in the universe attracts every other object toward it with a force proportional to the product of their masses, divided by the square of the distance between their centers of mass.
Now here is a series of questions for you, which I often ask audiences when I give lectures on the philosophy of science:
- Do you believe Newton's Law of Universal Gravitation is true?
- If so, how sure are you that it is true?
- Why do you believe it, with that degree of certainty?
The most common answers to these questions are "yes", "very sure", and "because it has been extensively experimentally verified." Those answers sound reasonable to any child of the Enlightenment -- but I submit, on the contrary, that this set of answers has no objective basis whatsoever. To begin with, let us ask: how many confirming experiments do you think would have to have been done to qualify as "extensive experimental verification"? I ask that you, the reader, actually pick a number as a rough, round guess.
Whatever number N you picked, I now challenge you to state the rule of inference that allows you to conclude, from N uniform observations, that a given effect always follows from a given alleged cause. If you dust off your stats book and thumb through it, you will find no such rule of inference there. What you will find are principles that allow you to conclude, from a certain number N of observations, that with confidence c the proportion of positive cases is z, where c < 1 and z < 1. But there is no finite number of observations that would justify, with any nonzero confidence, the conclusion that a law holds universally, without exception (that is, z can never be 1 for any finite number of observations, no matter how small the desired confidence c is, unless c = 0). And isn't that exactly what laws of nature are supposed to do? For Pete's sake, it is called the law of universal gravitation, and it begins with the universal quantifier every (both of which may have seemed pretty innocuous up until now).
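To make the arithmetic concrete, here is a small sketch (in Python, with illustrative numbers of my choosing): if all N of your N observations confirm the law, the exact one-sided lower confidence bound on the true proportion z at confidence c is (1 - c)^(1/N), and that quantity is strictly less than 1 for every finite N.

```python
def lower_confidence_bound(n, c):
    """Exact (Clopper-Pearson) one-sided lower bound on the proportion z
    of positive cases, after observing n confirmations in n trials,
    at confidence level c. Solves z**n = 1 - c for z."""
    return (1 - c) ** (1 / n)

# No matter how many uniform observations we accumulate, the bound
# creeps toward 1 but never reaches it.
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: with 95% confidence, z > {lower_confidence_bound(n, 0.95):.8f}")
```

The bound rises with N, which is why statistics happily licenses claims like "z > 0.9999" -- but z = 1, the universal claim, stays forever out of reach.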
Let me repeat myself for clarity: I am not merely saying that there is no statistical law that would allow you to conclude the law with absolute certainty; absolute certainty is not even on the table. I am saying that there is no statistical law that would justify belief in the law of universal gravitation with even one tenth of one percent of one percent confidence, based on any finite number of observations. My point is that the laws of the physical sciences -- laws like the ideal gas law, the laws of gravity, Ohm's law, etc. -- are not based on statistical reasoning, and could never be based on statistical reasoning if they are supposed, with any confidence whatsoever, to hold universally.
So, if the scientific method is not based on the laws of statistics, what is it based on? In fact it is based on the
Principle of Abductive Inference: Given a general principle as a hypothesis, if we have tried to experimentally disprove the hypothesis, with no disconfirming experiments, then we may infer that it is likely to be true -- with confidence justified by the ingenuity and diligence that has been exercised in attempting to disprove it.
In layman's terms, if we have tried to find and/or manufacture counterexamples to a hypothesis, extensively and cleverly, and found none, then we should be surprised if we then find a counterexample by accident. That is the essence of the scientific method that underpins most of the corpus of the physical sciences. Note that it is not statistical in nature. The methods of statistics are very different, in that they rest on theorems that justify confidence in those methods, under assumptions corresponding to the premises of the theorems. There is no such theorem for the Principle of Abductive Inference -- nor will there ever be, because, in fact, for reasons I will explain below, it is a miracle that the scientific method works (if it works).
Why would it take a miracle for the scientific method to work? Remember that the confidence with which we are entitled to infer a natural law is a function of the capability and diligence we have exercised in trying to disprove it. Thus, to conclude a general law with some moderate degree of confidence (say, 75%), we must have done due diligence in trying to disprove it, to the degree necessary to justify that level of confidence, given the complexity of the system under study. But what in the world entitles us to think that the source code of the universe is so neat and simple, and its human denizens so smart, that we are capable of the diligence that is due?
For an illuminating analogy, consider that software testing is a process of experimentation that is closely analogous to scientific experimentation. In the case of software testing, the hypothesis being tested -- the general law that we are attempting to disconfirm -- is that a given program satisfies its specification for all inputs. Now do you suppose that we could effectively debug Microsoft Office, or gain justified confidence in its correctness with respect to one item of its specification, by letting a weasel crawl around on the keyboard while the software is running, and observing the results? Of course not: the program is far too complex, its behavior too nuanced, and the weasel too dimwitted (no offense to weasels) for that. Now, do you expect the source code of the Universe itself to be simpler and friendlier to the human brain than the source code of MS Office is to the brain of a weasel? That would be a miraculous thing to expect, for the following reason: a priori, the complexity of that source code could be arbitrarily large. It could be a googolplex lines of spaghetti code -- and even that would be an infinitesimally small level of complexity, given the realm of possible complexities -- namely, the right-hand side of the number line.
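To put rough numbers on the weasel's predicament (a back-of-the-envelope sketch; all figures here are hypothetical): suppose the program under test takes just a pair of 64-bit inputs, and the weasel somehow manages a billion random trials per second for ten years.

```python
# Hypothetical figures for a back-of-the-envelope estimate.
input_space = 2 ** 128                      # a mere pair of 64-bit inputs
trials = 10**9 * 60 * 60 * 24 * 365 * 10    # 1e9 trials/sec, for ten years

fraction_covered = trials / input_space
print(f"Fraction of the input space exercised: {fraction_covered:.2e}")
```

Even that heroic effort exercises less than one part in 10^20 of the input space -- and this is for a toy specification, not the universe.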
In this light, if the human brain is better equipped to discover the laws of nature than a weasel is to confidently establish the correctness of an item in the spec of MS Office, it would be a stunning coincidence. That is looking at it from the side of the a priori expected complexity of the problem, compared to any finite being's ability to solve it. But there is another side to look from, which is the side of the distribution of intelligence levels of the potential problem-solvers themselves. Obviously, a paramecium, for example, is not equipped to discover the laws of physics. Nor is an octopus, nor a turtle, nor a panther, nor an orangutan. In the spectrum of natural intelligences we know of, it just so happens that there is exactly one kind of creature that just barely has the capacity to uncover the laws of nature. It is as if some cosmic Dungeon Master were optimizing the problem from both sides, by making the source code of the universe just simple enough that the smartest beings within it (that we know of) were just barely capable of solving the puzzle. That is just the goldilocks situation that good DMs try to achieve with their puzzles: not so hard they can't be solved, not so easy that the players can't take pride in solving them.
There is a salient counterargument I must respond to. It might be argued that, while it is a priori unlikely that any finite being would be capable of profitably employing the scientific method in a randomly constructed universe, in hindsight of the scientific method having worked for us in this particular universe, we are now entitled, a posteriori, to embrace the Principle of Abductive Inference as a reliable method. My response is that we have no objective reason whatsoever to believe the scientific method has worked in hindsight -- at least not for the purpose of discovering universal laws of nature! I will grant that we have had pretty good luck with science-based engineering in the tiny little speck of the universe observable to us. I will even grant that this justifies the continued use of engineering for practical purposes with relative confidence -- under the laws of statistics, so long as, say, one anomaly per hundred thousand hours of use is an acceptable risk. But this gives no objective reason whatsoever (again under the laws of statistics) to believe that any of the alleged "laws of nature" we talk about is actually a universal law. That is to say, if you believe, with even one percent confidence, that we ever have, or ever will, uncover a single line of the source code of the universe -- a single law of Nature that holds without exception -- then you, my friend, believe in miracles. There is no reason to expect the scientific method to work, and good reason to expect it not to work -- unless the human mind was designed to be able to uncover and understand the laws of nature, by Someone who knew exactly how complex they are.
Notes -
The first thing I should clarify is that I think that scientific hypotheses, despite evidence never being able to elevate them above the status of a guess, can be true -- really, absolutely true. If we guess right! So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in that is just a guess. Does that satisfy what you felt was missing from my position?
As for your question on the missile defense systems example. So let's say I'm choosing between two courses of action based on two different scientific hypotheses. If one of those hypotheses has passed its empirical tests and the other hasn't, the logical situation is very clear: logic and reason dictate that I reject the hypothesis that has been falsified by the tests, since the tests logically contradict the hypothesis. The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them (a test that one fails but the other doesn't). If this is not possible, then the logical situation is also clear, however: if both hypotheses have passed all their tests, the evidence tells us exactly nothing about which one we should accept - we have to decide what to believe.
And this is a crucial aspect of my position: rationality and logic cannot tell us what to believe: we have to make that decision. Reason can, however, tell us what not to believe: we should not believe contradictory things, or in this case hypotheses that are contradicted by test results we accept. Rationality does not provide justifications that tell us what to believe. Rationality is the method, namely the method of critical evaluation and, when possible, empirical testing, which serves to eliminate some of our ideas, hopefully leaving us with true ones. Yes, it'd be great if we could be justified in believing what we believe, but we can't. So we are left with conjectures that we attempt to separate from error by criticism and empirical testing, using logic and reason, with the goal of believing true things. We are rational, in the sense that we use reason and logic to criticize our ideas and hopefully eliminate errors, and our goal is the truth - we aim at having true beliefs. But we can never know that our beliefs are true; we can only guess at the truth, and use reason as best we can to eliminate the guesses that are untrue.
Does this answer your questions? Feel free to ask more if I've been unclear. There are various complications I didn't want to go into (like differences in the severity of empirical tests) for the sake of clarity.
You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).
The fact that you have guessed right, or that you may have guessed right, does not entail that you are rationally licensed to embrace the proposition (I think you agree with this). For example, if a tarot card reader told me that I was going to get a job offer today, and I believed her and acted on it by taking out a car loan, and if she turned out to be right by sheer luck, my action would still be irrational.
To clarify my position in this light, I never said that the physical laws we have in our corpus are all false, or anything of that sort. I said that we are not entitled to any rational confidence in them -- just as I am not entitled to any rational confidence in a tarot card reading (unless I am mistaken about that practice), even though they may be sometimes right as well -- except to the extent we also believe in miracles.
Success rates matter.
If tarot reading worked as consistently as physics or math, then boy, would that be something.
(Now social sciences, well…)
Science as a method frequently involves guessing and dumb luck and accidental discovery. But then the point is systematically testing findings and examining new evidence and ideas. Tarot reading doesn’t have iterative improvement going on.
The success rate of science in enabling improvements to our material lives is pretty good. The success rate of science in yielding justifiable nonzero confidence in universal natural laws may be zero. Can you defend the proposition that it is not? It would be a compelling refutation of my argument if someone were to give a single universal natural law of the physical world -- take your pick -- and give an objective argument why we should have greater than zero confidence in its literal truth. Now that I think about it, that is the straightforward path to refuting my argument, and it is notable that no one has attempted to take it.
A word of advice if you proceed: don't waste your time trying to use Bayesian reasoning; you will not get a nonzero posterior unless you have a nonzero prior, and that would be begging the question. And don't bother trying to use parametric statistics, because no finite number of observations will get you there.
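The Bayesian dead end can be shown in a few lines (a sketch with made-up likelihoods): a prior of exactly zero on "the law holds universally" survives any number of confirming observations.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """One application of Bayes' rule to a binary hypothesis."""
    marginal = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / marginal

# Zero prior on the universal law; each observation is roughly twice as
# likely if the law is true. The posterior never budges.
posterior = 0.0
for _ in range(1000):
    posterior = bayes_update(posterior, 0.99, 0.50)
print(posterior)  # still 0.0
```

And any nonzero prior on the universal law would, of course, be assuming the very thing in question.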
I’m failing to understand why this is a bar any epistemology needs to clear.
Science as a method verifiably works at improving our material lives because it produces sufficiently accurate information. The utility is the payoff, but the correlation to reality is what enables it.
Where does math fit here under “physical world”?
The thing you seem to be doing is putting forth a standard no epistemology can satisfy. It’s not like pure math and logic don’t have identified paradoxes and limitations. Just ask Bertrand Russell.
How about the finding that nothing with mass can exceed the speed of light? This is something backed by math and logic, as well as experimentation. If it were otherwise physics would break, is my layman’s understanding anyway.
Is that sufficiently “universal”?
There are a lot of “universal” rules in physics, so long as you stay at the atomic level. (The quantum domain also has its rules, but they don’t break the atomic ones altogether.)
It sure is. Thanks for taking me up on the offer.
I am looking for objective evidence of the theory, Nullius in verba [Latin: No one's words (will be trusted)]. If you claim something is a theorem, show me the proof. If you claim something is experimentally verified, describe the experimental design and its results. What we have here is an appeal to authority claiming that the theory is "backed by math and logic" or that "physics would break" if it were untrue, omnes in verbo [all on the word (of authority)].
I would not be so demanding that I ask anyone to perform experiments, or even look up experimental data in literature, for the purpose of making a "Motte" post. A plausible (but concrete) story of what such evidence would look like -- in evidence of any theory of your choice -- would be enough to rebut my argument.
An appeal to authority is warranted here. Rebutting your argument doesn't actually hinge on the truth of the theory; it hinges on whether it is possible for experimental evidence to justify a belief in the correspondence of a theory and reality. If it is, there are cases where the logic of the theory enforces universality.
To wit, taking Newton's law as an example (and supposing we only knew classical mechanics), would we be justified in saying that the masses we observe behave as per his theory?
I'm not saying universally, merely the things we've observed locally.
If so, it turns out there are other cases, where if we are justified in believing the theory, the theory says things about the universe as a whole.
If you don't believe we can go from experimental evidence to justified belief in theory, then we have bigger problems.
To recollect (since the conversation is pretty deeply threaded now), this was the original challenge:
and the response:
It may help to step back and consider the role of appeals to authority in general, in terms of when they are conventionally accepted and when they are not. When experts communicate with other experts in post-enlightenment scholarly discourse, appeals to authority are verboten. The sacred rule of scientific dialectic is Nullius in verba [nothing on the word (of authority)]. I did not get that out of a fortune cookie; it is the motto of the Royal Society of London (British equivalent of our Academy of Science), established in 1660, and now the oldest scientific academy in the world. As Turing Award Laureate Judea Pearl put it, the scientific revolution began when Galileo said, "I don't care about Aristotle and his fancy books; I want to see these two rocks dropped from the tower of Pisa, and I want to see them with my own two eyes." The hair stands up on the back of my neck every time I re-read Pearl's words, because, whether it began with Galileo or not, science in the strict sense emerged when appeals to authority were banished from scholarly discourse -- so that ideas came to be considered on their intrinsic merits rather than the merits of their inventor or advocate. It did not happen that long ago, it has not yet happened everywhere, and we are very fortunate to have that ethos as part of our heritage.
On the other hand, in a classroom or a court of law, it is conventional (and reasonable per common sense) for lay people to accept expert testimony on the merits of the speaker, if, or to the extent that they assess the speaker to be an expert on the topic in question. In these cases, the burden of rationality for the listener shifts -- from weighing the evidence that the speaker's claims stand on their merits, to rationally weighing the evidence of his merits as a trustworthy source on the topic. For example, if a professor of ornithology from Stanford tells me he is confident that we are looking at a red-bellied wood thrush (or whatever), and that is not disputed by another expert of comparable or greater standing, I would tend to believe him. If he got his bachelor's from the University of Alabama, on the other hand, I would be less inclined. (I'm just kidding; I would grudgingly believe the Alabama grad -- but War Eagle!)
To the topic at hand, I am not assuming the role of a layman in this discussion. I consider myself an expert in logic and probabilistic reasoning, but you can be the judge of whether you agree. I have a doctorate in that subject from the University of Georgia and 11 published scholarly papers in the field (as well as 22 in other fields of mathematics and computer science). During my career as a professor at Texas Tech University, I was lead investigator in over one million dollars in research contracts sponsored by NASA and DARPA. I served as chief scientist of Texas Multicore Technologies from 2011 to 2017. My most cited paper on probabilistic reasoning, [Baral, Gelfond, and Rushton (2009): "Probabilistic Reasoning with Answer Sets"](https://arxiv.org/pdf/0812.0659.pdf), has 293 citations per Google Scholar, about one third of which occurred within the past two years -- which puts it in approximately the top 1% of academic papers by number of citations, as well as indicating interest in my research that is growing over time.
I am not asking you to assume the role of a layman either, and I do not expect to be taken one bit more seriously than my arguments merit on their substance. But, given an unsupported assertion that "If it were otherwise physics would break, is my layman’s understanding", I am not willing to assent to it, let alone consider it objectively established, without seeing direct evidence (Nullius in verba) -- either from you or from the alleged expert source -- in order to examine, not the content of the physics theory, but the probabilistic and/or logical rules of inference that are used to support that theory. As (Pearl imagined) Galileo said, I want to see that it is true with my own eyes.
I do not believe we can, without a prodigious leap of faith in the power of the human mind relative to the complexity of Nature, unjustified by any articulable, objective reason. If you disagree, then I ask you which rules of inductive inference you would use to draw those conclusions from that evidence. So, do we have "bigger problems"?
Sorry, but no. We are on an internet forum. Asking:
It is absolutely ridiculous, and comes across as raising your standards so as to avoid engaging with the argument, to ask people to describe the experimental design or the math of matters well settled in physics. That can be pages and pages of work; it is ridiculous to ask for it on an internet forum, especially when, if you honestly want it, a five-second Google search will suffice. You want to play this game? Two can do so.
You say:
Well where is your proof for this?
But no, rather than demanding proof for this, I accepted it.
Besides, what value is there in doing so for physics? We already know you do not believe the work the scientific community has done is sufficient to prove true the laws in discussion; if you did, we would not be having this discussion, so what value is there in reiterating it?
Really? You went to great effort to single out universal laws as specifically unbelievable, rather than coming down on empiricism in general. Do you honestly believe that we can't, by study of the motion of, say, the planets of our solar system, be justified in believing a theory about the motion of the planets (and only the planets)? If not, is there an
Finally, I've made an argument I am confident proves you incorrect, but with which you have not engaged.
Say we are pulling polygons out of an infinite box of simple polygons. We notice that every polygon with three sides has an internal angle sum of 180 degrees. Our observations inspire us to a mathematical proof that every three-sided polygon has an internal angle sum of 180 degrees. Would we be justified in believing that every three-sided polygon in the box has an internal angle sum of 180 degrees?
This is relevant because this is how we can truly justify belief in universal laws in physics, but I would like to know your opinion on the polygon idea before I do the further work of going into the physics.
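For what it's worth, the triangle claim itself is easy to check empirically (a quick Python sketch; random points stand in for polygons pulled from the box):

```python
import math
import random

def interior_angle_sum(a, b, c):
    """Interior angle sum (degrees) of the triangle with vertices a, b, c,
    each angle recovered via the law of cosines."""
    x, y, z = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    def angle(p, q, r):  # angle opposite side of length p
        cos = (q * q + r * r - p * p) / (2 * q * r)
        return math.acos(max(-1.0, min(1.0, cos)))  # clamp rounding error
    return math.degrees(angle(x, y, z) + angle(y, x, z) + angle(z, x, y))

random.seed(0)
for _ in range(200):
    tri = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    assert abs(interior_angle_sum(*tri) - 180.0) < 1e-6
print("all 200 random triangles sum to 180 degrees")
```

Of course, it is the geometric proof, not the finite sample, that licenses the universal claim -- which is exactly the point at issue in this thread.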
Well I’m a layman at physics, so I’d suggest finding someone who can lay out the math, theory, and experimentation that shows it is impossible for any object with mass to travel faster than the speed of light.
My layman’s understanding is that the fundamental properties of spacetime, mass, and energy as we understand them via Special Relativity make it impossible.
Here’s a bunch of physics nerds describing how it would violate causality:
https://physics.stackexchange.com/questions/671516/proof-for-impossibility-of-ftl-signals
Great idea! Bring it on -- but I get to cross examine.
I did not say that any epistemology needed to clear that bar. If your position is that science is a collection of useful fictions, and that discerning the (literally true) laws of nature falls outside the scope of its business, then your position is immune to my argument. For myself, I am a little more romantic about the goals of science.
You’re applying a rigid categorization of “fact or fiction” to an area where the practicality of “all models are wrong; some are useful” is the typical approach.
You’re calling for perfection or it’s fiction, when science has been building knowledge bit by bit. Things can have shades of gray.
Obviously, understanding the Ultimate Nature of Reality and Its Universal Laws is a fine goal, but the way to get there is almost certainly a pretty messy process.
I do not think my position is fairly characterized as denying that there are shades of grey, or that science has been building knowledge bit by bit, or that I am calling for "perfection" as the only alternative to "fiction". If someone gave objective evidence that would justify, say, 1% confidence in some particular universal physical law (of gravity, or electromagnetism, or whatever), that would be a shade of grey (1% is pretty small; 10% would be better; 78% would be nice); it would be only one fact in a growing field (building knowledge bit by bit); it would be decidedly imperfect in the sense of low certainty. Yet my claim is that we cannot accomplish even that, based on objective evidence, even if we take for granted that the universe is persistently governed by fixed laws.
So I am not challenging anyone to deliver certainty, perfection, or complete knowledge. I am challenging them to deliver objective evidence for nonzero confidence in a universal physical law. As far as degrees of certainty go, the alternative to nonzero is zero -- and I do not think it is unfair to call a proposition a fiction if we have zero confidence in its truth. Even if it is a useful fiction.
I am also not saying that I do not have (positive) confidence in some of the known laws of nature -- though, somewhat to my surprise, several posters have indicated that they take that position. I am saying that to be in that position requires faith in something that is so unlikely a priori -- not to mention strange and wonderful -- that it could be fairly characterized as a miracle.
Seeing as we recall the text differently, I was probing for a source there (other than yourself). I am not convinced that I was mistaken. Popper defines corroboration as a diligent attempt to disprove a hypothesis:
He goes on to say that the degree of corroboration, which he views as the merit of the theory, increases with the number of non-disconfirming experiments:
If there is a difference between what Popper said, and what I said he said, it would be that I used the word "truth". Fair enough, but so did you:
and I do not see how the following claim could be correct, in light of the quotes above: "In Popper's view,... corroborating evidence does nothing". [emphasis added]
You said that Popper thinks corroboration (failed attempts to falsify a hypothesis) counts as evidence for its truth. Instead, Popper says that theories cannot be verified. The first sentence of the chapter you quote is:
In the footnote soon after:
And finally, here's Popper stating the difference between psychological questions of one's state of mind (that one can be "very certain") and epistemological questions of the state of the evidence, where evidence cannot verify hypotheses.
So corroboration is a measure of how well-tested a theory is, and the severity of the tests it has undergone. But corroboration does not provide evidence for the truth of the hypothesis. Here's a quote from Popper, "Objective Knowledge", 21f:
I like my Popper but I hate looking for quotes - I'm much more interested in the substance of the discussion we're having and the view I've outlined as a response to yours.
Thanks for the researched response. I think I finally understand the disagreement now.
As you point out, Popper does not regard repeated experiments as progressively raising our confidence in the probability that the theory is true; his notion of the merit of a theory is much more nuanced than "probability of truth". So that is where my statement differs from his view; I am convinced now that I was mistaken and thank you for pointing it out.
But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration", you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing. For Popper, non-disconfirming experiments do something (viz., "corroborate"), and a greater number of such experiments does more of that thing:
If I read you correctly, you seem to believe that there should be no difference in our willingness to act on a theory after one rigorous non-disconfirming experiment, versus 1000 of them by 1000 different researchers using different methods and bringing different perspectives and skill sets to the table (say, Newton's law of gravity vs. some new law of quantum computing). Do I read you incorrectly (or did you perhaps misspeak)?
Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.
Sure, Popper is developing the idea of degree of corroboration in that book, so he mentions it a lot. But no degree of corroboration can change the epistemic status of a theory, which always remains a conjecture. Like I said, it's a common mistake, and Popper shares some of the blame for it by speaking about "preference" in the context of corroboration, which sounds a lot like justification, or as if we "rationally ought" to believe the better-tested theory as if it had a greater likelihood of being true, or something like that. Popper did a lot to muddy the waters here. But corroboration is a measure of the state of the critical discussion, and not in any way a measure of the justification, reliability, probability, etc. of a theory. With regard to the epistemic status of a theory being adjusted by evidence, which is what is relevant to our discussion, corroboration does nothing. Here's Popper saying it outright, in Objective Knowledge 1972 (1979 revised edition), p. 18:
As for the missile example:
This would be my conjecture, motivated in part by how poorly tested quantum computing is, but not justified or "based" on that. It's my best guess that has taken into consideration the evaluation of the state of the critical discussion on quantum computing (how well corroborated it is), but is not justified by it and remains a guess/conjecture. We can certainly take the degree of corroboration into consideration when deciding what to believe, but it can never elevate our beliefs beyond the status of conjecture, and it is in this epistemological sense that corroborating evidence does nothing.
I think I see now why I, like many people, misread Popper. Frankly, I think the position he expresses here is so egg-headed that I did not anticipate it. He implicitly conditions future performance (aka reliability) on justified confidence in general, literal truth, and so winds up concluding that theories of the physical world have only two levels of reliability: known false, and other. This position hamstrings his theory of corroboration with respect to establishing a rational basis for action -- and that moves him to the bottom of my reading list for philosophy of science. It's not that his work has no intellectual merit (it's all very interesting); it's just that I have better things to do, because I am interested in science as a rational basis for discriminating between alternative courses of action, and in philosophy of science as an articulated theory of the rules of evidence for doing so.
It appears that Popper (1) accepts the essence of my argument in the original post, but (2) doesn't believe in miracles -- which commits him to his position on reliability and future performance, and also makes his theory of corroboration impotent as a basis for rational action. I share his view of (1) but not (2).
For clarity, do you agree with Popper on this (that corroboration says nothing whatever about the future performance of a theory)?
Yup, you got it. There's no establishing a rational basis for action, it cannot be done. You have done a good job articulating some of the obstacles to this in your original post. We can, however, still use reason and logic in the method of eliminating errors in the pursuit of truth. That's Popper's insight.
A small note: there is no "known false" category. Falsification is not justified either, it is as conjectural as anything else. So yes, justification doesn't work, and there is no rational basis to be had. But we can still engage in the rational pursuit of truth, in the sense of using reason and experience to temper our conjectures about the world.
As for your future reading, go with your interests, of course, but I can still recommend this short article articulating this position: https://www.science.org/doi/10.1126/science.284.5420.1625
The beauty and clarity of Popper's view is relinquishing justification and the search for a "basis", which reason and rationality are not capable of providing, but still maintaining rationality, empiricism, and the pursuit of truth. It's worth keeping in mind at least, as a possible different path that eschews the use of justification and "good reasons" but retains the use of reason and truth as the aim of science. If ever you stop believing in miracles, you need not despair of reason just yet, give Popper's view a shot first :)
I'll leave you with a final Popper quote:
The difference I was trying to elucidate with the missile defense system example was a difference in the degree of confidence you would have between two theories A and B, both of which have been tested, neither of which has been disconfirmed, but one of which has been tested more thoroughly (or, for whatever reason, you have more confidence in). The crucial issue is a difference in degrees of confidence (or what Popper called degree of corroboration) between two hypotheses, neither of which has been falsified.
This is not the situation I was describing. In the hypothetical, the two laws are in different domains (gravity vs. quantum computing), possibly for different purposes (say, missile defense vs. airplane autopilot), and one is better established (or better corroborated) than the other.
Like I said, if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system. That's fine, but that is the part where I conjecture/guess at the truth. We don't disagree about my mental process, it's just that I think it's conjectural and not warranted by the evidence - the evidence can't tell me what to think and which bet to make and which hypothesis to prefer, the evidence can only contradict a hypothesis and thus force me to reject it if I accept the evidence as true. Everything else is me making my best guess. I'm free to describe my mental state as "very confident" in that process, but that describes my state of mind, not the state of the evidence.
I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference? Like, some people like chocolate; some people like vanilla; different strokes? I assert that it is only rational to be more willing to act on a better tested theory.
When did anybody ever have to accept a theory? By have to do you mean rationally ought to? If rationally ought to is what you mean, then, as I said, I disagree.
Questions of subjective/objective are always tricky, and I can answer this question on several different levels. Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory, as you say. Popper and I reject this. Theories (or beliefs in general) cannot be justified. At all. However, if we are interested in finding the truth (and this is also a subjective goal, one might be more interested in, say, propaganda), we should try to eliminate any erroneous beliefs that we have, and our tool for this is rational criticism and experiments. So we should try to deploy these tools as much as we can if we are interested in the truth, and we thus want our theories to be as severely tested as possible. No matter how well-tested, however, our theories remain conjectures tempered by rational criticism.
We are also not mandated by reason (in Popper's view of science) to prefer the better-tested theory. It's not the case that we rationally ought to accept the better tested theory. We could for example be super stoked about a poorly tested theory in preference to a better tested one - but the thing to do then is to try and come up with stronger tests of our preferred poorly tested theory, since in the search for truth we should try to test our theories as strongly as possible in order to eliminate error. This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor - we deploy rational evaluation and empirical experiments to the best of our ability in order to try to guess at the truth and eliminate errors, which we do not do in our ice cream preferences. This use of the rational method of criticism in the search for truth is what makes the difference and what makes our decision rational in the sense of using critical reasoning, although this provides no objective justification for our decision and it does not tell us what we rationally ought to believe.
There is a nuance to my position that this glosses over. In my view, scientific epistemology is not just a matter of ought vs ought not; it is a matter of rationally obligatory degrees of preference for better tested theories, on a continuum. However, when one theory is better tested than another on this continuum, and on some occasion we have to choose between the two, then we rationally ought to trust the better tested theory on that occasion.
If I understand your position correctly, it is an awful lot like the preference among ice cream flavors. Let's say you have to choose from chocolate, vanilla, and strawberry -- but you know the strawberry is poisoned. So strawberry is not a viable choice, but the choice between chocolate and vanilla remains wholly subjective. Similarly, (in your view as I understand it) when choosing among alternative theories to act on, the choice among those theories that have not been disconfirmed is a subjective preference as much as chocolate vs. vanilla.
For example, suppose a person has a choice between action A and action B, and that their goal in making that choice is to maximize the likelihood that they will continue living. Action A maximizes their chance of surviving if a certain viable (tested, not disconfirmed) theory is true, and B maximizes their chance of surviving if a certain other viable theory, in another domain, is true. They know one of those theories is substantially better confirmed than the other by every relevant criterion (say, the law of gravity vs. the most recent discovery in quantum computing). I say there is only one rational action in that scenario (trust the better tested theory). Do you say the same or different?
My position is that no actions or beliefs are "rational" in this sense, of being justified or mandated by reason. Actions or beliefs can be rational in the sense that we have deployed the method of rational criticism (and, if possible, empirical testing) in order to eliminate errors, with no justification/warrant/likelihood/etc. being involved at any point. So the contents of a belief don't determine its rationality (reason doesn't tell you what to believe), but the methods we have used in order to try to find errors in that belief can be rational. A choice can be rational if we've employed critical thinking in making it, and this is the only sense in which decisions can be rational, since justification is not possible.
In comparison to ice cream preference, yes, both are arbitrary in the sense that we have to judge for ourselves (we are the arbiters of) what to believe/which ice cream to like. But we generally don't employ critical discussion and experimentation in our ice cream choices, although we certainly can. Again, it's the methods of critical analysis and experimentation that are rational, and a decision can be made with deliberation and with the use of reason, in contrast to a preference for ice cream, which usually does not involve this. But the beliefs or actions themselves can never be rational in the sense of justified, warranted, mandated by reason, etc.
As for your example of the law of gravity vs. the most recent discovery in quantum computing, it's slightly confusing to me. Does option B, which uses quantum computing, go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but whether to additionally also use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.
In general, my view of the preference for the better-tested theory (and my reading of Popper's opinion here) is that this is soft, rule-of-thumb methodological advice, but not a "rationally ought" rule. Since we want to test our theories as severely as possible in order to hopefully eliminate error, all else being equal we should prefer the better tested theory - but not in the sense of "rationally ought," rather in the sense of "let's test as much as possible." But all else is rarely equal, and "better tested" is not an exact calculation. So it's sort of like the advice "it's a good idea to castle your king in chess." Yes, that's good advice, but it's not necessarily always the best choice, and you are not "irrational" for deciding not to castle. A more clear formulation of this advice has been advanced by Miller, Popper's former student, who formulates this stuff much more dryly than Popper but in a way more suited to the style of modern analytical philosophy (Out of Error, p. 124):
I meant something like this: the safety of A rests on the law of gravity but not the law of quantum computing; the safety of B rests on the law of quantum computing but not the law of gravity. To make the example a little more concrete (but science-fictional, requiring some suspension of disbelief), your choices are to take (1) a self-flying plane that is programmed with a model using the Law of Gravity, but no laws of quantum computing, and has been operating safely for thirty years, or (2) the new teleporter -- whose safety has been tested and not disconfirmed, and depends on the latest law of quantum computing, but not the law of gravity. Your goal in the selection is to maximize the probability of your survival.
Right. Well I'd definitely be interested in testing the teleporter, but I wouldn't risk my safety in a first test of something, so I'd choose the plane, which I believe is safe (tentatively, as my best guess upon rational deliberation that produces no justification but may eliminate errors). Like I said, choices and beliefs can only be rational in the sense of using deliberation and reason to make our best guess, and are never rational in the sense of being justified, warranted, reliable, established, or anything of that sort, as this is not possible.