Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, not in our subjective calculations of probability (except insofar as they can perhaps help with the former).
My stance is that this isn't a form of unwarranted cowardice or retreat, but a pragmatic acknowledgement that we are subjective observers forced to grapple with evidence acquired through our sensoria. There is no scope for an actual bird's eye view; we must trudge through the territory and map it as we go.
I think this is our first central disagreement, then, if you've accepted what I've said is for me an "unacceptable" retreat from talking about objective reality. In my view, yes, anything we say about objective reality is guesswork, as we have agreed, but it is this objective reality that interests us - we are trying to figure out the bird's eye view, although we can never be sure (or partially justified) that we have guessed it correctly. You, meanwhile, if I'm reading you right, confine yourself to talking about our subjective expectations and calculations of probability.
Thus, is science useful? Absolutely. It can never provide perfect certainty, but we can certainly operate without that, and in practice we do. We can converge ever closer to 100% confidence that our map faithfully represents the territory, and that's good enough for me.
But what does this confidence achieve - what is its relation to external reality? Our subjective calculation of probability and our confidence do not imply the objective truth of the belief in question - there is no logical connection between the two. I'm interested in whether what I believe is objectively true - whether the map actually does match the territory despite being just a guess - and not in our subjective calculation of probability, which tells us nothing about actual, objective truth. I fail to see how to convert claims of subjectively probable truth into claims about objective truth, in other words.
EDIT: Let me put it this way: A Popperian would say "I believe this is the objective truth, but I am only guessing - help me by criticizing my guess and testing it as best we can so we can reject it if it is objectively false, although this rejection will also be just a guess." A Bayesian would say "Based on the supporting evidence and my priors, I calculate a high probability for this hypothesis". At that point, they will either say nothing about the objective truth of that belief, which for me is an unacceptable retreat from talking about objective truth and reality, or they will say "therefore, this belief is objectively true". In the latter case, it is this "therefore" that I object to - I don't think it holds as it then runs into Humean objections, and thus the Bayesian calculation has not added anything to the Popperian's claim.
And this is our second major disagreement, I believe - Popper and I think that the role of evidence is to contradict scientific theories, while you think its role is to support them with regard to our subjective probability calculations. I fail to see the connection between these subjective probability calculations and the external, objective reality which I am interested in. I'm not interested in maximizing our subjective probability, or maximizing our expected utility. I'm interested in correctly guessing the objective truth, and in the actual utility of our beliefs rather than their expected utility. In this, evidence plays a negative role, in my view, i.e. one of assisting in error elimination (which is of course also guesswork). Positive evidence does nothing to adjust our guesses, but negative evidence does adjust them by contradicting and thus falsifying them (if we accept the evidence as true, of course - it may well be that the error resides in a flaw in the experiment, as in the neutrino example, rather than in the theory being tested; this is also guesswork).
Overall though, I think we are in agreement about quite a lot. Science is guesswork, rational in the sense of using reason and experience, but not rational in the sense of being rationally established/warranted/justified, whether partially or fully. Where we disagree is that you confine yourself to talking about our subjective calculations of probability, whereas I explicitly aim at objective truth through guesswork. You think the role of evidence in this is positive - it supports the theory in question, thus raising its probability. I think this kind of support has no bearing on the objective truth I am interested in, and think the role of evidence is negative - it can contradict our theories, and thus lead us (hopefully correctly, but with no guarantees, partial or otherwise) to reject those that are objectively false, retaining those that are objectively true.
If there's anything left to clarify, it may be your position on converting claims about the high Bayesian probability of a belief into claims about the objective truth of that belief, where I fail to see any connection between the two. My position is that we simply make conjectural claims about the objective truth and try to eliminate errors in them, all of which is guesswork, with the role of evidence being only to possibly contradict our guesses. In this view, Bayesian calculations may be correct under Bayesian rules, but they are superfluous in our search for objective truth, other than perhaps as an attempt at formalizing our reasoning - where they, in my opinion of course, miss the mark by failing to talk about the external objective reality we are interested in, instead focusing on our subjective confidence.
Is this a fair summary of our positions? Feel free to correct me where I have misunderstood you. If I have understood you well enough, this might be a decent place to stop, and I'll let you have the last word. Either way, I've thoroughly enjoyed our talk, and learned a lot about Bayesianism and reaching mutual understanding with a Bayesian, clarifying my understanding of my own position in the process as well. I'm of course open to further questions or criticism from you or readers like @marten if you have any.
I like this discussion, it feels like we're really doing our best to understand each other's position better, as it should be.
Let's start with the easy bit.
I am confused. What is the criterion that a Popperian uses to determine what error even is, without smuggling in a bit of Bayesianism?
A Bayesian would say that even an (in hindsight) erroneous result is still evidence. But with our previous priors, we would (hopefully) decide that the majority of the probability mass lies in the result being in error rather than in the hypothesis being false.
The only criterion for something to count as evidence for a hypothesis is that it is more likely to be observed if the hypothesis is true than if it is false, and vice versa.
In other words, how are you becoming suspicious without Bayesian reasoning (however subconscious it is)?
That's right! Bayesianism is, on one level, an attempt to formalize exactly this kind of thinking. I'd call it "critical discussion" or "conjecture" or just "reasoning", you can call it Bayesian reasoning, I'm totally fine with that. I think this process is messier than Bayesianism claims, but I also think that plenty of Popperian concepts can do with some formalizing, like "degree of corroboration", "state of the critical discussion", "severity of testing in light of our background knowledge", etc. So on this level I'd say Bayesianism is a worthwhile pursuit of formalizing this kind of thinking, and we can set aside the question of how well it does this for now. I say "on this level" because there are many variants of Bayesianism, and some claim Bayesianism does much more, such as Bayesianism providing a (partial) justification and rational basis for scientific theories instead of induction, which we have agreed it cannot do, i.e. it cannot elevate our theories beyond the status of a guess.
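To make concrete the kind of calculation being formalized, here's a toy sketch of a Bayesian update on the faster-than-light neutrino result (the numbers are made up purely for illustration, not real physics priors):

```python
# Minimal Bayes update: an anomalous result is still evidence, but with a
# strong enough prior, most posterior mass lands on "the experiment erred".
# All numbers below are illustrative assumptions, not actual priors.

prior_ftl = 1e-6         # P(neutrinos can exceed c) before the result
p_result_if_ftl = 0.9    # P(seeing an FTL measurement | FTL is real)
p_result_if_not = 1e-3   # P(seeing one anyway | FTL is false), e.g. a loose cable

p_result = prior_ftl * p_result_if_ftl + (1 - prior_ftl) * p_result_if_not
posterior_ftl = prior_ftl * p_result_if_ftl / p_result

print(f"posterior P(FTL) = {posterior_ftl:.6f}")  # ~0.0009: up ~900x, still tiny
```

On my reading, this is a tidy model of one slice of the critical discussion - a report of how the evidence bears on the competing guesses - not a justification of any of them.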
It is with regards to truth and the relationship of Bayesian calculations to external reality that I suspect we disagree, although I'm not quite sure about this. There's definitely still some stuff to clear up here.
You use truth in the sense that something is consistently and robustly reproducible in the observed environment or universe (or more abstract spaces). Sure. I have no beef with that.
No, this is definitely not the way in which I use truth. Like I said, I use truth in the sense of correspondence to reality. Our subjective expectations, reproducibility, and observations are totally irrelevant to truth. The only thing that matters with regard to the truth of a claim is whether our claim corresponds to reality, i.e. whether things really are the way we claim they are. So the claim that moving faster than light is not possible is true only if this corresponds to reality, i.e. if it is really, objectively, impossible to move faster than light in this universe. We can do all kinds of experiments on this, but the only thing that matters with regard to the truth of this statement is whether it is actually possible to move faster than light in reality or not. Of course, we cannot know what is true, but that does not stop our claim from being true if it does in fact correspond to reality, despite it being just a guess. The experiments are attempts to eliminate errors in our guesses about reality. And while we can never be sure (or be partially justified) in thinking what we believe is true, it can still be true, objectively, if we have guessed right, if it really is impossible to move faster than light. So we cannot be justified (fully or partially) in thinking what we believe is true, but our theories can still be true, objectively, if they correspond to reality, whether we know it or not.
In my view, this is what science is interested in: how the universe really works regardless of what we think and the evidence we have. And the best we can do in figuring this out is to guess, and temper our guesses by critical reasoning and empirical testing (which is also guesswork) in order to eliminate errors in our guesses.
Bayesianism often gives up on this goal, and confines itself to only our expectations and related calculations. It thus gives up on saying anything about objective reality, and confines itself to talking about our subjective states of expectation and calculations of probability regarding those expectations. This, for me, is an unacceptable retreat from figuring out objective reality as the aim of science. I'm interested in how the world really is, not in our subjective calculations of probability (except insofar as they can perhaps help with the former).
Would this describe your position though? That we cannot rationally say anything about the external, objective reality, but can calculate our subjective expectations/probabilities using Bayesian reasoning? I'm not quite sure this is your position, although there have been hints that it is, and it is common among Bayesians, which is why I was asking questions about the aim of science and the relationship between Bayesian calculations and reality/truth.
Probability and truth can be converted
And I suspect this is another point of disagreement - I don't think this is right. Having certain truth would be great - if we could be certain about something being true, we could infallibly classify it as true. Awesome! However, we have agreed that this is not possible. But say we can calculate something is probable using Bayesianism. This tells us exactly nothing about whether it is actually true or not, about whether reality really works that way. Certain truth implies truth, yes, but probable truth does not imply truth. Like I said, if you are mistaken about the truth of the belief in question, i.e. if reality does not work that way, your belief being probable does nothing to mitigate your mistake. You still believe something false. If moving faster than light is possible in reality, you are wrong in believing it is not, no matter how probable your belief may be. You are exactly as wrong as a Popperian who would just say he guessed wrong, despite your probability calculation. And if the belief in question is true, if reality is such that moving faster than light is not possible, this belief is just as true for the Popperian who only claims it to be a guess and does not reference probability, regardless of whether your belief is probable or not according to Bayesianism. I see absolutely no logical connection, i.e. no possibility of conversion, between probability and truth. Reality does not care about our probability calculations.
To sum up, I have no issues with Bayesianism as a theory of decision-making. I see science as guesswork tempered by reason and experience, aiming at objectively true theories about reality - although we can never know or be (partially or fully) justified in believing them to be true: we have to guess, and eliminate our errors by experiments, which are also guesswork. I think we may disagree with regard to truth and the aim of science, but I'm not totally clear on your position here. I also think we may disagree on the connection between probability and truth/external reality.
Looks like we're on the same page on the overall epistemic status of scientific theories, namely that they are not justified by the evidence and always remain conjectural. That's not a knock against Bayesianism, I agree!
Bayesian updates are, as I understand it, the optimal way to update on new evidence in the light of your existing priors
The optimal way to do what? What would you say is the aim of science?
For me, it's the commonsense notion of truth as correspondence to reality. Of course, we cannot know or be justified in believing that our theories are true, but they can still be true guesses if they correspond to reality, and it is this reality that we are interested in.
You say that this binary notion of truth is unattainable, so what do you replace it with? Probability calculations? What do those achieve? What is their relationship to reality? There are many variants of Bayesianism, and they are often very fuzzy about this point, so I'm trying to pinpoint your position.
Popperian falsifiability is simply dysfunctional. Taken at face value, recall those experiments that suggested neutrinos move faster than light? That is evidence that neutrinos move faster than light.
A serious Popperian would immediately give up on the idea that nothing can exceed the speed of light in a vacuum. A sensible Bayesian, which humans (and thus physicists) are naturally inclined to be, would note that this evidence is severely outweighed by all the other observations we have, and while adjusting very slightly in favor of it being possible to exceed the speed of light with a massive object, still sensibly choose to devote the majority of their energy to finding flaws in the experiment.
This is not quite right. For a Popperian, accepting the results of a test is a conjecture just like anything else. We are searching for errors in our guesses by testing them against reality - if we suspect a test is wrong, we are very welcome to criticize the test, devise another test, etc. It is only if we accept a negative test result that we have to consider the theory being tested to be false, by plain old deductive logic, since it is contradicted by the test result. But a serious Popperian is quite capable of being suspicious of an experiment, and of looking for flaws in it.
Nobody has a solution to infinite regress, barring "I said so". As far as I can tell, you've got to start somewhere, and Bayesianism leads to more sensible decision theories and is clean and simple.
I have no problem with starting somewhere, but I don't claim our theories can ever be anything more than a guess, since, as you seem to have agreed, they are ultimately baseless due to infinite regress. In the context of this discussion on justification and the basis of science, I'm ok with Bayesianism that only claims to be decision theory, a formalized account of how we try to temper our guesses by reason and experience with no justification or basis ever being provided, which is also the Popperian view of the epistemic status of science. Bayesianism would then be a methodology to help in our conjectural decision-making, but would never elevate our theories beyond the status of a guess, in the sense of them having some sort of justification or basis. Do we disagree here?
Given that English is an imprecise language, feel free to interpret my 99.9999% confidence that the Sun will rise tomorrow as being equivalent to "it's true the Sun will rise tomorrow".
Ok, so if I'm understanding you right, you do care about the truth of your beliefs, not just about your confidence in them. So what's the logical relationship between your calculation of confidence in a theory and the truth of that theory? What is the epistemic benefit of confidence calculation, as opposed to a Popperian conjecture? It seems to me that if you are mistaken about the truth of the belief in question (as you would be with regard to the sun rising tomorrow if you went to, say, Iceland in winter), your high calculated confidence does nothing to mitigate your mistake. You are equally wrong as a Popperian who would just say he guessed wrong, despite your high confidence. And if the belief in question is true, it's just as true for the Popperian who only claims it to be a guess, regardless of confidence calculation. So what is the epistemic benefit of the confidence calculation?
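For concreteness about where a number like 99.9999% could even come from: one classical recipe (not necessarily yours, I use it only as an example) is Laplace's rule of succession, and it's worth noticing the assumption it smuggles in:

```python
# Laplace's rule of succession: P(success on the next trial | s successes in
# n trials) = (s + 1) / (n + 2), given a uniform prior and exchangeable trials
# -- exactly the uniformity-of-nature premise Hume questioned. Here s = n
# (every recorded sunrise a success); the trial count is illustrative.

n = 5000 * 365                      # ~5000 years of recorded daily sunrises
p_sunrise_tomorrow = (n + 1) / (n + 2)
print(f"{p_sunrise_tomorrow:.7f}")  # 0.9999995
```

The calculation is internally fine; my question is what licenses reading its output as saying anything about what the Sun will actually do.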
To clarify a bit more, I see two questions we are discussing. First, whether Popper's falsificationist "logic of science" is a better description/methodology of science than Bayesianism. We can set that aside for now, as it is not the focus of the topic. The second question that's relevant to the topic at hand is whether you think Bayesianism can provide some sort of justification or rational basis for claims about the truth of our beliefs that elevates them to something more than a guess. We certainly seem to agree that we can temper our guesses using logic and reason and experience, but in the Popperian view all of this is still guesswork, and never elevates the epistemic status of a theory beyond that of a guess. So tell me if and where we disagree on this :)
If you demand 100% confidence that the laws of physics are "universal" and timeless, you're SOL unless you assume the conclusion in advance. But we can approach arbitrarily close, and the fact that modern technology works is testament to the fact that we can be goddamn bloody confident in them.
How can we approach arbitrarily close? As stated, this does nothing to address Hume's argument against induction, which holds equally whether you are aiming for probability or for certainty, and does not address the skeptical regress argument that every reason you can give is either based on something else or based on nothing, leading to infinite regress. I don't see how Bayesianism helps with this. Justification is not to be had, at any level of confidence or probability. Which is why you need Popper, who explained how you can maintain the use of logic and reason and keep truth as the aim of science, while also accepting Hume's and the skeptics' arguments as correct and consequently discarding justification altogether.
Another issue Bayesianism often runs into is that many variants of Bayesianism give up on truth - I'm not interested in the confidence we can assign to a theory given our priors and the evidence, I'm interested in whether the theory in question is actually true. Even if we could be justified in Bayesian calculations of probability/confidence (which we can't be), this would tell us exactly nothing about whether this probable theory is actually true, which is what we are really interested in. There is no logical connection between probable truth and truth (just because something is probably true, it need not be true), and Bayesianism often focuses on subjective calculations of probable truth and abandons actual truth as the goal of science. But if Bayesianism aims at truth rather than solely at subjective calculations of confidence unmoored from reality, if it is interested in what is true rather than just in what we can be confident in, it is in no better a position to provide justification than any other epistemology.
If they believe the plane is safer than the teleporter, and their goal is to maximize safety, then by nice clean deductive logic they should choose the plane, given their premises and their acceptance (tentative, without justification) of the rules of deductive logic. The premise that the plane is safer is a conjecture, though; it is their best guess, which they have critically examined using reason, but which is not justified or warranted or something they "rationally ought" to believe. Their decision is rational in the sense that it makes use of the faculties of reason and logic to make their choice, but it is not rational in the sense of being justified or having a rational basis, as these things cannot be had.
In my view, the reasoner could make either decision rationally, as long as they critically examine it using reason and it represents their best guess after rational deliberation and critical evaluation. Their rationality is in the method they have used to make their best guess, not in the contents of their beliefs, which cannot be "rational" in the sense of being justified. The reasoner could be irrational if they don't use reason to critically evaluate their choices and instead flip a coin to make their decision, and they could be irrational if they use faulty logic, for instance by thinking: "Planes are safer than teleporters. My goal is to maximise safety. Therefore, I'll use the teleporter".
Right. Well I'd definitely be interested in testing the teleporter, but I wouldn't risk my safety in a first test of something, so I'd choose the plane, which I believe is safe (tentatively, as my best guess upon rational deliberation that produces no justification but may eliminate errors). Like I said, choices and beliefs can only be rational in the sense of using deliberation and reason to make our best guess, and are never rational in the sense of being justified, warranted, reliable, established, or anything of that sort, as this is not possible.
My position is that no actions or beliefs are "rational" in this sense, of being justified or mandated by reason. Actions or beliefs can be rational in the sense that we have deployed the method of rational criticism (and, if possible, empirical testing) in order to eliminate errors, with no justification/warrant/likelihood/etc. being involved at any point. So the contents of a belief don't determine its rationality (reason doesn't tell you what to believe), but the methods we have used in order to try to find errors in that belief can be rational. A choice can be rational if we've employed critical thinking in making it, and this is the only sense in which decisions can be rational, since justification is not possible.
In comparison to ice cream preference, yes, both are arbitrary in the sense that we have to judge for ourselves (we are the arbiters of) what to believe/which ice cream to like. But we generally don't employ critical discussion and experimentation in our ice cream choices, although we certainly can. Again, it's the methods of critical analysis and experimentation that are rational, and a decision can be made with deliberation and with the use of reason, in contrast to a preference for ice cream, which usually does not involve this. But the beliefs or actions themselves can never be rational in the sense of justified, warranted, mandated by reason, etc.
As for your example of the law of gravity vs. the most recent discovery in quantum computing, it's slightly confusing to me. Does option B, which uses quantum computing, go against the law of gravity? If so, I would reject it, since I believe the law of gravity to be true (tentatively, without justification). Or does option B use both the law of gravity and quantum computing? In that case I'm not really choosing between gravity and quantum computing, but whether to additionally also use quantum computing in my plan, in which case how well-tested quantum computing is compared with gravity is not really relevant, since I'm using gravity as well.
In general, my view of the preference for the better-tested theory (and my reading of Popper's opinion here) is that this is soft rule-of-thumb methodological advice, but not a "rationally ought" rule. Since we want to test our theories as severely as possible in order to hopefully eliminate error, all else being equal we should prefer the better tested theory - but not in the sense of "rationally ought", rather in the sense of "let's test as much as possible". But all else is rarely equal, and "better tested" is not an exact calculation. So it's sort of like the advice "it's a good idea to castle your king in chess". Yes, that's good advice, but it's not necessarily always the best choice, and you are not "irrational" for deciding not to castle. A clearer formulation of this advice has been advanced by Miller, Popper's former student, who formulates this stuff much more dryly than Popper but in a way more suited to the style of modern analytical philosophy (Out of Error, p. 124):
Prefer the practical proposal that best survives critical scrutiny is more transparent and more obviously sound advice than Act on the best-tested theory, which is often not real advice at all. What must not be admitted is the suggestion that a proposal that has been subjected to critical scrutiny, and has survived it, thereby qualifies as a better proposal than one that has not been subjected to critical scrutiny. That would convict deductivism not only of inductivism but of clairvoyance, and even inductivists and justificationists can be expected to resist a claim at once so far-seeing and so reactionary. Even the advice Prefer the practical proposal that best survives critical scrutiny is defective in this respect. Since subjecting a proposal to criticism is itself a practical action of a kind, it cannot, on pain of infinite regress, always be ill advised to try something yet untried. It is not of course being suggested that it is a mistake to prefer or to adopt the best-criticized proposal, only that it need not be a mistake not to do so. At this point considerations of utility often intervene. The correct advice is, as usual, negative: Refrain from any practical proposal that does not survive critical scrutiny as well as others do. Observe that someone who rejects this advice will at once be vulnerable to critical attack.
Yup, you got it. There's no establishing a rational basis for action, it cannot be done. You have done a good job articulating some of the obstacles to this in your original post. We can, however, still use reason and logic in the method of eliminating errors in the pursuit of truth. That's Popper's insight.
A small note: there is no "known false" category. Falsification is not justified either, it is as conjectural as anything else. So yes, justification doesn't work, and there is no rational basis to be had. But we can still engage in the rational pursuit of truth, in the sense of using reason and experience to temper our conjectures about the world.
As for your future reading, go with your interests, of course, but I can still recommend this short article articulating this position: https://www.science.org/doi/10.1126/science.284.5420.1625
The beauty and clarity of Popper's view is relinquishing justification and the search for a "basis", which reason and rationality are not capable of providing, but still maintaining rationality, empiricism, and the pursuit of truth. It's worth keeping in mind at least, as a possible different path that eschews the use of justification and "good reasons" but retains the use of reason and truth as the aim of science. If ever you stop believing in miracles, you need not despair of reason just yet, give Popper's view a shot first :)
I'll leave you with a final Popper quote:
And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’ The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. …. The proper answer to my question ‘How can we hope to detect and eliminate error?’ is, I believe, ‘By criticizing the theories or guesses of others and – if we can train ourselves to do so – by criticizing our own theories or guesses.’ …. So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring – there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.’
Whew, you wouldn't believe the number of times I've heard the "Popper is a positivist" claim. From Stephen Hawking, for instance. I don't mean that as an indictment of the person making the claim, really - I mean you don't have to know everything - but of the secondary sources that taught people wrong.
Popper does claim truth for his theories, though, in the sense of theories being true through correspondence with reality, but without us being able to know whether they are true. I agree that, while interesting, verisimilitude never managed to be very clear or coherent. But his basic "logic of scientific discovery" does not rely on it.
There's an interesting bit in Popper's Realism and the Aim of Science on Kuhn, where Popper basically says he has no problem with Kuhn (or at least a non-relativist reading of him) and that Kuhn did good work on describing the scientific process, but that this doesn't really clash with Popper's views.
But I believe you are also mistaken, and your view differs from Popper's in a more profound way. If you open an electronic copy of Popper's book (https://philotextes.info/spip/IMG/pdf/popper-logic-scientific-discovery.pdf), hit ctrl-f, and search for "degree of corroboration" you will find that that phrase occurs 84 times -- about once every five pages for the length of the book. So, while his notion of merit is not defined in terms of truth or probability of truth, he does hold that repeated, diligent, failed attempts to disprove a theory tend to progressively confirm its merit (or to use his word, its "mettle") -- which is a far cry from doing nothing.
Sure, Popper is developing the idea of degree of corroboration in that book, so he mentions it a lot. But no degree of corroboration can change the epistemic status of a theory, which always remains a conjecture. Like I said, it's a common mistake, and Popper shares some of the blame for it by speaking about "preference" in the context of corroboration, which sounds a lot like justification, or like we "rationally ought" to believe the better tested theory as if it had a greater likelihood of being true, or something like that. Popper did a lot to muddy the waters here. But corroboration is a measure of the state of the critical discussion, and not in any way a measure of the justification, reliability, probability, etc. of a theory. With regard to the epistemic status of a theory being adjusted by evidence, which is what is relevant to our discussion, corroboration does nothing. Here's Popper saying it outright, in Objective Knowledge 1972 (1979 revised edition), p. 18:
By the degree of corroboration of a theory I mean a concise report evaluating the state (at a certain time t) of the critical discussion of a theory, with respect to the way it solves its problems; its degree of testability; the severity of tests it has undergone; and the way it has stood up to these tests. Corroboration (or degree of corroboration) is thus an evaluating report of past performance. Like preference, it is essentially comparative: in general, one can only say that the theory A has a higher (or lower) degree of corroboration than a competing theory B, in the light of the critical discussion, which includes testing, up to some time t. Being a report of past performance only, it has to do with a situation which may lead us to prefer some theories to others. But it says nothing whatever about future performance, or about the "reliability" of a theory.
As for the missile example:
@squeecoo: I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system.
Ok that is a relief to hear, but it is not consistent with your other statement above (corroborating evidence does nothing), so it seems you misspoke.
This would be my conjecture, motivated in part by how poorly tested quantum computing is, but not justified or "based" on that. It's my best guess that has taken into consideration the evaluation of the state of the critical discussion on quantum computing (how well corroborated it is), but is not justified by it and remains a guess/conjecture. We can certainly take the degree of corroboration into consideration when deciding what to believe, but it can never elevate our beliefs beyond the status of conjecture, and it is in this epistemological sense that corroborating evidence does nothing.
I think I am beginning to understand your position better. So, here is my question. Do you think that the preference for acting on a better-tested theory over acting on a worse-tested theory is an arbitrary, subjective preference? like, some people like chocolate; some people like vanilla; different strokes? I assert that it is only rational to be more willing to act on a better tested theory. When did anybody ever have to accept a theory? By have to do you mean rationally ought to? If rationally ought to is what you mean, then, as I said, I disagree.
Questions of subjective/objective are always tricky, and I can answer this question on several different levels. Those who think rationality can lead to justified beliefs think that justification and evidence can make it so that we objectively rationally ought to believe a justified theory, as you say. Popper and I reject this. Theories (or beliefs in general) cannot be justified. At all. However, if we are interested in finding the truth (and this is also a subjective goal, one might be more interested in, say, propaganda), we should try to eliminate any erroneous beliefs that we have, and our tool for this is rational criticism and experiments. So we should try to deploy these tools as much as we can if we are interested in the truth, and we thus want our theories to be as severely tested as possible. No matter how well-tested, however, our theories remain conjectures tempered by rational criticism.
We are also not mandated by reason (in Popper's view of science) to prefer the better-tested theory. It's not the case that we rationally ought to accept the better tested theory. We could for example be super stoked about a poorly tested theory in preference to a better tested one - but the thing to do then is to try and come up with stronger tests of our preferred poorly tested theory, since in the search for truth we should try to test our theories as strongly as possible in order to eliminate error. This is subjective in the sense that our preference for a theory is our decision, but it's not like a preference for an ice cream flavor - we deploy rational evaluation and empirical experiments to the best of our ability in order to try to guess at the truth and eliminate errors, which we do not do in our ice cream preferences. This use of the rational method of criticism in the search for truth is what makes the difference and what makes our decision rational in the sense of using critical reasoning, although this provides no objective justification for our decision and it does not tell us what we rationally ought to believe.
I'm not sure I can follow everything you're saying here, but I'm interested in what you find unconvincing about Popper, if you feel like expounding on it. I hope you're not implying Popper was a logical positivist :)
Ok, I was going for a plain language simple answer, but you obviously know your stuff. Tarski's STT in the Popper/Miller interpretation is the theory of truth I adhere to, then.
Truth in the classical sense of correspondence to reality. If I say aliens exist and you say they don't, one of us has hit upon the truth despite both of us guessing. We won't know which of the two claims is true, but one of them is true, i.e. it corresponds to reality.
What would be the truth in the "strict sense", as you put it?
You said that Popper thinks corroboration (failed attempts to falsify a hypothesis) counts as evidence for its truth. Instead, Popper says that theories cannot be verified. The first sentence of the chapter you quote is:
Theories are not verifiable, but they can be ‘corroborated’. [Popper, "The Logic of Scientific Discovery", p. 248]
In the footnote soon after:
I introduced the terms ‘corroboration’ (‘Bewährung’) and especially ‘degree of corroboration’ (‘Grad der Bewährung’, ‘Bewährungsgrad’) in my book because I wanted a neutral term to describe the degree to which a hypothesis has stood up to severe tests, and thus ‘proved its mettle’. By ‘neutral’ I mean a term not prejudging the issue whether, by standing up to tests, the hypothesis becomes ‘more probable’ [Popper, "The Logic of Scientific Discovery", p. 249]
And finally, here's Popper stating the difference between psychological questions of one's state of mind (that one can be "very certain") and epistemological questions of the state of the evidence, where evidence cannot verify hypotheses.
Like inductive logic in general, the theory of the probability of hypotheses seems to have arisen through a confusion of psychological with logical questions. Admittedly, our subjective feelings of conviction are of different intensities, and the degree of confidence with which we await the fulfilment of a prediction and the further corroboration of a hypothesis is likely to depend, among other things, upon the way in which this hypothesis has stood up to tests so far—upon its past corroboration. But that these psychological questions do not belong to epistemology or methodology is pretty well acknowledged even by the believers in probability logic. [Popper, "The Logic of Scientific Discovery", p. 252]
So corroboration is a measure of how well-tested a theory is, and the severity of the tests it has undergone. But corroboration does not provide evidence for the truth of the hypothesis. Here's a quote from Popper, "Objective Knowledge", 21f:
From a rational point of view we should not "rely" on any theory, for no theory has been shown to be true, or can be shown to be true. ... in spite of the "rationality" of choosing the best-tested theory as a basis of action, this choice is not "rational" in the sense that it is based upon good reasons for expecting that it will in practice be a successful choice: there can be no good reasons in this sense, and this is precisely Hume's result.
I like my Popper but I hate looking for quotes - I'm much more interested in the substance of the discussion we're having and the view I've outlined as a response to yours.
Like I said, if both theories A and B have passed all their tests, the evidence says nothing about them. We are free to tentatively accept them as true. We don't have to, though - my guess might be that quantum computing theory is not true, or it might be that I think that quantum computing has been only weakly tested and I'm not willing to bet on it working for my missile defense system. That's fine, but that is the part where I conjecture/guess at the truth. We don't disagree about my mental process, it's just that I think it's conjectural and not warranted by the evidence - the evidence can't tell me what to think and which bet to make and which hypothesis to prefer, the evidence can only contradict a hypothesis and thus force me to reject it if I accept the evidence as true. Everything else is me making my best guess. I'm free to describe my mental state as "very confident" in that process, but that describes my state of mind, not the state of the evidence.
I'll just poke in to say that I think that the mission of science is to discover the actual, literal truth. I've hopefully made this clearer in my response in our conversation below, so I'll just refer to that instead of repeating myself here.
To add content to this post, I'd say that many epistemological perspectives do indeed give up on truth in favor of usefulness or, in some variants of Bayesianism, in favor of our probability estimates. I don't care whether a scientific hypothesis is probably true, I care whether it is actually true - and if it is true, it will also be useful.
The first thing I should clarify is that I think that scientific hypotheses, despite evidence never being able to elevate them above the status of a guess, can be true, really, absolutely true. If we guess right! So if you say aliens exist and I say they don't, we are both guessing (but not randomly: we are motivated, but not justified, by our other background beliefs). But either aliens exist or they don't. So despite both of us just guessing, one of us is right and has hit upon the truth, the absolute truth. So while Newton's L.O.G. is just a guess from an epistemological standpoint, I am also tentatively accepting it as true. I claim it really is true, and I act upon that belief, although my belief in it is just a guess. Does that satisfy what you felt was missing from my position?
As for your question on the missile defense systems example. So let's say I'm choosing between two courses of action based on two different scientific hypotheses. If one of those hypotheses has passed its empirical tests and the other hasn't, the logical situation is very clear: logic and reason dictate that I reject the hypothesis that has been falsified by the tests, since the tests logically contradict it. The hypothesis that has passed its tests I can tentatively accept as true, and I prefer the course of action based on that hypothesis. If both hypotheses have passed all their tests, I would try to conceive of a test that distinguishes between them (a test that one fails but the other doesn't). If this is not possible, the logical situation is also clear, however: if both hypotheses have passed all their tests, the evidence tells us exactly nothing about which one we should accept - we have to decide what to believe.
And this is a crucial aspect of my position: rationality and logic cannot tell us what to believe: we have to make that decision. Reason can, however, tell us what not to believe: we should not believe contradictory things, or in this case hypotheses that are contradicted by test results we accept. Rationality does not provide justifications that tell us what to believe. Rationality is the method, namely the method of critical evaluation and, when possible, empirical testing, which serves to eliminate some of our ideas, hopefully leaving us with true ones. Yes, it'd be great if we could be justified in believing what we believe, but we can't. So we are left with conjectures that we attempt to parse from error by criticism and empirical testing, using logic and reason, with the goal of believing true things. We are rational, in the sense that we use reason and logic to criticize our ideas and hopefully eliminate errors, and our goal is the truth - we aim at having true beliefs. But we can never know that our beliefs are true; we can only guess at the truth, and use reason as best we can to eliminate the guesses that are untrue.
Does this answer your questions? Feel free to ask more if I've been unclear. There are various complications I didn't want to go into (like differences in the severity of empirical tests) for the sake of clarity.
As I recall, Popper held that repeated, failed attempts to disprove a hypothesis count as evidence for its truth (though never certain evidence). Am I mistaken?
You are mistaken, but it's a common mistake. In Popper's and my view, corroborating evidence does nothing, but contradicting evidence falsifies (although also without any degree of certainty).
Your post is completely fine in my opinion. I sense way more smugness and shady thinking from that mod post.
A well thought-out post! However, I reject your Principle of Abductive Inference. The essence of science is falsification. Experiments cannot verify a hypothesis (it always remains just our best guess), but they can contradict and thus falsify a hypothesis. The hypothesis "all swans are white" cannot be verified by any number of white swans (because there may always be a non-white swan out there), but it is contradicted by the observation of a single black swan. Of course, the experiment itself is also just a best guess (maybe the swan is just painted black?). All knowledge is guesswork. However, the logical relationship of falsification holds (the hypothesis is logically contradicted by the experiment), while inductive inference is not logically sound (no amount of verification can "ground" or "prove" that the hypothesis is true).
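The logical asymmetry can be made concrete with a toy sketch (hypothetical code, purely illustrative):

```python
# One counterexample settles a universal claim; no finite run of
# confirmations can. A toy check of "all swans are white":

def first_counterexample(swans):
    """Return the first non-white swan, or None if none has turned up so far."""
    for swan in swans:
        if swan != "white":
            return swan   # the universal claim is now logically contradicted
    return None           # unrefuted so far -- corroborated, never verified

print(first_counterexample(["white"] * 1_000_000))        # None: still just a guess
print(first_counterexample(["white", "black", "white"]))  # 'black': falsified
```

However long the all-white run gets, that None never means more than "not yet contradicted".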
For further reading along these lines, I recommend "The Logic of Scientific Discovery" by Karl Popper, or this shorter and more modern article: https://www.science.org/doi/10.1126/science.284.5420.1625
To answer your three questions:
- Yes, I believe Newton's Law of Universal Gravitation is true.
- How sure am I that it is true? Psychologically, very sure. Logically and rationally speaking, not at all; it's just a guess.
- Why do I believe it, and with that degree of certainty? I believe it because it has passed tests that other competing hypotheses have failed. This does not prove it to be true (with any degree of certainty), as you rightly point out, but given that we accept the results of the tests, it makes it preferable to the competing hypotheses that fail those tests, because they are logically contradicted by those tests. So it's our best guess, because its competitors have been eliminated by experiments, but it is not certain or probable or verified in any way.
Really, you are very close to my position on this, except you want experiments to do more than they can do, and are struggling to find a way for them to do what they cannot, namely provide justification/inference/certainty/likelihood for hypotheses. Experiments can contradict and thus falsify hypotheses, but they cannot justify them. Relinquish the demand for justification, and the logical situation is clean and sound: we make guesses, discard those guesses that don't stand up to experiments, and tentatively accept those that do.
Definitely check out Electric Dreams, a Black Mirror-styled (but I think superior) anthology series based on the short stories of Philip K. Dick. The plots are way more insane than Black Mirror's (only PKD can come up with such crazy setups, really), but what I like most about Electric Dreams compared with Black Mirror is that BM episodes are about a given technology-related idea and its implications, while ED has a crazy, multi-layered sci-fi setup in every episode whose point and climax is ultimately a very human decision or insight by the main character that pushes the sci-fi stuff to the background. In other words, every BM episode is a one-trick pony addressing an interesting sci-fi premise, but ED episodes start with an interesting sci-fi premise and end by making a point about a universal human-condition topic.
To give an example, one episode is about the wife of the main character giving them a virtual-world vacation as a birthday present. The virtual world is built from the main character's subconscious, and in it, their wife (who gave them the present) is dead. As the main character starts getting confused about which world is actually real, the initial one or the virtual one, they have to choose: is the world in which their wife is alive and they are happy more real to them, or is it the world in which their wife is dead and they are depressed that feels more real? (I'm using "them" for the main character because their genders differ between the two worlds.) So while there's an initial crazy sci-fi setup, the episode is ultimately about whether being sad or being happy seems more "true" and "real" to the main character. And this is one of the more straightforward episodes. Not all of them are top-notch, but the ones that are good are truly good. Heartily recommended.
I understand what you are saying, although I don't think it's completely true: the VAERS form asks you to report a vaccine-related adverse event, not simply that someone died post-vaccination. Also, old people were regularly given at least flu vaccines prior to COVID, so the effect you describe of old people coincidentally dying after vaccine administration was at least partially present before COVID, so I don't think that this is a sufficient explanation for the massive increase.
Thank you for the thoughtful response! When we get to this level of analysis, I am of course willing to admit that there are many unknowns, and that the data is not sufficient for strong and clear conclusions on mRNA vaccine safety, although I would argue that there are clear indications that serious concerns exist. But it is the lack of willingness to investigate these worrying signals from the data and the blind repetition of the "safe and effective" mantra that is my main cause for concern. If you refuse to look for problems, you won't find any, right?
VAERS, the main monitoring system for vaccine safety, indicates a massive, and I mean MASSIVE, concern regarding the relative safety of COVID vaccines. I phrased my comment on VAERS carefully - it's definitely not 100% reliable, but it shows a massive relative difference in reported vaccine-related deaths since the introduction of COVID vaccines. Is this not cause for concern? Even if only 3% of the post-COVID VAERS reports are real and 97% are bogus (and the "increased awareness" argument is a huge stretch to support such a strong claim), the COVID vaccines would still be causing as many reported deaths per year as all other vaccines put together have caused in 30 years combined. And if VAERS is complete and utter trash, as you say, isn't that even MORE cause for concern? In that case, we have NO population-level vaccine safety monitoring system of note at all. If you refuse to look for problems, you won't find any, right?
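To spell out the arithmetic of that discount (the numbers below are placeholders chosen only to mirror the ratio I'm describing, NOT actual VAERS counts):

```python
# Structure of the "even if 97% are bogus" argument. Placeholder numbers,
# not real VAERS figures -- only the relative magnitudes matter here.

other_vaccines_deaths_30yr = 100   # hypothetical: all other vaccines, 30 years total
covid_reports_per_year = 3_400     # hypothetical: post-COVID death reports, one year
credible_fraction = 0.03           # grant that 97% of those reports are bogus

credible_per_year = covid_reports_per_year * credible_fraction   # 102.0
print(credible_per_year >= other_vaccines_deaths_30yr)           # True
```

Even after a 97% discount, the hypothetical one-year figure still matches the hypothetical 30-year total - that's the comparison I'm pointing at.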
As for the second study I brought up (https://pubmed.ncbi.nlm.nih.gov/37163200/), I agree that the clinical trials used to approve the COVID vaccines, which are the only large clinical trials that have been run on them, were not designed to assess all-cause mortality risk from the vaccines, and the sample showing no effect on overall mortality is very small, yes. So where's the follow-up? VAERS is trash, and the trials were not designed to assess overall mortality risk. If you refuse to look for problems, you won't find any, right?
Your position on the severe adverse events risk study is not entirely clear to me from your response. It's not about pediatric populations; it's that the study found a greater increase in severe vaccine-related side effects (the kind that land you in the hospital) than the reduction it found in severe COVID events, compared with the control group. The COVID vaccines cause more hospitalization-level adverse events than the COVID hospitalizations they prevent, according to that study (https://www.sciencedirect.com/science/article/pii/S0264410X22010283). The authors call for a harm-benefit analysis for mRNA COVID vaccines, which has never been done. But if you refuse to look for problems, you won't find any, right?
Finally, we have the Nature article finding that the mRNA vaccines produce random proteins. Which ones? What are their effects? Surely Pfizer and Moderna tested whether their vaccines were actually producing what they were supposed to, at some point? Or was this a total surprise, and we "could not have known at the time"? Of course, if you refuse to look for problems, you won't find any.
You requested other sources, so here's the BMJ (top medical journal) desperately calling for follow-up studies on COVID vaccine safety: https://www.bmj.com/content/379/bmj.o2527?fbclid=IwAR3e8Rv7UdOUjx60Vf7CnrtZAcM7rCVxl5IRpT76ngyTokkALHVCbiO3Naw
And I wonder how long the spike protein produced by COVID vaccines keeps being produced? Here's a study finding that it's still being produced 60 days after vaccination: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8786601/ I thought the vaccine cleared out in a week or two, as I was told? Does it keep producing (these random) proteins longer than 60 days? How long are these vaccines active in the body? Who knows! If you refuse to look for problems, you won't find any.
Here's COVID vaccines causing myocarditis (perhaps because production of random proteins by the vax causes an autoimmune response in the heart in those unlucky to have the wrong random proteins produced by the vax?): https://academic.oup.com/eurheartj/article/44/24/2234/7188747?login=false
Here's COVID vaccines causing vaginal bleeding: https://www.bmj.com/content/381/bmj-2023-074778 How? Why? Who knows! There's plenty more studies like this showing worrying signals. Modifying immune response in unknown ways for unclear reasons? Sure: https://www.medrxiv.org/content/10.1101/2023.09.29.23296354v1.full.pdf Causing seizures in children? Yup: https://www.medrxiv.org/content/10.1101/2023.10.13.23296903v1.full.pdf
Yes, you can pick apart any of these studies. They are all limited at least by being fairly small given the relative rarity of these events. None of them are proper clinical trials. But that's because these studies are the only ones that have been done. If you also dismiss population-level monitoring systems like VAERS, you can claim that there is no clear evidence, sure. If you refuse to look for problems, you won't find any. But we do now know that vaccines remain active for 60+ days and that they produce random proteins they are not supposed to (these are lab studies on how the vax works). And various data sources, flawed as they are, indicate strong safety concerns. Nevermind that this should have been investigated before giving these vaccines to billions (or coercing people into taking them). The companies are shielded from liability, and politicians will point to the medical community missing or ignoring these issues and say "we could not have known" (although scientists previously considered credible tried to raise concerns, but were sidelined or ostracized). But could we not at least look carefully at the potential issues NOW, before continuing to use this technology that was never deployed in humans before?
Anyway, I hope I've offered some insight on the anti-COVID vax position here. I'll shut up now unless there's something I really need to respond to, since this is the small-scale questions thread :)
I'll do another reply since I think we're still talking past each other a bit.
And yeah, it's a shame our talk is buried so deep nobody is likely to read it :D Still, I found it really fun and useful!
First, let me say I don't take it for granted that objective reality exists - I believe it does, which is a conjecture like anything else, and open to criticism and revision. Objective truth, however, would exist even if there is no objective reality: in that case, the statement "there is no objective reality" would be objectively true, and this is what I would like to believe if it is true. Popperianism (or, as it's less cultishly called, critical rationalism) requires no assumptions that are not in principle open to critical discussion and rejection, which is in this view the main method of rational inquiry.
And, if I haven't made it clear enough, I'm actually a big fan of Bayesianism. If I weren't a Popperian, I'd be a Bayesian! I'd even say it could add a lot to Popperianism: although I think the basic Popperian picture of rational inquiry is correct, the formalization of the process of critical analysis that Bayesianism could add to Popperianism could definitely be useful (although I'm not smart enough, and am too confused by the many, many variants of Bayesianism, to attempt such a project myself). Overall though, some variants of Bayesianism, yours I believe included, are right about almost everything important to Popperians, especially the central point: accepting the skeptical and Humean objections to rational justification, while retaining the use of reason and evidence as the method of science. Popperians would add "and objective truth as the aim of science", on which I'm still not quite sure where you stand. The main disagreement, as I see it, is on the role of evidence, which is negative for Popper - evidence can only contradict theories - and positive for Bayesians - evidence can support theories, raising their subjective probability.
I think the discussion of whether objective reality exists and whether we can be certain of it is a bit of a sidetrack here - I completely agree with everything you said on it: we can never have direct access to objective reality (Popper would say that all our observations are "theory-laden"), and we cannot be sure that it exists, and I'm not saying I require you to demonstrate that it does in order to practice Bayesianism. My main point is that Bayesian calculations are unmoored from objective reality (they say nothing about it), unless you smuggle in additional induction-like assumptions that allow you to make inferences from Bayesian calculations to objective truth, in which case you run into Humean objections. And this is where I'm still uncertain of your position. You say:
But do you think your observations are evidence that your subjective reality aligns with objective reality? If yes, how does this relationship work, and how does it avoid Humean objections? If no, like I said, that'd be for me an unacceptable retreat from talking about what we are actually interested in, namely objective truth, not subjective probability. We can agree to disagree on that, that's not a problem, but I'm not totally clear what your position is on this, given that you have said things like the quote above, but have also talked about being able to convert subjective probability into truth. I'd like to understand how you think this works, from a logical standpoint. Or is it perhaps that your position is something analogous to Hume's solution to the problem of induction (which I also disagree with) - namely that we act as if induction is rational although we are irrational in doing so, for we have no other choice? This would be saying that while strictly speaking Bayesian calculations have no direct relationship to objective truth, we act as if they do. This is what I would gather from the above quote, but you've also talked about probability-to-truth conversion, so I'm still unclear on that point.
Let me attempt an analogy using the map and territory metaphor to describe how I see our positions. It's a spur-of-the-moment thing, so I apologize in advance if it misses the mark, but in that case you explaining how it does so will likely be illuminating for me.
So we are blind men in a maze (the "territory"), trying to map it out. We are blind because we can never directly see the maze, let alone get a bird's eye view of it. Now many people, the majority even, think that we are not blind, and convince themselves that they can see the maze (that we can have justified true beliefs directly about objective reality). You and I agree that this is not possible, that our mapping of the maze is ultimately guesswork. We can't even be sure there is a maze! But we're still trying to figure out how to act and what to believe.

Now I think the best way to go about mapping the maze is to propose conjectures on the layout of various parts of it (i.e. scientific hypotheses), which will always be guesswork, and then test them: if this map section I've proposed is correct, for instance, I should be able to walk 36 steps in this direction and then turn left. If I attempt this and run into a wall, then my proposed map section isn't right - I've got to reject it (the hypothesis is falsified). Of course, I might have miscounted the steps, a wall may have collapsed, or any number of things might have messed up my experiment - like in the neutrino example, the experiment might be wrong, and falsification is guesswork too. But this is the role played by evidence: attempting to walk the maze, i.e. confronting our hypotheses with reality and seeing where they clash, albeit blindly and without any justification. If my conjectural map seems to work out, if it passes the test, this says nothing additional about it corresponding to the maze. Evidence is used to contradict our guesses, not support them, in my view.

And this is where we start to disagree. You think that every step you take that doesn't contradict your proposed map (all supporting evidence for the hypothesis) raises your subjective probability/expected utility/confidence in your proposed map of the labyrinth. To which I say: ok, your confidence is increased by Bayesian calculation, but what does that tell us about the labyrinth? You are calculating your confidence in the map, but it's the labyrinth we are interested in, and I'm not clear on whether and how you translate that confidence into claims about the labyrinth. I just directly make claims about the labyrinth, which are guesses, and my subjective confidence in them is irrelevant - the correspondence of my guesses to the labyrinth is what matters and what I'm trying to guess correctly. If you don't claim anything about the labyrinth at all and are only talking about your confidence in the map, then I think you're missing the mark - it's the labyrinth that we are interested in.