In many discussions I'm pulled back to the distinction between not-guilty and innocent as a way to demonstrate how the burden of proof works and what the true default position should be in any given argument. Many people have no problem seeing the distinction, but for some reason many intelligent people don't.
In this article I explain why the distinction exists and why it matters in real-life scenarios, especially when people try to shift the burden of proof.
Essentially, in my view the universe we are talking about is {uncertain, guilty, innocent}, therefore not-guilty is guilty′ (the complement of guilty), which is {uncertain, innocent}. Therefore innocent ⇒ not-guilty, but not-guilty ⇏ innocent.
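To make the set logic explicit, here is a minimal sketch (the set names are mine, purely illustrative):

```python
# The universe of epistemic positions from the article.
UNIVERSE = {"uncertain", "guilty", "innocent"}

GUILTY = {"guilty"}
NOT_GUILTY = UNIVERSE - GUILTY   # the complement: {"uncertain", "innocent"}
INNOCENT = {"innocent"}

# innocent ⇒ not-guilty: INNOCENT is a subset of NOT_GUILTY.
assert INNOCENT <= NOT_GUILTY

# not-guilty ⇏ innocent: "uncertain" is not-guilty without being innocent.
assert not (NOT_GUILTY <= INNOCENT)
```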
When O. J. Simpson was acquitted, that doesn't mean he was found innocent, it means the prosecution could not prove his guilt beyond reasonable doubt. He was found not-guilty, which is not the same as innocent. It very well could be that the jury found the truth of the matter uncertain.
This notion has implications in many real-life scenarios where people want to shift the burden of proof when you reject a claim that is not substantiated. They wrongly assume you are claiming their statement is false (equivalent to innocent), when in truth all you are doing is staying in the default position (uncertain).
Rejecting the claim that a god exists is not the same as claiming a god doesn't exist: the rejection doesn't require a burden of proof because it's the default position. Agnosticism is the default position. The burden of proof is on the people making the claim.
Notes -
It seems to me you did not prove that. The default position is that I do not know if the default position is theism, atheism, agnosticism or something else.
By the way, if you argue that it is not proven that god exists, it means that you also argue that it is at least possible that he does not exist. So you are actually arguing (a bit) in favor of atheism.
Yes, unless you consider their definitions. My definitions (which are shared by many) are:
theism: believing that a god exists
atheism: not believing that a god exists
agnosticism: not knowing that a god exists
Under these definitions the default position is atheist/agnostic. Atheism answers the question of belief, and agnosticism the question of knowledge: they are orthogonal.
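To illustrate the orthogonality, here is a small sketch of the four combinations these definitions generate (my own illustration, using the common 2×2 reading of the terms):

```python
# Belief (do you believe a god exists?) × knowledge (do you claim to know?).
for believes in (True, False):
    for knows in (True, False):
        label = ("" if knows else "a") + "gnostic " + \
                ("theist" if believes else "atheist")
        print(f"believes={believes}, knows={knows}: {label}")
# -> gnostic theist, agnostic theist, gnostic atheist, agnostic atheist
```

Under these definitions someone can be an agnostic atheist: not knowing whether a god exists, and not believing one does.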
I believe atheism is the default position, but some people feel that atheism is stronger than agnosticism (I don't agree), or that atheism means "no gods exist" (I don't agree). I skipped a full explanation about these terms because that wasn't the point of the article.
Sorry, what do you mean by that? Are you saying you can believe something you know to be false? Because that is a consequence of the orthogonality.
Yes, you know X is false, so you believe X is false. But an agnostic is not someone who believes X is false, but someone who doesn't believe X.
The OP seemed pretty even-handed when it came to the atheism/theism position.
After all, since he argued that it is not proven that god doesn't exist, it means he also argued that it is at least possible that he does exist. So he is actually arguing a bit in favor of theism.
Standard of proof, not burden, I think.
One example is the effects of climate change. In my view the consequences, positive and negative, are sufficiently uncertain that one cannot determine the sign of the summed effect. That doesn't mean I believe climate change isn't bad, it means I don't know, and I suspect nobody knows, although obviously many people think they do.
I agree. Climate change is one of the areas I'm most skeptical about. I believe that if true it's one of the most important issues of our time, and I've seen evidence that climate change is indeed happening, and it's indeed caused by human activity, but evidence isn't proof.
I have also seen enough evidence to be skeptical about the amount of damage human activity is actually causing--as opposed to random fluctuations. And also to be skeptical about the irreversible damage, for which there's evidence that it's actually reversing.
So my conclusion so far is the default position: I don't know.
The primary issue is that the climate is sensitive to many inputs, and certain inputs are rare but have an outsized effect if they do occur, e.g. a major volcanic eruption or certain types of solar activity.
So for a given period humans could be the major input and the trend could be towards warming. But then a single major event can override that all at once.
This directly implies that our models about future climate could be completely thrown off by a single event, and thus we could take extensive efforts to mitigate our own input and it might mean utterly nothing due to a natural occurrence.
This naturally generates irreducible uncertainty about the future, which complicates planning.
Yes, but the true danger is certainty. People in both the pro and anti camps make absolute statements like "the world is going to end in ten years" and "climate has always changed", and these statements can't possibly be rationally substantiated in such a complex system.
The only path forward is epistemic humility, and both camps seem to lack it.
The Scottish legal system has the verdict of not proven, though reformers seem to be trying to get it abolished.
I agree it should be abolished. It should not matter what the jury thinks beyond what is proven, at least not in legal proceedings; only whether or not the prosecution established certain facts and supported them with enough proof. The jury's personal opinions should play as little a role as possible. Of course, there's no way to exclude them completely, but at least we shouldn't pretend there can be a case where the jury knows The Truth rather than merely looking at the prosecution's work and deciding whether they did a convincing enough job. It is a very rare occasion that the jury would actually know what happened beyond the case made by the prosecution, so ideally only two verdicts should be possible - either convincing enough (called "guilty") or not convincing enough (called "not guilty"). "Not guilty" is "not proven" - it can't be anything else, as the jury aren't clairvoyant, just as "guilty" can only mean "proven". In both cases we should recognize the limits and fallibility of our system, and accept that's the best we can do. Adding more states to it means we're pretending there's some higher knowledge we could use but somehow don't. But where would that higher knowledge come from?
Isn’t that what a jury does? Evaluate the evidence and make a judgement call?
Assuming Scotland uses the normal standard of beyond reasonable doubt, it seems to me "not proven" is where you'd find guilt on a more-likely-than-not standard, but it hasn't been established at the highest standard.
Yes, and if the call is "not enough proof" then the verdict is "not guilty". Not "maybe guilty as hell, but we are letting him off on a technicality". "Not guilty" and that's it. You cannot throw the power of the state at a person, have that power fail to prove guilt, and still have the person bear the stain of the accusation without any ability to clear themselves.
It matters to other people whether the evidence shows that the defendant is innocent or fails to show that he is guilty. We don't want to imprison someone accused of rape unless we are reasonably sure he is guilty, but a woman might want to avoid being alone with him unless she is pretty sure he is innocent. Similarly for many other crimes.
That's the same thing. If it fails to show they are guilty, then they are innocent, at least as far as the legal system is concerned. All the rest can be done on Twitter, that's what it's for.
That's not a task for a legal system, and it cannot reasonably perform it. Moreover, putting such a task on it would make it very easy to permanently stain somebody's life with an accusation that is impossible to refute. If I say you're a rapist, and the court says "there's no proof but we're not 100% sure - who knows what happened there, we weren't there", then how do you prove you're not? You can't sue the court for being not sure and demand they make up their minds. And there's no process to make them sure. So you are now "possibly a rapist" forever, even though there's absolutely no proof anybody could find of it.
No, they're not. That's the whole point of the article. The legal system considers a person who has been acquitted to not be found guilty, which is why the jury renders the verdict quite literally "not guilty".
In the eyes of the law, it is the same. Every legal consequence is the same. Of course, everybody can have their own opinions - but that is no longer the domain of the law. That's my whole point - if we let the law talk about something that is beyond the lawful processes, we are asking for trouble.
And yet every legal resource out there claims they are most assuredly not the same.
Would you be able to support your assertion with some quotes from said resources?
Just google: "not guilty" versus "innocent":
Cornell Law School:
MacDonald Law Office:
Court Review:
The Associated Press:
But the reality is that no amount of evidence is going to make you accept you were wrong, is there?
My sister served jury duty last year and would have quite liked this option.
Apparently, the prosecution made a complete hash of the case. Their best evidence was that someone the defendant knew ended up in possession of the stolen goods. Since they were trying him for assault during the original theft, this was not sufficient.
My sister did have heated discussions about burden of proof with other jurors. The defendant likely committed that crime, but the prosecution was either too inept or lazy to address the object level.
I believe it went to “mistrial,” rescheduled for a new date, and the judge expressed her disappointment in the professionals involved.
Okay, fair play, you at least didn't get anything horribly wrong in this one. It wasn't good, but you didn't get the basic premise of your article fatally wrong.
Ah, perhaps my above post would come off as a bit strange without context. The last two times I read felipec's crossposts, they made catastrophic mistakes in understanding the topic, but the tone was rather smug. I said that the next time he wrote a post and linked it here, if he was smugly wrong again I would stop reading his posts.
So I'm acknowledging that he did better this time. His engagement in the thread is also a bit better.
I was not smug, you believed I was smug. Big difference.
Technically speaking, he said the tone was smug.
Doesn't that claim that I was smug?
Are you really going to start a second branch off this conversation in the exact same direction?
It's not the same direction. In subthread a I'm talking about why I don't believe it's good to elevate opinions to facts; I'm not talking about who/what was claimed to be smug anymore. In this subthread b I'm asking a simple question: doesn't that say that I was wrong?

In fact, before I started subthread b I was going to make a comment in subthread a that if I had said "the tone was not smug, you believed it was smug" my point would not have changed at all; even though that would have been more technically correct, your initial comment (he said the tone was smug) would not have applied, and I'm pretty sure I still would have been downvoted. I removed that comment because I didn't want to muddle my point, and I believe it's relatively unimportant what might have happened had I said something else. If you really believe it would have made a difference, I can edit the comment, but I don't think anything would change.

Then I read that he did in fact call me smug a few sentences later, so even if I'm not talking about "my tone"/"the tone" in subthread a anymore (I'm talking about something more important: my original point), I wonder if you still believe he didn't call me smug, when in fact he straight up did. You don't have to answer.

My actual point is in subthread a.
Definition of smug: "highly self-satisfied". If he said the tone was smug, it's because he believed the person writing such prose was "highly self-satisfied".

No, I don't think you can make that assumption. Writing is hard, people come across in unexpected or unintended ways all the time, and people can intentionally fake tone that they don't actually feel.
If somebody is writing in an angry tone, it means that person is acting angry. You can say that you can't assume that just because a person is acting angry it means that person is actually angry.
But everyone assumes that if somebody says "you seem angry", that carries an implication that the person is claiming that you are indeed angry.
Sure, technically the person did not claim that you are actually angry, but I find it extremely curious that you bend over backwards to defend /u/magic9mushroom's claim that "the tone was rather smug", and explore what could have been the target, but you don't extend even a fraction of the same generosity to my prose.
Did my prose actually have a smug tone? That is the question that matters, not what was the target of /u/magic9mushroom's comment. He made the claim that "the tone was rather smug"; that's not a fact, that's a subjective opinion which he is presenting as fact. If it seemed smug to him, that's fine, he could say "the tone seemed rather smug", and that would be accurate (according to him).
But just because a tone seemed smug to a person doesn't mean the tone was actually smug, just like if somebody seems angry that doesn't mean the person is actually angry: that's just your perception.
Why aren't you generous towards that distinction as well?
Yeah, honestly.
Pretty much everything you write comes across as extremely smug. To me, at least, I can't state how anyone else feels. But it absolutely does to me.
I dunno if that's your intention, but if it's not your intention, I recommend revisiting your writing style.
Technically, yeah. But at some point there's the fact that we all speak as to our own opinions, and you just gotta read a little bit of that in implicitly. From a recent ACX post:
I do strongly encourage people to couch their words as opinions if they're diving into areas that are highly controversial and sensitive, and I've pushed to the point of warning (and maybe even bans) if people keep doing it. But this (1) isn't sensitive, and (2) is phrased specifically as "the last time [he] read your posts".
I do not think it is novel. I specifically said a lot of people don't have any problem seeing this distinction, and I would expect most rationalists to see it.

But it's a fact that a lot of people do not see this distinction, and not dull people: a lot of intelligent people. I've debated many of them, and it's a chore to explain again and again how the burden of proof actually works and why not-guilty ≠ innocent. Next time I'm in a debate with such people I can simply link to this article, and presumably so can other people in similar debates.

Moreover, you seem to be overlooking the fact that what is obvious to you (and many rationalists) may not be so obvious to everyone. This bias is called the curse of knowledge. People have a tendency to fake humility, because people don't like arrogance, they like humility, but the fact is assuming everyone is as intelligent and/or knowledgeable as you is not always productive.
In fact, the whole motivation behind the article is that someone I know refused to accept there's a genuine difference. He is not unintelligent.
The burden of proof is just a convenient default rule and only that, a convenience.
The burden of proof in reality is not just on the person making a claim but on everyone, if the verisimilitude of the statement has a utilitarian impact. It is my moral duty to steelman arguments from others even if I disagree with their initial formulation. The erudition and epistemological level of the author is semi-contingent, and therefore it even follows that the more deficient that person is, the more effort I should allocate to steelmanning their statement: as intellectual solidarity in the economic sense, as a political way to reduce inequalities, and also, for various reasons, to increase the coverage of ideas in the semantic mental search space.
unrelated:
The ad-hoc legitimization of inaction as a non-crime, or as a lesser crime than action, is the biggest cause of suffering on this planet.
I disagree. The burden of proof is an inherent property of the claim, not the person.

If a deficient person makes claim X, and you find it's your moral duty to create a steelman argument designed to maximize the defense of claim X, claim X still has the burden of proof. This means I, as a rational person, should not believe X until such time as anyone--whether it's you, the deficient person, or somebody else--substantiates the claim.

And I don't have to disprove X; X is not considered true by default.
This is all good for legal systems, but I do wonder, how should people carry this out in the public world? Should not guilty be treated in social situations as innocent?
This all reminds me of one of the most irritating feminist memes I saw come out of the third wave feminist revival of 2012 to 2018 regarding high profile rape accusations, usually the kind of he-said-she-said things you'd see on college campuses, adjudicated by the college kangaroo courts and brought to everyone's attention by popular magazines who wanted to take a side. The meme was, "Why should we presume that someone who's been accused of rape is innocent? Shouldn't we instead presume that the accuser is innocent of making a false accusation?"
Fortunately, I think the climate has calmed down in this respect, and I haven't seen such black and white thinking in regards to these issues since then. Maybe Amber Heard had something to do with it, idk.
I don't think so. If Jake is accused of sexually assaulting Rachel and you consider Jake innocent, you would have a tendency to dismiss evidence that Rachel is telling the truth (since people have a tendency to not like being wrong). Also, people would justly ask you for evidence that Jake is innocent, since you do actually have a burden of proof in this case. And then if incontrovertible evidence comes out that Rachel was telling the truth, you would have been proven wrong.
If instead of considering him innocent you say "the jury is still out", then you are open to evidence of guilt, you don't have a burden of proof, and if Jake turns out to be guilty you would not have been proven wrong.
It's OK to say "I don't know".
A lot depends on the details, but in sexual assault there can be an honest disagreement on the facts (e.g. Rachel could be mistaken, or perhaps Rachel's subjective view does not comport with the objective view).
That is, the world isn’t quite as black and white as you seem to be positing.
There is no "gray prison", there's only prison. In the real world at some point decisions must be made.
I have come up with this exact point and argument before but I take it in the opposite direction than the feminist meme does. The symmetry of he-said-she-said just means that we should ignore them. Which is a little unfortunate.
I'm not overly familiar with the Bayesian way of thinking. I have seen it expressed very often in The Motte and similar circles, but I don't see why anyone would conclude that this is a valid way of reasoning, especially when it comes to beliefs. I do understand Bayes' theorem, and I understand the concept of updating a probability; what I don't understand is why anyone would jump to conclusions based on that probability.

Let's say through a process of Bayesian updating I arrive at an 83% probability of success, should I jump the gun? That to me is not nearly enough information.
Now let's say that if I "win" I get $100, and if I "lose" I pay $100. Well, now I have a bit more information and I would say this bet is in my favor. But if we calculate the odds and adjust the numbers so that if I lose I pay $500, it turns out that I don't gain anything by participating in this bet; the math doesn't add up: ((5 / 6) * 100) / ((1 / 6) * 500) = 1.

Even worse: let's say that if I win I get $100, but if I lose I get a bullet in my brain. I'm literally playing Russian roulette.
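To make the arithmetic concrete, here is a quick expected-value check of those two money bets (my own toy calculation, using 5/6 ≈ 83%):

```python
p = 5 / 6  # ≈ 83% probability of winning

# Win $100 / lose $100: positive expected value, the bet favors you.
print(p * 100 - (1 - p) * 100)   # ≈ 66.67

# Win $100 / lose $500: expected gain cancels expected loss.
print(p * 100 - (1 - p) * 500)   # ≈ 0 (up to float error)
```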
83% tells me absolutely nothing.
Real actions in real life are not percentages, they are: do you do it or not? and: how much are you willing to risk?
You can't say I'm 60% certain my wife is faithful, so I'm going to 40% divorce her. Either you believe something, or you don't. Period.
Even worse is the concept of the default position in Bayesian thinking, which as far as I understand is 50%.
Edit: I mean the probability that the next coin toss is going to land heads is 50%.
So starting off, if I don't know whether a coin is fair or not, I would assume it is. If I throw the coin 100 times and 50 of those times it lands heads, the final percentage is 50%. If I throw the coin 1,000,000 times and 500,000 of those times it lands heads, it's still 50%, so I have gained zero information. This does not map to the reality I live in at all.
My pants require at least two numbers to be measured properly; surely I can manage two numbers for a belief. So let's say before I have any evidence I believe a coin is fair 50%±50 (no idea); after throwing it a million times I would guess it's about 50%±0.01 (I'm pretty sure it's fair).

So no, I'm not sold on this Bayesian idea of a continuous belief. I can't divorce my wife 40%, or blow my brains out 17%. In the real world I have to decide if I roll the dice or not.
Does this mean that what you said right after is how you would see the coin case?
If so, well, a Bayesian wouldn't use just one number here either. And it would indeed lead to having more information after throwing the coin a million times. If this doesn't go contrary to what you meant to say, ignore. If it does and you don't see why, feel free to ask a follow-up.
Do you have any source? Everyone I've debated says it's a single number: 50%.
This article in Stanford Encyclopedia of Philosophy goes to great lengths to explain why the standard view of degree of belief is limited and proposes alternative views using imprecise probabilities: Imprecise Probabilities. It seems to confirm my belief that Bayesians consider only a single probability.
Both things are true. But it's like, they'd say height is a single number but if they don't know your height precisely they'd assign it a probability distribution, which is not just a number.
So yes, Bayesians give probabilities of events as a single number. The coin would have a single number P representing its probability of the event "will land heads" when flipped.

But if the Bayesian isn't certain about what the value of that P is, and since that value has to be a single precise number, he would add a previous step to the experiment to encode his uncertainty, before the coin flips happen. A step that only answers the question: what world are we living in, one where the coin is unbiased, somewhat biased, very biased? i.e. what's the exact value of P? Here, P does not represent a probability but a random variable, with its whole probability distribution, parameterized by as many numbers as you need (typically here just a beta distribution with 2 parameters that sort of map to your two numbers in "50%±50").

Then as coins are flipped and you get results, this distribution of P gets updated and sooner or later it gets narrower around the real coin bias, just like you said it should happen. This is not like stretching Bayesianism into a pretzel; it's a very canonical formulation.
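For what it's worth, here is a minimal sketch of that canonical formulation (my own illustration, assuming a flat Beta(1, 1) prior):

```python
from scipy.stats import beta

def posterior(heads: int, tails: int):
    # Beta(1, 1) is the flat "no idea" prior over the coin's bias P;
    # each observed head or tail increments the corresponding parameter.
    return beta(1 + heads, 1 + tails)

for heads, tails in [(0, 0), (50, 50), (500_000, 500_000)]:
    dist = posterior(heads, tails)
    print(f"{heads}/{tails}: mean={dist.mean():.3f}, std={dist.std():.5f}")
```

All three posteriors have mean 0.5, but the standard deviation shrinks from about 0.29 (0/0) to 0.05 (50/50) to 0.0005 (500000/500000): the point estimate stays the same while the distribution around it narrows, which is exactly the difference between 0/0 and 500000/500000 that keeps coming up in this thread.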
What's true is that this view forces you to encode your uncertainty about P through a full distribution when all you probably have is some fuzzy feeling like "50%±50". That's what, iiuc, those imprecise probabilities try to improve upon.

Are you sure about that? Maybe you consider the distribution, and maybe some Bayesians do consider the distribution, but I've debated Scott Alexander, and I'm pretty sure he used a single number to arrive at the conclusion that doing something was rational.
I've been writing about uncertainty on my substack and I've gotten a substantial amount of pushback regarding established concepts such as the burden of proof, not-guilty is not the same as innocent, and the null hypothesis implies uncertainty. Even ChatGPT seems to be confused about this.
I'm pretty certain that most people--even rationalists--do not factor uncertainty by default, which is why I don't think Bayesians thoroughly consider the difference between: 0/0, 50/50, or 500/500.
Now this one is a question that a Bayesian would typically answer with a single number. In reality it is either true or false, but due to uncertainty I'd use a single number and say something like, I'm 98% sure. This works for any event, or yes-no question. Will Russia detonate a nuclear weapon in 2023? Etc. You could say you're also giving a distribution here, but since you can only have two outcomes, once I say I'm 6% sure Russia will throw a nuke, I'm also saying 94% they won't, so the full distribution is defined by a single number.
But the coin bias in reality is not true or false. It's 45%, 50.2%, or any number 0-100, so you need a full distribution.
I don't know what to say, I haven't read it. I'll take a guess that there was talking past each other. It seems to me you think Bayesians around here don't like to consider the uncertainty behind those three concepts. But it's the other way around: Bayesians want a finer description of the uncertainty than those 3 concepts allow. It's like with the coin example, where you suggested two numbers but a Bayesian would use a full distribution.
So, when there's a binary event, like the Russia nuke question, a Bayesian says 6% probability, but a "burden-of-proofer" may say "I think the people that claim Russia will throw a nuke have the burden of proof", a null-hypothesis-er would say "the null hypothesis is that Russia will not throw a nuke", etc. These concepts don't give the uncertainty with enough resolution for a Bayesian. They only give you 2 or 3 options: burden on one side vs the other, guilty vs innocent vs not-guilty.
I'm not asking if the coin is biased, I'm asking if the next coin flip will land heads. It's a yes-or-no question that Bayesians would use a single number to answer.
No, I say "I don't know" (uncertain), which cannot be represented with a single probability number.
Yeah
But at first it seems to me you were talking about the bias and what you can learn about it from repeated tosses (and were confused in thinking Bayesians wouldn't learn).
So like we've talked about, they'd use many numbers to compute the probability of the yes-no question; they just give the final answer as one number. Bayesians do consider uncertainty, to all levels they feel they need. What they don't do is give uncertainties about uncertainties in their answers. And they see the probability of the next toss being heads as equivalent to "how certain am I that it's going to be heads?" (to a Bayesian, probabilities are also uncertainties in their minds, not just facts about the world). Iiuc, you would be happy saying you believe the next toss has 50%±20 chances of being heads. Why not add uncertainty to the 20% too, since you are not sure it should be exactly 20%, as in 50%±(20±5)%? If that feels redundant in some sense, that's how a Bayesian feels about saying "coin will come up heads, I'm 50% sure, but I'm only 30% sure of how sure I am." If it doesn't feel redundant, add another layer until it does :P
Still, I think I see your point in part. There is clearly some relevant information that's not being given in the answer if the answer to "will this fair coin land heads?", 50%, is the same as the answer given to "plc ashetn ðßh sst?" (well-posed question in a language I just invented), now a lame 50% meaning "the whaat huuhhh?".
No, the probability that the next toss of a coin is going to land heads is 50%, regardless of whether the results have been 0/0, 50/50, or 500000/500000.
My beliefs are binary. Either I believe in something or I don't. I believe everyone's beliefs are like that. But people who follow Bayesian thinking confuse certainty with belief.
In your view, is "believing" something equivalent to supposing it with 100% certainty (or near-100% certainty)?
I have a strong suspicion that your epistemic terminology is very different from most other people's, and they aren't going to learn anything from your claims if you use your terminology without explaining it upfront. For instance, people may have been far more receptive to your "2 + 2" post if you'd explained what you mean by an "assumption", since most people here were under the impression that by an "assumption" you meant a "strong supposition". So it's hard to tell what you mean by "people who follow Bayesian thinking confuse certainty with belief" if we misunderstand what you mean by "certainty" or "belief". Is a "belief" a kind of "supposition", or is it something else entirely?
No.
How so? I believe the Bayesian notion that you can believe something 60% is what is not shared by most people. Most people either believe something or they don't.
There's a difference between most people and most people "here". My understanding of "assume" is in accordance with many dictionaries, for example: to take as granted or true.
certainty: a quality of being proven to be true
belief: something considered to be true
Something can be 60% proven to be true, it can't be 60% considered true.
And something that is "granted" is "assumed to be true", by the same dictionary. The definition is circular: it doesn't lead to your interpretation of "to assume" as "to believe true with absolutely zero possible doubt".
Besides, the dictionary argument can be taken in any direction. Per Dictionary.com, "to assume" is "to take for granted or without proof", "to take for granted" is "to consider as true or real", "to consider" is "to regard as or deem to be true", and "to regard as true" is "to judge true". That leads to the usage of the term by many here, where to make an assumption about something is to make a strong judgment about its nature, while still possibly holding some amount of doubt.
You draw strong boundaries between these epistemic terms. But if common usage recognized your boundaries, then the dictionaries would be flat-out wrong to say that, e.g., to believe something is to assume it, suppose it, or hold it as an opinion (where an opinion is explicitly a belief less strong than positive knowledge). That's why I suspect that your understanding of the terms is not aligned with common usage, since the dictionaries trample all over your boundaries.
Also, I think that "certainty" in a Bayesian context is best treated as a term of art, equivalent to "degree of belief": a measure of one's belief in the likelihood of an event. It's obviously incompatible with the everyday notion of something being certainly true, but just using the term of art in context doesn't mean one is confusing it with the general term. After all, mathematicians can talk about "fields" all the time without confusing them with grassy plains.
Many definitions in all dictionaries are circular. Language is not an easy thing, which is why AI still has not been able to master it.
No, that's not what the definition is saying. "[[[judge true] or deem to be true] as true or real] or without proof". There is no possibility of doubt. It's judged/deemed/considered to be true.
I believe they are. dictionary.com says "believe" is "assume", but Merriam-Webster does not. One of them has to be wrong.
That's the whole reason dictionaries exist: people disagree.
One dictionary does, not all.
BTW. I used ChatGPT and asked it if it saw any difference between "assume" and "suppose", and it 100% said exactly what is my understanding.
There's a big difference between saying "I'm 75% certain X is true" and "I'm certain X is 75%". If I believe it's likely that Ukraine launched a missile and not Russia, I'm saying I'm 75% certain that's true; I don't think there's an event which is 75% likely. I believe most people think this way, and it's more rational.

Sure, my point is just that your meaning can't be supported by that definition alone. Even if we say that "to assume" is the same as "to take as granted or true", that isn't sufficient to refute my notion that in common usage, neither "to assume" nor "to take as granted or true" necessarily implies zero possible doubt.
That particular dictionary says the exact opposite of what you're saying. To "judge" is "to infer, think, or hold as an opinion; conclude about or assess" (def. 10), and an "opinion" is "a belief or judgment that rests on grounds insufficient to produce complete certainty" (emphasis mine; notice how its author thinks one can be uncertain about a judgment?). So if you want a dictionary to support you on that, you'll have to find another dictionary.
Or perhaps both dictionaries are sometimes correct, sometimes incorrect, and sometimes partially correct, since in real life people can have subtly or obviously different understandings of terms depending on the context. That's the whole thesis of "The Categories Were Made for Man, Not Man for the Categories": nearly all our categories are fuzzy and ill-defined, but they're still useful enough that we talk about them anyway. So in general usage, people don't usually resolve ambiguity by refining their terminology (since hardly anyone else would recognize it), but instead by inserting enough qualifications and explanations that their point hopefully gets across to most of the audience.
I asked ChatGPT the question, and the interpretation it produced is certainly far less strong than your standard of "zero possible doubt" regarding an assumption:
I wouldn't say that being "confident" about something implies that you necessarily have zero possible doubt. But even if you disagree on that, ChatGPT doesn't act on such a strict definition in practice. For instance, it produced the following exchange:
If Alice had absolutely zero doubt that the box contained a dog, then her belief could not be challenged in that way: she'd have to conclude that the dog can meow, or that the meow came from outside the box.
Since I'm not one to trust ChatGPT's output to be representative of anything, I decided to ask some people in real life about it.
First, I asked a friend, "What do you think is the difference between assuming something and supposing something?" He replied that the difference is that you assume something before it occurs, but you suppose it while it's occurring or after it occurs.
I asked the same question to a stranger at the bus stop. He replied that when you assume something, you're not entirely sure whether or not it's true, but when you suppose something, you have some kind of predetermined knowledge that it's true.
Finally, I asked the same question to a stranger in a hallway. After several seconds of thought, she replied that she had no clue, then her friend chimed in to say she also had no clue.
ChatGPT, the dictionaries I've checked, and the ordinary people I've asked all give different definitions of "assume" and "suppose", none of which include your standard of zero possible doubt in order to assume something. Therefore, I have strong evidence to believe that in common usage, the terms have no fixed meaning beyond "to accept as true without proof"; all else is vague connotation that can be overridden by context.
What evidence do you have that common usage recognizes your hard boundary, so hard that to cross it is to be unambiguously incorrect?
In economics terms, what you do is take your Bayesian beliefs and multiply each probability by the utility gained or lost in each state. Then choose whichever course of action gives the most utility in expected value.
So say a lottery that gave you a 99% chance of gaining a dollar, but a 1% chance of losing a thousand dollars would be a bad bet, but one that gave you a thousand dollars at 1% chance and lost you a dollar at 99% chance would be a good bet.
Beliefs about the world and actions we take on those beliefs are somewhat orthogonal. You need to multiply the probability by the expected benefits or losses. But, those gains or losses don't change our underlying beliefs about what is likely true or not.
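A minimal sketch of that decision rule, applied to the lottery numbers above (a toy illustration, not a formal treatment):

```python
def expected_utility(outcomes):
    """Sum of probability × utility over all mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# 99% chance of gaining $1, 1% chance of losing $1000: a bad bet.
print(expected_utility([(0.99, 1), (0.01, -1000)]))   # ≈ -9.01

# 1% chance of gaining $1000, 99% chance of losing $1: a good bet.
print(expected_utility([(0.01, 1000), (0.99, -1)]))   # ≈ +9.01
```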
Expected value is not everything. For example, suppose you can play the following game: you choose a value n, then with probability 1/n you get n^2 dollars, and with probability 1-(1/n) you give the other guy n/2 dollars. Your expected gain is approximately n/2 when n is large enough. You are playing with a billionaire. Is it really more rational to choose n=10000 than, say, n=1000 or n=2? More generally, does it make sense to choose n=2^256 even though the other player can afford to pay if you win?
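A quick sketch of why that game is suspicious (assuming the losing probability is 1-(1/n), so the two outcomes are exhaustive): the expected gain grows like n/2, but so does the chance that you simply lose.

```python
def game_stats(n: int):
    p_win = 1 / n
    expected_gain = p_win * n**2 - (1 - p_win) * (n / 2)
    return expected_gain, 1 - p_win   # expected value, probability of losing

for n in [2, 1000, 10_000]:
    ev, p_lose = game_stats(n)
    print(f"n={n}: EV = {ev:.1f}, P(lose) = {p_lose:.4f}")
# n=2:     EV = 1.5,    P(lose) = 0.5
# n=1000:  EV = 500.5,  P(lose) = 0.999
# n=10000: EV = 5000.5, P(lose) = 0.9999
```

For n=10000 the expected value is about 5000, yet you lose 99.99% of the time; maximizing expected value alone tells you to pick an enormous n, which is exactly the intuition this comment is challenging.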
I know how expected value works. But this confirms what I said: a single percentage cannot tell me what I should believe.
Also, this still doesn't answer my scenario. Is the next toss of a coin going to land heads given that in previous instances there have been 50 heads / 50 tails? How about 0 heads / 0 tails?
I know there's a difference, but Bayesians assume they are the same.
The single value is just the point estimate of your belief. That belief also has a distribution over possible states, with each state having its own percentage attached to it.
The more times you flip a coin the more concentrated your probability distribution becomes around that coin being actually fair.
You seem to believe Bayesians only care about the point estimate and not the whole probability distribution. I don't think you disagree with Bayesianism so much as misunderstand what it is.
There is no "point estimate" of my belief because I don't believe anything.
You are trying to pinpoint my belief on a continuum, or determine it with a probability function, but you can't, because I don't have any belief.
Do you have any source for that? Do you have any source that explains the difference between a coin flip with 0/0 priors vs 50/50?
That is precisely what I am saying.
Should we be agnostic about Russell's Teapot?
E: Mostly focusing on continuing your thought about agnosticism. Your point about guiltiness is right.
Yes. Although I don't like to use the term "agnostic" because it relates to knowledge, and here we are dealing with belief. I prefer "skeptical".
The default position is uncertain, so maybe there's a teapot, maybe not. That means we are questioning its existence, therefore we are skeptics. But this also means we don't believe in its existence (not-guilty), which is different than believing it doesn't exist (innocent).

Russell's Teapot seems bogus to me. I would absolutely not like to be "skeptical" (not-guilty) about Russell's Teapot. I don't believe in such a teapot (innocent). Can it be proven? When I continued to think about this post, this is the reasoning that occurred to me: I am not completely ignorant. I know a few facts here from experience:

Teapots do not naturally form in outer space.

Humans do not normally send teapots to outer space.

Based on this line of thinking, I'm comfortable with believing it doesn't exist (innocent). One can come to me and say I haven't proven it beyond a reasonable doubt, but now it feels like we're haggling over the standard of proof, not the burden of proof.

Whereas your post gets the burden of proof right, it doesn't say much about standard of proof. Perhaps that is just a different topic?
But there is still zero evidence that such a teapot doesn't exist. Even if I were to grant you that your rationale is solid, that's not evidence.
I feel people have a hard time understanding that unlikelihood is not evidence. If someone tells me it's unlikely for me to lose in Russian roulette, that's not evidence that I'm going to win. Unlikely events happen all the time, and people don't seem to learn that.
What are the chances that the entire housing market is overpriced and it's about to collapse? Someone might have said "almost impossible" right before the financial crash of 2008, and in fact many did.
What are the chances that Bernie Madoff is running a Ponzi scheme given that his company has already passed an SEC exam? Again, "almost impossible" is what people said.
Black swans were considered impossible a long time ago, and yet they existed, which is precisely why the term is used nowadays to describe things we have no evidence for, but which could yet happen.
You can be considered right in thinking that black swans don't exist, that Bernie Madoff is legit, and that the housing market is not about to collapse (innocent), right until the moment the unlikely event happens and you are proven wrong. It turns out a cheeky Russian astronaut threw out a teapot in the 1970s and it has been floating since.

Why insist on believing the unlikely is not going to happen, only to be proven wrong again and again, when we can just be skeptical?
How ludicrous does the example have to be before you are comfortable with a position of "innocent"? Russell's complete tea set, with a table, two chairs, cups and saucers, the whole deal... Still just skeptical, or are you at "innocent" yet?
What if maintaining the "skeptic" position actually involved having skin in the game? Imagine that the claim is not only that the teapot exists, but that by singing and performing the I'm a Little Teapot dance each night before bed for one year, the teapot grants you a wish with no restrictions that will come true. If you are truly just skeptical (after all, there is no evidence that such a teapot doesn't exist), then you would of course be singing every night, correct? Why take the innocent position and risk missing out on such a reward?
Unlikely events do happen all the time, but it seems to me your method of thinking is what allows people to believe that impossible things can happen or have happened, usually defended with "You can't prove it didn't".
There is no such thing.
False. That would require me believing guilty.

I'm not taking the innocent position. I'm not taking any position.

Your example is not impossible, just extremely unlikely. If you tell me the teapot has the shape of a triangle with four sides, well, that is truly impossible and I would say innocent.
.My example, that I made up on the spot, that there is a teapot orbiting Jupiter that will grant you one wish if you sing certain words every night for one year, is not impossible, just extremely unlikely?
This is a near perfect example of what I meant, and why I am convinced your way of thinking is absolutely incorrect.
In my mind, I am completely justified in applying the label "impossible" to that example. The fact that you cannot do the same would seem to indicate a failure in your rationality, in my opinion.
Yes. That's a foundation of science: you cannot know something with 100% certainty.
I did not say I cannot do the same, I said I do not do the same.
This is shifting the burden of proof: you want me to prove to you how X is not impossible. I don't have to do that, because I'm not making any claim. You can believe whatever you want.

If you want me to believe that X is impossible, then you have the burden of proof. But you can't do that, so you have no justification in questioning my unbelief.

I am perfectly entitled to stay in my skepticism. What you do is up to you, I'm not questioning your belief.
I do not want you to believe anything is impossible; what you do is up to you. I am simply commenting that your inability to concede that the extraordinary magic space teapot (which I admit to you I fabricated on the spot) cannot possibly exist indicates a failure in your rationality.
You are welcome to hang on to your skepticism and hold a belief that it may exist, but the fact that you are not singing the song and dancing the dance (I assume) every night would indicate you don't think it exists any more than I do (which is not at all), since the payoff is so large for such a small cost.
What is evidence? Is seeing a video recording of the defendant shooting his wife evidence? Recordings can be faked, even if it's unlikely.
I agree with the rest of what you said, that people who predict things will often be wrong.
What's stopping me from being skeptical about everything, even in the face of stuff you call "evidence"? Maybe you & your blog are just a GPT bot. Why should I just assume I'm talking to a human?
Which is why evidence should not be considered proof. People often confuse the two, for example in the aphorism "absence of evidence is not evidence of absence" which is incorrect: it is evidence of absence, it's just not proof.
You shouldn't. Why do you need to know that I'm a human?
Humans also don't normally send Tesla Roadsters to outer space. Until one abnormal one did.
What about Bostrom's Simulation argument?
https://www.simulation-argument.com/simulation.pdf
Either it's very difficult/impossible to reach a high level of civilizational achievement (controlling the resources of many star systems), or civilizations that do this totally shun simulating their ancestors, or we are in a simulation.
It should be trivial for a powerful civilization to simulate their ancestors; they ought to have billions of times more computing power than is necessary, and billions of years to have their fun. Even if they're just operating on the physics we know (which is a dubious assumption given that we take lots of shortcuts in our own simulations), planetary-scale computers would do the trick very nicely. You wouldn't need something really advanced like a Matrioshka brain.
It seems very likely they would do this. A lot of people today play games simulating our history. Maybe if we have a certain kind of hostile AGI taking over every single time, that would prevent ancestor simulations. But would every single civilization fall to AGI? That seems unlikely. It only takes one powerful real civilization to create millions, billions of ancestor-simulations. We should conclude that most pre-singularity civilizations exist in simulation.
Thus it seems very reasonable to conclude that we are in a simulation and we are thus ruled by a deity. Unlike with traditional religions, we have empirical proof of how God's powers of creation could work based upon principles we already understand and observe. There's no need to justify prophets, miracles or other dubious functions. There is no need to justify a benevolent God ruling over a harsh universe - we can assume our simulators are not really interested in our welfare.
You can invert the burden of proof argument. If we take agnosticism as default, that's the same as saying we're not sure whether we live on the highest level of reality or any of the myriad lower levels of simulation. I reckon it's overwhelmingly more likely that we live in a simulation, likely a nested simulation. There's only one highest level, there are surely many many lower levels.
I see people make this probabilistic fallacy very often. You can say X is very likely, so it's reasonable to conclude it's true, but winning Russian roulette is likely; do you think it's reasonable to conclude you will win? This doesn't change with higher values of X.

If you change the statement to "it's reasonable to conclude that we are likely in a simulation", then I would agree. I don't believe rand() < 0.99 is true, because it could be false.

I think you're making an isolated demand for rigour here. You can't be 100% sure of anything except "there are thoughts" because the chance that a Cartesian Daemon is screwing with your thought process is not zero. So if you require 100% certainty to call anything "true", your set of "true statements" has one member.
I don't require 100% certainty to call anything true, but even if I did, I don't need to call absolutely anything true.
Eh, why should it be trivial to simulate a younger civilization?
Say our descendants want to simulate Earth alone in real time. The stars and microwave background and gravity waves are just a projector on a sheet. How many bits of information are in that bubble? By definition, I don't think they could fit on an Earth-sized computer.
The problem can be handwaved away if our simulators exist under different laws of physics, but we're assuming that our own laws reflect theirs. Memory can also be traded off for time if our perceptions can be stepped frame-by-frame. That tradeoff opens up the possibility that the higher reality just hasn't had long enough to run a simulation of our observed fidelity. I don't think that's a knockout, just another possible resolution to the paradox.
He discusses this in the paper:

"a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second."
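A rough sanity check of the quoted numbers (the population and per-brain figures below are my own assumptions, not quotes from the paper):

```python
# Budget from the quote: one millionth of 10^42 ops/sec, for one second.
budget = 1e42 * 1e-6 * 1.0                 # = 1e36 operations

# Hedged cost estimate: ~10^11 humans ever, ~50-year lives,
# ~10^16 ops/sec per simulated brain (all assumptions on my part).
humans = 1e11
seconds_per_life = 50 * 3.15e7
ops_per_brain_per_sec = 1e16
cost = humans * seconds_per_life * ops_per_brain_per_sec   # ≈ 1.6e36

print(f"budget ≈ {budget:.1e} ops, cost ≈ {cost:.1e} ops")
```

Under these assumptions the budget and the cost land in the same order of magnitude; with lower per-brain estimates the cost drops well below the budget.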
You just take shortcuts. You don't need to simulate atomic-level phenomena unless they're actually being observed, if someone is pointing an electron microscope at them for example, or if they're made into a semiconductor. The interior of the Earth can be simplified hugely too, along with much of the deep oceans.
And if it takes a million or a billion times more processing power than expected for whatever reason, they could increase their computing operations to match. The galaxy is not short of stars or planets to be converted.
I wish this argument were true.

It is unfortunately inept, however. Have you any idea of the energy and computing power needed to simulate a universe? Have you any idea of the energy and computing power needed to simulate a bottle of water at the atomic level? A single cell? We can barely exhaustively study quantum systems with more than 2 particles, IIRC.

The universe has finite resources, and the constraints of mathematics are universal or even meta-universal, and so is computational complexity (https://en.wikipedia.org/wiki/Computational_complexity); your hypothetical aliens must bend to those extreme limits. Even mankind has already mostly reached them; we have extreme diminishing returns everywhere.
Bostrom did the maths - he found that a planetary-scale computer operating on principles we know has more than enough computing power, provided you take a few shortcuts. Only go to the full quantum simulation if it's actually needed, like if there's an electron microscope pointing at the target. The twin slits phenomenon is one example of the kind of method they could use.
Why do people assume that a simulation needs to be perfect down to the very last particle? Most of that stuff is just noise. You can have everything outside the solar system be an elaborate skybox, abstract away most of the Earth's crust aside from tectonic drift and volcano/earthquakes. The Sun can be greatly simplified.
Furthermore, what stops them using methods unknown to us, or methods that they deliberately leave out of the simulation? One might just as well say it would be prohibitively expensive to make a Minecraft computer capable of simulating the world they observe in Minecraft. They can't automate the construction of redstone circuitry like we can - and there is no notion of quantum computing at all in Minecraft. Say that there's such a thing as 'Quantum Computing 2' using more advanced physics that we can't even access - that would make it very easy. But that's not even required, according to Bostrom's analysis.