felipec
unbelief
For those who lack the aptitude and/or drive, education is just busywork or daycare that at best only instills basic skills.
It's also useless for many who have the aptitude and the drive.
Good. If it's about their brains, it went from beta(1, 1) -> ... -> beta(51, 51). They learned.
No. A Bayesian doesn't answer beta(51, 51), he answers 0.5.
Then ask how a Bayesian would answer that version of the question, and see if the disagreement persists.
I know the definitions of probability, I know what probability is according to a Bayesian, I know what a likelihood function is, and I know what the actual probability of this example is, because I wrote a computer simulation with the actual probability embedded in it.
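For reference, a minimal sketch of that kind of simulation (the true probability and the flip count here are my assumptions, not the original code): the "actual probability" is embedded in the generator, so the empirical frequency can be checked against it.

```python
import random

# A sketch: the actual probability is embedded in the generator
# (TRUE_P = 0.5 and 10,000 flips are assumed values, not the original's).
TRUE_P = 0.5
random.seed(42)

flips = [random.random() < TRUE_P for _ in range(10_000)]
frequency = sum(flips) / len(flips)
print(frequency)  # close to TRUE_P
```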
You are just avoiding the facts.
No, my medical decisions are not like asking a woman out.
Yes they are. Rational medical decisions are based on probability, not trust.
- I decided to use masks; did I trust that they would work? No.
- I decided not to take any COVID-19 shot; did I trust that they weren't safe? No.
Of course, you can make them like rolling dice, but that's not the best way.
You can deny that you rolled the dice all you want, but you did. Rolling a die that has a 99% chance of winning is still rolling the dice. You could have been wrong.
If you have 100% certainty that your medical decision is going to be correct, you are simply not rational.
And this is a red herring. You are basically saying "medical decisions are not black swans", but they don't have to be (even though they are), I showed you black swans. Your white swans are irrelevant. Case closed.
If I can show you 10 black swans that prove you wrong, you are just going to deny reality.
No, I need something to trust before I make a voluntary decision.
No, you don't.
If you are thinking about asking a woman out, do you need to trust that she will say "yes" before making the decision to ask her out?
If you are thinking of rolling dice, do you need to trust that you will get a 7 before making the decision to roll the dice?
I don't understand how people don't even realize how they make decisions. Many of the decisions you make are based on chance, not trust, and most of them you are not even aware you made.
It's the straightforward interpretation of
It's not.
I just gave you two examples,
No, you didn't. You showed how they could do it; that doesn't necessarily imply that's what they actually do in the real world.
That uncertainty is also typically published
Wrong. I'm not talking about mathematics or statistics, I'm talking about epistemology.
You don't seem to know what subject we are even talking about: Bayesian epistemology. How one particular rationalist makes a decision is not published.
Even more proof that people don't even listen to what is being actually said.
Apply bayes rule, get posterior p∽beta(51,1)
Wrong. It's beta(51,51). It's beta(heads+1, tails+1).
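The beta(heads+1, tails+1) update can be sketched in a few lines (a minimal illustration of the conjugate update for a uniform prior, not anyone's actual code):

```python
# Uniform Beta(1, 1) prior on the coin's bias; after h heads and t tails
# the conjugate posterior is Beta(h + 1, t + 1).
def posterior_params(h, t):
    return h + 1, t + 1

a, b = posterior_params(50, 50)  # 50 heads, 50 tails
mean = a / (a + b)               # posterior mean = a / (a + b)
print(a, b, mean)                # 51 51 0.5
```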
You seem to believe that any space on which a functional can give a single value is a one-dimensional space?
I don't understand what would make you think I believe that.
Do another Bayesian update after a flip of tails following the flip of heads, and the originally-uniform prior will be back to p(heads next)=1/2 (for a third time, the exact same single value!)
Yes, so this confirms what I'm saying: the result is a single value. I understand what you are saying: the function that was used to generate this single value is not the same as the original function, but I don't think you are understanding what I am saying.
You are operating from a bottom-up approach: you claim you have a perfectly rational method to arrive at the value of p=0.5. This method integrates all the observed evidence (e.g. 0/0, or 1/1 heads/tails), so any new evidence is going to adjust the value correctly; for example, 1 head is going to adjust the value differently if there have been 1/1 heads/tails versus 10/10 heads/tails.
I do not disagree that this method arrives at a mathematically correct p=0.5.
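That adjustment can be checked directly with the Beta posterior mean (a sketch, assuming the Beta(h+1, t+1) model discussed above): one extra head moves the mean more when there is less data.

```python
# Beta(h+1, t+1) posterior mean is (h+1) / (h+t+2); one extra head
# shifts it more after a 1/1 record than after a 10/10 record.
def mean_after(h, t):
    return (h + 1) / (h + t + 2)

print(mean_after(2, 1))    # 1/1 plus one head  -> 3/5  = 0.6
print(mean_after(11, 10))  # 10/10 plus one head -> 12/23 ≈ 0.522
```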
But I'm operating from the other side: the top-down approach. If I need to make a decision, p=0.5 does not tell me anything. I don't care if the process that was used to generate p=0.5 is "mathematically correct"; I want to know how confident I should be in concluding that p is indeed close to 0.5.
If in order to make that decision I have to look at the function that was used to generate p=0.5, then that shows that p=0.5 is not useful: it's the function I should care about, not the result.
My conclusion is the same: p=0.5 is useless.
Now, you can say "that's what Bayesians do" (look at the function, not the final value), but as I believe I already explained: I debated Scott Alexander, and his defense for making a shitty decision was essentially: "I arrived at it in a Bayesian way". So, according to him, if p=0.9, that was a "good" decision, even if it turned out to be wrong.
So, maybe Scott Alexander is a bad Bayesian, and maybe Bayesians don't use this single value to make decisions (I hope so), but I've seen enough evidence otherwise, so I'm not convinced Bayesians completely disregard this final single value. Especially when even the Stanford Encyclopedia of Philosophy explicitly says that's what Bayesians do.
You're the guy from "2+2≠4" who was having trouble with equivalence classes and Shannon information, right?
I was not the one having trouble, they were. Or do you disagree that in computing you can't put infinite information in a variable of a specific type?
I was feeling bad about how many people were just downvoting you and moving on, and at that point I wasn't one of them, but now I kind of get it: downvoting falsehoods and moving on is fast, correcting falsehoods is slow, and I fear sometimes dissuading further falsehoods is impossible.
You are making the assumption that they are falsehoods, when you have zero justification for believing so, especially if you misinterpret what is actually being said.
Uff, I even told you how it's done.
Show me step by step, I'll show you where you are wrong.
See the first Google result for "Bayes theorem biased coin" for an example.
Did you actually read that? It clearly says:
P(“Heads”): The evidence term is the overall probability of getting heads and is the sum of all 101 (prior * likelihood) products.
It's a single value.
I looked at the code of the simulation:
evidence = sum(likelihood .* prior);
It's a single value.
I printed the variable at the end of the simulation:
p=0.498334
It's a single value.
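A Python equivalent of that grid computation, for comparison (a sketch: the 101-point grid matches the quoted description, but the 50/50 data here is my assumption, not the original simulation's input):

```python
import numpy as np

# 101 candidate biases with a uniform prior, mirroring the quoted
# `evidence = sum(likelihood .* prior);` line.
grid = np.linspace(0, 1, 101)
prior = np.full(grid.size, 1 / grid.size)

h, t = 50, 50                            # assumed data: 50 heads, 50 tails
likelihood = grid**h * (1 - grid)**t
evidence = np.sum(likelihood * prior)    # a single value
posterior = likelihood * prior / evidence
p_heads_next = np.sum(grid * posterior)  # predictive P(heads): also a single value
print(p_heads_next)
```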
Me: They do learn. They use many numbers to compute.
They don't. The probability that the next coin flip is going to land heads is the same whether the record is 0/0, 50/50, or 5000/5000: it's 0.5. It does not get updated.
Me: They do, the probability is the uncertainty of the event.
No, it's not. p=0.5 is not the uncertainty.
You: 50%+-20% is analogous to saying "blue" whereas saying 50% is analogous to saying "sky blue".
I didn't say that.
Me: Not if probability means uncertainty.
Which is not the case.
Me: It depends on the question.
There is no other question. I am asking a specific question, and the answer is p=0.5; there's no "it depends". p=0.5 is the probability the next coin flip is going to land heads. Period.
I'm going to attempt to calculate the values for n heads and n tails with 95% confidence so there's no confusion about "the question":
- 0/0: 0.5±∞
- 5/5: 0.5±0.034
- 50/50: 0.5±0.003
- 5000/5000: 0.5±0.000
It should be clear now that there's no "the question". The answer for Bayesians is p=0.5, and they don't encode uncertainty at all.
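One standard way to quantify that shrinking uncertainty is the standard deviation of the Beta(h+1, t+1) posterior (a sketch using the textbook Beta variance formula; these figures may not match the ± values above, which were computed with a different method). The point survives either way: the mean stays at 0.5 while the spread collapses with more data.

```python
import math

# Standard deviation of the Beta(h+1, t+1) posterior for each record;
# the mean stays at 0.5 while the spread shrinks as data accumulates.
for h, t in [(0, 0), (5, 5), (50, 50), (5000, 5000)]:
    a, b = h + 1, t + 1
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    print(f"{h}/{t}: mean={mean}, sd={sd:.3f}")
```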
Found it: @heartereum.
In your view, is having doubt the result of a conscious consideration of whether one may be wrong? Or can one have doubt even before considering the matter?
Both. Some people can have doubt immediately, other people do not doubt unless they are asked to consider their doubt level. And most people increase their level of doubt once they are asked to put skin in the game, for example with a bet.
Since most people can change their minds given enough evidence
I completely disagree with that statement. Most people cannot change their minds regardless of the evidence. In fact, in the other discussion I'm having on this site of late, the person is obviously holding a p=1 (zero room for doubt).
The links point to both dictionaries in question, not just one.
And only one of them is saying those things.
First, I attributed that to "most people here", not myself.
That is what we are talking about: it's your view that most people ascribe the meaning X; if a person ascribes the meaning opposite of X, that is opposite to your view.
In Bayesianese, "50%+-50%"
But that's not Bayesian. That's the whole point. And you accepted they use a single number to answer.
If someone reads your words, "Most people assume we are dealing with the standard arithmetic" (from your 2 + 2 post), do you believe that they are likely to understand that you mean, "Most people have zero doubt in their minds that we are dealing with the standard arithmetic"?
No, I believe in this particular case they would understand that "assume" in this context means "take for granted", but that doesn't contradict the notion that they have zero doubts in their minds. They have zero doubts in their mind because most people don't see there's any doubt to be had.
Are you saying that "assuming something is true" is different from "thinking something is true with 100% certainty"
No.
and that you are making two different points in your Substack post and submission?
No. In my substack article I said: "Why insist on 100% certainty?". My point and the objective of my point are two different things.
If people are capable of accepting evidence against what they think is true, regardless of whether they previously had 100% certainty, then why shouldn't people have 100% certainty?
Because the fact that it can happen doesn't mean it's likely to happen.
By Bayes' theorem
Not everyone follows Bayes' theorem.
And if it's true that under Bayes the probability of an event doesn't get updated when the prior is 1, regardless of the result, then that proves Bayes is a poor heuristic for a belief system.
When you said, "I believe they are", were you not referring to the dictionaries being "flat-out wrong to say [those things]"?
Yes, I was. So I believe the dictionaries saying those things are wrong.
Or did the links I provided not show them saying those things?
The links you provided showed one dictionary saying those things, therefore if I believe those dictionaries saying those things are wrong, I believe that one dictionary saying those things is wrong.
How does this imply that his definitions are "wrong" when they are not flipped?
I explained that in the very next sentence.
Where do I say that?
You literally said: «since most people here were under the impression that by an "assumption" you meant a "strong supposition"».
Which has nothing to do with what we are talking about. And the article being "wrong" is a subjective opinion which might itself be wrong.
If it doesn't feel redundant, add another layer until it does :P
But they don't do that, they give a single number. Whatever uncertainty they had at the beginning is encoded in the number 0.5.
Later on, when their decision turns out to be wrong, they claim it wasn't wrong, because they arrived at that number rationally; nobody would have arrived at a better number.
Still, I think I see your point in part. There is clearly some relevant information that's not being given in the answer if the answer to "will this fair coin land heads?
It's not just about how the answer is given, it's about how the answer is encoded in your brain.
If the answer to some question is "blue", it may not be entirely incorrect, but later on when you are asked to recall a color you might very well pick any blue color. On the other hand if your answer was "sky blue", well then you might pick a more accurate color.
I claim the correct answer should be 50%±50%, but Bayesians give a single answer: 50%, in which case my answer "uncertain" is way better.
Yes, I disagree, because I'm not a Bayesian. I don't believe X .000000001%, I don't believe X.
It's a yes-or-no question: do you believe that (2+2=4) and (2+2=0 (mod 4)) are "the same statement"?
That's your opinion. Everything is interpreted.
But the coin bias in reality is not true or false.
I'm not asking if the coin is biased, I'm asking if the next coin flip will land heads. It's a yes-or-no question that Bayesians would use a single number to answer.
So, when there's a binary event, like the Russia nuke question, a Bayesian says 6% probability, but a "burden-of-proofer" may say "I think the people that claim Russia will throw a nuke have the burden of proof"
No, I say "I don't know" (uncertain), which cannot be represented with a single probability number.
(2+2=4 (mod 4)) and (2+2=0 (mod 4)) is the same statement.
That is not what I asked.
No, math is abstract truth.
You are not math, you are a person interpreting what math says.
You don't need to trust anyone to make a decision. Nor do you need to believe anything.
I agree. There's a difference between education and schooling. You don't need to go to school to educate yourself, and most of what a school concerns itself with is not education.
In particular in the area of information technology we don't bother remembering anything, we develop technologies like wikipedia.org and stackoverflow.com so that relevant information is easily available and retrievable by anyone. No education needed.
If any skills are necessary to learn, they are those of logic, reasoning, and skepticism; otherwise everything else one learns might not be learned properly. Unfortunately, I don't see anyone interested in learning these skills: they all believe they already know what they need to know, and no evidence to the contrary would convince them otherwise.
Which is why I don't think you will manage to persuade many people. Either they already understand why modern education is not working, or they don't.