What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commenters are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where the desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as time-outs, and don't attempt to rejoin the conversation until they're lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
The headline talks about rationalists, but the article actually talks a lot about people who aren't rationalists at all. Like journalists. Or Krugman. Who are very easy to dupe, because they want to be duped. They actively go out and look for people who can be used as props to launder their agenda through, and in some cases, if they fail, they manufacture one (somehow this is considered much worse behavior than cherry-picking props, while being essentially the same). This is an easy trap to fall into, and I am sure many people declaring themselves rationalists fell into it too, because they are human. If you build a trap skillfully and put tasty enough cheese inside (different cheese for different people), a lot of people will get caught. Some of them may be calling themselves "rationalists", and some of them may even try to become harder to catch, but they are imperfect humans, so they'll get caught anyway. That is to be expected. Doubly so if they actually profit in one way or another from getting caught (like journalists or political activists, which are pretty much one and the same nowadays). For those, passing up a good "boo outgroup" story is almost inhumanly hard, and that's where most of your examples come from.
To add to this, there's also the element of betting.
Humans, even rationalists, have to make decisions without the time to obtain perfect knowledge. It's only prudent to place bets if you think the upside might be big and the downside small. In other words, there were probably rationalists in the OP's sample who donated to or took money from SBF while thinking this was all likely going to blow up in their faces. This isn't a case of conflicting beliefs; it's playing the odds.
Plus, the characterization of "rationalists" seems to me a faulty generalization. There are probably very few people who make their lives revolve around rationalism. But rationalism isn't some monastic order that stamps out mentat-like Rationalists, so in the real world, "rationalist" describes everyone from hyperlogical Bayesian wizards to folks who like a good argument and enjoy eating popcorn while watching the Culture War eat itself.
Yes, sometimes, but a lot of the time they don't have to make a decision, and they do anyway. For example, if I enter a meeting I will want to sit down; I don't know whether the chair is broken, but I sit down anyway. Is not checking the chair a mistake? No, I can make a decision without perfect knowledge. But what about a raffle? I also don't know that I'm going to lose, so it might make sense to buy a ticket, but I don't have to. You'll say that I made a decision anyway, but not necessarily; a lot of the time the result is "I don't know", and that's not really a decision.
That depends on the odds. A small upside and big downside might make sense if the odds of losing are sufficiently small.
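To make the odds point concrete, here's a minimal expected-value sketch; all the numbers are made up purely for illustration, not taken from anyone's actual situation:

```python
# Hypothetical numbers, purely illustrative: a bet with a modest
# upside and a much larger downside can still have positive
# expected value if the odds of losing are small enough.
p_lose = 0.01      # assumed 1% chance the whole thing blows up
upside = 100       # payoff if it doesn't
downside = 5_000   # loss if it does

expected_value = (1 - p_lose) * upside - p_lose * downside
print(expected_value)  # 0.99 * 100 - 0.01 * 5000 = 49.0 > 0
```

With these assumed odds the bet is rational even though the downside is fifty times the upside; shrink the odds of losing and it only gets better.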
But those are two different things. Taking money from a person is one decision; trusting that person is a completely different one. You can take money from a person without trusting them.
The difference between skeptics and normal people is not readily apparent. We both sit on a chair without checking if it's broken, but I as a rational skeptic do not assume it is unbroken. The end result looks the same, but the mental model is different: I do not have a belief.
And yet you assume you have access to other people's mental models.
No, I ask them what they believe, and they tell me.
But the point is not that they get caught; all humans indeed have the potential to get caught at some point in their lives. The point is why. Why do people get burnt touching a pan?
So, what's your answer for the why that is special to rationalists? My answer is a common human one: they thought the pan wasn't hot, or maybe they wanted what was in the pan too much to reasonably evaluate the chances that it would be too hot. People do that. I'm not too proud to admit it has happened to me.
You are forgetting the most common reason: they have never encountered a hot pan in their life (e.g. kids). They get burnt because they didn't think they could get burnt. This also happens to adults who should know better after a while of not dealing with hot pans.
People who have never been scammed are the easiest to scam, precisely because they don't think it could possibly happen to them. Hubris and overconfidence are known to make intelligent people commit obvious mistakes they otherwise would not.
It actually looks like the least common reason. You can only do it once in your life. If you have ever been burned by a hot pan more than once in your life (I have, and I assume most other people have too; pretty much every adult has had this experience, and yet adults regularly get burned by hot pans), then that's not the most common reason for you. It's hard for something to be the most common reason when it can happen only once in your whole life, and you have plenty of warning before it.
OTOH, I'm pretty sure a lot of people have tried to scam rationalists, because a lot of people try to scam everybody; look into your mailbox under "Spam" and you'll probably see a dozen scam attempts every day. Surely they hadn't been scammed this particular way before, but nobody had been scammed this particular way before, so there's nothing special about rat circles. BTW, a lot of much more weathered people, like journalists, politicians, Hollywood types, etc., accepted SBF with open arms. It's not as if everybody but the rationalists rejected him and only those doofuses got caught. Nobody within the Well Respected People circles rejected him. He had investment from the best and most respected venture funds. Financial regulators planned to hold him up as the example of a "good crypto investor". He had a CFTC license. Those people have not only seen every scam there is, they are supposed to be the supreme authority of the land for determining what is a scam and what is not. They failed. Surely there were many reasons for that. Never having seen a scam before in their lives isn't one of them.
No it's not, it's basic statistics. You can only donate your heart once, by dying, and guess what the most common reason for heart donation is: death.
False equivalence fallacy.
Yes they have. Financial fraud is not new.
This has nothing to do with my argument; you are attacking a straw man of it. Of course there are dumb journalists who fell for the scam, but the intelligent ones with solid epistemology likely did not, because they have epistemic humility.
That's a pretty cheap trick. Of course in a set of one, the only element is the maximum. But when there are multiple ways to do something, it's different: it's hard for the way that you can experience only once to be the most common.
Fraud in general is not new. This one in particular is. You are just substituting the set we were talking about with a much larger set encompassing many more elements. It's as if I said "I've never heard this language, its sound is completely new to me!" and you replied "Lies! You've certainly heard people speak a language before!". A language, yes; this particular one, no. A fraud, yes; this particular one, no.
It remains to be proven that no intelligent ones with solid epistemology in fact fell for it, and that only dumb ones did. And your criteria for "dumb" had better not include "falling for this particular scam".
Most people, when faced with something they have not imagined, complain about exactly that.
No it's not. Do I really have to explain it with statistics?
Say everyone will experience event X once in their lifetime, which is 80 years on average. That means that in a population of 1000, around 12.5 people will experience it for that reason in any given year, on average. Now let's say there's another way they can experience X that also happens to everyone once in their lifetime, so again it's 12.5. In this case each cause accounts for 50% of the people who experience X for the first time in a given year, so neither is the most common cause. But what if the other way doesn't happen to 100% of the people, because they learn their lesson, and it only happens to 50% of them? In that case it's only 6.25 people, the first cause accounts for 67% of the people who experience X for the first time in any given year, and therefore it is the most common cause.

Your failure of imagination is not an argument.
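The arithmetic in that toy model is easy to check; here's a minimal sketch using the same made-up numbers (population of 1000, 80-year average lifespan, each cause striking a person at most once):

```python
# Toy model from the comment above; all numbers are hypothetical.
population = 1000
lifespan = 80  # years, average

# Cause A hits everyone once per lifetime: ~12.5 first-time cases/year.
cause_a = population / lifespan

# If cause B also hits everyone once, each cause accounts for half
# of the first-time experiences of X in a given year.
cause_b = population / lifespan
print(cause_a / (cause_a + cause_b))        # 0.5

# If people learn their lesson and cause B only ever hits half the
# population, cause A becomes the most common cause (~67%).
cause_b_half = 0.5 * population / lifespan  # 6.25 cases/year
print(cause_a / (cause_a + cause_b_half))   # 0.666... ~= 67%
```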
No. All fraud relies on people trusting without good reason, or more specifically: not distrusting enough. This is no exception.
Indeed, but it doesn't have to be proven, because the hallmark of having a solid epistemology is not believing things without evidence, and in order to fall for the fraud you have to believe things without evidence. So if anyone with a solid epistemology fell for the fraud, they would, almost by definition, have to be a very rare exception.
That's a useless statement; it's like saying all deaths are caused by not living long enough and presenting it as some ultimate discovery in medicine. Of course fraud relies on trust, by definition, and of course in hindsight that trust was misplaced. But one absolutely cannot function in a society without trusting somebody with something. Even low-trust societies have some trust. You go to a store and you trust the owner not to murder you, feed your body to the pigs, and take your money. You put your money in the bank and you trust the bank not to refuse to give it back, or the society to be on your side if they do. You get employed and you trust your employer to pay you and not to sell your data to identity thieves and ghost you, etc. (Sidenote: before you say "I actually never trust anybody, I grow my own food on top of a remote mountain and never speak to another human being unless I see them through the sights of my rifle, and only to procure ammunition for said rifle, and I demand it upfront" - good for you, but that's not how human society works; please understand "you" as a collective pronoun here.)

We trust somebody many times a day if we live in a society, and in most of these cases the trust is reciprocated with cooperation. Sometimes, though, there are defectors. We recognize the pattern of defection and avoid trusting them: if somebody comes up to you on the street and offers to sell you a genuine Rolex watch for $5, you rightfully mistrust them, because you have prior experience that says that in this context, trust is not warranted. However, absent such context, cases of misplaced trust will always exist, because it is not possible to perfectly calibrate one's trust without decent knowledge of the matter at hand.
Again, this is a banality which on closer consideration falls apart as useless. You cannot evaluate the quality of evidence without experience in evaluating that particular kind of evidence, and not many have experience evaluating evidence in this particular area.
No you don't. You would just believe evidence that in hindsight proves wrong or low quality. In most topics, you cannot evaluate evidence by yourself; nobody can. Most people rely on authority of some sort for that, so we're back to trust. The modern newspaper fashion of sanctifying "evidence" is a meaningless ritual: anything can be "evidence" or "not evidence", depending on how you evaluate it and relate it to the question at hand.

How do you know if some investment is good or a fraud? You check its description, its references, the opinions of other people, the data about similar investments, your knowledge of how the financial system works, your knowledge of who this particular person is - all of this relies on myriad sources which you cannot check empirically; it's trust all the way down. There's no procedure that can guarantee you the absence of any possibility of being deceived here, only methods to reduce that possibility to a level you would find tolerable, and even those calculations rely on data you'd have to take on trust. Sometimes the whole house of cards fails, and you find yourself defrauded. It may be because you personally misjudged the evidence, it may be because somebody you trusted made a mistake, it may be because somebody somewhere in the web of trust defected. There's no "solid epistemology" that would provide a guarantee against that. If you think there is, you are the one believing things without evidence.