You may be familiar with Curtis Yarvin's idea that Covid is science's Chernobyl. Just as Chernobyl was Communism's Chernobyl, and Covid was science's Chernobyl, the FTX disaster is rationalism's Chernobyl.
The people at FTX were the best of the best, Ivy League graduates from academic families, yet free-thinking enough to see through the most egregious of the Cathedral's lies. Market natives, most of them met on Wall Street. Much has been made of the SBF-Effective Altruism connection, but these people have no doubt read the sequences too. FTX was a glimmer of hope in a doomed world, a place where the nerds were in charge and had the funding to do what had to be done, social desirability bias be damned.
They blew everything.
It will be said that "they weren't really EA," and you can point to precepts of effective altruism they violated, but by that standard no one is really EA. Everyone violates some of the precepts some of the time. These people were EA/rationalist to the core. They might not have been part of the Berkeley polycules, but they sure tried to recreate them in Nassau. Here's Alameda Research CEO Caroline Ellison's Tumblr page, filled with rationalist shibboleths. She would have fit right in on The Motte.
That leaves the $10 billion question: How did this happen? Perhaps they were intellectual frauds just as they were financial frauds, adopting the language and opinions of those who are truly intelligent. That would be the personally flattering option. It leaves open the possibility that if only someone actually smart were involved, the whole catastrophe would have been avoided. But what if they really were smart? What if they are millennial versions of Ted Kaczynski, taking the maximum expected-value path towards acquiring the capital to do a pivotal act? If humanity's chances of survival really are best measured in log odds, maybe the FTX team are the only ones with their eyes on the prize?
There's a bunch of argument about what utilitarianism requires, or what deontology requires, and it seems sort of obvious to me that nobody is actually a utilitarian (as evidenced by people not immediately voluntarily equalizing their wealth), or actually a deontologist (as evidenced by our willingness to do shit like nonconsensually throwing people in prison for the greater good of not living in a crime-ridden hellhole). I mean, really any specific philosophical school of thought will, in the appropriate thought experiment, result in you torturing thousands of puppies or letting the universe be vaporized or whatever. I don't think this says anything particularly deep about those specific philosophies, beyond the fact that it's apparently impossible to explicitly codify human moral intuitions but people really, really want to anyway.
That aside, in real life self-described EAs universally seem to advocate for honesty, based on the pretty obvious point that actors' ability to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a Hobbesian war of all against all. And yeah, if you're a good enough liar that nobody ever finds out you're dishonest, then I guess you don't damage that; but if you think about it for two seconds, nobody tells material lies expecting to get caught, and the obvious way to avoid being known for dishonesty long-term is to be honest.
As for the St. Petersburg paradox thing, yeah, that's a weird viewpoint and one that seems pretty clearly false (since marginal utility per dollar declines way more slowly on a global/altruistic scale than on an individual/selfish one, but it still does decline, and the billions-of-dollars scale seems to be about where it starts being noticeable). But I'm not sure that's really an EA thing so much as a personal idiosyncrasy.
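For what it's worth, the declining-marginal-utility point can be made concrete with a quick sketch (my own illustration, not anything claimed in the thread): in a truncated St. Petersburg lottery, the expected dollar payout grows without bound, but the expected utility under a logarithmic (diminishing-returns) utility function stays bounded, which is exactly why a log-utility agent declines the bet a naive expected-value maximizer takes.

```python
import math

# Truncated St. Petersburg lottery: with probability 2**-k you win
# 2**k dollars, for k = 1..n. (Toy numbers for illustration only.)

def expected_value(n):
    # Each term is exactly 1, so the expected dollar payout equals n:
    # it grows without bound as the lottery is extended.
    return sum(2**-k * 2**k for k in range(1, n + 1))

def expected_log_utility(n):
    # Under log utility the series converges to 2 * ln(2) ≈ 1.386,
    # so the bet is worth only a modest, bounded amount.
    return sum(2**-k * math.log(2**k) for k in range(1, n + 1))

print(expected_value(40))        # 40.0 — unbounded in n
print(expected_log_utility(40))  # ≈ 1.386 — bounded regardless of n
```

The "global/altruistic scale" caveat in the comment above amounts to saying the effective utility curve for philanthropy is much flatter than an individual's, so the log-like bend only kicks in around the billions.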
There's a problem with that: a moral system that requires you to lie about certain object-level issues also requires you to lie on all related meta-, meta-meta-, and higher levels. So for example, if you're intending to defraud someone for the greater good, not only should you not tell them that, but if they ask "if you were in fact intending to defraud me, would you tell me?" you should lie, and if they ask "doesn't your moral theory require you to defraud me in this situation?" you should lie, and if they ask "does your moral theory sometimes require lying, and if so, when exactly?" you should lie.
So when you see people espousing a moral theory that pretty straightforwardly says it's OK to lie if you're reasonably sure you won't get caught, who, when questioned, happily confirm that yeah, it's edgy like that, but then seem to realize something and walk it back without providing any principled explanation, as Caplan claims Singer did, then the obvious and most reasonable explanation is that they are now lying on the meta-level.
And then there's Yudkowsky, who actually understood the implications early enough (at least by the point SI rebranded as MIRI and scrubbed most of the stuff about their goal being to create the AI first) but can't help leaking stuff on the meta-meta-level, talking about a Bayesian conspiracy where, if you understand things properly, you must understand not only what's at stake but also that you shouldn't talk about it. See Roko's Basilisk for a particularly clear-cut example of this sort of fibbing.
That's like saying that Christians don't actually believe that sinning is bad because even Christians occasionally sin. You can genuinely believe in moral obligations even if the obligations are so steep that (almost) no one fully discharges them.
Why on earth would a deontologist object to throwing someone in prison if they're guilty of the crime and were convicted in a fair trial?
Well it sure seems like Caplan has the receipts on Singer believing that it's okay to lie for the greater good, as a consequence of his utilitarianism.
Sure, except for when it really matters, and you're really confident that you won't get caught.
Fair enough! I suppose it depends on whether you view the morally relevant action as "imprisoning someone against their will" (bad) vs "enforcing the law" (good? Depending on whether you view the law itself as a fundamentally consequentialist instrument).
I think the relevant distinction here is that not only do I not give away all my money, I also don't think anyone else has the obligation to give away all their money. I do not acknowledge this as an action I or anyone else is obligated to perform, and I believe this is shared by most everyone who's not Peter Singer. (Also, taking Peter Singer as the typical utilitarian seems like a poor decision; I have no particular desire to defend his utterances, and nor do most people.)
On reflection, I think that actually everyone makes moral decisions based on a system where every action has some (possibly negative) number of Deontology Points and some (possibly negative) number of Consequentialist Points, and we weight those in some way and tally them up, and if the outcome is positive we do the action.
That's why I not only would myself, but would also endorse others, stealing loaves of bread to feed my starving family. Stealing the bread? A little bad, deontology-wise. Family starving? Mega-bad, utility-wise. (You could try to rescue pure-deontology by saying that the morally-relevant action being performed is "letting your family starve" not "stealing a loaf of bread" but I would suggest that this just makes your deontology utilitarianism with extra steps.)
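The weighted-tally model above can be sketched in a few lines (the point weights here are invented purely for illustration; nothing in the thread specifies them):

```python
# Toy version of the "Deontology Points + Consequentialist Points" model:
# score each action on both axes, weight, sum, and act iff positive.

def decide(deon_points, conseq_points, w_deon=1.0, w_conseq=1.0):
    """Return True (do the action) iff the weighted tally is positive."""
    return w_deon * deon_points + w_conseq * conseq_points > 0

# Stealing a loaf to feed a starving family: a little bad
# deontology-wise, hugely good utility-wise -> do it.
print(decide(deon_points=-1, conseq_points=+10))  # True

# Stealing client funds on a speculative expected-value bet: very bad
# deontology-wise, dubious upside -> don't.
print(decide(deon_points=-5, conseq_points=+2))   # False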
I can't think of any examples off the top of my head where the opposite tradeoff realistically occurs, negative utility points in exchange for positive deontology points.
I mean... yeah? The lying-to-an-axe-murderer thought experiment is a staple for a reason.
Fair in general, but he is a central figure in EA specifically, and arguably its founder.
How about stealing $1000 of client funds to save a life in a third world country? If they'd be justified to do it themselves, and indeed you'd advocate for them to do it, then why shouldn't you be praised for doing it for them?
The fatal flaw of EA, IMO, is extrapolating from (a) the moral necessity to save a drowning child at the expense of your suit to (b) the moral necessity to buy mosquito nets at equivalent cost to save people in the third world. That syllogism can justify all manner of depravity, including SBF's.
Yeah, fair, I'll cop to him being the founder (or at least popularizer) of EA. Though I declaim any obligation to defend weird shit he says.
I think one thing that I dislike about the discourse around this is it kinda feels mostly like vibes-- "how much should EA lose status from the FTX implosion"-- with remarkably little in the way of concrete policy changes recommended even from detractors (possible exception: EA orgs sending money they received from FTX to the bankruptcy courts for allocation to victims, which, fair enough.)
On a practical level, current EA "doctrine" or whatever is that you should throw down 10% of your income to do the maximum amount of good you think you can do, which is, as far as I can tell, basically uncontroversial.
Or to put it another way: suppose I accepted your position that EA as it currently stands is way too into St. Petersburging everyone off a cliff, and way too into violating deontology in the name of saving lives in the third world. Would you perceive it as a sufficient remedy for EA leaders to disavow those perspectives in favor of prosocial varieties of giving to the third world? If not, what should EAs say or do differently?
I don't have a minor policy recommendation as I generally disagree with EA wholesale. I think the drowning child hypothetical requires proximity to the child, that proximity is a morally important fact, that morality should generally be premised more on reciprocity and contractualism and mutual loyalty than on a perceived universal value of human life. More in this comment.
Is there, do you think, any coherent moral framework you'd endorse where you should donate to the AMF over sending money to friends and family?
I think utilitarianism should play a very small but positive part in one's moral framework, a tiny minority vote in one's moral parliament, but committing to donate 10% of one's income to the other side of the planet is messed up, and invites your neighbors and fellow countrymen to reciprocate by treating you as no more deserving of moral consideration than a stranger on the other side of the planet. If one sees an EA type drowning in a pond, I don't exactly endorse this approach, but I think there would be a certain cold reciprocity in walking past whistling, conscience clean that one has already dedicated at least 10% of one's attention to one's neighbors, friends, and community.
Are there any charities to which you would endorse sending 10 percent of your income each year?
I don't understand your point. Are you claiming that it's impossible to believe that you have a moral obligation if you aren't living up to it? That obligations are disproved by akrasia?