Kelly tells you that taking bets past the Kelly optimum causes your growth rate to decline very fast. While yes, Kelly optimal + epsilon is still fine (by continuity of the Kelly formula), the risk generally lives to the right.
This is doubly true if your probabilities are estimates, and not actually certain - which is what I attempted to illustrate with the dashed vertical line.
Take a look at the graph I've attached. It's my general thinking on the topic, though possibly I am misunderstanding something? I am not in any sense an expert or theoretician - just a guy who uses Kelly as a heuristic. (And in an illiquid markets context where I cannot choose arbitrary bet sizes.)
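To make the asymmetry concrete, here is a minimal sketch (the numbers, a 60% win probability at even odds, are assumed purely for illustration). The log-growth curve $g(f) = p\ln(1+bf) + (1-p)\ln(1-f)$ falls off faster to the right of the optimum, and by twice Kelly the growth rate is already negative:

```python
import math

def growth_rate(f, p, b):
    """Expected log-growth per bet: g(f) = p*ln(1+b*f) + (1-p)*ln(1-f)."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0           # assumed: 60% win probability, even-money payoff
f_star = p - (1 - p) / b  # Kelly optimum: f* = 0.2

# The curve is asymmetric around f*: overbetting hurts more than underbetting.
print(growth_rate(f_star, p, b))        # ~0.0201, the peak
print(growth_rate(f_star - 0.1, p, b))  # ~0.0150, underbet by 0.1
print(growth_rate(f_star + 0.1, p, b))  # ~0.0148, overbet by 0.1: already worse
print(growth_rate(2 * f_star, p, b))    # ~-0.0025, 2x Kelly: negative growth
```

The underbet and overbet are the same distance from $f^*$, yet the overbet grows slower, and it only gets worse from there.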
I am mostly a person who uses Kelly and not a theoretician, but the Kelly formula is definitely derived from the principle of maximizing $E[\ln(S)]$ with $S$ = wealth. That has the obvious philosophical interpretation of diminishing marginal utility, unless I'm missing something.
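The derivation from that principle is one line of calculus. A sketch, in the standard notation (win probability $p$, net odds $b$, fraction $f$ of wealth staked):

```latex
g(f) = \mathbb{E}\!\left[\ln(S_1/S_0)\right] = p \ln(1 + bf) + (1 - p)\ln(1 - f),
\qquad
g'(f) = \frac{pb}{1 + bf} - \frac{1 - p}{1 - f} = 0
\;\Longrightarrow\;
f^* = p - \frac{1 - p}{b}.
```

Maximizing $E[\ln S]$ rather than $E[S]$ is exactly where the diminishing-marginal-utility reading comes from: $\ln$ is concave.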
I assume that what SBF is talking about is he instead maximizes $E[S]$ which is quite different.
I did find this, which attempts to defend Kelly from the perspective of volatility drag rather than diminishing utility: https://www.lesswrong.com/posts/zmpYKwqfMkWtywkKZ/kelly-isn-t-just-about-logarithmic-utility
Will need to read it more carefully.
Kelly "applies" even if you can't vary your bet size. Kelly tells you the maximum bet size you should be willing to take, so you would take a bet if its size is below what Kelly tells you, and reject it otherwise.
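As a sketch of that decision rule (the numbers are assumed for illustration, and the rule is exactly the one stated above, not a claim of optimality in its own right):

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of wealth for win probability p at net odds b."""
    return p - (1 - p) / b

def should_take(stake, wealth, p, b):
    """Take a fixed-size bet iff it has positive edge and its fraction of
    wealth is at most the Kelly-optimal fraction."""
    f_star = kelly_fraction(p, b)
    return f_star > 0 and stake / wealth <= f_star

# With p = 0.55 at even odds, f* = 0.10: on $100 of wealth,
# take a $5 bet, reject a $20 one.
print(should_take(5, 100, 0.55, 1.0))   # True
print(should_take(20, 100, 0.55, 1.0))  # False
```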
Yeah I stand corrected, wrote this early in the morning.
Maybe financial institutions should start offering serious money on double-or-nothing bets during the interview. And if anyone takes it more than three times, you happily pay them the money from the bet and then don't fucking hire them.
My new career: interview at financial institutions and take their gambles, but secretly use kelly criterion w.r.t. my total wealth.
One curious thing I noticed about SBF on the Tyler Cowen podcast is that he had a very odd take on the St. Petersburg paradox. At the time, I found myself very much unable to steelman it.
COWEN: Should a Benthamite be risk-neutral with regard to social welfare?
BANKMAN-FRIED: Yes, that I feel very strongly about.
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
I'll just interject here that to me, this sounds completely insane. For those less familiar with decision theory, this is not an abstruse philosophical question: it's simply a mathematical fact that with probability approaching 1 (the chance of surviving $n$ rounds is $0.51^n$, which vanishes for large $n$), SBF will destroy the world.
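The arithmetic, as a quick sketch: the expected value keeps growing as $(2 \times 0.51)^n = 1.02^n$, even while the probability that anything is left collapses.

```python
def survive_prob(n):
    """The world survives only if every one of n rounds is won."""
    return 0.51 ** n

def expected_value(n):
    """Expected number of Earths after n rounds: (2 * 0.51)^n = 1.02^n."""
    return (2 * 0.51) ** n

for n in (10, 100, 1000):
    print(n, survive_prob(n), expected_value(n))
# After 10 rounds survival is already ~0.1%; after 100 it is ~1e-29,
# yet the expected value is ~7 Earths and still rising.
```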
COWEN: Are there implications of Benthamite utilitarianism where you yourself feel like that can’t be right; you’re not willing to accept them? What are those limits, if any?
BANKMAN-FRIED: I’m not going to quite give you a limit because my answer is somewhere between “I don’t believe them” and “if I did, I would want to have a long, hard look at myself.”...
At the time I found this odd. Does SBF not understand Kelly betting? This Twitter thread, unearthed from two years ago, suggests maybe he doesn't:
https://twitter.com/breakingthemark/status/1591114381508558849
I don't see how he, or Caroline, or the rest of his people got to where they did without understanding Kelly. Pretty sure you don't get to be a junior trader at Jane Street without understanding it.
My best attempt at a steelman is that because he's altruistic, the linear regime of his utility function extends a lot further than for Jeff Bezos or someone else with an expensive car collection. As in, imagine each individual has a sequence of things they can buy with diminishing marginal utility, $u_0 > u_1 > \dots > u_n > \dots$ with $u_n \rightarrow 0$, where each thing has unit cost. A greedy gambler has sublinear utility since they first buy $u_0$, then $u_1$, and so on: spending $N$ dollars on yourself yields $\sum_{i=0}^{N-1} u_i < N u_0$.
But since SBF is buying stuff for everyone, he gets $N u_0$.
Then again, this is still clearly wrong: eventually he runs out of people who don't have $u_0$, and he needs to start buying $u_1$. His utility is still diminishing.
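To put toy numbers on that (the utilities $u_i = 0.9^i$ and population size are hypothetical, chosen only for illustration): the altruist's utility curve is just the greedy curve stretched horizontally by the population size. It is linear at first, but still concave, and still bounded.

```python
def greedy_utility(n):
    """Total utility from spending n dollars on yourself: u_0 + ... + u_{n-1},
    with assumed utilities u_i = 0.9**i."""
    return sum(0.9 ** i for i in range(n))

def altruist_utility(n, pop):
    """Spend n dollars buying the best remaining item for each of pop people
    in turn: pop copies of u_0 first, then pop copies of u_1, and so on."""
    levels, rem = divmod(n, pop)
    return pop * sum(0.9 ** i for i in range(levels)) + rem * 0.9 ** levels

# Greedy utility saturates near 1/(1 - 0.9) = 10. The altruist's utility is
# linear up to pop dollars, then bends over and saturates near pop * 10:
# the same diminishing shape, just further out.
print(greedy_utility(100))           # ~10, nearly saturated
print(altruist_utility(100, 100))    # 100.0, still in the linear regime
print(altruist_utility(10**6, 100))  # ~1000, saturated
```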
Is there some esoteric branch of decision theory that I'm unfamiliar with - perhaps some strange many worlds interpretation - which suggests this isn't crazy? Is he just an innumerate fraud who truly believes in EA, but didn't understand the math?
I would love any insights the community can share.
But it also has its downsides, and one of the most important is the plot-driven necessity that anybody powerful be somewhere between evil and useless, so never good. A heroic overdog would be boring, after all: no uncertainty about how he'll get out of this one.
One can certainly write a story of this sort. But it's a quite different story.
There's an isekai called Worth The Candle that tells a story of this sort. Not sure how to spoiler tag things, but basically there's an evil villain who immediately surrenders because he can see he's underpowered relative to the ubermensch lead character. At this point he reveals that he rules a surprisingly populous society that he built around his particular variety of evil, one which will rapidly descend into starvation if you just switch it off, and with a population that is aligned with his values. Um, now what? And is handling this situation even a good use of ubermensch time and effort relative to the primary goal of creating a good magical singularity?
Unlike Tolkien's work, it is not rooted in medieval England. You could race-swap the main characters with no issue and toss in a hint that the isekai lead faced racism back in Oklahoma.
Also, it's niche internet fiction written by a stay-at-home dad who is a big success by the standards of Patreon, earning maybe $1-2k/month from 444 subscribers. There's a reason Amazon didn't buy that and ruin it: it's utterly inaccessible to most readers.
Galadriel is a Mary Sue, but I guess she is in the books too. We'll see how her story develops.
In the books this works because Tolkien understands narrative. There's a clear plot point that prevents Galadriel from just orc-punching her way to Mordor: namely, that once she possesses the Ring, she may become the ultimate warrior princess (which she was not in the books!), but it'll be a warrior princess with Sauron-characteristics. As a result she's in maybe two scenes, and primarily illustrates just how deep in over their heads the hobbits are, setting the stage for their heroic journey.
It's much the same reason why Superman sucks but Watchmen was good. Superman punches people who can't really hurt him, because he's Superman, and then (cue dramatic music and Henry Cavill emoting) finally punches the bad guy really hard and wins. In contrast, Watchmen made Dr. Manhattan a mysterious godlike figure and told the story through the lens of mortals. Manhattan could punch the Soviets really hard, but isn't doing that, for his own reasons.
You can't make Galadriel, Gandalf or Tom Bombadil the main character in any kind of hero journey - it's too late, the journey is over for them. A story about them is a fundamentally different thing and probably too niche to justify the price tag.
The point of the $8 checkmark is to eliminate the sumptuary laws that previously served to distinguish between nobles and commoners.
In a similar line of thinking, expect Musk to eliminate Twitter's lèse-majesté laws. It may soon be legal to tell a journalist to learn to code, or to describe a "public health expert" with an NPC meme.