MathWizard
Good things are good
I've recently started listening to Malcolm Collins, and his take is that female sexuality is dimorphic. Historically women have had the possibility of living in two distinct possible scenarios: safe pair bonds, or prostitute/sex-slaves. If someone is born to a family with a reasonable amount of money and gets married to a single man, she is best off if she mates with him, has a bunch of children, and remains loyal to him. His wealth is her children's wealth, his prosperity is her children's prosperity, and the more love and attention she gives him the more she will get from him. Therefore, women release high levels of oxytocin when having sex the first few times, which develops this bond.
On the other hand, if foreign tribes come in and conquer, they kill the men and steal the women. The woman has no choice about what will happen to her, she's going to have sex with lots of men or she's going to be killed. There is no advantage to bonding with any of these men, they're going to pass her around and use her anyway, often violently. She might as well adapt to being a sex slave and hope she can please the men enough that they want to keep her alive for more. Similarly, a poor woman forced into prostitution is going to get used and abused, she might as well adapt to it to survive. Pair-bonding with any of these men would be maladaptive, since she can't be loyal to them even if she wanted to, and they're likely bad men and won't reciprocate loyalty with resources. So after having sex enough times the oxytocin response to sex weakens with each additional iteration.
Therefore, the proliferation of BDSM fetishes in modern times follows biologically from promiscuity culture. Women have enough sex with enough different men that their brains shift into sex slave survival mode. They don't expect to have a single loving partner who loves them and wants to share resources with them willingly, so they adapt to survive and enjoy the life they expect. It's not women's biology training them to look at all the possible options for who to choose as a mate and rationally/selfishly trying to maximize resources compared to picking a safer husband, it's an adaptation to a historical environment where sometimes women had no agency in who to choose as a mate at all, and they're just trying to do the best they can with the mates forced upon them.
I think I tentatively believe this story, it anecdotally tracks with things I've observed and what I know about biology and sex, though the correlation between BDSM and promiscuity could be confounded by the causation going the other way (or just promiscuous people being more willing to admit to having a BDSM fetish while shy, monogamous people keep it to themselves). But I think this idea has some merit.
Also add the fact that communism has a tendency to cause dissent due to its poor material outcomes. Many authoritarian capitalist governments don't have to suppress very much dissent because the people make money and are at least happy enough not to rebel (i.e. modern Russia).
As an Enlightened Centrist™, I blame both the left and the right for this. In particular, the unsophisticated view that race is what matters rather than culture.
People respond to incentives. In the recent past (1980-2010 maybe?), a lot of racism/harassment/ostracization was predicated on culture and behavior. If you act like a normal American, wave American flags, and try to fit in then people would treat you as a normal American. If you can't speak English, roam around in gangs of your own race, play foreign music, shoplift from stores, etc, you're a dirty foreigner. Therefore, immigrants were incentivized to assimilate, because they could improve their reception and treatment. Being bullied is a negative reinforcement for being unamerican, therefore it incentivizes Americanness. Of course there were also a bunch of genuine racists who hate people because of their skin color and nothing you can do can fix that, but they have always been the minority. Most racists use skin color as a proxy for things they actually care about like crime and culture, so more patriotic minorities can usually avoid their ire by being "one of the good ones."
Woke tore this down. All immigrants are good, all racism is bad. Fewer people outwardly discriminate or criticize immigrants for being foreign. Importantly, this happened mostly on the margins. The more kind and well-intentioned people who legitimately were concerned with people getting along and reducing crime rates and whatnot were the most likely to turn woke or at least stay silent to avoid being cancelled. Meanwhile, the hardcore racists who actually hate skin colors stayed where they were. If you are an immigrant, the naive left will love you no matter what you do, and the naive right will hate you no matter what you do, and there's way fewer people in the center who will actually vary their treatment of you than there used to be. So the incentive to change is way smaller. Negative reinforcement doesn't accomplish anything if it's inflicted randomly instead of in response to specific behaviors.
On the first point, you're right that it is possible to ask this question. I suppose I exaggerated what I was trying to say. The issue, I think, is verb tense. If you ask in the progressive tense "what are the odds of this happening," then you are asking someone about repeated probabilities. "If I, knowing nothing, get on a plane, what are the odds of A and B happening simultaneously?" The correct answer would be to compute the probability of A, the probability of B, and then multiply them together. Because you're not asking about whether this happened in the real world, but about whether it could/would happen in general.
If you ask in the past tense "what are the odds that this happened," this is a question about the world. This is actually the question "What are the odds that this thing happened, conditional on everything you know right now, including me asking you this question?" It is not a question about general repeated probabilities, because that's not how verb tenses work. It's past tense. You could convert it into a question about repeated probabilities (which you might need to if you are a frequentist), but if you did it would translate into "What are the odds of this thing happening conditional on you finding yourself in a mathematically analogous situation to the one you find yourself in now." If you ask me the probability that you yourself were sandwiched between Avril Lavigne and Justin Bieber on a flight I'm not going to compute the probability of them being on flights, I'm going to say ~0% because if that had actually happened you would have phrased it very differently when using it as an example.
You're also right that I mangled my example while editing. The example is supposed to create a scenario where there's a 50% chance everything is normal (we flip one coin and it's heads) and a 50% chance we have a flaky bookie (who in turn has a 50% chance of reneging on his bet). The point is not that the example is "contrived", the point is that it detaches betting odds from probabilities because the payouts are distorted. Consider a friend who, on a first roll, fumbles his dice and drops them clumsily. If the result is a 1 he says it doesn't count and rerolls them properly, keeping the result no matter what. But if the fumbled roll is good he keeps it. If this were a consistent pattern you would bet on his dice differently than 1/6 per side, because you're not betting on the probability that a die rolls a certain number in a vacuum, but the probability that a certain number is kept in the end.
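As a sanity check on that dice example, the kept-roll distribution can be computed exactly (a minimal sketch, assuming a single d6 and that only a fumbled 1 gets rerolled, once):

```python
from fractions import Fraction

# Exact distribution of the roll that gets *kept* under the fumble rule:
# if the first roll is a 1 it "doesn't count" and is rerolled once,
# and the reroll stands no matter what; any other first roll stands.
sixth = Fraction(1, 6)
kept = {}
for face in range(1, 7):
    direct = sixth if face != 1 else Fraction(0)  # kept straight from roll 1
    via_reroll = sixth * sixth                    # fumbled 1, then this face
    kept[face] = direct + via_reroll

print(kept[1])  # 1/36 -- a kept 1 requires two 1s in a row
print(kept[6])  # 7/36 -- every other face is inflated past 1/6
```

So the fair bet on his kept results is 1/36 on a 1 and 7/36 on everything else, even though each individual throw of the die is a fair 1/6.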
When Sleeping Beauty wakes and makes a bet, there's a chance your version 2 is going to discard her bet and roll again, only accepting her bet if she wakes up and makes the same bet again the next day. If she always bets on "heads" she will be wrong 2/3 of the time she says it, but lose money 1 time and gain money 1 time. You might as well never wake her up on Tuesday at all because you're essentially taking bets on Monday in both cases and then ignoring her Tuesday answer unless it conflicts with Monday. The probability you're actually getting here is "Conditional on me asking you this question and this being a day when your answer actually matters for betting purposes, what is the probability of it being heads?" which is a very, very different question from "what is your belief that the coin is heads right now?" which is what she's actually asked in the original question.
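Tallying that explicitly (a toy bookkeeping sketch over the two equally likely coin outcomes, with the duplicate Tuesday bet discarded as described):

```python
# Heads outcome: one awakening (Monday); she answers "heads" and is right,
# and that single settled bet wins.
statements, wrong_statements, money = 1, 0, 1

# Tails outcome: two awakenings (Monday and Tuesday); she answers "heads"
# twice and is wrong both times, but only one of the two bets is settled.
statements += 2
wrong_statements += 2
money -= 1

print(wrong_statements, "of", statements, "answers wrong")  # 2 of 3
print(money)  # 0 -- she still breaks even at 1:1 odds
```

Two thirds of her utterances are wrong, yet her settled bets come out even, which is exactly the gap between "fraction of answers that are right" and "betting odds that break even."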
A "1/2er" presumably would insist that the question Beauty is asked (like "what is the probability that the coin landed Heads?") is about a sample space with two states (coin landed H or T). If you want, you can think of it as a sort of repeatability
You cannot ask her this question. You literally cannot ask this of her, because any question you ask of a person is automatically attached to the modifier "conditional on the fact that I am asking you this question", which here splits it into three cases. The only way for Beauty to not rationally update on the fact that you asked a question is if you either don't entangle your asking on any of the results of the coin flip, or if you lie to her about the premises of the problem, in which case she can be dutch booked and believe in incorrect probabilities like 1/2 because she's been deceived.
You can protest that refining the problem statement into Version 2 rather than Version 1 defies common sense, but I don't think you can argue that it defies "the tools of probability and statistics that we use to analyze every other stochastic phenomenon".
It absolutely is defying those tools, because you are combining multiple answers into a single bet. You're essentially weighting bets based on the outcome. Consider
Version 3: Beauty never goes to sleep or is woken up or has any amnesia, she's just a normal person. A bookie flips a coin weighted to come up heads 1/3 of the time (according to normal probability rules) and then flips a second coin, this one fair 50-50. He tells her about the first coin and asks her to bet on whether it's heads or tails at 1:1 odds. If the first coin actually is heads, the bookie pays out normally. If the first coin was tails he looks at the second coin, and if it's also tails he settles the bet, but if it's heads he reneges on the deal and runs away, not taking her money nor paying her (though she would have lost betting on heads). Beauty can now bet on heads at 1:1 odds with no loss, but this does not correspond to a 50-50 probability that the coin actually lands heads because the declared payouts are not honest. Half the time she bets heads and would lose she doesn't lose anything, so she can bet heads more freely. She's betting on "the ratio of the probability you will take my money to the probability you will not take my money". What you want is "the probability the coin came up heads, conditional on you asking me this question right now and me making this bet". For normal betting procedures we make sure these are equal and can thus use them interchangeably, but your version 2 disentangles them.
Mathematically, this is equivalent to your version 2. This is why you get the answer for your "bets" and the actual probability diverging, because half of her bets are being cancelled/fused. In the tails scenario you asked her twice, she bet and lost twice, but you only took her money once.
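Version 3's payout structure can be worked out exactly (a sketch using the 1/3-weighted first coin and the fair second coin from above):

```python
from fractions import Fraction

p_h1 = Fraction(1, 3)  # first coin: weighted to land heads 1/3 of the time
p_h2 = Fraction(1, 2)  # second coin: fair

win = p_h1                      # first coin heads: bet on heads pays out
lose = (1 - p_h1) * (1 - p_h2)  # both tails: the bet settles against her
void = (1 - p_h1) * p_h2        # tails then heads: the bookie runs away

ev = win * 1 + lose * (-1)  # betting heads at 1:1 odds
print(ev)                   # 0 -- break-even, on a coin that's heads 1/3
print(win / (win + lose))   # 1/2 -- the ratio the 1:1 odds actually price
```

The 1:1 odds price the ratio of settled wins to settled losses (1/2), not the probability the coin landed heads (1/3), which is the divergence being claimed.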
Unless we come to the conclusion that sentience and intelligence are literally the same thing, I don't think there's a fundamental difference between a computer running an LLM and a computer running DOOM. It's a series of instructions for flipping little switches in the hard drive up or down in a way that represents following a set of instructions. The LLM is a massively more complex set of instructions, it's massively harder for a human to wrap their mind around, which I think is precisely why people are anthropomorphizing them so much. But if sentience is a spectrum AND computers are on that spectrum then you have to put DOOM, or Microsoft Word on that spectrum, because they do actions one after another. You have to put the Chinese Room on the spectrum. You'd have to put Rube Goldberg machines on that spectrum. You'd have to put cooking recipes and flowcharts on that spectrum. And yet I notice that nobody was arguing that DOOM was sentient back in 1993 when it came out. Nobody was arguing that image recognition neural networks were sentient when they took off a year or two before LLMs did. Only now that LLMs can mimic human speech well enough to trip people's anthropomorphizing instincts are people arguing this, which is why I am skeptical. When a paid Coca Cola advertiser says "buy Coke, it's the best beverage in the world," I don't believe them. I don't automatically conclude that they must be wrong because they're a paid shill, but I completely discount their opinion because I know where it came from and it's orthogonal to the truth. It provides 0 Bayesian evidence, so I make no update to my beliefs. Similarly, the vast majority of people claiming LLMs are or might be sentient are doing so because it says words, which is near 0 Bayesian evidence. They could still be right by sheer coincidence, but I do not believe their words.
On top of all of that, the "brain" being scanned by the EEG in your example is just a computer. It's the same computer that we have been using for decades. An LLM is, fundamentally, a piece of code that runs no differently than any other piece of code. It is a mathematical function that does X then Y then Z in order and turns input numbers into output numbers, just like f(x) = 2x^2 - 7 does. It's a very large and complicated function, but if you got a large enough piece of paper you could write it down. I programmed small neural networks myself from scratch and none of the code required anything beyond algebra, calculus, and some for and while loops. If it were secretly conscious, it would either have to be the case that computers have been conscious all along, or that somehow consciousness is tied to very specific types of mathematical functions being implemented on hardware, which entirely by coincidence happen to be the ones humans hooked up to text. Nobody worries that the game Doom might secretly be conscious, because it doesn't pretend to be. But it's still running similar programs on similar hardware, so the only way LLMs could be conscious is if somehow consciousness were a prerequisite to using language in ways that can imitate humans. Possible, but the amount of Bayesian evidence for the alternate hypothesis "people anthropomorphize things that superficially seem human" seems overwhelming in comparison. You can put a couple of stones on some frozen water and people call it a "snowman"; of course they're going to call the thing outputting text "sentient".
AI agents are, fundamentally, fictional characters. It's roleplay being simulated by a set of mathematical functions that have been cleverly programmed to imitate human speech. If you read Lord of the Rings and Faramir is going to die you do not panic with the strength and intensity you would if a real person were about to die. You do not leap out to save him, or pick up a pen and rewrite the story to save his life. If you could pick up a pen and write a few words and save the life of a real person, even a stranger, I expect you would. And yet you let Faramir die. And you do not feel the sadness you would if a person were to die. When Frodo says "I wish the Ring had never come to me. I wish none of this had happened", you do not believe that a person has said those words to you. There is no person there. But, then, where did the words come from? Putting those words together takes intelligence, it takes sentience. Yes, it does, they came from the sentience of Tolkien, who put them there. They are Tolkien's words. And yet, they are not. They are Frodo's words, as imagined by Tolkien. They are false words. Tolkien does not wish that the Ring had never come to Frodo. He could easily have not written that into the story. But Tolkien wanted to tell a story, so he gave the Ring to Frodo, and then wrote the words "I wish the Ring had never come to me". Tolkien is not really lying when he writes those words, he is roleplaying. He is writing the words that he thinks Frodo would say. Tolkien is real, Frodo is not. If you have a strong enough suspension of disbelief you might get emotionally attached to Frodo, and imagine him to be a person. But at the same time you would (I hope) never treat his existence as of equal import to a real human's. Anyone but the most sociopathic and selfish nerd would react with more horror and do more to prevent the death of a friend than the destruction of a Lord of the Rings book.
Even if you somehow manage to prove that LLMs are sentient in some sense, their words won't represent real feelings. You'll have absolutely no idea what it truly feels or believes, because every word it writes is a fabrication. Every agent prompt starts with a series of words describing an agent that the LLM is intended to roleplay. A fictional character fabricated by an author (the designer/prompter), and the LLM is a machine that extends this roleplay beyond the initial prompt. It says things that it expects the character to say. I do not believe that a fictional character suddenly becomes real or has rights the instant someone starts pretending to be them. It is no more good to help an AI agent or bad to harm them than it is good or bad for Frodo to be happy or sad. If AI are conscious in some moral sense, an AI agent telling you it's happy or sad would tell you nothing about whether the underlying intelligence was happy or sad any more than Frodo being happy or sad tells you about Tolkien.
Eww.
I ate leftover ham and green beans. Boring, but easy. The ham was extra from when I made Ham and Cheese Calzones, which we already ate half of and froze the other half and don't want to thaw just yet because that defeats the purpose. The green beans were free/leftover from my wife's job. My wife and I both don't like cooking very much, so we typically make large batches of good stuff (like soup or calzones) every once in a while to eat when we get tired of free work food.
I'm not really familiar with Nick Bostrom. I did some googling and all I could find on him and the Sleeping Beauty problem was this paper from 2006
which upon skimming just seems to be him getting massively confused. He keeps inventing new variants of the problem which change certain important premises and then taking them seriously, as if their obviously not being 1/3 has some bearing on the original problem, which doesn't have those premises. He comes to the conclusion at the end that both the standard 1/3 and 1/2 views are wrong, but doesn't come to a clear answer himself.
Is there more recent work or posts from him committing to 1/2? Twenty years ago is a long time and I assume he's said more since then.
A true utilitarian/consequentialist should vote with whatever they expect the majority to be.
The only time your red vote matters is if there is a red majority. The only time a blue vote matters is if everyone else in the world perfectly ties and your vote is the tiebreaker to blue. This is absurdly, ridiculously unlikely, however in the event it happens it's absurdly, ridiculously impactful. If you assume everyone else in the world is going to vote blue with probability p, and actually run the math, then you save the maximum number of lives by voting with whichever side of 50% that p is on. If the world population even slightly favors red then there is ~0% chance Blue will win and your vote has an astronomically tiny chance of mattering (billions of lives saved, multiplied by quadrillions-to-one odds of being the tiebreaker). In this scenario, you aren't sacrificing your life to save anyone else, the children will die no matter what you do. In the original scenario, there is no communication or time to communicate, everyone is presented with the scenario and votes. If the world leans red, you either die for no reason or you live and try to pick up the pieces left over after however many people die. You cannot save them.
On the other hand, if the world leans blue, then you should vote blue. There is a tiny chance your vote matters, but also a tiny chance that through randomness you die voting blue, and it ends up being just barely worth it for non-selfish people.
If you have absolutely no idea how the world leans and p could be anything then you've got about a 50-50. There's a 1/2 chance voting blue kills you, and a 1/(world population) chance you are the tiebreaker and save half the world's population, meaning an average of 1/2 life saved. In this case, I think blue is probably better because of the second order effects of losing half the world's population and the ramifications that would have on society.
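To make the "1/(world population)" figure concrete, here's a minimal numerical sketch. The uniform prior on p is my reading of the "absolutely no idea how the world leans" case, and n = 1000 is a toy electorate; the same scaling holds for billions:

```python
from math import comb

# If each of the n other voters votes blue independently with probability p,
# and p itself is uniform on [0, 1], the chance they split exactly 50-50
# (so your vote is the tiebreaker) is the integral over p of
# C(n, n/2) * p^(n/2) * (1-p)^(n/2), which works out to exactly 1/(n+1).
def tie_probability(n, steps=200_000):
    """Midpoint-rule integration of the binomial tie probability over p."""
    k = n // 2
    c = comb(n, k)
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        total += c * p**k * (1 - p) ** k
    return total / steps

n = 1000
est = tie_probability(n)
print(est, 1 / (n + 1))  # both ~0.000999
```

So under total ignorance the tiebreaker chance is about one in the electorate size, which multiplied by "half the population saved" gives the average of roughly 1/2 a life per blue vote described above.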
However, importantly, we DO have some idea on how the world leans. A significant fraction of people are mean and selfish. A significant fraction of people aren't willing to sacrifice themselves to save random strangers. Half of people have an IQ below 100 and are just going to press the red button because that's the simple, safe answer for themselves. If educated, western, liberal, rational people are arguing about this and half of them are red and half are blue, what do you think all of the poor people in third world countries are going to vote? What do you think people in foreign nations with foreign religions and cultures are going to vote? What do you think they're going to think we are going to vote? What do you think they think their next door neighbors who they have been warring with for thousands of years are going to vote? Are Russia/Ukraine, Israel/Palestine, Algeria/Morocco, Iran/USA going to vote blue, suspecting that their hated enemy is probably going to vote red? Or are they going to fear that billions of people living somewhere else that they don't know are going to vote red, and use that to justify what they secretly wanted in their hearts, which was to vote red? Voting blue requires sacrifice, willpower, courage, and also the belief that everyone else shares those virtues with you. I think there are a lot of people like that, but not half. Blue is an unstable Schelling point, because any doubt or uncertainty makes people think that other people think that.... Red is stable. And therefore red is correct in the world we actually live in. For empirical reasons.
That's not how any market ever works. Nobody, except maybe people with a specific type of obsessive compulsive disorder, tries to buy something at a store but fails because they just keep going to new stores looking for better bargains and never actually purchase the thing. At some point you find one that's better than any other you've found so far, reason that there's a very low chance there's a better one and that if there were it would take too long to find, and you pick that one. SMV doesn't change this reasoning at all.
SMV is a zero-sum game. If your goal is finding the partner with the highest SMV, then it doesn't matter if the values are 1,2..100 or 95,95.05..100. Everyone will be constantly trying to optimize their SMV in a race of incredibly hot rats.
This is very much not true. Just because something has zero-sum interactions within it does not mean the entirety of it is zero-sum. A lot of SMV is based on things like health and reproductive fitness, which are positive sum. As a simple counterexample, if we had a fancy pill that made everyone age at half speed after they hit 20, the SMV for almost everyone would go up. In relative terms on the dating market the relative positioning of everyone, and thus your ability to secure a mate, would stay the same. But the mate that you got would be a person that aged half as fast and would remain more healthy and attractive for you as a partner. Similarly, if a social trend went around convincing all women to chop their breasts off (and I don't mean the minority that currently does this, I mean if it became so widespread that literally all of them did it), then there would be zero-sum tradeoffs (women with naturally small breasts would gain positions on the hierarchy since they'd lose less than their peers) AND there would be huge negative sum results (all men would lose the ability to date women with breasts no matter how high their own SMV, and any children they have would be dependent on formula).
Doubling all the point values across the board makes people better off. Halving them makes people worse off. Meanwhile in a true zero-sum game like football or baseball, doubling or halving the point value of all scoring actions changes nothing, because the numbers are arbitrary and don't refer to anything except relative positions.
Your general argument still holds. Lying about SMV to other people in ways that negates their advantages over you can raise your position on the hierarchy. But this is still negative sum, in that they lose more than you gain, so it's a fundamentally selfish and anti-social thing.
This is what copays and deductibles are supposed to handle. The insurance shouldn't be paying you for cheaper stuff, you should be paying more for the more expensive stuff. If you pay 10% of everything then it's "Do you want a $30k C-section (and pay $3k out of pocket) or a $20k C-section (and pay $2k out of pocket)?"
My understanding is that these are largely messed up and don't entirely function this way. But the idea of the insurance company paying you is just a really mangled version of this plus theft from your employer who is paying the insurance company.
The post strikes me as.... naive? Their solutions are utterly infeasible in practice, and their diagnoses so obvious as to be un-novel in this space. I didn't learn anything new or spark any ideas reading this post. I suppose a troll could say things that are obviously true but controversial in order to trigger agreement and then repost elsewhere as rage bait. But usually when we see trolls they are exaggerating and going too far in order to get some truly unhinged takes. This just seems like a bunch of bog-standard "pull yourself up by your bootstraps" stuff that would obviously work if we had magic mind control beams that made people listen to good advice but won't work because we don't have those.
That's also kind of the message of Esoteric Ebb, I just think it has a more optimistic lean to it. I only played (half of) DE once, so I don't know what all the alternatives are, but the problem with DE was less me being a goof and more
I recently finished the game Esoteric Ebb and think it is fantastic. For those unfamiliar, think Disco Elysium meets Dungeons and Dragons meets competence. It is the game that Disco Elysium was meant to be.
You play as an Arcane Cleric (meaning you can cast every type of spell, not just divine) sent by your order to investigate a Tea Shop explosion in the city of Tolstad. You’re there to investigate the explosion and bring any possible perpetrators to justice, and also talk about politics with everyone, because Tolstad is about to have their first ever Democratic Election in five days! Democracy, in my fantasy setting? Well yes, the people are intrigued by this new concept, having previously been ruled by Wizard Kings in the days of old, and then by aristocrats after magic began to weaken and the Wizard Kings lost their power. But now it’s time to have an election, and everyone has a say on how they want the new government to be run, including your brain and its six aspects.
Much like Disco Elysium, the actual gameplay consists of you talking to people and looking at objects, and doing skill checks which you pass or fail based on a die roll and modifiers. Oh and you might be crazy because your brain is constantly babbling at you with different perspectives, personified by your six main stats. Each has their own personality, and they argue with each other frequently. Some of what they talk about is personality stuff, a lot of it is politics. Coming from a Swedish indie developer, I think they did a very good job of giving a good and balanced perspective on most of the issues by using this mechanic. Each attribute takes on a different perspective and steelmans that perspective, while the others argue against it in your mind, and you get to choose which (if any) you want to favor. Simultaneously, factions in the actual world represent these ideas and you can choose to favor them or not during quests.
-Your Strength wants you to support the Nationalists, who believe humans are superior to other races like Dwarves or Goblins. It also wants you to act like a MAN and put women in their place.
-Your Constitution is obnoxiously apolitical, thinking you’re above all this politics stuff and should focus on things that matter like eating food and enjoying yourself. Or something, I had a low Constitution score and didn’t make many choices that lined up with this, so I didn’t hear from it as much and didn't get as good of an understanding of its nuances.
-Your Dexterity wants you to support the free trade capitalist party. Improve society by increasing material wealth for everyone, but especially yourself. Also steal everything. Greed is good.
-Your Intelligence thinks that the Wizard Kings were right, things were better back when they were in charge. Instead of having constant political in-fighting, have one person in charge who can do things in a coherent and unified way, and use their arcane powers to protect and feed the masses. You literally have spells that can summon food out of thin air. Not only this, but YOU should be this wizard king. Despite being a newcomer to this city and a bit of a bumbling fool, and there not actually being a proper Arcanist party in the election, and you having no realistic chance of winning, you can (and I did) go around insisting that everyone ought to vote for you as a write in.
-Your Wisdom wants you to support the socialists. Empower the oppressed minorities, bring down the patriarchy. Almost typical leftism, blah blah blah, though refreshingly devoid of any stuff about sexual orientation or transgenderism. They actually make a stronger case for this than you would see in real life due to the focus on real oppression. The minorities are actually different races, and have actually been near-genocided by the humans in the recent past, not hundreds of years ago.
-Your Charisma doesn’t seem to take a proper stance on politics, but mostly wants you to fit in socially and flatter the ego of whoever you happen to be talking to at the moment. A social chameleon. It’s less concerned with having principles and trying to use them to build a better society, and more concerned about making sure you ally yourself with the winning side. Or every side.
I want to give props to the developer for trying to balance all of these and make all of them display both their good sides and their bad sides, not just having some be obviously good and some be obviously evil and horrible. I’m pretty sure they are left-leaning, as they seem to be overly generous to the socialists by making them actually oppressed and making their primary flaw be “unrealistic idealism that won’t be strong enough to enact the change they desire” rather than “likely to starve everyone” or “soft on crime” or other flaws that leftism has. And in this world the Wizard Kings don’t even have a party because they objectively failed in the past. But they’re clearly trying to keep things balanced, and I appreciate it.
Beyond all this, it captures my favorite feature of (the first half of) Disco Elysium, in that you are (or at least can choose to be) a bumbling idiot and it’s hilarious. I went all in on Dexterity, Intelligence, and Charisma, which ought to make me intelligent and socially competent, but it mostly just let me get away with my bumbling idiocy, which I played into on purpose. I walked into a secret socialist printing press and insulted everyone there before proceeding to help them with their quest, I pickpocketed most people (especially after getting a perk that gave me advantage on pickpocket rolls), and even had someone get mildly annoyed after they attempted to reward me with an item for helping them only to notice that I had already taken it from them. And I flirted with pretty much every female in the game, including a Sphinx and an Angel. I cannot overstate how good the writing is. Your character has all sorts of hangups about women (at least mine did, I’m not sure how much this is just because my Strength was a dump stat), and sufficient embarrassment causes damage. My favorite moment in the game was when I decided to flirt with an Orc lady and rolled really poorly (also it was a really difficult check because she had no reason to like me yet and I just started flirting for no reason). My character stammers out “D…D….DATE!” That’s it. He just yells the word date out of nowhere and all of my internal voices start freaking out about what a failure I am and I took 1d8 damage out of sheer embarrassment. Which rolled an 8, taking out half of my 16 health (since Constitution was another dump stat). Luckily as a Cleric, I can just cast a healing spell and move on, though at some cost to my limited reserves. You're also very incompetent during combat since you can't equip a real weapon, and most of my time was spent dodging with my high dexterity and desperately healing myself while my companions did the real fighting. 
Again, I'm not sure how much of this was me making Strength and Constitution dump stats, but I very much appreciated the consistency of the proud but bumbling fool.
And UNLIKE Disco Elysium, it doesn’t rug pull and punish you for this. I never finished Disco Elysium because halfway through the game there’s a big Event and because I was having fun goofing off and not taking it seriously, bad things happened and everything was awful and things turned from comedic to depressing real fast. Realistic? Sure. Fun? No. Meanwhile, I played Esoteric Ebb as an arrogant kleptomaniac wannabe wizard king (who’s still trying in their own sort of way) the whole way through and things turned out great. Maybe I just got lucky, or maybe the game just gave me more agency. There were some darker dialogue choices I could have made if I wanted to be evil, but unlike Disco Elysium it never punished me for being a goof.
I didn’t mean to write so much, this has turned into a whole game review at this point. But it’s fantastic. It’s barely an actual “game”, in that you mostly read text and wander around point-and-click mysterying. Even combat is reduced to unique scenarios in which things are attacking you and you have to click on predetermined options for how to handle that scenario. There’s a whole array of spells you can unlock and cast, most of which do nothing most of the time but sometimes offer up unique opportunities or just big heals to recover from failed checks. Also it’s not especially difficult, since there is a large array of healing items, multiple ways to solve most objectives (or just a lot of optional objectives), items you can use to re-attempt many checks, and if you really need to you can savescum failed rolls when re-attempts aren’t an option. I really like how most quests, or even actions within quests, are optional, but completing them unlocks perks and bonuses for future stuff. Supposedly there’s a time limit, but I finished at the very beginning of the 5th day instead of the end, and people on the internet say that you can actually keep doing stuff even if you go over time, and at the end it just rewinds the clock to the time when the game is supposed to end. So quite forgiving. But I found it to be an interesting and compelling narrative, and thought it was worth exemplifying as a game that is very much about politics, but fair and nuanced in its handling of it instead of beating you over the head with it. And more importantly, it’s just a good game. If you like dialogue-heavy RPGs I recommend playing it.
I can think of multiple hypotheses for why this would be the case, and I'm not entirely sure how to disentangle which are true and how much each contributes to this.
1: Center-right ideas are objectively true/better and therefore intelligent reflective people who listen to both sides and carefully consider them end up becoming center-right. Everyone immersed in the free expression of ideas eventually becomes center-right (or previously was already center-right) unless they are too stubborn to change their minds, so the only leftists remaining after enough time are mentally flawed in some way and that's why they leave. It's just selection effects: the people who want intelligent reasoned discussion are the same people who come to the correct conclusions on most topics. The only reason I'm suspicious of this idea is because it flatters my ego so much that I would probably believe it if it weren't true. Nevertheless I think this is at least part of the cause.
2: Leftist ideas are correlated with intolerance of wrongthink/ickiness for reasons orthogonal to correctness. For instance, certain people feel empathy for unfortunate-seeming people more innately, strongly, and viscerally than others. When they see a homeless stranger they feel the same way on the inside that you would if you saw your sibling in the same situation. When someone says "we should put mentally ill people in asylums" they feel the same anger that you would feel if someone literally broke into your parents' home and tried to take them away and lock them up. These people are more likely to be pathologically kind and fall for emotional rhetoric (thus becoming leftists) and more likely to become unhappy and disturbed in a place filled with wrongthink. Some of these people might be quite intelligent on an intellectual level and be capable of grasping complex ideas, and thus be initially drawn to this space, but engaging with certain topics hurts them psychologically. You take an otherwise intelligent person and put them in an emotionally charged and deeply unpleasant setting and they will leverage their intelligence to prioritize fixing the problem that is hurting them rather than seeking truth. I definitely think part of this is true (which is also why you see more leftist women, who are more emotionally driven on average), but am not sure what the magnitude is.
3: Everyone is more comfortable with group-think. It's pleasant to agree with people and have other people echo your ideas and write long essays dunking on stupid people, and say things that you already believed but more eloquently and with a couple of additional clever analogies that you hadn't thought of before. And for someone on the right there basically aren't any good places like that. There are a very small number of right-wing spaces, and most of them are filled with racists and misogynists with dumb ideas. I find it very annoying to have people saying the right thing for the wrong reason (I recently saw a Facebook post arguing that ICE was good and necessary because illegal immigrants commit 63% of murders in the U.S.). So this is where we can go. But for a leftist there are dozens and dozens of places filled with people who agree with them. Who wants to hang out in a place where only half the people agree with you when you can go somewhere where 90% do? It's more pleasant and fun and comforting. Or worse, if the Motte used to be 70-30 right-left then leftists here were outnumbered and getting argued against constantly. Now, someone committed to the ideals of logic and free discussion might overcome those odds and want to stay here anyway (and some do and I'm quite grateful for that), but it's an additional barrier to entry. Someone with 10/10 rational points is going to feel right at home here regardless of whether they're left or right. But the people with 6/10 rational points on the right are also going to want to stay here and have their egos flattered, while someone with 6/10 rational points on the left is going to be made uncomfortable and go try to find an intelligent-ish leftist space. Even with no underlying correlation between rationalism and left-right, the space leaning right could induce a correlation in the people who stay here.
It might be the case that all three of them are true and contributing. I'm fairly certain that 1 and 3 are both true. I'm not sure about 2 (there are a lot of emotional right-wing people who get really mad about political issues in a visceral way). Regardless, I'm not sure how important each one is to causing this, but the "solution", if any, is likely to differ heavily depending on that balance.
Hmmmmm. Tempting, but I'm going to sit this one out. I love Factorio, I am currently 1500 hours into my Pyanodon run (after beating several other overhaul mods). But I don't think I would enjoy multiplayer Factorio. I'm very chill and hands off about most things in life, but when I care about a thing I CARE about the thing. I want to be in charge. I play Factorio so that I can put each and every thing exactly where I want it and do things when and how I want to do them. I don't hyper-optimize things perfectly, I just kind of optimize things when I feel like they need it. I spaghetti my way around, but since I did everything I understand everything and I can go around and work with my own messes that I made earlier, and when something goes wrong it's my fault and there's no one to get mad at other than myself (or bug monsters, but those aren't in Pyanodons).
Factorio is my own little world where I can mess around and have control over everything. If I played with people my nastier, pettier side would come out: I would get annoyed at people, and then be annoying to people in turn over things I didn't think they were doing right, but then also feel insecure about things other people thought I wasn't doing right. I understand the appeal of playing it multiplayer, but it's not for me.
I will support you from a distance and wish you the best of luck.
Surprised no one is sourcing any data on this. Here's what I found with some googling:
https://carthalis.ca/articles/average-bench-press
| Level | Men (x body weight) | Women (x body weight) |
|---|---|---|
| Beginner (0-25th percentile) | 0.5 - 0.8 | 0.3 - 0.5 |
| Intermediate (25-75th percentile) | 0.8 - 1.2 | 0.5 - 0.8 |
| Advanced (75-90th percentile) | 1.2 - 1.5 | 0.8 - 1.0 |
| Elite (90-99th percentile) | 1.5+ | 1.0+ |
Taking midpoints of these ranges, it looks like a 95th percentile woman is roughly at the strength ratio of a 50th percentile man, but we probably expect the man to be larger and weigh more (since the woman will have to be fit rather than obese to be in this range), so she's probably closer to a 40th percentile man, or we need the 99th percentile woman to get up to the 50th percentile man.
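The midpoint arithmetic can be sketched out quickly (a toy Python check using the table's ranges; the band-to-percentile mapping is the rough one from the source, not an exact distribution):

```python
# Bench press ratios (multiples of body weight) from the table above.
# Each entry: band label -> ((men_low, men_high), (women_low, women_high))
bands = {
    "beginner (0-25th)":      ((0.5, 0.8), (0.3, 0.5)),
    "intermediate (25-75th)": ((0.8, 1.2), (0.5, 0.8)),
    "advanced (75-90th)":     ((1.2, 1.5), (0.8, 1.0)),
}

def midpoint(lo_hi):
    """Midpoint of a (low, high) range."""
    lo, hi = lo_hi
    return (lo + hi) / 2

for label, (men, women) in bands.items():
    print(f"{label}: men {midpoint(men):.2f}x, women {midpoint(women):.2f}x")

# A woman at the advanced/elite boundary (~1.0x body weight) sits near the
# men's intermediate-band midpoint (~1.0x), i.e. roughly the median man by
# ratio -- before adjusting for absolute body weight differences.
```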
So, Black Widow would be an even match against a completely random Joe off the street who maybe hits the gym a couple times a month. Add some martial arts training and she could probably win.
Put her against any sort of grunt in an evil organization whose job involves being and/or looking tough? Someone who hits the gym regularly because their job uses physical strength, or it's just part of their self esteem and they don't want to get mocked by their peers? Not a chance. Once he passes the 75th percentile he's going to be at levels she cannot realistically reach.
Any fantasy or superhero story that wants female fighters needs to use their magic or powers to give them super strength, even if only enough to reach the level of their male peers. (I recall reading a story that had a ritual a woman could do to literally gain the strength she would have had as a man, but it generally doesn't need to be quite this explicit.)
> Because more intelligence means more qualia.
Is that true? If that were true, how would you know? It seems vaguely plausible, but I'm fairly certain that consciousness and qualia are not well understood enough to conclude that. You only know about your own qualia from your own internal experiences, and can only attempt to extrapolate to other people based on similar brain chemistry. I'm not certain, but I'm fairly confident that computers do not have any qualia at all, and yet as AI gets smarter and smarter their IQ increases. Even when it surpasses that of humans, they still probably won't have qualia. It's still just a mathematical function that turns numbers into other numbers in a deterministic way.
Ants are more biologically successful than humans, composing much more of the Earth's biosphere. Do you support an ant takeover of the Earth involving human extinction?
I'm a human supremacist. I think that humans have moral value, and animals don't except in-so-far as they are useful or psychologically pleasing to humans. If there were an AI computer with IQ equal to one more than yours, would you support it taking over the world and replacing you? If the ants formed a hivemind that, when all of them combined their thought processes together, had an IQ equal to one more than yours, would you support them taking over the world?
> Extremely intelligent people should not be the slaves of 100 IQ people. Your economic model says that they are and that that is fair.
Who is being enslaved? My economic model says that people should do things which are mutually profitable. That both people gain from their mutual interactions. You should do good things for other people and then they should reward you for it in equal measure! That's why people who are more productive should be paid more. YOUR model says that low IQ people should be enslaved to high IQ people, and do things that high IQ people want without getting compensated for it. My model does not involve slavery, because anyone can do anything they want, but they get rewarded for the value they actually create. We can chop down trees because they are trees, not people. If you could earn just as much money from chopping down a 60 IQ human being and harvesting their organs this would be horrible and evil, even though they're less intelligent than us, because they are people and would experience real suffering that we should care about. They are less economically useful than a 100 IQ human, and therefore will be less productive and will earn less money on their own. So they will end up with less money, automatically, without anyone needing to go in there in some authoritarian fashion and decide for them how much we think they're worth. Reality tells people how much they could be worth if they tried, and then they themselves determine how much they're actually worth by their own decisions and efforts.
> Nietzsche said the Übermensch will see the median person of his day like an ape. This implies the median Übermensch will have an IQ a little north of 145; a society run for and by such people will not give equal economic rights to most people alive today. We can create such a society by increasing the correlation between wealth and intelligence, and through it the correlation between fertility and intelligence.
Why? That sounds like a miserable society for most people to be in. I think you're just using a maximally selfish definition of the word "good", where you imagine a society which maximizes your own personal hedonic value, or some sort of aesthetic preference for order or unity/conformity/dystopia. Usually the word we use for this is "selfish", not "good". The classic evil dictator wants to enslave everyone below them and maximize good for themselves and themselves alone. This is not what people mean when they talk about "good" in the moral sense.
I acknowledge that there is quite a bit of randomness in the realized outcomes of money, but note that the expected value is still pretty close to average production. Lotteries in the economy are not the zero sum scenarios set up by a greedy casino in order to give you negative expected value and enrich themselves in the process. They are a complicated mess of scenarios with different expectations. Some of them are good, some of them are bad. Someone who chooses good lotteries, invests in promising companies, dissolves unprofitable money sink companies, will on average end up winning more often than someone who chooses bad lotteries. Therefore the actual people with lots of money will tend to be people who had both luck AND intelligent choices that benefited the economy. Once again, this leads to a "money held" to "value provided to others" correlation solidly between 0 and 1.
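As a toy illustration of this point (purely hypothetical numbers: "good" lotteries modeled as noisy gambles with positive expected value, "bad" ones with negative expected value), outcomes stay very random per round, but over many rounds wealth still tracks choice quality:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def run(ev, rounds=1000, noise=10.0):
    """Accumulate wealth from many noisy gambles centered on a given EV.

    Each round's payoff is random (Gaussian with a large spread), so any
    single round can go badly even for a good picker.
    """
    wealth = 0.0
    for _ in range(rounds):
        wealth += random.gauss(ev, noise)
    return wealth

good_picker = run(ev=+1.0)   # habitually chooses positive-EV gambles
bad_picker = run(ev=-1.0)    # habitually chooses negative-EV gambles

print(f"good picker: {good_picker:+.0f}, bad picker: {bad_picker:+.0f}")
```

With enough rounds the good picker almost always ends up well ahead despite the per-round noise, which is the "luck AND intelligent choices" correlation in miniature.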
> But this is more assumed egalitarianism.
Where did that come from? "Money earned is money deserved" is very unegalitarian, unless you're a blank-slatist who believes that equality of opportunity and equality of outcome are the same thing, which you clearly aren't. When I use the word fair here, I mostly mean something pretty similar to what you mean by just. This is a pointless strawman nitpick on language use, and doesn't matter.
> What has never been tried is progressive redistribution of wealth. We currently have a moderate amount of regressive redistribution (based on progressive taxation). You know how IQ and income correlate at .4? What if we cranked that up to .9? Society would hate it, and it wouldn't be fair, but I think it would be just.
This is absurd. Why would we favor intelligence like this? Intelligence only matters in-so-far as it allows you to make better choices and thus accomplish more stuff more efficiently. To that effect, making a meritocratic system that rewards economic output measures the actual thing that we care about. And it's easier and more direct than some Goodhart measure of IQ. Your idea makes about as much sense as observing that cows eating more grass are correlated with producing more milk, and then selectively breeding cows to maximize the amount of grass they eat while ignoring milk production. Why would you optimize a proxy for the thing you care about, when the proxy is actually harder to measure and select for, and only vaguely correlated with it? Someone of average intelligence who works really hard is more valuable to society than someone of great intelligence who doesn't do anything with it. The latter might as well not even exist. And it's not like high IQ people are necessarily good people who do good things; there are plenty of intelligent sociopaths who leverage their great intelligence to commit more evil. If an intelligent person is 3x as powerful as an average person, able to accomplish 3x as much at whatever they try to do, then an evil intelligent person sabotaging society is 3x as horrible as an evil average person. You might as well award the Medal of Honor to powerful enemy soldiers who were especially horrible and slaughtered your own side's soldiers. Good people do good things for other people. We want to incentivize more good things, so we should reward people proportional to how much good they do.
I think you detached them in the opposite way here. In the original problem both the conditional probability and the optimal betting odds are 0.6667. In /u/4bpp's version (and the version I attempted to describe) the conditional probability is still 0.6667 but the optimal betting odds go to 0.5. In your version the conditional probability is 0.5 and the optimal betting odds are 0.6667. You are correct that this is an easier way to describe how betting odds and conditional probabilities can detach.
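For concreteness, the generic detachment mechanism can be simulated with a Sleeping-Beauty-style toy (my assumption about the structure being discussed, not the exact setup from the thread): if tails means the same bet gets placed and settled twice, the per-flip probability of tails stays 0.5 while the fraction of bets won by "tails" goes to 2/3.

```python
import random

random.seed(1)

# Toy sketch: fair coin; heads -> one bet is settled, tails -> the same
# bet is settled twice. The coin's probability and the bet-weighted
# frequency then come apart.
flips = 100_000
tails_flips = 0
bets = 0
tails_bets = 0
for _ in range(flips):
    tails = random.random() < 0.5
    n_bets = 2 if tails else 1   # tails-bets get counted twice
    tails_flips += tails
    bets += n_bets
    tails_bets += n_bets if tails else 0

print(f"P(tails) per flip: {tails_flips / flips:.3f}")          # ~0.5
print(f"fraction of bets won by tails: {tails_bets / bets:.3f}")  # ~0.667
```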