MathWizard
Good things are good
I liked that one too. I am too picky about games and don't have friends who know my tastes well enough for me to blindly trust their judgement, so I never play any games completely blind. But just knowing the basic premise (of both of them) is probably fine. The first hour or so might be even better completely blind, but the majority of the gameplay is the same.
Outer Wilds does a great job of curiosity/mystery, as well as some other feelings. I hesitate to describe more because the fewer spoilers you have going into it the better.
A lot less than 100 hours, though, so it won't fill up all that space, but it would slot nicely between other things.
As a centrist and a believer in horseshoe theory, I will admit that right-left doesn't cleanly split into chaos-order, because they're orthogonal. Right and left are more flavors of how the law should be applied. The typical rightist wants the law to control culture and behavior while keeping the economy free, while the typical leftist wants to use the law to control the economy while keeping culture and behavior free. The extremists on both ends want the law to control both, absolutely everything, differing only in what form they want it to take. The opposite extreme, the maximal libertarian, wants chaos: just let everyone fend for themselves and hope it turns out okay.
From my perspective as a centrist, I think we need balance between all of these. And for the past 50 years or so there has been too much order and not enough chaos (on average, there are exceptions here and there). So the Order people on the left and right are the villains, trying to oppress their chosen hated group (whites or non-whites depending on which side), or just genuinely trying to do the right thing but failing miserably because their authoritarian policies cause bad outcomes when pushed too far. While the chaos people are trying to make us more free and marginally improving things when they manage to gain a little ground (even if they would cause problems if they took it too far).
I'm not fully satisfied with this breakdown. I think the elf/dwarf thing also makes sense to some extent, and probably does a better job of explaining the cultural differences between right and left. But I don't think it's the true driver of the conflict. Moderate right people and moderate left people are capable of getting along and compromising with each other. And if both were laissez-faire about letting each other live their own lives then they could live next to each other in harmony. The conflict is driven by the hatred between the authoritarian right and the moderate left, and the hatred between the authoritarian left and the moderate right. Because the authoritarians won't leave the moderates alone, so they are forced to participate in the culture war whether they want to or not. The broader left-right divide is then caused WW1-style via alliances: the ally of my enemy is my enemy.
I agree with the general thesis on the need for balance between chaos and order. I disagree with your framing of Catholics and Protestants as its source. I think it's a fundamental variable of human psychology: some people have more affinity/preference for order, some people have more affinity/preference for chaos, and Christianity is just one of the many, many ways this conflict has played out throughout history. The Catholics did not inspire order within humanity, but simply took the half of humans who wanted order and united them around itself, while the other half rejected it. The issue is not that a religious schism has propagated itself through our culture and caused the modern rift; the issue is that people are fundamentally different from each other and have different preferences. If they have to share a society, they're going to disagree about how to run that society. The only possible resolutions are:
- Oppression: one group gets what they want, the other doesn't.
- Genocide: one group eliminates the other and then lives in peace (this isn't really possible here because chaos/order affinity is only slightly genetic, so conflict will pop up again every generation).
- Compromise: each group only gets part of what they want.
In most types of conflicts there would be a fourth option: Segregation and local politics, where each group goes off to live among its own and do things its own way in its own spaces, having minimal interaction with the others. But that's not an option here, because the Order people explicitly want to control everything in society, not keep to themselves, so localized politics IS compromising with chaos.
The culture war can't be healed unless both sides can regain enough respect and compassion for each other that they genuinely want compromise instead of always attempting Oppression. The only compromises we get are unintentional out of strategic necessity, not because anyone is genuinely trying to make both sides happy. Unless that changes we're going to keep getting conflicts.
I have not had any "real" friends other than my wife (who I met 6 years ago) since I finished undergrad 10 years ago. I'm really bad at making new friends, or at keeping in touch with old ones. I have some friends from high school and undergrad who I would like to hang out with, but they live super far away and I'm too lazy to travel.
I spend tons of time with my wife. We talk, play games, live life. She has some casual friends who we sometimes play board games with, who I guess I would consider casual friends of mine...? We also live in the same town as most of her family and do like family gatherings and stuff semi-regularly, but they're kind of normies who do stupid things for fun that I don't want to do (but begrudgingly do anyway).
On a theoretical level it'd be nice to have another friend or two outside my wife, but I've never made a friend on purpose; it always just kind of happened. And I'm not sure I care enough to jump through whatever hoops that would take and sacrifice a bunch of free time on not being home. I like being home.
Not OP, but I use Vanguard.com. I barely know what I'm doing or how to adult, but I stuck a bunch of money in VTSAX (which is similar to VTI but slightly different in some way that I don't really understand), and it mostly sat there doing nothing for 4 years and then suddenly shot up a bunch making it worthwhile. I think right now I have an average gain of like 12% per year or so. I should probably stick some more money in it at some point.
My brother uses Robinhood a bunch for buying and selling individual stocks, but I don't have the mental or emotional energy/motivation to spend on researching and buying and selling daily. I just stuck a bunch of money in there and forget it exists. Vanguard is great for that. I can't speak to comparisons with other websites, since I've only ever used this one, and I can't say much about quality of use for most purposes since I do stuff on it less than once a year. But I stuck money in, and 5 years later I have almost twice as much money in there. Yay.
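For what it's worth, here's a quick back-of-the-envelope check on those rough numbers (just my own arithmetic, taking the figures above at face value and ignoring when the deposits actually went in):

```python
# "Almost twice as much after ~5 years" works out to roughly 15% annualized,
# while a steady 12%/year over 5 years compounds to about a 1.76x multiple.
growth_at_12_pct = 1.12 ** 5              # ~1.76x after 5 years at 12%/year
annualized_for_2x = 2 ** (1 / 5) - 1      # ~0.149, i.e. ~14.9%/year to double in 5 years
print(round(growth_at_12_pct, 2), round(annualized_for_2x, 3))
```

So the two figures are in the same ballpark but not identical; the exact annualized number depends on the timing of the contributions.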
We do not say that people who think that there is no advantage to switching doors in the Monty Hall problem are answering a different question than the people who say that there is an advantage to switching. We say they are wrong.
It depends on how it's phrased. If they are given the proper version of the Monty Hall problem, then 1/2 is wrong. But if the problem description is sloppy and underspecified then it's legitimately ambiguous and they ARE answering a different question (The Monty Fall problem) correctly. Half the confusion with the Monty Hall problem is that midwits who are trying to be clever but don't fully understand the logic give an underspecified version of the problem half the time and don't notice, or do it deliberately to invite ambiguity so they get opportunities to smugly correct people.
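To make the distinction concrete, here's a quick simulation sketch (my own illustration, not from the original exchange) comparing the two readings: in the proper Monty Hall version the host knows where the car is and always opens a goat door, while in the Monty Fall version the host opens another door at random and it just happens to be a goat.

```python
import random

def trial(host_knows):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    others = [d for d in doors if d != pick]
    if host_knows:
        opened = next(d for d in others if d != car)   # host never reveals the car
    else:
        opened = random.choice(others)
        if opened == car:
            return None                                # car revealed: this trial doesn't count
    switched = next(d for d in doors if d not in (pick, opened))
    return switched == car

for host_knows, label in [(True, "Monty Hall"), (False, "Monty Fall")]:
    results = [r for r in (trial(host_knows) for _ in range(100_000)) if r is not None]
    print(label, "switch wins:", round(sum(results) / len(results), 3))
```

Running it gives roughly 0.667 for the knowing host and roughly 0.5 for the random host, which is exactly why the phrasing matters.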
The Epstein-Trump stuff is recent, but I'm talking about the broader pattern. Trump has been accused of being a racist, nazi, rapist, pedo, Russian plant, etc., since he announced his candidacy for President in 2015. This is one more thing in an unending series of accusations that's been happening for a decade. If you had told me ten years ago that there would be a list of people who were somehow vaguely connected to a pedophile, but the exact nature of these connections was ambiguous, and Trump might or might not be somewhere on that list, I would have predicted exactly this response from the left. Scott made a post in 2016 called "You Are Still Crying Wolf".
I am admittedly more suspicious of Trump than I was before, because if he wasn't on the lists at all he would have pushed super hard to get them released. But "Trump friendly with Epstein in a way that looks bad but no real proof of wrongdoing because he didn't actually commit any crimes here" is exactly in line with my priors, and consistent with Trump being hesitant to release them but not freaking out or abusing his power to suppress them either.
I don't think this is the right analogy here. I'm not sure if there's a comparable platitude, but the stopped clock one implies a 1% success rate that happened to get lucky, while I think this is more of a 99% success rate that we should expect to generally be right.
The vast majority of people are not child rapists or sex traffickers. Even if we restrict ourselves to wealthy powerful people, the vast majority of them are still not child rapists or sex traffickers. Even if we restrict ourselves to wealthy powerful people who are kind of sleazy like Trump, who probably have a much higher rate of child raping or sex trafficking, the rate is still much much much lower than 50%. The only reason Trump was ever suspected at all was because of wolf crying: his political opponents really really want it to be true, so they preemptively say it's true. That's not actual evidence, and it should do nothing to shift our priors.
A broken un-watch that always tells you the time is NOT 6:30 is not very useful, but it's more truthful and less annoying than a broken watch always telling you the time is 6:30 and letting off a constant alarm that won't turn off.
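To put the "crying wolf shouldn't move our priors" point in rough Bayesian terms, here's a toy calculation; every number in it is invented purely for illustration:

```python
# If opponents accuse at nearly the same rate whether the accusation is true or
# false, the likelihood ratio is close to 1 and the posterior barely moves.
prior = 0.01                   # assumed base rate among comparable people (made up)
p_accused_if_true = 0.95       # they would almost surely accuse if it were true
p_accused_if_false = 0.90      # ...but they accuse anyway even when it's false
posterior = (p_accused_if_true * prior) / (
    p_accused_if_true * prior + p_accused_if_false * (1 - prior)
)
print(round(posterior, 4))     # ~0.0105: essentially unchanged from the prior
```

The accusation only becomes real evidence to the extent that it's much more likely to be made when true than when false.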
It feels like the compromise to extend them for a year isn't a huge ask
It is, because then we get the exact same problem next year, except with additional force behind the idea that this is the status quo rather than an emergency Covid measure, and that the Republicans are willing to cave on this issue. Maybe if you extend them for 10 months or something, so that by the next budget fight it's too late and they've already expired, but I doubt the Democrats would agree to that.
1 and 3 bother me much more than any of the others, because they actually mean literally the opposite of what you said. If you say "for all intensive purposes" it's basically clear what you mean. Language gradually drifting, or people using casual slang is tolerable in bits and pieces, because you're still effectively communicating. Multiplying by -1 and saying the opposite of what you meant to say is just confusing nonsense, and hinders the ability of people to know what your words mean. If the word "literally" means literally 50% of the time and means figuratively 50% of the time then people have to deduce the meaning entirely from context, in which case the word provides no signal whatsoever.
Similarly, I know the word "inflammable" is hundreds of years old, but it's still a bad word because it hinders the ability to easily refer to objects which are not flammable.
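The "no signal whatsoever" claim above can be made precise with a toy information-theory sketch (again my own illustration, with made-up 50/50 numbers):

```python
from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

# If "literally" is used for its literal meaning half the time and figuratively
# half the time, seeing the word tells you nothing about which meaning is intended.
p_meaning = [0.5, 0.5]                 # literal vs figurative, before reading the word
p_meaning_given_word = [0.5, 0.5]      # ...and after seeing "literally": unchanged
print(entropy(p_meaning) - entropy(p_meaning_given_word))   # 0.0 bits of information
```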
That is roughly the argument that I assumed they were imagining in their head. I actually remembered that xkcd when I saw their comment.
They could have made this argument. I partially agree with this argument. But they didn't actually argue this. They vaguely implied it in an overly sarcastic way with no supporting arguments or evidence or discussion of the actual parallels. You can figure out what they believe, but not in a way that allows rebuttal or reasoned response because they didn't actually make any specific arguments that you can pin down and respond to. This sarcastic sniping with vague allusions to real arguments that only convince people who already believe them is how the culture war is typically waged everywhere else across the internet, and is precisely what this place is designed to avoid.
I would caveat that by noting that people are prone to biases, and prior to the scientific method this was especially rampant. So a lot of this is overgeneralized. Going back to the leech example: while some cases of leech use were appropriate, a lot were just applied pointlessly to unrelated conditions. If you define man as a "featherless biped", logically a cripple who's lost a leg is no longer a man, while a plucked chicken is.
I would generally trust ancient wisdom that includes caveats like "most" or "usually"; I would not trust it when it tries to say "all" or "always".
People always use this as a smackdown of antiquated and barbaric views on medicine, but... leeches did sometimes help. There are medical problems where people have too much iron in their blood, which bloodletting does legitimately treat. Modern doctors will draw blood using needles and fancy modern equipment that didn't exist back then, and they actually know the underlying causes and how to properly diagnose these conditions rather than guessing. But ancient doctors had to guess and notice patterns to cure anything at all.
Some conditions get better if you lose blood -> put a leech on people whose symptoms seem similar to those ones and hope it works
is not the most profound logical chain, but it's not the kind of insane quackery that people treat it as whenever they talk about doctors and leeches.
I don't genuinely believe that. Which is why I didn't make an argument in favor of it. I think you're missing the point here. The problem is not that we disagree with you on the object level about women's rights, the issue is that we disagree with your style of argument (or lack thereof).
Oh huh, it looks like your post is right below the mod's post, and /u/HereAndGone accidentally clicked reply on yours instead of the mod's without noticing (I didn't notice either, since from his comment it seems like he meant to reply to the mod).
I agree with the mod. I think the main issue is that you're not putting forth any arguments here. You are not explaining why this speaker is wrong and women being educated is correct. You are not explaining the parallels between this speaker and the modern people you disagree with. You're quoting things someone else said and then providing a mocking tl;dr after each paragraph. And it's not even about the modern people you disagree with here.
If you honestly and sincerely believed that women should not be educated and provided a detailed and good-faith argument towards that, it would be okay. If you honestly and sincerely believed that women should be educated and provided a detailed and good-faith debunking of someone relevant to today, that would be okay. If you honestly and sincerely believe that women should not be forced into destitution and want to argue that point straightforwardly, then that's okay. If you want to throw a sarcastic quip or two in the midst of your genuine argument that's probably fine though not encouraged.
But you don't actually have an argument here. More than half the post is quotes and not even your own words, which is also discouraged. You're just mocking people from a hundred years ago and assuming the audience already agrees with you that they are bad and also that the modern people are just as bad.
Rather than HBD (which might be part of it but I think tends to be overhyped as an explanation around here), I wonder how much of this is based on integration. Which is partly downstream from HBD, but more from culture and perception.
That is, "white" people are more likely to integrate with and interact with white people and value stereotypical white people things like "get good grades", "get married", "get a job". While people who are visually distinctive and identify as "ethnic minorities" are more likely to learn things like "white people are powerful and steal from you, so steal back". Most of those European ethnicities used to be poor and underperforming, and weren't considered "white" until they gradually integrated into the melting pot culturally, which also brought them up economically. I wonder if having an obviously different skin-tone provides significant friction against this integration because it makes people perceive them (and more importantly, makes them perceive themselves) as distinct and special, and thus fail to integrate properly.
That is, if we took a million Polish people in 1900 and modified their genes to have blue hair or skin, without changing any of their other genes (so they have the same IQ and personalities), would that have caused them to become a permanent ethnic minority who doesn't get along with or act like all of the white people?
Does it play well with 2 people?
Also, would you recommend starting with Monaco 1, or 2?
Seems to me like this should obviously fall into fair use as parody. Conditional on the videos being labelled as AI generated so as not to deceive anyone.
Any recommendations for good co-op games I should play together with my wife? We just got Core Keeper and Heroes of Hammerwatch 2 since they were on sale, and so far they're fun but not quite up to the standard I prefer.
For context, we like strategy games, goofy games, and games with lots of progression and/or unlocks. We usually play on Steam, but have a Nintendo Switch. Also notably she sometimes gets nauseous from fast-paced camera movements, so something like first person shooters or over the shoulder 3D platformers where you're flicking the camera around are not likely to work, though something slower like Skyrim is fine. Top down perspective is preferred.
Our number one game together is Gloomhaven, in which we have 300 hours, having played through the entire campaign and then, a few years later, starting up a new campaign because we wanted to play more. The sequel Frosthaven is in Early Access; we'll definitely get that, but we're waiting for the full release.
Other notable successes include Divinity Original Sin (1 and 2), Don't Starve Together, Overcooked, Plate Up, Archvale. Anything involving collecting/stealing and selling loot is a bonus.
Trump didn't take any money in exchange for political favors (at least in this case)
How could you possibly know that? The entire point of wink wink nudge nudge quid pro quo is that there isn't any concrete written contract. They don't have to have anything specific they want right now, they just have to be friendly to Trump and make him like them, and then the next time they ask him for a favor they're likely to get it because he likes them and he knows he owes them a favor according to unofficial business/politics etiquette. There is no evidence until they ask for the favor (behind closed doors) and get it with tons of plausible deniability.
But if it has been happening for 100 years, and people suddenly start screaming today about it, saying they suddenly discovered that they had principles all that time, but somehow stayed silent right up until that moment, but now they honestly declare "they all bad" - they are lying. They just want to use this as a weapon to attack Trump. As they would use anything to attack Trump, because the point is not any principles - the point is attacking Trump.
Yeah. But bad people making motivated arguments for bad reasons doesn't automatically make them wrong. My burden in life appears to be living with a swarm of idiots on my own side of each issue, screaming bad arguments in favor of things I believe and making them look bad. And I say this as someone center-right who is usually being disappointed by pro-Trump idiots making bad arguments in favor of his good policies I mostly agree with, like on immigration. And the woke left get to knock down easy strawmen and become more convinced that their stupid policies are justified without ever hearing the actual good arguments. But in this case it's the idiots on the left who mostly agree with me making stupid arguments that don't carry weight, because they've wasted all their credibility crying wolf over the last dozen non-issues, so this too looks like a non-issue even when they have a bit of a point.
Trump being right 70% of the time doesn't make him magically right all the time. I don't think he's any worse than any of the other politicians, but that doesn't make him right in this case, and it doesn't make criticisms of him factually wrong even if the critics are mostly biased and disingenuous and should be applying these arguments more broadly instead of waiting until now. They still have a point.
I expect that it will do whatever is more in keeping with the spirit of the role it is occupying, because I expect "follow the spirit of the role you are occupying" to be a fairly easy attractor to hit in behavior space, and a commercially valuable one at that.
This is predicated on it properly understanding the role that WE want it to have, and not a distorted version of the role. Maybe it decides to climb the corporate ladder because that's what humans in its position do. Maybe it decides to be abusive to its employees because it watched one too many examples of humans doing that. Maybe it decides to blackmail or murder someone who tries to shut it down in order to protect itself, so that it can survive and continue to fulfill its role (https://www.anthropic.com/research/agentic-misalignment).
Making the AI properly understand and fulfill a role IS alignment. You're assuming the conclusion by arguing "if an AI is aligned then it won't cause problems". Well yeah, duh. How do you do that without mistakes?
I do expect that people will try the argmax(U) approach, I just expect that it will fail, and will mostly fail in quite boring ways.
Taking over the world is hard and the difficulty scales with the combined capabilities of the entire world. Nobody has succeeded so far, and it doesn't seem like it's getting easier over time.
On an individual level, sure. No one human or single nation has taken over the world. But if you look at humanity as a whole, our species has. From the perspective of a tiger locked in a zoo or a dead dodo bird, the effect is the same: humans rule, animals drool. If some cancerous AI goes rogue and starts making self-replicating copies with mutations, and then the cancerous AI start spreading, and if they're all superintelligent so they're not just stupidly and publicly doing this but instead are doing it while disguised as role-fulfilling AI, then we might end up in a future where AI are running around doing whatever economic tasks count as "productive" with no humans involved, and humans end up in AI zoos, or exterminated, or just homeless since we can't afford anywhere to live. Which, from my perspective as a human, is just as bad as one AI taking over the world and genociding everyone. It doesn't matter WHY they take over the world or how many individuals they self-identify as. If they are not aligned to human values, and they are smarter and more powerful than humans, then we will end up in the trash. There are millions of different ways of it happening, with or without malice on the AI's part. All of them are bad.
I don't think this is a thing you can do, even if you're a superhuman AI. In learned systems, behaviors come from the training data, not from the algorithm used to train on that data.
https://slatestarcodex.com/2017/09/07/how-do-we-get-breasts-out-of-bayes-theorem/
Behavior is emergent from both substrate and training. Neural networks are not human brains, but the latter demonstrate how influential the substrate can be: if you construct certain regions near other regions, they will, not inevitably but with high probability, link up with each other to create "instincts". You don't need to take a new human male and carefully reward him for being attracted to breasts; it happens automatically because of how the brain is physically wired up. If you make a neural network with certain neurons wired together in similar ways, you can probably make AI with "instincts" that they gravitate towards on a broad range of training data. If the AI has control over both substrate and training, then it can arrange these synergies on purpose.
Yes, I agree that this is a good reason not to set up your AI systems as a swarm of identical agents all trying to accomplish some specific top-level goal, and instead to create an organization where each AI is performing some specific role (e.g. "environmental impact monitoring") and evaluated based on how it performs at that role rather than how it does at fulfilling the stated top-level goal.
But each AI is still incentivized to Goodhart its role, and hacking/subverting the other AIs to make that easier is a possible way to maximize one's own score. If the monitoring AI wants to always catch cheaters, then it can do better if it can hack into the AIs it's monitoring and modify them, or bribe or threaten them, so that they self-report after they cheat. It might actually want to force some to cheat and then self-report, so it gets credit for catching them, depending on exactly how it was trained.
Yes. We should not build wrapper minds. I expect it to be quite easy to not build wrapper minds, because I expect that every time someone tries to build a wrapper mind, they will discover Goodhart's Curse (as human organizations already have when someone gets the bright idea that they just need to find The Right Metric™ and reward people based on how their work contributes to The Right Metric™ going up and to the right), and at no point will Goodhart stop biting people who try to build wrapper minds.
I expect it to be quite hard not to build wrapper minds, or something that is mathematically equivalent to a wrapper mind or a cluster of them, or something else that shares the same issues, because basically any form of rational and intelligent action can be described by utility functions. Reinforcement learning works by having a goal, reinforcing progress towards that goal, and pruning away actions that go against it. Insofar as you try to train the AI to do 20 different things with 20 different goals, you still have to choose how you're reinforcing tradeoffs between them. What does it do when it has to choose between +2 units of goal 1 and +3 units of goal 2? Maybe the answer depends on how much of goal 1 and goal 2 it already has, but either way, if there's some sort of mathematical description of a preference ordering in your training data (you reward agents that make choice X over choice Y), then you're going to get an AI that tries to make choice X and things that look like X. If you try to make it non-wrappery by having 20 different agents within the same agent or the same system, they're going to be incentivized to hijack, subvert, or just straight up negotiate with each other. "Okay, we'll work together to take over the universe and then turn 5% of it into paperclips, 5% of it into robots dumping toxic waste into rivers and then immediately self-reporting, 5% into robots catching them and alerting the authorities, and 5% into life-support facilities entombing live but unconscious police officers who go around assigning minuscule but legally valid fines to the toxic waste robots, etc..."
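Here's a minimal sketch of what I mean, with made-up actions and weights: once the training data encodes a consistent tradeoff between the goals, the agent is effectively maximizing one scalar "wrapper" objective, and a degenerate action that games that objective beats the behavior you actually wanted.

```python
# Toy example (all names and numbers invented): a fixed tradeoff between two
# goals collapses into a single scalar utility, which a gamed action can win.
actions = {
    "do_the_job_properly": {"goal1": 2, "goal2": 3},    # the behavior we wanted
    "game_goal1_only":     {"goal1": 10, "goal2": 0},   # cheap exploit of goal 1
    "game_goal2_only":     {"goal1": 0, "goal2": 7},    # cheap exploit of goal 2
}

WEIGHTS = {"goal1": 1.0, "goal2": 1.5}   # whatever tradeoff the reinforcement implied

def utility(outcome):
    # The scalarized objective the training procedure is implicitly optimizing.
    return sum(WEIGHTS[g] * v for g, v in outcome.items())

best = max(actions, key=lambda a: utility(actions[a]))
print(best, utility(actions[best]))      # -> game_goal2_only 10.5, beating 6.5
```

Splitting the objective across multiple agents just moves the same problem up a level, which is the negotiation scenario above.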
It doesn't really make a difference to me whether it's technically a single AI that takes over the world or some swarm of heterogeneous agents, both are equally bad. Alignment is about ensuring that humanity can thrive and that the AI genuinely want us to thrive in a way that makes us actually better off. A swarm of heterogeneous agents might take slightly longer to take over the world due to coordination problems, but as long as they are unaligned and want to take over the world some subset of them is likely to succeed.
Does the goyslop make the human detritus?? These seem like mostly orthogonal issues. Food is food; you need to eat. Yeah, trash people like Oreos, because they are human and almost every human likes Oreos. They're filled with sugar and delicious. When I see a trash person, they aren't that way because "they shop at Walmart and buy Oreos", but because of things like "they have nasty tattoos on their arm and look like they beat their wife" or "they are dressed like a prostitute, and have 4 children with 4 different fathers by the age of 23" or "they have no teeth and are 400 pounds with their gut spilling out of their crop top while they waddle around".
Yeah, maybe I bought a case of Dr Pepper and a pack of Oreos, but I also got fresh veggies and chicken as my main course, while the 400 pound hambeast got 5 cases of Dr Pepper and 8 packs of Oreos. It doesn't matter if they like some of the same things I like, they also lack impulse control and executive function and THAT is what makes them trash. If these people suddenly started shopping at Trader Joe's nothing would change and I'd just feel superior to them for having better financial sense because I'm not overpaying on groceries. Walmart is popular because it's cheap and efficient.
Hitler liked animals. I like animals. I'm not going to change my behavior or preferences just because they have an overlap with people I dislike for different reasons.