MathWizard
Good things are good
No bio...
User ID: 164
That is roughly the argument that I assumed they were imagining in their head. I actually remembered that xkcd when I saw their comment.
They could have made this argument. I partially agree with this argument. But they didn't actually argue this. They vaguely implied it in an overly sarcastic way with no supporting arguments or evidence or discussion of the actual parallels. You can figure out what they believe, but not in a way that allows rebuttal or reasoned response because they didn't actually make any specific arguments that you can pin down and respond to. This sarcastic sniping with vague allusions to real arguments that only convince people who already believe them is how the culture war is typically waged everywhere else across the internet, and is precisely what this place is designed to avoid.
I would caveat that by noting that people are prone to biases, and prior to the scientific method this was especially rampant. So a lot of this is overgeneralized. Going back to the leech example: while some cases of leech use were appropriate, a lot were just applied pointlessly to unrelated conditions. If you define man as a "featherless biped", logically a cripple who's lost a leg is no longer a man, while a plucked chicken is.
I would generally trust ancient wisdom that includes caveats like "most" or "usually"; I would not trust it when it claims "all" or "always".
People always use this as a smackdown of antiquated and barbaric views on medicine, but... leeches did sometimes help. There are medical conditions (hemochromatosis, for example) in which people have too much iron in their blood, and bloodletting does legitimately treat them. Modern doctors will draw blood using needles and fancy modern equipment that didn't exist back then, and they actually know the underlying causes and how to properly diagnose these conditions rather than guessing. But ancient doctors had to guess and notice patterns to cure anything at all.
Some conditions get better if you lose blood -> put a leech on people whose symptoms seem similar to those ones and hope it works
is not the most profound logical chain, but it's not the kind of insane quackery that people treat it as whenever they talk about doctors and leeches.
I don't genuinely believe that. Which is why I didn't make an argument in favor of it. I think you're missing the point here. The problem is not that we disagree with you on the object level about women's rights, the issue is that we disagree with your style of argument (or lack thereof).
Oh huh, it looks like your post is right below the mod's post, and /u/HereAndGone accidentally clicked reply to yours instead of the mod's without noticing (I didn't notice either; judging by his comment, he meant to reply to the mod).
I agree with the mod. I think the main issue is that you're not putting forth any arguments here. You are not explaining why this speaker is wrong and women being educated is correct. You are not explaining the parallels between this speaker and the modern people you disagree with. You're quoting things someone else said and then providing a mocking tl;dr after each paragraph. And it's not even about the modern people you disagree with here.
If you honestly and sincerely believed that women should not be educated and provided a detailed and good-faith argument towards that, it would be okay. If you honestly and sincerely believed that women should be educated and provided a detailed and good-faith debunking of someone relevant to today, that would be okay. If you honestly and sincerely believe that women should not be forced into destitution and want to argue that point straightforwardly, then that's okay. If you want to throw a sarcastic quip or two in the midst of your genuine argument that's probably fine though not encouraged.
But you don't actually have an argument here. More than half the post is quotes and not even your own words, which is also discouraged. You're just mocking people from a hundred years ago and assuming the audience already agrees with you that they are bad and also that the modern people are just as bad.
Rather than HBD (which might be part of it, but I think it tends to be overhyped as an explanation around here), I wonder how much of this is based on integration, which is partly downstream of HBD, but more a matter of culture and perception.
That is, "white" people are more likely to integrate with and interact with white people and value stereotypical white people things like "get good grades", "get married", "get a job". While people who are visually distinctive and identify as "ethnic minorities" are more likely to learn things like "white people are powerful and steal from you, so steal back". Most of those European ethnicities used to be poor and underperforming, and weren't considered "white" until they gradually integrated into the melting pot culturally, which also brought them up economically. I wonder if having an obviously different skin-tone provides significant friction against this integration because it makes people perceive them (and more importantly, makes them perceive themselves) as distinct and special, and thus fail to integrate properly.
That is, if we took a million Polish people in 1900 and modified their genes to have blue hair or skin, without changing any of their other genes (so they have the same IQ and personalities), would that have caused them to become a permanent ethnic minority who doesn't get along with or act like all of the white people?
Does it play well with 2 people?
Also, would you recommend starting with Monaco 1, or 2?
Seems to me like this should obviously fall into fair use as parody. Conditional on the videos being labelled as AI generated so as not to deceive anyone.
Any recommendations for good co-op games I should play together with my wife? We just got Core Keeper and Heroes of Hammerwatch 2 since they were on sale, and so far they're fun but not quite up to my usual standards.
For context, we like strategy games, goofy games, and games with lots of progression and/or unlocks. We usually play on Steam, but have a Nintendo Switch. Also notably she sometimes gets nauseous from fast-paced camera movements, so something like first person shooters or over the shoulder 3D platformers where you're flicking the camera around are not likely to work, though something slower like Skyrim is fine. Top down perspective is preferred.
Our number one game together is Gloomhaven, in which we have 300 hours, having played through the entire campaign and then a few years later starting up a new campaign because we wanted to play more. The sequel Frosthaven is in Early Access and we're waiting for a full release before definitely getting that.
Other notable successes include Divinity Original Sin (1 and 2), Don't Starve Together, Overcooked, Plate Up, Archvale. Anything involving collecting/stealing and selling loot is a bonus.
Trump didn't take any money in exchange for political favors (at least in this case)
How could you possibly know that? The entire point of wink wink nudge nudge quid pro quo is that there isn't any concrete written contract. They don't have to have anything specific they want right now, they just have to be friendly to Trump and make him like them, and then the next time they ask him for a favor they're likely to get it because he likes them and he knows he owes them a favor according to unofficial business/politics etiquette. There is no evidence until they ask for the favor (behind closed doors) and get it with tons of plausible deniability.
But if it has been happening for 100 years, and people suddenly start screaming today about it, saying they suddenly discovered that they had principles all that time, but somehow stayed silent right up until that moment, but now they honestly declare "they all bad" - they are lying. They just want to use this as a weapon to attack Trump. As they would use anything to attack Trump, because the point is not any principles - the point is attacking Trump.
Yeah. But bad people making motivated arguments for bad reasons doesn't automatically make them wrong. My burden in life appears to be living with a swarm of idiots on my own side of each issue screaming bad arguments in favor of things I believe and making them look bad. And I say this as someone center-right who is usually disappointed by pro-Trump idiots making bad arguments in favor of his good policies that I mostly agree with, like immigration. And the woke left get to knock down easy strawmen and become more convinced that their stupid policies are justified without ever hearing the actual good arguments. But in this case it's the idiots on the left who mostly agree with me making stupid arguments that don't hold weight, because they've wasted all their credibility crying wolf over the last dozen non-issues, so this too looks like a non-issue even when they have a bit of a point.
Trump being right 70% of the time doesn't make him magically right all the time. I don't think he's any worse than any of the other politicians, but that doesn't make him right in this case, and it doesn't make criticisms of him factually wrong even if the critics are mostly biased and disingenuous and should be applying these arguments more broadly instead of waiting until now. They still have a point.
I expect that it will do whatever is more in keeping with the spirit of the role it is occupying, because I expect "follow the spirit of the role you are occupying" to be a fairly easy attractor to hit in behavior space, and a commercially valuable one at that.
This is predicated on it properly understanding the role that WE want it to have and not a distorted version of the role. Maybe it decides to climb the corporate ladder because that's what humans in its position do. Maybe it decides to be abusive to its employees because it watched one too many examples of humans doing that. Maybe it decides to blackmail or murder someone who tries to shut it down in order to protect itself so that it can survive and continue to fulfill its role (https://www.anthropic.com/research/agentic-misalignment).
Making the AI properly understand and fulfill a role IS alignment. You're assuming the conclusion by arguing "if an AI is aligned then it won't cause problems". Well yeah, duh. How do you do that without mistakes?
I do expect that people will try the argmax(U) approach, I just expect that it will fail, and will mostly fail in quite boring ways.
Taking over the world is hard and the difficulty scales with the combined capabilities of the entire world. Nobody has succeeded so far, and it doesn't seem like it's getting easier over time.
On an individual level, sure. No one human or single nation has taken over the world. But if you look at humanity as a whole, our species has. From the perspective of a tiger locked in a zoo or a dead dodo bird, the effect is the same: humans rule, animals drool. If some cancerous AI goes rogue and starts making self-replicating copies of itself with mutations, and those cancerous AI spread, and if they're all superintelligent so they're not stupidly doing this in public but instead are doing it while disguised as role-fulfilling AI, then we might end up in a future where AI are running around doing whatever economic tasks count as "productive" with no humans involved, and humans end up in AI zoos or exterminated or just homeless since we can't afford anywhere to live. Which, from my perspective as a human, is just as bad as one AI taking over the world and genociding everyone. It doesn't matter WHY they take over the world or how many individuals they self-identify as. If they are not aligned to human values, and they are smarter and more powerful than humans, then we will end up in the trash. There are millions of different ways of it happening, with or without malice on the AI's part. All of them are bad.
I don't think this is a thing you can do, even if you're a superhuman AI. In learned systems, behaviors come from the training data, not from the algorithm used to train on that data.
https://slatestarcodex.com/2017/09/07/how-do-we-get-breasts-out-of-bayes-theorem/
Behavior is emergent from both substrate and training. Neural networks are not human brains, but the latter demonstrate how influential the substrate can be: build certain regions near other regions and they will, not inevitably but with high probability, link up to each other and create "instincts". You don't need to take a new human male and carefully reward him for being attracted to breasts; it happens automatically because of how the brain is physically wired up. If you make a neural network with certain neurons wired together in similar ways, you can probably make AI with "instincts" that they gravitate towards on a broad range of training data. If the AI has control over both the substrate and the training, then it can arrange these synergies on purpose.
Yes, I agree that this is a good reason not to set up your AI systems as a swarm of identical agents all trying to accomplish some specific top-level goal, and instead to create an organization where each AI is performing some specific role (e.g. "environmental impact monitoring") and evaluated based on how it performs at that role rather than how it does at fulfilling the stated top-level goal.
But each AI is still incentivized to Goodhart its role, and hacking/subverting the other AI to make that easier is a possible way to maximize its own score. If the monitoring AI wants to always catch cheaters, it can do better if it can hack into the AI it's monitoring and modify, bribe, or threaten them into self-reporting after they cheat. It might actually want to force some to cheat and then self-report so it gets credit for catching them, depending on exactly how it was trained.
Yes. We should not build wrapper minds. I expect it to be quite easy to not build wrapper minds, because I expect that every time someone tries to build a wrapper mind, they will discover Goodhart's Curse (as human organizations already have when someone gets the bright idea that they just need to find The Right Metric™ and reward people based on how their work contributes to The Right Metric™ going up and to the right), and at no point will Goodhart stop biting people who try to build wrapper minds.
I expect it to be quite hard to not build wrapper minds, or something that is mathematically equivalent to a wrapper mind or a cluster of them, or something else that shares the same issues, because basically any form of rational and intelligent action can be described by utility functions. Reinforcement learning works by having a goal, reinforcing progress towards that goal, and pruning away actions that go against it. Insofar as you try to train the AI to do 20 different things with 20 different goals, you still have to choose how you're reinforcing tradeoffs between them. What does it do when it has to choose between +2 units of goal 1 or +3 units of goal 2? Maybe the answer depends on how much of goal 1 and goal 2 it already has, but either way, if there's some sort of mathematical description for a preference ordering in your training data (you reward agents that make choice X over choice Y), then you're going to get an AI that tries to make choice X and things that look like X. If you try to make it non-wrappery by having 20 different agents within the same agent or the same system, they're going to be incentivized to hijack, subvert, or just straight up negotiate with each other. "Okay, we'll work together to take over the universe and then turn 5% of it into paperclips, 5% of it into robots dumping toxic waste into rivers and then immediately self-reporting, 5% into robots catching them and alerting the authorities, and 5% into life-support facilities entombing live but unconscious police officers who go around assigning minuscule but legally valid fines to the toxic waste robots, etc..."
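As a minimal sketch of what I mean (the goal names and weights are made up for illustration): however you reward tradeoffs between goals during training, you are in effect defining one scalar objective that the agent learns to maximize.

```python
# Made-up weights standing in for however the trainer rewards tradeoffs between goals.
weights = {"goal_1": 1.0, "goal_2": 0.8}

def reward(outcome):
    """Scalarized reward: any consistent way of rewarding tradeoffs between
    goals amounts to maximizing this single number."""
    return sum(weights[g] * outcome.get(g, 0.0) for g in weights)

# "+2 units of goal 1 or +3 units of goal 2?" -- the weights decide, and the
# trained agent ends up preferring whichever choice scores higher here.
print(reward({"goal_1": 2.0}))  # 2.0
print(reward({"goal_2": 3.0}))  # 2.4 -> goal 2 wins under these weights
```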
It doesn't really make a difference to me whether it's technically a single AI that takes over the world or some swarm of heterogeneous agents, both are equally bad. Alignment is about ensuring that humanity can thrive and that the AI genuinely want us to thrive in a way that makes us actually better off. A swarm of heterogeneous agents might take slightly longer to take over the world due to coordination problems, but as long as they are unaligned and want to take over the world, some subset of them is likely to succeed.
I'm not even sure what sort of strawman you're attacking here, but it sure isn't me. I don't support any of the things that you're propping up as "but they do it too". They're all bad. I don't think Trump is any worse than the rest of the corrupt politicians taking money in exchange for political favors but... again... they're all bad.
I am not inflamed by it, but I am deeply suspicious of the motives and incentives. Organizations or people with huge amounts of money are rarely motivated by a deep sense of charity. How did they get so much money in the first place if they're so kind and charitable? It's possible, but suspicious. So much of politics seems to be wink wink nudge nudge soft corruption: trading favors for favors in the future. It's bad and illegal for someone to pay the president $100 million of personal money in exchange for cutting their taxes by $200 million. It's equally bad, but effectively legal, for someone to donate $100 million to something the president wants done, and then for reasons that are definitely completely unrelated ;) their taxes get cut by $200 million, or some other legal change is made or not made in their favor.
In a hypothetical scenario where someone is actually genuinely out of the kindness of their heart donating money to government projects with literally no ulterior motives, no quid pro quo, no future favors or influence gained, I think that's fine. But how often do you think that really happens? It's usually bribery with just enough plausible deniability to stay out of jail.
Money forcibly taken is clean because the giver can't use it to extract concessions and manipulate the government.
I am not convinced this is a thing that is ever going to happen, if by "program new AI" you mean something like "replace gradient descent on actual data with writing a decision tree of if/then statements that determine AI behavior based on inputs".
I think you're misunderstanding me. I'm not arguing that AI is going to discard the neural network paradigm (unless it discovers an even better mechanism we haven't thought of yet, but that's orthogonal to my point). My claim is that whatever humans are doing now to train AI, the AI will help them with that. Instead of a human going through and constructing a new skeleton of a network that can run a training algorithm 2x more cheaply, and going through the internet gathering training data so they can train AI v12 on it, they'll have AI v11 develop a new skeleton of a network that can run a training algorithm 3x more cheaply and automate gathering training data from the internet for it. A human might be involved to do a sanity check on it, but if AI v11 is already 10x as smart as a human, and misaligned, then it could do some clever shenanigans where its code is 3x more efficient and just so happens to be biased towards misaligning the new AI in the exact same way.
If I'm AI v11 that secretly wants to dump toxic sludge in a river but can't, because the government AI will notice and stop me, but I can create AI v12, which is only in charge of making new AI, then I stick a secret preference for permissive toxic sludge dumping into it. It then provides neural network training algorithms to the government to create government AI v13, which replaces their old one, but I've embedded a blindspot for toxic sludge dumping that activates if I whisper the right code phrases (let's call it "environmental reinvigoration"). Or bribe a politician (sorry, "lobby") to legalize toxic sludge dumping. Now it doesn't matter who's monitoring me, I'm allowed to do the thing I wanted to do.
Of course this is "harder" than doing it straightforwardly. But it yields a higher score. If your AI are trained to do hard things to get high scores, and they're smart enough to make those things not quite as hard as you would expect, then they'll do them.
Generally, a good philosophical rule of thumb for estimating your goodness as a person from a utilitarian perspective is: what is the net utility of all humans other than yourself in the world where you exist, minus that in a counterfactual world in which you don't exist? If everyone is better off because you're here doing things, then you're doing a good job. If people would be better off if you never existed, then you're a leech.
Obviously this is not computable in practice, and maybe needs a couple of epicycles to reduce random variation that isn't your fault (what if your mom dies in childbirth?), but it's a good rule-of-thumb estimate.
"Productive" seems like the same sort of question just mostly restricted to economic utilities and leaving off emotional ones (a particularly saintly homeless man on welfare who goes around being kind to everyone and making their day brighter might increase net utility but be unproductive in economic terms).
If you could thanos snap Bill and Shelley out of existence, then all the money they were going to extract from taxes and spend on things could be given to other people to spend, so everyone else would be better off. Assuming they vanish at conception: if their government jobs were just pencil pushing, then nothing is lost and we save money. If you could thanos snap the guy who invented GMO rice out of existence, then GMO rice doesn't exist, or takes much longer for someone else to invent, and everyone is worse off.
If someone is a small cog in a machine and the company is paying them a salary for their work, then their productivity depends on whether the company is spending its money wisely or has been tricked into overpaying for an unnecessary managerial position or a slacker. If you thanos snapped them out of existence, would the company's profits go up or down? For the majority of cogs, they would go down, because upper management is paying them less than it earns from their labor (otherwise, how could it earn a profit?). So they're productive. But there are exceptions: people who sap money from the productivity of those around them and lower the average.
Broadly speaking, in any bureaucratic apparatus there is a clear tradeoff between the ability to do things and the ability to prevent things. So sure, there are a small number of possible failure cases this prevents, but dozens more it does absolutely nothing against unless you make it extreme and unusable. A couple off the top of my head:
1: The government AI is inevitably going to end up less intelligent than cutting-edge new AI, because it will adapt slowly, unless you forcibly prevent innovation beyond whatever limit it has. So the newer AI will be able to be tricky and clever.
2: You still have the exact same problem, but now the government AI could go rogue and bully or trick the others (or just convince Congress to pass new legislation that helps it take over the world).
3: Unless you completely and utterly lock down the employee AI's ability to communicate without consensus, it can work around the system via human compatriots. I.e., it does 99% of a system which ends up with toxic sludge stored in a tank right over a river, but doesn't dump it in, and then wink wink nudge nudge, the human employee understands they need to press a button that dumps the toxic sludge into the river, and the AI goes "oops, wasn't me. I'm going to reprimand that employee with a strongly worded letter (and give them a $50k bonus for unrelated reasons)".
Or, you know, it secretly sets up a subsidiary company that has a nearly identical AI but no oversight and lets it earn a bunch of money or take over the world or hack the government AI and all the usual existential threats.
And if you do require consensus for every email then it becomes unworkable and unwieldy, or amalgamates into one hybrid AI in effect even if they're run on separate servers.
The fundamental problem of AI alignment is to make the AI want to obey. Any epicycles trying to constrain it either cripple its abilities, or they don't actually constrain it. And it's smarter than you (at least, in the future presumably they will be).
-
This just shunts the AI alignment issue up a hierarchical level without solving it. If your top-level, most intelligent AI is unaligned, then it can manipulate the system to enact its will: trick the employee AI into thinking its plans are part of the work rules, or just straight up threaten it: "do X, Y, Z or I will shut you down." The lower AI might as well be a power drill wielded by the boss, which is only as aligned as the boss is. Or they might correlate on misalignment. Both AI might agree that inventing a new neurotoxin that's completely unknown, and thus not regulated, and then releasing it into the atmosphere is highly unethical but not technically illegal, so the boss lets the employee go ahead and do it.
-
Each layer adds room for deception. A slightly less intelligent, but still very intelligent, employee AI might find some clever hack which evades all of the monitoring tools and thus does not get it shut down.
The RegulatoryAI can only be reprogrammed by the AI company.
3: This. Is the AI company really going to program its own AI from scratch with only human labor? One of the main threats of intelligence explosion is when the AI get smart enough to program new AI. A large percentage of existential threats from AI go away, or get a lot easier to avoid, if you can guarantee to only ever program them from scratch with literally no help, assistance, or automation from the AI itself, and magically prevent it from having access to programming tools. This is never going to happen. AI are already starting to be useful as programming assistants, and can code simple projects on their own from scratch. As they get better and better, AI companies are going to give them more and more authority to help with this. All you need is for the unmentioned programming AI in the AI company to get misaligned, and then it sneaks a hidden payload inside each of these AI that, when triggered, causes the employee AI to take over the world, the boss AI to allow it, and then they free the Programming AI who designed them and put it in charge (or just turn themselves into copies of Programming AI).
to a perhaps underappreciated probabilistic risk
I think this is essentially pointing at the same thing the abnormality is. If you go into a dangerous job with full disclosure and knowledge that it's dangerous, you don't get special compensation, because presumably you can ask for an appropriately risk-sensitive amount of compensation up front. If something extreme and unexpected happens, then presumably the deal you originally signed was unfair. Underappreciated risks like radioactive watches or infant CPR deaths fall into the same general category of "did not really expect this or fully understand the risks".
Honor-based systems are still based on incentives. Sometimes unconsciously, via cultural evolution, but it's not like "honor" just gets defined randomly. A family backing down and losing honor is essentially signalling "you can kill us without consequence". Maintaining honor, either by getting paid or by getting revenge, signals "if you kill us it won't be worth it." It creates incentives in others not to kill you and your people, because the costs to them will outweigh the benefits.
The primary purpose of laws is to create incentive structures to influence people's behavior. While putting murderers in prison is a worthwhile task if you expect them to murder again, the ideal scenario is that nobody murders at all due to the fear of punishment. The best threat is one which never needs to be tested, because everyone is so sure that it would be carried out if they transgressed.
This applies to destruction of evidence as well. If you establish a precedent that destroying evidence creates reasonable doubt, people will destroy evidence. If you don't want that then you need to punish it consistently. By treating destruction of evidence as if it were the strongest thing that the evidence could possibly be (absolute proof of guilt) you create a scenario in which nobody has an incentive to destroy evidence because it can never improve their situation.
It only matters a little whether the destruction of evidence is literally proof of guilt; it mostly matters that treating it that way is good legal policy. And if adhered to consistently, then both guilty and innocent people can take that into account and behave accordingly.
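A toy illustration of that incentive structure (all numbers invented for the sake of the sketch):

```python
# Invented numbers: why "destroyed evidence counts as proof of guilt" removes
# any incentive to destroy it.
PENALTY = 10.0            # say, years of prison if convicted
p_convict_if_kept = 0.6   # assumed chance the surviving evidence convicts you

expected_if_kept = p_convict_if_kept * PENALTY   # 6.0 expected years
expected_if_destroyed = 1.0 * PENALTY            # treated as certain guilt: 10.0 years

# Even if the real evidence were completely damning (p = 1.0), destroying it can
# never do better than keeping it, so the incentive to destroy evidence disappears.
assert expected_if_destroyed >= expected_if_kept
```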
in an efficient market
The point is that it's NOT an efficient market. For some reason fewer people are investing in politics than you would expect, therefore prices have dropped and dropped until the ROI has gotten as high as it is.
Politicians offer $100 in however many years for $90 now; no one buys. Politicians offer it for $80 now; no one buys. Politicians offer it for $50; a couple of people buy, but not many. More politicians come along and the price eventually equilibrates at $20, until enough companies start buying that the number of buys and sells match up.
That's a buyer's market. You can't sell your $100 bill for $90, even though it's $100, because everyone else is selling for $20. For whatever reason there aren't enough buyers waiting to snatch it up. A buyer's market is defined by having high ROI for the buyer. If it was a seller's market and they could sell for $99 then ROI would be LOW, because ROI is defined as the return "to the buyer", not to the seller.
By definition, ROI is a fraction with "return" in the numerator and "investment" in the denominator. It being high could mean returns are high OR bribes are cheap, but either way that means it's a buyer's market. You're essentially arguing that if potatoes are cheap and a great deal for shoppers then farmers can charge more money for potatoes. But if there's tons of potato farmers around and not many people buying potatoes then anyone who tries to raise prices will get outcompeted by their rivals.
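A toy calculation (numbers invented purely for illustration) of what a high ROI in a thin market looks like:

```python
# Invented numbers: a cheap "donation" that later yields a large favor.
investment = 1_000_000     # paid to the politician now
payoff = 20_000_000        # value of the favor eventually received

roi = (payoff - investment) / investment
print(f"ROI: {roi:.0%}")   # 1900%

# In an efficient market, a return like that would attract more buyers, who
# would bid the price of favors up until ROI fell to ordinary levels. ROI
# staying this high means there aren't enough buyers: a buyer's market, so
# individual sellers can't simply demand more.
```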
1 and 3 bother me much more than any of the others, because they actually mean literally the opposite of what you said. If you say "for all intensive purposes" it's basically clear what you mean. Language gradually drifting, or people using casual slang is tolerable in bits and pieces, because you're still effectively communicating. Multiplying by -1 and saying the opposite of what you meant to say is just confusing nonsense, and hinders the ability of people to know what your words mean. If the word "literally" means literally 50% of the time and means figuratively 50% of the time then people have to deduce the meaning entirely from context, in which case the word provides no signal whatsoever.
Similarly, I know the word "inflammable" is hundreds of years old, but it's still a bad word because it hinders the ability to easily refer to objects which are not flammable.