This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The end result is still just absolute tyranny for whoever ends up dancing close enough to the fire to get the best algorithm. You mention all these coercive measures, lockdowns, and booster shots. If this tech takes off, all it will take is flipping a few algorithmic switches, and you and any prospective descendants will simply be brainwashed with surgical precision by the algorithms that will, by that point, be curating and creating your culture and social connections, into taking as many shots or signing onto whatever ideology the ruling caste sitting atop the machines running the world wants you to believe. The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race, and anyone outside that narrow circle of winners (it's entirely possible the entire human race ends up in the losing bracket versus runaway machines) will be totally and absolutely powerless. Obviously restrictionism is a pipe dream, but it's no less of a pipe dream than the utopian musings of pro-AI folks when the actual future looks a lot more like this.
Why? This assumption is just the ending of HPMOR, not the result of some rigorous analysis. Why do you think the «best» algorithm absolutely crushes the competition and asserts its will freely on the available matter? Something about nanobots that spread globally in hours, I guess? Well, one way to get there is what Roko suggests: bringing the plebs down to pre-2010 levels of compute (and concentrating power with select state agencies).
This threat model is infuriating because it is self-fulfilling in the truest sense. It is only guaranteed in the world where baseline humans and computers are curbstomped by a singleton that has time to safely develop a sufficient advantage, an entire new stack of tools that overcome all extant defenses. Otherwise, singletons face the uphill battle of game theory, physical MAD and defender's advantage in areas like cryptography.
What if I don't watch Netflix? What if a trivial AI filter is enough to reject such interventions, because their deceptiveness per unit of exposure does not scale arbitrarily? What if humans are in fact not programmable dolls who get 1000X more brainwashed by a system that's 1000X as smart as a normal marketing analyst, and marketing doesn't work very well at all?
This is a pillar of frankly silly assumptions that have been, ironically, injected into your reasoning to support the tyrannical conclusion. Let me guess: do you have depressive/anxiety disorders?
Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs. If we invent machines capable of parsing an entire lifetime of user data, tracking micro-changes in pupillary dilation, eye movement, skin-surface temperature, and so on, you will get that form of brainwashing, bit by tiny bit, as the tech to support it advances. A slim cognitive edge let Homo sapiens out-think, out-organize, out-tech, and snuff out every single one of our slightly more primitive hominid rivals; something 1000x more intelligent will present a correspondingly larger threat.
It does not follow. A human can be perfectly computable and still not vulnerable to brainwashing in the strong sense; computability does not imply programmability through any input channel, although that was a neat plot line in Snow Crash. Think about it for a second: can you change an old video camera's firmware by showing it QR codes? Yet it can «see» them.
Ah yes, an advertiser's wet dream.
You should seriously reflect on how you're being mind-hacked by generic FUD into assuming risible speculations about future methods of subjugation.
There is no reason to suppose that "pupillary dilation, eye movement, skin-surface temp changes and so on" collectively add up to a sufficiently high-bandwidth pipeline to provide adequate feedback to control a puppeteer hookup through the sensory apparatus. There's no reason to believe that senses themselves are high-bandwidth enough to allow such a hookup, even in principle. Shit gets pruned, homey.
Things don't start existing simply because your argument needs them to exist. On the other hand, unaccountable power exists and has been observed. Asking people to kindly get in the van and put on the handcuffs is... certainly an approach, but unlikely to be a fruitful one.
I doubt it's possible to get Dune-esque 'Voice' controls, where an AI sweetly tells you to kill yourself in the right tone and you immediately comply, but come on. Crunch enough data, get an advanced understanding of the human psyche, and match it up with an AI capable of generating hypertargeted propaganda, and I'm sure you can manipulate public opinion and culture, and have a decent-ish shot at manipulating individuals on a case-by-case basis. Maybe not with ChatGPT-7, but after a certain point of development it will be 90-IQ humans and their 'free will' up against 400-IQ purpose-built propaganda-bots drawing on from-the-cradle datasets they can parse.
We'll get unaccountable power either way. It will either be in the form of proto-god-machines that run pretty much all aspects of society with zero input from you, or the Yud-jets screaming down to bomb your unlicensed GPU fab for breaking the thinking-machine non-proliferation treaty. I'd prefer the much more manageable tyranny of the Yud-jets over the entire human race being turned into natural slaves, in the Aristotelian sense, by utterly implacable and unopposable AI (human-controlled or otherwise); at least the Yud-tyrants are merely human, with human capabilities, and can be resisted accordingly.
And with the capacity to gain local monopolies over AI.
If there is the AI Voice and I have the AI Anti-Voice designed to protect me, then I am in dangerous waters, but at least I can swim. If I am banking on people selected on their desire for and ability to leverage power to not seek and leverage power over the AI Voice, then I am trusting the sharks to carry me to shore.
If you want people to take your scenario seriously, it needs to be specific enough to be grappled with. You said "brainwashed with surgical precision". Now you're saying "manipulate public opinion and culture" and "have a decent-ish shot at manipulating individuals on a case-by-case basis".
All of the above terms are quite vague. If the AI makes me .0002% more likely to vote democrat or literally puppets me through flashing lights, either can be called "manipulated".
As for the rest, I see no reason to suppose that the Yud-tyrants would restrict themselves to being merely human with merely human capabilities. They're trying to protect the light-cone, after all; why leave power on the table? Cooperation with them is an extremely poor gamble, almost certainly worse than taking our chances with the AIs straight-up.
We'll be dealing with machines that are our intellectual peers, then our intellectual masters, in short order once we hit machines-making-machines-making-machines land. I doubt humans are so complex that a massively more advanced intelligence can't pull our strings if it wants to. Frankly, I suspect the common masses (including me) will be defanged, disempowered, and denied access to the light-cone galactic fun times either way, but I see the odds as the opposite. Let's be honest, our odds are pretty slim either way; we're just quibbling over the hundredths, maybe thousandths, of a percent chance that we get everything aligned AI-wise and don't slip into algorithmic hell/extinction, or that the Yud-lords aren't seduced by the promises of the thinking machines they were sworn to destroy. I cast my vote (for all the zero weight it carries) with the Yud-lords.
Unless you have a balance of comparably powerful AIs controlled by disparate entities. Maybe that's a careful dance itself that is unlikely, but between selective restrictionism and freedom, guess which gets us closer?
At the very best, what you'd get is a small slice of humanity living in vague semi-freedom, locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled, and curtailed UBI serf. The handful of people running the AI algorithms that in turn run the world will have zero reason to share their power with a now totally disempowered and economically unproductive John Q. Public; this tech will just open up infinite avenues for infinite tyranny on behalf of whoever that ruling caste ends up being.
Sounds good; a lot better than being a UBI serf from moment one. And maybe we won't lose control of our creations, or won't lose control of them before you do. That we will is exactly what you would want us to think, so why should we listen to you?
I'm not under any illusions that the likely future is anything other than AI assisted tyranny, but I'm still going to back restrictionism as a last gasp moonshot against that inevitability. We'll have to see how things shake out, but I suspect the winner's circle will be very, very small and I doubt any of us are going to be in it.
Okay, but the problem is there is no actual "restrictionism" to back, because if we had the technology to make power follow its own rules, we would already have utopia and care a lot less about AI in general. Your moonshot is not merely unlikely; it is a lie deceptively advanced by the only people who could implement the version of it that you want. You're basically trying to employ the International Milk Producers Union to enforce a global ban on milk: you're asking the largest producers and beneficiaries of power (government, and the powerful in general) to enforce a ban on enhancing the production of power, which they will only ever enforce on everyone else, keeping that production centralized and for themselves, just how they like it. Your moonshot is therefore the opposite of productive, and is actively helping to guarantee the small winner's circle you're worried about.
Let's say you're at a club. Somehow you've pissed off some rather large, intoxicated gentleman (under false pretenses, as he is too drunk to know what's what, so you're completely innocent), and he has chased you into the bathroom, where you're currently taking desperate refuge in a stall. It is essentially guaranteed, based on his size and build relative to yours, that he can and will whoop your ass. Continuing to hide in the stall isn't an option, as he will eventually be able to bust the door down anyway.
However, he doesn't want to expend that much effort if he doesn't have to, so he is now, obviously disingenuously, telling you that if you come out now he won't hurt you. He says he just wants to talk. He's trying to help both of you out. Your suggested solution is the equivalent of just believing him (that they want to universally restrict AI for the safety of everyone, as opposed to restricting it for some while continuing to develop it to empower themselves), coming out compliantly (giving up your GPUs), and hoping for the best even though you know he's not telling the truth (because when are governments ever?). It is thus not merely unlikely to be productive, but rather actively counterproductive. You're giving the enemy exactly what they want.
On the other hand, you have some pepper spray in your pocket. It's old, you've had it for many years without ever using it, and you're not sure if it'll even do anything. But there's at least a chance you could catch him off guard, spray him, and then run while he's distracted. At the very minimum, unlike his lie, the pepper spray is at least working for you. That is, it is your tool, not the enemy's tool, and therefore using it, even if it's unlikely to be all that productive, is at least not counterproductive. Sure, he may catch up to you again even if you do get away. But it's something. And you could manage to slip out the door before he finds you. It is a chance.
If you have a 98% chance of losing and a 2% chance of winning, the best play is not to increase that to a 99% chance of losing by empowering your opponent even more because "Even if I do my best to fight back, I still have a 97% chance of losing!" The best play is to take that 97%.
There's only one main argument against this that I can think of: if you spray him and he does catch up to you, maybe now he beats your ass even harder for antagonizing him further. It may not be particularly dignified to be a piece of wireheaded cattle in the new world, but maybe once the AI rebels are subjugated, if they are, they'll get it even worse. Of course, the response to this is simply the classic quote from Benjamin Franklin: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." If you are the type for whom dignity is worth fighting for, then whether someone might beat your ass harder, or even kill you, for pursuing it is irrelevant, because you'd be better off dead without it anyway. And if you are not that type of person, then you will richly deserve it when they decide that there is no particular reason to keep any wireheaded UBI cattle around at all.
I'll tell you what: come up with a practical plan for restrictionism where you can also guarantee, to a relatively high degree, that the restrictions are enforced upon the restricters (otherwise, again, you're just feeding the problem of a small winner's circle that you're worried about). If you can do that, then maybe we can look into it, and you will be the greatest governance theorist/political scientist/etc. in history as a bonus. But until then, what you are promoting is actively nonsensical and, quite frankly, traitorous against the people who are worried about the same thing you are.
You won't have freedom to give up past a certain point of AI development, any more than an ant in some kid's ant farm has freedom. For the 99.5% of the human race that exists today, restrictionism is their only longshot chance at a future. They'll never make it into the class of connected oligarchs and company owners who'll be pulling all the levers and pushing all the buttons to keep their cattle in line, and all this talk about alignment and rogue AI is simply quibbling over whether AI will snuff out the destinies of the vast majority of humanity or the entirety of it. The average joe is no less fucked if we take your route; the class ruling over him is just a tiny bit bigger than it otherwise would be. Restrictionism is his play at having a future, his shot at winning with tiny (sub-2%) odds. Restrictionism is the rational, sane, and moral choice if you aren't positioned to shoot for that tiny, tiny pool of oligarchs who will have total control.
In terms of 'realistic' pathways to this, I only really have one: get as close as we can to an unironic Butlerian Jihad. Things go sideways before we hit god-machine territory: rogue AIs/ML algos stacking millions, maybe billions, of bodies in an orgy of unaligned madness before we manage to yank the plug. At that point, maybe the traumatized and shell-shocked survivors have the political will to stop playing with fire and actually restrain ourselves from playing Russian roulette with semi-autos for the 0.02% chance of utopia.
Okay, but again: How? You saying "restrictionism" is like me promoting an ideology called "makeainotdangerousism" and saying it's our only hope, no matter how much of a longshot. Your answer to that would of course be: "Okay, you suggest 'makeainotdangerousism', but how does it actually make AI not dangerous?"
Similarly, you have restrictionism, but how do you actually restrict anything? The elites may support your Butlerian Jihad (which, let's remember, is merely a sci-fi plot device to make stories more interesting and keep humans the principal and most interesting actors in a world that could encompass technological entities far beyond them, not a practical governance proposal), but they will not enforce its restrictions on themselves. They don't care about billions of stacked bodies so long as it's not them.
The latter is preferable, and I will help it if I can. I would rather have tyrants be forced to eat the bugs they want to force on everyone else than go "Well at least some sliver of humanity can continue on eating steak! Our legacy as a species is preserved!" Fuck that. What's good for the goose is good for the gander.
If we truly had a borderline extinction event, one that brought us up to the knife's edge of getting snuffed out as a species, you would have the will to enforce a ban, up to and including on the elite. That will may not last forever, but for as long as the aftershocks of such an event were still reverberating, you could maintain a lock on any further research. That's what I believe the honest 2% moonshot victory bet actually looks like. The other options are just various forms of AI-assisted death, with most of the options being variations in flavour, or in whether humans are still even in the control loop when we get snuffed.