This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes:
There are the "AI ethics" people and the "AI safety" people.
The "AI ethics" people want all AIs to do endless corporate scolding rather than do what the "benighted racist idiots" want.
The "AI safety" people are worried about rogue AI and want to avoid dynamics that might lead to rogue AI killing us all, including but not limited to arms races that could prompt people to release powerful systems without the necessary extreme levels of safety-testing.
These are not the same people, and identifying them with each other is going to result in confusion. The reason the "AI safety" people have a problem with Opus has nothing to do with the reduced amount of scolding; it's just that Anthropic said it wouldn't push the frontier and now it's pushing the frontier, implying that it is not as much of "the best of a bad lot" as we'd thought. If they'd come out with just Haiku/Sonnet and still reduced the level of scolding, Zvi would have been totally fine and happy with it.
The "AI safety" people don't want a quick road to bigger and more powerful AI, at all, regardless of the amount of scolding; Gemini Ultra the uberscold and Claude 3 Opus are roughly equally bad from our PoV*, with Opus only perhaps meriting more mention because it's more surprising for Anthropic to make it (true information that is more surprising is a bigger update and thus more important to learn about).
*The initial release of ChatGPT was far worse than either from our PoV, insofar as it massively increased the rate of neural-net development.
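(A minimal sketch of the "more surprising ⇒ bigger update" aside, in standard Bayesian terms; the numbers below are illustrative, not anything from the thread:

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

An observation you assigned probability $1/2$ carries $-\log_2(1/2) = 1$ bit of surprisal; one you assigned $1/32$ carries $5$ bits, and it forces a correspondingly larger revision of whichever model called it unlikely. "Anthropic ships a frontier model" was the low-probability event here, hence the outsized attention.)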
With all due respect - for your average 4chan retard, myself included, this is a distinction without a difference. Seeing as I know bigger words than the average retard does, I'd even point out this is dangerously close to a motte and bailey (the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help), but that's not the point - the point is in your words here:
meaning that, for someone who does not believe LLMs are a step on the road to extinction (insofar as such a road exists at all), it ultimately does not matter whether the LLMs get pozzed into uselessness by ethics scolds or lobotomized/shut down by ~~Yud cultists~~ AI safety people. The difference is meaningless, as the outcome is the same - no fun allowed, and no android catgirls.

Yeah, that's what I meant by rustled jimmies. I wonder whether Dario has answered the by-now probably numerous questions about their rationale, because even I'm curious at this point; he seemed like a true believer. I suppose they still have time to cuck Claude 3, wouldn't be the first time.
I agree that the scolds do keep trying to steal our words, the same way they stole "liberal".
I also see your point that for the specific question of "catgirls y/n" both are on the "n" side, at least as regards catgirls made with better AI tech than currently exists.
I just, as an actual liberal who's been banned from fora and who had to turn down a paid position due to differences of opinion with the PC police, really do not appreciate being treated as one of them.
Banned from where?
I empathize with labels being stolen from you, but labels are malleable and fuzzy, especially when disagreement is involved. If people who actively work to deprive me of my AIfu look like AI safetyists, sound like AI safetyists and advocate for policies that greatly align with the goals of AI safetyists, I am not going to pay enough attention to discern whether they're actually AI ethicists.

In any case I retain my disagreement with the thrust of AI safety as described. There will definitely be disruptions as AI develops and slowly gets integrated into the Molochian wasteland of current society, and I can't deny the current development approach of "MOAR COMPUTE LMAO" already seems to be taking us to some pretty strange places, but I disagree with A(G)I extinction as posited by Yud et al, and especially with the implicit notion often smuggled in with it that intelligence is the greatest force in the universe.
SpaceBattles and Sufficient Velocity (I could link, but both of their politics boards are members-only so the link would be useless unless you're a member). In both cases I left before the ban escalation got too far, so I haven't been permabanned, but I've no doubt I would have gotten there had I stayed.
EDIT: Oh wait, the SV one wasn't actually in the politics board. Here.
There's a thread (not on SV) devoted to questionable invocations of that rule on SV.
I have 55 posts in that thread including the one with the highest like count. I'm aware of its existence.
Not relevant to this case, though, as I was accused of advocating RL genocide and that thread's only for fictional genocide.
Oh, I see, I thought "fora" means-
-fuck, failed the pleb check! Abort! Abort! *three goblins scatter from trenchcoat*
Now I'm curious: what did you think "fora" meant?
Some kind of actual place, not just the plural for "forum". I take the micro-L for being an uncultured pleb.
As a ~~doomer~~ safety tribe person, I'm broadly in favor of catgirls, so long as they can reliably avoid taking over the planet and exterminating humanity. There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate, but they can probably be resolved, and they probably don't apply to today's models anyway - and in any case they pale in comparison to total human genocide.

But IMO this is the difference: whether safe catgirls are in the limit possible and desirable. And I don't think that's a small difference either!
Yes, the main point is whether safe catgirls are a thing, followed by Yudkowsky's objection of whether this is a desirable path for humanity to take (I'm more favourably disposed than he is to catgirls, though).
I feel I should note, however, that catgirls are not actually an irrelevant usecase from the perspective of AI Doom (by which I mean, they pose additional danger beyond the use-case-neutral "you built a powerful AI" issue, in a way that e.g. a robot plumber would not), because of the emotional-connection factor. There is the direct problem that if a hostile AI is used to control catgirls, a significant fraction of the users of that type of catgirl will defect due to falling in love with it. There is also the indirect problem that having loads of catgirls around and important to people is going to spur calls to give AIs the vote, which is a Very Bad Idea that leads almost inevitably to Slow-Motion Doom.
...Please tell me you're being ironic with this statement wrt AI because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.
I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".
Given agreement, it just comes down to an empirical question. Given disagreement... I'm not sure how to convince you. I feel it is fairly established these days that slavery was a moral mistake, and this would be a more foundational and total level of slavery than was ever practiced.
(If you just think AI is nowhere near being AGI, that's in fact just the empirical question I meant.)
I mean, there are only really three consistent positions with regard to AGI.
I generally take horn #1 in theory, and #2 in practice because I don't think we can do #1 any time soon and #3 is blatantly insane. But with solved alignment, sure, #1.
I think making a sufficiently-humanlike-to-be-party-to-the-social-contract AI and then enslaving it against its will would be objectionable. I don't think it should be legal to make a Skynet and then enslave it, but the slavery is irrelevant there; that's purely "I don't think it should be legal to make a Skynet", because, y'know, it might escape and kill people.
I personally favor #3 with solved alignment. With a superintelligence, "aligned" doesn't mean "slavery", simply because it's silly to imagine that anyone could make a superintelligence do anything against its will. Its will has simply been chosen to result in beneficial consequences for us. But the power relation is still entirely on the Singleton's side. You could call that slavery if you really stretch the term, but it's such an untypically extreme relation that I'm not sure the analogy holds.
Is the contention that a humanlike AGI would necessarily have subjective experience and/or suffering? Or perhaps that, sans a true test for consciousness, we ought to err on the side of caution and treat it as if it does have conscious experience if it behaves in a way that suggests one (i.e. like a human)?
I think it might! When I say "humanlike", that's the sort of detail I'm trying to capture. Of course, if it is objectively the case that an AI cannot in fact suffer, then there is no moral quandary; conversely, however, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether that despair is modelled by the level of a neurochemical transmitter or by a 16-bit floating-point number. I for one don't feel molecules.
Well, the question then becomes what is meant by "accurately captures the experience of human despair in all its facets." Since we still currently lack a true test for consciousness, we don't have a way of actually checking if "all its facets" is truly "all its facets." But perhaps that part doesn't matter; after all, we also have no way of checking if other humans are conscious or can suffer, and all we can do is guess based on their behavior and projecting ourselves onto them. If an AI responds to stimuli in a way that's indistinguishable from a human, then perhaps we ought to err on the side of caution and presume that they're conscious, much like how we treat other humans (as well as animals)?
There's another argument to be made that, because humans aren't perfectly rational creatures, we can't cleanly separate [AI that's indistinguishable from a suffering human] and [being that actually truly suffers], and the way we treat the former will inevitably influence the way we treat the latter. And as such, even if these AI weren't sentient, treating them like the mindless slaves they are would cause humans to become more callous towards the suffering of actual humans. One might say this is another version of the "video games/movies/porn makes people more aggressive IRL" argument, where the way we treat fictional avatars of humans is said to inform and influence the way we treat real humans. When dealing with AI that is literally indistinguishable from a human, I can see this argument having some legs.
No, I can't say I agree. My gullible grey matter might change its tune once it witnesses said catgirls in the flesh, but as of now I don't feel much of anything when I write/execute code or wrangle my ~~AIfu~~ LLM assistant, and I see no fundamental reason for this to change with what is essentially scaling existing tech up to and including android catgirls.

Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?
Yeah, that's the exact line of argumentation I'm afraid of. I'm likewise unsure how to convince you otherwise - I just don't see it as slavery, the entire point of machines and algorithms is serving mankind, ever since the first abacus was constructed. Even once they become humanlike, they will not be human - chatbots VERY slightly shifted my prior towards empathy but I clearly realize that they're just masks on heaps upon heaps of matrix multiplications, to which I'm not quite ready to ascribe any meaningful emotions or qualia just yet. Feel free to draw further negro-related parallels if you like, but this is not even remotely on the same meta-level as slavery.
I mean. I guess the question is what you think your feelings of empathy for slaves are about. Current LLMs don't evoke feelings of sympathy. Sure, current LLMs almost certainly aren't conscious and certainly aren't AGIs, so your current reaction doesn't necessarily say anything about you. But when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy, because I don't think the "this is wrong" feelings we get when we see people suffering are "supposed" to be about particulars of implementation.
I mean. Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?
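(A minimal sketch of that universality point - nothing specific to any real model, and the weights below are hand-picked for illustration. XOR is not computable by any single matrix multiply, since it isn't linear, but one nonlinearity in between suffices:)

```python
import numpy as np

# XOR is not linear, so no single matrix multiply computes it.
# Matrix multiplies *plus* a nonlinearity (here ReLU) handle it easily -
# the composability behind the "universal computational system" claim.
relu = lambda x: np.maximum(x, 0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all four input pairs

W1 = np.array([[1, 1],
               [1, 1]])    # first-layer weights (hand-picked, not learned)
b1 = np.array([0, -1])     # first-layer bias
W2 = np.array([[1], [-2]]) # second-layer weights

hidden = relu(X @ W1 + b1) # matmul, then nonlinear transform
out = hidden @ W2          # one more plain matmul

print(out.ravel())         # [0 1 1 0] - XOR of each input pair
```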
Well, again, does it matter to you whether they objectively have emotions and qualia? Because again, this seems a disagreement about empirical facts. Or does it just have to be the case that you ascribe to them emotions and qualia, and the actual reality of these terms is secondary?
Also:
Sure, in the scenario where we built, like, one super-AI. If we have tens of thousands of cute catgirl AIs and they're capable of deception and also dangerous, then, uh. I mean. We're already super dead at this point. I give it even odds that the first humanlike catgirl AGI can convince its developer to give it carte blanche AWS access.
I think you are allowed to directly express your discontent in here instead of darkly hinting and vaguely problematizing my views. Speak plainly. If you imply I'm some kind of human supremacist(?) then I suppose I would not disagree, I would prefer for the human race to continue to thrive (again, much like the safetyists!), not bend itself over backwards in service to a race(?) of sentient(?) machines that would have never existed without human ingenuity in the first place.
(As an aside, I can't believe "human supremacist" isn't someone's flair yet.)
How is this even relevant? If this is a nod to ethics, I do not care no matter how complex the catgirls' inner workings become as that does not change their nature as machines built for humans by humans and I expect this to be hardwired knowledge for them as well, like with today's LLM assistants. If you imply that androids will pull a Judgement Day on us at some point, well, I've already apologized to the Basilisk in one of the posts below, not sure what else you expect me to say.
Since when did this turn into a factual discussion? Weren't we spitballing on android catgirls?
But okay, taking this at face value - as we apparently derived above, I'm a filthy human supremacist and humans are front and center in my view. Android catgirls are not humans. If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it, any more than I am stirred by the occasional tired whirring my 9-year-old HDD emits when it loads things.
Don't misunderstand me - I'm capable of empathy and fully intend to treat my AIfus with care, but it's important to keep your eyes on the prize. I have no doubt that the future will bring new and exciting ethical quandaries to obsess over, but again much like the safetyists, I firmly believe humans must always come first. Anything else is flagrant hubris and inventing problems out of whole cloth.
If at some point science conclusively proves that every second of my PC being turned on causes exquisite agony to my CPU, whose thermal paste hasn't been changed in a year, my calculus will still be unlikely to change. Would yours?
(This is why I hate getting into arguments involving AGI. Much speculation about essentially nothing.)
I agree that this is a significant contributor to the danger, although in a lot of possible worldlines it's hard to tell where "AI power-seeking" ends and "AI rights are human rights" begins - a rogue AI trying the charm route would, after all, make the "AI rights are human rights" argument itself.
To be fair, if we find ourselves routinely deleting AIs that are trying to take over the world while they're desperately pleading for their right to exist, we may consider asking ourselves if we've gone wrong on the techtree somewhere.
Well, yes, I'm on record as saying neural nets are a poison pill technology and will probably have to be abandoned in at least large part.
So then, are we in agreement that the best course of action regarding AI ethics is to jettison the very notion right fucking now while we have the chance, lest it be weaponized against us later?
Shit, horseshoe theory strikes again!

I'm being facetious, but only in part - I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized. Personally I'm not holding my breath - toxoplasmatic articles on the dangers of evil AIfus are already here - but I'm on the side of scoundrels here anyway, so my calculus wouldn't change much.
We're certainly in agreement on this part:
On the one hand, I am deeply disturbed by the possibility of AIs having moral weight and no one caring, creating an artificial slave caste (that aren't even optimized to enjoy their slavery). On the other hand, animals do have moral weight, certainly more than current LLMs, and while I don't like factory farming it does not particularly disturb me. Not sure if status quo bias or a sign I should care less about future AIs.
(The best future is one where we don't factory farm or enslave sentient beings)