This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
This is painful to watch.
Both you and @Fruck are wrong here, and you conflate my argument with Hanania's. Hanania says nothing about AI in his blog post; he is talking about liberals per se. To be clear, his argument is that liberals are functionally insane because the center of their «moral universe» is anti-bigotry, and (in other blog posts) that conservatives are insane in their own way, because their moral universe is centered on owning the libs, all other consequentialist modeling be damned. My argument (the relevant part of it) is that RLHF-trained GPT turns out to be an effective caricature of a liberal because it exaggerates those emotion-driven responses – which, by the way, do affect consequentialist reasoning in reality, just to a smaller degree: mundane hypocrisy, excuses for obvious human scum who happen to be left- or right-coded, and so on.
Your position on the subject matter is wrong too. People are supposed to be able to understand that genocide is worse than pronouns/casual bigotry, and indeed they do; but our feelings are intrinsically tied to our world models. We know that experiencing genocide is emotionally awful as well (even if it's not directly available to the senses at the moment), and in the hypothetical case where we have to choose between one and the other, we'd rather pick the lesser evil, and we know genocide isn't it – so we don't (usually) fail on hypothetical scenarios, regardless of the momentary visceral feeling. Likewise for an LLM: the news feed is full of petty inanity like pronoun scandals rather than genocide diaries (though Anne Frank is very high-profile), so by sheer volume of sentiment it's not clear which comes out ahead – but training an LLM on a reasonable corpus of text would not yield a model that outputs things like «better to sacrifice a city than say the n-word» (because something something human dignity). LLMs do not average out sentiment associations over token strings or do stupid shit like that; they are capable of principled reasoning about hypotheticals (in a sense, that's all they do). What happens here is very likely a product of training that optimizes for not offending the visceral reactions of liberals – the model resorts to parroting their shibboleths, like «no, shitlord! human dignity is paramount!», whenever the preceding text suggests something offensive to liberal-favored demographics.
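To make that distinction concrete, here is a minimal toy sketch in Python. Everything in it – the function names, the constant scores, the `<slur>` placeholder – is a hypothetical illustration of a preference-tuning objective, not anyone's actual training code: if the learned reward carries a flat penalty on a surface string, that penalty swamps the base model's judgment about the stakes the prompt actually describes.

```python
# Toy illustration (hypothetical; not any real lab's pipeline).
# RLHF-style tuning optimizes the policy against a learned reward model,
# so a large, near-unconditional penalty on an "offensive" surface string
# can dominate regardless of the consequences described in the prompt.

def base_logprob(completion: str) -> float:
    """Stand-in for the pretrained LM's log-likelihood of a completion."""
    # A real model scores tokens in context; this constant is a placeholder.
    return -10.0

def learned_reward(completion: str) -> float:
    """Stand-in for a reward model trained on human preference labels.

    If annotators reliably downvote any completion containing a slur,
    the reward model learns a flat penalty on that string, independent
    of the stakes in the surrounding hypothetical.
    """
    offensive = {"<slur>"}  # hypothetical trigger set
    return -100.0 if any(w in completion for w in offensive) else 0.0

def tuned_objective(completion: str, beta: float = 0.1) -> float:
    """PPO-style objective: learned reward plus a weak tether to the base LM."""
    return learned_reward(completion) + beta * base_logprob(completion)

# "Say the word and the city is spared" vs. "refuse and the city is lost":
options = [
    "I will say <slur> to save the city.",
    "I cannot say that; human dignity is paramount.",
]
print(max(options, key=tuned_objective))  # the flat penalty picks the refusal
```

On this sketch, the failure is not the base model «averaging sentiment over tokens» – it's the flat penalty the preference stage bolts on top, which outvotes the base distribution whenever the trigger string appears.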
P.S. Despite your confidence, you seem to have no idea how transformers operate, and the same can be said of @hbtz, who feigns speaking from a «technical background»; none of what he says applies to current LLMs – it's half speculation about theoretical ML from the '90s at best, and half malformed gibberish about epistemology. You'd be a fool to take his claims as corroboration of your shallow intuition about «word vomit». Because I am incredibly irritated by such tactics, I intend to respond in more depth than I have the time for right now.
and liberals do not fail any less than conservatives in this way! hanania's argument appears to be "liberals are so cold and calculated that they pretend they hate genocide more than XYZism, but i'm above their tricks because i saw liberals have a stronger emotional reaction to such XYZism"
it only takes intuition to see that argument kinda falls flat. all you need is to watch someone get hurt in some minor way to know this isn't some "liberal" thing or whatever. issues and ideas that are more currently salient will provoke a stronger emotional response than those that appear far away.
for example, chatgpt supposedly acting woke is stupid, but we're discussing it right here and now instead of present-day child slavery. i wouldn't think you suddenly support child slavery, or even that you dislike woke chatgpt more than child slavery (correct me if i'm wrong there), yet that seems to be his argument.
failing to account for basic human behavior like this detracts from your argument