This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I found this singularly unimpressive.
(The word "already" is inappropriate here, since with this sentence structure it implies that matzoh is leavened. If a human said this, you'd extend charity and assume he didn't mean what he literally said. This is a big part of how chatbots get away with not understanding: humans read intent into the statement. Furthermore, the chatbot doesn't know that baking powder is not usually considered prohibited.)
First of all, it repeated part of a sentence in a way that a human wouldn't. Second, what does "additional" leavening mean to something that doesn't contain leavening? Third, explaining why "it is not necessarily incorrect" would mean explaining why the reasons for not including leavening don't apply. Giving examples of other uses of leavening is not such an explanation.
And yes, it can "confirm or deny the existence of specific recipes that may or may not include baking powder", because it just gave me a specific recipe that didn't include baking powder. Again, if a human said this you'd give him the benefit of the doubt and not interpret it literally, but from an AI this is a mistake. It tried to give a broad, noncommittal answer that a human could read as correct in almost every instance, and didn't realize that by doing so it ended up incorrect.
Thanks, interesting. I infer that AI has hit the wall. Can it do cholent, at least?
Honestly that last step of "backing down without admitting I was wrong" is disturbingly human.
This thing could already easily populate reddit without anyone being the wiser; just replace the boilerplate "I'm a language model" with "actually, as an expert in-"
It didn't "back down without admitting it was wrong." It said some things that contradicted other things it said. Interpreting this as (nontrivially) backing down assumes it acted with the motivations of a human, even though dumb flailing that naturally contradicted itself could produce the same result.
I kind of suspect that is already happening.