Culture War Roundup for the week of September 19, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I think there are a couple of distinct uses of the word "discriminatory" here.

First, yeah, defining a training subset is a decision to favor one group over another. Making the historically-accurate-armor AI is going to disappoint the fantasy-art users. An AI that makes everyone [insert feature here] is going to disappoint the people wanting it to be realistic. And of course there's nothing stopping the dataset curator from excluding all members of a race, or all Christians, or all women with realistic proportions; doing so would be obviously discriminatory.
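
To make the mechanism concrete, here's a toy sketch. The `Image` fields are invented for illustration, not from any real dataset schema; the point is that curation is just a predicate, and every predicate favors somebody.

```python
from dataclasses import dataclass

@dataclass
class Image:
    style: str    # e.g. "historical" or "fantasy"
    subject: str  # e.g. "armor" or "portrait"

corpus = [
    Image("historical", "armor"),
    Image("fantasy", "armor"),
    Image("historical", "portrait"),
]

# The historically-accurate-armor curator's choice of training subset:
training_set = [img for img in corpus if img.style == "historical"]

# Nothing in the mechanism distinguishes this benign filter from an odious
# one. Swap the predicate and you have excluded a race, a religion, or all
# women with realistic proportions; the code path is identical either way.
```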

On the other hand, a sentencing or profiling AI is categorically different. When a predictive machine discriminates, it causes a disparate impact on actual, individual people rather than on the collective. I'm struggling to find the right words... it is the difference between causing an injustice and merely perpetuating one, a sort of active vs. passive harm.
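
To gesture at the difference with a toy example (every feature name and weight here is invented; this describes no real system):

```python
# A predictive model's output lands on a named individual, not on a corpus.

def risk_score(priors: int, age: int, neighborhood_rate: float) -> float:
    # neighborhood_rate is a proxy feature: if it correlates with race,
    # even a perfectly "accurate" model reproduces that correlation.
    return 0.5 * priors - 0.02 * age + 2.0 * neighborhood_rate

def deny_bail(score: float, threshold: float = 1.5) -> bool:
    return score > threshold

# A generative model trained on the same skewed world draws a skewed picture;
# this model issues a skewed *decision* about one real defendant.
score = risk_score(priors=1, age=30, neighborhood_rate=0.9)
print(deny_bail(score))  # a concrete yes/no for a concrete person
```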

(I think there's also an argument to be made that the ill-trained AI is more obviously flawed than an AI trained well on an evil world, and thus more compatible with exit-rights liberalism, but I'm not so confident in that...)

The OP argued that predictive AI was a zero-sum game where only reverse racism could compensate for the inbuilt discrimination. I think... there's some degree to which that is correct? It's extending that conclusion to generative AI that strikes me as wrong. I believe generative discrimination is an easier problem than predictive discrimination because it's a lesser problem. Maybe you can't make a "world that could be" generative AI without excluding anyone, but I'm reasonably sure you can make one without actively harming any individuals.
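
The zero-sum part can be shown with a few made-up numbers (hypothetical scores, nothing empirical): once the score distributions differ between groups, a shared threshold gives unequal outcomes, and equalizing the outcomes requires treating identically-scored individuals differently.

```python
scores_a = [0.2, 0.4, 0.6, 0.8]  # group A risk scores (hypothetical)
scores_b = [0.4, 0.6, 0.7, 0.9]  # group B risk scores (hypothetical)

def positive_rate(scores, threshold):
    return sum(s > threshold for s in scores) / len(scores)

# One shared threshold: unequal flag rates, i.e. disparate impact.
print(positive_rate(scores_a, 0.55), positive_rate(scores_b, 0.55))  # 0.5 0.75

# Per-group thresholds equalize the rates, but now a 0.6 in group B goes
# unflagged while a 0.6 in group A is flagged. That is the compensating
# "reverse" discrimination the OP claims is unavoidable.
print(positive_rate(scores_a, 0.55), positive_rate(scores_b, 0.65))  # 0.5 0.5
```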

Anyway, thanks for pressing me on this. It's an interesting philosophical topic.

OK, so based on this post, it sounds like when you wrote:

Does it require discriminating against anyone?

about a theoretical generative AI model based on a world that could be (instead of the world as it is), you were making a general statement that no generative AI model could possibly discriminate against someone. That seems like a strange thing to say about one specific type of generative AI model (i.e. one based on a world that could be) when discussing two different theoretical types of generative models (the other being based on the world as it is), but it's technically true, I suppose.

More or less, though I wouldn't frame it as an absolute, if only because that's inviting extreme counterexamples.

The original article was about unimpressive debiasing in corporate generative models. When sulla responded with the assertion that this is unavoidable in bleeding-edge AI, I thought it appropriate to point out that his examples were all predictive rather than generative. I think that really undermines his equation of discrimination with the 'true but verboten.' A generative AI could be wildly discriminatory in the loose sense without ever discriminating in the narrow, personal sense, and it is the latter that is a one-way ticket to a media circus.