
Culture War Roundup for the week of July 24, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Well, sure. If you don't know everything else about the person you're looking at. But the goal is to be able to know everything about each person you're looking at, so that you don't have to make non-causal inferences. Not just for equity, but because non-causal inferences make you wrong more often.

And the counterargument to what I just said is that it's hard to do that, right? But ultimately I'm right, right? We'd be right more often if we had the resources to just see more layers of the life of the black dude in the alley.

So I'ma go make a social inference AI and get back to you.

I think "presume we still lack access to an AI with near-omniscience in the realm of [x]" is implicit in most discussions. Once your social inference AI exists and we have free access to it, the entire social landscape would transform so much that it's hard to even predict what problems might exist, much less how we'd fix them. How to treat race in that landscape is a very different question than how to treat race in the landscape in which we find ourselves. And without the ability to instantly or even quickly switch from one landscape to the other, we still have to figure out how to interact with each other in this current crude landscape where we lack access to this social inference AI.

"the entire social landscape would transform so much that it's hard to even predict what problems might exist, much less how we'd fix them."

Actually, coming back to this, I would like to get your thoughts. I believe in myself and my ability to make this AI. It will be very tough to get it to the point where it can see a face in a hoodie at night on a dark street and tell you about that guy, but I expect to be able to do a fairly comprehensive background/shared-values/personal-info/interaction-styles/preferences check on all internet figures with consistent usernames who frequently post in servers/sites/subs I visit by year's end.
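
To make that concrete, here's a minimal sketch of the pipeline I have in mind, assuming the posts are already scraped per site and with `summarize` standing in for whatever LLM call I end up using (all names here are hypothetical, not a finished implementation):

```python
# Minimal sketch of the profiling pipeline described above.
# Assumptions (hypothetical): posts are already collected elsewhere,
# and `summarize` is a placeholder for the actual LLM step.

from dataclasses import dataclass, field


@dataclass
class Profile:
    username: str
    sites: set[str] = field(default_factory=set)
    posts: list[str] = field(default_factory=list)

    def add_post(self, site: str, text: str) -> None:
        self.sites.add(site)
        self.posts.append(text)


def summarize(posts: list[str]) -> str:
    """Placeholder for the LLM step: background / shared-values /
    interaction-style inference from a user's post history."""
    # A real implementation would prompt an LLM with the post corpus.
    return f"{len(posts)} posts analyzed (LLM summary goes here)"


def build_profiles(raw: list[tuple[str, str, str]]) -> dict[str, Profile]:
    """raw is (username, site, post_text) triples, keyed on the
    assumption that a consistent username means the same person
    across sites."""
    profiles: dict[str, Profile] = {}
    for username, site, text in raw:
        profiles.setdefault(username, Profile(username)).add_post(site, text)
    return profiles


if __name__ == "__main__":
    scraped = [
        ("alice", "somesub", "I think trains are underrated."),
        ("alice", "someforum", "Urbanism posting again..."),
    ]
    for p in build_profiles(scraped).values():
        print(p.username, sorted(p.sites), summarize(p.posts))
```

The shakiest assumption is the keying step: a consistent username across sites is evidence of the same person, not proof, so any real version would need to treat identity matches as probabilistic.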

What I want your thoughts on is this: I know you literally just said "it's hard to even predict what problems might exist, much less how we'd fix them." But this is important. If I succeed, I need to be aware of the disruption I'd cause by distributing this to anyone who wants it and making it simple and easy to one-click install on a desktop and query via an LLM...

I want to know what problems you think I'll be creating as I move forward. I want to be able to solve issues I help create by spreading this level of social awareness. Brainstormed hypotheses are fine here if you don't have strong predictions. It's all worth at least considering, especially as I begin to automate the consideration and processing of such possibilities.

Yes. Very understandable. I will not begrudge you that. I am going to keep sitting in my privileged small town, never walking down streets at night, and making my social inference AI, though. You keep doing you, but heads up.