
Culture War Roundup for the week of December 16, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Alternatively, it will never feel obvious. Although people will have access to increasingly powerful AI, they will never feel as if AGI has been reached, because AI will not be autoagentic; as long as people feel like they are using a tool instead of working with a peer, they will keep arguing about whether or not AGI has been reached, regardless of the actual intelligence and capabilities on display.

(This isn't so much a prediction as an alternative possibility to consider, mind you!)

because AI will not be autoagentic

Even in this scenario, AI might get so high level that it will feel autoagentic.

For example, right now I ask ChatGPT to write a function for me. Next year, a whole module. Then, in 2026, it writes an entire app. I could continue by asking it to register an LLC, draft a business plan, build the app, and sell it on the App Store. But why stop there? Why not just say, "Hey ChatGPT, go make some money and put it in my account"?
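
To make that concrete, here's a minimal sketch using the official openai Python client (the model name and both prompts are just placeholder assumptions on my part, not anyone's actual setup). The point it illustrates: whether the task is "write a function" or "go make money", the call has exactly the same shape; only the abstraction level of the prompt changes.

    # Sketch: the API call is the same shape no matter how high-level
    # the instruction is; only the prompt changes.
    # Assumes the official `openai` package; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def delegate(task: str) -> str:
        """Hand an arbitrary task to the model and return its reply."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": task}],
        )
        return response.choices[0].message.content

    # 2024: a function. 2025: a module. 2026: an app. And then...
    print(delegate("Write a Python function that deduplicates a list."))
    print(delegate("Go make some money and put it in my account."))

Nothing about the interface distinguishes the two requests; the difference is entirely in how much the model has to do on its own to satisfy them.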

At this point, even though a human is ultimately giving the command, the command is so high-level that it will feel as if the AI is agentic.

And, obviously, guardrails will prevent a lot of this. But there are now several organizations shipping frontier foundation models. Off the top of my head: OpenAI (ChatGPT), xAI (Grok), Anthropic (Claude), Meta (Llama), and Alibaba (Qwen). It doesn't seem out of the realm of possibility that a company with funding on the order of $100 million could repurpose one of these models and strip out the guardrails.

(Also just total speculation on my part!)

Even in this scenario, AI might get so high level that it will feel autoagentic.

Yes, I think this is quite possible. Particularly since more and more human interaction is mediated online, AI will feel closer to "a person", since you will experience it in basically the same way. Unless it loops around so that highly agentic AI does all of our online work, and we spend all our time hanging out with our friends and family...