
Culture War Roundup for the week of March 17, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Hot on the heels of Google's image generation/editing release, xAI has quietly released their own image editing functionality. Of course it's dogshit, and every image is covered in crunchy tokenizer artifacts, but at least it doesn't ban anime.

What's interesting is that OpenAI, Meta, and xAI have all been sitting on this functionality since last year, letting Google beat them to the punch, and xAI's public release isn't improved at all from what they teased months ago. Yet once Google dropped their model, xAI went public only a week later.

It seems like the big AI companies are deathly terrified of releasing anything new at all, and are happy to just sit around for months or even years on shiny new tech, waiting for someone else to do it first. I remember reading that Google had internally achieved something akin to ChatGPT and just did nothing with it. Then, once something does come out, it's a race to the bottom in the safety and censorship lane, with everyone pushing incremental improvements. At least right now most of the cutting edge is happening out in the open, published by researchers, so nobody can build a moat or a big technical lead, and the players are left chasing percentage points on the margins.

Anyways, here's to the deluge of next-gen image (and possibly other multimodal) models about to be unleashed on the world. It's about time.

Edit:

I'm pleased to say that the anime ban on Gemini has been lifted; I just tested it today.

It seems like the big AI companies are deathly terrified of releasing anything new at all, and are happy to just sit around for months or even years on shiny new tech, waiting for someone else to do it first.

Surprised you didn't mention Sora here. The Sora demo reel blew everyone's minds ... but then OpenAI sat on it for months, and by the time they actually released a scaled-down version to users, there were viable video generation alternatives out there. As much as it annoys me, though, I don't entirely blame them. Releasing an insufficiently-safecrippled video generator might be a company-ending mistake in today's culture, and that part isn't their fault.

As a member of the grubby gross masses who Cannot Be Trusted with AI tech, I've been pretty heartened that, thus far, all you need to do to finally get access to these tools has been to wait a year for them to hit open source. Then you'll just need to ignore the NEW shiny thing that you Can't Be Trusted with. (It's like with videogames - playing everything a year behind, when it's on sale or free - and patched - is so much cheaper than buying every new game at release...)

You are noticing that none of these companies want to race. The whole competition to build Sand God is largely kayfabe. The Western AI scene is not really a market; it's a highly inefficient cartel (with massive state connections too) that builds up enormous capacity but drags its feet on products, because none of them ultimately believe their business models are sustainable under rapid commoditization. This is why DeepSeek was such a disruption: not only was it absurdly cheap (current estimates put their annual operations cost at around $200M), not only were they Chinese, but they dared to actively work to drive the cost of frontier capabilities to zero and make them logistically mundane, in alignment with Liang Wenfeng's personal aesthetic and nationalist preferences.

I think R1's release has sped up every Western frontier lab by 20-50%, simply by denying them the warm feeling that they can feed the user base slop about hidden wonder weapons in their basements, release incremental updates bit by bit, and focus on sales. Now we are beginning to see a bit more of their actual power level (still disappointingly low; I don't think a single one of these companies could have plausibly made R1 on that cluster).

It seems like the big AI companies are deathly terrified of releasing anything new at all, and are happy to just sit around for months or even years on shiny new tech, waiting for someone else to do it first. I remember reading that Google had internally achieved something akin to ChatGPT and just did nothing with it. Then, once something does come out, it's a race to the bottom in the safety and censorship lane, with everyone pushing incremental improvements. At least right now most of the cutting edge is happening out in the open, published by researchers, so nobody can build a moat or a big technical lead, and the players are left chasing percentage points on the margins.

Is this really accurate? Because OpenAI's o1 was only fully released in early December of last year, just over three months ago. Google's advanced image generation capability was just released this month. Are they not considered Big AI Companies? The speed of AI advancement is actually very fast, and I don't see good evidence that companies are deathly afraid of releasing anything new. Perhaps xAI didn't release what they teased months ago because they failed to improve its functionality into something that makes for a good product?

Also, what do you mean by "it's a race to the bottom in the safety and censorship lane"? If you mean they are becoming less censored, then that's true. Grok is largely uncensored, both OpenAI and Anthropic are making their flagship LLMs less censored, and you can turn off Gemini's safety filters in Google's AI Studio.
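
For what it's worth, the same switches AI Studio gives you are exposed in the API too. A minimal sketch using the google-generativeai Python SDK (the model name and prompt are just placeholders, and note this only relaxes the adjustable per-category thresholds, not whatever Google hard-blocks server-side):

    import google.generativeai as genai
    from google.generativeai.types import HarmCategory, HarmBlockThreshold

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    # Set every adjustable category to the most permissive tier.
    relaxed = {
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    }

    model = genai.GenerativeModel("gemini-1.5-pro", safety_settings=relaxed)
    print(model.generate_content("hello").text)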

I don't see good evidence that companies are deathly afraid of releasing anything new.

Here's when the AI labs told the public they had advanced image editing and generation.

Perhaps xAI didn't release what they teased months ago because they failed to improve its functionality into something that makes for a good product?

If you're going to release some half-baked slop anyways, better to do it while it's still fresh and novel rather than when it's old, boring, and second-rate.

you can turn off Gemini's safety filters in Google's AI Studio.

~~Doesn't~~ Didn't apply to the anime ban.

Indeed, the tweets and paper from OpenAI and Meta are evidence that they had image generation capabilities, but not of whether those capabilities were, in practice, cool and shiny or more of the same. xAI claimed they had image editing in December, but according to you[1], it sucked then, and it still sucks now in the public release. That's more consistent with them having something unfinished in December that they wanted to take their time improving and productionizing, but were forced to release now because Google released theirs. If it sucked then and it sucks now, they never had anything cool and shiny to hold back.

~~Doesn't~~ Didn't apply to the anime ban.

Yup, I just tested it too. When "doesn't" becomes "didn't", I think that's a point in favor of models becoming less censored rather than the other way around.


[1]: It's not a dig at you, by the way; I didn't follow the xAI developments back then, so I'll take your word for it.