
Culture War Roundup for the week of January 20, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Manhattan Institute, for example. He's very much against Israelis. Really admires Tony Blair. Calls him 'the Dark Lord' and says his political acumen and ability to exercise power are pretty much unparalleled in the modern West.

Anyway, he predicted that this faction of the US 'right', the 'techno-zionists', is likely to win out over the globalist technocrats (the WEF, etc.) and the third-world lunatics (Soros, etc.). He believes the US under this new leadership will challenge China and build a new regime, one he dubs 'the Rufo Reich'.

I don't think he's far wrong; this is the likeliest outcome, although of course it's doubtful the US can stand against China, and it all comes down to how implementations of AI change politics. I don't believe he pays much attention to AI, even though it could change everything.

Sad.

Really admires Tony Blair. Calls him 'the Dark Lord' and says his political acumen and ability to exercise power are pretty much unparalleled in the modern West.

Does he cite many examples of this allegedly unparalleled ability and political acumen?

Look at his career. Look at budget or influence of TBI.

Yes, and?

TBI looks to be a fairly ordinary globohomo think tank. If he's unparalleled, how do Clinton, Soros, and Obama have similar or larger foundations?

On the surface. They're massively profitable, apparently.

AA has a video on it.

Blair is an arch-grifter, but so are the Clintons; both made hundreds of millions after leaving office. Blair's prime ambition in life was to become the first President of the EU (this is attested by many who were around him), and he failed at it, failing even to get that role established.

...but grift is the essence of democratic politics.

I was saying he was great at them.

Yeah, there's a large strain of thought on the far right that AI is basically a meme concocted by soyboy WEF types: that it's massively overhyped and, if anything, might just end up hurting the email-class. Of course it's not unique to them; lots of smart people in the current era saw GPT-3.5, went 'this is what AI is, ok, moving on', and think it's a bubble.

It is quite literally true that AI is going to be mostly bad for low skill white collar workers and barely affect wrench turners.

Right now, AI writes better than you. Check for yourself: go talk to DeepSeek-R1 and prompt it with Dasein's posts or something complex.

The same model scores around the 93rd to 96th percentile on Codeforces. The context window is around 100k tokens.

It all points to a lot of people getting replaced.

The Chinese are training robots to replace wrench-turners using the same algorithms that trained the LLMs.

It's not quite so simple. The 'real' context window length (i.e. the part of the context that meaningfully affects output) for all models I've tested is approximately 10k tokens. They no longer spaz out and start producing 'gggggggggggg' if you give them a longer context, but for the vast majority of tasks* the rest of those tokens are wasted. As a consequence, they aren't good for any task where a meaningful amount of the information relevant to producing the final output sits outside that effective window.
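To make the 'effective window' claim concrete, here's a toy probe in the needle-in-a-haystack style. Everything here is hypothetical: the 'model' is a stub that only attends to its first 10k tokens, standing in for the observed behavior, not any real API.

```python
# Toy needle-in-a-haystack probe: bury a marker at varying depths in filler
# text and check whether a stand-in model, which only "sees" the first
# EFFECTIVE_WINDOW tokens of its context, can recover it.

EFFECTIVE_WINDOW = 10_000  # tokens the stand-in model actually attends to

def build_prompt(total_tokens: int, needle_depth: float) -> list[str]:
    """Return a token list with the needle inserted at the given depth (0-1)."""
    filler = ["lorem"] * total_tokens
    pos = int(needle_depth * total_tokens)
    filler[pos:pos] = ["NEEDLE-7731"]  # hypothetical secret to retrieve
    return filler

def toy_model_recalls(tokens: list[str]) -> bool:
    """Stand-in model: can only recall facts inside its effective window."""
    return "NEEDLE-7731" in tokens[:EFFECTIVE_WINDOW]

# Sweep depths across a nominally 100k-token context.
results = {d: toy_model_recalls(build_prompt(100_000, d))
           for d in (0.05, 0.5, 0.95)}
print(results)  # only the shallow needle falls inside the effective window
```

A real version would swap the stub for actual model calls and score retrieval accuracy by depth, which is roughly how published long-context evaluations work.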

Fixing this is tricky from a data standpoint, because there isn't a lot of data that is long-form and where all the context for what is written is present in the data. Take the code I wrote yesterday: part of the reason I chose the strategy I did was discussions with colleagues in Slack re: specific business objectives, and that information doesn't exist in the codebase. So training on my codebase doesn't necessarily make you able to determine the relationship between that code and the associated business context. You might be able to do clever things with self-play, like having the model generate potential business contexts and feeding those back into the training data, but that's still mostly hypothetical.
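The self-play idea could be sketched roughly like this. It's a sketch of the data pipeline only: `generate_context` stands in for an LLM call and is a pure stub, and all names here are made up for illustration.

```python
# Sketch of self-play data augmentation: for each code unit, generate a
# plausible business context, then pair them into training examples so the
# missing Slack/requirements signal is (synthetically) present in the data.
# `generate_context` is a stub standing in for a model call.

def generate_context(code: str) -> str:
    """Hypothetical stand-in for a model proposing a business rationale."""
    return f"Plausible objective motivating: {code.splitlines()[0]}"

def make_training_examples(codebase: dict[str, str]) -> list[dict[str, str]]:
    """Pair each source file with a generated context as a training example."""
    return [
        {"context": generate_context(src), "code": src}
        for src in codebase.values()
    ]

repo = {"billing.py": "def apply_discount(order): ...\n"}
examples = make_training_examples(repo)
print(examples[0]["context"])
```

The open question, as noted above, is whether model-generated contexts are faithful enough to teach the real code-to-business-objective relationship rather than a plausible-sounding invention.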

As for replacing wrench turners, that's a lot harder right now. Physics is hard. Hardware is hard. Mapping a constantly changing relationship between sensor inputs, robot dynamics, environment and output is very hard and very far from solved.

*Excluding artificial tasks like needle in the haystack problems.

As for replacing wrench turners, that's a lot harder right now. Physics is hard. Hardware is hard. Mapping a constantly changing relationship between sensor inputs, robot dynamics, environment and output is very hard and very far from solved.

It's not being solved, it's being learned.

That’s what I mean. I have literally done this. It is very very hard even for a toy problem in a structured environment, which is why nobody uses it in the field. That could change: I’ve seen embedding generators for robot actions, for example. But no clear breakthroughs yet.

When? I understand it got a lot better using transformers.

Now what they're also doing is building virtual environments and simulating, which is already faster than real-world RL, and there's some new hardware coming for that, as I understand it.

Anyway, here's some company's promo video.

Before transformers. I'm prepared to believe it could happen, but AFAIK it hasn't, and I'm pretty sure I'm plugged in enough to know if it had. We don't even have reliable touch sensors yet.

Wrench-turners are not safe; see lights-out factories.

"Oh but I install HVAC outside in an unplanned physical world"

Humanoid robots are training in simulation. They're coming. High-skill white-collar workers? They're coming for them too.

There has been automation taking away human jobs, but it rarely looks anything like humanoid robots- it looks like agricultural combines, power looms, other notably not man-shaped things. I’m not saying it won’t happen, I’m saying that, by precedent, it’s not time to start worrying about my job until someone comes up with a better idea than ‘humanoid robots’.

Unless you plan to retire in 5 years or live in the EU, worry.

It's almost like the people benefiting from the great American empire can adapt themselves to whatever the state ideology of the great American empire happens to be, as long as it's compatible with the GAE.