Culture War Roundup for the week of September 23, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I have considered it

I'm only going to evaluate the implications of ... products they actually have

It seems like you have not, in fact, considered the possibility of models improving. Is this the meme where some people literally can't evaluate hypotheticals? Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?

I certainly have the ability to evaluate hypotheticals. Where I get off the train is when people treat these hypotheticals as though they're short-term inevitabilities. You can take any technology and argue that improvements will bring some kind of society-disrupting change in the next few decades that we have to prepare for, but that doesn't mean it will happen, and it doesn't mean we should invest significant resources into dealing with the hypothetical disruption caused by non-existent technology.

The best, most recent example is self-driving cars. In 2016 it seemed like we were tantalizingly close to a world where self-driving cars were commonplace. I remember people arguing that young children probably wouldn't ever have driver's licenses because autonomous vehicles would completely dominate the roads by the time they were old enough to drive. Now here we are, almost a decade later, and that reality seems further away than it did in 2016. The promised improvements never came, high-profile crashes sapped consumer confidence, and the big players either pulled out of the market or scaled back considerably. Eight years later we have yet to see a single consumer product that promises a fully autonomous experience to the point where you can sleep or read the paper while driving. There are a few hire-car services that offer autonomous options, but these are almost novelties at this point; their limitations are well documented, and they're only used by people who don't much care whether they actually reach their destination.

In 2015 there was some local primary candidate who was running on a platform of putting rules in place to help with the transition to autonomous heavy trucking. These days, it would seem absurd for a politician to invest so much energy in such a concern. Yes, you have to consider hypotheticals. But those come with any new piece of technology. The problem I have is when, with every incremental advancement, these hypotheticals are treated as though they were inevitabilities.

Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?

I'm a lawyer, and people here have repeatedly said that LLMs were going to make my job obsolete within the next few years. I doubt these people have any idea what lawyers actually do, because I can't think of a single task that AI could replace.

In 2016 it seemed like we were tantalizingly close to a world where self-driving cars were commonplace. I remember people arguing that young children probably wouldn't ever have driver's licenses because autonomous vehicles would completely dominate the roads by the time they were old enough to drive. Now here we are, almost a decade later, and this reality seems further away than it did in 2016.

You can order a self-driving taxi in SF right now, though.

I agree it's not a foregone conclusion. I guess I'm hoping you'll either give an argument for why you think it's unlikely, even though tens of billions of dollars and lots of top talent are being poured into it, or actually consider the hypothetical.

I can't think of a single task that AI could replace.

Even if it worked??

self-driving cars are here but only in some places and with some limitations, they're just a novelty

So they're here? Baidu has been producing and selling robotaxis for years now; they don't even have a steering wheel. People were even complaining the other day when they got into a traffic jam (some wanting to leave and others arriving).

They've sold millions of rides, they clearly deliver people to their destinations.

I can't think of a single task that AI could replace

Drafting contracts? Translating legal text into human-readable format? There are dozens of companies selling this stuff. Legal work is like writing in that it's enormously diverse: there are many writers who are hard to replace with machinery, and others who have already lost their jobs.