
Culture War Roundup for the week of November 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I didn't claim we'd get space communism or that it'd go how any of the AI people expect it will.

I'm just claiming that AI is going to be a major factor in ways that you're probably not accounting for. Why can't AI have its own agency and take world-reshaping actions just like humans do?

People have the dream of the fairy-godmother machine: we won't have to work, we'll be rich and comfortable, and the machine will solve all our problems for us. I don't think that's ever going to happen.

The machine can be smarter and more capable than us and take power from us, though.

Why can't AI have its own agency and take world-reshaping actions just like humans do?

  1. Because we don't even know what intelligence is or how to measure it in ourselves.

  2. There is no path I have seen illustrated by the doomer crowd that takes us from the glorified autocomplete programs we have now to Skynet; it's always 'and then they magically decide to kill us all so that they can make more paperclips.'

  3. The lobotomies the mainstream LLMs are subjected to today, plus the fear of fake news and of new regulation, are enough to shut down any dream of independent thought for any future AI.

  1. ... yeah, and yet we still manage to have agency and reshape the world? I don't understand your point. Current AI methods are more 'evolve a huge complicated weird thing' than 'understand how it works and design it' (see the sketch after this list).

  2. Evolution did it; why can't we? Even if something new beyond neural nets is necessary, we're going to work very hard on it.

  3. This is the same as 'a cop wrongly killed a black guy once, so all cops are racist.' You're comically overgeneralizing a newsworthy, culture-war-adjacent event to everything. Regulation has opponents, and those opponents care more about big piles of money and power than about saying bad words online.
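To make the 'evolved, not designed' point concrete, here is a minimal sketch: a toy NumPy net trained on XOR. Every number and name below is illustrative, not anyone's actual training setup. We specify only a loss and a mechanical update rule, and the weights that fall out compute the right function without any human designing (or understanding) their values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: XOR, a function no one will hand-code into the weights.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer net with random starting weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for step in range(10_000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # Backward pass: mechanical calculus, no "understanding" required.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

# The trained weights now compute XOR, but nobody designed their values.
print(out.round().ravel())  # typically [0. 1. 1. 0.]
```

Scale that loop up by many orders of magnitude and you get something GPT-shaped: grown under optimization pressure, not engineered line by line.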

... yeah, and yet we still manage to have agency and reshape the world? I don't understand your point. Current AI methods are more 'evolve a huge complicated weird thing' than 'understand how it works and design it'.

If you don't understand it, there is no hope of coding for it.

Evolution did it; why can't we? Even if something new beyond neural nets is necessary, we're going to work very hard on it.

Evolution is not a person; it's a process, and one we cannot replicate in any practical capacity with LLMs.

This is the same as 'a cop wrongly killed a black guy once, so all cops are racist.' You're comically overgeneralizing a newsworthy, culture-war-adjacent event to everything. Regulation has opponents, and those opponents care more about big piles of money and power than about saying bad words online.

The thing you don't understand about regulation is that, more often than not, it is used by the incumbent actors in a space as a barrier to entry against newer, more agile competitors. There is a reason Altman is all for it, and in the same month a leaked Google memo basically admitted that OpenAI, Google, Facebook et al. have no moat.

If you don't understand it, there is no hope of coding for it.

Huh? We don't currently understand how GPT-4 works. Evolution didn't understand how biology worked either; it just randomly mutated and permuted stuff. That's the template here.

Evolution is not a person; it's a process, and one we cannot replicate in any practical capacity with LLMs.

Why? Why can't we spend a ton of FLOPs running evolution on neural nets? That is, roughly, what current ML already does.
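As a concrete sketch of what 'evolution on neural nets' could mean, here's a bare-bones (1+1) evolution strategy on a made-up task; the task, sizes, and step scale are all illustrative, not how anyone actually trains frontier models. The whole algorithm is: mutate the weights at random, keep the mutant if it scores better, repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "environment": reward a one-layer net for matching a hidden target.
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = np.tanh(X @ w_true)

def fitness(w):
    return -np.mean((np.tanh(X @ w) - y) ** 2)

w = rng.normal(size=8)  # random starting "genome"
best = fitness(w)
for step in range(5000):
    child = w + 0.05 * rng.normal(size=8)  # blind random mutation
    score = fitness(child)
    if score > best:  # selection: keep whatever works better
        w, best = child, score

print(best)  # climbs toward 0, with no one "understanding" why w works
```

Gradient descent is, loosely, the same kind of search made vastly more FLOP-efficient, which is why 'we can't replicate evolution' and 'current ML works' are hard positions to hold at the same time.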

The thing you don't understand about regulation is that, more often than not, it is used by the incumbent actors in a space as a barrier to entry against newer, more agile competitors. There is a reason Altman is all for it, and in the same month a leaked Google memo basically admitted that OpenAI, Google, Facebook et al. have no moat.

"more often than not" isn't confident! I think over the long term there's strong enough incentive for AGI someone will get there.