Culture War Roundup for the week of March 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

I wish to register a prediction that this is not going to alter our lives in any substantial negative way, or result in a singularity-type event. From the outside view, past predictions of doom and utopia have a terrible track record, and that’s good enough for me. I’m too lazy (or worse) for the inside view and stopping it is impossible anyway, so there you go. Prepare to lose to the most boring heuristic, eggheads.

You wouldn't make a good trader with that heuristic. Sure, "nothing will happen" might be the most likely outcome. But if something does happen, it could be huge. In financial terms, you are "picking up pennies in front of steamrollers": making bets with a high probability of a small payoff. That type of trader tends to get blown up by a single trade gone wrong.
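
To make that concrete, here is a minimal sketch with made-up numbers (none of them come from the thread) showing how a bet that wins 99.9% of the time can still lose money on average:

```python
# Toy numbers (all assumed) for a "pennies in front of a steamroller" trade:
# it pays off a tiny amount almost every time, but the rare blow-up dominates.

p_win = 0.999          # assumed probability the trade works
small_gain = 0.01      # assumed payoff when it works: a "penny"
big_loss = 100.0       # assumed loss when the steamroller arrives

expected_value = p_win * small_gain - (1 - p_win) * big_loss
print(f"Expected value per trade: {expected_value:+.4f}")  # about -0.09, despite a 99.9% win rate
```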

If AI "only" has a 10% chance of causing massive disruptions in the next 5 years that's surely worth talking about. If anything, it's underhyped. Most normies who are still saying "AI will never be able to X" about things that AI can already do.

Normies' views are not at stake. This is a response to people here, the most extreme of whom view a catastrophic outcome as a virtual certainty and despair. If you think there's less than a 50% chance of major negative disruption, it's not about you. In the standard picking-up-dollars-in-front-of-a-steamroller examples like LTCM, everyone usually understands that the low-probability event is in fact low probability, and I don't think that's the case here. One loss can wipe out lots of wins because the odds the bookie (correctly) gives are terrible. But if a player could have gotten even odds on his dollar for every doomist/utopian prediction in history while Kelly betting responsibly, he would be a very rich man.
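
As a rough sketch of that last claim (every number below is an assumption for illustration, not historical data): at even odds the Kelly fraction is 2p − 1, and compounding it over a run of mostly-failed doom predictions grows the bankroll enormously.

```python
# A rough sketch of the "even odds on every doom prediction" claim.
# All numbers here are assumptions for illustration, not historical data.

p = 0.95                # assumed chance that any given doom/utopia prediction fails
f_star = 2 * p - 1      # Kelly fraction at even odds: bet ~90% of the bankroll each time

n_predictions = 50      # assumed number of such predictions you got to bet against
wins = round(n_predictions * p)
losses = n_predictions - wins

# With a fixed fraction bet each time, only the win/loss counts matter,
# not the order in which they occur.
bankroll = (1 + f_star) ** wins * (1 - f_star) ** losses
print(f"Kelly fraction: {f_star:.2f}, final bankroll: {bankroll:.2e}x the starting stake")
```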

'Nothing ever happens' is usually a pretty good maxim to live by, and our reluctance to actually take it into account when making predictions stems from an inherent aversion to that very fact: nothingness is a very boring prediction. Our entire beings scream out against it as much as they do against boredom, and with similarly good reason: inaction and nothingness can never produce anything of worth, whereas errors, especially when not overly concerned with the continued existence of the body they spring from, can on occasion be extremely productive. Trying gets you somewhere in a way that apathy simply can't; it's just that the failed triers aren't the ones who get to see or benefit from the few successes.

And so it is here. The difference is that things do occasionally happen, and from the historical perspective, earning the epithet of 'thing' at all means they were noteworthy enough to be memorialised. One of the great advances of the modern world is a catalogue of data rich enough to let us see which surrounding factors did or did not contribute to producing that noteworthy 'thing', as well as what became of the positive and negative predictions made during the chaotic, unordered times that always precede the creation of anything of lasting importance.

'Nothing ever happens' is a good, historically proven heuristic: most things come to nothing. But when something does happen, it has to happen with enough force to overcome the weight of possibilities working against its happening at all, and so it tends to be far more impactful than anyone predicted. Anthropic bias also works against our heuristics here: we don't usually think about, have a historical record of, or have any use for predicting humanity-ending cataclysms, so if you try to predict one, going by the 'historical record' will necessarily condemn you to failure. Personally, I'm quite scared. Fortunately, if it does happen, it will only belie the promises of modern technologists and send us back to our former existential quandaries. Death is inevitable for us all, and a great many people were hoping to be the first to avoid that particular difficulty.

but the economy will still basically look like it does now.

I imagine people thought this way after the dot-com bubble, too...

My guess is that we get some very impressive chat bots that might eventually replace some jobs, but the economy will still basically look like it does now.

What is your track record of predicting AI developments? So far I have consistently underestimated the speed and potency of the technology. So while I agree with you ... I think there is a high chance I may be wrong.

Hmm, how would you define "substantial" here? I'm also intensely skeptical of a Singularity or other fundamental change in the human condition, but I find it very plausible that LLMs could destroy the pseudonymous internet as we know it, by turning it into a spambot hell devoid of useful information. (I'm imagining all sorts of silly stuff like people returning to handwritten letters as a signal of authenticity.) Life would move on, but I'd certainly mourn the loss of the modern internet, for all its faults.

I'd turn that into a bet if you're interested. Do you hold crypto? Something along the lines of "major debate topic in 2024" might do, but I'm open to suggestions.

I'd be willing to bet you $1M that AI won't destroy the world and all human life on it. If it doesn't, you can donate the winnings to a charity of your choice. And if it does, your call as to how you want to collect.

I'm obviously not interested in a wager I can't collect the winnings in.

...you should add 'inflation adjusted'.

A million dollars isn't what it used to be.

Back in the 1940s, a cutting-edge strategic bomber cost under a million dollars.

I raise the stakes and will bet @aqouta $2 million (inflation adjusted) that AI won't destroy the world and all human life in it.

In fact, I will even bet that it doesn't destroy merely me.

Do I sound like someone who holds crypto? Major debate topics include the most irrelevant events imaginable. Two years out, it should be obvious what happened. An effortpost in the style of ilforte on how wrong you were will do. And if I see you in paradise/hell, I'll sing you a song/lull you with my screams.

I'll register a prediction that two years from now we are in the middle of an extremely similar debate. Maybe that means I'm on your side, but I think the overwhelming likelihood is that we'll be saying, "Yes, LLMs by themselves didn't create the singularity / fundamentally alter the world, but when you combine them with the latest revolutionary technique from OpenAI there's no doubt it will happen very soon."

Or alternatively, "LLMs are changing the world and it's just taking employment indicators etc. a while to catch up."

In any case, I doubt that the debate will feel settled in any way at that point.