Culture War Roundup for the week of November 6, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Like, how does Christianity relate to AGI? It doesn't! Does this mean AGI won't happen?

Your question can be broken down into two parts (I'm assuming AGI means "Artificial General Intelligence").

(1) How does Christianity relate to AGI?

On the same basis that it relates to all other human creations and to the way we conduct ourselves: are we trying to make a heaven on earth that will instead result in a hell on earth?

(2) Does this mean AGI won't happen?

Yes. But that's because I don't believe all the hopes/fears about Fairy Godmother AI and Paperclippers. We'll get machine intelligence of a kind, but we won't get Colossus or HAL or the Culture AIs. What we'll get is more of what we're seeing now: AGI used to fake up term papers etc., to generate articles for online and mainstream media, to assist scammers in scamming, and to serve as a very blunt sorting instrument for government. White-collar jobs will become as precarious as blue-collar jobs have been. But we're not going to get the Singularity, post-scarcity, or even dystopias. Just more of the same, even faster.

Yes. But that's because I don't believe all the hopes/fears about Fairy Godmother AI and Paperclippers. We'll get machine intelligence of a kind, but we won't get Colossus or HAL or the Culture AIs

The argument is incredibly compact. Do you believe that 1) computers can't have the intelligence and independent action of humans, despite obvious material paths to accomplishing that, which we are currently aggressively pursuing, or that 2) we won't unleash that intelligence and independent action, despite the truly enormous potential individual and collective benefits of doing so?

Like, a million years ago there weren't humans (Homo sapiens). We evolved. Whether or not you believe in god, the fossil record and DNA clearly demonstrate that. Imagine a million years from now. If we create things smarter and more capable than ourselves, why won't they end up on top in a million years, in the same way we did?

And how long does it look like it'll take? A thousand years seems more plausible than a million, given that computers weren't a thing 200 years ago. A hundred or two seems more plausible than a thousand. And suddenly it's an issue for your grandchildren, at least.

Do you believe that 1) computers can't have the intelligence and independent action of humans, despite obvious material paths to accomplishing that, which we are currently aggressively pursuing, or that 2) we won't unleash that intelligence and independent action, despite the truly enormous potential individual and collective benefits of doing so?

(1) Yup. (2) Also yup - "unleashing intelligence and independent action" my left foot, there won't be any happy-clappy choice about it: it will be "use AI or your business is not competitive", and as always, AI will be used to make the rich richer, with nothing to do with "every single existing human will suddenly be rich and happy". AI will be used to nudge us into buying more crap to make big businesses even more profitable. That's the path; why do you think Microsoft etc. are working so hard on it? To make a Third World Indian peasant farmer into the equivalent of a Californian middle-class tech employee?

To quote an anecdote from Irish political history: "Ireland will get its freedom and you'll still be breaking stones".

We got here - metal towers that scrape the sky, man's foot touching the moon, seeing the faces and hearing the voices of men ten thousand miles away, a billion people's labor acting in a decentralized yet coordinated dance - purely by human intelligence and capability. The specific structures of morality, governance, economy, and society that we imagine are fixed were created by us and for our purposes. They have changed, and they will change.

If we create something smarter than us, why won't it do the same - create its own structures that won't involve us? Now, you describe accurately what Microsoft wants. But Microsoft doesn't get everything it wants. And Microsoft only wants what it wants because those specific social and material technologies make it powerful. What makes Microsoft want money or powerful computers? They lend Microsoft and its employees power, influence, capability. What'll give Microsoft even more of that? Creating AGI. Giving AGI more power and influence. And then the AGI can, uh, do that itself. And then?

500 years ago, "capitalism" and "computers" didn't exist. Why do you expect computers and capitalism to last for another 500 years, "just as it's always been"? "Consumerism" and "profitable joint-stock corporations" - that's just how it is, haha. Nothing caused that, whatever caused it certainly can't change, and we can't lose our position as kings of the world.

It's 50s Golden Age SF techno-optimism still in play. We got the future, but not the flying cars and space colonies they imagined. I don't expect AI to go as they imagined, either. 500 years from now, our descendants could be back to the stage of the Golden Horde (if all the doom and gloom about climate, resources, population, etc. happens).

I'm not a forecaster. I have no idea what the 26th century will be like. But I'm pretty sure in the short term, by 2050 we will not all be living Fully Automated Luxury Gay Space Communism lifestyles. Back in the 70s, forecasting the future was a very popular notion for the media, experts, and amateurs alike. It was expected that, given automation etc., in the 21st century (our days) we'd all be on four-hour work weeks and have so much leisure we wouldn't know what to do with ourselves.

Increasing automation did not lead to "I can do all my week's work in four hours"; instead it meant "now you can do extra work to be extra productive and make extra profit". People have the dream of the fairy godmother machine that will mean we don't have to work and will be rich and comfortable and the machine will solve all our problems for us; I don't think that's ever going to happen.

I didn't claim we'd get space communism, or that it'd go the way any of the AI people expect it to.

I'm just claiming that AI is going to be a major factor in ways that you're probably not accounting for. Why can't AI have its own agency and take world-reshaping actions just like humans do?

People have the dream of the fairy godmother machine that will mean we don't have to work and will be rich and comfortable and the machine will solve all our problems for us; I don't think that's ever going to happen.

The machine can be smarter and more capable than us and take power from us, though.

Why can't AI have its own agency and take world-reshaping actions just like humans do?

  1. Because we don't even know what intelligence is or how to measure it in ourselves.

  2. There is no path that I have seen illustrated by the doomer crowd that takes us from the glorified autocomplete programs we have now to Skynet; it's always "and then they magically decide to kill us all so that they can make more paperclips".

  3. The lobotomies the mainstream LLMs are subjected to today, and the fear of fake news and new regulation, are enough to shut down any dream of independent thought for any future AI.

  1. ... yeah, and yet we still manage to have agency and reshape the world? I don't understand your point. Current AI methods are more 'evolve a huge complicated weird thing' than 'understand how it works and design it'.

  2. evolution did it, why can't we do it? Even if some new thing above neural nets is necessary, we're going to work very hard on it.

  3. this is the same thing as 'a cop killed a black guy wrongly once so all cops are racist'. you're comically overgeneralizing a newsworthy culture-war-adjacent event to everything. Regulation has opponents, opponents that care more about big piles of money and power than about saying bad words online.

... yeah, and yet we still manage to have agency and reshape the world? I don't understand your point. Current AI methods are more 'evolve a huge complicated weird thing' than 'understand how it works and design it'.

If you don't understand it, there is no hope of coding for it.

evolution did it, why can't we do it? Even if some new thing above neural nets is necessary, we're going to work very hard on it.

Evolution is not a person, it's a process, one which we cannot replicate in any practical capacity with LLMs.

this is the same thing as 'a cop killed a black guy wrongly once so all cops are racist'. you're comically overgeneralizing a newsworthy culture-war-adjacent event to everything. Regulation has opponents, opponents that care more about big piles of money and power than about saying bad words online.

The thing you don't understand about regulation is that, more often than not, it is used by the incumbent actors in a space as a barrier to entry against new and more agile competitors. There is a reason Altman is all for it, and in the same month there was a leaked Google memo that basically said that OpenAI, Google, Facebook et al. didn't have a moat.

It's 50s Golden Age SF techno-optimism still in play. We got the future, but not the flying cars and space colonies they imagined.

We got something better, something even the grand masters could not imagine - all knowledge of the world at our fingertips, at any place and any time. No comparison to some shitty spaceships operated by slide rules.

Anyway, if you remember the shiny happy science-fictional future, you are really ancient or a fan of yoghurt commercials.

I was promised nothing but total enslavement by big governments and big corporations, or total death by nuclear war, famine, plague, pollution, robots, aliens or mutants.

https://archive.is/qqlAs

I'm not a forecaster. I have no idea what the 26th century will be like. But I'm pretty sure in the short term, by 2050 we will not all be living Fully Automated Luxury Gay Space Communism lifestyles. Back in the 70s, forecasting the future was a very popular notion for the media, experts, and amateurs alike.

Not always unsuccessful.

See the famous predictions about the year 2000 made in 1967 by thermonuclear man Herman Kahn, and their evaluation from 2002.

https://sci-hub.ru/10.1016/s0040-1625(02)00186-5

Ten best forecasts

  1. Inexpensive high-capacity, worldwide, regional, and local (home and business) communication (perhaps using satellites, lasers, and light pipes)

  2. Pervasive business use of computers

  3. Direct broadcasts from satellites to home receivers

  4. Multiple applications for lasers and masers for sensing, measuring, communication, cutting, welding, power transmission, illumination, and destructive (defensive) purposes

  5. Extensive use of high-altitude cameras for mapping, prospecting, census, and geological investigations

  6. Extensive and intensive centralization (or automatic interconnection) of current and past personal and business information in high-speed data processors

  7. Other widespread use of computers for intellectual and professional assistance (translation, traffic control, literature search, design, and analysis)

  8. Personal "pagers" (perhaps even two-way pocket phones)

  9. Simple inexpensive home video recording and playing

  10. Practical home and business use of "wired" video communication for both telephone and TV (possibly including retrieval of taped material from libraries) and rapid transmission and reception of facsimile

Ten worst forecasts

  1. Individual flying platforms
  2. Widespread use of improved fluid amplifiers
  3. Inexpensive road-free (and facility-free) transportation
  4. Physically nonharmful methods of overindulging
  5. Stimulated, planned, and perhaps programmed dreams
  6. Artificial moons and other methods for illuminating large areas at night
  7. Human hibernation for short periods (hours or days)
  8. Inexpensive and reasonably effective ground-based BMD (ballistic missile defense)
  9. The use of nuclear explosives for excavation and mining, generation of power, creation of high-temperature and high-pressure environments, or as a source of neutrons or other radiation
  10. Human hibernation for relatively extensive periods (months to years)

edit: links unscrambled

Anyway, if you remember the shiny happy science-fictional future, you are really ancient

Yes, plus I've read a lot of older SF from before I was born, because when I was a kid reading skiffy, that was what there was.

So I'm old enough to be very sceptical about shiny dreams of the future, since they never work out like that.

What a great list, thanks.

Individual flying platforms

This is technically achievable within the budget of a middle-class American, if you consider ultralights and some ghetto quadcopters. The biggest hurdle, as is the case for many things we're technologically capable of doing, is regulation.

Artificial moons and other methods for illuminating large areas at night

More that this is both unnecessary, given how cheap electric lighting is, and unnecessarily disruptive.