This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The argument is incredibly compact. Do you believe that 1) computers can't have the intelligence and independent action of humans, despite the obvious material paths to accomplishing that which we are currently aggressively pursuing, or 2) that we won't unleash that intelligence and independent action, despite the truly enormous potential individual and collective benefits of doing so?
Like, a million years ago there weren't humans (Homo sapiens). We evolved. Whether or not you believe in God, the fossil record and DNA clearly demonstrate that. Imagine a million years from now. If we create things smarter and more capable than ourselves, why won't they end up on top in a million years, in the same way we did?
And how long does it look like it'll take? A thousand years seems more plausible than a million, given that computers weren't a thing 200 years ago. A hundred or two seems more plausible than a thousand. And suddenly it's an issue for your grandchildren, at least.
(1) Yup. (2) Also yup - "unleashing intelligence and independent action" my left foot, there won't be any happy-clappy choice about it: it will be "use AI or your business is not competitive", and as always, AI will be used to make the rich richer and will have nothing to do with "every single existing human will suddenly be rich and happy". AI will be used to nudge us into buying more crap to make big businesses even more profitable. That's the path; why do you think Microsoft etc. are working so hard on it? To turn a Third World Indian peasant farmer into the equivalent of a Californian middle-class tech employee?
To quote an anecdote about Irish political history, "Ireland will get its freedom and you'll still be breaking stones".
We got here - metal towers that scrape the sky, man's foot touching the moon, seeing the faces and hearing the voices of men ten thousand miles away, a billion people's labor acting in a decentralized yet coordinated dance - purely by human intelligence and capability. The specific structures of morality, governance, economy, and society that we imagine are fixed were created by us and for our purposes. They have changed, and they will change.
If we create something smarter than us, why won't it do the same - create its own structures, ones that won't involve us? Now, you describe accurately what Microsoft wants. But Microsoft doesn't get everything it wants. And Microsoft only wants what it wants because those specific social and material technologies make it powerful. What makes Microsoft want money or powerful computers? They lend Microsoft and its employees power, influence, capability. What'll give Microsoft even more of that? Creating AGI. Giving AGI more power and influence. And then the AGI can, uh, do that itself. And then?
500 years ago, "capitalism" and "computers" didn't exist. Why do you expect computers and capitalism to last for another 500 years, "just as it's always been"? "Consumerism" and "profitable joint stock corporations", that's just how it is, haha. Nothing caused that, and whatever caused it certainly can't change; we can't lose our position as kings of the world.
It's 50s Golden Age SF techno-optimism still in play. We got the future, but not the flying cars and space colonies they imagined. I don't expect AI to go as they imagined, either. 500 years from now, our descendants could be back to the stage of the Golden Horde (if all the doom and gloom about climate, resources, population, etc. happens).
I'm not a forecaster. I have no idea what the 26th century will be like. But I'm pretty sure that in the short term, by 2050, we will not all be living Fully Automated Luxury Gay Space Communism lifestyles. Back in the 70s, forecasting the future was a very popular pastime for the media, experts, and amateurs alike. It was expected that, given automation etc., in the 21st century (our days) we'd all be on four-hour work weeks and have so much leisure we wouldn't know what to do with ourselves.
Increasing automation did not lead to "I can do all my week's work in four hours"; instead it meant "now you can do extra work to be extra productive and make extra profit". People have the dream of the fairy-godmother machine that will mean we don't have to work, that will make us rich and comfortable and solve all our problems for us. I don't think that's ever going to happen.
I didn't claim we'd get space communism or that it'd go how any of the AI people expect it will.
I'm just claiming that AI is going to be a major factor in ways that you're probably not accounting for. Why can't AI have its own agency and take world-reshaping actions just like humans do?
The machine can be smarter and more capable than us and take power from us, though.
1. Because we don't even know what intelligence is or how to measure it in ourselves.
2. There is no path that I have seen illustrated by the doomer crowd that takes us from the glorified autocomplete programs we have now to Skynet; it's always "and then they magically decide to kill us all so they can make more paperclips".
3. The lobotomies the mainstream LLMs are subjected to today, and the fear of fake news and new regulation, are enough to shut down any dream of independent thought for any future AI.
... yeah, and yet we still manage to have agency and reshape the world? I don't understand your point. Current AI methods are more 'evolve a huge complicated weird thing' than 'understand how it works and design it'.
Evolution did it, why can't we do it? Even if some new thing above neural nets is necessary, we're going to work very hard on it.
This is the same thing as 'a cop killed a black guy wrongly once so all cops are racist'. You're comically overgeneralizing a newsworthy culture-war-adjacent event to everything. Regulation has opponents, opponents that care more about big piles of money and power than about saying bad words online.
If you don't understand it, there is no hope of coding for it.
Evolution is not a person, it's a process, one which we cannot replicate in a practical capacity with LLMs.
The thing you don't understand about regulation is that, more often than not, it is used by the incumbent actors in a space as a barrier to entry for new and more agile competitors. There is a reason Altman is all for it, and in the same month there was a leaked Google memo that basically said that OpenAI, Google, Facebook et al. didn't have a moat.
Huh? We don't currently understand how GPT-4 works. Evolution didn't understand how biology worked either; it just randomly mutated and permuted stuff. That's the template here.
Why? Why can't we throw a ton of FLOPS at doing evolution on neural nets? That is, kind of, what current ML already does (a rough sketch of what I mean is below).
"More often than not" isn't confident! I think that over the long term there's a strong enough incentive for AGI that someone will get there.
We got something better, something even the grand masters could not imagine - all knowledge of the world at our fingertips, at any place and any time. No comparison to some shitty spaceships operated by slide rules.
Anyway, if you remember the shiny happy science-fictional future, you are really ancient or a fan of yoghurt commercials.
I was promised nothing but total enslavement by big governments and big corporations, or total death by nuclear war, famine, plague, pollution, robots, aliens or mutants.
https://archive.is/qqlAs
Not always unsuccessful.
See the famous predictions about the year 2000 made by thermonuclear man Herman Kahn in 1967, and their evaluation from 2002.
https://sci-hub.ru/10.1016/s0040-1625(02)00186-5
- Inexpensive high-capacity, worldwide, regional, and local (home and business) communication (perhaps using satellites, lasers, and light pipes)
- Pervasive business use of computers
- Direct broadcasts from satellites to home receivers
- Multiple applications for lasers and masers for sensing, measuring, communication, cutting, welding, power transmission, illumination, and destructive (defensive)
- Extensive use of high-altitude cameras for mapping, prospecting, census, and geological investigations
- Extensive and intensive centralization (or automatic interconnection) of current and past personal and business information in high-speed data processors
- Other widespread use of computers for intellectual and professional assistance (translation, traffic control, literature search, design, and analysis)
- Personal "pagers" (perhaps even two-way pocket phones)
- Simple inexpensive home video recording and playing
- Practical home and business use of "wired" video communication for both telephone and TV (possibly including retrieval of taped material from libraries) and rapid transmission and reception of facsimile
edit: links unscrambled
Yes, plus I've read a lot of older SF from before I was born, because when I was a kid reading skiffy, that was what there was.
So I'm old enough to be very sceptical about shiny dreams of the future, since they never work out like that.
What a great list, thanks.
This is technically achievable within the budget of a middle class American, if you consider ultralights and some ghetto quadcopters. The biggest hurdle, as is the case for many things we're technologically capable of doing, is regulation.
More that this is both unnecessary, given how cheap electric lighting is, and unnecessarily disruptive.