This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I agree that there is bizarrely little focus on the possibility of our current institutions simply becoming worse, more powerful, and more totalizing versions of themselves - although Andrew Critch and Paul Christiano have written detailed doom scenarios that look something like this, e.g. https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like
It's because their faith in the singularity is so strong that it dwarfs all other concerns.
I think that AI is going to make our lives worse, but in mundane ways rather than dramatic total-annihilation-of-humanity sorts of ways. It's already possible with today's systems to do automated monitoring of all text, audio, and image content for hate speech and wrongthink. I'd be mildly surprised if we don't see this sort of thing come installed by default on all phones within the next few years. There will be a slow but steady trickle of jobs being replaced by AI, but not enough mass unemployment to trigger serious discussions of UBI (not that UBI is a panacea anyway - imagine being told: "we ran an AI-powered analysis of your internet history to make sure your UBI money won't be used for hateful or seditious activities, and we've linked you to an anonymous account on 'the moat dot org' that made some very concerning posts..."). The rich will get richer and the poor will get poorer; the benefits will not be in any way evenly distributed. The quality of daily life will continue to degrade as the internet becomes overrun with spam and chatbots become ubiquitous in more and more interactions.
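(For a sense of how low the technical bar for this already is, here's a minimal sketch of blanket text monitoring using an off-the-shelf zero-shot classifier from Hugging Face. The label set and the flagging rule are purely illustrative assumptions on my part, not a description of any real deployed system.)

```python
# Minimal sketch: automated "wrongthink" scanning of text messages.
# Assumes `pip install transformers torch`; the labels and the
# flag-anything-not-benign policy are illustrative, not real.
from transformers import pipeline

# facebook/bart-large-mnli is a commonly used zero-shot NLI model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["hate speech", "political dissent", "benign"]

def scan(messages):
    """Print a flag for any message whose top-scoring label isn't 'benign'."""
    for msg in messages:
        result = classifier(msg, candidate_labels=LABELS)
        top_label = result["labels"][0]  # labels come back sorted by score
        if top_label != "benign":
            print(f"FLAGGED ({top_label}): {msg!r}")

scan(["want to grab lunch tomorrow?",
      "the current regime deserves to be overthrown"])
```

A few dozen lines and a commodity model are enough for the text case; doing the same for audio and images is more work, but no fundamental breakthrough is required.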
None of this is a problem, though, if you think that the singularity is near. The general thinking among AI optimists seems to be that AGI will arrive within the next decade, and that via recursive self-improvement, AGI is just a short hop, skip, and jump from ASI. ASI is presumed to have godlike powers to manipulate the physical universe and generate solutions to any conceivable problem. Turning all humans into paperclips, fully artificial private worlds, resurrecting the dead via simulation - nothing will be beyond its grasp. Obviously, if such a thing comes to pass, then all current thinking about social ills will be rendered obsolete. What use do we have for "governments" or "jobs" or "money" when literal gods roam the earth? The singularity might be a utopia, or it might kill us all - but either way, it will be unlike anything that currently exists.
OpenAI seems to imply as much:
Well, if you're trying to build something with as boundless an upside as that, then you're absolved from having to actually think about the real-world implications of your technology. Nothing else matters if the rapture is near.
(I don't think the rapture is near.)
How can you tell the difference between that and 'they genuinely believe it for complex, thought-out reasons (maybe flawed, maybe not)'? Er, what even is the difference?
Also, Yud and many of the alignment people don't like OpenAI's alignment approaches! They think capabilities work should not happen.
I’m not sure what you mean. I think they do genuinely believe that the singularity is near and they are acting and thinking rationally based on that belief. I’m not accusing them of lying or being confused or anything.
If the robot god really will be here soon, then they are correct to not worry about any other smaller-scale effects of AI. My only disagreement with them is the probability of the robot god.
When 'faith' is used in the context of, like, a 'robot god', it's usually with the implication that the belief is caused by some mechanism, or held in some manner, that isn't well-founded or rational - by comparison to "faith in a god" in the religious sense. That was the potential disagreement.
I don't think it takes a 'robot god' for computers of some sort to have more intelligence and agency than people do, and for that to transform 'everything that is familiar to us'.
Thanks for the link, I haven't kept up on lesswrong for a while now. Glad to see stuff in this direction being discussed, at the very least.