This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I wish the people hyperbolically exclaiming that AI-induced human extinction is right around the corner would publicly commit to bets about when it will happen. Between this petition and Yudkowsky's "Death with Dignity" we have a lot of rationalist-adjacent people who seem to think we'll all be gone in <5 years. If that's what they truly believe, then they should commit to that prediction so we can all laugh at them in 2028 when it almost certainly doesn't come true.
There's a ton of uncertainty about AI's scalability and whether current progress will follow something like Moore's Law, or whether we've just been picking the low-hanging fruit. AI alignment people are filling that uncertainty with maximally negative projections that an anti-human singularity is right around the corner. The biggest human inventions in terms of scale and impact were all the advances in mechanization of the Industrial Revolution, which took more than a century to unfold. The biggest invention in terms of impact relative to time was the Manhattan Project. Alignment people are saying (or at least strongly implying) that AI will have a much larger impact than the Industrial Revolution on a time-scale shorter than the development of nukes, while also being basically uncontrollable. People like Yudkowsky are smart, but they're predicting things an order of magnitude beyond the bounds of previous human history. Such predictions aren't rare, but they're usually made by snake-oil salesmen saying "This new invention will totally revolutionize everything! Trust me!"
Am I off-base here? I've been paying attention to AI developments, but not to the degree that some people have, so it's possible there's a compelling case for AI being a combination of 1) inevitable, 2) right around the corner (<5 years away), and 3) uncontrollable.
I see plenty of people here making quite confident predictions of impending AI doom. Can anyone steelman for me the position of seriously believing this while not going full Unabomber on top AI scientists and research centers? I mean, if we are talking about an imminent threat of all of humanity ceasing to exist, surely the sacrifice of some innocent lives, and some personal danger, would be negligible. People commit political violence over much more trivial things. The whole AI panic crowd feels extremely contrived and performative to me.
Non-state violence has essentially no possibility of indefinitely stopping all AI development worldwide. Even governmental violence stopping it would be incredibly unlikely; it seems politically impossible that governments would treat it with more seriousness than nuclear proliferation and keep doing so for a long period. Terrorists have no chance at all. Terrorists would also be particularly bad at stopping secret government AI development, and AI has made enough of a splash that such development seems inevitable even if you shut down all the private research. If at least one team somewhere in the world still develops superintelligence, then what improves the odds of survival is that team doing a good enough job, and being sufficiently careful, that it doesn't wipe out humanity. Terrorism would cause conflict and alienation between AI researchers and people concerned about superintelligent AI, reducing the odds that researchers take AI risk seriously, making it profoundly counterproductive.
It's like asking why people who are worried about nuclear war don't try to stop it by picking up a gun and attacking the nearest nuclear silo. They're much better off trying to influence the policies of the U.S. and other nuclear states to make nuclear war less likely (a goal the U.S. government shares, even if they think it could be doing a much better job), and having the people you're trying to convince consider you a terrorist threat would be counterproductive to that goal.
If the world is still here in five years, I'll publicly admit I overestimated the danger. If it's still here in two to three years, I'll already be pleasantly surprised. In my book, we're well on schedule for a short takeoff.
At this point, most of the really fun things I intend to do are post-singularity, and I don't really emotionally care if I die, so long as everyone else dies as well. So in a very strange way, it balances out to a diffuse positive anticipation.
There's no easy reference class to fit this into for comparison.
Did AI start with GPT-2 or GPT-3, in the sense of 'this is pretty impressive and what AI ought to look like in terms of fairly general capabilities'? Then it's three or five years old. Did AI start with Deep Blue or the Dartmouth Workshop or something? Then it's over 20 years old, or in its 70s. That would fit the industrial-scale timeline you propose.
Or should we compare to digital-era applications? ChatGPT has blown away every internet app in 'speed to reach 100 million users': 2 months, as opposed to 9 months for TikTok. That would suggest there's a qualitative difference there, and even TikTok is an AI-adjacent sort of thing.
Or do we say it's fundamentally different from everything else because AI is about intelligence, as opposed to moving widgets around as in the Manhattan Project or Industrial Revolution? The Industrial Revolution itself is a pretty big phase-shift from the Agricultural Revolution, which took thousands of years. Should the 18th-century intellectual have predicted industrial development based upon agriculture's extreme slowness? Predicting the future is very hard; things can happen for the first time. I think at the rate things are developing, <5 years is quite reasonable. That's the gap from GPT-2 to GPT-4. We live in a digital era of very rapid growth; industrial-era intuitions aren't appropriate. There are graphs showing that the computing investment in these projects doubles in a matter of months. Even 'levelling off' from doubling times of 5.7 months to 9.9 months is like decelerating to a mere 300 km/s. Doubling in under a year is still very rapid growth!
https://arxiv.org/pdf/2202.05924.pdf
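To make those doubling times concrete, here's a quick back-of-the-envelope sketch in Python. It's my own illustration, not from the linked paper; only the 5.7- and 9.9-month doubling times are taken from it.

```python
# Back-of-the-envelope: what steady doubling compounds to over five years,
# using the 5.7- and 9.9-month doubling times for training compute
# investment reported in the linked paper.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Total multiplicative growth after `months` of steady doubling."""
    return 2.0 ** (months / doubling_time_months)

horizon_months = 60  # five years

for doubling_time in (5.7, 9.9):
    factor = growth_factor(horizon_months, doubling_time)
    print(f"doubling every {doubling_time} months -> ~{factor:,.0f}x in 5 years")

# Output:
# doubling every 5.7 months -> ~1,475x in 5 years
# doubling every 9.9 months -> ~67x in 5 years
```

Even at the slower, 'levelled-off' rate, that's nearly two orders of magnitude of growth in five years.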
Not an expert, but I think there's a reasonable chance AI ends up causing my death. But that could be a thousand years from now. First come major advances in health care and quality of life, enough of them on my timeline to push off natural death time after time. But at some point an aligned AI will create something and be the biblical Eve eating the apple, and that program will develop human desires for dominance and the behavioral traits that go with them. There will be an AI-versus-AI war, and the anti-human AI will win.
I don't think the existential risk is primarily near-term.
Near-term risks are probably more related to overturning normal human geopolitics and politics.
You can laugh at me if we're all still alive in 2033 and the reason we're still alive is that AI safety turned out to be a nothingburger. To give a sense of how ridiculous progress has been: the start of the deep learning revolution was in 2012, 11 years ago now...
To be fair, we're barely ten years past the start of the AI revolution. At this stage, ten years after the first private internet providers, most of the kinds of services and products based on the internet weren't yet possible. Nobody looking at the internet as it existed in 1992 would have anticipated things like controlling your thermostat over the internet, or Amazon, or even Facebook. In fact, pages with simple HTML and images took a minute to load.
The state of a technology in its infancy doesn't say much about its future.
People were doing online banking and shopping in 1984:
https://en.wikipedia.org/wiki/Telidon
People were writing about things like an all-consuming social media internet in 1909:
https://en.wikipedia.org/wiki/The_Machine_Stops
The fact that massive progress has recently happened, is continuing to happen, and that tens of billions of dollars of capital and much of the top young talent are now working in this area is very strong evidence that we're going to continue to see major advances over the next decade.