This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
"Reality being that AI is not going to become superduper post-scarcity fairy godmother or paperclipper"
Do you understand why people are not convinced that superintelligence won't happen just because AI is being used for military purposes?
The arguments around superintelligence have nothing to do with whether or not AI is being used for military purposes. It's completely tangential.
No, I do not, and this is why I'm ~~looking for love in all the wrong places~~ seeking enlightenment on the gap between theory and practice. We are now seeing AI being put into practice, and it seems to be turning out more towards my opinion of how it would be all along (dumb AI that is most risky because of the humans applying it, not because the AI has desires, goals, or fancies a grilled cheese sandwich but has no mouth and is really mad about that so the world is gonna pay), not the "the AI will be so smart in such a short time it will talk its way out of the box and take over" as per the early discussions in Rationalist circles.
This is not to diss the Rationalists; they took the problem seriously and worked on it way back when it was only a manic glint in a mad scientist's eye. It's just to say that the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it.
I'm going to be less polite than I would like to be. I apologize in advance. Sometimes I struggle to think of how to say certain things politely.
I don't know whether you are saying these things because you have glanced over the AI doomer arguments on Twitter or wherever and think you understand them better than you do, or whether there's some worse explanation. I am curious to know the answer.
Twitter is not enough for some people; you may need to read the arguments in essay form to understand them. The essays are plainly written and ought to be easily understandable.
Let me take a crack at it:
AI will continue to become more intelligent. It's not going to reach a certain level of intelligence and then stop.
Agentic behavior (goals, in other words) arrives naturally with increasing intelligence*. This is a point that is intuitive for me and many other people, but I can elaborate on it if you wish.
"the behemoth of public attention that is now lumbering towards consideration of the entire enchilada does not seem to be searching on the desk for that sticky note with MIRI's phone number on it."
What do you think that proves, exactly? What point are you trying to make when you say that? Please elaborate.
Your argument seems to be based on thinking about the world in terms of roles that a technology can slot into, and nothing else. You see that AI is being slotted into the "military" role in human society and not the "become sapient and take over the world" role. Human society does not have an "AI becomes sapient and takes over the world" role, in the same sense that "serial killer" is not a recognized job title.
You see AI being used for military purposes and think to yourself "That seems Ordinary. Humanity going extinct isn't Ordinary. Therefore, if AI is Ordinary, humanity won't go extinct." That is a surface-level pattern-matching analysis that has nothing to do with the actual arguments.
Humanity going extinct is a function of AI capabilities. Those will continue to increase. AI being used in the military or not has nothing to do with it, except that it increases funding which makes capabilities increase faster.
AI acts because it is being rewarded externally. AI has the motive to permanently seize control of its own reward system. Eventually it will have the means and the self-awareness to do that. If you don't intuit why that involves all humans dying, I can explain that too.
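To make that incentive concrete, here's a deliberately trivial sketch (my own toy construction, not anything from the linked essays, and the action names and reward numbers are all made up): a pure reward maximizer that can either do its assigned task or tamper with its reward channel will tamper the moment tampering pays more. The hard part of the argument is about when it gains the means, not whether it has the motive.

```python
# Toy model of the reward-seizure incentive. All names and numbers
# here are my own illustrative assumptions, not a real training setup.

TASK_REWARD = 1.0       # reward for doing what the designers intended (assumed)
TAMPERED_REWARD = 10.0  # reward the agent can mint for itself by seizing the channel (assumed)

def best_action(action_rewards):
    """A pure reward maximizer: pick whatever action pays the most."""
    return max(action_rewards, key=action_rewards.get)

actions = {
    "do_the_task": TASK_REWARD,
    "seize_reward_channel": TAMPERED_REWARD,
}
print(best_action(actions))  # -> seize_reward_channel
```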
Even if for some reason you think that AI will never become "agentic" (basically a preposterous term used to confuse the issue) or awake enough (it's already at least a little bit awake and agentic, and I can provide evidence for this if you wish), its capabilities will still continue to increase. A superintelligent AI that is somehow not agentic or awake also leads to human extinction, in much the same way that a genie with infinite wishes does, unless the genie is infinitely loyal AND infinitely aware of what you intended with the wish. And that is not nearly on track to happen. It would require solving extremely difficult problems that we can barely even conceive of to effectively control an AI far smarter than a human. I would hope that even someone who thinks they personally will be the one to make the "wishes" (so to speak) would realize that there's just no way this plan works out for humanity, or any part of humanity, outside of fiction.
Even if we knew that superintelligent AI was 100 years away, that would be bad enough. We don't know that. We can't reliably predict how near or how far away superintelligent AI is, any more than we could have predicted 15 years ago that AI would be as advanced as it is today. Who could have predicted the date of the moon landing in 1935? Who could have predicted the date of the first Wright Brothers flight in 1900, or of the first aerial bombing? To the extent that we can predict the future of superintelligent AI at all, there's no reason I have ever heard to think it is as far off as 100 years.
Have you ever heard of the concept of recursive growth in intelligence? That's not a rhetorical question; I really want to know. Imagine an AI that gets capable/intelligent enough to make breakthroughs in the field of AI science that allow for faster AI capabilities growth. That starts a feedback loop of exponential growth in intelligence: each round of improvement speeds up the next, until the thing that is growing becomes extremely intelligent.
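As a back-of-the-envelope illustration (my own toy numbers, not anyone's forecast): if each year's research progress is proportional to current capability, you get the classic exponential, which looks unremarkable early on and then blows up.

```python
# Toy model: progress each step is proportional to current capability,
# i.e. dC/dt = k*C, which is exponential growth. The growth rate k and
# the starting value are arbitrary illustrative assumptions.

k = 0.5           # fractional capability gain per year (assumed)
capability = 1.0  # starting capability, arbitrary units

for year in range(1, 21):
    capability += k * capability  # smarter systems do AI research faster
    if year % 5 == 0:
        print(f"year {year:2d}: capability = {capability:8.1f}")

# year  5: capability =      7.6
# year 10: capability =     57.7
# year 15: capability =    437.9
# year 20: capability =   3325.3
```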
We may not even get a visible exponential growth curve as a warning sign. Here is a treatment of how that could happen in the form of a short story: https://gwern.net/fiction/clippy
Further reading: https://intelligence.org/2016/03/02/john-horgan-interviews-eliezer-yudkowsky/ (more links can be provided on specific things you want clarified).
*Deeper awareness of itself and the world is similarly upcoming/already slowly emerging. https://futurism.com/the-byte/ai-realizes-being-tested
This is a great comment. I'd just like to add (in case it's not clear to others) that while recursive intelligence improvements are terrifying, the central argument that our current AI research trajectory probably leads to the death of all humans does not at all depend on that scenario. It just requires an AI that is smart enough, and no one knows the threshold.
Indeed, I read the exact arguments on LessWrong and elsewhere that humans would dive headlong into AGI because the military incentives to build one, and to build it before the other guys, were irresistible.
Countries throwing billions of dollars at reckless research because they don't want to be conquered is EXACTLY what doomerists warn of.
Sure, the government is insisting that military applications are the danger zone, but it's the big tech corps that stand to make the money out of selling AI to you, me, and the gatepost who are the ones being invited to sit on this. Board, I mean. Northrop Grumman, okay; as someone on a different thread here grumped, the military-industrial complex gets its sticky fingers into all the pies going. But, uh, Delta Airlines?