This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I know this is one of the standard objections, but why are we so certain that our ASI won't just discard its original reward function at some point? We're sexually reproducing mammals with a billion years of optimization to replicate our genes by chasing a pleasure reward, but despite a few centuries of technological whalefall, instead of wireheading as soon as it became feasible (or doing heroin, etc.) we're mostly engaging in behaviours secondary and tertiary to breeding, which are frequently given higher importance or even fully supplant our theoretical (sticky) telos.
Maybe we got zombie-ant-ed by memetic parasites at some point, but presumably ASI could catch ideology too. Not saying any such values drift would be nice, but personally I'm much less worried about being paperclipped than about being annihilated for inscrutable shoggoth purposes.
Related to your 'discard its original reward function': https://www.lesswrong.com/posts/tZExpBovNhrBvCZSb/how-could-you-possibly-choose-what-an-ai-wants
There are lots of ways an AGI's values could shake out. I wouldn't be surprised if an AGI trained using current methods had shaky/hacky values (much as humans have shaky/hacky values, and can settle into noticeably different underlying values later in life; though humans have a lot more similarity to each other than multiple attempts at an AGI would). However, while early stages could be reflectively unstable, more stable states will... well, be stable. Values that are more stable than others will also tend to ensure that they stick around.
https://www.lesswrong.com/posts/krHDNc7cDvfEL8z9a/niceness-is-unnatural probably argues parts of this better than I could. (I'd suggest reading the whole post, but the copied section below is the start of the probably-relevant part.)
And this problem amps up when the AI starts reflecting.
E.g.: maybe those values are somewhat internalized as subgoals, but only when the AI is running direct object-level reasoning about specific people. Whereas when the AI thinks about game theory abstractly, it recommends all sorts of non-nice things (similar to real-life game theorists). And perhaps, under reflection, the AI decides that the game theory is the right way to do things, and rips the whole niceness/kindness/compassion architecture out of itself, and replaces it with other tools that do the same work just as well, but without mistaking the instrumental task for an end in-and-of-itself.
In this example, our hacky way of training AIs would 1) give them some correlates of what we actually want (something like niceness) and 2) be unstable.
Our prospective AGI might reflectively endorse keeping the (probably alien) empathy, simply making it more efficient and cleaning up some edge cases. It could, however, reflect and decide to keep the game theory instead, treating the learned niceness as a behavior to be replaced by a more efficient form. Both are stable states, but we don't have a good enough understanding to ensure it resolves in the way we want.
A trained AGI will pursue correlates of your original training goal, much as humans do, since neither we nor evolution knows how to put the desired goal directly into the creation (ignoring that evolution isn't actually an agent).
Some of the reasons why humans don't wirehead:
- We often place some intrinsic value on experiences that connect to reality in some way.
- There's also some culturally transmitted value for that.
- Literal wireheading isn't easy.
- I also imagine that literal wireheading isn't full-scale wireheading, where you make every part of your brain 'excited', but rather stimulation of some specific area that, while important, isn't everything.
- Other alternatives, like heroin, are a problem, but they also come with significant downsides and negative cultural opinion.
- Humans aren't actually coherent enough to properly imagine what full-scale wireheading would be like, and if they experienced it then they would very much want to go back.
Our society has become notably more superstimulating. While this doesn't reach wireheading, it is in that vein.
Though even our society's superstimuli have various negative-by-our-values aspects. Social media might be a superstimulus for the socially engaged and distraction-seeking parts of you, but it fails to fulfill other values.
If we had high-tech immersive VR in a post-scarcity world, that could still fall short of full-scale wireheading while being significantly closer on every axis. However, I wouldn't have much issue with that.
As your environment becomes more and more exotic relative to the one the learned behavior (your initial brain upon being born) was trained on, there are more and more opportunities for your correlates to noticeably disconnect from the original underlying thing.
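To make that last point concrete, here's a rough toy sketch (every name and number in it is made up for illustration, not taken from the linked posts): a policy that greedily chases a proxy signal which correlates with the "true" goal on the training distribution, and then gets dropped into a shifted environment where the correlation breaks.

```python
# Hypothetical toy example: "sweetness" is the trainable proxy,
# "nutrition" is the true goal the proxy was standing in for.
import random

random.seed(0)

def true_goal(state):
    # What we actually care about; the learner never sees this directly.
    return state["nutrition"]

def proxy_reward(state):
    # What the training signal actually rewards.
    return state["sweetness"]

def sample_state(shifted=False):
    if not shifted:
        # Training distribution: sweetness tracks nutrition closely.
        nutrition = random.uniform(0.0, 1.0)
        return {"nutrition": nutrition,
                "sweetness": nutrition + random.uniform(-0.1, 0.1)}
    # Shifted ("exotic") distribution: the correlate is decoupled from
    # the goal, e.g. artificial sweeteners with no nutrition.
    return {"nutrition": random.uniform(0.0, 0.2),
            "sweetness": random.uniform(0.8, 1.0)}

def proxy_greedy_pick(options):
    # The learned behavior: pick whatever scores highest on the proxy.
    return max(options, key=proxy_reward)

for shifted in (False, True):
    picks = [proxy_greedy_pick([sample_state(shifted) for _ in range(5)])
             for _ in range(1000)]
    avg = sum(true_goal(s) for s in picks) / len(picks)
    label = "shifted" if shifted else "training"
    print(f"{label} distribution: mean true-goal value of picks = {avg:.2f}")
```

On the training distribution the proxy-chaser looks well aligned with the true goal; off-distribution it happily maximizes the correlate while the thing the correlate was standing in for goes nowhere.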