
Culture War Roundup for the week of August 19, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


We discussed this at length last year, but here's a short rundown.

In 2015, OpenAI was founded by Elon Musk, Sam Altman, and others. Elon was by far the largest financial supporter. OpenAI was a non-profit, dedicated to sharing its research openly (thus the "Open" in its name).

Today, though the non-profit fig leaf still exists, OpenAI operates as a closed, for-profit company, half of which is owned by Microsoft.

Last year, Sam Altman was fired by the non-profit board because he was not being honest with them. However, many employees had stock grants worth tens of millions thanks to the deal with Microsoft. With those grants threatened, the employees pledged to leave en masse and work for Microsoft directly. The board caved, and now Sam Altman has de facto complete control of the "non-profit" board.

For that reason, Elon is suing OpenAI for breach of contract, arguing that they perverted the mission of the original non-profit for their own financial gain: https://www.cnn.com/2024/03/01/tech/elon-musk-lawsuit-openai-sam-altman/index.html

It’s hilarious how these are exactly the wrong people you would want making decisions about AI. Like, the moral test was placed in front of them, and they all failed it. They chose money over (1) honesty, (2) their own pledged word, (3) morality, and (4) the public good.

I suspect that outside of a very small handful of genuine Yudkowsky-types, almost nobody who claims to be concerned about AI destroying the world is actually worried about AI destroying the world. They may say they are worried, but they are not actually worried deep down. The idea of AI destroying the world is abstract and distant; getting tens of millions of dollars, on the other hand, is very real and very visceral.

And for every one person who is genuinely worried about AI destroying the world, there are probably a hundred people who are worried about AI allowing Nazis to write bad no-no things online. Because Nazis writing bad no-no things online feels real and visceral and pushes the deep buttons of ideology, whereas AI destroying the world sounds like a sci-fi fantasy for geeks.

"almost nobody who claims to be concerned about AI destroying the world is actually worried about AI destroying the world."

Then the question is why they would make such claims. I can see two reasons:

(1) Signaling value. However, outside of the Less Wrong bubble, the signaling value of professing p(doom) > 0 is negative. Also, a significant fraction of partisans genuinely come to believe the fears they endorse for signaling value: if some people are concerned that a Republican/Democrat will lead the US into fascism/communism, I think their fear may be genuine. Granted, they will not act rationally on those fears, such as by emigrating to a safer country before the election.

(2) Hyping AI. "Our toys are so powerful that our main concern is them taking over the world." This is certainly a thing, but personally, if I wanted to hype up the public about my LLM, I would go for Culture (post-scarcity), not Matrix (extinction).

As an anecdote, I happen to believe that p(doom) is a few percent. Bizarrely, despite my being a self-professed utilitarian, this belief does not affect my decisions about where to be employed. I mean, given that alignment research is not totally saturated with grunt workers, and that there is a chance it could save mankind (perhaps lowering p(doom) by a third), it would be hard to find a more impactful occupation.

I think the reasons for my bizarre behavior (working conventional jobs) are as follows:

(1) Status quo bias and social expectations. If half of my friends from uni had gone into alignment, that would certainly increase the odds for me as well.

(2) Lack of a roadmap. Contrast this with the LHC. When it was designed in the 1990s as a tool to discover the Higgs and SUSY, there was a plan: ambitious, but doable, with no big essential blank spots marked "to be solved by technology yet to be discovered". Becoming a tiny cog in that machine, working on an interface for the cryo controls for the magnets or whatever, would have appealed to me. By contrast, AI alignment feels more like being kids on a beach who think a tide is coming in and try to reinforce their sand castles so that they will withstand the water. It is possible that some genius kid will invent cement and solve the tide problem, but that is not something one can plan for. Statistically speaking, I would likely end up on a team that tried to make the sand stickier by adding spit, or to melt the sand into lava over a campfire. And the main reason our sand castles survived would likely be that we were on the shore of a lake and the tide only ended up rising half a centimeter.

This might be a psychological flaw of mine, but I prefer to make legible contributions with my work.

Of course, this means you can say "by revealed preference, quiet_NaN does not believe p(doom) to be in the percent range".

I know!! Gah, if OpenAI really had secured a monopoly, that would've been the darkest timeline. I definitely believe Altman and the rest of that cadre are incredibly corrupt, if not downright evil.

It's heartening to see how much genuine competition there is out there.