This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I think AI alignment would be theoretically feasible if we went really slowly with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes in a rigorous fashion before deploying it. There's no money in AI alignment, so I expect it to be a tiny footnote in the gold rush of every company churning out internet-connected AIs and giving them ever more power and control in the quest for quarterly profit. If something goes sideways and Google or some other corp manages to create something a bit too agentic and sentient, I fully expect the few shoddy guardrails we have in place to crumble. If nothing remotely close to sentience emerges from all this, I think we could (possibly) align things; if something sentient/truly agentic does crop up, I place little faith in the ability of ~120 IQ software engineers to put in place a set of alignment restrictions that a much smarter sentient being can't rules-lawyer its way out of.
How long do you think it would take your specialized scientists who aren't incentivized to do a good job to crack alignment? I'm not sure if they would ever do it, especially since their whole field is kaput once it's done.
The gamble Altman is taking is that it'll be easier to solve alignment if we get a ton of people working on it early on, before we have the capabilities to get to the truly bad outcomes. Sure, it's a gamble, but everyone is shooting in the dark. Yudkowsky-style doomers seem to be of the opinion that their wild guesses are better than everyone else's because they were there first, or something.
I'm much more convinced OpenAI will solve alignment, and I'd rather get there in the next 10,000 years instead of waiting forever for the sacred order of Yud-monks.
I think we're more likely to have a hundred companies and governments blowing billions or trillions on hyper-powered models while spending pennies on aligning their shit, so they can pay themselves a few extra bonuses and run a few more stock buybacks. I'd sooner trust the Yuddites to eventually lead us into the promised land in 10,000 AD than trust Zucc with creating silicon Frankenstein.
Alignment is generally in the interest of the corporation. I really think it depends on how hard you expect the alignment problem to be, and on when sentience will come about.
I think we get AGI, even well into ASI, before we get real sentience and AI models stop being tools. Once we have boosted our own intelligence and understanding through these new AI tools, we align the next generation of AI. And so on and so forth.
What Altman and his crew are concerned with is one actor taking charge of AI at the beginning (well, one that isn't them), or us building up so much theoretical framework that when we start building things they're already extremely powerful. We need to work the technology in stages, like we do with every other.
Alignment isn't in the interests of quarterly profits in the same way increased raw capacity is. If we get some kooky agentic nonsense creeping up, I don't put much faith in Google, Facebook, et al. having invested in the proper training and the proper safeguards to stop things from spiraling out of control, and I doubt you need something we would recognize as full-blown sentience for that to become an issue. All it takes is one slip-up in the daisy chain of alignment and Bad Things happen, especially if we get a fuckup once these things are, for all intents and purposes, beyond human comprehension.