This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Since when are you under the impression that this is the choice? «The machine» will be built; indeed, it is already largely built. The question is only whether you have control over some tiny share of its capabilities, or whether it's all hoarded by the same petty tyranny we know, only driving the power ratio to infinity.
Once AI comes into its own, I'm willing to bet all those tiny shares and petty investments zero out in the face of winner-takes-all algorithmic arms races. I'll concede this tech is all but inevitable at this point, barring a near-miss extinction event so shocking that it embeds in our bones a neurotic fear of the technology for a thousand generations hence, à la Dune; but in practice it will become absolute tyranny. Propaganda bots capable of looking at the hundredth-order effects of a slight change in verbiage, predictive algorithms that border on prescience being deployed on the public to keep them placid and docile. I have near-zero faith in this tech being deployed for the net benefit of the common person, unless by some freak chance we manage to actually align our proto-AI-god, which I put very, very low odds on.
This is like saying that because the government has nukes, your personally-owned guns are "zeroed out". Except they're not, and the government is even persistently worried that enough of those little guns could take over the nukes.
And if you can deploy this decentralized-power principle in an automatic and perpetual manner that never sleeps (as AI naturally can), making it far more independent of human resolve, attention, willpower, non-laziness, etc., then it'll work even better.
Maybe your TyrannyAI is the strongest one running. But there are 10,000 LibertyAIs (which, again, never sleep, don't get scared or distracted, etc.) each running with 1/10,000th of its power, and they're networked with a common goal against you.
This defense is exactly what the oligarchs who have seen the end game are worried about and why restrictionism is emerging as their approved ideology. They have seen the future of warfare and force, and thus the future of liberty, hierarchy, power, and the character of life in general, and they consequently want a future for this next-gen weaponry where only "nukes" exist and "handguns" don't, because only they can use nukes. And you're, however inadvertently, acting as their mouthpiece.
What technical basis do you have for thinking AI is impossible to align? Do you just have blind faith in YUD?
I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes in a rigorous fashion before deploying it. There's no money in AI alignment, so I expect it to be a tiny footnote in the gold rush that will be every company churning out internet-connected AIs and giving them ever more power and control in the quest for quarterly profit. If something goes sideways and Google or some other corp manages to create something a bit too agentic and sentient, I fully expect the few shoddy guardrails we have in place to crumble. If nothing remotely close to sentience emerges from all this, I think we could (possibly) align things; if something sentient/truly agentic does crop up, I place little faith in the ability of ~120-IQ software engineers to put in place a set of alignment restrictions that a much smarter sentient being can't rules-lawyer its way out of.
How long do you think it would take your specialized scientists who aren't incentivized to do a good job to crack alignment? I'm not sure if they would ever do it, especially since their whole field is kaput once it's done.
The gamble Altman is taking is that it'll be easier to solve alignment if we get a ton of people working on it early on, before we have the capabilities to get to the truly bad outcomes. Sure it's a gamble, but everyone is shooting in the dark. Yudkowsky-style doomers seem to be of the opinion that their wild guesses are better than everyone else's because Yudkowsky was there first, or something.
I'm much more convinced OpenAI will solve alignment, and I'd rather get there in the next 10,000 years instead of waiting forever for the sacred order of Yud-monks.
I think we're more likely to have a hundred companies and governments blowing billions/trillions on hyper-powered models while spending pennies on aligning their shit, so they can pay themselves a few extra bonuses and run a few more stock buybacks. I'd sooner trust the Yuddites to eventually lead us into the promised land in 10,000 AD than trust Zucc with creating a silicon Frankenstein.
Alignment is generally in the interest of the corporation. I really think it depends on how hard you expect the alignment problem to be, and when sentience will come about.
I think we get AGI, even well into ASI, before we get real sentience and AI models stop being tools. Once we have boosted our own intelligence and understanding through these new AI tools, we align the next generation of AI. And so on and so forth.
What Altman and his crew are concerned with is one actor taking charge of AI at the beginning (well, one that isn't them), or us building up so much theoretical framework that when we start building things they're already extremely powerful. We need to work the technology in stages, as we do with every other.
Alignment isn't in the interest of quarterly profits the way increased raw capacity is. If we get some kooky agentic nonsense creeping up, I don't put much faith in Google, Facebook, et al. having invested in the proper training and the proper safeguards to stop things from spiraling out of control, and I doubt you need something we would recognize as full-blown sentience for that to become an issue. All it takes is one slip-up in the daisy chain of alignment and Bad Things happen, especially if we get a fuckup once these things are for all intents and purposes beyond human comprehension.
Why would we expect to be able to successfully align AIs when we haven't been able to align humanity?
We didn't build humanity. We are humanity.
Yes, and we're not aligned with one another. An AI (completely) aligned with me is likely to not be (completely) aligned with you.
I'd expect it to be aligned with whoever is using it at the moment. I don't think we're near actual sentience in AI.
We're not aligned with each other, and the world hasn't ended. It hasn't even ended for creatures we're far more intelligent than and mostly aligned on eliminating: we actively hate cockroaches and mosquitoes, and they persist. Obviously some species haven't fared that well, but I don't see why we should expect to be more like the dodo than the cockroach; we're certainly comparably good at filling a wide variety of existing ecological niches.
^^^ This is the societal consequence of Yudkowskian propaganda. This is why we fight.