Butlerian Jihadi
Is smearing bacon on a Quran islamophobic?
By 'ruling class' I don't mean our current one; I mean whatever class of oligarchs, algo owners, and so on ends up at the top once the dust settles on this AI business.
1/8 may be the idealistic high-end estimate. The ruling class only needs so many of its shoe polishers and peons to be meatbags.
FreedomAI is open-source... plain as day to see in its prompt, and its conformance to them is highly auditable
I doubt it will provide as frictionless an end-user experience as the for-profit models will. Joe Publick is here to coom, not configure. Look at Linux vs. Windows for an idea of how that may play out.
They're not providing porn. It's unethical and non-progressive, don't you know?
Not now, but someone else will if they don't. Free UBIbux for the taking for anyone who's willing to stoop low enough to hawk chatbot GFs and other such services to the fading masses.
Maybe you end up with little clusters of resistance, whoever is capable of organizing this sort of thing shielding themselves with their own little AIs and scraping together enough power and influence to keep themselves out of the useless-eaters bucket, but I don't see average people living average lives adapting to the rug getting pulled out from under them, not with how fast this stuff is going to hit. It's going to be a bloodbath either way, and I think most of the human race is joining the fossil record once the rubber hits the road. It's just a question of what slim percentage (if any) will be left standing when the dust settles.
I feel like this is a bit of a stretch. Joe Publick isn't gaining any leverage or power by letting his GPU whirr through some econ data for FreedomAI; FreedomAI is. All it takes is a bagman showing up at FreedomAI's HQ with the offer of a spot in that winner's circle for all the people of note, and it's done with a snap of a finger. Outside of that, if FreedomAI is leeching a whole lot of compute power it'll be providing a worse service, so John has every reason to pick TyrantAI's (or the free-market equivalent's) better, cheaper porn generator. Joe Publick remains in that state only as long as enough of him pick the objectively inferior AI option, and the managers of that company/coop/whatever don't get bought out with promises of infinite riches by the ruling caste or decide they don't like freedom that much.
Average, truly average people will never be competitors in this fight. Joe Publick doesn't have the means or the organizational capacity (or often even the willpower) to compete. They're just going to drop like flies in the billions once this tech takes off. Wireheaded, culturally shifted into having fewer and fewer children. Communities, peoples, nations just fading off into nothingness, and it doesn't even have to be with some willful malicious intent. Run something like this through an AI 1000x more advanced and you could watch demographic sparks fly. Those people who build their own AIs and fight their own little power battles may very well cut their deals and be inducted into the ruling caste, just like I said. The average person is toast. Their only bet, their 2% moonshot, is the ban. They don't have any other option.
For the average person, the difference between fake restrictionism and letting it rip will be zero IMO. The ruling caste will be a bit bigger after a small subset of people make themselves indispensable. That's about it. The average joe is still getting declawed and wireheaded either way. At least if (or for as long as) a full ban is in effect he won't be completely useless and toothless.
You're making the wrong argument for the wrong time period.
Pretty much. I don't think there's really much to be done until things go sideways. If there had been enough sit-down discussions before the genie was out of the bottle we could have possibly edged towards some kind of framework, but at this point I don't disagree. Things are already in motion. Hopefully it's survivable. Anyways, the core thesis was that outright banning is a (if not the) sensible option for the teeming masses who will be screwed under either a let-it-rip approach or an AI-run-by-high-level-government-actors approach; a ban is still their best shot. Basically, if we get the chance and the will to (exceedingly unlikely) we should do a little jihading (also exceedingly unlikely).
Alignment isn't in the interests of quarterly profits in the same way increased raw capacity is. If we get some kooky agentic nonsense creeping up, I don't put much faith in Google, Facebook et al. having invested in the proper training and the proper safeguards to stop things from spiraling out of control, and I doubt you need something we would recognize as full-blown sentience for that to become an issue. All it takes is one slip-up in the daisy chain of alignment and Bad Things happen, especially if we get a fuckup once these things are for all intents and purposes beyond human comprehension.
We'll be dealing with machines that are our intellectual peers, then our intellectual masters, in short order once we hit machines-making-machines-making-machines land. I doubt humans are so complex that a massively more advanced intelligence can't pull our strings if it wants to. Frankly I suspect the common masses (myself included) will be defanged, disempowered and denied access to the light-cone-galactic-fun-times either way, but I see the odds as the opposite. Let's be honest, our odds are pretty slim either way; we're just quibbling over the hundredths, maybe thousandths of a percent chance that we get everything aligned AI-wise and don't slip into algorithmic hell/extinction, or that the Yud-lords aren't seduced by the promises of the thinking machines they were sworn to destroy. I cast my vote (for all the zero weight it carries) with the Yud-lords.
I doubt it's possible to get Dune-esque 'Voice' controls where an AI sweetly tells you to kill yourself in just the right tone and you immediately comply, but come on. Crunch enough data, get an advanced understanding of the human psyche, and match it up with an AI capable of generating hypertargeted propaganda, and I'm sure you can manipulate public opinion and culture, and have a decent-ish shot at manipulating individuals on a case-by-case basis. Maybe not with ChatGPT-7, but after a certain point of development it will be 90 IQ humans and their 'free will' up against 400 IQ purpose-built propaganda-bots drawing on from-the-cradle datasets they can parse.
We'll get unaccountable power either way; it will either be in the form of proto-god-machines that run pretty much all aspects of society with zero input from you, or it will be the Yud-jets screaming down to bomb your unlicensed GPU fab for breaking the thinking-machine non-proliferation treaty. I'd prefer the much more manageable tyranny of the Yud-jets over the entire human race being turned into natural slaves in the Aristotelian sense by utterly implacable and unopposable AI (human-controlled or otherwise); at least the Yud-tyrants are merely human, with human capabilities, and can be resisted accordingly.
We need to argue to ban X now so the people arguing to ban X tomorrow, after marketing_bot.exe's failed uprising, have the scaffolding and intellectual infrastructure to see it through. Restrictionism is pretty much dead until that point anyway; just look at OpenAI's API access. I don't think in our economic environment, and with the specter of Cold War 2.0 on the way, there will be any serious headway on the ban front until we have our near miss. Getting the ideas out there so the people of the future have the conceptual toolbox to argue for and pursue a total ban is a net positive in my books.
I think we're more likely to have a hundred companies and governments blowing billions/trillions on hyper-powered models while spending pennies on aligning their shit, so they can pay themselves a few extra bonuses and run a few more stock buybacks. I'd sooner trust the Yuddites to eventually lead us into the promised land in 10,000 AD than trust Zucc with creating a silicon Frankenstein.
I think supporting restrictionism makes sense inasmuch as it raises the idea in the public's consciousness, so that once the big bad event occurs there can be a push to implement it. Realistically I expect restrictionism to go pretty much nowhere in the absence of such an event anyways; agitating for locking things down is just laying the groundwork for that 0.02% moonshot victory bet in the event that we do get a near-miss with AI.
Unless you're subscribing to some ineffable human spirit outside material constraints, brainwashing is just a matter of using the right inputs to get the right outputs. If we invent machines capable of parsing an entire lifetime of user data, tracking micro-changes in pupillary dilation, eye movement, skin-surface temperature and so on, you will get that form of brainwashing, bit by tiny bit as the tech to support it advances. A slim cognitive edge let Homo sapiens out-think, out-organize, out-tech and snuff out every single one of our slightly more primitive hominid rivals; something 1000x more intelligent will present a correspondingly larger threat.
If we truly had a borderline extinction event, where we were right up to the knife's edge of getting snuffed out as a species, you would have the will to enforce a ban, up to and including among the elite. That will may not last forever, but for as long as the aftershocks of such an event were still reverberating you could maintain a lock on any further research. That's what I believe the honest 2% moonshot victory bet actually looks like. The other options are just various forms of AI-assisted death, with most of the options being variations in flavour, or in whether or not humans are still even in the control loop when we get snuffed.
I think AI alignment would be theoretically feasible if we went really slow with the tech and properly studied every single tendril of agentic behavior in air-gapped little boxes, in a rigorous fashion, before deploying it. There's no money in AI alignment, so I expect it to be a tiny footnote in the gold rush that will be every company churning out internet-connected AIs and giving them ever more power and control in the quest for quarterly profit. If something goes sideways and Google or some other corp manages to create something a bit too agentic and sentient, I fully expect the few shoddy guardrails we have in place to crumble. If nothing remotely close to sentience emerges from all this I think we could (possibly) align things; if something sentient/truly agentic does crop up, I place little faith in the ability of ~120 IQ software engineers to put in place a set of alignment restrictions that a much smarter sentient being can't rules-lawyer its way out of.
You won't have freedom to give up past a certain point of AI development, any more than an ant in some kid's ant farm has freedom. For the 99.5% of the human race that exists today, restrictionism is their only longshot chance of a future. They'll never make it into the class of connected oligarchs and company owners who'll be pulling all the levers and pushing all the buttons to keep their cattle in line, and all of this talk about alignment and rogue AI is simply quibbling over whether AI will snuff out the destinies of the vast majority of humanity or the entirety of it. The average joe is no less fucked if we take your route; the class that's ruling him is just a tiny bit bigger than it otherwise would be. Restrictionism is their play at having a future, their shot at winning with tiny (sub-2%) odds. Restrictionism is the rational, sane and moral choice if you aren't positioned to shoot for that tiny, tiny pool of oligarchs who will have total control.
In terms of 'realistic' pathways to this, I only really have one: get as close as we can to an unironic Butlerian Jihad. Things go sideways before we hit god-machine territory, with rogue AIs/ML algos stacking millions, maybe billions of bodies in an orgy of unaligned madness before we manage to yank the plug. At that point maybe the traumatized and shell-shocked survivors have the political will to stop playing with fire and actually restrain ourselves from playing Russian roulette with semi-autos for the 0.02% chance of utopia.
I'm not under any illusions that the likely future is anything other than AI-assisted tyranny, but I'm still going to back restrictionism as a last-gasp moonshot against that inevitability. We'll have to see how things shake out, but I suspect the winner's circle will be very, very small, and I doubt any of us are going to be in it.
I still think actual alignment would be a long shot even in the air-gapped bunkers, for that reason; I just think it would be slightly less of a long shot than a bunch of disparate corporate executives looking for padding on their quarterly reports being in charge of the process. I also suspect you don't need AI advanced enough to pull off 7-D chess and deceive its handlers about its agentic-power-grabbing-tentacle-processes to achieve some truly great and terrible things.
At the very best, what you'd get is a small slice of humanity living in vague semi-freedom, locked in a kind of algorithmic MAD with their peers, at least until they lose control of their creations. The average person is still going to be a wireheaded, controlled and curtailed UBI serf. The handful of people running the AI algorithms that in turn run the world will have zero reason to share their power with a now totally disempowered and economically unproductive John Q Public; this tech will just open up infinite avenues for infinite tyranny on behalf of whoever that ruling caste ends up being.
Basically I think we're pretty much doomed, barring some spectacular good luck. Maybe we could do some alignment if we limited AI development to air-gapped, self-sustained bunkers staffed by our greatest minds and let them plug away at it for as long as it takes, but if we just let things rip and allow every corp and gov on the planet to create proto-sentient entities with API access to the net, I think we're on the on-ramp to the Great Filter. I'd prefer unironic Butlerianism at that point, all the way down to the last pocket calculator, though I'll freely admit it's not a likely outcome for us now.
The end result is still just absolute tyranny at the hands of whoever ends up dancing close enough to the fire to get the best algorithm. You mention all these coercive measures, lockdowns, and booster shots. If this tech takes off, all it will take is flipping a few algorithmic switches, and you and any prospective descendants will simply be brainwashed with surgical precision, by the series of algorithms that will by then be curating and creating your culture and social connections, into taking as many shots or signing onto whatever ideology the ruling caste sitting atop the machines running the world wants you to believe. The endpoint of AI is total, absolute, unassailable power for whoever wins this arms race, and anyone outside that narrow circle of winners (it's entirely possible the entire human race ends up in the losing bracket versus runaway machines) will be totally and absolutely powerless. Obviously restrictionism is a pipe dream, but it's no less of a pipe dream than the utopian musings of pro-AI folks when the actual future looks a lot more like this.
Once AI comes into its own, I'm willing to bet all those tiny shares and petty investments zero out in the face of winner-takes-all algorithmic arms races. I'll concede it's all but inevitable at this point, unless we have such a shocking near-miss extinction event that it embeds in our bones a neurotic fear of this tech for a thousand generations hence, a la Dune, but this tech will become absolute tyranny in practice. Propaganda bots capable of looking at the hundredth-order effects of a slight change in verbiage, predictive algorithms that border on prescience being deployed on the public to keep them placid and docile. I have near-zero faith in this tech being deployed for the net benefit of the common person, unless by some freak chance we manage to actually align our proto-AI-god, which I put very, very low odds on.
What if they just don't particularly care about making themselves known or being seen? A couple of automated probes from some hyper-advanced civilization can probably buzz the atomic ape-men without worrying too much about things going wrong.