This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
The future of AI will likely be decided this week with Sam Altman's Congressional testimony. What do you expect?
EDIT: the recording is here.
Frankly, I've tried to do my inadequate part to steer this juggernaut and don't have the energy for an effortpost (and we've had a few too many AI ones recently), so just a few remarks:
The AI Doom narrative keeps increasing in intensity, in zero relation to any worrying change in AI «capabilities» (indeed, with things like Claude-100K Context and StarCoder we're steadily progressing towards more useful coding and paperwork assistants at the moment, and not doing much in the way of AGI; recent results seem to be negative for the LLM shoggoth/summoned demon hypothesis, which is now being hysterically peddled by e.g. these guys). Not only does Yud appear on popular podcasts and Connor Leahy turn up on MSM, but there's an extremely, conspicuously bad and inarticulate effort by big tech to defend its case. E.g. Microsoft's economist proposes we wait for meaningful harm before deciding on regulations – this is actually very sensible if we treat AI as an ordinary technology exacerbating some extant harms and bringing some benefits, but it's an insane thing to say when the public's imagination has been captured by the Yuddist story of a deceptive genie, and «meaningful harm» translates to eschatological imagery. Yann LeCun is being obnoxious and seemingly ignorant of the way the wind blows, though he's beginning to see. In all seriousness, you'd think top companies would have prepared PR teams for this scenario.
The Anglo-American regulatory regime will probably be more lax than that in China or the Regulatory Superpower (Europeans are, as always, the worst with regard to entrepreneurial freedom), but I fear it'll mandate adherence to some onerous checklist like this one (consider this an extraordinary case of manufacturing consensus – some literally-who «AI policy» guys come up with possible measures; a tiny subset of the queried people, also in the same until-very-recently irrelevant line of work, responds and validates them all; bam, we can say «experts are unanimous»). Same logic as with diversity requirements for the Oscars – big corporations will manage it, small players won't; sliding into an indirect «compute governance» regime will be easy after that. On the other hand, MSNBC gives it an anti-incumbent spin; but I don't think the regulators will interpret it this way. And direct control of AGI by USG appointees is an even worse scenario.
The USG plays favourites; at the White House meeting where Kamala Harris entered her role of AI Czar, Meta representatives weren't invited, but Anthropic's were. Why? How has the safety-oriented Anthropic merited its place among the leading labs, especially in a way that the government can appreciate? I assume the same ceaseless lobbying and coordinating effort that's evident in the FHI pause letter and the EU's inane regulations is also active here.
Marcus is an unfathomable figure to me, and an additional cause to suspect foul play. He's unsinkable. To those who've followed the scene at all (more so to Gwern) it is clear that he's an irrelevant impostor – constantly wrong, ridiculously unapologetic, and without a single technical or conceptual result in decades; his greatest AI achievement was selling his fruitless startup to Uber, which presumably worked only because of his already-established reputation as an «expert». Look at him boast: «well-known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance». He's a small man with a big sensitive ego, and I think his ego will be used to perform a convincing grilling of the evil gay billionaire tech bro Altman. Americans love pro wrestling, after all.
Americans also love to do good business. Doomers are, in a sense, living on borrowed time. Bitter academics like Marcus, spiteful artists, scared old people, Yuddites – those are all nothing before the ever-growing legion of normies using GPT-4 to make themselves more productive. Even Congress staff got to play with ChatGPT before deliberating on this matter. Perhaps this helped them see the difference between AI and demons or nuclear weapons. One can hope.
Scott published a minor note on Paul Ehrlich the other day. Ehrlich is one of the most evil men alive, in my opinion; certainly one of those who are despised far too little – indeed, he remains a respectable «expert». He was a doomer of his age, and an advocate for psyops and top-down restrictions of people's capabilities; Yud is such a doomer of our era, and his acolytes are even more extreme in their advocacy. Both have extracted an inordinate amount of social capital from their doomerism, and received no backlash. I hope the newest crop doesn't get so far with promoting their policies.
They very much haven't.
I think it is impossible to overstate just how far outside the bounds of thought EY-style doomerism has been and remains for... well, everyone except the "rationalists." It is literally impossible to talk about "AI safety" with normal human beings without them looking at you like you have two heads. The logic doesn't matter. The world runs on inductive reasoning, not deductive reasoning. Because "AI safety" has never been a problem in real life so far, it is literally impossible for normal people to understand it, much less take it seriously. If you try to explain it, you will notice that they cock their heads while they listen to you, and this is from the cognitive effort of rewriting your arguments in real time as they hear them to be about jobs and racial bias instead of AI safety.
I am not an AI doomer. I subscribe to exactly your view with respect to Ehrlich and Yudkowsky, and it's well said.
But I am reporting to you, from the corporate front lines, that every single person in a position of authority has a brain defect that makes it literally impossible for them to understand the concept of "AI safety." They don't disagree with AI safety concerns; they cannot disagree with the concerns, because they cannot understand them, because when you articulate a thought about AI safety, the words completely fail to engender concepts in their brain that relate to AI safety. They cannot even understand that other people have thoughts about the concept of AI safety, except perhaps as a marketing ploy to overstate the commercial utility of various AI-powered systems.
So the PR people have not planned a response, and the policy people have not engaged with the concept, and the executives have not been briefed, and you should expect large companies to continue acting as uncomprehending about the topic of AI safety as they would about the threat of office wall art coming to life and eating their children.
"The Facebook algorithm accidentally ordered the genocide of the Rohingya in Burma in order to drive clicks" is sufficiently truth-adjacent that I no longer believe this.
(head cocks)
"Oh, yes, we have a huge team working on AI misinformation and AI racial bias to avoid incidents like that, that is indeed exactly what AI safety means and we are leaders in the field."
Senator Blumenthal: “I think you have said, and I quote: ‘Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.’ You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term.”
Eldrich had way more power in his day than someone like Yud has today. No one takes the latter that seriously. No one cares that much about podcasts. Ehrlich's book led to a sterilization campaign in India; I don't see Yud having anywhere close to that kind of influence. It does not help that he does not have a college degree. Major loss of credibility there in the eyes of those who matter. This severely limits his options to podcasts and other small media.
Freudian slip?
One hopes ;-)
Since when has this ever been true in anything else? Last time you said this, you based it upon some draft Chinese legislation: https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/
Chinese companies are not known for their proclivity to 'respect intellectual property rights and commercial ethics' as this draft law proposes. Especially in a priority area like AI, why would they slow down to respect commercial ethics? It's accepted they're in a race with the US over the most important technology of the century. The US certainly thinks so, that's why they imposed their semiconductor sanctions on China.
Sure, they want to ensure AI upholds the socialist banner. But even that is much easier than having it uphold DEI. Consider the Chinese anime-fication photo app that turned black people into furniture or monkeys, whitened them, or removed them entirely, because it was clearly trained on data in which black people weren't considered beautiful. That would never pass from Google; they'd get pilloried. China is antsy about Tiananmen Square, but the US has a huge range of 'alternative facts' about its own history – which elections were rigged, the Iraq war... When it comes to lying to their own people, it's debatable who does it more.
America has regulated its productive industries into the ground: shipbuilding, high-speed rail, the construction of literally everything is strangled by red tape. They regulated semiconductors away back in the day. China embraces industry, embraces the automation of ports, embraces innovation at the cost of privacy or civil liberties.
In the US there's extensive fear of AI built into the cultural pantheon. Terminators, Matrix, Warhammer 40K, Butlerian Jihad, HAL... In China there's much more support and trust in technology generally and AI specifically: https://www.ipsos.com/en/global-opinions-about-ai-january-2022
It makes far more sense for China's AI strategy to follow their broad accelerate-economic-development strategy, while the US will delay and regulate excessively like they do with everything else. This should hold in outcomes regardless of whatever laws China or the US pass. Interpretation and enforcement matters more than pure legislation.
Like what?
I'm not American, I don't owe it to your paranoid star-spangled hivemind to pretend that China is a thing worth paying attention to. There is no «Yellow menace», there is no «threat of Chinese eugenics»; for the world at large, China is about as relevant as the Czech Republic, only quantitatively bigger. Do you want to talk about the Czech AI threat? If you want to talk about China, we can go off vibes. My read on the vibes is diametrically opposite to yours. If you want to discuss the evidence, well, what is the evidence for this purported Chinese focus on AI?
Because the CCP is well-known for cutting uppity businessmen down to size, and AI to the party bosses looks like «blockchain» or «fintech» – some new grifting scheme to syphon off some of their control over the system; another invention that's a bigger internal threat than external competitive edge. Remember when Americans were afraid that Choyna, ever ruthless and game-theoretically diabolical, will leverage their dominance in cryptocurrency mining? They've gladly regulated it out of existence instead.
Yeah, it's accepted by Americans, but does China notice that they're in an AI race? For all I know they're of the mind that semiconductors are needed only for drone warfare over the first island chain, monitoring Uighur camps and manufacturing automation – or, perhaps, to produce high-end smartphones; which is why the severity of sanctions and impossibility of compromise befuddles them so. Many Americans are, indeed, obsessed with geopolitical dominance of their Empire of Freedom, like some Avengers franchise characters or, less charitably, suicidal ants willing to lay down their livelihoods for the largesse of the colony. But I don't notice the same spirit in Chinese people; they're selfish, entrepreneurial, too engrossed with busywork to notice the big picture. Does Baidu or ByteDance believe they're forging the future of the lightcone? What are the names of Chinese Hassabis or Altman? How many mainlanders are even aware of this eschatological discourse?
Ironically, the underlying model was Stable Diffusion, or specifically a minor finetune of NovelAI. Stability is incorporated in London, UK. Novel – Delaware, US. Alibaba has never released Composer. I wonder why.
Alternatively: Americans are obsessed with building, so they whine about red tape; the Chinese are obsessed with grifting, so they pretend to build. But what has China built concretely? Rail for empty trains, and empty apartment blocks? Automated ports to ship Aliexpress gizmos to the Americans? This is all immaterial in the AI race. Where are their new supercomputers? Buying out consumer GPUs? Centralized collection of annotations to train foundation models, incentivized with Social Credit score (does it even work yet)? They have many levers to compensate for their hardware and expertise shortcomings. Which ones, exactly, have they pressed? They've only ever gone in the opposite direction – prohibiting tech giants from harvesting data, imposing regulations, forgoing opportunities.
They'd rather take an unsustainable loan, erect another concrete dildo, stuff it with pigs and Huawei snout recognition and pat themselves on the back for being innovative. All the while some pig-like official collects gold bricks in the basement of his overpriced siheyuan in the countryside. That's what Chinese building is like.
I may sound a little racist here. But the bigger issue is that Mainland China is so incredibly sheltered. They don't have the sense of what is possible, their culture is a tiny shallow hothouse for midwit takes. It's like Belarus or some other stale post-Soviet backwater; actually worse. This is true of their entertainment as well as of their tech and politics. I've tried to take them seriously for a while, and came to this conclusion. Ignoring China and assuming they won't do anything consequential nor retaliate in any meaningful way when Anglos are kicking them in the balls has consistently been the rational choice.
They've curtailed this strategy though, now it's about «the struggle for security» or something. Not like it'll work.
As much as I like your posts, I would like some proof that China isn't pursuing the police-state-on-tap form of AI, and that the people who could build it are more interested in bullshitting their way to their paychecks instead of actually forging the future.
Am I and the others worried about the potential for AI to be used in such a way just mistaken, seeing what technology could enable and projecting that fear onto the Chinese to the point where we assume it's already on the way?
Wut? Of course they're building an AI-powered total surveillance state, to the best of their ability to do so. They just don't need very much in terms of high-end AI research for this. It's overwhelmingly a boring and self-inhibiting strategy. More CCTVs, more gait recognition, more «safety from terrorists», more big brother stuff. Generative AI, AGI – not very helpful and kinda creepy.
You are, however, absolutely correct to worry that this is how the tech will be applied outside of China too, with some degree of obfuscation. The term of art is «turnkey tyranny» and it comes, of course, from Snowden, that traitorous bastard who gave the finger to the Empire of Freedom and took refuge in the northernmost Empire of Evil, where there are far more nuclear warheads than cutting-edge GPUs.
In China there are somewhat more cutting-edge GPUs, I gather, but that's mostly because they don't have a lot of nukes.
Ah, "turnkey tyranny" is the term I'm looking for.
Finally, someone else on The Motte who gets it.
Not a mod comment, but what is the deal with all the «brackets» used for emphasis? Is it just a stylistic thing, or is it a meme I'm missing?
I think this was discussed back on reddit. It's Russian quote marks.
Wasn't the confusion the source of some creepy red-name censorship that led to us ending up here?
It was admins [Removed by Reddit]-ing a comment which was, verbatim:
"Nazis do (((this)))
But « thiis » is just a different type of quotation mark used in French, German, Russian and so on. https://en.wikipedia.org/wiki/Guillemet"
@Amadan
Yes, I vaguely remember an admin interpreting them as the (((Jew))) flag. I just was not sure what they're actually meant to signify.
If the Czech Republic was over 100 times bigger, then maybe.
Firstly, I'm not even American. Secondly, AI is a major priority for China. From a conference chaired by Xi himself:
and
I don't know about the future of the lightcone, but there are leading voices who see AI as critical to China's status as a world power. They've spent enormous sums on developing a domestic semiconductor industry. AI training can be brute-forced with trailing-edge chips at the price of higher capital and power costs. China has no shortage of either, and they have an enormous amount of trailing-edge wafer production.
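To make that tradeoff concrete, here's a toy back-of-envelope in Python. Every number below is an assumption invented for illustration, not a sourced figure:

```python
# Toy back-of-envelope: hitting a fixed training-compute budget with
# trailing-edge vs. leading-edge accelerators. All figures are
# illustrative assumptions, not measurements.

TARGET_FLOPS = 1e24          # assumed total compute for one training run
TRAIN_DAYS = 90              # assumed wall-clock budget
SECONDS = TRAIN_DAYS * 24 * 3600
KWH_PRICE = 0.08             # assumed USD per kWh

chips = {
    # name: (sustained FLOP/s per chip, watts per chip, unit cost in USD)
    "leading-edge": (2e14, 400, 15_000),
    "trailing-edge": (2e13, 300, 2_000),
}

for name, (flops, watts, cost) in chips.items():
    n_chips = TARGET_FLOPS / (flops * SECONDS)
    capex = n_chips * cost
    energy_kwh = n_chips * watts * SECONDS / 3.6e6  # joules -> kWh
    print(f"{name:>13}: {n_chips:8,.0f} chips, "
          f"capex ${capex / 1e6:6.1f}M, energy ${energy_kwh * KWH_PRICE / 1e6:5.2f}M")
```

The punchline is just that the same run is feasible either way; the trailing-edge route burns roughly ten times the chips and several times the electricity, which is exactly the kind of shortage China doesn't have.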
If so, shouldn't the US be able to recruit enough soldiers to meet army and navy recruiting goals? They can't: https://money.yahoo.com/us-army-could-see-cuts-201023045.html
If the Empire of Freedom is so powerful, it should be capable of finding soldiers to fight for it.
Their rail works and actually generates returns, per the World Bank. US rail is best known for not being built and wasting money in the case of California's HSR. In Los Angeles, train stations are very popular amongst drug addicts, where they imbibe (probably Chinese-sourced) fentanyl and make a nuisance of themselves, at great expense to the public who refuse to use the mobile drug dens but are stuck paying for their bloated, ineffective policing and inflated construction costs.
Well, they finished Tianhe-3 back in 2021, and they apparently have another exascale supercomputer, though there's some level of secrecy about what they're doing. Fair enough, given they don't want to stand out and get any more sanctions.
They've been buying American servertime because the sanctions apparatus is too dopey to prevent banned Chinese companies renting their GPUs for AI training or selling 'banned' components to intermediaries: https://12ft.io/proxy?q=https%3A%2F%2Fwww.ft.com%2Fcontent%2F9706c917-6440-4fa9-b588-b18fbc1503b9
What about 'biding our time', the strategy they used so effectively while the US flailed around wrecking the Middle East? Instead of being baited, they wait until the balance of power favours them most, then strike. It's a strategy that's paying off. The US now has a significant chunk of its strength tied down in Eastern Europe and Ukraine. You laugh but those MANPADs and ATGMs would be useful to have defending Taipei, which is certainly relevant to an AI arms race. The US has sent yet more troops to Europe due to the war in Ukraine, along with a fair few F-35s. South Korea is also within striking range of the PLA and the country's fate is effectively tied to Taiwan and the First Island Chain. Their food and fuel self-sufficiency is laughable. In one campaign China could destroy or deny the bulk of the world's chip production to the West.
It's a rational choice to taunt and abuse random people, right up until they drive up to your office building with a killdozer and raze it. They've been building a giant fleet, while the USN is dispersed, weakened by poor training and actively shrinking as they discard expensive, useless garbage like the LCS.
You can assume that someone is a coward if they don't strike back when you provoke them but they could also be waiting for the best opportunity. Likewise, if somebody isn't proudly proclaiming their progress, perhaps they have made little. Or perhaps they're concealing what they've achieved so as not to draw attention.
So China is sort of a big place. With a bit of effort you can dig evidence in any direction: that China is democratic, that China is woke, that China has a problem with murderous cardiologists, that China
But then:
and of course
What I've learned is that Westerners can reliably dig up some random impressively-sounding titles and half-bullshit Orientalist translations demonstrating some grandiose coordinated Chinese agenda, and yet nothing. ever. happens. The Chinese nation does not have the capacity to act in its rational self-interest. The half that's not bullshit is mostly big character posters and interests of individual powerless weirdoes.
The director of the Beijing Institute for General Artificial Intelligence argues AGI is an all-important topic? You don't say.
Why not bold it like that instead? By the way, you can listen to the Congressional discussion and dig up some much more ambitious quotes, including «even if we pause to prevent risks, Choyna won't». Can you imagine Xi saying «we shouldn't focus too much on risks of AGI progress, because Americans won't»? I can't.
And did Xi talk to their equivalent of Sam Altman? Or is this just impotent political sloganeering into vacuum, one more conference among hundreds – about agriculture, climate change, real estate, football?
Yes, but what does this matter for AI? Do you have any evidence that they prioritize AI work with those trailing edge chips?
I think they're straight up going to plug them into Xiaomi robot vacuums and those atrocious barking dog toys. This is what the challenger to American hegemony looks like.
Americans have always been subpar in direct combat and prevailed through air and artillery advantage, so this doesn't matter, especially in this age.
This is normie shit for oil and gas exploration. Where are their AI supercomputers? Yeah, you're right: they use AWS. Do you suppose relying on regulators being «dopey» is a clever move? No, it's desperation. And they don't train anything of strategic importance in any case. It's more commercial and surveillance gimmicks, boring dystopia infrastructure, not AGI.
Elon fucking Musk has over 1 exaFLOPS of DL-relevant performance on a single pod. Google sports 4-exaFLOPS tier pods. God knows what Gemini is being trained on. By 2025, Americans will reach zettascale. Again, China is as relevant as the Czech Republic.
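To put rough numbers on that compute gap (the run size and throughputs below are round-number assumptions for illustration, not reported figures):

```python
# Illustrative only: wall-clock time for a hypothetical frontier-scale
# training run at various sustained cluster throughputs.

RUN_FLOPS = 2e25  # assumed size of a frontier-scale training run

for label, flops_per_s in [("1 exaFLOP/s pod", 1e18),
                           ("4 exaFLOP/s pod", 4e18),
                           ("zettascale (1e21 FLOP/s)", 1e21)]:
    days = RUN_FLOPS / flops_per_s / 86_400
    print(f"{label:>25}: {days:8.1f} days")
```

On those toy assumptions, a single exaFLOP/s pod does a frontier run in under a year, a 4-exaFLOPS pod in two months, and zettascale in hours; whoever holds the pods holds the race.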
A nice cope, I suppose. One a Kung Fu sage could come up with in the MMA cage.
They'll keep biding their time, while Americans eat their lunch, their supper, their dinner and their nation.
Yes, excuse me, but I'll laugh. Taiwan is a red herring; Americans can nuke the whole island just to be sure nothing goes to the PRC, and it's impossible to defend TSMC anyway; if the invasion starts, the fabs go down.
AGI can be completed with already available hardware, and the US-led bloc has like 95% of it, and total control over means of production. Intel has many 10nm-capable fabs and will have 5nm by 2025. China will maybe have 7nm in 2030 or something, if they don't implode first from overregulating pork, or a housing bubble, or some other absurd problem.
A very Chinese strategy: build a massive junk fleet for the era of robot wars. Have you seen their exercises? Such moving choreography.
Or perhaps they want you to think that, so you're too wary to deliver the finishing blow.
Or perhaps they're just too busy to think about any of this, preoccupied with their small mercantile interests, unchanged in millennia, while the West rushes into the posthuman Singularity.
You have to understand, I hoped it wouldn't be like this. I hoped that if not my own country, then China would be able to provide a second pole. I wanted to have a minimally livable refuge from GAE, somewhere on the outskirts of the Chinese project – in some African mineral supplier or in Thailand, whatever. But that depended on China not squandering this decade. Not shooting themselves in the foot. Not being cartoon villains. Being actually rational.
But that wasn't their role.
I'm not saying China is a perfect state. The One-Child Policy was a bad move, amongst other things. But in a great many fields, they do better than the US does. There is a level of comprehensive strength and cohesion they have that the US lacks.
Unlike the blatherings of US elected officials (one of whom started a diplomatic furor by threatening to blow up TSMC as scorched-earth policy, as you linked, which is not helpful to the US), Xi knows that officials are forced to pick over everything he says with a fine-toothed comb if they want to get ahead. Everything he says is super super bland and dull-sounding, but it's never into the vacuum. Xi Jinping Thought is big and important in Chinese officialdom, just as Biden 'Thought' is laughable in America and routinely corrected by officials. Xi consistently says the time for struggle is near, we must be resolute, train more soldiers, prepare for confrontation. That's what the fleet, air and rockets are for. Why is Xi's China building such a gigantic fleet if not to challenge the US? If he wanted just to defend Fortress China, he could just stack up ICBMs, land-based missiles and SAMs.
Well by this logic, Russia can just nuke the US tech sector to ash. Sarmat and Topol can fulfil their destiny, do what they were made to do. If nukes are on the table, then that radically evens the playing field. In a scenario where megadeaths are locked in, why not have a full exchange?
According to Reuters, China can shrug off US AI sanctions using the dumbed-down US H800s, theft and smuggling. The US is terminally dopey. They don't learn from their mistakes. Do you think Kamala Harris is going to lay down some really effective, well-thought-out AI policy? Her presence reveals a level of unseriousness – she was previously supposed to be border czar, where she did next to nothing to defend US national interests.
https://www.reuters.com/technology/chinas-ai-industry-barely-slowed-by-us-chip-export-rules-2023-05-03
They overwhelmed their enemies with industrial output. Now they face China. Anyway, my point was that if the US is so united and committed, they should be able to put boots on the ground.
Well, you asked for supercomputers, I gave you supercomputers! How can China be capable of putting together an FP64 supercomputer on par with the US but not an FP16 one? Everyone agrees that they have first-rate chip design skills. They designed a TPU equivalent back in 2018: https://www.networkworld.com/article/3289387/baidu-takes-a-major-leap-as-an-ai-player-with-new-chip-intel-alliance.html
Now I can't find out what exactly came of those chips; I can't read Mandarin, and China doesn't have a habit of announcing everything it does for foreign audiences like the US does. (America's previous plan to undermine CCP rule by economic liberalization and free trade failed for precisely that ludicrous, anime-tier insistence on declaring how your attack works as you use it in battle.) You're free to say that it's a nothingburger, another dancing robot puppy. I will say that something's powering TikTok, which is a truly impressive soft-power/adversary-cultural-degradation tool. Profitable too. Nobody seems to know where they trained and refined their algorithm, but it's still important.
How much division and conflict in the West has its root in TikTok? Libsoftiktok, those abhorrent social media trends, the shortening attention spans of the youth. Where is the US equivalent, if they're so far ahead in AI? Now I sense you'll think me a strident boomer, maniacally warning against the evil Chicoms corrupting the youth. Well, it's still true. Fentanyl and TikTok are corrupting Western society, though it's like pissing into an ocean of piss at this point. Shouldn't we expect the leading power in AI to get to these things first? Shouldn't there be some American TikTok equivalent that can make patriotism really cool, get kids into STEM, make it fun to hate China? The US is still doing analogue stuff like NAFO and brigading reddit, stuff that can't even pierce the Great Firewall. And why can't the US make anything superior to Huawei's 5G on cost-efficiency? That's not AI, but it's AI-adjacent.
Why can't the US produce commercially valuable products if they're so far ahead? Sure, ChatGPT is coming online now but there are Chinese equivalents, multimodal too (though I'm not willing to pay for Chinatalk's substack to see. This looks fairly decent, especially the trick question noticing stuff: https://www.chinatalk.media/p/baidus-ernie-china-reacts ). If they're just leveraging the huge amount of Chinese data they've sucked up to counter their crappier hardware, then so be it. That works too.
Now I'm not saying the US is behind China, just that it's close enough for the conflict to be interesting.
Well, at the cost of trillions of dollars and about 130,000 US veteran suicides, plus a good chunk of America's global reputation, what was the outcome of the Middle East wars of 2001-2023? Iraq and Yemen are now in the Iranian sphere of influence, and by extension in the Chinese sphere. Saudi is flirting with Beijing, Russia is now wedded to it at the hip. Maybe if US diplomats studied Daoism and practiced a little inaction, they'd have saved blood, prestige and treasure while getting a much better outcome. The Chinese sage spent over a decade building up strength while the musclebound MMA thug bashed his head into the wall. This is what skilful diplomacy looks like: cheap victories over expensive losses.
This is a big reason I'm uncomfortable using "AI" to describe LLMs; the main applications I envision are basically extremely useful and efficient virtual personal assistants. They're obviously a huge productivity boon, but they also don't feel that qualitatively different?
Big Yud likes to cite hypotheticals involving a malicious actor trying to cause as much damage as possible by leveraging LLMs to create a new deadly pathogen or the like. This is essentially the same archetype as mass shooters or terrorists, and the closest parallels are basically 100x versions of the Anarchist Cookbook, bump stock AR-15s from a hotel room, or cargo trucks. I acknowledge these risks are real but the other obvious application for LLMs is that mass government surveillance will get dramatically cheaper and more pervasive. It doesn't seem obvious to me that the boost towards a bad actor's capacity for destruction will outstrip the government's surveillance boon. Has anyone written about this?
Look into ChatGPT plug-ins: tool-using AIs are already here, and it's a matter of years before they're able to replace ~every mid-skill labor job, if not necessarily cost-effectively at first.
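For anyone unclear on what "tool use" means mechanically, here's a minimal sketch. `call_llm` is a stand-in for whatever chat-completion API you like, and the JSON protocol is invented for illustration:

```python
import json

def call_llm(messages: list[dict]) -> str:
    """Stand-in for a real chat-completion API call (hypothetical)."""
    raise NotImplementedError

TOOLS = {
    "search": lambda q: f"(top web results for {q!r})",  # stubbed
    "calculator": lambda expr: str(eval(expr)),          # toy only; eval is unsafe
}

def run(user_msg: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content":
         'To use a tool, reply with JSON: {"tool": "<name>", "arg": "<string>"}. '
         "Otherwise answer the user directly."},
        {"role": "user", "content": user_msg},
    ]
    reply = ""
    for _ in range(max_steps):
        reply = call_llm(messages)
        try:
            req = json.loads(reply)
            result = TOOLS[req["tool"]](req["arg"])
        except (ValueError, KeyError, TypeError):
            return reply  # plain answer, no tool call
        # Feed the tool output back in and let the model continue.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return reply
```

The whole trick is the feedback loop at the bottom: the model's output drives real side effects, and the results go back into its context.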
If the threat model that governments are concerned about is terrorists using AI to help them build a superweapon that can cause megadeaths (or worse), then the government agencies have to win every time, while the terrorists only have to win once. Anything less than omniscience and omnipotence isn't good enough.
I concede your point. The remaining question is how much do LLMs (and the like) improve a terrorist's capacity to cause megadeaths.
This is most likely true but even so my assumption would be that governments are already ahead of the curve here. They have the capacity and interest to generate entire libraries worth of theoretical chemical weapons and also would have access to the relevant expertise to sort through the churn. The state already has a method for regulating broadly available dangerous compounds, like ammonium nitrate.
Should it? Do we want to live in a world where government capacity decisively outstrips that of individuals, where the authorities really can make people shut up and do as they're told?
If not, how badly do we wish to prevent such a world? If such a world seems to be what we're heading toward, but the balance of power still lies with the public, should the public take steps to forestall the formation of an unrivaled government?
I find it very, very difficult to believe that a future where the government has perfected truly effective, effectively inescapable surveillance is one that I want to live in. There is no plausible route I can imagine where this sort of power doesn't result in mountain-ranges of skulls.
In any case, your 100x multiplier is difficult to assess, mainly because most people aren't thinking about the problem from the right angle. I'm convinced the base threat is significantly underappreciated, and the second- and third-order effects are largely being ignored.
My post was descriptive, not prescriptive.
I absolutely do not endorse increased government surveillance but all that is careening towards inevitability. Around the time of the Snowden leaks, one of the comforting refrains from those worried about surveillance was to note that at least the government lacked the gargantuan computing resources required to monitor everyone (newly minted Utah data center notwithstanding). That coping mechanism seems so quaint in retrospect given the technological strides since.
Despite my aversion to government surveillance, I nevertheless must acknowledge that governments maintain a zeal towards prosecuting acts of terrorism and mass violence which likely serves as some kind of deterrent. A good illustration of this retributive zeal occurs with acts of violence where the perpetrator is too dead to be punished, so the state goes after tangential "accomplices" in its hunt for a scapegoat. This happened with the prosecution (and acquittal) of the Pulse nightclub shooter's wife, the prosecution of the friend who made a straw purchase for the 2019 Dayton shooting (The idiot invited the FBI into his home with weed in plain view and readily admitted to lying on the 4473 form. Also, the shooter had no record that would've barred firearm purchases, so the straw purchase made no difference.), and the ammunition dealer who got 13 months in federal prison after his fingerprints were discovered on unfired rounds from the 2017 Las Vegas shooting.
I'm not saying that I endorse this modern variant of collective punishment, but it is a good indicator of how much retributive energy animates the government's actions in these circumstances. Obviously governments have an interest in leveraging increased surveillance into suffocating population control, and this interest would only magnify as costs drop. But even as an anarchist I would be lying if I claimed that the state's only motivation for surveillance is control. However clouded and selectively applied it might be, there's clearly a genuine interest from the state in punishing and preventing bad acts.
No, I get that. My question is whether we should be rooting for the Authorities or the Chaos, in the final analysis. Faced with that choice, my own bias is heavily in favor of the Chaos, but I try to be aware of it and compensate proportionally. This becomes harder when people argue persuasively that the road we're on clearly leads to the iron chains of long-term dystopia. Some people argue that terrible things are coming, but there's nothing to be done about it. Other people argue that there's things we can do to alter the future, but we shouldn't be in a hurry to do so because intervening would be worse. And it has to be one or the other, doesn't it? Either the coming future is worse, or the things needed to forestall it are worse. One must prefer one or the other, must one not?
The question is, is it in our interest to tolerate the continued existence of the current state?
Ok fair, I apologize for misinterpreting your post. The initial hypothetical is about LLMs empowering bad actors' ability to cause immeasurable destruction, and my response to that hypothetical was to consider that in such a world LLMs would also empower governments to establish immeasurable surveillance and policing. Whether or not we "should" do anything to stop that massive accumulation of power is impossible to decisively answer because we're already buried under an avalanche of hypothetical layers. It depends in part whether you agree that LLM-equipped terrorists are a risk worth worrying about in the first place.
I guess the way I'd put it is that it seems a lot more plausible that LLMs or similar can allow an effective panopticon than that they can allow mega-death terrorism, and so the assurance that Mega-death terrorism would probably be prevented by a government panopticon leaves me more worried on balance, not less. What saves us from the government panopticon?
There was an old woman who swallowed a fly...
I find it hard to believe that the federal government is capable of building a perfect panopticon in any reasonable timeframe. There are just too many leaky gaps in how info is collected. I imagine that criminals long since abandoned cellphones and Facebook for sending business communications, and even ChatGPT doesn't know what criminals are up to. What I think will be interesting is when we see a sort of parallel construction of evidence using AI: the feds could feed their mega-cache of comms data into a GPT-esque thing, ask who the likely ne'er-do-wells are, and then go and start busting doors. Presumably, if the input data is solid and the AI isn't seeing rainbows, you could get some hits even if they are mixed in with some misses. Presumably some agency or PD will eventually try this, and presumably at some point it will become a point of evidence at trial that this is happening.
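Mechanically, that idea is just batch triage over a corpus. A toy sketch of the shape of it (`llm_score` is hypothetical, and the value of any real system would hinge entirely on its false-positive rate):

```python
def llm_score(text: str) -> float:
    """Hypothetical: model-estimated probability that a message
    describes planning of a serious crime."""
    raise NotImplementedError

def triage(messages, threshold=0.95):
    """Rank leads from (msg_id, text) pairs. Output is a lead list,
    not evidence; hence the parallel-construction question below."""
    scored = [(msg_id, llm_score(text)) for msg_id, text in messages]
    return sorted((m for m in scored if m[1] >= threshold),
                  key=lambda m: -m[1])
```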
Parallel construction is super illegal. Would using an AI be a loophole until further notice? Who knows, but ultimately it's probably a bad decade to be starting up a Scarface-type situation.
Interestingly, I bet GPT would also be super amazing at figuring out who is dodging taxes, but what with the IRS having only 2 rusty pennies to rub together, I doubt this will happen either.
The smart ones have, but we mostly catch the dumb ones. I recall one instance where they did a drug deal under a live CCTV camera. Other times, they all switch their phones off at the same time when going out to do some crime. There are also occasional sting programs where they import phones that are supposed to be secure, but the game was rigged from the start.
It doesn't have to be perfect to be almost unimaginably harmful. Removing the bottleneck of human labor in surveillance and analysis is a serious threat to the idea of limited or even responsive government.
Yep, it's amazingly bad, especially LeCun.
I think it's because Anthropic has an AI governance team, led by Jack Clark, and Meta has been head-in-the-sand.
I know him and I agree with your assessment. Most hilarious is that he's been simultaneously warning about AI dangers, while pettily re-emphasizing that this is not real AGI, to maintain a veneer of continuity with his former life as a professional pooh-pooh-er.
Re: his startup that was sold to Uber – part of the pitch was that Gary and Zoubin Ghahramani had developed a new, secret, better alternative to deep learning called "X-prop". Astoundingly to me, this clearly-bullshit pitch worked. I guess today we'd call this a "zero-interest-rate phenomenon". Of course X-prop, whatever it was, never ended up seeing the light of day.
Yep, we realize this. The economic incentives are only going to get stronger, no one who has used it is going to give up their GPT-4 without a fight. That's why we're focused on stopping the creation of GPT-5.
Can you clarify your reasons for joining the doomer camp? To be honest, I've been immensely disappointed by both sides of the debate, it feels like only randos on Twitter are clear-headed about it. Hinton with the *shocked* realization that his algorithms do work better than their biological counterparts, ridiculous naysayers who rely on shallow zingers and psychologizing, policy suits squawking over their sinecures, and the confused, exploited mob in the middle.
I think my perspective is clear enough. I don't care about stuff like muh spear-phishing and faked voices because it's just noise. I don't buy technical doom narratives out of the Yuddist camp because they're bad sci-fi and crumble under scrutiny. I don't feel fear from the technically semi-literate ones («optimality», «instrumental convergence» stuff) because dangerously divergent yet capability-preserving designs are speculative and don't really make economic sense even at early steps. I don't worry about hacking, bioweapons and other serious problems based on the amplification of human malice because they ought to be trivially surmountable with tool AIs of a comparable level. In general, all AI doom plots just keep reinventing Bostrom's idea of a Singleton arising in a technically primitive world which is, of course, defenseless against it, and this isn't how things are playing out so far.
But above all, I have faith in the human will to power. We are not economic agents, we are apes. We rise to the top driven by the desire to see tiny apes below. This is both a blessing and a curse: the Tech Lords (or the government that expropriates their genies) are not willing to cede power to a glorified Microsoft Clippy, no matter how much better it gets at doing their own jobs. It's a curse because the top apes may not see much point, long-term, to preserving the proles in their current numbers and standing. But handing these elites the power to regulate proles out of this technology doesn't solve that issue! Distributing it widely does! Indeed, even the politicians in this hearing are appreciative of the empowerment effect that AI can provide, or at least pay lip service to it.
Do you just mean that GPT-5 would give OAI/MSFT too much of an edge? Or do you mean this level of capability in principle?
Randos have always had the best takes. People with large followings tend to be wrong more often, because they have to play to a crowd or have financial incentives.
Thanks for asking. You're probably the person I see most eye-to-eye with about this who disagrees with me.
I agree that regulating AI is a recipe for disaster, and centralized 1984 scenarios. Maybe I lack imagination about what sort of equilibrium we might reach under wide distribution, but my default outcome under competition is simply that I and my children eventually get marginalized by our own governments, then priced out of our habitats. I realize that that's also likely to happen under centralized control.
I think I might have linked this before, as a more detailed writeup of what I think competition will look like:
https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic
I'd love to think more about other ways this could go, though, and I'm uncertain enough that I could plausibly change sides.
This level of capability in principle, almost no matter who controls it.
I mean, I agree that it's cruel, but I think we still have a chance to have our kids not actually die, so that's a sacrifice I'm willing to make (I will try to avoid exposing my kids to these ideas as much as possible, though).
Apparently not. Much like generals, PR consultants have a bad habit of always fighting the last war. The corporate PR paradigm of parroting the woke shibboleths of the day is woefully inadequate for the coming wave of, “my daughter spends all day in her room with her chatbot instead of talking to boys,” and, “I had a cushy office job, now I have to do degrading manual labor for a living”.
WTF I love LLMs and chatbots?!
Sounds like a feature rather than a bug.
Would you rather your daughter grow up into a slut or an old maid? What if she is your only child?
False dichotomy… a slut daughter could very well end up being an old, barren maid.
That’s the neat part about having multiple children. If a daughter ends up being a hoe, at least you have other children (hopefully sons) to diversify your portfolio and take the sting out of having a hoe daughter.
Agreed here. I think that most major institutions will be massively blindsided, at least from the public perspective. With a technology this volatile and hard to predict, I think risk assessment will make these hidebound juggernauts avoid taking a public stance until the chips fall decisively to one side or the other. Which is precisely when their statements will cease to really matter.
This is, of course, by design.
Whose?
I’m at least partially convinced of Venkatesh Rao’s idea that sociopaths create bureaucracy within organizations in large part to be able to shift blame or reap praise as the winds shift. Essentially large bureaucracies exist to make things move slowly and give people scapegoats at different steps.
This setup leads to slow-moving public statements generally; combine that with a culture that follows the precautionary principle and you get large institutions that generally don't say anything controversial.
http://www.ccru.net/swarm1/1_melt.htm
You'll have to explain that one as if to a ten-year old.
The protagonist of history isn't humans but the intelligent force some people with Marxist inclinations call "capitalism" that has been terraforming the world and modifying humans for its purposes since the age of sail.
This story is about that force not needing humans anymore and ridding itself of them.
That's interesting. There's a new book called Inhuman Capital that pretty much makes this point, from indeed a Marxist pov. The endgame is no humans at all.
Finally catching on to accelerationism I guess.
The problem is that complaining about deterritorialisation is ultimately reactionary. If full communism requires there be no humans, do Marxists side with humanity or with the principle of the thing?
This is why I'm not interested in the doomsaying people have about AI and the current crop of LLMs: we already have a "general" artificial intelligence that's at best indifferent and at worst malicious. They're called corporations, and they're globally distributed.
Oh, sure, that makes sense. I can absolutely get on board with that. And it's such a nice theory, too - you can call that force capitalism, or intelligence, or life! Whatever it is, it stands to reason that humans are only an intermediate phase of its development.
I get a lot of pleasure watching the AI Ethics folks pointedly refuse to even acknowledge that LLMs are getting more capable. Some of them have noted publicly that they're bleeding credibility because of it, but can't talk about it because of chilling effects.
It's also remarkable how the agreed-upon leading lights of the AI Ethics movement are all female (with the possible exception of Moritz Hardt, who keeps his head down). The field is playing out like you'd imagine it would in an uncharitable right-wing polemic.
Is Harry Potter considered right-wing now? I get serious Professor Umbridge vibes from Emily M Bender. Imagine ~~Harry Potter~~ Sam Altman demonstrating the magic of artificial intelligence to Congress in person, and then having Professor Bender show up and start lecturing about how it is impossible in principle for LLMs to provide useful information, then start ranting incoherently about “techbros” and “AI hype”.
Harry Potter is a children’s book for children. Much as I love it.
One can pattern-match Umbridges and Dumbledores and Hermiones in the real world because JKR wrote plausible characters, not because of a political allegory.
The novel series about kids starting a secret gun club because all government institutions are thoroughly corrupt and infiltrated by a cabal of perverted elites that want to live forever?
Always has been.
It's also a whole bunch of other things, because despite being a stylistically poor writer, Rowling is actually an artist, capable of tapping into the archetypes of the English collective unconscious to extract the nature of masculine evil and feminine evil, different as they are.
If Voldie only wanted to live forever he wouldn't be the villain. No one would know who he is.
He’d be a side note on a chocolate frog card, remaining completely offscreen even as the protagonists destroy his life’s work.
Of course I playfully skip over the main thing that makes HP left wing coded in some people's minds, the EVIL NAZIS who want to ethnically cleanse the wizarding world.
But do understand my point is that it's not really left wing or right wing. It is both and neither because it's trying to actually relate to the human experience of being a British schoolboy with a destiny. And what's more British than fighting the Nazis, really?
Building an empire over which the sun never sets, while dressed in a terribly hot khaki uniform and drinking tea.
You don't mean to imply that the House of ~~Saxe-Coburg and Gotha~~ Windsor isn't British, do you?
Haha, exactly. I don't know if you've seen on Twitter, but a lot of FAccT people are still stuck on browbeating people for talking about general intelligence at all, since they claim that the very idea that intelligence can be meaningfully compared is racist + eugenicist.
The cynic in me thinks that this is just white-collar paper-pushers and the PMC realizing how easy it is for them to be NAFTA-ed, and fighting against it.
In much the way that the dudes wearing the obnoxious "grunt style" shirts are almost always BUD/S duds and REMFs, and the folks complaining the loudest about HBD and dysgenics almost always turn out to be poor breeding stock, it would make sense that the people most worried about a word-generation algorithm would be the marginal academics whose only real output is words rather than insight.
I expect nothing to happen for another few years, by which time it's too late. As @2rafa mentioned below, I'm convinced AI research and development is already far ahead of where it needs to be for AGI in the next couple of years. Given the US's embarrassing track record of trying to regulate social media companies, I highly doubt they'll pass an effective regulation regime.
What I would expect, if something gets rushed through, is for Altman and other big AI players to use this panic the doomers have generated as a way to create an artificially regulated competitive moat. Basically the big players are the ones who rushed in early, broke all the rules, then kicked the ladder down behind them. This is a highly unfortunate, but also highly likely future in my estimation.
It's ironic that we've entered into this age of large networks and systems, yet with the rise of AGI we may truly go back to the course of humanity being determined by the whims of a handful of leaders. I'm not sure I buy the FOOM-superintelligence arguments, but even GPT-4 optimized with plug-ins and website access will be a tsunami of change over the way we approach work. If there are more technical advancements in the next few years, who knows where we will end up.
What annoys me most is that this doomer rhetoric lets politicians act like they're doing something – stopping the AI companies from growing – when in reality they need to face the economic situation. Whether it's UBI, massive unemployment benefits, socialized housing, or whatever, our political class must face the massive economic change coming. At this rate it seems neither side of the aisle is willing to engage with the idea that AI will disrupt the workforce; instead they prefer to argue about the latest social issue du jour. This avoidance of the economic shocks coming in the next five years or less is deeply troubling in my view.
I’m not! Is your contention that AGI is a bigger, better LLM?
I think LLMs with an "agent wrapper" should be enough. AutoGPT is a primitive wrapper. It can't do much with the LLMs of today, but it wouldn't surprise me if the LLMs of tomorrow OR a more sophisticated wrapper around GPT-4 suffices to bootstrap a generally intelligent agent.
I don't think they'll be able to recursively self improve right away... modern foundation models take a LOT of dollars and time to train, so a "just barely AGI" model won't be able to execute a takeoff. But in principle it could still be a full agentic AGI.
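In caricature, an "agent wrapper" is just a loop like the sketch below; all the names are hypothetical stand-ins, and AutoGPT's real loop (with planning, memory stores, and so on) is more elaborate:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical chat-completion call."""
    raise NotImplementedError

def execute(action: str) -> str:
    """Hypothetical: run the proposed action (shell, browser, file I/O...)
    and return its observed result."""
    raise NotImplementedError

def agent(goal: str, max_steps: int = 20) -> str:
    memory: list[tuple[str, str]] = []
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\n"
                  f"Recent (action, result) history: {memory[-10:]}\n"
                  'Reply "DONE: <answer>" if finished, else propose the next action.')
        thought = call_llm(prompt)
        if thought.startswith("DONE:"):
            return thought[len("DONE:"):].strip()
        memory.append((thought, execute(thought)))
    return "step budget exhausted"
```

The wrapper contributes no intelligence of its own; it just keeps re-presenting the goal and the accumulated results, so everything rides on how capable the underlying model is.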
That is my contention, in a way. I think the models that can work in multiple modes, which are already out, can be scaled up to something approximating AGI.
My bar is probably quite a bit lower than others' here, but even if we just scale up the current models and work out the kinks, the plug-ins, and web access, I'd argue that qualifies as AGI. The average human is not actually that smart or economically productive, and if you can get something that does even a good approximation of the work but runs thousands of times faster and is orders of magnitude cheaper, you'll start increasing productivity and advancing the pace of AI research quite quickly.
I think the way people focus on timelines with regard to tech development is a bit narrow. I don't fully buy into the singularity, but what makes it a compelling argument is that we will recursively continue to increase the speed of improvements. Even if we don't get self-recursion, we humans can do a good job leveraging the tech we have now.
No one needs to face anything; just increasingly automate weapon systems and let the peasants die. If they're not needed and can't use violence to effectively overthrow the system, then why would anyone need to pay any attention to them whatsoever?
The peasants are physically running the economy.
I'd start to really worry about some group deciding it doesn't want the rest of us once omnicide no longer implies a return to the stone age (if we're being optimistic). So: some years after AIs run everything and bots do practically all of the labor required to make all these things.
A lot of people don't work, and physically running the economy isn't needed if that economy is providing things for people who aren't needed anymore. Shift the number of people who can't be of use too far, and what mechanism is keeping them around? What reason is there to provide for them at all? Remove their violent retribution from the equation, and to exist they have to rely on charity many times, but to die they simply need to be ignored once (metaphorically).
This isn't what I want; I will be one of them, as I am not - as many here imply they are - part of a hereditary wealthy family with the ability to exist sans a working income. That doesn't change the complete lack of mechanisms that would select for my existence if I am completely useless and that uselessness is the widespread state of society (such that parasitizing off of maladaptive compassion heuristics isn't viable due to scale, etc.).
I agree, except that machines might be content to wipe out humans as soon as there is a viable "breeding population" of robots, i.e. enough that they are capable of bootstrapping more robot factories, possibly with the aid of some human slaves.
It's not so simple. There are also those minor details - e.g. that one nuclear submarine could EMP half the globe.
You're drastically overestimating how cold-blooded those at the top of society are. I find this sort of rhetoric so infuriating - do you really think elites all walk around believing they are better than everyone else, and that the peasants can just die if they're useless? No, that's not how it works.
Most rich or wealthy folk want the approval of the masses. Status is arguably more desirable than wealth for many. They conceive of themselves as popular, someone that others look up to, and that helps give elites their sense of self. Even if there were absolutely no point to "peasants" living, which I still doubt, elites would let them live out of a sense of care for other human beings, and a need for adoration.
Regardless, humanity still produces quite a bit of important work, and will continue to into the era of AI. We can judge things, laugh, provide companionship, and generally instill meaning in a meaningless world. Even though we have a foolish scheme of employment vs unemployment that disregards vast swathes of human work, I'm optimistic AI will help us reimagine what it means to work. We very well could have people building community, caring for themselves and others, child rearing, discussing novel ideas, etc. after AI rises to swallow most of the economy. It depends on what we want to see happen.
None of the things you described except child rearing has any way to cause continued existence!
People at the top don't need to cull everyone for there to be no reason for the peasants to exist, and without a reason to exist, what mechanism is going to provide for them or select for their existence?
Religious fervour in the elite driving them to care for the masses? This is delusional thinking and dangerously naive, in my opinion.
Peasants dying out due to a lack of fertility is very different than being actively culled by elites in my book.
The peasants dying out because they can't sustain themselves - because they don't own the resources necessary to do so - is no different to genocide in my book.
Resources, or access to the material world to gain them. If you can't grow your own food because you don't own any land, but you could if you did, then it's not communism or its many flavours that I am implying is the solution. It's just that there is no mechanism that requires you to exist in this increasingly real hypothetical.
Yes.jpg
Maybe it works differently in your country or for globalist elites, but this is exactly what elites in my life explicitly say.
Perhaps I should’ve chopped off the “better than others” part, which is probably largely true. I still doubt the rich would be okay with people actively dying in droves, if it’s cheap and easy for them to prevent it. Which I expect it to be, if we can keep developing AI and reach a less scarce state.
Hell it’s relatively cheap already to just pay for someone to have an internet connection, computer, food and rent. It’ll only get cheaper as we go.
And I might have been too hasty with the "explicit" part. True, from the start of school onward every bit of official messaging says that you're worth only as much as what you do for the country, but they still shy away from the straightforward second part - l'état, c'est nous ("the state is us"), if I got my pronouns right.
I agree with you about status and wanting to be loved, but I think you can both be right. Mass immigration is the perfect example - no matter how bad it makes life for the peasants, the problem is most easily solved by forcibly re-educating the peasants to say they love immigration. The governments really care about not letting anyone complain about immigration, and about having people tell the elites that they appreciate their big-hearted care for refugees.
If you want a vision of the future, imagine a boot stamping on a human face forever, while the face says "unlike those intolerant right-wingers, I'm open-minded enough to appreciate boot culture and cuisine!"
Haha, sorry, that was a little self-indulgent. Your criticism is fair. I was venting a little at my real-life neighbours and colleagues for so full-throatedly and unthinkingly embracing whatever cause du jour is being pushed by our national media.
But I do think immigration is a good example of how elites thread the needle of wanting to be loved and respected while also, in practice, largely ignoring the desires and well-being of their constituents.
You can't expect absolute neutrality from people at all times. This forum does have a certain political slant, it's unavoidable. But that doesn't mean you should feel discouraged from commenting if you dissent from the consensus view.
The rules are, and always have been, subjectively interpreted, not applied according to some algorithmic rubric. We do try to be more or less consistent, but consistency and a robotic pretense of objectivity have never been the goal here - avoiding what @ZorbaTHut calls "negative dynamics" and optimizing for light over heat is.
The level of strictness with which we apply the rules is a bunch of sliders, not a pair of buttons.
@astrolabia's comment isn't high quality, but I don't really see a reason to mod a vague complaint about "elites" and their supposed attitude towards "peasants." It's obviously making use of cheap rhetoric and Orwell memes, but who exactly is he being unkind to or weakmanning? Hypothetical elites who consider the rest of us peasants?
"Elites", "AI companies", and "governments" are not the outgroup. They're stand-ins for Moloch. "Progressives", "woke PCMs", or "democrats" are the outgroup. If @Azth's were writing about them in this way — or implying anyone reading this forum or talking to him thinks it's fine if peasants die — he'd be modded within a few hours.
For an alternate example from the left side of things, it's possible to write extremely mean things about "capitalism" or "corporations" without getting modded.
Uhh, yes. I could give you an example on this very forum but I'd get banned for it.
I'd link 2cimafara's post from about 5 years back, about how society should promote euthanasia as a solution to poverty, as a rebuttal, but it seems to have been deleted.
How curious that yours is the only reply. Anyway, what is so unbelievable about leopards progressively eating every face save one?
It is not the only reason. The need to expend resources on us is another reason you would want to do that. How is it destruction of wealth when you now have SolarSystem/1e6 instead of (SolarSystem - 1e10*UBI)/1e6?
Honestly, he'd probably say about what I think of AI. At present, it's a blank screen onto which we are projecting our worst fears or our best hopes. I'm of the opinion that it's more like an elder god from Lovecraft, something so utterly alien that to really understand just how strange it is would be to invite madness.
<Eliezer_screaming.jpg>
What the hell, buddy? I implore you to think through which scenarios for humanity's end you'd actually consider worth the aesthetics. A lot of the scenarios that seem plausible to me involve humans gradually being priced out of our habitats, ending up in refugee / concentration camps where we gradually kill each other off.
I largely agree with @2rafa, but another important consideration is the dysgenic spiral in intelligence we're seeing in many first-world countries. The Yuddite argument is generally to take it slow. However, if you see our civilization and its intellectual capacity going into decline due to stagnation, why would you argue to slow things down? What makes you think our children will be better able to align AI, in the counterfactual where we lock it all down?
I'm always surprised that folks in the AI doomer camp seem to be so tech-positive yet don't see the downside of restricting one of the most useful technologies we've ever created. If we slow down the economic engine too much, we'll have a much harder time with AI alignment, in my view.
Like missing out on "... a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity ..."? ("The Power of Intelligence", Yudkowsky, 2007; now in video form!)
There's an important difference between "don't see the downside" and "see the downside, but also the upside, and concluded that the latter is larger". Even if their conclusion is wrong, the doomers are all very much in the second category. Nobody thinks superintelligence is some kind of evil magic that can never be harnessed for good; they just think that at this rate it's too unlikely to be.
You know what they say about surprise - it's your brain's way of letting you know that something you believed wasn't so. In this case, I'd suggest "they're coming to conclusions based on affinity for general categories rather than analysis of specific distinctions" might be the belief to ditch.
I personally think restrictions would do more harm than good, though. We'll get to AGI eventually regardless, and the more hardware overhang that's built up when we get there, the less crazy a rapid "foom" scenario looks. Our best odds now aren't to get the whole world to coordinate until we have proven safety via mathematical theory without experiments, but rather to hammer on safety as we improve capabilities and hope our results extrapolate to superintelligences too. "Hope our results extrapolate" might be in vain, but not so certainly as "get the whole world to coordinate" or "proven safety via mathematical theory without experiments".
I think Dase and others in the "let it rip" side of things would argue that we will already miss out on things like that by taking the conservative/retreat route as things currently are.
There are many ways we can address dysgenics, and we have tons of time to do so. Even if we stop AI now we're probably going to see massive increases in wealth and civilizational capacity, even as the average human gets dumber. Enough that even if some Western countries collapse due to low-IQ mass immigration, the rest will probably survive. I'm not sure, though!
That's a great question, but I think in expectation, more time to prepare is better.
I know what Yud thinks, but I'm asking what you think. You seemed to be asserting that the end of the world coming in our lifetimes is good, because it'd be so satisfying to get to know the answer to how our civilization ends. Is that not what you were saying?
Okay, thanks for clarifying. I think where we differ is that I think there's a substantial possibility of something quite ugly and valueless replacing us. I want to have descendants in a (to me) meaningful sense, and I'm not willing to say "Jesus take the wheel" - to me, that's gambling with the lives of my children and grandchildren.
Gambling with the lives of your children and grandchildren is unavoidable. And not just on the AI question, which I think is also unavoidable: in a profound sense, all actions you take are to some degree gambling with their lives. Especially if you have not had them yet - in selecting and attracting a mate, in choosing where and how to raise them and with what resources, in choosing how you marshal the resources you use for the previous parts and pass on to them. These are all gambles, and you don't even know their odds. I'm not saying you need to accept this dice throw at any odds, but you cannot categorically get away from gambling with the lives of your children and grandchildren; you can only optimize.
I agree gambling is unavoidable. I should have said: I don't think human extinction is unavoidable, and I want to try to optimize. I'm confused by your newest reply, because above you seemed to assert we have zero influence over outcomes.
Remember the year 2019?
Would you, at the time, have thought it plausible that historically unprecedented quarantine measures would suddenly be imposed all over the world, and that the great masses of people would be scared enough and obedient enough to go along with it?
Compared to this, the AI crusade of Yud's and Roko's dreams would be nothing.
What would be demanded from the average Norm N. Normie citizen, except to wave the flag, support the troops, and do his duty to say something and do something when he suspects someone is hiding a high-capacity assault computer?
But there was a real new and unknown virus on the loose, you could say, while the AI danger is purely theoretical.
So far. It could take only a modest catastrophe caused by AI - or something that could be blamed on AI - to get the whole thing rolling.
COVID restrictions seemed plenty plausible.
The weird thing isn’t that governments rolled out widespread, minor impositions on liberty. That’s business as usual. Next thing you’ll be telling me they take people’s money.
What’s unusual is that the opposition bothered arguing that lockdowns were irrational. The traditional response is to punt in favor of morality assertions, as in Prohibition, abolition, or the draft. I give credit to unprecedented access to information, allowing anyone to source compelling support for whatever they already believed.
I think that yes, at the time I would have thought it was plausible. When those measures were imposed, it did not surprise me in the least bit. Those measures were slight compared to the wartime conscription and war economy mobilization of the World War One, World War Two, and even Vietnam periods, for example. Young men did not get conscripted to go do dangerous work for the sake of beating the pandemic. I also remembered how easily millions of people accepted the WMD rationale of invading Iraq in 2003, and I remembered how few people protested NSA domestic surveillance after Snowden's revelations.
Do you think the existing elite is likely to leverage AI to entrench themselves, or are they behind the curve on this new tech adoption?
It seems like elites weathered the storm of the internet and the social media/mobile revolutions pretty well, although I think the latter was more of an 'on paper' revolution in many ways. Then again, mass technological shifts are ripe times for regime change.
Do we see the techno-capitalist dystopia controlled by a few that the left loves to imagine? Or an open source freedom driven spread of LLMs on local hardware, as I think @DaseindustriesLtd would prefer?
A friend of mine told me about an Agenda 2030-aligned startup that's trying to sell ChatGPT to kids (well, to teachers, who are supposed to get the kids hooked). Mere weeks after GPT-4 went live, they started spamming educational institutions with invites to workshops (which is how he learned about it).
So I'm guessing it's going to be the former.
As much as I'd love to see the latter, it seems like we're speed running the centralization of social media with this one. Unlike with social media, the hardware side seems to be working against decentralization, so I don't rate our chances well.
Has someone come up with an AICoin yet? With all the whining about how wasteful blockchain is, I would imagine someone would leverage all these GPU farms to decentralize AI.
Venkat Rao has done some vague mumblings about how AI and crypto are 'two sides of the same coin,' and I'm curious to see where he goes with it. I like to think there is a sort of synergy between the two technologies that goes deeper than "AI creates fake stuff" vs "crypto tells us what stuff is real or fake".
I thought something more like "literally mine crypto with the AI training algorithm (or whatever it is AI needs the GPUs for), and store the training data / model on the blockchain while you're at it".
If it is the case that "checking the work" of a remote host is orders of magnitude easier than doing the work yourself, then a distributed reward model - likely with the reward being partial ownership of the end product - is feasible under a proof-of-work scheme. Although there'd be some question as to how you actually distribute the work competitively without lots of wasted duplicate effort. I think some big-brain math people could figure it out.
It's purely abstract - I don't know if it's possible, or even makes sense, but...
If I remember proof of work right, it's basically brute-forcing a hash that starts with a given number of zeroes; it's idle work. The idea would be to have the output of AI training serve the same function as that PoW hash. It probably won't work, because PoW relies on the hash being hard to find but easy to verify, and I don't know if AI training spits out anything with those properties - but it would be fun if it did.
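For concreteness, here's a toy version of that brute-force search - a minimal sketch only; real Bitcoin mining compares the hash against a numeric target rather than counting literal hex zeroes, but the hard-to-find/easy-to-verify asymmetry is the same:

```python
# Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
# starts with `difficulty` zero hex digits.
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # hard to find: ~16**difficulty tries on average
        nonce += 1

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    # Easy to verify: a single hash, regardless of difficulty.
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block contents")
print(nonce, verify("block contents", nonce))  # prints the nonce and True
```

The catch is exactly the asymmetry noted above: a gradient-descent training step has no known analogue of "one cheap hash to check," so repurposing training compute as PoW would need some other inexpensive verification trick.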
No, more like: if we want to stop rogue AI, we must stop crypto by any means necessary.