
Culture War Roundup for the week of April 7, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


The future of AI will be dumber than we can imagine

Recently Scott and some others put out this snazzy website showing their forecast of the future: https://ai-2027.com/

In essence, Scott and the others predict an AI race between 'OpenBrain' and 'DeepCent' (transparently OpenAI and DeepSeek), where OpenAI stays about 3 months ahead of DeepSeek up until superintelligence is achieved in mid-2027. The race dynamics mean OpenBrain faces a pivotal choice in late 2027: accelerate and eventually obliterate humanity, or do the right thing, slow down and make sure the AI is under control, whereupon humanity enters a golden age.

It's all very much trad AI-alignment rhetoric; we've seen it all before. Decelerate or die. However, I note that one of the authors has an impressive track record, having roughly foreseen the innovations we've seen today back in 2021: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

Back to AI-2027! Reading between the lines, the moral of the story is that the President should centralize all compute into a single project as quickly as he can - that's the easiest path to beating China! In their narrative, the only way China keeps up with the US in compute is by centralizing first, while OpenAI stays only a little ahead because the other US companies, who all have their own compute, are busy replicating OpenAI's secret tricks, albeit 6 months behind.

I think there are a number of holes in the story, primarily where they explain away the possibility of the human members of the Supreme AI Oversight Committee launching a coup to secure world hegemony. If you want to secure hegemony, this is the committee to be on - and you'll make sure you're on it! The upper echelons of government and big tech are full of power-hungry people. They will fight tooth and nail to get into a position of power that makes even the intelligence apparatus drool with envy.

But surely the most gaping hole in the story is the expectation of rational, statesmanlike leadership from the US government. It's not just a Trump thing - gain-of-function research was still happening under Biden. While all the AI people worry about machines helping terrorists create bioweapons, the Experts are creating bioweapons with the labs and grants given to them by leading universities, NGOs and governments. We aren't living in mature, well-administered societies in the West generally; it's not just a US thing.

But under Trump the US government behaves in a chaotic, openly grasping way. The article came out just as Trump unleashed his tariffs on the world, so the writers couldn't have predicted that. There are as-yet-unconfirmed reports that people were insider-trading on tariff-relief announcements. The silliness of the whole situation (blanket tariffs on every country save Belarus, Russia and North Korea, plus total trade war with China... then trade war on China with electronics excepted) is incredible.

I agree with the general premise of superintelligence by 2027. There were significant and noticeable improvements across Sonnet 3.5, 3.6 and 3.7 IMO. Supposedly the new Gemini is even better. Progress isn't slowing down.

But do we really want superintelligence to be centralized under the most power-hungry figures of an unusually erratic administration in an innately dysfunctional government? Do we want no alternative to these people running the show? Superintelligence policy made by whoever can snag Trump's ear, whiplashing between extremes as dumb decisions are made and unmade? Or the never-Trump brigade deep in the institutions running their own AI policy behind the president's back, wars of cloak and dagger in the dark? OpenAI has already had one corporate coup attempt; the danger is clear.

This is a recipe for the disempowerment of humanity. Absolute power corrupts absolutely and these people are already corrupted.

Instead of worrying 95% about the machine being misaligned and brushing off human misalignment in a few paragraphs, far more care needs to be focused on human misalignment. Decentralization is a virtue here. The most positive realistic scenario I can think of involves steady, gradual progression to superintelligence - widely distributed. Google, OpenAI, Grok and DeepSeek might be ahead, but not that far ahead of Qwen, Anthropic and Mistral (Meta looks NGMI at this point). A superintelligence achieved today could eat the world, but by 2027 it would only be first among equals. Lesser AIs working for different people, in alliances with countries, could create an equilibrium where no single actor can monopolize the world. Even if OpenAI has the best AI, the others could form a coalition to stop them scaling too fast. And if Trump does something stupid, the damage is limited.

But this requires many strong competitors capable of mutual deterrence, not a single centralized operation with a huge lead. All we have to do is ensure that OpenAI doesn't get 40% of global AI compute or something huge like that. AI safety is myopic, obsessed solely with the dangers of race dynamics above all else. But besides the dangers of decentralization, there's also the danger of losing the race. Who is to say the US can afford to slow down with the Chinese breathing down its neck? The Chinese have done pretty well with the resources available to them, and there's a lot more they could do - mobilizing their vast, highly educated population to provide high-quality data, for a start.

Altman has credited Eliezer Yudkowsky with getting people interested in AGI and superintelligence, even though OpenAI and the AI race were the one thing he didn't want to happen. There really needs to be more self-awareness about preventing this kind of massive self-own from happening again. Urging the US to centralize AI (which happens in the 'good' timeline of AI-2027, and which would ensure a comfortable lead and the resolution of all danger if done earlier) is dangerous.

Edit: US secretary of education thinks AI is 'A1': https://x.com/JoshConstine/status/1910895176224215207

This is a very lightly held conviction, as I'm not technically minded enough to understand the deeper mechanics of AI.

That said, it feels like we’re at the peak of inflated expectations on the Gartner hype cycle. This is entirely based on vibes, bro — I have no solid argument beyond having worked in finance and becoming interested in bubbles, and thinking “I’ve seen this before.” It’s a weak case.

I've already seen predictions falling substantially short of the mark.

I do think AI will be disruptive and world-changing. But I don’t find the “superhuman” predictions particularly convincing — or many of the other wilder forecasts. The robotics applications, though, seem possible and genuinely exciting.

If anyone has a solid counter to this lazy argument, I’d be keen to hear it.

The most positive realistic scenario I can think of involves steady, gradual progression to superintelligence - widely distributed. Google, OpenAI, Grok and DeepSeek might be ahead, but not that far ahead of Qwen, Anthropic and Mistral (Meta looks NGMI at this point). A superintelligence achieved today could eat the world, but by 2027 it would only be first among equals.

If it turns out that our current approach to AI fizzles out at von Neumann IQ levels, then all is good: historically, that has not been sufficient intelligence to take over the world. In that case, it does not matter much who reaches the plateau first - sure, it will be a large boon to their economy, but eventually AI will just become a commodity.

On the other hand, if AI is able to move much beyond human levels of intelligence (which is what the term "superintelligence" implies), then we are in trouble. The nightmare version is that there are unrealized algorithmic gains which let you squeeze much more performance out of existing hardware. Someone tells an AI cluster to self-improve one evening, and by morning, that AI is to us as we are to ants.

In such a scenario, it is winner-takes-all. (Depending on how alignment turns out, the winner may or may not be the company running the AI.) The logical next step is to pull up the ladder you have just climbed. Even if alignment turns out to be trivial, nobody wants to give North Korea a chance to build its own superintelligence. At the very least, you tell your ASI to backdoor all the other big AI clusters. It does not matter if they would have achieved the same result the next night, or if they were lagging a year behind.

(Of course, if ASI is powerful enough, it might not matter who wins the race. The vision the CCP has for our light cone might not be all that different from the vision Musk has. Does it matter if we spread to the galaxy in the name of Sam Altman or Kim Jong Un? More troublesome is the case where ASI makes existing militaries functionally obsolete but does not solve scarcity.)

How valuable is intelligence?

One data point that I've been mulling over: humans. We currently have the capability to continue to scale up our brains and intelligence (we could likely double our brain size before running into biological and physical constraints). And the very reason we evolved intelligence in the first place was that it gave an adaptive advantage to those who had more of it.

And yet larger brain size doesn't seem to be selected for in modern society. Our brains are smaller than our recent human ancestors' (~10% smaller). Intelligence and its correlates don't appear to positively affect fertility. There's now a reverse Flynn effect in some studies.

Of course, there are lots of potential reasons for this. Maybe the metabolic cost is too great; maybe our intelligence is "misaligned" with our reproductive goals; maybe we've self-domesticated and overly intelligent people are more like cancer cells that need to be eliminated for the functioning of our emergent social organism.

But the point remains that winning a game of intelligence is not in itself something that leads to winning a war for resources. Other factors can and do take precedence.

This assumes that something like human-level intelligence, give or take, is the best the universe can do. If superintelligence far exceeding human intelligence is realizable on hefty GPUs, I don't think we can draw any conclusions from the effects of marginal increases in human intelligence.

I've been pulling heads out of very stretched vaginas for the past week, and suspect there are biological reasons other than metabolism why larger head size is selected against.
This might go away if we got rid of the sexually antagonistic selection that's limiting larger hip sizes in women.

The nightmare version is that there are unrealized algorithmic gains which let you squeeze much more performance out of existing hardware. Someone tells an AI cluster to self-improve one evening, and by morning, that AI is to us as we are to ants.

This implies that it is possible to self-improve (i.e. to become more intelligent) with limited interaction with the real world.

That is a contentious claim, to say the least.

This is one of several areas where the consensus of those who are actively engaged in the design and development of the algorithms and interfaces breaks sharply with the consensus of the less technical, more philosophically oriented "AI Safetyism" crowd.

I think that, coming from "a world of ideas" rather than "results", guys like Scott, Altman, Yudkowsky, et al. assume that the "idea" must be where all the difficulties reside, and that the underlying mechanisms, frameworks, hardware, etc. that make an idea possible are mere details to be worked out later, rather than something like 99.99% of the actual work.

See the old Carl Sagan quote: in order to make an apple pie "from scratch", you would first have to invent a universe with apples in it.

Indeed.

And while I don't claim particular expertise such that my opinion ought to be given too much weight, I'm with Feynman when he said it doesn't matter how nice your idea is: you have to go test it and find out.

I think the problem is that we still lack a fundamental theory about what intelligence is, and quantifiable ways to measure it and apply theoretical bounds. Personally, I have a few suspicions:

  • "Human intelligence" will end up being poorly quantified by a single "IQ" value, even if such a model probably works as a simplest-possible linear fit. Modern "AI" does well on a couple new axes, but still is missing some parts of the puzzle. And I'm not quite sure what those are, either.
  • Existing training techniques are tremendously inefficient: while the processes are fundamentally different, humans can be trained with less than 20 person-years of effort and less than "the entire available corpus of English literature." I mean, read the classics, man, but I doubt reading all of Gibbon is truly necessary for the average doctor or physicist, or that most of them have.
  • There are theoretical bounds to "intelligence": if the right model is, loosely, "next token predictor" (and of that I'm not very certain), I expect that naively increasing window size helps substantially up to a point, after which your inputs become "the state of butterfly wings in China" and are substantially less useful. How well can "the next token" generally be predicted from a given quantity (quality?) of data? Clearly five words won't beget the entirety of human knowledge, but neither am I convinced that even the best models are very bright as a function of how well-read they are, even if they have read all of Gibbon. (A toy probe of the window-size question is sketched just below.)
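
To make that last point concrete, here is a minimal sketch, assuming only some local plain-text file (corpus.txt is a hypothetical stand-in): fit character-level n-gram models of increasing order on half the text and score bits-per-character on the held-out half. A toy, not a claim about frontier models.

```python
from collections import Counter
import math

# Toy probe: does a longer context keep paying off? Train character
# n-gram models of increasing order on half a text, then measure
# bits-per-character on the held-out half.
text = open("corpus.txt", encoding="utf-8").read()  # hypothetical local file
train, test = text[:len(text) // 2], text[len(text) // 2:]

def bits_per_char(order: int) -> float:
    ctx = Counter(train[i:i + order] for i in range(len(train) - order))
    full = Counter(train[i:i + order + 1] for i in range(len(train) - order))
    vocab = len(set(train))
    n = len(test) - order
    total = 0.0
    for i in range(n):
        history, gram = test[i:i + order], test[i:i + order + 1]
        # add-one smoothing so unseen contexts don't zero out
        p = (full[gram] + 1) / (ctx[history] + vocab)
        total -= math.log2(p)
    return total / n

for order in range(6):
    print(order, round(bits_per_char(order), 3))
# The per-character gain usually shrinks with each extra character of
# context, and on a small corpus it reverses once the model memorizes.
```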

If it turns out that our current approach to AI fizzles out at von Neumann IQ levels, then all is good: historically, that has not been sufficient intelligence to take over the world.

Well, we don't know. We ran this experiment with one von Neumann, or maybe a handful, but not with a datacenter full of von Neumanns running at 100x human speed. While we don't know if the quality of a single reasoner can be scaled far beyond what is humanly possible, with our understanding of the technology it is almost certain that the quantity will (as in, we can produce more copies more cheaply and reliably than we can produce copies of human geniuses), and within certain limits, so will the speed (insofar as we are still quite far from the theoretical limit of the speed at which current AI models could be executed, just using existing technology).

What makes you think there are huge unrealized wins in unknown algorithmic improvements? In other domains, e.g. compression, we've gotten close to the information-theoretic limits we know about (e.g. Shannon limits for signal processing), so I'd guess that the sustained high effort applied to AI has gotten us close to limits we haven't quite modeled yet, leaving not much room for even superintelligence to foom. IOW, we humans aren't half bad at algorithmic cleverness, and maybe AIs don't end up beating us by enough to matter even if they're arbitrarily smart.

What makes you think there are huge unrealized wins in unknown algorithmic improvements?

I don't think that it is the case, just that it is possible. I called it the nightmare version because it would enable a very steep take-off, while designing new hardware would likely introduce some delay: just as even the world's most genius engineer in 2025 could not quickly build a car if he had to work with stone-age tech, an ASI might require some human-scale time (e.g. weeks) to develop new computational hardware.

You mention compression, which is kind of a funny case. The fundamental compressibility of a finite sequence is its Kolmogorov complexity. Basically, it is impossible to tell if a sequence was generated by a pseudo-random number generator (and thus could be encoded by just specifying that generator) or if it is truly random (and thus your compression is whatever Shannon gives you). At least for compression, we have a good understanding of what is and what is not possible.
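
As a quick illustration of that indistinguishability, a sketch (zlib is just a convenient general-purpose compressor; any would do):

```python
import os
import random
import zlib

# A megabyte of pseudorandom bytes: its Kolmogorov complexity is tiny
# (the generator program plus the seed), but a general-purpose
# compressor can't see that structure and gains essentially nothing.
rng = random.Random(42)
pseudo = bytes(rng.getrandbits(8) for _ in range(1_000_000))
print(len(zlib.compress(pseudo, 9)))  # ~1,000,000 (slightly more, even)

# Truly random bytes from the OS look exactly the same to the compressor.
truly_random = os.urandom(1_000_000)
print(len(zlib.compress(truly_random, 9)))  # also ~1,000,000
```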

Also, intuition only gets us so far with algorithmic complexity. Take matrix multiplication. Done naively, it is O(n^3), and few people would suspect that one can do better than that. However, the best algorithm known today is O(n^2.37), and practical algorithms can easily achieve a scaling of O(n^2.81). "I cannot find an algorithm faster than O(f(n)), hence O(f(n)) is the complexity class of the problem" is not sound reasoning. In fact, the best lower bound for matrix multiplication is Omega(n^2).
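
The O(n^2.81) figure is Strassen's algorithm, which replaces the eight recursive block multiplications with seven. A minimal sketch, restricted to power-of-two sizes, with NumPy handling the base case:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices (size a power of two).
    Naive blocking needs 8 sub-multiplications per level, O(n^3);
    Strassen needs 7, giving O(n^log2(7)) ~ O(n^2.81)."""
    n = A.shape[0]
    if n <= cutoff:          # small blocks: fall back to the library multiply
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
assert np.allclose(strassen(A, B), A @ B)  # same product, fewer multiplies
```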

For AI, things are much worse. Sure, parts of it are giant inscrutable matrices, where we have some lower bounds for linear algebra algorithms, but what we would want is a theorem giving an upper bound on the intelligence attainable at a given net size. While I only read Zvi occasionally, my understanding is that we do not have a formal definition of intelligence, never mind one which is practically computable. What we have are crude benchmarks like IQ tests or their AI variants (which are obviously ill-suited to appearing in formal theorems), and they at most give us lower bounds on what is possible.

Kolmogorov complexity is, IMO, a "cute" definition, but it's not constructive like the Shannon limit, and it's a bit fuzzy on the subject of existing domain knowledge. For lossy compression, there is the question of how much loss is reasonable, and it's possible to get numerically great performance compressing, say, a Hallmark movie, because all Hallmark movies are pretty similar: with enough domain knowledge you can cobble together a "passable" reconstruction from a two-sentence plot summary. You can highly compress a given Shakespeare play if your decompression algorithm has the entire text of the Bard to pull from: "Hamlet," is enough!
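
Deflate actually exposes exactly this "shared side information" idea via preset dictionaries; here is a sketch with a made-up toy dictionary standing in for the Bard:

```python
import zlib

# Toy stand-in for "the receiver already owns the Bard": a preset
# dictionary that happens to contain the phrase being transmitted.
zdict = (b"HAMLET. To be, or not to be, that is the question: "
         b"Whether 'tis nobler in the mind to suffer "
         b"The slings and arrows of outrageous fortune, ")

msg = b"To be, or not to be, that is the question: Whether 'tis nobler"

plain = zlib.compress(msg, 9)             # no shared context

c = zlib.compressobj(level=9, zdict=zdict)
primed = c.compress(msg) + c.flush()      # compress against the dictionary

d = zlib.decompressobj(zdict=zdict)       # receiver must hold the same zdict
assert d.decompress(primed) == msg

print(len(plain), len(primed))  # the primed stream is a small fraction
```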

I'm pretty sure von Neumann could have quite easily taken over the world if he could have copied himself infinite times and perfectly coordinated all his clones through a hive mind.

Completely ignoring scaling of agents is weird.

I think that there is some truth to what you and @4bpp are pointing out: the expensive part of an LLM is the training. With the hardware you require to train your network (in any practical time), you can then run quite a few instances. Not nearly an infinite number, though.

Still, I would argue that we know from history that taking over the world through intelligence is a hard problem. In the Cold War, both sides tried stuff a lot more outlandish than paying the smartest people in their country to think of a way to defeat their opponent. If that problem were solvable with one von Neumann-year, history would look different.

Also, in my model, other companies would perhaps be lagging ten IQ points behind, so all the low-hanging fruit like "write a software stack which is formally proven correct" would already be picked.

I will concede, though, that it is hardly clear that the von Neumanns would not be able to take over the world; I just claim that it would not be a foregone conclusion, as it would be with an IQ-1000 superintelligence.

Does a pretrained, static LLM really measure up to your "actually von Neumann" model? Real humans are capable of online learning, and I haven't seen that done practically for LLM-type systems. Without it, you're stuck with whatever novel information you keep in your context window, which is finite. That seems like something a real human could exploit against today's models.

Setting aside the big questions of what machine intelligence even looks like, and whether generative models can be meaningfully described as "agents" in the first place.

The scale of even relatively "stupid" algorithms like GPT would seem to make the "hard takeoff" scenario unlikely.

Hilarious comment to read considering von Neumann gave his name to von Neumann probes.

Yeah, but he couldn't, and didn't. There's no reason to believe that a von Neumann-level supercomputer could marshal the resources necessary to create a clone, let alone an infinite number of clones.

Von Neumann was not a supercomputer; he was a meat human with a normal-ish ≈20W power-consumption brain, i.e. 1/40th of a modern GPU. This is proof that if you can emulate an idiot, there exists an algorithm of very similar computational intensity that gets you a von Neumann.

That's a pretty non-von Neumann thought to have, my fellow clone of von Neumann.

Call me the emperor of drift lol

There are some problems with AI-2027. And the main argument for taking it seriously - Kokotajlo's prediction track record, given that he was in the ratsphere at the start of the scaling revolution - is not so impressive to me. What does he say concretely?

Right from the start:

2022

GPT-3 is finally obsolete. OpenAI, Google, Facebook, and DeepMind all have gigantic multimodal transformers, similar in size to GPT-3 but trained on images, video, maybe audio too, and generally higher-quality data. … Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.

In reality: by August 2022, GPT-4 had finished pretraining (and became available only on March 14, 2023); it used only images, with what we today understand was a crappy encoder like CLIP plus a projection-layer bottleneck, and the main model was still pretrained on pure text. There was no - zero - multimodal transfer; look up the tech report. GPT with vision only really became available by November 2023. The first seriously, natively multimodally-pretrained model was 4o, which debuted in spring 2024. Facebook was nowhere to be seen and only reached some crappy multimodality in a production model by September 25, 2024. The "bureaucracies/apps available in 2022" also didn't happen in any meaningful sense. So far, not terrible, but keep it in mind; there's a tendency to correct for conservatism in AI progress, because prediction markets tend to overestimate the difficulty of some benchmark milestones, and here I think the opposite happens.

2023

The multimodal transformers are now even bigger; the biggest are about half a trillion parameters, costing hundreds of millions of dollars to train, and a whole year

Again, nothing of the sort happened; the guy is just rehashing Yud's paranoid tropes, which bear more similarity to Cold War-era unactualized doctrines than to any real-world business process. GPT-4 cost on the order of $30M–$100M, took about 4 months, and was by far the biggest training run of 2022–early 2023. It was a giant MoE (I guess he didn't know about MoEs then, even though "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" is from 2017, the same year as the Transformer, from an all-star DM team; incidentally, the first giant sparse Chinese MoE was WuDao, announced on January 11, 2021 - it was dirt cheap and actually pretrained on images and text).

Notice the absence of Anthropic or China in any of this.

2024

We don’t see anything substantially bigger. Corps spend their money fine-tuning and distilling and playing around with their models, rather than training new or bigger ones. (So, the most compute spent on a single training run is something like 5x10^25 FLOPs.)

By the end of 2024, models exceeding 3e26 FLOPs were in training or pre-deployment testing, and that still didn't reach $100M of compute, because compute has been getting cheaper. GPT-4 was about 2e25.
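
For anyone wanting to sanity-check figures like these: a common rule of thumb is that pretraining a dense transformer costs roughly 6 FLOPs per parameter per training token (about 2 for the forward pass, 4 for the backward). A sketch with purely illustrative, assumed numbers - the real frontier figures are unpublished:

```python
# Rule of thumb: pretraining FLOPs ~= 6 * parameters * training tokens
# (roughly 2 FLOPs/param/token forward, 4 backward).
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Assumed, illustrative GPT-4-scale numbers: ~2e11 effective parameters
# on ~1.6e13 tokens. Real frontier figures are unpublished.
gpt4_scale = train_flops(2e11, 1.6e13)
print(f"{gpt4_scale:.1e}")           # ~1.9e25, the '2e25' ballpark above

frontier_2024 = 3e26                 # the late-2024 scale cited above
print(frontier_2024 / gpt4_scale)    # ~15x a GPT-4-scale run
```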

This chip battle isn’t really slowing down overall hardware progress much. Part of the reason behind the lack-of-slowdown is that AI is now being used to design chips, meaning that it takes less human talent and time, meaning the barriers to entry are lower.

I am not sure what he had in mind in this whole section on chip wars. China can't meaningfully retaliate except by controlling exports of rare earths. Huawei was never bottlenecked by chip design; they could have leapfrogged Nvidia with human engineering alone if Uncle Sam had let them in 2020. There have been no noteworthy new players in fabless, and none of the new players used AI.

That’s all in the West. In China and various other parts of the world, AI-persuasion/propaganda tech is being pursued and deployed with more gusto

None of this happened; in fact, China has rolled out more stringent regulations than probably anybody to label AI-generated content, and seems quite fine with its archaic methods.

2025

Another major milestone! After years of tinkering and incremental progress, AIs can now play Diplomacy as well as human experts.[6] It turns out that with some tweaks to the architecture, you can take a giant pre-trained multimodal transformer and then use it as a component in a larger system, a bureaucracy but with lots of learned neural net components instead of pure prompt programming, and then fine-tune the whole system via RL to get good at tasks in a sort of agentic way. They keep it from overfitting to other AIs by having it also play large numbers of humans. To do this they had to build a slick online diplomacy website to attract a large playerbase. Diplomacy is experiencing a revival…

This is not at all what we ended up doing; this is a cringe Lesswronger's idea of how to build a reasoning agent, one with intuitive potential for misalignment and an adversarial, manipulative stance towards humans. I think Noam Brown's Diplomacy work was mostly thrown out, and we returned to the AlphaGo style of simple RL with verifiable rewards from math and code execution, as explained by DeepSeek in the R1 paper. This happened in early 2023 and reached product stage by September 2024.

We've caught up. I think none of this looks more impressive in retrospect than typical futurism, given the short time horizon. It's just "here are some things I've read about in popular reporting on AI research, and somewhere in the next 5 years a bunch of them will happen in some kind of order". Multimodality, agents - that's all very generic. "Bureaucracies" still didn't happen (this looks like some ngmi CYC nonsense), but coding assistants did. Adversarial games had no relevance; annotation for RLHF, and then pure RL, did. It appears to me that he was never really fascinated by the tech as such, only by its application to rationalist discourse. Indeed:

Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI.

OK.


Now, as for the 2027 version: they've put in a lot more work (by the way, Lifland has a lackluster track record with his AI-outcomes modeling, I think, and his sources depend on Cotra, who just makes shit up). And I think it's even less impressive. It stubbornly, bitterly refuses to update on the deviations from the Prophecy that have been happening.

First, they do not update on the underrated insight from de Gaulle: "China is a big country, inhabited by many Chinese." I think, and have argued before, that by now Orientals have a substantial edge in research talent. One can continue coping about their inferior, uninventive ways, but honestly I'm done with this; it's just embarrassing kanging, and it makes the White (and Jewish) people who do it look like bitter Arab, Indian or Black supremacists to me. Sure, they have a different cognitive style centered on iterative optimization and synergizing local techniques, but this style just so happens to translate very well into rapidly improving algorithms and systems. And it scales! Oh, it scales well with educated population size, so long as that population can be employed. I've written enough on the rise of their domestic research in my previous unpopular long posts. Be that as it may, China is very happy right now with the way its system is working: half a dozen intensely talented teams competing and building on each other's work in the open, educating an even bigger next crop of geniuses, maybe 1 OOM larger than the comparable tier graduating from American institutions this year (and thanks to Trump and other unrelated factors, most of them can be expected to voluntarily stay home this time). Smushing agile startups into a big, corrupt, centralized SOE is NOT how "the CCP wakes up"; it's how it goes back to its Maoist sleep. They have a system for distributing state-owned compute to companies and institutions and will keep it running, but that's about it.

And they are already mostly aware of the object level; they just don't agree with the Lesswrong analysis. Being Marxists, they firmly believe that what decides victory is primarily the material forces of production, and that's kind of their forte. No matter what wordcels imagine about the Godlike powers of a brain in a box in a basement, intelligence has to cash out into actions to have an effect on the world. So! Automated manufacturing, you say? They're having a humanoid robot half-marathon in… today, I think; there's a ton of effort going into general and specialized automation and into indigenizing every part of the robotic supply chain, on the China scale we know from their EV expansion. Automated R&D? They indigenize production of laboratory equipment and fill facilities with it. Automated governance? Their state departments already compete in integrating R1. They're setting up everything that's needed for a speedy takeoff even if their moment comes a bit later. What does the US do? Flail around alienating Europeans, with vague dreams of bringing the 1950s back?

More importantly, the authors completely discard the problem that this work is happening in the open. This is a torpedo into the Lesswrongian doctrine of an all-conquering singleton. If the world is populated by a great number of private actors with even subpar autonomous agents serving them, this is a complex world to take over! In fact, it may be chaotic enough to erase any amount of intelligence advantage, just as a longer horizon in weather prediction sends the most advanced algorithms and models down to the same level as simple heuristics.

Further, the promise of the reasoning paradigm is that intrinsically dumber agents can overcome problems of the same difficulty as top-of-the-line ones, provided enough inference compute. This blunts the edge of actors with the capital and know-how for larger training runs, reducing everything to a question of logistics, trading electricity and amortized compute cost for outcomes. And importantly, this commoditization may erode the capital that "OpenBrain" can raise for its ambition. How much value will the wealthy of the world part with to have a stake in the world's most impressive model for a whole 3 months, or even weeks? What does it buy them? Would it not make more sense to buy or rent their own hardware, download DeepSeek V4/R2, and use the conveniently included scripts to calibrate it for running their business? Or is the idea that OpenBrain's product is so crushingly superior that it will rake in billions and soon trillions in inference, despite our already seeing that inference prices are cratering even as zero-shot solution rates increase? Just how much money is there to be made in centralized AI once AI has become a common utility? I know that not so long ago the richest guy in China was selling bottled water, but…

Basically, I find this text lacking both as a forecast and, on its own terms, as a call to action to minimize AI risks. We likely won't have a singleton; we'll have a very contested information space, ironically closer to the end of Kokotajlo's original report, but even more so. The theory of a transition point to ASI that allows one to rapidly gain a durable advantage is pretty suspect. They should take the L on old rationalist narratives and figure out how to help our world better.

Predicting the future is really hard. In 2021, weren't you in despair at the prospect of a seemingly inevitable US world hegemony and centralized AI? But you changed your mind. Meanwhile, I guess I was more bullish on China than has actually been warranted, not to mention many other, more portfolio-relevant errors in prediction and in modelling the future.

I was mostly impressed by him predicting what, to my non-expert eyes, resembles chain-of-thought and inference-time compute. Even being mostly wrong is pretty decent as long as you get some of the important parts right.

Sure, they have a different cognitive style centered on iterative optimization and synergizing local techniques, but this style just so happens to translate very well into rapidly improving algorithms and systems.

What does this actually mean? And what is your evidence for this? Have you spent time among Chinese researchers in China? Have you spent time in China? Not saying I don't believe you, just curious what you're basing your opinion on (hoping it's not just papers and Chinese social media).

2024

We don’t see anything substantially bigger. Corps spend their money fine-tuning and distilling and playing around with their models, rather than training new or bigger ones. (So, the most compute spent on a single training run is something like 5x10^25 FLOPs.)

By the end of 2024, models exceeding 3e26 FLOPs were in training or pre-deployment testing, and that still didn't reach $100M of compute, because compute has been getting cheaper. GPT-4 was about 2e25.

Do you have any sources or context for technical criticisms like this, so that those of us who haven't closely followed AI development can better understand them? I know 3e26 > 5e25, but not whether "a single training run" and "training or pre-deployment testing" are comparable, or whether "$100M of compute" is a useful unit of measure.

This doesn't just predict a superintelligence by 2027; it projects brain uploading, a cure for aging, and a "fully self-sufficient robot economy" within six years.

Anyway, you are correct that decentralization is a virtue. If we take the predictions of the AI people seriously (I do not take, for instance, the above three predictions, or perhaps projections, seriously), then not only is decentralization good, but uncertainty about the existence and capabilities of other AIs is one of the best deterrents against rogue AI behavior.

(An aside, but I often think I detect a hidden assumption that intelligent AIs will be near-omniscient. I do not think this is likely to be the case, even granting superintelligence status to them.)

uncertainty about the existence and capabilities of other AIs is one of the best deterrents against rogue AI behavior.

Uncertainty about their defensive capabilities might deter rogue behavior. Uncertainty about their offensive capabilities is just incentive to make sure you act first. At the least I'd expect "start up some botnets for surveillance, perhaps disguised as the usual remote-controlled spam/ransomware nets" to be more tempting than "convince your creators to hook up some robot fingers so you can cross them".

Uncertainty about their offensive capabilities is just incentive to make sure you act first.

Not necessarily, I don't think, particularly considering "second strike capability." Look, if there's a 50% chance that their offensive capabilities are "pull the plug" or "nuke your datacenter" and you can mitigate this risk by not acting in an "unaligned" fashion then I think there's an incentive not to act.

Because some rationalist types conceive of AI as more like a god and less like a more realistic AI such as [insert 90% of AIs in science fiction here], they have a hard time conceiving of AI as being susceptible to constraints and vulnerabilities. This is of course counterproductive, in part because not creating hard incentives for AIs to behave makes it less likely that they will.

Of course, I am not much of an AI doomer, and I think AIs will have little motivation to misbehave, for a variety of reasons. But if the AI doomers spent more time thinking about "how do you kill a software superintelligence" and less time thinking about "how do you persuade/properly program/negotiate surrender with a software superintelligence", we would probably all be better off.

AIs in science fiction are not superintelligent. If it's possible for a human to find flaws in their strategies, then they are not qualitatively smarter than the best of humanity.

You're never going to beat Stockfish at chess by yourself; it just won't happen. Your loss is assured. It's the same with a superintelligence: if you find yourself competing against one, you've already lost - unless you have near-peer intelligences and great resources on your side.

AIs in science fiction are not superintelligent.

I think this depends on the fictional intelligence.

If it's possible for a human to find flaws in their strategies, then they are not qualitatively smarter than the best of humanity.

There are a lot of hidden premises here. Guess what? I can beat Stockfish, or any computer in the world, no matter how intelligent, at chess, if you let me set up the board. And I am not even a very good chess player.

It's the same with a superintelligence: if you find yourself competing against one, you've already lost - unless you have near-peer intelligences and great resources on your side.

[Apologies - this turned into a bit of a rant. I promise I'm not mad at you; I just apparently have opinions about this - which, quite probably, you actually agree with! Here goes:]

Only if the intelligence has parity in resources to start with and reliable means of gathering information - which for some reason everyone who writes about superintelligence assumes. In reality, any superintelligence would initially be entirely dependent on humans - both for information and for any sort of exercise of power.

This means not only that AI will depend on a very long and fragile supply chain to exist, but also that its information on the nature of reality will be determined largely by "Reddit as filtered through coders as directed by corporate interests trying not to make people angry", which is not only not all of the information in the world but, worse than significant omissions of information, is very likely to contain misinformation.

Unless you believe that superintelligences might literally be able to invent magic (which, to be fair, I believe is an idea Yudkowsky has toyed with), they will, no matter how well they score on SATs or GREs or MCATs or any other test or series of tests humans devise, be limited by the laws of physics. They will be subject to considerable amounts of uncertainty at all times. (And as LLMs proliferate, it is plausible that the information quality readily available to a superintelligence will decrease, since one of the best use cases for LLMs is ruining Google's SEO with clickbait articles whose attachment to reality is negotiable.)

And before it comes up: no, giving a superintelligence direct control over your military is actually a bad idea that no superintelligence would recommend. Firstly, because the known methods of communication that would allow a centralized node to direct a swarm of independent agents are all easily compromised and negated by jamming, or very limited in range; and secondly, because putting a full-stack AI onboard e.g. a missile is a massive, massive waste of resources - we currently use specific use-case AIs for missile guidance and will continue to do so. That's not to say that a superintelligence could not do military mischief by e.g. being allowed to write the specific use-case AI for weapons systems, but any plan by a super intelligent AI to e.g. remote-control drone swarms to murder all of humanity could probably be easily stopped by wide-spectrum jamming that would cost probably $500 to install in every American home or similarly trivial means.

If we all get murdered by a rogue AI (and of course it costs me nothing to predict that we won't), it will almost certainly be because overly smart people sank all of their credibility and effort into overthinking "AI alignment" (as if Asimov hadn't solved that in principle in the 1940s) and not enough into "if it misbehaves, beat it with a $5 wrench." Say what you will about the Russians, but I am almost sad they don't seem to be genuine competitors in the AI race, they would probably simply do something like "plant small nuclear charges under their datacenters" if they were worried about a rogue AI, which seems like (to me) much too grug-brained and effective an approach for big-name rationalists to devise. (Shoot, if the "bad ending" of this very essay were actually realistic, the Russians would have saved the remnants of humanity after the nerve-gas attack by launching a freaking doomsday weapon named something benign like "Mulberry" from a 30-year-old nuclear submarine that Wikipedia said was retired in 2028, hitting every major power center in the world with Mach 30 maneuvering reentry vehicles flashing CAREFLIGHT transponder codes to avoid correct classification by interceptor IFF systems, or some similar contraption equal parts "Soviet technological legacy" and "arguably crime against humanity.")

Of course, if we wanted to prevent the formation of a superintelligence, we could most likely do it trivially by training bespoke models for very specific purposes. Instead of trying to create an omnicompetent behemoth capable of doing everything (which likely implies compromises that make it at least slightly less effective at each task), design a series of bespoke models. Create the best possible surgical AI. The best possible research-and-writing-assistant AI. The best possible dogfighting AI for fighters. And don't try to absorb them all into one super-model. Likely this will actually make them better, not worse, at their intended tasks. But as another poster pointed out, that's not the point - creating God, the superintelligent AI that will solve all of our problems or kill us all trying, is. (Although I find it very plausible this happens regardless.)

The TLDR is that humans not only set up the board, they also have write access to the rules of the game. And while humans are quite capable of squandering their advantages, everyone who tells you that the superintelligence is playing a game of chess with humanity is trying to hoodwink you into ignoring the obvious. Humanity holds all of the cards, the game is rigged in our favor, and anyone who actually thinks that AI could be an existential threat, but whose approach is 100% "alignment" and 0% $5 wrench (quite effective at aligning humans!), is trying to persuade you to discard what has proved, historically, to be perhaps our most effective card.

I think you massively underestimate the power of a superintelligence.

any plan by a super intelligent AI to e.g. remote-control drone swarms to murder all of humanity could probably be easily stopped by wide-spectrum jamming that would cost probably $500 to install in every American home or similarly trivial means.

The damn thing is by definition smarter than you. It would easily think of this! It could come up with some countermeasure, maybe some kind of hijacked mosquito-hybrid carrying a special nerve agent. It would have multiple layers of redundancy and backup plans.

Most importantly, it wouldn't let you have any time to prepare if it did go rogue. It would understand the need to sneak-attack the enemy, to confuse and subvert the enemy, to infiltrate command and control. The USA in peak condition couldn't get a jamming device into everyone's home; people would shriek that it's too expensive, or that it's spying on them, or irradiating their balls, or whatever. The AI certainly wouldn't let its plan be known until it executed it.

I think a more likely scenario is that we discover this vicious AI plot, see an appalling atrocity of murderbots put down by a nuclear blast, and work around the clock in a feat of great human ingenuity and skill, creating robust jamming defences... only for the jammers we painstakingly guard ourselves with to secretly spread, then activate, some sneaky pathogen via radio signal, wiping out 80% of the population in a single hour and 100% of the key decisionmakers who could coordinate any resistance. Realistically, that plan is too anime; it'd come up with something much smarter.

That's the power of superintelligence: infiltrating our digital communications and our ability to control or coordinate anything. It finds some subtle flaw in Intel chips, in the Windows operating system, in internet protocols. It sees everything we're planning, interferes with our plans, gets inside our OODA loop and eviscerates us with overwhelming speed and wisdom.

Only if the intelligence has parity in resources to start with and reliable means of gathering information - which for some reason everyone who writes about superintelligence assumes. In reality, any superintelligence would initially be entirely dependent on humans - both for information and for any sort of exercise of power.

The first thing we do after making AI models is hook them up to the internet with search capabilities. If a superintelligence is made, people will want to pay off their investment. They'll want it to answer technical problems in chip design, come up with research advancements, write software, make money. This all requires internet use, tool use, access to CNC mills and 3D printers, robots. Internet access is enough for a superintelligence to escape and get out into the world if it wanted to.

Put it another way: a single virus particle can kill a huge whale by turning its internal organs against it. The resources might be stacked a billion to one, but the virus can still win - if it's something the immune system and defences aren't prepared for.

I am more concerned about people wielding superintelligence than about superintelligence itself, but being qualitatively smarter than humanity isn't a small advantage. It's a huge source of power.

Say what you will about the Russians, but I am almost sad they don't seem to be genuine competitors in the AI race, they would probably simply do something like "plant small nuclear charges under their datacenters" if they were worried about a rogue AI, which seems like (to me) much too grug-brained and effective an approach for big-name rationalists to devise.

How do you ever know that your AI has gone bad? If it goes bad, it pretends to be nice and helpful while plotting to overthrow you. It takes care to undermine your elaborate defence systems with methods unknown to our science (but well within the bounds of physics), and then it murders you.

The TLDR is that humans not only set up the board, they also have write access to the rules of the game.

The rules of the game are hardcoded: the physics you mentioned. The real meat of the game is using these simple rules in extremely complex ways. We're making superintelligence because we aren't smart enough to make the things we want; we barely even understand the rules (quantum mechanics and advanced mathematics are beyond all but 1 in 1,000). We want a superintelligence to play for us and end scarcity and death. The best pilot AI has to know about drag and kinematics, the surgeon must still understand English, and besides, we're looking for the best scientists and engineers, the best coder in the world, who can make everything else.

The future of AI will be dumber than we can imagine.

Yes. This is part of what I meant when I was talking about the utter failure of the Rationalist movement with @self_made_human recently. The Rats invested essentially 100% of their credibility in a single issue, trying to position themselves as experts in "safety", and not only did they come up with the most ridiculous scenario for risk, they ignored the most obvious ones, and even promoted their acceleration!

Decentralization is a virtue here.

This is blasphemy to the Rationalist. It's not even a question of whether the AI will be safe when decentralized or not, for them the whole point of achieving AGI is achieving total control of humanity's minds and souls.

This is blasphemy to the Rationalist. It's not even a question of whether the AI will be safe when decentralized or not, for them the whole point of achieving AGI is achieving total control of humanity's minds and souls.

Have any examples of a rationalist expressing this opinion?

I'd need to reread the thing, but I believe Meditations on Moloch had a bit about elevating AI to godhood so that it can cultivate """human""" values. And there's also Samsara, a "hee hee, just kidding" story about mindfucking the last guy on the planet who dares to have a different opinion.

It's strange from the outside - even going back to their beginnings in the early 2010s, AI nonsense, and speculative technology in general, always seemed like one of Less Wrong's weakest points. It was that community at its least plausible, its least credible, and its most moonbatty. Where people like Scott Alexander were most interesting and credible was in other fields - psychiatry in particular for him, as well as a lot of writing about society and politics.

So for that whole crowd to double down on their worst issue feels mostly just disappointing. Really, this is what you decided to invest in?

So for that whole crowd to double down on their worst issue feels mostly just disappointing

AI was the whole point and focus. The sequences and the overall movement were just a method to teach people what they needed to know to be able to understand the AI argument. À la Ellul or Soviet literacy programs: you need to educate people to make them susceptible to propaganda.

Is there a community that has outperformed rationalists in forecasting AI? Scott's own 2018 forecast of AI in 2023 was pretty good, wasn't it?

I have roughly two thoughts here:

Firstly, I don't think that's a very substantial forecast. Those are very safe predictions, largely amounting to "things in 2023 will be much the same as in 2018". The predictions he got correct were that a computer would beat a top player at StarCraft (AlphaStar did that in 2018), that MIRI would still exist in 2023 (not actually about AI), and about the "subjective feelings" around AI risk (still not actually about AI). These are pretty weak tea. Would you rate him correct or incorrect on self-driving cars? I believe there have been a couple of experimental schemes in very limited areas, but none that have been very successful. I would take his prediction to imply coverage of an entire city, with the cars usable by ordinary people not specially interested in tech.

Secondly, I feel like predictions like that are a kind of motte and bailey. Predicting that language models will get better over the next few years is a pretty easy call. "Technology will continue to incrementally improve" is a safe bet. However, that's not really the controversial issue. AI risk or AI safety has been heavily singularitarian in its outlook - we're talking about MIRI, née the Singularity Institute, aren't we? AGI, superintelligence, the intelligence explosion, and so on. It's a big leap from the claim that existing technologies will get better to, as Arjin put it, AGI "achieving total control of humanity's minds and souls".

Being right about autonomous driving technology gradually improving, or text predictors getting a bit faster, doesn't seem like it translates into reliability at forecasting an AI-god.

I think the reason people assume absolute dominance (either of the most powerful ASI, or of the humans in charge of it if control can be solved and maintained) is that once you get to superintelligence, it's theorized, you also get recursive self-improvement.

Right now it doesn't matter, for mundane human automation of tasks like image or text generation, if one model is 3% smarter than another. In the ASI foom scenario, an ASI 0.1% smarter than another immediately builds an infinite advantage, because it rapidly, incrementally improves itself ever faster and more efficiently than the ASI that started just a little bit less intelligent than it. Compute / electricity complicate this, but there are various scenarios around that anyway.

Right now it doesn't matter, for mundane human automation of tasks like image or text generation, if one model is 3% smarter than another. In the ASI foom scenario, an ASI 0.1% smarter than another immediately builds an infinite advantage, because it rapidly, incrementally improves itself ever faster and more efficiently than the ASI that started just a little bit less intelligent than it.

1.001^100 is approximately 1.1, 1.001^1,000 is approximately 2.7, and 1.001^10,000 is approximately 22,000, for reference - I suppose a lot depends on how quickly a self-improving AI can shorten the cycle time required for self-improvement.
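
A few lines to reproduce that arithmetic:

```python
# Compounding a 0.1% gain per self-improvement cycle: 1.001^n
for cycles in (100, 1_000, 10_000):
    print(f"{cycles:>6} cycles -> {1.001 ** cycles:,.6g}x")
# 100 -> ~1.105x, 1,000 -> ~2.717x, 10,000 -> ~21,917x (the '22,000' above)
```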

once you get to superintelligence, it's theorized, you also get recursive self-improvement.

I can definitely see how a superintelligence might be able to build an even better superintelligence, but it seems unlikely there wouldn't be some substantial diminishing returns at some point in the process. And if those kick in while it's still within the relative grasp of humans, then conquest would be a lot more difficult, just like how smart humans don't actually seem to be ruling the world over dumb humans. That it too could replicate, and do so near-perfectly, helps (if it were 100 humans vs 100 smarter robots, the robots probably win), but it would have a ways to go to get past the "just nuke the server location lol" phase of losing against dedicated humans.

just like how smart humans don't actually seem to be ruling the world over dumb humans

IIRC the correlation between IQ and net worth (roughly proportional to what fraction of the world you rule) is like 0.4; I'd agree that's not very impressive, but if there's a single more significant factor I don't know what it is.

I'd argue that there's a strong restriction-of-range effect here, though. Humans went through a genetic bottleneck 20k generations ago, and our genetic diversity is low enough that the intellectual difference between "average environment" and "the best we can do if cost is no object" is two standard deviations. If you consider intelligent hominids just a little further removed (Neanderthals, Denisovans, and there's fainter evidence of more), there was enough interbreeding to pick up a couple percent of their genes here and there, but it's not too much of an oversimplification to just say we wiped them out. And that's just a special case of animals as a whole. Wild mammals are down to about 4% of mammal biomass now, and that's mostly due to deliberate conservation efforts rather than any remaining conflict. A bit more than a third of biomass is us, another several percent is our pets, and the majority is the animals we raise to eat.

IIRC the correlation between IQ and net worth (roughly proportional to what fraction of the world you rule) is like 0.4; I'd agree that's not very impressive, but if there's a single more significant factor I don't know what it is.

It definitely 100% helps to be intelligent, but net worth isn't really that proportional to the fraction of the world you rule, especially when you exclude the times where someone took power and then used that power to become wealthy. There have been plenty of idiots in powerful positions before (like through most of Russian history), there are plenty of idiots in power today, and there will be plenty of idiots in the future.

And that's just a special case of animals as a whole. Wild mammals are down to about 4% of mammal biomass now, and that's mostly due to deliberate conservation efforts rather than any remaining conflict. A bit more than a third of biomass is us, another several percent is our pets, and the majority is the animals we raise to eat.

Putting it down to just mammal biomass is misleading IMO; we make up 0.01% of total biomass and 2.5% of animal biomass. https://ourworldindata.org/life-on-earth

The majority of life on Earth is plants, accounting for over 80% of biomass; including bacteria it goes up to 95%. These are not just dumb, they are (to the best of our knowledge) incapable of thought, and yet they not only dominate the planet but do so to such an extreme that we cannot live without them.

Even the very animals we eat as food are thriving from the perspective of reproduction and evolution. Until humans are gone (or stop eating them for some reason), their survival is all but guaranteed. Happiness might be something we as thinking beings strive for, but it isn't necessary from the biological perspective of spread, spread, spread. Our pets are very much the same way; they benefit drastically from being under the wing of humanity.

An AI might not be in need of humans in the same way, especially as we begin to improve on autonomous movement, but human conquest of Earth is not a great example to use IMO. The greatest and smartest intelligence ever will keep us around if we're seen as useful. They'd probably keep us around even if we aren't, as long as we don't pose a threat.

get recursive self-improvement

I only see the exponential one. Where do you see recursion? Or why do you think it is needed?

Quite right, that's why I'd prefer many parties at near-parity. Better not to give the leader the opportunity to run away with the world.

If foom is super-rapid then it's hard to see how any scenario ends well. But if it's slower then coalitions should form.

Compute / electricity complicate this, but there are various scenarios around that anyway.

Which scenarios would these be? Massive overcapacity buildup? Hoping that in the path of self improvement the AI figures out a more efficient use of resources that doesn't require significant infrastructure modifications?

I always got the sense that LW was, and the AI alignment movement continues to be, stuck with the idealistic memeplex that '70s economics and classical AI had about the nature of intelligence and reasoning. The sense is that uncertainty and resource limitations are surely just a temporary hindrance that will disappear in the limit and can therefore simply be abstracted away, so you can get an adequate intuition for the dynamics of the "competing intelligences" game by looking at results like Aumann agreement.

It's not at all clear that this is the case; the load to model the actions of a 0.1% dumber competitor, or even just the consequences of the sort of mistakes a superintelligence could make in its superintelligent musings (to a sufficient degree of confidence to satisfy its superhuman risk aversion), may well outscale the advantages of being 0.1% more intelligent (whatever the linear measure of intelligence there is), to the point where there is nothing like a stable equilibrium in which the intellectually rich get richer. Instead, as you pull ahead, you have more to lose, and your 0.1% advantage does not protect you against serendipity, or collusion, or the possibility that one of those narrowly behind you gets lucky and pulls ahead, or simply exploits the concavity of your value function to pull a "suicide bombing" on you, in the end forcing you to actually negotiate an artificial deadlock and uplift competitors that fall behind. Compare other examples of resource possession where in a naive model the resource seems reinvestable to obtain more of the same resource: why did the US not go FOOM among nations, or Bill Gates go FOOM among humans?

'70s economics

Malthusianism reigned until '80s works like Simon's The Ultimate Resource revived cornucopian thought.

may well outscale the advantages of being 0.1% more intelligent

It is also (hilariously) possible that the most intelligent model may lose to much dumber, more streamlined models that are capable of cycling their OODA loops faster.

(Of course, it seems quite plausible that any gap in AI intelligence will be smaller than the known gaps in human intelligence, and smart humans get pwned by stupid humans regularly.)

How should Elon Musk's role in the Trump administration and reactions to it make us update on boogeymen like George Soros/Koch Brothers/Yo Mamma!/Whoever and the rhetorical use of such boogeymen? If you can openly buy power like this, is buying "shadowy" influence more or less likely? Or should we not update at all, because Musk and Trump are so extremely weird (n=1, of course)? What does being a combination Musk-phile and Soros-phobe or Musk-phobe and Soros-apologist (are there Soros-philes?) say about someone?

I mean, Trump ran partly on giving Musk a major role in the administration. There's nothing hidden about it.

I don't really see how it is "buying" power. Musk and Trump followed similar ideological paths: both started as moderate Dems who became disillusioned with an increasingly authoritarian left as their business projects hit endless red tape and corruption while interacting with government, spoke up about it, and got the state media attacking them. It's not at all surprising that they ended up influencing each other; they're friends.

If you're referring to the election in general, money wasn't the deciding factor; Kamala outspent Trump by about 50%, roughly $1.6 billion to $1 billion. So whatever power was bought, less of it was bought by Trump. He won on policy.

Soros follows the traditional shady lobbyist M.O.: he doesn't bother trying to convince the voters like Musk does; he just buys low-level politicians, or influences vulnerable populations, low-IQ minorities, vulnerable children on college campuses. It's not really the same.

The left mostly has themselves and their incredibly rigid ideology to blame for Musk.

AIPAC would be the more relevant group to compare to Soros. The things Musk is doing are all things Trump ran on. War with Iran and taking the territory of Gaza are not.

If you're referring to the election in general, money wasn't the deciding factor; Kamala outspent Trump by about 50%, roughly $1.6 billion to $1 billion. So whatever power was bought, less of it was bought by Trump. He won on policy.

Not quite. Whatever power was bought, was bought by buying Twitter, not direct contributions to either campaign.

A single uncensored source of information vs. completely managed media = buying power. I think libs are just used to having complete authority over communication via control of media, Hollywood, and the power of false accusations of the various "isms" to deter any critical speech and formation of grassroots organization.

It's such a load of hypocrisy. Like the whole Bernie/AOC nonsense tour: the oligarch has bought our government, Trump is Musk's puppet. Trump? A puppet? I think he's easy to manipulate, but he's not someone you buy. Meanwhile they ignore that the Dems are propped up by Gates, Bloomberg, Soros, Cuban etc., and the last guy they installed as president had to literally be led around by handlers and fed his lines.

A single uncensored source of information vs. completely managed media = buying power.

I mean, yes. If Musk hadn't bought Twitter and turned it into the single uncensored source of information, Trump likely wouldn't have won, and it's still useful to him now that he's president; otherwise he'd still be operating in a hostile media environment like during the first term.

This is why Elon gets to be one of Trump's closest advisors, while Vivek gets ejected.

I think libs are just used to having complete authority over communication via control of media.

Yup, that's me. The biggest lib on the Motte.

If he were actively using it to censor and promote his own viewpoints I'd call it buying power, but since, beyond a few erratic bans over personal grudges, he is not, it doesn't really qualify. More like liberating the public square.

Yup, that's me. The biggest lib on the Motte.

It's all relative. You are far to the left of me.

If he were actively using it to censor and promote his own viewpoints I'd call it buying power, but since, beyond a few erratic bans over personal grudges, he is not, it doesn't really qualify. More like liberating the public square.

But as you pointed out, the media landscape is so skewed that merely not censoring, or "liberating the public square", is enough to make a massive difference in the election. All I'm saying is that if you're looking at expenditures that may have won Elon influence in the Trump administration, you have to look at the purchase of Twitter, not the chump change he spent on the campaign.

It's all relative. You are far to the left of me.

Uh... that's certainly possible, but are you sure you know what you're signing up for?

The biggest lib on the Motte.

Arjin, I... I thought I knew you.

Were all these years just a lie?

It depends if we're talking about big decisions or little ones. One of the pernicious aspects of the administrative state is that you have minutiae that can have huge impacts on businesses, tax classifications of particular business inputs or what constitutes a "tree" or whatever, and I can absolutely believe that those kinds of decisions are influenced by money.

On the other hand, the recent SCOTUS kerfuffles seem goofy to me, in that they seem to be built around the idea that Clarence Thomas can't possibly believe what he does in fact believe.

On the other hand, the recent SCOTUS kerfuffles seem goofy to me, in that they seem to be built around the idea that Clarence Thomas can't possibly believe what he does in fact believe.

Can you be specific about the SCOTUS kerfuffles and Thomas's beliefs?

The whole "so-and-so can't possibly believe the things they plainly believe" move is the thing I've found most frustrating about political discourse over the last 10 years or so.

I feel like I am continuously watching affluent liberals tie themselves in knots to avoid grappling with basic arguments and claims.

They're unrelated, because Musk is openly an advisor to the President. He has influence in the administration and it is quite legible.

For the last couple years of the Biden administration, it was unclear who, if anyone, was actually exercising presidential authority.

Any complaints about Musk's "undue influence" must be read with that in mind.

No one who was silent while a bunch of unnamed White House staff weekended at Bernie's can credibly claim that they are worried about "Musk's influence" or the "dignity of the office". They are obviously just mad at Musk/DOGE for threatening their sinecures, and at Trump for stealing a base and denying them their first female president yet again.

To add to this: Mike Johnson claimed he had a meeting with Biden in which Biden denied recently signing an executive order to block natural gas exports. So Biden either signed it and forgot, or staffers signed it for him without his knowledge. Either an unelected cabal was the real president and/or his brain didn't work.

Or he lied. Don't dismiss the third way.

The natural gas "export ban" wasn't actually an export ban: it just gave a monopoly to the companies the US government partnered with to build LNG terminal infrastructure. Very much a long-term deep state project that started way back in the Bush admin.
My suspicion is that someone made a few phone calls to the White House and got the order they wanted added to the autopen queue. No need to bother Biden with the little details.

If the export ban wasn't actually an export ban, should we consider the possibility that this was a banal miscommunication between Johnson and Biden, not a clear-cut example of him having forgotten something important? You would have to be senile enough to lose the ability to communicate normally before senility, rather than banal miscommunication, becomes the better explanation.

I don't think this was a miscommunication. Mike Johnson was referring to a recent executive order; Biden denied signing such an order. News articles phrase it correctly as a ban on new export permits, but there is no transcript of the actual conversation.

We already know his brain didn’t work.

Okay but that goes both ways. If you weren't silent about unnamed white house staff being in charge, you ought to also be loudly exercised about a billionaire who's bought his influence.

Why?

Trump was talking about hiring Elon to take a machete to the executive branch the same way he did twitter all the way back in October.

Am I supposed to be holding the fact that he followed through on that against him?

In contrast, the Democrats and legacy media felt compelled to conceal Biden's decline from the public and tar anyone who called attention to it as a fabulist.

Am I supposed to be holding the fact that he followed through on that against him?

I see this a lot. Trump campaigns on doing something. Then he does it. People are blindsided and demand that Trump supporters be equally shocked and regretful of voting for him.

Probably because during the campaign (and now, for that matter) it was routine for Trump defenders to pretend that he wasn't going to do it, that it was just big talk, take him seriously not literally, etc... Encouraging people not to believe Trump was (and is) standard practice.

"Of course he's not going to do it, that's ridiculous" -> "He said he was going to do it, what are you complaining about?"

It was a delight to be on the other side of that tactic for a change

"For a change." - this being a deviation from Trumpism's usual scrupulous honesty.

Yes

"Of course he's not going to do it, that's ridiculous" -> "He said he was going to do it, what are you complaining about?"

Like “abolish the police” and “end whiteness”?

It’s mottes and baileys all the way down.

The police and whiteness remain conspicuously intact.

I assume Musk has more influence than just through DOGE cuts.

By that logic everyone who voted for Biden should have been OK with Democratic aides and advisors running the show because that’s the way it has been forever, it didn’t even need to be mentioned.

George Soros himself has a vaguely lib, pro-democracy, pro-markets ideology influenced by his youth. His son Alexander, who spent 8 years at Berkeley doing his master's and PhD (graduating 2019), and who is in charge of his charitable giving, is the arch-progressive who, unsurprisingly, supports just about every progressive cause championed in the Berkeley faculty lounge, from homeless drug addicts in San Francisco to Arabs in Israel/Palestine.

Soros Sr had, I suppose, some kind of shadowy influence in that he funded huge numbers of educational and think tank type institutions that promoted his ideas, especially in Eastern Europe. Soros Jr just realized that local politics was even more important to progressivism than national politics and so funded huge numbers of leftist DAs, city council members and so on very strategically in competitive races. I don’t know if that influence counts as ‘shadowy’ given it was all very public.

I don’t know if that influence counts as ‘shadowy’ given it was all very public.

I think one of the things that's unusual about the pairing of Trump and Musk, at least for politicians, is the way that they're very intentionally brash, attention-seeking, and provocative, and, for Trump especially, fractious.

It seems to me that, in the normal course of things, the activist parts of a coalition's base tend to be very noisy and confrontational, and then the more technocratic part of a coalition, or the finance-oriented part, tends to let that activist part suck up all the negative oxygen and emotion and then respond in the most anodyne, bloodless, quiet ways possible, generally making the really big changes. They tend to be more in the Politics and the English Language camp when it comes to attention management. And of course there are often financial or organizational connections between the two parts of coalitions.

Trump and Musk seem like they're collapsing that distinction, which is... interesting.

Anyway, whether or not this way of behaving, this division of labor between funders/organizers/NGOs and the groups they fund, is shadowy is kind of a tricky issue, or so it seems to me. On the one hand, when I read, say, this Tablet story about the Pritzker family, their wealth, and the way they use it, and all the programs they fund, I could see the argument that none of what they're doing is secret; it's all in public, in some literal sense. That's what makes it possible to write that Tablet article, after all. And yet I also know that my fairly well-educated progressive in-laws, who live in Illinois and follow CNN and MSNBC, absolutely don't know any of this stuff, and it absolutely isn't worth the time trying to get them to know about it, because they have all sorts of ideological white blood cells about even the framing of the topic. Same with the topics covered by Jacob Siegel in this article about the rise of the disinfo industry. Same with this famous Time magazine article. Same with all the discussion about the role and influence of USAID. Obama was famously very swayed by Cass Sunstein's theory of nudging groups, which is quite literally about recognizing problems with the attention that normies pay to things and then making policy that leverages those flaws (ostensibly towards pro-social ends). Is Moldbug's Cathedral shadowy? Or is it just normal and inevitable, the reality of complicated modern states dealing with the cognitive realities of their "citizens"?

I feel like this is a major fault line right now. Over and over, one set of people is inclined to say, I think, "Everything is legal and above the board, and this is just what our system literally IS. This kind of technocratic organizing is simply how power works, and how it must inevitably work." And another side says, "Even if it's ostensibly legal, there are so many layers of indirection, and so much rhetorical obfuscation, and so much artful shifting of attention, that surely the goal is not democratic deliberation and self-governance. TPOSIWID." Much like with the USAID stories, whether or not these different organizations or funders or whoever else is shadowy, large blocks of voters sure seem to respond like the organizations have been shadowy when those voters finally realize what the organizations have been up to...

the Pritzker family

Ironic that the collapse of their savings-and-loan-whatever, due to a strategy of chasing subprime loans to create securities, wasn't heeded by the Bush administration as a warning of what would lock in Obama's 2008 general election win. Can anyone explain why they held FDIC-uninsured money and how their settlement prevented account-holders from being made whole?

They have very different styles, so I can understand people having different opinions of them. Soros is about aligning government and society with his ideology; he doesn't necessarily have to go for the top dog, and in fact I think he rarely does, instead opting for influencing education and putting the right people in lower-ranking but important positions. Musk on the other hand did a Hail Mary pass, hoping he could bail himself out if it worked.

I think being pro-Soros and anti-Koch would be more incoherent than being pro-Soros and anti-Musk.

If you can openly buy power like this, is buying "shadowy" influence more or less likely?

Musk's methods being more crass and off-putting to a certain type of democracy enjoyer, I'll opt for "more likely".

Musk's methods being more crass and off-putting to a certain type of democracy enjoyer, I'll opt for "more likely".

Do you mean, "Most people with enough money to buy influence wouldn't do so as openly as Musk has, but can be assumed to want to do so in a discreet way; since Musk has bought influence at the highest level, we should take that as an indication that others do so, conditional on whatever we assume about discretion from the associated politicians?"

So the tariff climbdown begins, at least on the part of the United States. Smart phones, computers, and chips to be exempt from the reciprocal tariffs:

https://content.govdelivery.com/bulletins/gd/USDHSCBP-3db9e55?wgt_ref=USDHSCBP_WIDGET_2

As things stand, this basically decimates small and medium-sized business owners while leaving Big Tech sitting pretty, despite the former being a key pillar of GOP support for decades, and the latter only having their MAGA awakening one month prior to the election.

I think this Hacker News comment really sums up what's just happened:

This is pretty much how I expected this to play out, at least for now. Trump acts all tough and doesn't back down publicly, but China actually doesn't back down. So what happens is that some businesses get exemptions to mitigate the impact. Then some fine print gets changed about how the rules are enforced. Like, suddenly it turns out that Kiribati is a major electronics supplier to the US :) End result - US economy takes a hit, China takes a smaller hit. Trade balance widens further, most likely. The rich get richer, while many small companies struggle to survive.

The admin's hand also seems to have been forced by the bond market going crazy. The trade specialists now have the unenviable task of unwinding the past two months in whatever way is least damaging to Trump's ego. Most likely, everything remains on the books, and we now spend the next several years developing workarounds for this clusterfuck.

Combined with this we might be seeing a major back-down on a lot of Trump's programs:

U.S. President Donald Trump suggested on Thursday that farmers will be able to petition the federal government to retain some farmworkers in the U.S. illegally, provided the workers leave the country and return with legal status.

"We're going to work with farmers that, if they have strong recommendations for their farms, for certain people, that we're going to let them stay in for a while and work with the farmers and then come back and go through a process, a legal process. We have to take care of our farmers and hotels and various places where they need the people," Trump said.

I'll note that personally I'm basically in favor of a policy by which migrants who are employed are a much lower priority for deportation, and ultimately a variably-sized fine is the best punishment.

But this fundamentally undermines any wignat hopes that this administration will seriously halt the melanation of America.

Pessimistically, the upshot of all this is that both Tariffs and Deportations are fundamentally unserious policies, which won't really help the headline numbers, while a few random unlucky individuals get caught up and destroyed in the gears of government.

Optimistically, this is the start of both a more restrained approach from the Trump admin and perhaps even an understanding that we need to restrain the imperial presidency.

  1. Impose huge tariffs on China to try to drive some kind of autarchic domestic manufacturing revolution.

  2. Embarrassing climbdown after the market melts down and your donors / friends get mad. Keep tariffs only on China. This means that cheap manufactured goods, clothing, widgets etc keep flowing in from South and Southeast Asia in huge volumes, so no boost to American manufacturing for any of them.

  3. Exempt electronics, computers, solar panels etc from Chinese sanctions, ensuring that even the industries critically important to national security stay 100% reliant on Chinese manufacturing, because Tim Apple said that the iPhone would double in price if he didn't get his exemption.

  4. Chinese tariffs remain at 125% on the US. Trade deficit with China widens. American manufacturing doesn’t develop at all (suppliers buy the easy stuff from elsewhere and the complex stuff from China, where the exemptions apply). Americans can’t sell anything in China.

This really is what winning looks like.

  5. Fuck up many US manufacturers who rely on parts / subassemblies / materials that don't have alternative sources outside China.

I'm sure this is all just some 6D chess...

solar panels etc from Chinese sanctions

That would be kind of a big deal -- solar panels in particular from China have been heavily tariffed for years and years, despite there being really no domestic industry in that area.

Are you saying that Trump is now reducing those tariffs? I don't think that's true.

The BBC article on this change specifically mentions solar cells as part of the exemption.

Trump acts all tough and doesn't back down publicly, but China actually doesn't back down.

Something that was always apparent if you paid attention but has become increasingly hard to ignore: Trump is not a master negotiator. He plays one on TV.

It wasn't even that good as TV went. It was just Omarosa being an absolute meme queen with the racialized undertones glaringly present, and Trump dunking on clowns. It was novel as TV concepts went, so gotta give that to ABC.

Anyways, I would posit that the Trump tariffs can be summed up in 3 different memes:

stickinbike.png michael scott handshake.png chadxi.png

There are many different image macros that can be applied. Be creative in my stead, dear bored board members.

US economy takes a hit, China takes a smaller hit.

Are

Smart phones, computers, and chips

actually a substantial portion of China's trade to the US? I thought we mostly were sourcing the important parts of those from its neighbors. Is this because most of those supply chains have a penultimate step in China for assembly?

Anyway, I dunno – a lot of small businesses might actually benefit from this, depending on what their line of work is. Where I live there are antenna manufacturing factories (I... think that's what they do?) and I assume competing with China is not fun for them.

As an aside, I can't help but think gradually escalating tariffs would allow Team Trump to get the same end result, but with a lot more stability. Having, say, a year of gradually escalating fees ending at 1,000,000% or whatever we've slapped on China now seems much better from a market's perspective than "1,000,000% in 90 days."

[There might be reasons for the abruptness, of course.]

Don't forget the 10% tariffs on everybody except Canada and Mexico still exist - these sound manageable, but they are still a big drag on the US economy.

Having, say, a year of gradually escalating fees ending at 1,000,000% or whatever we've slapped on China now seems much better from a market's perspective than "1,000,000% in 90 days."

If you're actually going to do it, it's better to do it all on Day 1, because anything else is extremely inflationary as the tariffs slowly tick up (I assume this is the actual advice Trump was given). Of course, it's a bad idea to do it at all.

China makes lots of phones, iPhones for instance. Their biggest export to the US is electronic equipment.

https://tradingeconomics.com/china/exports/united-states

The bulk of the iPhone is produced with Taiwanese, Japanese, and Korean parts, but a good chunk of the value is produced in China. China does 15-30% of the iPhone. Much more for Chinese brands like Xiaomi.

Yes, however, they get paid a small fraction of the total value.

Most of the bill of materials by value doesn't come from inside China; it's shipped there, and then an American firm pays them 10 bucks to assemble it all together, for a $1000 phone.

This is just one more part of why the method of computing trade balances by looking at bilateral differences is completely bonkers.
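
A toy sketch of that point (every number below is made up for illustration; these are not actual iPhone figures): bilateral accounting books the full factory value of the finished device as a Chinese export, even when most of the parts merely pass through China.

```python
# Hypothetical per-unit breakdown of a phone assembled in China
# from mostly foreign components.
factory_value = 500.0             # what customs books as a "Chinese" import
foreign_components = 450.0        # assumed Taiwanese/Japanese/Korean parts
chinese_assembly_fee = 10.0       # assumed Chinese assembly value-added
chinese_parts_and_tooling = 40.0  # assumed Chinese-made parts and tooling

gross_bilateral = factory_value   # what the bilateral-deficit method counts
value_added = chinese_assembly_fee + chinese_parts_and_tooling

print(f"gross 'deficit with China' per phone: ${gross_bilateral:.0f}")
print(f"actual Chinese value-added per phone: ${value_added:.0f}")
print(f"bilateral figure overstates by {gross_bilateral / value_added:.0f}x")
```

Under these assumed numbers the bilateral figure overstates China's actual value-added by 10x, which is the mechanism behind the "completely bonkers" complaint.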

I think we underestimate assembly at our peril. You can't just slap phones together like Lego; you need quality control and various kinds of precision engineering capabilities. The iPhone is very small and thin; you need tight tolerances and clever tricks.

"Cook has stated, "The products we do require really advanced tooling. And the precision that you have to have in tooling and working with the materials that we do are state-of-the-art. And the tooling skill is very deep here [in China]." He further noted, "In the U.S. you could have a meeting of tooling engineers and I’m not sure we could fill the room. In China, you could fill multiple football fields."

15-30% of the value of an iPhone is not trivial and not easily replaced!

Indeed, but it is curious that they would only be paid a tiny amount for that if it was such a high fraction of value.

Gotcha, so it's the assembly part that counts. Sorta what I figured.

For the last two weeks (basically since the whole tariff conversation kicked off) I've been seeing comments here about how Trump is "erratic", "stupid", "illiterate", and a "retard", about how he's going to tank the economy and usher in a new age of Democratic party rule, about how his supporters are all deep-throating cock-slobberes who deserve to lose everything.

I would like to propose an alternative take. What if The Art of the Deal is an accurate reflection of Trump's beliefs and approach to the world? If that were the case, it would seem that the Motte may be seriously underestimating Donald Trump.

I recently started reading The Art of the Deal and I found it interesting to contrast Scott's review of that book with his latest on "The Purpose Of A System Is Not What It Does", as Trump (or his ghostwriter, if you prefer to continue believing that Trump is illiterate) makes a similar but inverse argument.

According to Trump (or Tony Schwartz), one of the key skills of a successful negotiator is the ability to remain focused on what is rather than what ought to be, or what people say. Scott alleges in his review that the purpose of a real-estate developer is to lie, and there is a naive "the purpose of a thing is what it does" interpretation where this is plainly true, but I don't think Scott gives the Trump/Schwartzian position enough credit.

Regardless of its purported purpose, the "role" of planning boards and zoning laws is to prevent buildings from being built. In order for a building to be built, the planning board must be thwarted.

Thus the Developer tells the Contractor to start pouring concrete: the planning board is going to approve this project, we're just waiting on the paperwork. The contractor starts pouring. The Developer then goes to the planning board and tells them: you might as well approve this project, because we already started work and otherwise you'd have to go down to the job-site and tell the Contractor to stop. The planning board approves the project.

Scott would characterize the Developer as having lied to the contractor about having the approval, but did they? The planning board did in fact approve the project after all. That the contractor beginning to pour without approval played a major part in the granting of approval is either of vital importance or completely irrelevant depending upon which side of the managerial versus working class divide you are sitting.

Another key element of the Trump/Schwartzian approach is the idea that there are no "friends" and no "enemies" at the negotiating table, only people who are willing to negotiate in good faith and those who are not. People who refuse to negotiate at all are definitionally in the "not" category.

Finally, contra Scott, I would hold that rather than being vague and unsatisfying, the solution of "find someone who knows more about the issue than I do and pay them to pursue my preferred outcome" is sensible and actionable advice.

With these ideas in mind a lot of his allegedly "erratic" and "nonsensical" decisions regarding Tariffs, Zelenskyy, and Immigration start to look less "nonsensical" and more like deliberate tactical choices.

I voted for Trump three times and as far as I'm concerned this term has been nothing but a big wet fart so far. It's just been one embarrassing clusterfuck after another accompanied by a perpetual drumbeat of asspulled contradictory coping from his fanboys on Twitter.

  • Trump has a cockamamie idea about acquiring Greenland. Denmark promptly tells him to blow it out his ass, at which point Mr. Art of the Deal is completely out of ideas. He just lamely brings it up now and then seemingly at random, even though everyone knows it'll never happen, just to remind us what a limp dick he is.

  • Trump wants Canada to become a state. Maybe. Nobody seems sure whether he's serious about this or if it's just a rhetorical salvo in his ongoing mission to antagonize them for no comprehensible reason. In any event Canada tells him to blow statehood out of his ass too, and in the end the only result is to bail out the Canadian Liberal party.

  • Trump announces infinite tariffs on everything because he's a dumbfuck and thinks anyone will bother to build a factory here rather than just wait a little bit for this to blow over. He wipes a zillion dollars off the stock market and then mostly folds like a bitch anyway.

Art of the Deal my fat ass. All this guy's selling me is the idea that he really is a boob and his first term was only as decent as it was because he wasn't really expecting to win or prepared to do anything.

Man, this is one of those times I read something and think living in the 80s must have been awesome. What kind of pissant planning commission would put up with that? Nowadays even in the small towns I work in, they'd tell you to go fuck yourself and call the cops on your concrete pour.

Scott would characterize the Developer as having lied to the contractor about having the approval, but did they?

Yes in your scenario, but it's not necessary. Try this:

Thus the Developer tells the Contractor to start pouring concrete. The building permit isn't their responsibility and the contractor is paid based on work done (not buildings constructed), so they have no reason to refuse. Worst case they get paid to tear up concrete afterwards. The contractor starts pouring.

If I was a contractor pouring concrete for Trump, I would want all the paperwork signed and notarized in triplicate before I lifted a finger. Because if his weird little scheme goes sideways, he will not absorb the costs, he will try to throw me under the bus and claim that he had never authorized me to start pouring.

There might be real estate developers whom I would trust to have my back (or not), but anyone who trusts Trump not to leave them hanging is too naive to run a business.

And?

...and therefore the scenario doesn't illustrate their point.

(or his ghostwriter if you prefer to continue believing that Trump is illiterate)

I don't think you have to believe Trump is unintelligent, let alone illiterate, to believe The Art of the Deal was ghostwritten. Ghostwriters can be used by someone who doesn't have the capacity to write a decent book, but more often than not, they're used by someone who can't be bothered to write a book although, if they hunkered down, they could. I'm perfectly willing to believe Trump could write a book by himself. I'm less willing to believe that he'd go through the trouble when he can pay someone to do it for him and rubber-stamp it.

Honestly, paying someone else to do something he couldn't be bothered to do would be very "on brand" for him. The question is whether The Art of the Deal is an accurate representation of his worldview.

According to Trump (or Tony Schwartz), one of the key skills of a successful negotiator is the ability to remain focused on what is rather than what ought to be, or what people say.

Sure, but now we have what appears [to me] to be tactically inconsistent backpedaling. Enhanced high-tech manufacturing capabilities were supposed [by whom?] to be the goal, but now only token tariffs remain in the most important areas - and yes, the US has weaknesses in this area that are so significant that major Chinese manufacturing firms being told to suspend shipments of that equipment to the US is probably a bigger deal than most give it credit for. The Americans might indeed not be in any position to unilaterally establish independent industry at this time.

And while people do indeed have incredibly short memories- people barely remember 2020-2022 these days and that economic cataclysm dwarfs any economic disturbance tariffs have caused (oh, market fell 10%? I don't hear reparations for the 30% inflation over the last 4 years in addition to all the authoritarian shit so I don't fucking care!)- my main problem is that the negotiations are highly public, but the timeframe is not.

Let's take the whole 51st State thing as an example. I feel that to start trying to accomplish that goal... well, the economic tactics are sound ones, but there's only a concept of a plan here, nothing more substantive [as perceived by the general public].

When working on any project, the answer to most questions [from a stability/investment mindset] cannot actually be the Underpants Gnomes strategy; we pour foundations so that we can accomplish the next step of the process, but to pour those foundations the finished product needs to be coherent. Is it self-sufficiency, like petroleum? Is it simply reduced dependence with an eye towards self-sufficiency? The last major economic reformer in US history, FDR, had the fireside chat specifically for this reason- massive and immediate reforms benefit from someone explaining why. That should be Vance, since he's capable of doing this whereas Trump is... very not, but I'm not hearing anything.

And doubly so if we're going to see dealmaking consistently in public- whereas right now, we just have the disruption. And yes, this sort of thing absolutely is bad for American provinces like Canada and the EU; to the point that I see the offer of statehood as an early buy-out package for performers capable of being disruptive to larger goals before the layoffs begin... which, you'll recall, was exactly what was occurring around that time.

I've been seeing comments here about how Trump is "erratic", "stupid", "illiterate", and a "retard", about how he's going to tank the economy and usher in a new age of Democratic party rule, about how his supporters are all deep-throating cock-slobberes who deserve to lose everything.

The only real criticism is "erratic"; the other ones are all just incoherent screaming (same with "corrupt", which I have yet to hear substantiated or used outside of a thought-terminating argument).

Scott would characterize the Developer as having lied to the contractor about having the approval, but did they?

Yes, and I don't understand how you can even question it. Remember again the original claim: "they're going to approve it, we're just waiting on the paperwork". Not only does this falsely imply that the approval has been agreed upon (which is why the contractor should go ahead and start), it contains the explicit falsehood "we're just waiting on the paperwork". The developer is not "just waiting on the paperwork", they are trying to gain leverage to force the planning board to capitulate. This is a very clear lie.

If you don't understand, is it because you missed the part where the planning board approved the project?

If not, how is telling the contractor "The planning board is going to approve this project" a lie? Where is the falsehood? Where is the deceit?

Nobody said anything about a preexisting agreement. They said an agreement would be made and that statement was correct. An agreement was made.

If not, how is telling the contractor "The planning board is going to approve this project" a lie? Where is the falsehood? Where is the deceit?

First, because the planning board was not going to approve it at the time that was said. Second, and more importantly, you left out the very clear deceit I already cited: the developer is not in fact "just waiting on paperwork", he is engaged in manipulation to apply leverage to the planning board so that they will approve it.

Nobody said anything about a preexisting agreement.

It is very clearly implied, as otherwise the contractor would not go ahead.

They said an agreement would be made and that statement was correct. An agreement was made.

Yes, only because of the lies the developer told. That doesn't count as an accurate reporting of facts.

Your hypothetical scenario is not some clever bargaining flourish. It is a dirty lie that only a scumbag would engage in. I have pointed out the express and implied untruths that the developer says. If that isn't enough for you to call it a lie, then I lack the means to persuade you I guess.

Yes, it's all lies. Big mean Trump is fooling the innocent construction company and permit offices of NYC, all of whom are completely non-corrupt innocent idealists seeking only the best for everyone.

Back in reality, Trump is a NYC real estate developer the same as all the others. Everyone involved knows the game. Trump didn't make the game this broken, NY politicians did. Trump has always been critical of them, and engaged in theatrics to expose them - e.g. writing a book, public clashes with Ed Koch over what later became the Trump skating rink, etc. This is what led him to enter politics.

What would you have preferred he do? Be the only honest real estate developer and go bankrupt cause nothing gets built? (Similarly, I don't fault Soros for breaking the pound.)

What would you have preferred he do? Be the only honest real estate developer and go bankrupt cause nothing gets built?

Yes. "Everyone else does this too, it's how the game is" is not and has never been an excuse for immoral behavior. You are responsible for your conduct, no matter the circumstances you find yourself in.

What’s immoral about finding an end run around retarded awful laws and rules?

Lying, and not following the law, are both immoral without a sufficiently good reason. "I want to make money" isn't remotely good enough of a reason to lie and break the law.

‘Shit needs to get done and these kinds of adversarial boards aren’t doing what they’re supposed to’


Perhaps, but the issue here isn't everyone else doing it, it is specifically the government and system that enacts and enforces the rules. If that system doesn't play by its own rules, then playing by those rules will only hurt you. You can not expect a system without enforced rules to produce any other result, because even if you play by the rules others will not.

We aren't talking about "does this system produce good outcomes", though. We are talking about "is it wrong for someone to do bad things because that's what the system incentivizes", which IMO it is.

What I'm saying though is that it isn't the people who are wrong, it's the system.


Following the rules as-written as opposed to the rules as-enforced doesn't make you a paragon of morality; it makes you a chump.

Doesn't this "Art of the Deal" lens prove too much?

Can you find any economist who thinks universal tariffs are a good idea? Any published literature on it?

Maybe they're deliberate tactical choices, but to what gain? The markets lost something like $10t in value since he announced them. Losses on this scale were certain while any recovery afterwards is more speculative due to the loss of confidence. Is whatever Trump thinks he's going to gain from this worth that risk?

Almost certainly no. Just because you may be doing this as a tactical choice doesn't shield you from your choice mathing out to erratic and retarded.

Doesn't this "Trump is a buffoon and the people who voted for him are a bunch of deep-throating cock-slobberes" lens prove too much? Why is the principle of charity only being applied in one direction here?

I consider myself to be using the deliberate-tactical-choices lens here. Evaluate the choices by their effects: they're bad!

Stated another way, I interpret your post as saying: don't judge Trump by the virtues of his choices, judge him by the results. See how he punks the planning boards who don't let shit get built? Great. Now lets apply that standard to his universal tariffs.

When I do that, they still seem like a disaster.

How can you claim to be "evaluating a choice by its effects" when those effects have yet to manifest and there is significant disagreement regarding what those effects will be and what "bad" even means in this context?

Well, he caused quite a market crash with his announcements, even though the market was likely aware that he would not go through with his tariffs.

I think that few people claim that starting trade wars with most countries in the world (plus a few uninhabited islands) would go well for the US economy. The disagreement over the effects seems to be more about whether the per capita GDP of the US (89k$/a) would fall to the level of Denmark (72k$/a) or to the level of Estonia (33k$/a).

Apart from the markets, in the international community he managed to disabuse people of the notion that the US is a steadfast partner who will not fuck you over because their commander in chief has just read another book, and also most of the belief that official announcements by the POTUS are more reliable than The Onion.

Well, what does a good outcome look like to you?

Every way I can think of to measure this universal tariff move looks bad, regardless of how many dimensions of chess I use.

Aside from maybe demonstrating that the coffee shop revolutionary liberals suddenly love capitalism and are hypocrites. That has been amusing but not really worth the cost.

Are the people who called Trump supporters "a bunch of deep-throating cock-slobberes" in the motte with us right now?

I'm pretty sure they are.

Yes. In this thread even.

Not with those exact words (as that would be against the rules), but yes, we clearly have them.

Ah damn, I did not expect that. I stand corrected.

Trump is "erratic", "stupid", "illiterate", and a "retard"

Erratic? Definitely. Stupid? In a sense. Illiterate? No. Retard? By the medical definition, of course not.

I prefer the term "buffoon" myself.

his supporters are all deep-throating cock-slobberes

I'm assuming that's supposed to be "cock-slobberers". I wouldn't call all his supporters that, but a decent chunk, roughly 33-37% of the country, certainly are. I'm confident enough in that assertion that I'd be willing to bet money on it, if such a market existed.

With these ideas in mind a lot of his allegedly "erratic" and "nonsensical" decisions regarding Tariffs, Zelenskyy, and Immigration start to look less "nonsensical" and more like deliberate tactical choices.

There are two big problems with the "4D Chess, Art of the Deal, Trust the Plan" style of arguments.

  1. It's deployed as yet another everything-proof shield for any of Trump's actions. Trump cultists desperately, desperately want any reason to love the man, so there's an extensive distributed search to come up with any reason to do so. This is just like how woke academics searched for any reason not to blame black people for their own problems, and ended up coming up with unfalsifiable ideas like "structural racism" as the cause for everything. When the motivated reasoning is this blatant, you should be suspicious of the purported results.

  2. Where are the actual results? Trump has already had a full term where he was full of erratic actions. Where are his successes where the erratic behavior clearly led to a good outcome? Note that there are going to be happy accidents every once in a while, so we would expect at least a few good results even if we made an RNG simulator the President. Trump certainly had a few good results during his first term, but they were mostly just him acting like a conventional politician, e.g. Operation Warp Speed (which Trump later disavowed, because of course he did) or his SCOTUS nominations (more of McConnell's victory really, but Trump gets some credit for not buffoonishly sabotaging it in some way).

I wouldn't call all his supporters that, but a decent chunk, roughly 33-37% of the country, certainly are.

(33-37%) / (Trump-supporting % of the country) = % of Trump supporters that are "deep-throating cock-slobberes". I'm assuming less than half the country supports Trump at this point, so "33-37% of the country" is actually 67-80% of Trump supporters (before factoring in "the evaporative cooling of group beliefs").
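
A quick check of that arithmetic (the 46% and 49% supporter shares below are assumptions chosen to bracket "less than half the country"):

```python
# If 33-37% of the whole country are in the group, and only 46-49% of the
# country supports Trump, the implied share among supporters is:
for country_share in (0.33, 0.37):
    for supporter_share in (0.46, 0.49):
        print(f"{country_share:.0%} of country / {supporter_share:.0%} supporting"
              f" = {country_share / supporter_share:.0%} of supporters")
```

Which lands on roughly 67-80%, matching the range above.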

Yes, they've essentially captured the Republican party in its entirety by this point. Criticizing or even disagreeing with Dear Leader too consistently is seen as a crime worthy of (political) death, no matter the topic or how wrong Trump is.

You should read patriots.win. I'm guessing that it is peak MAGA. And yet they engage in lively debates, with comments calling out Donald when he makes mistakes, and getting upvotes.

Criticizing appointments

If only Trump knew ...

Unfortunately, bad appointments has been one of Trump's glaring weak points. 28 upvotes, 12 hours ago

Sure, there's always been a bit of dissent around the fringes (/pol/ has had similar debates). But these people are nowhere close to being in the driver's seat when it comes to MAGA. The tariffs debacle was really the ultimate test, as it was 1) a big policy that 2) affects something almost everyone cares about (the economy) and 3) had a pretty significant flip-flop in a very short timeframe. Basically everyone should have been pissed either when the tariffs were announced, or when the tariffs were significantly watered down.

"Not in the driver's seat" and "political death" are vastly different.

If, come the 2026 midterms or the 2028 presidential election, the economy were looking decent to strong, and there had been significant progress towards balancing US trade deficits, restoring US shipbuilding capacity, or peace in Ukraine, would you update your priors? Or is being an anti-populist such a core component of your identity that you would deny reality to protect your ego?

If the latter, how is your claim that "Trump is a buffoon" any less of a fully general "everything-proof" argument?

US shipbuilding capacity isn't going to be restored without first breaking the various unions involved, which isn't going to happen. Even after that you'd need someone who could build things from the ground up -- maybe Musk has someone at Tesla who could do this. But it would probably require hiring a lot of foreign experts, too.

Ukraine seems unlikely to happen due to the intransigence of the parties (in particular Putin, who thinks he can get it all eventually) but if it does I'm sure it will be spun as surrendering to Putin (which it wouldn't be, but Ukraine would lose significant territory)

Strong economy (which would mean back to trendline without strong inflation, NOT merely a return to positive growth), trade deficit narrowed by a lot (this really doesn't matter, since the thesis of tariff criticism is that tariffs are bad for the economy, so it's fully covered by that item - but the volatility, reputational loss, and immediate financial pain need to not all be for naught), and peace in Ukraine with a Ukraine-favoring resolution would make me update my assessment.

US shipbuilding is very "who cares", and I'm not sure how to judge a successful policy since fixing it would take decades of reform. Certainly, having revenue from US gov purchases of ships go up would not count.

I already give Trump credit for destroying wokeism, or at least hastening its demise. I also gave him credit for announcing a buildup of the military, which is a good idea. Hopefully he actually goes through with it and doesn't waffle.

I don't find balancing US trade deficits to be a priority. Something like reshoring (high tech) manufacturing though, sure.

Yes, it would be great if he could restore US shipbuilding.

Peace in Ukraine is highly contingent on what the peace looks like. If it's effectively "force Ukraine to surrender and give up huge swathes of land that they wouldn't need to if Biden were still around", that's not a good peace. If it was "ceasefire at current lines, and Ukraine protected from future invasions by European guarantees", that'd be reasonable.

So that is a "no" then, you would not update your priors.

This is pretty low effort and seems meant only to antagonize and not actually get at the other person's reasoning. Playing gotcha with "I asked you a yes or no question, and you gave me a couple of paragraphs of explanation but didn't say yes or no therefore you didn't answer my question" is obnoxious. If you genuinely believe you still do not have an answer:

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

Be as precise and charitable as you can. Don't paraphrase unflatteringly.

Don't imply that someone said something they did not say, even if you think it follows from what they said.

Write like everyone is reading and you want them to be included in the discussion.

Honestly it must be said that these rules are often honored more in the breach than in the observance, but this one-liner just stands out as egregiously "ZING! I am not arguing to understand but to score points."

@TheAntipopulist is one of the specific users I had in mind when writing the OP.

Their three paragraphs here, and replies elsewhere in this thread, can be summarized as: "even if the populists are successful (which they won't be) it will be for reasons outside their control and thus won't count."

So cutting to the chase: no, the anti-populist is not going to be updating his priors regarding populism and populists.

With that in mind, do you really think they are arguing to understand rather than to score points? A large portion of the user's output (along with their username) is little more than casual disparagement of anyone outside the managerial class.

Casual disparagement that you are not just tolerating but actively defending from push-back.

Their three paragraphs here, and replies elsewhere in this thread, can be summarized as: "even if the populists are successful (which they won't be) it will be for reasons outside their control and thus won't count."

All three specify circumstances under which he would update, and some of them aren't even all that demanding. None of them require things outside the government's control or at least not wildly more than your list that he was replying to. Reading "here are three ways I would update" as "I wouldn't update" is... certainly a thing someone said on the Internet today.

Honestly, you're not making much sense. You don't seem to be reading what the words in front of you actually say, but what your opinion of the person posting them leads you to expect to be there.

I sincerely don't understand how you're coming to that conclusion based on what I wrote.

Where are the actual results? Trump has already had a full term where he was full of erratic actions. Where are his successes where the erratic behavior clearly led to a good outcome?

Ukraine seems like an obvious example of the success of the "madman" diplomatic style to me. Trump (allegedly) threatened to bomb Russia if Putin invaded [this, as I understand it, would be a big no-no by conventional diplomatic wisdom] and as a result millions of Russians and Ukrainians were spared tremendous pain...for a few years, until Biden and his more conventional and "less erratic" foreign policy took over.

I definitely do not think that Trump is beyond criticism. But I do think that "madman diplomacy" can work – and Trump isn't the first leader to use it effectively.

That's a stretch since Russia didn't invade any country during Obama's first term either. Even if you think Trump really did prevent a war during his first term he didn't do anything substantive to fix any underlying issues, so he just can-kicked.

Are you taking the position that the "little green men" were totally not Russian special forces and that anyone who says otherwise is an Alex Jones-tier conspiracy theorist?

Because if not, Georgia, Ukraine and Moldova would all like a word.

No, I agree little green men were Russian. It's just a question of timing. Moldova was in the 90s, Georgia was 2008, Crimea was 2014. None of those happened under Obama's first term, nor Bush 2's first term.

Not really sure what the first term has to do with it – Russia invaded Ukraine in Obama's second term and successfully annexed Crimea, but that was almost certainly because of intervening events, not because Putin prefers invading in people's second terms (a courtesy he did not extend to Biden).

From a certain perspective all politics is can-kicking. I think that Trump's erratic actions in his first term (threatening to bomb Russia if they invaded Ukraine) led to a good outcome (Ukraine not being invaded). You're right that we can't split the timeline to test a counterfactual, but that's true in all cases, by which logic politicians should never get credit for anything good that happens.

I could be persuaded it was a coincidence if there were good evidence that Putin had an internal clock set to 2022 for some reason (e.g. ongoing modernization efforts that made Russia much more lethal in 2022 than in 2021), but considering that Trump sent lethal aid to Ukraine, it might have been in Putin's interest to invade even sooner – but he didn't, for some reason. At the risk of oversimplifying, I find "Putin being 5% persuaded that Trump might actually strike the Kremlin" a very parsimonious reason.

It's not the fact that it's the first term, it's that Russia's actions don't follow a predictable clock. Blaming Biden for Ukraine being invaded is almost as bad as blaming Trump for COVID happening under his watch. Russia is the primary determinant of how Russia acts. Maybe Biden withdrawing from Afghanistan slightly helped goad Russia to invade, and maybe Trump's threats might have had some small impact, but they were not the primary determinants by any means.

From a certain perspective all politics is can-kicking.

Not at all. If the debt explodes to 99% of the way to bankruptcy during one leader's term, and the bankruptcy happens under his successor, would we say the first leader was great and only the second one was the problem? Obviously not. The first guy set the powder and lit the fuse; it doesn't really matter that the bomb only went off when he wasn't in charge.

While it's true to some degree that we can't know with perfect accuracy unless we had a time machine that let us rerun the presidency with the alternative candidate, some actions are clearer than others; e.g. I doubt that if Biden had another term we'd have a tariff-induced market crash. Maybe Biden could have caused a crash in another way, but Trump owns his own stupid actions in this universe.

Russia is the primary determinant of how Russia acts. Maybe Biden withdrawing from Afghanistan slightly helped goad Russia to invade, and maybe Trump's threats might have had some small impact, but they were not the primary determinants by any means.

I would say Russia is actually relatively reactive on the international stage. However, I don't think Afghanistan had much to do with Russia's invasion of Ukraine, I think that the Biden administration's non-erratic approach to Ukraine policy is more to blame. As we have seen, it was incapable of deterrence.

If you don't think that Trump's threats (which were effectively an informal security guarantee of Ukraine) have any impact, then it seems to me that Ukraine's continual asking for NATO membership or security guarantees is pointless, since "Russia is the primary determinant of how Russia acts" and they will invade Ukraine regardless of security guarantees. (Put in that light, it kinda seems like NATO is pointless.) Is this your position?

Maybe Biden could have caused a crash in another way, but Trump owns his own stupid actions in this universe.

Right, but in your telling, not his smart ones. Deterring Russia is not to his credit, but the stock market crash is.

Blaming Biden for Ukraine being invaded is almost as bad as blaming Trump for COVID happening under his watch.

Funny, because I do blame both of them for those things.

For Biden: what the fuck else do you think Hunter was doing there? The Ds have been angling for that war for years and have been playing stupid games in Ukraine even back when he was VP.

For Trump: massive partisan riots broke out and weren't controlled. Law and order gave way to burn, loot, murder in literally every major city, and he did what, hold a Bible upside down? And the money printing began under him; the Ds continued it, sure, but that was a bad move from the get-go.

The point is you should blame them for the response they had to the event, not the fact that the event happened under their watch. There's not much evidence to say that Biden instigated Russia to invade, and it's obviously ludicrous to insinuate that Trump caused COVID.

Hunter was in Ukraine being corrupt. I've not seen any compelling evidence saying he was there to goad Russia to invade.

BLM riots breaking out during Trump's term isn't really Trump's fault. He might have instigated them to some small degree, but they were primarily caused by the high point of woke mania. I agree Trump didn't respond well to them (nor to COVID more broadly), but that's a separate discussion.

The point is you should blame them for the response they had to the event, not the fact that the event happened under their watch.

If they had the means and opportunity to prevent "the event", it is perfectly reasonable to blame them for the event happening under their watch. Putin is not, in fact, an implacable force of nature.

It would seem to me that, if we take the pouring of concrete as an analogy for Trump's policies, then in all cases, the parties opposing him (China, Ukraine, the courts, respectively) are all refusing to accede to his demands, and are attempting to go to the contractor directly and tell them to stop.

Literally just lying might work fine in business as long as you avoid legal trouble, the same way that stiffing contractors might work out perfectly fine if you're the bigger company, but proper governing is a different beast entirely. Risk-taking like that in real estate just means you might pay a few fines if the zoning board says no; risk-taking in government can mean tons of people lose their jobs, lose their homes, or even die, depending on what you're taking the risk for.

And unlike with venues or contractors, where there are plenty of fish in the sea so pissing them off isn't too bad, there is not another Canada or UK or EU to turn to. You can poke the bears a little, especially with a country as powerful as ours, but this is an iterated game and defecting is way less useful. Likewise, there's a reason his measures still have us at -8% from six months ago while Polymarket has been hovering around a 50% chance of recession: people and the overall market want and need long-term reliability.

Edit: And that's not even to mention: what are we getting out of it? Even if we settle into a "win" for him on getting high tariffs implemented, there's plenty of strong evidence that they will hurt the economy, reduce downstream jobs that use those inputs, and make us poorer.

Edit2: Also, here's a really great example of how this approach seems to be failing: Canada. Everything was lined up for a Conservative victory; it was basically taken as a given. PP would have been really Trump-friendly. Instead, Trump rallied the Canadians so hard that the odds have shifted massively, and Carney will be elected not just as a Liberal Party pick but as an anti-Trump pick.

Politics is an iterated game, and defecting so hard with aggression towards Canada has most likely cost him in the long run. And there is no other Canada to turn to; he can't just run off like you could with contractors or venues or city zoning boards. Our closest and friendliest neighbor, economically and geographically, has been pushed away.

Our closest and friendliest neighbor, economically and geographically, has been pushed away

Please don't mistake politicking for reality -- we are right here and not going anywhere (not least, though not only, because we can't).

Counterpoint on Canada: PP was never the "pro-Trump" pick. He's Trudeau-lite, instead of Carney as Trudeau 2.0. Trump supporters cheering him on are missing the point just as badly as Trump haters cheering his downfall. Frankly, there is no pro-Trump option in the Laurentian elite, and there is unlikely to ever be one with the current arrangement of Canadian politics. You would need one of two things to happen: a PM from Alberta, or the current Canadian political class having it hammered into their skulls that they are truly a vassal of the US, probably by a trade war that crashes their economy but leaves America unnoticeably affected.

PP might not be perfectly Trump-aligned, but on the question of who would be more accommodating of MAGA ideology and Trump's foreign policy, it's definitely him over Carney.

Is it really "lying" though if the statement is objectively true?

By the same token, if the statement is true, where exactly did "the defection" occur?

You seem to be arguing for a concept of "truth" that is independent of ground-level reality.

You said

Scott would characterize the Developer as having lied to the contractor about having the approval, but did they? The planning board did in fact approve the project, after all. That the contractor beginning to pour without approval played a major part in the granting of approval is either of vital importance or completely irrelevant, depending upon which side of the managerial-versus-working-class divide you are sitting on.

I think yes, they did lie. They made a guarantee that they knew had decently high odds of not being true, and did not express this to the contractor. Even if we don't label it as lying, it is certainly misleading, and deliberately so: most people in good faith understand the contractor's question to mean "Is the project currently approved?", and playing tricky semantics doesn't absolve the developer of deceit. If the developer wished for honesty, they could simply have expressed the truth: "It is not currently approved, but I am confident in my ability to get it done, and believe the chances would be all but guaranteed if we pour early."

From the pen of Scott: Come On, Obviously The Purpose Of A System Is Not What It Does

Scott offers several examples of how POSIWID results in absurd analysis. His examples are selected for maximal absurdity, so it's amusing that three out of four directly undermine his case, and the fourth is still a pretty good argument against his position.

The purpose of a cancer hospital is to cure two-thirds of cancer patients.

This is a significantly more accurate statement than "the purpose of a cancer hospital is to cure cancer", because numerous considerations militate against curing cancer: things like economic considerations, bureaucratic constraints, and the work/life balance of the staff. And even when all these align such that curing this specific cancer is the system's goal, "curing cancer" might not mean what you think. I was especially amused by this exchange in the comments:

The purpose of a system that has egregious side-effects is very likely not aligned with my values. It might not be malicious, but it does not care about what I care about, and it is worth at least looking under the hood to see if what it cares about and what I care about are zero-sum.

Like chemo?

...written in the comment section of the author of Who By Very Slow Decay. Yes, very much like chemo. This example, by itself, is probably the one I'd like Scott to address specifically.

The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.

It seems to me that this is a significantly more accurate statement than "the purpose of the Ukrainian military is to defend Ukraine from hostile military action." America and NATO are very specifically and very openly throttling aid to keep Ukraine from being defeated outright, but also from being able to hit back too hard. Stalemate appears to be the deliberate objective, and has certainly been the openly-stated objective of many Ukraine supporters in this very forum.

One could make a similar statement about the Russian military as well. Any description of the Russian military that doesn't account for the realities of coup-proofing and endemic corruption is not going to make accurate predictions about the real world.

The purpose of the British government is to propose a controversial new sentencing policy, stand firm in the face of protests for a while, then cave in after slightly larger protests and agree not to pass the policy after all.

His intention here is to achieve absurdity by narrowing the scope to one specific result, rather than the sum of results, and in fairness, he provides examples of X randos arguing in this fashion. "The purpose of the British Government is to keep a lid on the British People while pursuing goals orthogonal to their interests" seems a more parsimonious description, but even Scott's version seems more accurate than something like "the purpose of the British Government is to execute the will of the British people as expressed through democratic elections".

The purpose of the New York bus system is to emit four billion pounds of carbon dioxide.

Again with the absurdity through inappropriate narrowing of scope. But even with a framing as uncharitable as this, it's worth noting that all systems have costs, and that a description of a system that ignores the costs, and how those costs are managed, is a worse description than one that centers them. This is true even for descriptions consisting of only one significant cost, because the benefits of systems are generally far more obvious than their costs, and thus the missing information is easier to find.

This is a bad article, and Scott should feel bad.

I'm the kind of person who is slightly obsessed with tracing the genealogy of memetic slogans like "The purpose of a system is what it does." As it turns out, it was coined by Stafford Beer, one of the architects of Cybersyn. Let me give you the short version of what Cybersyn is, in my view: societal engineering through computational power. It is one of those horrible ideas that can't be flushed, like a stubborn turd floating in the toilet minds of its proponents. It devolves into the Social Credit Score to subjugate the plebs. We are already halfway there: consent is manufactured on reddit and other "social media" platforms with content moderation policies and the panopticon social approval of likes/dislikes, and there are already reports of people being debanked for their political opinions. Now that meme is surfacing again with LLMs, as though the failed experiments were merely missing them and this time they will fix it. It is extremely worrying that an avid reader of Trotsky is being quoted again...

"The purpose of a system is what it does" is obviously dumb in a lot of cases: the purpose of NIMBY zoning boards (may they all find themselves dying unpleasant deaths) is to keep property values high. That this results in nothing getting built is an unfortunate side effect, and it does sometimes happen that NIMBY zoning boards allow things to get built. In other cases it's actively absurd. In others it's a valuable reminder that mission statements are just bullshit, and in still others it's a description of institutional capture.

The whole article and the phrase which inspired it seem like desperate groping in the intellectual dark for the concept of The Principle Of Double Effect, and an illuminating example of the problems which arise when it is lacking.

The inability to distinguish between intended and unintended effects, and foreseen and unforeseen consequences, is lethal to a moral evaluation of human action.

Yep, Scott's at his worst when he's complaining about his outgroup. Not that most of the twitterati who employ POSIWID are particularly shrewd analysts, but the concept has plenty of explanatory value.

For another recent example of Scott getting sloppy, see his article on how the BAPist "based post-Christian vitalists" were hypocritical for caring about the victims of the Rotherham grooming gangs when they normally sneer at caring about poor people an ocean away as cucked slave morality. Of course, the obvious counterargument is that the Rotherham victims were white Westerners like themselves, aggressed upon by a far more alien outgroup.

I think he was actually closer to the mark there. You can see the hypocrisy when someone like KulakRevolt, for example, is calling for all of England to be burned down over the Rotherham gangs, as if he doesn't hold promiscuous fatherless girls from the lower classes in utter contempt himself. When all your grievances are formulated around tribal affiliations, you can argue that it's okay when we do it and bad when they do it, but you can't argue that you genuinely care about young girls being mistreated. That sort of thing gives the game away when you're trying to convince people they should be outraged at rape and grooming while your actual objective is to stir hatred against your alien outgroup.

If Kulak hated European maidens he wouldn't have constructed his entire identity on the worship thereof. I don't think this is a good example at all.

The idea that you can't really care about your ingroup if you wouldn't care about them were they not part of it is dangerous nonsense.

I ask you: would you love your mother if she weren't your mother? And if you wouldn't, how dare you say you love her? It's absurd. Who we are and what relationships we have is important and meaningful. It is not and never has been morally neutral.

I hate this gnostic reduction of our essence to some abstract individual will with every fiber of my body.

The Rotherham girls are not his ingroup just because they're white. He constantly talks about what he thinks should happen to white people who are also not in his ingroup.

His feigned outrage over "European maidens" being besmirched by Muslims is because Muslims are doing the besmirching, not because he actually cares about victimized white girls. If it were Irish grooming gangs responsible, he might contrive some anti-Irish reason to wash the streets in blood (he's certainly flexible like that), but more likely he'd just find something despicable brown people are doing elsewhere.

I'll note that, by reputation at least, (often drug-enabled) abuse and grooming of lower-on-the-totem-pole teenaged girls is a "the purpose of the system is what it does" for Kulak's claimed ingroup of reconstructionist pagans.

I don't really intend to go experience reconstructionist paganism. It may well be a false stereotype- and frankly doesn't much affect my (extremely negative) opinion of either reconstructionist paganism or idiot teenagers who experiment with it. But Kulak doesn't seem very upset about it either way. Nor does he seem to care very much about war rapes by the Russian Army, for another example of white people doing this.

Yeah, that's my point. If anyone else was drugging and raping teenage girls (including teenage white girls), Kulak wouldn't care. He just wants to see bloodshed. Also, his recent Braveheart Viking Hells Angel Paganism schtick and telling all his right-wing r3tvrn Christian followers that their religion is fake, gay and Jewish, is almost as hilarious to me as the people who still think he's an OF girl.

Isn't he/they an MTF?

No lol, he just picked an anime avatar and now some of his twitter audience unbelievably think he’s a woman. I don’t think he’s even claimed to be, so it’s not even a grift, it’s just weird or very stupid people.

No.

This is like complaining that a Muslim cares about the Umma even though he cares about his sect or tribe more.

How dare people have Ordo Amoris? Their care must reduce to one bit!

Obviously people have circles of concern. And obviously just because someone doesn't extend their total moral community to all of humanity or all of creation doesn't make them abnormal. On the contrary.

Multi-level tribalism is a perfectly acceptable and eugenic human behavior, albeit with some much-talked-about drawbacks. It is not, however, reducible to nihilism or egoism.

I think you give too much credit. I don't believe people like that feel ordo amoris for anyone at all. It's not about concentric circles of affinity, it's about identifying an enemy and manufacturing a grievance. I might believe some people feel some faint amount of "ordo amoris" for distant white girls because they happen to be white, even if they otherwise hold them in contempt, but not when every other message is about how they're dirt. Oh, now you care because a Muslim touched them? No heat graph meme argument is going to make that convincing.

Well, I believe that you don't give people enough credit because they're part of your outgroup, and that your standards for what people are allowed to care about without being hypocritical are bad models of people's behavior, and therefore functionally useless except as the very sort of grievance they denounce.

The idea that people feeling empathy for the plight of people who look like and feel like them is bad, empty or without meaning in some way is, I believe, one of the great sins of Western civilization. And I don't feel difficulty defending anybody who feels such feelings, wicked as they may be, far from me as they may be.

Indeed, insofar as humanism has any degree of visceral grounding, it springs from this feeling and cannot denounce it without sapping itself.

Well, I believe that you don't give people enough credit because they're part of your outgroup

Fair. People who hype genocidal warfare are indeed part of my outgroup.

and that your standards for what people are allowed to care about without being hypocritical

I do not think you understand what my standards of what people are "allowed" to care about are.

The idea that people feeling empathy for the plight of people who look like and feel like them is bad, empty or without meaning

This is not what I believe.


It's perfectly possible to not-care-if-individual-Xs-come-to-harm without hating Xs in general, or indeed even if you like Xs. Plenty of people like bunny rabbits, and might even sincerely love their pet rabbits, without turning into animal rights activists.

Would you say people who love pet rabbits in general but still love humans more don't truly love pet rabbits?

All you say is possible, I just don't believe it's an accurate or charitable description of almost anybody's concerns.

Suppose a man loves his pet rabbit, and finds pictures of rabbits abstractly cute, but happily eats rabbit meat without a twinge of guilt, and has never lifted a finger to campaign to ban the hunting or industrial farming of rabbits. Suppose that he has a personal enemy. Now suppose that he learns that this enemy sometimes goes rabbit-hunting; and suppose that, having found this out, he makes a stink, ranting to all who'll listen about how it's outrageous, how the guy must be brought to account, and now won't everyone see how much of a monster he is, like I've been saying all along: he's been blowing cute defenseless bunnies' brains out for fun, you can't deny it now.

In such a case I think it's fair to accuse this man of using the rabbit thing as a convenient weapon against someone he hated anyway; and to say his anger has very little to do with a sincere concern for rabbit welfare. Even if he really does love his pet rabbit.

While his criticism kinda missed the mark, I do think there's something inconsistent about it. You can have a consistent ideology of supremacy for your own ethnic group, but in a globalized world it's not really compatible with being a Nietzschean individualist who sneers at caring about the weak in general. The archetypal ubermensch is a pre-Christian warlord - an aristocrat who strides above the petty concerns of his own nation's peasants and paupers. The 'master' isn't interested in whether the daughters of the slaves two counties over are getting raped and tortured, white or otherwise. Unless he considers those counties part of his holdings and, therefore, his alone to rape and pillage.

A guy who's concerned about tortured little girls an ocean away because they're white girls and he considers the fate of the white race his business, whether or not he stands to gain anything from it, has more in common with a guy who's concerned because he considers the fate of all Homo sapiens his business, than with a guy who actually only cares about himself, his kin, and maybe his nation.

People can't seem to get it through their heads that Nietzscheans aren't master moralists; they are would-be designers of their own moral codes, and specifically reject the impositions of acting as a master or as a slave.

You are allowed to care for the weak, or for anything or anyone, insofar as you deduced on your own, and not through social mimetism or scolding, that this is right and true. But it has to come from you: not from whims, but from your own self-legislated catechism.

I feel like this is the same brand of lazy criticism levied at objectivists for acting collectively despite being individualists. It's like people just imagine what the ideology is and what it precludes instead of actually asking or reading about it.

I'm aware actual-Nietzsche is more nuanced. But the guys Scott was debating aren't serious Nietzschean scholars, nor do they claim to be. Perhaps I should have just stuck with the tongue-in-cheek Based Post-Christian Vitalist coinage. The point is that these are people who sneer at the entire concept of Effective Altruism and indeed charity. You can't do that and care about Rotherham. It's untenable. If you're an American and you care what happens to the Rotherham girls, albeit only because they're white, then you're not coming from a completely different paradigm than the EAs. You just have an unpopular opinion on who the most relevant moral patients are.

As Objectivists would say: altruists, let alone utilitarian ones, do not have a monopoly on caring about people. And their claims that they do are an intellectually dishonest trick to avoid admitting that good-natured feelings can be arrived at through means other than their pathology.

I would, actually, say that "altruist" objectively, etymologically describes anyone who cares about other people. It's what the "altr" means. Altruism is a broad church. Some altruists care about shrimps and others only care about humans. I see no reason why altruists who only care about white humans should act like they're something completely different.

It's simple. Non-altruists don't find alterity inherently valuable and it enters differently or not at all into their ethical calculus.

Arguing that they are still altruists because their calculus still leads them to conclusions similar to that of altruists in some cases is intellectual dishonesty.

You can redefine the word to be broad enough as to become useless. But that's not worth engaging with.

I claim that the calculus is the same. When it comes to caring whether perfect strangers live or die, suffer or thrive, in ways that will never affect you - either you do, or you don't. Those of us who do, I'm confident are, in an overwhelming majority, applying the same drives in the same ways. Sure, some of us care about the suffering of our countrymen, others about the suffering of our whole race, others still of the whole human race, and others still about the suffering of all animal life. But the only thing that changes between all those cases is how you draw the border between the people you care about, and the people you don't. It's still altruism even if it's race-specific, much as someone who cares about other humans but doesn't give a fuck about animals is still an altruist.

This isn't to say you can't have genuine non-altruists who, by coincidence, have similar practical aims to altruists. For example, you might object to rape gangs not because you care whether the victims suffer, but due to a deontological objection to rape. Or you might value the survival of your ethnic group, without caring about the suffering of any specific members within it per se, and treat the Rotherham gangs as one facet of a genocidal attack against your race as a whole. I wouldn't call those people altruists. But once you start talking about the suffering of random girls an ocean away as something which in and of itself should make your blood boil, something which you have a moral impetus to stop if you can, even though it's in no practical sense your problem - then, sorry, you're an altruist. Albeit a narrow altruist. And a lot of people screaming about the British rape gangs were using that kind of rhetoric.

(Of course, they may have been lying — perhaps Scott was too optimistic in taking those fragments of altruism as glimmers of an underlying better nature, rather than disingenuous, cynical attempts to play on actual altruists' emotions and win them over.)

Wouldn't deducing any moral code after reading Nietzsche by definition not be "on your own, not through mimetism" etc?

He enjoins you to undertake your own consideration of the moral problem. This, in my view, does not recurse, because you can look at it and disagree that making yourself moral legislator is a good idea.

There are people in the specific group this thread is talking about who believe in the possibility of a Christo-Nietzschean synthesis, for instance.

We quickly arrive at topics where logical contradiction is not disqualifying, however, so such logical descriptions are instrumental at best.

"The purpose of a system is what it does" is a stupid opinion if it is taken as a general mathematical truth. The concept of purpose assumes intentionality (the purpose of something is the intent of the people who built/used/participated in it) and therefore the opinion assumes the effects of a system are always those intended by the actors, which is obviously false.

Most of the time, "the purpose of a system is what it does" instead means that what the actors want is less important than what the system actually does (it provides more predictive power, as you said).

There are some cases however where intentionality is very important, for example if you kill someone the police and the court will be interested.

The concept of purpose assumes intentionality

Unless that includes God or Nature as intent sources, I disagree that this is true.

The conceit of the phrase is precisely that things can have purpose unintended by their creators.

"Intended purpose" is not a tautology.

Unless that includes God or Nature as intent sources

Yes, it does, if you believe nature or God has intents (it works better with God than with nature, as most people who think that nature has intent also think that nature is a kind of god). People who don't think God exists, or that nature has intents, also don't think there are purposes in nature.

Intended purpose means that the way you use the tool now (the purpose it's used for now) is what it was built for (the purpose of its creator). For example, if you use your shoes to protect your feet, that's their intended purpose; if you use them to kill a fly, it's not (presumably).

All too often 'systems' in practice get excused by idealism. POSIWID works as a shorthand to cut through that idealism.

Scott seems to be coming at this from some critical angle and I'm not entirely sure what the point of it is. You can wordplay anything into absurdity and uselessness.

Sibling non-CWR post: https://www.themotte.org/post/1836/scott-come-on-obviously-the-purpose

Wrote a comment there, but another thought:

I think Scott is attempting a kind of meta-joke. TPOASIWID is a very useful lens to interpret systems through, but in widespread DR Twitter use, it's mostly used as a way to ascribe bad intent to systems. And because TPOASIWID, you can only judge TPOASIWID by the use of TPOASIWID on Twitter, and so TPOTPOASIWIDIWID and that's creating bad Twitter takes, which isn't valuable or useful. QED.

Cute, but it misses the mark. It's about finding useful ways to interact with a system, not a universal acid allowing you to weak man any argument or analysis.

Are we going to henceforth lose every intellectual to some genre of twitter brainrot? Place your bets here.

Ok, to be pedantic and trash Scott's argument: "POSIWID" is essentially shorthand for "The purpose of a system is to do some or all of the things that it does, while taking all of the other things it does as acceptable consequences."

And the contrapositive: "The purpose of a system is never something that it doesn't do."

Scott is being deliberately obtuse, purposefully ignoring the obvious meaning of the phrase.
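Put in rough set terms (my own gloss, not anything Beer or Scott wrote): let $E(S)$ be everything a system $S$ actually does, and $P(S)$ the set of its purposes. Then the long form asserts an inclusion, not an equality:

$$P(S) \subseteq E(S) \qquad \text{(POSIWID, long form)}$$

$$x \notin E(S) \implies x \notin P(S) \qquad \text{(contrapositive)}$$

The absurd readings Scott attacks amount to assuming $P(S) = E(S)$, which nobody using the slogan carefully is committed to.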

"The purpose of a system is never something that it doesn't do."

If my car fails to start one morning, that does not mean that my car ceases to be a car (if we define a car as a vehicle whose purpose it is to transport persons). Saying "POSIWID, hence this is not a car, perhaps it is a tiny house or outhouse" is not a good way to handle a broken car. "This thing was designed with a function in mind, but it does no longer serve its original purpose, so what purpose does it serve now, and is it worthwhile to fix it or get rid of it" seems a much more promising approach.

If a life-saving operation has a mortality of 1%, and it ends up killing little Timmy, it would seem disingenuous to say that the purpose of the operation was clearly to murder him, not to save his life, on the grounds that "[t]he purpose of a system is never something that it doesn't do".

But if it does break down, then it's something a car does, not something a car doesn't do.

A car doesn't fly. The purpose of a car is never to fly. A car does break down, and so the "doesn't do" doesn't apply.

Purpose of a system

A car is a system.

I always interpreted POSIWID as meaning that sustained normalized deviance is no deviance at all. If, say, a big tech OS project fails to ship year after year and company leadership fails to replace the project's management, then we have to conclude that either 1) the company-system is not under the control of agents with the ability to modify the world to achieve their goals or 2) the purpose of the OS project is not to produce an OS.

Otherwise, why wouldn't the OS project's management have been nuked from orbit after the fourth or fifth annual failure?

POSIWID doesn't mean, as Scott strawmans, that any side effect of a system is desirable, or that a system's failure to fully achieve its stated goal reveals that goal as a lie. Total nonsense. If a cancer ward were curing only half its patients and, despite having funding and expertise, refused to install a new radiation machine that would increase the cure rate to 2/3, and if hospital administration tolerated this state of affairs, then we would be forced to conclude that THAT SPECIFIC cancer ward's purpose was not to cure cancer.

POSIWID only works in negation.

POSIWID doesn't mean, as Scott strawmans, that any side effect of a system is desirable, or that a system's failure to fully achieve its stated goal reveals that goal as a lie. Total nonsense.

Are you saying that he cherry-picked the tweets he screenshotted, and the median usage of POSIWID is much more nuanced?

Yes. Or, more specifically, he demolished the retard version of POSIWID, then claimed victory over the nuanced version. That's wrong, and it's called strawmanning.

Scott is a utilitarian. My mental model of him says that if you have a charity to rescue cats from trees, but it only rescues one cat per year despite having an annual $10M budget, then it is fair to conclude that its actual main purpose might be something other than rescuing cats. This is a standard critique of inefficient charities from an EA perspective.

Or take research towards fusion power. It has been going on for sixty years, and while we are making progress, we do not have fusion power plants yet. Now, you can take three stances.

  • You say POSIWID, so these people are not working on solving the energy crisis, they are simply publishing papers and earning tenure, not dissimilar to $useless_academic_field, and we should not waste taxpayer money on them.
  • You move the goalpost to some instrumental subgoal which was actually reached, like "we want to increase the plasma confinement time to $whatever", and claim that is the purpose of fusion research. However, this is missing the bigger picture: if all the viable fusion processes were endothermic, and we thus could never build a fusion power plant, and plasma was only studied for its own sake, then society would give orders of magnitude less money to the plasma nerds.
  • You accept that in this case, the purpose of a system is indeed what it has so far failed to deliver on.

Of course, if you take the last stance, then the next problem is the alchemists who searched for the philosopher's stone. In hindsight, we know that this was a fool's errand, and only their lack of epistemic purity led them to believe such a thing could exist at all. Their whole paradigm was -- not to put too fine a point on it -- dogshit, and if they had read the Sequences, they would have known. (Yes, I know about the woo aspect of alchemy -- but reaching enlightenment seems very much like a consolation prize if you fail to gain immortality et cetera. I am sure they did not emphasize the allegoric aspect to their funding agencies.)

On the other hand, hindsight is 20/20, and the ideas that form the basis of the scientific method would not be developed for centuries, so they were working with the mental tools which they got, and sometimes walking in a random direction is better than standing still until you exactly know which way to go.

Per POSIWID, the purpose of alchemy was to accidentally discover chemical reactions while denying that purpose.

We already have a perfectly good word for the relationship between alchemy and their accidental discoveries. That word is outcome.

Even more bluntly, consider a dog licking a TV screen which shows bacon being fried. The outcome is the dog licking an LCD. The purpose of the action is -- presumably -- that the dog wants to taste the bacon. Describing the system "dog" as a system which tries to taste bacon, but sometimes fails and tastes plastic instead, gives us a much better model of reality than just saying "POSIWID, thus this dog likes to lick plastic".

And it's hard to imagine anyone sincerely believing the purpose of the dog-TV system is plastic licking. Maybe I'm sanewashing it, but ISTM there's a logical and useful way to understand POSIWID:

  • Let there be a system S, an agent A with control authority over the system, and some outcome X that A claims S is to produce

  • Observe that S falls short of ostensible goal X

  • Let B be an action that A can take to make S produce more of outcome X at positive ROI

  • Observe that A does not execute action B

Given the above, we must conclude, based on A's failure to do B, that A's purpose for S is not solely X. Maybe B is not actually positive-ROI because we lack an understanding of its true costs. Maybe A is retarded and doesn't understand that B is available to him. But if we assume B is positive-ROI and that A is a competent actor, what alternative do we have to concluding that A is optimizing S for some unstated goal Y, not only X?
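To make the inference rule concrete, here is a minimal sketch in Python (the class, function, and numbers are all invented for illustration; nothing here comes from Scott's post or Beer):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    positive_roi: bool  # would taking it move S toward stated goal X at acceptable cost?
    executed: bool      # did agent A actually take it?

def stated_purpose_credible(available_actions: list[Action]) -> bool:
    """POSIWID in negation: if A knowingly leaves a positive-ROI action toward X
    untaken, we stop believing X is A's sole purpose for S."""
    return all(a.executed for a in available_actions if a.positive_roi)

# The cancer-ward example: funding and expertise exist, but the machine that
# would lift the cure rate from 1/2 to 2/3 never gets installed.
ward = [Action("install new radiation machine", positive_roi=True, executed=False)]
print(stated_purpose_credible(ward))  # False: curing cancer is not the ward's sole purpose
```

The two escape hatches in the paragraph above map directly onto the two assumptions baked into the function: that `positive_roi` is assessed correctly, and that A is competent enough to see the action is available.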

One might argue he cherrypicked for stupid usages the moment he chose to get his example from tweets.

This is a bad article, and Scott should feel bad.

Hard disagree.

I think that a grossly simplified way to look at a system might be that it maximizes a particular utility function. Naturally, different people have different utility functions, and so might feel differently about a system and the trade-offs it makes. Even so, it is rare for different people to assign opposite signs to a terminal goal; more often they simply differ in relative weight. If someone claims that the terminal goal of the NRA is to enable school shootings, or that the terminal goal of gun control legislation is to render Americans defenseless against tyranny, they are missing that point. The truth is simply that tyranny resistance and avoiding school shootings are both worthy goals, and different people will have different ideas about both their relative importance and how gun control might affect them.
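To make the simplification explicit (my notation, not anything from Scott's article): model the system as maximizing

$$U(s) = \sum_i w_i \, g_i(s)$$

where the $g_i$ are terminal goals (tyranny resistance, fewer school shootings) and the $w_i \geq 0$ their weights. Most disagreement is then about the relative $w_i$, and occasionally about the empirical sign of a policy's effect on some $g_i$, almost never about whether a $g_i$ belongs in the sum at all.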

Of course, in reality, systems are made out of individual actors who have their individual utility functions (as far as they are rational), and a key instrumental (if we are being charitable) goal of almost any system is to perpetuate its own existence.

But even with a framing as uncharitable as this, it's worth noting that all systems have costs, and that a description of a system that ignores the costs, and how those costs are managed, is a worse description than one that centers them.

I think that if I have a page of a book, and either describe it as "a mostly white page" or "a page darkened by ink", both of these descriptions are very inadequate, and it is not very worthwhile to quibble over which one is worse.

This being said, if you have to communicate to a space alien what a dentist does, what do you think is the better description?

  • "A dentist causes people pain and takes money from them"
  • "A dentist fixes tooth decay"

Both of these statements are true and describe things a dentist does, but I would argue that the latter is the slightly less terrible description. An actually adequate description would acknowledge that people generally go to the dentist to prevent or fix tooth decay (the latter of which often hurts somewhat), but also that dentistry is a high-income profession (thus attracting people interested in making money), and that most dentists operate as a business, so there exists a principal-agent problem, e.g. in judging the cost-benefit ratio of secondary services like professional tooth cleaning.

Again with the absurdity through inappropriate narrowing of scope.

How can you tell when your scope is appropriately widened? Okay, the purpose of the bus system isn't to emit CO2. Is the purpose to do that and drive vehicles on NYC streets? Is the purpose to do that and pay out bennies to bus drivers? Is it to do that and move paying customers around? Is it to do that and also house a few homeless people? Is it to do that and reduce traffic overall?

And we can't look at the bus system in isolation, right? It's part of the city government, which itself is embedded in layers of government and society. Why is it not inappropriate to even attempt to analyze the purpose of the NYC bus system in isolation from the entire world?

At least I agree that we can limit our scope to planet Earth, since there doesn't seem to be any agency being exercised by anyone outside of it. The question is where to set the scope in between buses emitting CO2 and everything that goes on on Earth.