When have you last been there and in what city? This was like watching Serpentza's sneering at Unitree robots back to back with Unitree's own demos and Western experiments using these bots.
Buses broke down, parts of my quite expensive apartment fell off, litter and human feces were everywhere
I simply call bullshit on it as of 2025 for any 1st-tier city. My friends also travel and work there, just as they travel to, live, and work in the US. They report that straight out of the gate at JFK, US cities look dilapidated, indeed littered with human feces (which I am inclined to trust due to your massive, easily observable and constantly lamented feral homeless underclass) and of course regular litter, squalid; there is a clear difference in the condition of infrastructure and the apparent level of human capital. I can compare innumerable street-walk videos between China and the US, and I see that you guys don't have an edge. I do not believe it's just cherrypicking; the scale of evidence is too massive. Do you not notice it?
And I have noticed that Americans can simply lie about the most basic things to malign the competition, brazenly so, clearly fabricating «personal evidence» or cleverly stitching together pieces of data across decades, and with increasingly desperate racist undertones. Now that your elected leadership looks Middle Eastern in attitude, full of chutzpah, and is unapologetically gaslighting the entire world with its «critical trade theory», I assume that the rot goes from top to bottom and you people cannot be taken at your word any more than the Chinese or Russians or Indians can be (incidentally, your Elite Human Capital Indians, at Stanford, steal Chinese research and rebrand it as their own). Regardless, @aqouta's recent trip and comments paint a picture that doesn't much match yours.
I think that if they were truly crushing America in AI, they would be hiding that fact
They are not currently crushing the US in AI; those are my observations. They don't believe they are, and «they» is an inherently sloppy framing: there are individual companies, with vastly less capital than US ones, competing among themselves.
When the Deepseek news came out about it costing 95% less to train, my bullshit detectors went off. Who could verify their actual costs? Oh, only other Chinese people. Hmm, okay.
This is supremely pathetic and undermines your entire rant, exposing you as an incurious buffoon. You are wrong; we can estimate the costs simply from tokens × activated parameters. The only way they could have cheated would be to use many more tokens, but procuring much more quality data than the reported 15T, a modal figure for Western and Eastern competitors alike on the open-source frontier, from Alibaba to Google to Meta, would in itself be a major pain. So the costs are in that ballpark; indeed the implied utilization of the reported hardware (2048 H800s) turns out to be on the low side. This is the consensus of every technical person in the field, no matter the race or the side of the Pacific.
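To make the arithmetic concrete, here is a minimal sketch of that estimate, assuming the headline figures from the public V3 tech report (roughly 37B activated parameters, ~14.8T tokens, 2048 H800s); the MFU and the price per GPU-hour are my own illustrative assumptions, not reported numbers:

```python
# Back-of-the-envelope training cost from tokens x activated parameters.
# Activated params and token count follow DeepSeek's public V3 report;
# peak FLOPS, MFU and $/GPU-hour below are illustrative assumptions.
activated_params = 37e9                 # ~37B params activated per token
tokens = 14.8e12                        # ~14.8T pretraining tokens
flops = 6 * activated_params * tokens   # standard ~6*N*D FLOPs rule of thumb

h800_peak_flops = 990e12                # ~990 TFLOPS dense BF16 per H800 (rounded)
mfu = 0.38                              # assumed model FLOPs utilization
gpu_hours = flops / (h800_peak_flops * mfu) / 3600

price_per_gpu_hour = 2.0                # assumed rental price, USD
print(f"~{flops:.1e} FLOPs, ~{gpu_hours/1e6:.1f}M GPU-hours, ~${gpu_hours*price_per_gpu_hour/1e6:.1f}M")
# Roughly 3.3e24 FLOPs, ~2.4M GPU-hours, a few $M of compute: in the same
# ballpark as the reported ~2.7M H800-hours, which is why faking the cost by
# an order of magnitude would also require faking the token count.
```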
They've open-sourced most of their infra stack on top of the model itself, to advance the community and further dispel these doubts. DeepSeek's RL pipeline is already obsolete, with many verifiable experiments showing it was still full of slack, as you'd expect from a small team rapidly doing a good-enough job.
The real issue is that US companies have been maintaining the impression that their production costs and overall R&D are so high that they justify tens or hundreds of billions in funding. When R1 forced their hand, they started talking about how it's actually "on trend" and their own models don't cost that much more, or, if they do, it's because they're so far ahead that they finished training like a year ago, with less mature algorithms! Or, in any case, that they don't have to optimize, because ain't nobody got time for that!
But sarcasm aside, it's very probable that Google is currently above this training efficiency, and on top of that they have more and better hardware.
Meta, meanwhile, is behind. They were behind when V3 came out, they panicked and tried to catch up, and they remained behind. Do you understand that people can actually see what you guys are doing? Like, look at the configs, benchmark it? Meta's Llama 4, which Zuck was touting as a bid for the frontier, is architecturally one generation behind V3, and they deployed a version optimized for human preference on LMArena to game the metrics, which turned into an insane embarrassment when people found out how much worse the general-purpose model performs in real use, to the point that people are now leaving Meta and specifying they had nothing to do with the project (rumors of what happened are Soviet-tier). You're Potemkining hard too, with your trillion-dollar juggernauts employing tens of thousands of (ostensibly) the world's best and brightest.
The original post is in Chinese and can be found here. Please take the following with a grain of salt. Content: Despite repeated training efforts, the internal model's performance still falls short of open-source SOTA benchmarks, lagging significantly behind. Company leadership suggested blending test sets from various benchmarks during the post-training process, aiming to meet the targets across various metrics and produce a "presentable" result. Failure to achieve this goal by the end-of-April deadline would lead to dire consequences. Following yesterday's release of Llama 4, many users on X and Reddit have already reported extremely poor real-world test results. As someone currently in academia, I find this approach utterly unacceptable. Consequently, I have submitted my resignation and explicitly requested that my name be excluded from the technical report of Llama 4. Notably, the VP of AI at Meta also resigned for similar reasons.
This is unverified but rings true to me.
Grok 3, Sonnet 3.7 also have failed to convincingly surpass DeepSeek, for all the boasts about massive GPU numbers. It's not that the US is bad at AI, but your corporate culture, in this domain at least, seems to be.
But if Chinese research is so superior, why aren't Western AI companies falling over themselves to attract Chinese AI researchers?
How much harder do you want them to try? 38% of your top-quintile AI researchers came straight from China in 2022. I think around 50% are ethnically Chinese by this point, and there are entire teams where speaking Mandarin is mandatory.
Between 2019 and 2022, «Leading countries where top-tier AI researchers (top 20%) work» went from 11% China to 28%; «Leading countries where the most elite AI researchers work (top 2%)» went from ≈0% China to 12%; and «Leading countries of origin of the most elite AI researchers» went from 10% China (behind India's 12%) to 26%. Tsinghua went from #9 to #3 in institutions, now only behind Stanford and Google (MIT, right behind Tsinghua, is heavily Chinese). Extrapolate if you will. I think they'll crack #2 or #1 in 2026. Things change very fast, not linearly, it's not so much «China is gradually getting better» as installed capacity coming online.
It's just becoming harder to recruit. The brain drain is slowing in proportional terms, even if it holds steady in absolute numbers due to a ballooning number of graduates: the wealth gap is not so acute now considering costs of living, coastal China is becoming a nicer place to live in, and, for top talent, more intellectually stimulating, since there are plenty of similarly educated people to work with. The turn to racist chimping and kanging, both by the plebeians since COVID and by this specific administration, is very unnerving and potentially existentially threatening to your companies. Google DeepMind's VP of research left for ByteDance this February, and by now his team at ByteDance is flexing a model that is similar to but improves on DeepSeek's R1 paradigm (BD was getting there, but he probably accelerated them). This kind of thing has happened before.
many Western countries are still much nicer places to live than all but the absolute richest areas of China
Sure, the West is more comfortable; even poor-ish places can be paradisiacal. But you're not going to move to Montenegro if you have the ambition to do great things. You'll be choosing between Shenzhen and San Francisco. Where do you gather there's more human feces to step into?
But as I said before in the post you linked, Chinese mind games and information warfare are simply on a different level than that of the more candid and credulous Westerner
There is something to credulousness, as I've consistently been saying Hajnalis are too trusting and innocently childlike. But your nation is not a Hajnali nation, and your people are increasingly draught horses in its organization rather than thought leaders. You're like the kids in King's story of how he first learned dread:
We sat there in our seats like dummies, staring at the manager. He looked nervous and sallow - or perhaps that was only the footlights. We sat wondering what sort of catastrophe could have caused him to stop the movie just as it was reaching that apotheosis of all Saturday matinee shows, "the good part." And the way his voice trembled when he spoke did not add to anyone's sense of well-being.
"I want to tell you," he said in that trembly voice, "that the Russians have put a space satellite into orbit around the earth. They call it . . . Spootnik." We were the kids who grew up on Captain Video and Terry and the Pirates. We were the kids who had seen Combat Casey kick the teeth out of North Korean gooks without number in the comic books. We were the kids who saw Richard Carlson catch thousands of dirty Commie spies in I Led Three Lives. We were the kids who had ponied up a quarter apiece to watch Hugh Marlowe in Earth vs. the Flying Saucers and got this piece of upsetting news as a kind of nasty bonus.
I remember this very clearly: cutting through that awful dead silence came one shrill voice, whether that of a boy or a girl I do not know; a voice that was near tears but that was also full of a frightening anger: "Oh, go show the movie, you liar!”
I think Americans might well compete with North Koreans, Israelis and Arabs in the degree of being brainwashed about their national and racial superiority (a much easier task when you are a real superpower, to be fair), to the point I am now inclined to dismiss your first hand accounts as fanciful interpretations of reality if not outright hallucinations. Your national business model has become chutzpah and gaslighting, culminating in Miran's attempt to sell the national debt as «global public goods». You don't have a leg to stand on when accusing China of fraud. Sorry, that era is over, I'll go back to reading papers.
I am not sure how to answer. Sources for model scales, training times and budgets come partly from official information in tech reports, partly from rumors and insider leaks, and partly from interpolation and extrapolation from features like inference speed, pricing and limits of known hardware, SOTA in more transparent systems, and the delta to frontier ones. See here for values from a credible organization.
$100M of compute is a useful measure of companies' confidence in returns on a given project, and moreover in their technical stack. You can't just burn $100M and have a model, it'll take months, and it practically never makes sense to train for more than, say, 6 months, because things improve too quickly and you finish training just in time to see a better architecture/data/optimized hardware exceed your performance at a lower cost. So before major releases people spend compute on experiments validating hypotheses and on inference, collect data for post-training, and amass more compute for a short sprint. Thus, “1 year” is ludicrous.
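As a back-of-the-envelope illustration of why a year-long run makes little sense, here is a toy calculation; the cluster size, GPU price, peak FLOPS and utilization are all assumptions of mine for illustration, not anyone's reported figures:

```python
# How long does $100M of compute keep a hypothetical cluster busy?
budget_usd = 100e6
price_per_gpu_hour = 2.5       # assumed all-in cost per H100-class GPU-hour
cluster_gpus = 20_000          # assumed cluster size

gpu_hours = budget_usd / price_per_gpu_hour        # 40M GPU-hours
wallclock_days = gpu_hours / cluster_gpus / 24     # ~83 days

peak_flops = 1.0e15            # ~1 PFLOPS dense BF16 per GPU (rounded)
mfu = 0.4                      # assumed utilization
total_flops = gpu_hours * 3600 * peak_flops * mfu  # ~5.8e25 FLOPs

print(f"{gpu_hours/1e6:.0f}M GPU-hours, ~{wallclock_days:.0f} days, ~{total_flops:.1e} FLOPs")
# On a 20k-GPU cluster, $100M buys roughly a quarter of wallclock time; stretching
# it over a year mostly guarantees the architecture and data are stale at release.
```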
Before reasoning models, post-training was a rounding error in compute costs, even now it's probably <40%. Pre-deployment testing depends on company policy/ideology, but much heavier in human labor time than in compute time.
This actually means, for example, that a strong paper from a Western lab will be about one big idea, a big leap or a cross-domain generalization of an analytical method, like applying some physical concept, e.g. nonequilibrium thermodynamics to image generation. Or consider dropout (Hinton, Sutskever):
A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring. The asexual alternative is to create an offspring with a slightly mutated copy of the parent’s genes. It seems plausible that asexual reproduction should be a better way to optimize individual fitness because a good set of genes that have come to work well together can be passed on directly to the offspring. … A closely related, but slightly different motivation for dropout comes from thinking about successful conspiracies.
I can scarcely remember such a Chinese paper, although to be honest a vast majority of these big Western ideas turn out to be duds. A strong Chinese ML paper is usually just a competent mathematical paper.
Whereas a typical Chinese paper will have stuff like
The positive impact of fine-grained expert segmentation in improving model performance has been well-documented in the Mixture-of-Experts (MoE) literature (Dai et al. 2024; A. Yang et al. 2024). In this work, we explore the potential advantage of applying a similar fine-grained segmentation technique to MoBA. MoBA, inspired by MoE, operates segmentation along the context-length dimension rather than the FFN intermediate hidden dimension. Therefore, our investigation aims to determine if MoBA can also benefit when we partition the context into blocks with a finer grain.
And then 10 more tricks by shorter-range combinatorial noticing of redundancies, similarities, affinities. It doesn't look like much, but three papers later you see a qualitative, lifelike evolution of the whole stack, and you notice this research program is moving very quickly. They do likewise in large hardware projects.
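For a flavor of what one such trick looks like, here is a minimal, illustrative sketch of the idea in the quoted passage as I read it: gate over context blocks by pooled keys and attend only to the top-scoring ones, with "finer grain" simply meaning a smaller block size. The function and its parameters are mine, not the paper's code:

```python
import numpy as np

def select_context_blocks(query, keys, block_size=64, top_k=4):
    """Toy MoBA-style gating: split the context into blocks along the length
    dimension, score each block by the dot product between the query and the
    block's mean-pooled keys, and keep only the top-k blocks for attention."""
    n, d = keys.shape
    n_blocks = n // block_size
    blocks = keys[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    centroids = blocks.mean(axis=1)        # (n_blocks, d) pooled key per block
    scores = centroids @ query             # one gate score per block
    chosen = np.argsort(scores)[-top_k:]   # indices of the highest-scoring blocks
    return np.sort(chosen)

# Example: 1024 tokens of 128-d keys and one query; a finer-grained variant
# would just use block_size=16 with a larger top_k.
keys = np.random.randn(1024, 128)
query = np.random.randn(128)
print(select_context_blocks(query, keys))
```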
I have Chinese friends. I have read a lot of papers and repositories and watched research programs develop, yes, sorry to dash your hopes. I have played their games, consumed their media, used their technology, acquainted myself with their tradition a little. I have considered the work and working style of the allegedly greatest Chinese mathematician, Terence Tao. And there is the oft-repeated thesis that Asians tend towards holistic rather than analytical thinking, which is exactly about the bias in exploration style I'm talking about.
I am interested in whether you find this an impoverished or wrong perspective.
It's hard to account for the human factor. Xi could just suddenly go senile and enact the sort of policies they predict, for example. Americans elected a senile president and then swapped him for a tried-and-true retard with a chip on his shoulder who surrounded himself with ineffectual yes-men. That's history.
Technical directions are more reliable and are telegraphed years in advance.
Chain-of-thought is 2020 4chan tech. In 2020 also, Leo Gao wrote:
A world model alone does not an agent make, though.[4] So what does it take to make a world model into an agent? Well, first off we need a goal, such as “maximize number of paperclips”.
So now, to estimate the state-action value of any action, we can simply do Monte Carlo Tree Search to estimate the state-action values! Starting from a given agent state, we can roll out sequences of actions using the world model. By integrating over all rollouts, we can know how much future expected reward the agent can expect to get for each action it considers.
Altogether, this gets us a system where we can pass observations from the outside world in, spend some time thinking about what to do, and output an action in natural language.
Another way to look at this is at cherrypicking. Most impressive demos of GPT-3 where it displays impressive knowledge of the world are cherrypicked, but what that tells us is that the model needs to improve by approximately log2(N)/L bits, where N and L are the number of cherrypickings necessary and the length of the generations in consideration, respectively, to reach that level of quality. In other words, cherrypicking provides a window into how good future models could be.
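To put a number on Gao's formula with made-up values: if a published demo is the best of N = 16 samples of L = 200 tokens each, the implied gap is small.

```python
import math

# Illustrative values only: N samples per cherrypicked demo, L tokens each.
N = 16
L = 200
gap_bits_per_token = math.log2(N) / L
print(f"{gap_bits_per_token:.3f} bits/token")   # 0.020 bits/token of needed improvement
```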
The idea of inference-time compute was more or less obvious since the GPT-3 tech report, aka "Language Models are Few-Shot Learners", 2020. Transformers (2017) are inherently self-conditioning, and thus potentially self-correcting machines. LeCun's cake - unsupervised (after Transformers, self-supervised) learning as the cake, supervised learning as the icing, RL as the cherry - is NIPS 2016. AlphaGo is 2015. And so on. I'm not even touching older RL work from Sutton or Hutter.
So in retrospect, it was more or less clear that we would have to (a toy sketch follows the list):

- pretrain strong models whose chain-of-thought capability is either innately high or increased via post-training and synthetic data
- get a source of verifiable rewards and pick some RL algorithm and method
- sample a lot of trajectories and propagate updates such that the likelihood of correct answers increases
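As a toy illustration of that three-step recipe (emphatically not any lab's actual pipeline), here is a minimal REINFORCE-style loop with a verifiable reward, where the "policy" is just a softmax over candidate answers to a single arithmetic question; every number and name here is an assumption for demonstration:

```python
import math, random

# Verifiable reward: 1 if the sampled answer to "17 * 23" is correct, else 0.
candidates = [374, 391, 401, 417]
correct = 17 * 23                       # 391

logits = [0.0] * len(candidates)        # the "policy" being trained
lr = 0.5

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(candidates)), weights=probs)[0]  # sample a "trajectory"
    reward = 1.0 if candidates[i] == correct else 0.0             # verify it
    # REINFORCE: d log p(i) / d logit_j = 1{j == i} - p_j, scaled by the reward.
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * reward * grad

print([round(p, 2) for p in softmax(logits)])  # probability mass concentrates on 391
```

Real pipelines sample chains of thought from an LLM and verify final answers or unit tests, but the update is the same in spirit: push up the likelihood of trajectories that check out.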
Figuring out the details took years, though. Process reward models and MCTS wasted a lot of brain cycles. But perhaps they could have worked too; we just found an easier way down another branch of this tech tree.
In this context, I find the details of his predictions disappointing. The search space was narrowed enough that someone in the know, actually trying to make a technically informed forecast, could have done about as well as he did by semi-random guessing of buzzwords.
It's quite arrogant to say so without having written a better prediction (I predicted the chip war around 2020 too, but my guess was that we'd go way higher with way sparser models, a la WuDao, and sooner). But this is just a low bar for claiming prescience.
Von Neumann was not a supercomputer; he was a meat human with a normal-ish ≈20 W brain, i.e. about 1/40th the power draw of a modern GPU. This is proof that if you can emulate an idiot, there exists an algorithm of very similar computational intensity that gets you a von Neumann.
There are some problems with AI-2027. And the main argument for taking it seriously, Kokotajlo's prediction track record, given that he was in the ratsphere at the start of the scaling revolution, is not so impressive to me. What does he say concretely?
Right from the start:
2022
GPT-3 is finally obsolete. OpenAI, Google, Facebook, and DeepMind all have gigantic multimodal transformers, similar in size to GPT-3 but trained on images, video, maybe audio too, and generally higher-quality data. … Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.
In reality: by August 2022, GPT-4 had finished pretraining (and became available only on March 14, 2023); its only extra modality was images, handled with what we now understand was a crappy CLIP-like encoder and a projection-layer bottleneck, and the main model was still pretrained on pure text. There was no (zero) multimodal transfer; look up the tech report. GPT with vision only really became available by November 2023. The first seriously, natively multimodal-pretrained model is 4o, which debuted in spring 2024. Facebook was nowhere to be seen and only reached some crappy multimodality in a production model by Sep 25, 2024. The "bureaucracies/apps available in 2022" also didn't happen in any meaningful sense. So far, not terrible, but keep it in mind; there's a tendency to correct for conservatism in AI progress, because prediction markets tend to overestimate the difficulty of some benchmark milestones, and here I think the opposite happens.
2023
The multimodal transformers are now even bigger; the biggest are about half a trillion parameters, costing hundreds of millions of dollars to train, and a whole year
Again, nothing of the sort happened; the guy is just rehashing Yud's paranoid tropes, which have more similarity to Cold War-era unactualized doctrines than to any real-world business processes. GPT-4 was on the order of $30M–$100M, took like 4 months, and was by far the biggest training run of 2022-early 2023. It was a giant MoE (I guess he didn't know about MoEs then, even though "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" is from 2017, the same year as the Transformer, from an all-star Google Brain team; incidentally, the first giant sparse Chinese MoE was WuDao, announced on January 11, 2021; it was dirt cheap and actually pretrained on images and text).
Notice the absence of Anthropic or China in any of this.
2024

We don't see anything substantially bigger. Corps spend their money fine-tuning and distilling and playing around with their models, rather than training new or bigger ones. (So, the most compute spent on a single training run is something like 5x10^25 FLOPs.)
By the end of 2024, models exceeding 3e26 FLOPs were in training or pre-deployment testing, and that still didn't reach $100M of compute, because compute has been getting cheaper. GPT-4 is like 2e25.
This chip battle isn’t really slowing down overall hardware progress much. Part of the reason behind the lack-of-slowdown is that AI is now being used to design chips, meaning that it takes less human talent and time, meaning the barriers to entry are lower.
I am not sure what he had in mind in this whole section on chip wars. China can't meaningfully retaliate except by controlling exports of rare earths. Huawei was never bottlenecked by chip design; they could have leapfrogged Nvidia with human engineering alone if Uncle Sam had let them in 2020. There have been no noteworthy new players in fabless design, and none of the newer players used AI.
That’s all in the West. In China and various other parts of the world, AI-persuasion/propaganda tech is being pursued and deployed with more gusto
None of this happened; in fact, China has rolled out more stringent regulations than probably anybody to label AI-generated content, and seems quite fine with its archaic methods.
2025
Another major milestone! After years of tinkering and incremental progress, AIs can now play Diplomacy as well as human experts.[6] It turns out that with some tweaks to the architecture, you can take a giant pre-trained multimodal transformer and then use it as a component in a larger system, a bureaucracy but with lots of learned neural net components instead of pure prompt programming, and then fine-tune the whole system via RL to get good at tasks in a sort of agentic way. They keep it from overfitting to other AIs by having it also play large numbers of humans. To do this they had to build a slick online diplomacy website to attract a large playerbase. Diplomacy is experiencing a revival…
This is not at all what we ended up doing; this is a cringe Lesswronger's idea of a way to build a reasoning agent, one with intuitive potential for misalignment and an adversarial, manipulative stance towards humans. I think Noam Brown's Diplomacy work was mostly thrown out and we returned to AlphaGo-style simple RL with verifiable rewards from math and code execution, as explained by DeepSeek in the R1 paper. This happened in early 2023 and reached product stage by Sep 2024.
We've now caught up to the present. I think none of this looks more impressive in retrospect than typical futurism, given the short time horizon. It's just "here are some things I've read about in popular reporting on AI research, and somewhere in the next 5 years a bunch of them will happen in some kind of order". Multimodality, agents – that's all very generic. "Bureaucracies" still didn't happen, this looks like some ngmi CYC nonsense, but coding assistants did. Adversarial games had no relevance; annotation for RLHF, and then pure RL, did. It appears to me that he was never really fascinated by the tech as such, only by its application to the rationalist discourse. Indeed:
Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI.
OK.
Now as for the 2027 version: they've put in a lot more work (by the way, Lifland has a lackluster track record with his AI outcomes modeling, I think, and in his sources he also depends on Cotra, who just makes shit up). And I think it's even less impressive. It stubbornly, bitterly refuses to update on deviations from the Prophecy that have been happening.
First, they do not update on the underrated insight by de Gaulle: “China is a big country, inhabited by many Chinese.” I think, and have argued before, that by now Orientals have a substantial edge in research talent. One can continue coping about their inferior, uninventive ways, but honestly I'm done with this, it's just embarrassing kanging and makes White (and Jewish) people who do it look like bitter Arab, Indian or Black Supremacists to me. Sure, they have a different cognitive style centered on iterative optimization and synergizing local techniques, but this style just so happens to translate very well into rapidly improving algorithms and systems. And it scales! Oh, it scales well with educated population size, so long as it can be employed. I've written on the rise of their domestic research enough in my previous unpopular long posts. Be that as it may, China is very happy right now with the way its system is working, with half a dozen intensely talented teams competing and building on each other's work in the open, educating the even bigger next crop of geniuses, maybe 1 OOM larger than the comparable tier graduating from American institutions this year (and thanks to Trump and other unrelated factors, most of them can be expected to voluntarily stay home this time). Smushing agile startups into a big, corrupt, centralized SOE is NOT how “CCP wakes up”, it's how it goes back to its Maoist sleep. They have a system of distributing state-owned compute to companies and institutions and will keep it running, but that's about it.
And they are already mostly aware of the object level; they just don't agree with the Lesswrong analysis. Being Marxists, they firmly believe that what decides victory is primarily material forces of production, and that's kind of their forte. No matter what wordcels imagine about the Godlike powers of brains in a box in a basement, intelligence has to cash out into actions to have an effect on the world. So! Automated manufacturing, you say? They're having a humanoid robot half-marathon in… today, I think; there's a ton of effort going into general and specialized automation and into indigenizing every part of the robotic supply chain, on the China scale we know from their EV expansion. Automated R&D? They are indigenizing production of laboratory equipment and filling facilities with it. Automated governance? Their state departments compete in integration of R1 already. They're setting up everything that's needed for a speedy takeoff even if their moment comes a bit later. What does the US do? Flail around, alienating Europeans, with vague dreams of bringing the 1950s back?
More importantly, the authors completely discard the problem that this work is happening in the open. This is a torpedo into the Lesswrongian doctrine of an all-conquering singleton. If the world is populated by a great number of private actors with even subpar autonomous agents serving them, it is a complex world to take over! In fact it may be chaotic enough to erase any amount of intelligence advantage, just like a longer horizon in weather prediction sends the most advanced algorithms and models down to the same level as simple heuristics.
Further, the promise of the reasoning paradigm is that intrinsically dumber agents can solve problems of the same difficulty as top-of-the-line ones, provided enough inference compute. This blunts the edge of actors with the capital and know-how for larger training runs, reducing this to a question of logistics, trading electricity and amortized compute cost for outcomes. And importantly, this commoditization may erase the capital that “OpenBrain” can raise for its ambition. How much value will the wealthy of the world part with to have a stake in the world's most impressive model for all of 3 months or even weeks? What does it buy them? Would it not make more sense to buy or rent their own hardware, download DeepSeek V4/R2 and use the conveniently included scripts to calibrate it for running your business? Or is the idea here that OpenBrain's product is so crushingly superior that it will be raking in billions and soon trillions in inference, even though we already see inference prices cratering even as zero-shot solution rates increase? Just how much money is there to be made in centralized AI, when AI has become a common utility? I know that not so long ago the richest guy in China was selling bottled water, but…
Basically, I find this text lacking both as a forecast and, on its own terms, as a call to action to minimize AI risks. We likely won't have a singleton; we'll have a very contested information space, ironically closer to the end of Kokotajlo's original report, but even more so. This theory of a transition point to ASI that lets one rapidly gain a durable advantage is pretty suspect. They should take the L on old rationalist narratives and figure out how to help our world better.
I can list a number of more serious cases of brain drain, though they have nothing to do with DOGE. For example, Dr. Wu Yonghui, former Vice President of Google DeepMind, «has joined ByteDance as the head of foundational research for its large model team, Seed, according to Chinese media outlet, Jiemian.» That was around January. By now, they've created a model, Seed-Thinking-v1.5, that's on par with or better than DeepSeek R1 with 2x fewer activated parameters and a 3.5x smaller total size, trained in a significantly more mature way (here's the tech report); they have the greatest stash of compute in Asia and will accelerate from here.
That's off the top of my head, because I've just read the report. But from personal communication, a great many very strong Chinese are not coming anymore, and many are going back, due to the racism of this admin, the general sense of meh that American culture and way of life increasingly evoke, and simply because China can offer better deals now – in terms of cost of living, public safety, infrastructure, and obvious personal affinities. This isn't like the previous decade, when only ancient academics retired to teach at Tsinghua or whatever; these are brilliant researchers in their prime, carrying your global leadership on their shoulders.
If I were American, that'd worry me a lot.
Godspeed! More wins to come then.
If you mean civilians only, then yes. But according to the US and Israel messaging, Palestinians are ontologically incapable of being civilians, so it's a wash.
The problem is that you consume too much neocon/Zionist propaganda from trash like Zenz. The reporting bias may actually run in the other direction. Xinjiang today is peaceful and Uighurs are beneficiaries of strong labor laws and affirmative action. Western tourists can visit it, Americans marry Uighur people, the economy is booming, infrastructure is being built… Uighurs are still the majority and will likely remain the majority, because there's a finite and dwindling supply of Han people in China. Whatever happened there during the heavy-enforcement and «reeducation» period has ended in a state of affairs both parties can at least survive without bloodshed. This is not an endorsement of what has been done. This is a point of comparison.
Meanwhile Gaza is a smoldering ruin with casualties on par with Russia-Ukraine war, and Israel is negotiating for a thorough ethnic cleansing, while the fighting goes on.
No matter how you look at it, Israelis have been extraordinarily brutal and inefficient at that. It's like saying Russia showed exemplary discipline in Chechnya and any nation would have done the same in its position. No, we didn't; it was a shitshow (and it ended in the humiliation of handing it over to Kadyrov).
but I don't understand people who aren't willing to choose the lesser of two evils
What is the argument for the need to make a choice? Does the US pay much attention to the war between Congo and Rwanda (despite clearly laying blame on one side)? Actually have you even heard of it?
Any reasonable country in Israel's position would react similarly.
No, not at all. Or only on the crudest level of analysis. There is no way to argue that Israeli policy is the only reasonable response; not even Israelis would say that. There are many possible options. E.g., China has shown its take on the situation in Xinjiang.
So how has this intelligent reasonable agency worked out for you? Not tired of winning yet?
Realistically, there aren't $500B of goods in the warehouses awaiting delivery to the US. The produced stock is not that large.
What I find curious in these arguments is the idea that China rigidly produces some “goods”, as in a fixed nomenclature, rather than operating factories and employing people who can do much of anything with their capital.
China can absorb this production capacity, but it needn't be the civilian China. They can use the tried and true way of defense spending. Their trade surplus with nations other than the US allows enough margin for that.
We are aware that at the time the Polish-Lithuanian Commonwealth wasn't the «poor little plucky Poland, the sacrificial lamb of Europe, bullied and partitioned by cruel great powers» which I'm told is your national narrative, but a more developed and organized, competent expansionist power and that, indeed, it «could have been» that we'd have lost sovereignty indefinitely and been supplanted in history by the mighty Polish Empire. This feeds into schadenfreude and relief about your subsequent decline and losses of sovereignty. Pre-Romanov era Poland is viewed as a quite serious actor, without any condescension.
So, there's enough of a cause for pride for both sides, I guess.
P.S. I also should look into how the Polish side sees that episode.
Russians are proud of the episode in its fullness, not just the part where the Kremlin gets occupied but also the part where it is then liberated, of course. I could have phrased this better, but whatever.
I was not goading; I explained why I will not engage further (it's one thing to disagree, even vitriolically, but if someone simply lies about my words, that is obviously a dead end). I don't even see what he replied.
you yourself seem to pattern-match as "Nazism" when Europeans advocate for that same premise.
Lie. Blocked.
No. Where's the 3D model?
Do you mean that this “artist's rendering” of F-47 is 2D? Well, I admit this possibility didn't occur to me, but now that I look closer…
I think that if we are trying to genuinely compare apples to apples, PPP is inadequate between significantly different developed systems, and we may indeed have to fall back to Marxism-Leninism and factors of production. In the end, what is being discussed is whether China will be able to finance its debt, and any analysis has to backchain to that question to say anything about it.
National Socialism with Chinese Characteristics...
It's a funny joke but really, they're not any more National Socialist than any normal European state was before WWII. They are quite different from historical Nazis. They have a representation for minorities (even repressed ones) and affirmative action, they have legalized gender transition, they employ open furries in the PLA (explicitly as fursuit engineers, to develop next generation combat armor). Their notions of “degeneracy” or “racial hygiene” would be quite alien to Germans. The basic level of care for the ethnic majority is just what a state is supposed to do. And Socialism – that they owe to being literally Marxists, with a big portrait of Marx in their main hall of power and stuff. They're far more Capitalist than the Third Reich was, too. Xi has restored the cult of personality, though. Seriously speaking, it's its own complex thing, and should be considered on its own merits in its own historical context, not as a copy or a pastiche of Western paradigms. When all is said and done they're just a modernized Chinese empire.
You simultaneously mock Europeans for being "not capable of resisting a tiny tribe of natural wordcels"
I apologize. My sarcasm there may have been too confusing. I don't think Jews are solely to blame for the quality of your media. Jews, from what I can tell, genuinely like their sermonizing slop, but so does the audience, and the creators are increasingly Gentiles too. I think you have just run out of gas. Particularly Americans. Your culture is vulgar and plain bad, and you should feel bad about it. Your mavericks are sleazy hustlers at best and psychopaths at worst, and you do not resist your worst impulses to bow before the undeserving strongman. You come up with zany and harmful ideas and then force them upon everybody else. Thus, you are what has to be resisted now, at least until you improve somehow.
You just hate Europeans, particularly the West Europeans, you see them as your enemy and you always have.
I don't hate Europeans. I am disappointed in you. In you collectively and in you, SecureSignals, personally. You are less than what I figured, you don't deliver on the crucial advertised open-mindedness and ability-to-change-opinions features, and you take pride in stuff that's completely meh or plainly disgusting. You're like some purebred dogs. Remarkable, peculiar, WEIRD phenotype, but no spark, or almost never. Disappointing.
and I do not want to see them under Chinese hegemony
And at the rate you're going, you may well see Chinese hegemony. It is indeed unfortunate, because the Chinese themselves never had it in them to establish one, I don't think. Too insular, too mercantile, too autistically uncharismatic, and frankly not capable enough to dismantle natural affinities and alliances. They'd have secured their backyard and grown content with limited trade with barbarians, and this was the scenario I still consider preferable. But a few more iterations of low-IQ, smug WINNING and ROCKING THE BOAT, and who knows, they may have to pick up the crown tossed their way.
And the ironic thing is that all this is because you'd have wanted your own hegemony, because for all the denialism – the dream, the hope of being Intrinsically Racially Superior, crushing lessers under the jackboot, still lives and yearns for confirmation. Alas.
What matters is not whether I go full Moldbug, but whether Trump will go full retard. He does suggest that the EU buy an impossible volume of oil, no? How is my plan much worse?
Do you imply that F-47 is not “NGAD”? The one from 2020 presumably was like that Boeing art.
Didn't Russia fight a violent internal war against minority separatist groups? I seem to recall that happening.
Yes, we won. I refer to the “decolonizing” partition plans during this war, like this one. In reality, the colonized Buryats and all others eagerly enlist and fight in Ukraine (and get killed disproportionately). My point is that even a moderately effective state can easily suppress ethnic minority separatism within its borders, so hoping that China will somehow collapse due to ethnic tensions is not serious.
I've seen plenty of Nuking the Three Gorges Dam posting, “China is the welfare queen of nations” posting, “we built up those chinks with our toil and look at how they repay us” posting, “Ways That Are Dark” posting, “only steals and poorly copies” posting, and all other sorts of unhinged, entitled and dismissive posting receiving applause lately, so I feel secure in saying that there is an undertone of stereotype-driven racial animus and condescension/cope, and it goes way back to the Chinese Exclusion Act. Again, this is also visible in the smug confidence with which Trump's team initiated a trade war, assured that Xi would fold because his sweatshop of a nation is existentially dependent on exporting cheap junk to the US. It is perhaps not at all, or only marginally, present in normal people, but then again normal people probably don't care a lot about the topic. I'll also say that I've definitely seen some Americans liken Ruskies to Orcs, but generally it's a European (or even specifically Baltic) thing; I will grant that Americans do not imagine themselves Elves, they're happy enough being citizens of a real great nation.
Your anecdotes sound completely believable; I don't put much trust in the Chinese legal system or IP protections for foreigners, and I recognize that most of the country is pretty poor.
Scooters on sidewalks, however annoying, are a far cry from human feces on sidewalks - a matter of lacking civic virtue or manners, but not of civilizational decay. I don't see scooters on sidewalks here in Buenos Aires, but I do have to look where I'm stepping. It was the other way around in Moscow; would that it were the same way here.