
What does Kirin 9000S tell us about the future

I've been wrong, again, pooh-poohing another Eurasian autocracy. Or so it seems.

On 29 August 2023, to the great jubilation of Chinese netizens («the light boat has passed through a thousand mountains!», they cry), Huawei announced the Mate 60 and 60 Pro; the formal launch is scheduled for September 25th, commemorating the second anniversary of the return of Meng Wanzhou, CFO and daughter of Huawei's founder, from her detention in Canada. Those are nice phones, of course, but spec-wise unimpressive as far as flagships in late 2023 go (on benchmarks they score something like 50-60% of the latest iPhone while burning a peak 13 W, so 200% of its power). Now they're joined by the Mate X5.

The point, however, is that they utilize Huawei's own SoC, HiSilicon Kirin 9000S, not only designed but produced in the Mainland; it even uses custom cores that inherit simultaneous multithreading from their server line (I recommend this excellent video review, also this benchmarking). Its provenance is not advertised, in fact it's not admitted at all, but by now all reasonable people are in agreement that it's made by SMIC Shanghai, using their N+2 (7nm) process, with an actual minimum metal pitch around 42 nm, energy efficiency at low frequencies close to Samsung's 4nm and far worse at high frequencies (overall capability in the Snapdragon 888 range, so 2020), and transistor density on par with first-gen TSMC N7, maybe N7P (I'm not sure though, it might well be 10% higher)… so on the border of what has been achieved with DUV (deep ultraviolet) and early EUV runs, EUV technology having been denied to China. (As a side note, Huawei is also accused of building its own secret fabs.)

It's also worse on net than the Kirin 9000, their all-time peak achievement taped out across the strait in 2020, but it's… competitive. They apparently use self-aligned quad patterning, a DUV variant that's as finicky as it sounds, an absurd attempt to cheat optics and print features many times smaller than the exposing photons' wavelength (certain madmen went as high as 6x patterning; that said, even basic single-patterning EUV is insane and finicky, «physics experiment, not a production process»; companies on the level of Nikon exited the market in exasperation rather than pursue it; and it'll get worse). This trick was pioneered by Intel (which has failed at adopting EUV; afaik it's a fascinating corporate mismanagement story with as much strategic error as simple asshole behavior by individual executives) and is still responsible for their latest chips, though it will be made obsolete in the next generations (the current node used to be called Intel's 10 nm Enhanced SuperFin and was recently rebranded to Intel 7; note, however, that the Kirin 9000S is a low-power part and requirements there are a bit more lax than in desktop/server processors). Long story short: it's 1.5-2 generations behind, 3-4 years behind the frontier of available devices, 5-6 years behind frontier production runs, 7-8 years after the first machines to make such chips at scale came onto the market; but things weren't that much worse back then. We are, after all, in the domain of diminishing returns.
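
To make this multi-patterning trick concrete, here's a back-of-the-envelope sketch (my own illustration with textbook Rayleigh-criterion numbers and the reported ~42 nm pitch; the k1 and NA values are generic assumptions, not SMIC's actual recipe):

```python
# Rough lithography arithmetic, illustrative only: generic textbook numbers,
# not SMIC's actual process parameters.

LAMBDA_ARF = 193.0    # nm, ArF excimer laser used by DUV immersion scanners
NA_IMMERSION = 1.35   # numerical aperture of water-immersion optics
K1_PRACTICAL = 0.28   # near the practical single-exposure limit

def min_single_exposure_pitch(wavelength_nm, na, k1):
    """Rayleigh criterion: minimum half-pitch is k1 * lambda / NA, so pitch is twice that."""
    return 2 * k1 * wavelength_nm / na

duv_limit = min_single_exposure_pitch(LAMBDA_ARF, NA_IMMERSION, K1_PRACTICAL)
print(f"DUV immersion single-exposure pitch limit: ~{duv_limit:.0f} nm")  # ~80 nm

# Multi-patterning relaxes what the scanner itself must resolve:
# SADP halves the finally printed pitch, SAQP quarters it.
target_metal_pitch = 42.0  # nm, roughly what's reported for SMIC N+2
for name, factor in [("SADP (double)", 2), ("SAQP (quad)", 4)]:
    mandrel_pitch = target_metal_pitch * factor
    verdict = "fine" if mandrel_pitch >= duv_limit else "NOT resolvable"
    print(f"{name}: needs a {mandrel_pitch:.0f} nm mandrel pitch -> {verdict} for single-exposure DUV")
# SADP would sit right at the ~80 nm limit (84 nm, no margin); SAQP only needs
# ~168 nm, comfortably printable -- at the cost of many extra deposition/etch
# steps per layer, hence the expense and finickiness.
```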

Here are the highlights from the first serious investigation, here are some leaks from it, here's the nice Asianometry overview (esp. 3:50+), and the exhilarating, if breathlessly hawkish, perspective of Dylan Patel, complete with detailed restriction-tightening advice. Summarizing:

  1. This is possible because sanctions against China have tons of loopholes, and because ASML and other suppliers are not interested in sacrificing their business to American ambition. *
  2. Yes, it qualifies for 7nm in terms of critical dimensions. Yes, it's not a Potemkin tulou: they likely have passable yields, both catastrophic and parametric (maybe upwards of 50% for this SoC, because low variance in stress-testing means they didn't feel the need to approve barely-functional chips, which in turn means there weren't too many defects), and so it's economically sustainable (might be better in that sense than e.g. Samsung's "5nm" or "4nm", because Samsung rots alive due to systemic management fraud) [I admit I doubt this point, and Dylan is known to be a hawk with motivated reasoning]. Based on known capex, they will soon be able to produce 30K wafers per month, which means tens of millions of such chips soon (corroborated by shipment targets; concretely it's something like 300 Kirins × 29,700 wafers, so 8.9M/month, though the production cycle is longer than a month – see the rough die-count sketch after this list). And yes, they will scale it up further, and indeed they will keep polishing this tech tree and plausibly get to commercially viable "5nm" next: «the total process cost would only be ≈20% higher versus a 5nm that utilizes EUV» (probably 50%+ though).
  3. But more importantly: «Even with 50% yields, 30,000 WPM could support over 10 million Nvidia H100 GPU ASIC dies a year […] Remember GPT-4 was trained on ≈24,000 A100’s and Open AI will still have less than 1 million advanced GPUs even by the end of next year». Of course, Huawei had already been producing competitive DL accelerators back when they had access to EUV 7nm; even now I stumble upon ML papers that mention using those.
  4. As if all that were not enough, China simply keeps splurging billions on pretty good ML-optimized hardware, like Nvidia A/H800s, which comply with the current (toothless, as Patel argues) restrictions.
  5. But once again, on the bright (for Westerners) side: this means it's not so much Chinese ingenuity and industriousness (for example, they still haven't delivered a single ≤28nm lithography machine, though it's not clear whether the one they're working on won't be rapidly upgraded for 20, 14, 10 and ultimately 7nm processes – after all, SMIC is currently procuring tools for «28nm», complying with sanctions, yet here we are) as it is the unpicked low-hanging fruit of trade restrictions. In fact, some Chinese doomers argue it's a specific allowance by the US Department of Commerce and overall a nothingburger, i.e. it doesn't suggest a willingness to produce more consequential things than gadgets for patriotic consumers. The usual suspects (Zeihan and his flock) take another view and smugly claim that China has once again shot itself in the foot while showing off – paper tiger, wolf warriors, only steals and copies, etc. – and, the stated objective of the USG being «as large of a lead as possible», new crippling sanctions are inevitable (maybe from Patel's list). There exists a body of scholarship on semiconductor supply chain chokepoints which confirms these folks are not delusional: something as «simple» as high-end photoresist is currently beyond Chinese grasp, so the US can make use of a hefty stick.
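
For points 2 and 3, the arithmetic checks out under a standard dies-per-wafer approximation; here's a quick sketch (300 mm wafers, the 50% yield and 30K WPM figures quoted above – all of it ballpark, none of it insider data):

```python
import math

# Ballpark throughput arithmetic behind points 2 and 3; die sizes, yield and
# wafer capacity are the figures quoted above, nothing more authoritative.

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Standard dies-per-wafer approximation with an edge-loss correction term."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

WAFERS_PER_MONTH = 30_000
YIELD = 0.5  # the 50% figure Patel works with

for name, area in [("Kirin 9000S (107 mm^2)", 107.0), ("GH100-class die (814 mm^2)", 814.0)]:
    good_per_wafer = gross_dies_per_wafer(area) * YIELD
    per_month = good_per_wafer * WAFERS_PER_MONTH
    print(f"{name}: ~{good_per_wafer:.0f} good dies/wafer, "
          f"~{per_month / 1e6:.1f}M/month, ~{per_month * 12 / 1e6:.0f}M/year")
# Kirin 9000S (107 mm^2): ~298 good dies/wafer, ~8.9M/month, ~107M/year
# GH100-class die (814 mm^2): ~32 good dies/wafer, ~0.9M/month, ~11M/year
```

Which is where both the «8.9M/month» figure in point 2 and Patel's «over 10 million H100 dies a year» in point 3 come from.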

All that being said, China is making progress in on-shoring the supply chain: EDA, 28nm scanners, wafers, etc.

* Note: Patel plays fast and loose with how many lithography machines exactly, and of what capacity, are delivered/serviced/ordered/shipping/planned/allowed, and it's the murkiest part of the whole narrative; for example, he describes ASML's race-traitorous plans stretching to 2025-2030, but the Dutch and also the Japanese seem to have already begun limiting sales of the tools he lists as unwisely left unbanned, so the August surge of imports may have been the last, and most 2024+ sales are off the table, I think.

All of this is a retreading of a discussion from over a year ago, when a less mature version of the SMIC N7 process was used – also surreptitiously – for a Bitcoin mining ASIC, a simple, obscenely high-margin part 19.3 mm² in size, which presumably would have been profitable to make even at pathetic yields, like 10%; the process back then was near-identical to TSMC N7 circa 2018-2019. The 9000S is 107 mm² and lower-margin. Nvidia GH100, the new workhorse of cutting-edge ML, made on TSMC's 4nm node, is 814 mm²; as GPU chips are a strategic resource, it'd be sensible to subsidize their production (as it happens, the H100 with its 98 MTr/mm² must be equally or a bit less dense than the 9000S; the A100, a perfectly adequate 7nm downgrade option, is at 65 MTr/mm², so we can be sure they'll be capable of making those, e.g. resurrecting Biren BR100 GPUs or things like the Ascend 910). Citing Patel again, «Just like Apple is the guinea pig for TSMC process nodes and helps them ramp and achieve high yield, Huawei will likewise help SMIC in the same way […] In two years, SMIC will likely be able to produce large monolithic dies for AI and networking applications.» (In an aside, Patel laments the relative lack of gusto in strangling Chinese radio/sensor capabilities, which are more formidable and immediately scary than all that compute. However, this makes sense if we look at the ongoing chip trade war through the historical lens, with the reasonable objective being Chinese obsolescence à la what happened to the Soviet Union and its microelectronics, and arguably even Japan in the 80s, which is why ASML/Samsung/TSMC are on the map at all; the Choyna military threat per se, except to Taiwan, being a distant second thought, if not a total pretext. This r/LessCredibleDefense discussion may be of interest.)
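
Why die size is the crux here can be shown with a minimal Poisson yield sketch, yield ≈ exp(-D0·A), with assumed defect densities (my illustration, not reported SMIC numbers): the same process that comfortably yields a 19.3 mm² crypto ASIC, and passably yields a 107 mm² phone SoC, gets brutal on an 814 mm² monolithic die.

```python
import math

# Toy Poisson yield model: yield = exp(-D0 * A). The defect densities D0 are
# assumptions for illustration; none of these are reported SMIC figures.

DIE_AREAS_MM2 = {"crypto ASIC": 19.3, "Kirin 9000S": 107.0, "GH100-class": 814.0}

for d0 in (0.1, 0.3, 0.6):  # defects per cm^2, mature -> immature (assumed)
    row = ", ".join(
        f"{name}: {math.exp(-d0 * area / 100):.0%}"  # /100 converts mm^2 to cm^2
        for name, area in DIE_AREAS_MM2.items()
    )
    print(f"D0 = {d0}/cm^2 -> {row}")
# D0 = 0.1/cm^2 -> crypto ASIC: 98%, Kirin 9000S: 90%, GH100-class: 44%
# D0 = 0.3/cm^2 -> crypto ASIC: 94%, Kirin 9000S: 73%, GH100-class: 9%
# D0 = 0.6/cm^2 -> crypto ASIC: 89%, Kirin 9000S: 53%, GH100-class: 1%
```

Under this toy model, a defect density consistent with ~50% yield on the Kirin would leave a GH100-sized monolithic die at around 1%, which is the gap Patel expects SMIC to close «in two years».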


So. I, too, pooh-poohed the Chinese result back then, assuming that tiny crypto ASICs are as good as they will get within the bounds assigned to them («swan song of Chinese industry») and that they wouldn't achieve meaningful yields. Just as gwern de facto did in October 2022, predicting the slow death of Chinese industry in view of «Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the PRC» (even mentioning the yellow bear meme). Just as I did again 4 months ago, saying to @RandomRanger «China will maybe have 7nm in 2030 or something». I maintain that it's plausible they won't have a fully indigenized supply chain for any 7nm process until 2030 (and/or will likewise fail at securing chains for necessary components other than processors: HBM, interposers, etc.), and they may well fall below the capacity they have right now (reminder that not only do scanners break down and need consumables, but they can be remotely disabled), especially if restrictions keep ramping up and they keep making stupid errors, e.g. actually starting and failing an attempt at annexing Taiwan, or going for Cultural Revolution Round II: Zero Covid Boogaloo, or provoking an insurgency by force-feeding all primary school students gutter oil breakfasts… with absolute power, the possibilities are endless! My dismissal was informed not by prejudice but by years upon years of promises by Chinese industry and academia representatives to get to 7nm in 2 more weeks, and consistent failure and high-profile fraud (and in fact I found persuasive this dude's argument that by some non-absurd measures the gap has widened since the Mao era; and there was all the graphene/quantum computing "leapfrogging" nonsense, and so on). Their actors haven't become appreciably better now.

But I won't pooh-pooh any more, because their chips have become better. I have also said: «AGI can be completed with already available hardware, and the US-led bloc has like 95% of it, and total control over means of production». This is still technically true, but apparently not in a decisive way. History is still likely to repeat – that is, like Qing China during the Industrial Revolution, like the Soviet Union in the transistor era, the nation playing catch-up will once again run into trade restrictions, fail at domestic fundamental innovation and miss out on the new technological stage; but it is not set in stone. Hell, they may even get to EUV through that asinine 160m synchrotron-based electron beam thing – I mean, they are trying, though it still looks like ever more academic grift… but…

I have underestimated China and overestimated the West. Mea culpa. Alphanumericsprawl and others were making good points.


Where does this leave us?

It leaves us in the uncomfortable situation where China, as a rival superpower, will plausibly have to be defeated for real, rather than just sanctioned away or allowed to bog itself down in imperialist adventurism and incompetence. They'll have enough suitable chips, they have passable software, enough talent for 1-3 frontier companies, reams of data and their characteristically awkward ruthlessness applied to refining it (and as we've learned recently, high-quality data can compensate for a great disparity in compute). They are already running a few serious almost-OpenAI-level projects – Baidu's ERNIE, Alibaba's Tongyi Qianwen (maybe I've mentioned it already, but their Qwen-7B/VL are really good; it seems like all groups in the race were obligated to release a small model for testing purposes), maybe also Tsinghua's ChatGLM, SenseTime et al.'s InternLM, and smaller ones. They – well, those groups, not the red boomer Xi – are well aware of their weaknesses and optimize around them (and borrowing from the open academic culture helps, as can often be seen in the training methods sections – thanks to MIT & Meta, Microsoft, Princeton et al.). They are preparing for the era of machine labor, which for now is sold as a means to take care of the aging population and so on (I particularly like Fourier Intelligence's trajectory, a near-perfect inversion of Iron Man's plot – start with a medical exoskeleton, proceed to a full humanoid; but there are other humanoids being developed in parallel, e.g. the Unitree H1, and they seem competitive with their American equivalents like Tesla Optimus, X1 Neo and so on); in general, they are not being maximally stupid with their chances.

And this, in turn, means that the culture of the next few years will likely be – as I predicted in Viewpoint Focus 3 years ago – dominated by the standoff, leading up to a much more bitter economic decoupling and kinetic war; promoting bipartisan jingoism and leaving less space for the «culture war» as understood here; on the upside, it'll diminish the salience of progressive campaigns that demoralize the more traditionally minded population.

It'll also presumably mean less focus on «regulation of AI risks» than some would hope for, denying this topic the uncontested succession to the Current Thing №1.

That's about all from me, thoughts?


On the gripping hand, we know that Xi does not think like me, and may operate on a more brutal Putin-like logic: China stronk, war easy, dragon rising to meet the bear, reee! Also, it may even be sensible to dispense with TSMC Taiwan and fulfill the great dream of annexing the island, if this no longer means a total chip embargo (and it's clear that Intel/TSMC Arizona/TSMC Kumamoto/etc. will go live in the coming years anyway); in this vein, we might expect a continued increase in belligerence as the ≤28nm supply chain is increasingly on-shored. Actually, this line of thought seems the most solid to me.

I suppose I question whether AGI (or semiconductor factories in general) factors as much into the logic of CCP decision-making on Taiwan as people who blog about tech, semiconductors and AI seem to think. If an invasion comes, I think it is likely to be for political-science rather than computer-science reasons. Denying the US chips for two or three years by poisoning the well and destroying TSMC in an invasion exposes China to so many additional external costs, for what may not even be a substantial real-world advantage; and we don’t know what Xi’s opinions on AGI are anyway.

As you say, this debate is largely irrelevant because the compute for AGI certainly already exists, in both the US and China. Patel seems obsessed with hardware and consults as a ‘strategy guy’ (to be honest, I question what he offers that internal analysts and big consultants don’t: he doesn’t have any special insight or knowledge, writes badly (with many typos) and doesn’t seem particularly well connected in the way that, say, some of the better strategic intelligence firms probably are). But Patel is a hardware guy; he has to believe that spending trillions producing a hundred million more A100s is going to make all the difference, because if you can jerry-rig AGI on $100m of rented compute and a couple of engineering breakthroughs (something he pooh-poohs when mocking HuggingFace or Databricks – not that they’re going to do it, but still), then the whole GPU arms race is, if not obsolete, then certainly less pressing to the really big questions.

In any case, the main thing the “more tflops = more AGI” logic forgets is that as soon as anyone has “AGI” (a relatively amorphous concept, obviously; and for military applications or whatever, non-AGI AI might still be preferable in various ways), everyone’s going to have it, even if that means more UAE-via-Caymans shell companies renting cloud access. There’s no world in which the US trains AGI and China just sits there with a very long telescope, being sad for 5+ years while punching the wall.

And as ever, if (self-improving) AGI is soon, nothing matters. If it’s a long way away (unlikely), then China’s largest problem is birthrates, not hardware.