
E/acc and the political compass of AI war

As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-exchanges, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at AI regulation spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Alarmed by this extremist messaging, «the media» proceeds to… harness the power of an institution associated with the Department of Justice to deanonymize him, with the explicit aim to steer the cultural evolution around the topic:

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

That's not bad because Journalists, as observed by @TracingWoodgrains, are inherently Good:

(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)

(That's one creative approach to encouraging gender transition, I guess).

Now to be fair, this is almost certainly a parallel-construction narrative – many people in SV knew Beff's real identity, and as of late he's been very loose with opsec, funding a party, selling merch and so on. Also, the forced reveal will probably help him a great deal – it's harder to dismiss the guy as some LARPing shitposter or a corporate shill pandering to VCs (or, as @Tomato said, running «an incredibly boring b2b productivity software startup») when you know he's, well, this. And this too.

The Forbes article itself doesn't go very hard on Beff, presenting him as a somewhat pretentious supply-side YIMBY, an ally to Marc Andreessen, Garry Tan and such; which is more true of Beff's followers than of the man himself. The more potentially damaging parts (to his ability to draw investment) are his casual invocation of the spirit of Nick Land and his spooky brand of accelerationism (not unwarranted – «e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates», Beff says in his manifesto), and its citation of some professors of «communications» and «critical theory» who are just not very impressed with the whole technocapital thing. At the same time, it reminds the reader of EA's greatest moment (no, not the bed nets).

Online, Beff confirms being Verdon:

I started this account as a means to spread hope, optimism, and a will to build the future, and as an outlet to share my thoughts despite the secretive nature of my work… Around the same time as founding e/acc, I founded @Extropic_AI. A deep tech startup where we are building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics. Ideas simmering while inventing this paradigm of computing definitely influenced the initial e/acc writings. I very much look forward to sharing more about our vision for the technology we are building soon. In terms of my background, as you've now learned, my main identity is @GillVerd. I used to work on special projects at the intersection of physics and AI at Alphabet, X and Google. Before this, I was a theoretical physicist working on information theory and black hole physics. Currently working on our AI Manhattan project to bring fundamentally new computing to the world with an amazing team of physics and AI geniuses, including my former TensorFlow Quantum co-founder @trevormccrt1 as CTO. Grateful every day to get to build this technology I have been dreaming of for over 8 years now with an amazing team.

And Verdon confirms the belief in Beffian doctrine:

Civilization desperately needs novel cultural and computing paradigms for us to achieve grander scope & scale and a prosperous future. I strongly believe thermodynamic physics and AI hold many of the answers we seek. As such, 18 months ago, I set out to build such cultural and computational paradigms.

I am fairly pessimistic about Extropic for reasons that should be obvious enough to people who've been monitoring the situation with DL compute startups and bottlenecks, so it may be that Beff's cultural engineering will make a greater impact than Verdon's physical one. Ironic, for one so contemptuous of wordcels.


The maturation of e/acc from a meme into a real force, if it happens (and, as feared on the Alignment Forum, in the wake of the OpenAI coup-countercoup debacle), will be part of a larger trend, where the quasi-Masonic NGO networks of AI safetyists embed themselves in legacy institutions to procure the power of law and privileged platforms, while the broader organic culture and industry develop increasingly potent contrarian antibodies to their centralizing drive. Shortly before the doxx, two other clusters in the AI debate were announced.

The first one I'd mention is d/acc, courtesy of Vitalik Buterin; it's the closest thing to an acceptable compromise that I've seen. It does not have many adherents yet, but I expect it to become formidable because Vitalik is.

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc.

The "d" here can stand for many things; particularly, defensedecentralizationdemocracy and differential. First, think of it about defense, and then we can see how this ties into the other interpretations.

[…] The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results to my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… The size of each poll is small, but the polls make up for it in the uniformity of their result across a wide diversity of sources and options. In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.

[…] my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization of many parts of the stack: ensuring that there's no single codebase that controls more than half of the proof of stake network, trying to counteract the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is actually attempting to execute on the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent, this has worked: the Prysm client's dominance has dropped from above 70% to under 45%. But this is not some automatic market process: it's the result of human intention and coordinated action.

[…] if we want to extrapolate this idea of human-AI cooperation further, we get to more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further. A first natural step is brain-computer interfaces…

etc. I mostly agree with his points. By focusing on the denial of winner-takes-all dynamics, it becomes a natural big-tent proposal, and it's already having an effect on the similarly big-tent doomer coalition, pulling anxious transhumanists away from the less efficacious luddites and discredited AI deniers.

The second one is «AI optimism», represented chiefly by Nora Belrose from Eleuther and Quintin Pope (whose essays contra Yud 1 and contra the appeal to evolution as an intuition pump 2 I've been citing and signal-boosting for close to a year now; he's pretty good on Twitter too). Belrose is in agreement with d/acc; and in principle, I think this one is not so much a faction or a movement as the endgame to the long arc of AI doomerism initiated by Eliezer Yudkowsky, the ultimate progenitor of this community, born of the crisis of faith in Yud's and Bostrom's first-principles conjectures and their entire «rationality» in light of empirical evidence. Many have tried to attack the AI doom doctrine from the outside (eg George Hotz), but only those willing to engage in the exegesis of Lesswrongian scriptures can sway educated doomers. Other actors in, or close to, this group:

Optimists claim:

The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it.

As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms.

So in terms of a political compass:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

(Not covered: Schmidhuber, Sutton & probably Carmack as radically «misaligned» AGI successor species builders, Suleyman the statist, LeCun the Panglossian, Bengio & Hinton the naive socialists, Hassabis the vague, Legg the prophet, Tegmark the hysterical, Marcus the pooh-pooher, and many others).

This compass will be more important than the default one as time goes on. Where are you on it?


As an aside: I recommend two open LLMs above all others. One is OpenHermes 2.5-7B, the other is DeepSeek-67B (33b-coder is OK too). Try them. They're not OpenAI, but they're getting closer, and you don't need to depend on Altman's or Larry Summers' good graces to use them. With a laptop, you can have AI – at times approaching human level – anywhere. This is irreversible.
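
For anyone who wants to actually try this, here is a minimal sketch of running one of these models locally with Hugging Face transformers. It is not an endorsement of a specific setup: the repo id below is my assumption of the OpenHermes 2.5 upload (check the hub for the exact name and license), it assumes a recent transformers plus accelerate install, and on a real laptop a quantized GGUF build via llama.cpp will be the more practical route.

```python
# Minimal local-inference sketch. Assumptions: the repo id below is the right
# OpenHermes 2.5 upload, transformers + accelerate are installed, and you have
# roughly 16 GB of GPU/CPU memory for fp16 (use a quantized build otherwise).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # GPU if available, otherwise CPU
    torch_dtype="auto",
)

# OpenHermes uses a chat format; the tokenizer's chat template handles it.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the d/acc position in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```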


The e/accs are enthusiastic about space exploration, they just don't believe meat has a good shot at it. d/acc should be in favor, but with conditions. EA safetyists have stronger conditions, basically an ASI mommy on board, or mind-reading exploding collars or something, because space is big and allows one to covertly build… everything that they already fear, and that must not be allowed; the longhouse ought to cover the entirety of the light cone. Regular AI ethics hall monitors and luddites are once again similar in this, because they don't much believe in space (the more leftwing among them think it's bad because "colonialism") and seem to not care one way or another.

the longhouse ought to cover the entirety of the light cone

Close, but I think the argument is "if your longhouse doesn't cover the lightcone, you can expect your colonies to spawn their own universe-eating longhouses and come knocking again once they're much bigger than you." Then the options become: Our shitty longhouse forever, or a more competitive, alien longhouse / colonizers to come back and take all our stuff.

As far as I can tell, our only hope is that at some scales the universe is defense-favored. In which case, yes, fine, let a thousand flowers bloom.

I am not familiar with the term longhouse in this context, and can't easily find an explanation for the term connected to AI or space exploration. Is it a transhumanist term? Is it a rat term?

Can you explain what it means in this context?

It’s a term used in the BAPist sub-niche of dissident-right spaces to mean a kind of feminized suppression of masculinity caused by literal proximity to women. E.g. the married man is in the ‘longhouse’ (even if he’s trad) because a woman cooks for him, he must look after the kids, he can’t go out on an adventure, his yearning for glory and greatness is suppressed, etc. It’s largely an excuse to remain single into middle age and to reject marriage without adopting the most cringe (some would claim) aspects of MGTOW. It’s also commonly used semi-ironically by the Red Scare hosts, so it gained popularity through them too.

I'm not sure exactly what Dase meant, but my reading is that it evokes the totalizing, moralizing, intrusive, overbearing, over-socialized, crab-bucket, tall-poppy-syndrome state that human society tends to arrive at when there isn't a frontier to escape to. I honestly don't understand the connection to Native American governance or living arrangements, but I think it's supposed to evoke otherwise strong chiefs being effectively hen-pecked into submission due to everyone living in close enough quarters to be constantly surveilled.

"Communal living", heavy emphasis on the "Comm", with the native American reference pointing to the fact that Communists didn't invent the failure mode but rather expressed something always lurking in human nature, would be my read.

That's why you send the AGI probes ahead to build and run artificial habitats and then pick up any humans from Earth afterwards that are interested in leaving (not necessarily permanently). It's true that having to take care of meat in space will take significantly more resources than just digitized minds (whether artificial or formerly organic), but then what's the point of this whole project of building AGI and ASI if we can't have our cake and eat it too?

If you're so intent on having flesh and blood humans about in extrasolar space, it's still much more convenient to digitize them in transit and then rebuild a biological brain for them at the other end. I suspect that that's going to be more difficult than the initial scan and uploading, but hardly outside the scope of singularity tech.

I don't really get the appeal of continued biological existence myself, at least when we do get other options.

I like being flesh and blood and I wouldn't trust the alternative until it's been thoroughly tested. But the point of what I'm aiming for is that the choice isn't either/or: if you want to digitize yourself while others want to go to space in the flesh, there are ample resources to support either mode of being. The AGI and ASI that go out to pave the way should make room to accommodate a future with a wide diversity of human life. We should support exploring the frontiers of not just space but human existence. If this were just about convenience you wouldn't need humans at all; you could build AI that are mentally much more suited to space exploration.

there are ample resources to support either mode of being

Unless we find a way to constrain life’s sprawling tendencies, wouldn’t these modes of being expand exponentially until there are no longer ample resources on a per-capita basis?

The rates of declining global fertility seem to counter the idea that life has inherent sprawling tendencies. Or at least, once a species is sufficiently intelligent, capable of long-term planning, and in control of its own fertility, the sprawling tendencies can be intelligently managed.

Speak for yourself, I intend to run millions of forks.

That’s actually a great point, thanks.

I don't really get the appeal of continued biological existence myself, at least when we do get other options.

I don't really get the appeal of a machine with a supposed copy of my mind flying to the end of the universe or consuming the power of a star, when we have other options right here and now.

Assuming technical utopia, that machine with a copy of your mind will be able to live natively in whatever foreign environment it finds itself in. If you’re like me and are able to view that machine as yourself, I would much prefer being able to breathe Martian air and touch the soil directly with my own robotic hands than being constrained to walking around inside a space suit or a human enclosure.

Besides, with advanced enough genetic engineering, I could live inside a Martian bio-body instead of the clunky mechanical one you’re probably picturing. I don’t see how that would feel worse than being in a human skin suit; if anything, it would be great to be free of human biological restraints for once.

I'm mostly picturing fully digital minds dissolving through infinite hedonism; wireheaded to death.

That wouldn’t be great, but a wireheaded digital mind also wouldn’t be one that is exploring and colonizing the universe. Nor does it sound like it would be close to you in mind space, which was what I pictured when you said “a supposed copy of my mind.”

Well to be fair, it is a bit of a straw man. I just don't think human minds will work at all when divorced from the limitations of the flesh and provided with unlimited processing power. They will need to be altered greatly, not copied faithfully.

Edit: Interesting topic, would like to write more, but phoneposting.

They will need to be altered greatly, not copied faithfully.

I agree. But I suspect that the ego could survive such alterations, so long as the process approximates a continuous curve. We are far different than we were when we first learned the word “I” as a baby, but because there’s a continuous thread connecting us through time, we’ve been able to maintain the same basic sense of identity throughout our lives.

If you’re like me and are able to view that machine as yourself

But why view it that way? The map is not the territory, and another territory arranged so as to be isomorphic to the one depicted on the map is not the original one.

Why does it matter which one is the “original”? If it’s isomorphic to me, then it is me for all practical purposes, as far as I’m concerned.

Keeping track of the “original” me is about as inane as keeping track of an “original” painting. Of course, some people still care. If you wish to care, then you do you by all means.

Why does it matter that it's isomorphic to you? There are 7 billion people as unique as you are. Of those, I would expect a non-zero number of them to have experiences and dispositions close enough to yours as to be negligible. If you don't value your continuity, your natal body, or your genes, then I don't see what is there left for you other than some ephemeral idea of "thinking exactly the same" (which is over 0.01 seconds after you're copied and the copy diverges).

Of those, I would expect a non-zero number of them to have experiences and dispositions close enough to yours as to be negligible.

I really do not see how that applies.

The number of people on planet Earth who are close enough to me, in terms of memories/experience/personality/goals such that I consider them isomorphic to myself is precisely zero.

The absolute closest that could potentially exist, given current technology, is a monozygotic twin or a clone, and I'm not aware of having either.

I would assume @mdurak would agree here.

Where we might potentially diverge:

My representation of "me" is robust to perturbations like going to bed and waking up tomorrow, or replacing 1% of the mass in my body via turnover when I drink a bottle of water, have lunch then take a shit.

It isn't robust to a large amount of traumatic brain damage, dementia or the like.

then I don't see what is there left for you other than some ephemeral idea of "thinking exactly the same" (which is over 0.01 seconds after you're copied and the copy diverges).

Define "exactly".

Human cognition is stochastic, leaving aside issues of determinism at the level of quantum mechanics.

Your best attempt at exposing me to the same inputs and expecting the same outputs, separated by any amount of time, will inevitably have minor differences. Take a biological human, do something to erase their episodic memory and have them do a complex task, such as write an essay. Have them repeat it, with their memories of the original removed, and you are exceedingly unlikely to get the exact same text back.

But if such an entity existed, such that if "me" and it were blackboxed and then subjected to a very thorough assessment, we couldn't be distinguished by an external observer, or even by me looking solely at the outputs, tested separately (ideally controlling the environment as tightly as possible, hell, even scrubbing my memories of the last episode), then that's a copy of me, and I accord it the same rights and entitlements as the one typing this.

I don't think a text interface suffices: as @2rafa once suggested, it might be possible to finetune an LLM on all the text I or anyone else has ever written, such that someone who is only interacting with us via text could be fooled indefinitely.

I don't expect that to capture my cognition in fine enough detail to work, I'm not just an entity that produces text after all.

So an ideal test for recognizing some other potential copy of myself (especially one in a different substrate, like a supercomputer) would also check for plenty of other things.

Does that copy, if instantiated into a highly realistic physical simulation, behave indistinguishably from multiple passes of the biological me?

Does it have strong and measurable correlates to my own neural circuitry? Ideally an emulation of the specific ones in my brain?

If it can pass all the tests with the same inter-test variability as I currently do, then I will be satisfied with calling it another copy of myself, with equal rights to the SMH name and even potentially a fair division of assets belonging to me.

The most robust way of creating something like this would be scanning and uploading a brain. Not an easy task, far from it. There might well be cheaper/easier and "good enough" alternatives, such that SMH_virtual has about as much variability from the current walking-talking one as I do from biological SMH(2022), 2019 or likely 2024. I have no qualms about calling all of them me, hence none about calling that upload the same.


I think people with such beliefs have no more moral patienthood than a trust fund. What should anyone care about some loosely defined isomorphism, if it even holds? Moreover, why would you be entitled to replication of your sentimental baggage in some derivative entities? Just instantiate a distilled process that has similar high-level policies, and go out.

What should anyone care about some loosely defined isomorphism, if it even holds?

Why should anyone care about anything? Why should anyone care about individuals with genes that are similar, but not identical, to them? They don’t have to, but evolution has selected for altruism in certain scenarios.

I’d bet that the memeplexes of individuals like me are much more likely to colonize the universe than the memeplexes of individuals like you, who insist on expending expensive resources to engineer space habitats for biological human survival. Not that it is morally superior for my memeplexes to propagate more, of course. It’s not immoral to be Amish, it’s just irrelevant.

Just instantiate a distilled process that has similar high-level policies, and go out.

If those policies are similar enough to mine, that’s fine with me. My children are newly instantiated processes rather than clones of me. I’m fine with them taking over my estate when I die, so I don’t see why I would begrudge other instantiated processes that are aligned with my values.


Well, if nothing else, if you make a copy of me, and either it or I have to come to a painful and premature end, I will have a strong preference for it happening to the copy.

I suppose I could see where you're coming from if you see your copies the way other people see their children, but the idea that they're literally you makes no sense to me.

Suppose you put me under and copy me atom for atom, mental bit for mental bit, so that both copies of me wake up thinking of themselves as the original. For all practical purposes, both of us will experience the same emotions and reasoning (with perhaps some deviation due to quantum + environmental fluctuations) with regards to the situation we’re in. Neither of us can tell which one is the original, because we’ve been copied atom for atom. If we have to decide, both of us would prefer “me” surviving over the other one. But ultimately, if I am the way I am now, I would be much less perturbed by the death of one of us, now that I know that mdurak will live on in a very real sense.

Perhaps both of your clones would have a much stronger visceral reaction to dying. That’s fair, because even in regular life some people are more/less averse to dying for their country. But that doesn’t change how it can make sense to see both copies of a cell that just underwent mitosis as being essentially equivalent to the original (chance mutations aside), and I don’t see how cloning myself is functionally any different from mitosis.
