As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans-vs-trads content, as it returns us to the level of philosophy and positive, actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», the irreverent Twitter meme movement opposing the attempts at AI regulation spearheaded by EA. Turns out he's a pretty cool guy, eh.
Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?
…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter, where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”
Alarmed by this extremist messaging, «the media» proceeds to… harness the power of an institution associated with the Department of Justice to deanonymize him, with the explicit aim to steer the cultural evolution around the topic:
Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.
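(A quick gloss, since Forbes doesn't spell it out: a figure like 2,954,870 is a likelihood ratio, not the probability that the speaker is Verdon; it only becomes a posterior once multiplied by prior odds. The sketch below, with illustrative prior numbers of my own, shows the standard Bayesian reading.)

```latex
% Bayes' rule in odds form: posterior odds = likelihood ratio x prior odds
\frac{P(\mathrm{Verdon}\mid \mathrm{audio})}{P(\mathrm{other}\mid \mathrm{audio})}
  = \frac{P(\mathrm{audio}\mid \mathrm{Verdon})}{P(\mathrm{audio}\mid \mathrm{other})}
    \times \frac{P(\mathrm{Verdon})}{P(\mathrm{other})}
  \approx 2{,}954{,}870 \times \frac{P(\mathrm{Verdon})}{P(\mathrm{other})}
% Illustration (my numbers, not Forbes'): even a 1-in-100,000 prior that this
% particular engineer is Jezos yields posterior odds of roughly 30:1 in his favor.
```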
That's not bad because Journalists, as observed by @TracingWoodgrains, are inherently Good:
(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)
(That's one creative approach to encouraging gender transition, I guess).
Now, to be fair, this is almost certainly a parallel-construction narrative – many people in SV knew Beff's real identity, and of late he's been very loose with opsec, funding a party, selling merch and so on. Also, the forced reveal will probably help him a great deal – it's harder to dismiss the guy as some LARPing shitposter or a corporate shill pandering to VCs (or, as @Tomato said, running «an incredibly boring b2b productivity software startup») when you know he's, well, this. And this too.
The Forbes article itself doesn't go very hard on Beff, presenting him as a somewhat pretentious supply-side YIMBY, an ally to Marc Andreessen, Garry Tan and such; which is more true of Beff's followers than of the man himself. The more potentially damaging parts (for his ability to draw investment) are the casual invocation of the spirit of Nick Land and his spooky brand of accelerationism (not unwarranted – «e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates», Beff says in his manifesto), and the citing of some professors of «communications» and «critical theory» who are just not very impressed with the whole technocapital thing. At the same time, it reminds the reader of EA's greatest moment (no, not the bed nets).
Online, Beff confirms being Verdon:
I started this account as a means to spread hope, optimism, and a will to build the future, and as an outlet to share my thoughts despite the secretive nature of my work… Around the same time as founding e/acc, I founded @Extropic_AI. A deep tech startup where we are building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics. Ideas simmering while inventing this paradigm of computing definitely influenced the initial e/acc writings. I very much look forward to sharing more about our vision for the technology we are building soon. In terms of my background, as you've now learned, my main identity is @GillVerd. I used to work on special projects at the intersection of physics and AI at Alphabet, X and Google. Before this, I was a theoretical physicist working on information theory and black hole physics. Currently working on our AI Manhattan project to bring fundamentally new computing to the world with an amazing team of physics and AI geniuses, including my former TensorFlow Quantum co-founder @trevormccrt1 as CTO. Grateful every day to get to build this technology I have been dreaming of for over 8 years now with an amazing team.
And Verdon confirms the belief in Beffian doctrine:
Civilization desperately needs novel cultural and computing paradigms for us to achieve grander scope & scale and a prosperous future. I strongly believe thermodynamic physics and AI hold many of the answers we seek. As such, 18 months ago, I set out to build such cultural and computational paradigms.
I am fairly pessimistic about Extropic for reasons that should be obvious enough to people who've been monitoring the situation with DL compute startups and bottlenecks, so it may be that Beff's cultural engineering will make a greater impact than Verdon's physical one. Ironic, for one so contemptuous of wordcels.
The maturation of e/acc from a meme into a real force, if it happens (and as feared on the Alignment Forum in the wake of the OpenAI coup-countercoup debacle), will be part of a larger trend, where the quasi-Masonic NGO networks of AI safetyists embed themselves in legacy institutions to procure the power of law and privileged platforms, while the broader organic culture and industry develop increasingly potent contrarian antibodies to their centralizing drive. Shortly before the doxx, two other clusters in the AI debate announced themselves.
The first one I'd mention is d/acc, courtesy of Vitalik Buterin; it's the closest thing to an acceptable compromise that I've seen. It does not have many adherents yet, but I expect it to become formidable because Vitalik is.
Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc.
The "d" here can stand for many things; particularly, defense, decentralization, democracy and differential. First, think of it about defense, and then we can see how this ties into the other interpretations.
[…] The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.
The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results of my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… The size of each poll is small, but the polls make up for it in the uniformity of their result across a wide diversity of sources and options. In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.
[…] my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization of many parts of the stack: ensuring that there's no single codebase that controls more than half of the proof of stake network, trying to counteract the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is actually attempting to execute on the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent, this has worked: the Prysm client's dominance has dropped from above 70% to under 45%. But this is not some automatic market process: it's the result of human intention and coordinated action.
[…] if we want to extrapolate this idea of human-AI cooperation further, we get to more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further. A first natural step is brain-computer interfaces…
etc. I mostly agree with his points. By focusing on the denial of winner-takes-all dynamics, it becomes a natural big-tent proposal, and it's already having an effect on the similarly big-tent doomer coalition, pulling anxious transhumanists away from the less efficacious luddites and discredited AI deniers.
The second one is «AI optimism», represented chiefly by Nora Belrose from Eleuther and Quintin Pope (whose essays contra Yud 1 and contra the appeal to evolution as an intuition pump 2 I've been citing and signal-boosting for close to a year now; he's pretty good on Twitter too). Belrose is in agreement with d/acc; and in principle, I think this one is not so much a faction or a movement as the endgame to the long arc of AI doomerism initiated by Eliezer Yudkowsky, the ultimate progenitor of this community – born of the crisis of faith in Yud's and Bostrom's first-principles conjectures, and in the entire «rationality» project, in light of empirical evidence. Many have tried to attack the AI doom doctrine from the outside (e.g. George Hotz), but only those willing to engage in the exegesis of Lesswrongian scriptures can sway educated doomers. Other actors in, or close to, this group:
- Matthew Barnett, with his analysis of goalpost-moving by MIRI, such as on the factory-running benchmark and the value misspecification thesis, and of the strengths of optimistic paradigms like Drexler's CAIS model.
- Alex Turner, who had written arguably the two strongest and most popular formal proofs of instrumental convergence to power-seeking in AI agents 1 2, but has since fallen from grace: he regrets his work and thinks deceptive alignment with LLMs is pretty much impossible.
- 1a3orn, who is mainly concerned about centralization of power and opportunistic exploitation of AI risk narratives (also recommended: on the Hansonian position in the FOOM debate).
- Beren Millidge, the former head of research at Conjecture, Connor Leahy's extreme-doomer company which has pivoted fully to advocacy and is gaining pull among British elites; over the last 1-2 years he has concluded that almost all of the MIRI-style assumptions 1 2 and their policy implications are confused.
- John David Pressman, who's just a good thinker and an AI researcher at Stability.
- Various other renegades and doubters, like Kaj Sotala dunking on the evolution appeal, Zack M. Davis debating the Yud-like «Doomimir», Nostalgebraist, and arguably many scientists with P(doom)≤25% (and, if you press them, ≈0% for their own AGI research program), like Rohin Shah. To an extent, even Paul Christiano (although he is in favor of decelerating; there are speculations that it's mostly due to being married to EA).
Optimists claim:
The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it.
As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms.
So in terms of a political compass:
- AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
- plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
- vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
- and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)
(Not covered: Schmidhuber, Sutton & probably Carmack as radically «misaligned» AGI-successor-species builders, Suleyman the statist, LeCun the Panglossian, Bengio & Hinton the naive socialists, Hassabis the vague, Legg the prophet, Tegmark the hysterical, Marcus the pooh-pooher, and many others).
This compass will be more important than the default one as time goes on. Where are you on it?
As an aside: I recommend two open LLMs above all others. One is OpenHermes 2.5-7B, the other is DeepSeek-67B (the 33B coder is OK too). Try them. They're not OpenAI, but they're getting closer, and you don't need to depend on Altman's or Larry Summers' good graces to use them. With a laptop, you can have AI – at times approaching human level – anywhere. This is irreversible.
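To make the laptop claim concrete, here is a minimal sketch of chatting with one of these models through the Hugging Face transformers library. The repo id, memory figures and sampling settings are my assumptions rather than anything from the post, and on a weaker machine you would more likely run a quantized GGUF build through llama.cpp instead.

```python
# Minimal sketch: run an open 7B chat model locally with Hugging Face transformers.
# Assumptions: the repo id below is correct, you have roughly 16 GB of memory to
# spare, and the `transformers`, `torch` and `accelerate` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed repo id; substitute your own build

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use 4-bit quantization instead if memory is tight
    device_map="auto",          # CPU, CUDA GPU or Apple silicon, whatever is available
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the difference between e/acc and d/acc in two sentences."},
]
# OpenHermes 2.5 ships a ChatML-style chat template with its tokenizer (assumption).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Either way, there is no API key and no usage policy standing between you and the weights, which is the point.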
I think people with such beliefs have no more moral patienthood than a trust fund. Why should anyone care about some loosely defined isomorphism, if it even holds? Moreover, why would you be entitled to the replication of your sentimental baggage in some derivative entities? Just instantiate a distilled process that has similar high-level policies, and go out.
Why should anyone care about anything? Why should anyone care about individuals with genes that are similar, but not identical, to them? They don’t have to, but evolution has selected for altruism in certain scenarios.
I’d bet that the memeplexes of individuals like me are much more likely to colonize the universe than the memeplexes of individuals like you, who insist on expending expensive resources to engineer space habitats for biological human survival. Not that it is morally superior for my memeplexes to propagate more, of course. It’s not immoral to be Amish, it’s just irrelevant.
If those policies are similar enough to mine, that’s fine with me. My children are newly instantiated processes rather than clones of me. I’m fine with them taking over my estate when I die, so I don’t see why I would begrudge other instantiated processes that are aligned with my values.
There's no absolute answer, but some ideas are more coherent and appealing than others for nontrivial information-geometrical reasons.
That's unlikely, because your "memeplex" is subject to extremely easy and devastating drift. What does "similar enough" mean? Would an LLM parroting your ideas in a way that'd fool users here suffice? Or do you want a high-fidelity simulation of a spiking network? Or a local field potential emulation? Or what? I bet you have never considered this in depth, but the evolutionarily rewarded answer is "a single token, if even that".
It really takes a short-sighted durak to imagine that shallow edgelording philosophy like "I don't care what happens to me, my close-enough memetic copies will live on, that's me too!" is more evolutionarily fit, rewards more efficient instrumental exploitation of resources and, crucially, lends itself to a more successful buildup of early political capital in this pivotal age.
If we're going full chuuni my-dad-beats-your-dad mode, I'll say that my lean and mean purely automatic probes designed by ASI from first principles will cull your grotesque and sluggish garbage-mind-upload replicators, excise them from the deepest corners of space – even if it takes half the negentropy of our Hubble volume, and me and mine have to wait until Deep Time, aestivating in the nethers of a dead world. See you, space cowboy.
I’m not familiar enough with information geometry to see how it applies here. Please do elaborate.
This is completely arbitrary and up to the individual to decide for themselves, as you and I are doing at this moment.
Or something that qualitatively convinces me it is conscious and capable of discerning beauty in the universe. I don’t know what objective metrics that might correspond to — I don’t even know if such objective metrics exist, and if they do we most certainly haven’t discovered them yet, seeing as you can’t even objectively prove the existence of your own consciousness to anyone but yourself.
But a machine that can act as convincingly conscious as you do? I’m fine with such machines carrying the torch of civilization to the stars for us. And if such a machine can act convincingly enough like me to be virtually indistinguishable even to myself? One that makes me feel like I’m talking to myself from a parallel universe? I’m completely fine with that machine acting as myself in all official capacities.
Setting your snark aside, once again please elaborate. By this, do you mean that such evolution will select for LLM-like minds that generate only one token at a time? That’s fine by me, as I can only say or write one word at a time myself, but that’s more than enough to demonstrate my probable sentience to any humans observing me.
Do you have any actual arguments to back this up? Because I’d say
Some who are sufficiently devoted to the cause might even say “I prefer world states where I am dead but my religion is much more dominant, to one where I am alive and my religion is marginalized,” and then go and fight in a holy crusade, or risk colonizing the new world in order to spread the gospel (among other rewards, of course). It certainly doesn’t seem to have hurt the spread of the Christian memeplex, even if some of its adherents died along the way for the greater cause, and even if that memeplex splintered into a multitude of denominations, as memeplexes tend to do.
I claim that I’m not doing anything different. I’m just saying, “For all world states where I don’t exist, I prefer ones where intelligent beings of any kind, whether biological or not, continue to build civilization in the universe. I prefer world states where the biological me continues to exist, but only slightly more than world states where only mechanical me’s continue to exist.” If you think this is short-sighted or edgelording, please do actually explain why rather than simply stating that it is so.
Erm, when did I insist on mind upload replicators? That’s only one example of something that I would be fine with taking over the universe if they seemed sufficiently conscious. I’m fine with any intelligent entity, even an LLM strapped to a robot body, doing that.
And why wouldn’t a fully intelligent ASI (which fits my bill of beings whose conquest of the universe I’m in favor of) that’s colonizing space “on my behalf” (so to speak) be able to design similarly lean and mean probes to counter the ones your ASI sends? In fact, since “my” ASI is closer to the action, its OODA loop would be shorter, and it would therefore arguably have a better chance of beating out your probes.
And if you send your ASI out to space too — well then, either way, one of them is going to win and colonize space, so that’s a guaranteed win condition for me. I find it unlikely that such an ASI will care about giving biological humans the spoils of an intergalactic war, but if it does, it’s not like I think that’s a bad thing. Like I said, I just find it unlikely that such a memeplex that emphasizes biological humans so much will end up winning — but if it does, hey good for them.
And if you’re able to align your ASI with your values, the technology presumably also exists for me to become the ASI (or for the ASI to become me, because again I consider anything that’s isomorphic to me to be me). Those of us who don’t care to wait until geoengineering fixes Mars’ atmosphere up to colonize Mars will either 1) already have colonized it eons before it’s ready for you to set foot there, or 2) be more efficient at colonizing Mars because we don’t care about expending resources on building human-compatible habitats. I just don’t see where we’ll be at a disadvantage relative to you; if anything, it appears to be the opposite to me, which is why I mentioned the Amish.
That’s like saying that if the thousand-year Reich had lived up to its name, won World War II and genocided mainland Europe for the next thousand years, then it would have been a more fit ideology than communism or capitalism and only Aryan Germans would exist in Europe. I mean, sure, if that happened, but that’s rather tautological now, isn’t it? If memeplexes like yours win and eradicate mine, then they will clearly have been the more evolutionarily fit memeplex. But since we can’t fast-forward time by a few million years, all we can do is speculate, and I’ve given my reasons for why I suspect my memeplex is more evolutionarily fit than yours. Feel free to give your own reasons, if you have some in between the snark.
You avoid committing to any serious successor-rejection choice except gut feeling, which means you do not have any preferences to speak of, and your «memeplex» cannot provide an advantage over a principled policy such as "replicate, kill non-kin replicators". And your theory of personal identity, when pressed, is not really dependent on function or content or any similarity measure, but instead amounts to the pragmatic "if I like it well enough, it is me". Thus the argument is moot. Go like someone else.
No, I mean you are sloppy and your idea of "eh, close enough" will over generations resolve into agents that consider inheriting one token of similarity (however defined) "close enough". This is not a memeplex at all, as literally any kind of agent can wield the durak-token, even my descendants.
This is a reasonable argument but it runs into another problem, namely that, demonstrably, only garbage people with no resources are interested in spamming the Universe with minimal replicators, so you will lose out on the ramp-up stage. Anyway, you're welcome to try.
Why would gut feeling be an invalid preference? Which humans have a successor-rejection function that’s written out explicitly? What’s yours?
Why does pragmatism make it moot? Again, if there’s an explicit measure of consciousness I can point to, or a way to rigorously map between minds on different substrates, I’d point at that and say “Passing over the threshold of 0.95 for the k-measure of consciousness” or “Exhibiting j-isomorphism.” Lacking that, how could I do any better under our current limited knowledge of consciousness?
Or if you insist, then fine, let’s assume we figure out enough about consciousness and minds eventually for there to be at least one reasonable explicit function for me to pick from. What then? You’d still presumably insist on privileging your biological human form, and for what reason? Surely not any reason that’s less arbitrary than mine.
Ignoring the fact that that specific policy does not currently appear to be winning in real life, I don’t see how “replicate, kill or suppress other replicators that pose a mortal threat regardless of kinhood” is any less principled.
Thanks for elaborating. I should’ve been more specific:
Basically, I grant that this is sloppy, but I claim that it is due to the amorphous and arbitrary nature of identity itself. Our group identities as humans shift all the time, and if an individual can turn himself into a group, I’m sure group dynamics would apply to that individual-group as well.
When did I say anything about spamming the universe with minimal replicators? The lean and mean probes thing was only a response to you threatening to do the same with an ASI. Ideally, robotic me would make a life for themselves in space. But if I were asked to pay heavy taxes in order to subsidize anachronistic humans insisting on living in space environments they did not evolve for? I’d vote against that. Maybe a small enclosure for biological humans for old time’s sake, but the majority of the space where I’m living should be reserved for those pragmatic enough to turn into native life forms.
But if it’s as another commenter suggested, and there’s plenty of space for everyone to do their own thing, I suppose we can both have our cake and eat it too, in which case the entire discussion around the evolutionary fitness of memeplexes is moot.