
E/acc and the political compass of AI war

As I've been arguing for some time, the culture war's most important front will be about AI. That's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», the irreverent Twitter meme movement opposing the EA-spearheaded attempts to regulate AI development. Turns out he's a pretty cool guy eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter, where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Alarmed by this extremist messaging, «the media» proceeds to… harness the power of an institution associated with the Department of Justice to deanonymize him, with the explicit aim of steering the cultural evolution around the topic:

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

That's not bad because Journalists, as observed by @TracingWoodgrains, are inherently Good:

(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)

(That's one creative approach to encouraging gender transition, I guess).

Now to be fair, this is almost certainly a parallel construction narrative – many people in SV knew Beff's real persona, and as of late he's been very loose with opsec, funding a party, selling merch and so on. Also, the forced reveal will probably help him a great deal – it's harder to dismiss the guy as some LARPing shitposter or a corporate shill pandering to VCs (or, as @Tomato said, running «an incredibly boring b2b productivity software startup») when you know he's, well, this. And this too.

The Forbes article itself doesn't go very hard on Beff, presenting him as a somewhat pretentious supply-side YIMBY, an ally of Marc Andreessen, Garry Tan and such; which is more true of Beff's followers than of the man himself. The more potentially damaging (to his ability to draw investment) parts are his casually invoking the spirit of Nick Land and his spooky brand of accelerationism (not unwarranted – «e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates», Beff says in his manifesto), and the citations of some professors of «communications» and «critical theory» who are just not very impressed with the whole technocapital thing. At the same time, it reminds the reader of EA's greatest moment (no, not the bed nets).

Online, Beff confirms being Verdon:

I started this account as a means to spread hope, optimism, and a will to build the future, and as an outlet to share my thoughts despite the secretive nature of my work… Around the same time as founding e/acc, I founded @Extropic_AI. A deep tech startup where we are building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics. Ideas simmering while inventing this paradigm of computing definitely influenced the initial e/acc writings. I very much look forward to sharing more about our vision for the technology we are building soon. In terms of my background, as you've now learned, my main identity is @GillVerd. I used to work on special projects at the intersection of physics and AI at Alphabet, X and Google. Before this, I was a theoretical physicist working on information theory and black hole physics. Currently working on our AI Manhattan project to bring fundamentally new computing to the world with an amazing team of physics and AI geniuses, including my former TensorFlow Quantum co-founder @trevormccrt1 as CTO. Grateful every day to get to build this technology I have been dreaming of for over 8 years now with an amazing team.

And Verdon confirms the belief in Beffian doctrine:

Civilization desperately needs novel cultural and computing paradigms for us to achieve grander scope & scale and a prosperous future. I strongly believe thermodynamic physics and AI hold many of the answers we seek. As such, 18 months ago, I set out to build such cultural and computational paradigms.

I am fairly pessimistic about Extropic for reasons that should be obvious enough to people who've been monitoring the situation with DL compute startups and bottlenecks, so it may be that Beff's cultural engineering will make a greater impact than Verdon's physical one. Ironic, for one so contemptuous of wordcels.


Maturation of e/acc from a meme into a real force, if it happens (and, as feared on the Alignment Forum in the wake of the OpenAI coup-countercoup debacle, it may), will be part of a larger trend, where the quasi-Masonic NGO networks of AI safetyists embed themselves in legacy institutions to procure the power of law and privileged platforms, while the broader organic culture and industry develop increasingly potent contrarian antibodies to their centralizing drive. Shortly before the doxx, two other clusters in the AI debate were announced.

The first one I'd mention is d/acc, courtesy of Vitalik Buterin; it's the closest thing to an acceptable compromise that I've seen. It doesn't have many adherents yet, but I expect it to become formidable because Vitalik is.

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc.

The "d" here can stand for many things; particularly, defensedecentralizationdemocracy and differential. First, think of it about defense, and then we can see how this ties into the other interpretations.

[…] The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results of my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… The size of each poll is small, but the polls make up for it in the uniformity of their result across a wide diversity of sources and options. In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.

[…] my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization of many parts of the stack: ensuring that there's no single codebase that controls more than half of the proof of stake network, trying to counteract the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is actually attempting to execute on the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent, this has worked: the Prysm client's dominance has dropped from above 70% to under 45%. But this is not some automatic market process: it's the result of human intention and coordinated action.

[…] if we want to extrapolate this idea of human-AI cooperation further, we get to more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further. A first natural step is brain-computer interfaces…

etc. I mostly agree with his points. By focusing on the denial of winner-takes-all dynamics, it becomes a natural big-tent proposal, and it's already having an effect on the similarly big-tent doomer coalition, pulling anxious transhumanists away from the less efficacious luddites and discredited AI deniers.

The second one is «AI optimism», represented chiefly by Nora Belrose from EleutherAI and Quintin Pope (whose essays contra Yud [1] and contra the appeal to evolution as an intuition pump [2] I've been citing and signal-boosting for next to a year now; he's pretty good on Twitter too). Belrose is in agreement with d/acc; and in principle, I think this one is not so much a faction or a movement as the endgame of the long arc of AI doomerism initiated by Eliezer Yudkowsky, the ultimate progenitor of this community – born of the crisis of faith in Yud's and Bostrom's first-principles conjectures, and in the entire «rationality», in light of empirical evidence. Many have tried to attack the AI doom doctrine from the outside (eg George Hotz), but only those willing to engage in the exegesis of Lesswrongian scriptures can sway educated doomers. Other actors in, or close to, this group:

Optimists claim:

The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it.

As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms.

So in terms of a political compass:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

(Not covered: Schmidhuber, Sutton & probably Carmack as radically «misaligned» AGI successor-species builders, Suleyman the statist, LeCun the Panglossian, Bengio & Hinton the naive socialists, Hassabis the vague, Legg the prophet, Tegmark the hysterical, Marcus the pooh-pooher and many others).

This compass will be more important than the default one as time goes on. Where are you on it?


As an aside: I recommend two open LLMs above all others. One is OpenHermes 2.5-7B, the other is DeepSeek-67B (33b-coder is OK too). Try them. It's not OpenAI, but it's getting closer and you don't need to depend on Altman's or Larry Summers' good graces to use them. With a laptop, you can have AI – at times approaching human level – anywhere. This is irreversible.
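For the curious, here's a minimal local-inference sketch – assuming llama-cpp-python and a quantized GGUF build of OpenHermes 2.5 that you've downloaded yourself; the filename below is illustrative, not canonical:

```python
# Minimal sketch: run a quantized OpenHermes 2.5 locally with llama-cpp-python.
# pip install llama-cpp-python; the GGUF path below is an assumed example.
from llama_cpp import Llama

llm = Llama(
    model_path="./openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # ~4 GB quant, laptop-friendly
    n_ctx=4096,    # context window
    n_threads=8,   # CPU threads; tune for your machine
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the e/acc vs d/acc debate in two sentences."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```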


As is the comic, which I think we could all pull apart in a matter of seconds. Off the top of my head, what if you remove the screen and show the human and dog that the woofing isn't a real dog? Since when did anyone in this long and complex discussion say that sound was the only mechanism people/canines could use to recognize authenticity? The whole thing you're talking about is how complex and multifactorial identity-recognition is. It's irrelevant at best.

The idea of "multifactorial identity-recognition" is irrelevant for the purpose of understanding the issue of consciousness-continuity of a subject. But really, after such arguments, what more can be said? Truly, you can also look at the dog! Oh wow, the argument against black box analysis is pulled apart!

Since when did self-made say he'd be happy for you to send someone very like him (perhaps so similar as to be a soulmate or a best friend) to the garbage compressor because he wasn't so identical as to be effectively the same person?

Morality is irrelevant for this counterfactual; it depends only on baseline self-preservation and on endorsing the notion that black-box analysis suffices.

As for substrate independence, we should be thinking about truth rather than beauty.

There is no meaningful difference between truth and beauty with regard to this question.

Poor taste is irredeemable, and you're one of the worst in this respect here, by the way.

How can it be impossible, in principle, to replace individual neurons one by one with a computerized equivalent such that normal function is preserved and the patient is conscious throughout the whole operation?

Following Moravec, I think this is possible, though I am not sure which aspects of neuronal computations and their implementations are relevant for this procedure (eg how to deal with LFPs). I reject the sloppy bugman idea that you can get from this to "just boot up a machine simulating access to a copy of my memories bro". Indeed, if you didn't have such poor taste, you'd have been able to see the difference, and also why you are making this argument and not the direct defense of computationalism.

Do you believe that there's some advanced quantum mechanics in our heads

Now that's what I call real disrespect lol. It's okay though.

Now that's what I call real disrespect lol. It's okay though.

I think that line was crossed when you claimed that Indians lack "taste", or at least I do.

Believe me, I like you and enjoy your commentary, or else my patience would have been exhausted a good while back.

As I've repeatedly invited you to demonstrate, give one good reason for why substrate independence can't work, especially if we can simulate neurons at the molecular level (they're not beyond the laws of physics, if beyond compute budgets), or at least in aggregation by modeling their firing patterns with ML "neurons", which can replicate their behavior, even if it takes somewhere around a thousand of those per biological one.
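(To make the «ML neurons» point concrete – a toy sketch, not any published result: train a small network to reproduce the input/output behavior of a stand-in neuron model. The leaky integrate-and-fire stand-in, the window size and all other parameters here are assumptions for illustration; the real fits use far richer biophysical models and far larger nets, hence the ~1000:1 figure.)

```python
# Toy sketch of "modeling firing patterns with ML neurons": fit a small
# network to mimic the input/output behavior of a stand-in neuron model.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def lif_neuron(inputs, tau=20.0, threshold=1.0):
    """Leaky integrate-and-fire: integrate input current, spike and reset."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * (1.0 - 1.0 / tau) + x   # leaky integration
        if v >= threshold:              # spike, then reset
            spikes.append(1.0)
            v = 0.0
        else:
            spikes.append(0.0)
    return np.array(spikes, dtype=np.float32)

# Synthetic data: random input current, target = the stand-in's spike train.
T, window = 50_000, 50
drive = rng.poisson(0.08, T).astype(np.float32) * 0.5
target = lif_neuron(drive)

# The MLP only sees the last `window` inputs -- an approximation, since the
# membrane potential carries state older than the window.
X = np.stack([drive[i - window:i] for i in range(window, T)])
y = target[window:T]

model = nn.Sequential(nn.Linear(window, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
Xt, yt = torch.from_numpy(X), torch.from_numpy(y).unsqueeze(1)

for step in range(500):
    idx = torch.randint(0, len(Xt), (1024,))
    loss = loss_fn(model(Xt[idx]), yt[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((torch.sigmoid(model(Xt)) > 0.5).float() == yt).float().mean()
print(f"spike-prediction accuracy: {acc.item():.3f}")
```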

Before we potentially get bogged down in terms of implementation or the sheer difficulty of the task, which I happily grant is colossal, why can't it work in principle?

Even positing Penrosian claims that the quantum dynamics of microtubules are somehow remotely relevant in modeling such a stochastic environment, those can be simulated too.

What exactly isn't being preserved in such a transition, that remains conserved when almost all of the individual atoms in your body have and will be endlessly cycled and replaced by equivalent counterparts through the course of your life?

If you concede that point, then I have little to no interest in arguing whether you should value such an entity forked from yourself. Feel free not to, or do things as dumb as Hansonian Ems, I suppose. I'm content in knowing that, if nothing else, such copies will heavily weight the preferences and wellbeing of SMH Mk1, regardless of everything. My standards, while eminently sensible, are my own.

Your sense of aesthetics counts for about zilch when you do a terrible job of presenting a compelling case for it.

As @RandomRanger can see, you've been high on aesthetics, nothing else. I'm not mad, I'm just disappointed; I expected better from you.

It really is fine, I would never be able to care about such offenses (barring brain damage), or, hopefully, even intentional offenses from people like you or Ranger. I just dislike the quantum microtubules thing – it's tasteless too, after all; just adding a layer of pseudo-empirical woo to postpone responding to a relatively compact philosophical challenge.

give one good reason for why substrate independence can't work, especially if we can simulate neurons at the molecular level

I do not have to give you any reasons because your position, in its decisive dimensions, has zero empirical content; it is just metaphysics – that of a tool who has first-person experience but is cognitively conditioned to process himself through the master's pragmatic point of view. Well, that and geeky masturbation about (irrelevant, surmountable) difficulties of computing this or that. My metaphysics is the opposite: I start by asking for a reason to believe that computational equivalence even matters, because this is about me, not about some external function. I exist for myself. Do you exist for yourself? What does it mean to exist for oneself? Can you even conceive, in a purely hypothetical way, of the possibility of a distinction between you existing for yourself, and "something that simulates you" existing for me, but not for itself? Not a strict p-zombie, perhaps, but something whose internal experience is different from the experience it computes, in a way that does not remotely hold for your current implementation? In my experience there is a qualitative and insurmountable difference between people who can and cannot, so I'd rather not invest in debating you, and just have fun the way I feel like.

You started with outsider-oriented rubrics to test similarity between two black-box behavioral generators compared to a years-old exemplar (in x years every molecule changes etc. etc., as you say, and I just call bullshit on the idea that you're more like yourself in 70 years than like another similar guy of your age, but anyway it's irrelevant); then retreated to increasingly fine-grained circuit equivalence in white boxes; now you talk about molecular simulation, which will necessarily, overwhelmingly capture neurocomputationally redundant content. This is commendable: you at least have some remnants of a normal-person intuition that your consciousness literally is your brain and not some equivalent of it with regard to some interface or observer. But you cannot come to grips with this intuition or wonder if it corresponds to something coherent.

Some can. In the words of Christof Koch, whose book The Feeling of Life Itself: Why Consciousness Is Widespread But Can’t Be Computed I've mentioned a few times: «The maximally irreducible cause-effect power of real physical computers is tiny and independent of the software running on the computer… Two systems can be functionally equivalent, they can compute the same input–output function, but they don’t share the same intrinsic cause-effect form. A computer of figure 13.3 doesn’t exist intrinsically, while the circuit that is being simulated does. That is, they both do the same thing, but only one is for itself. […] Consciousness is not a clever algorithm. Its beating heart is causal power upon itself, not computation. And here’s the rub: causal power, the ability to influence oneself or others, cannot be simulated. Not now, nor in the future. It has to be built into the physics of the system… This is true even if the simulation would satisfy the most stringent demands of a microfunctionalist. Fast forward a few decades into the future when biophysically and anatomically accurate whole-human-brain emulation technology—of the sort discussed in the previous chapter—can run in real time on computers.13 Such a simulation will mimic the synaptic and neuronal events that occur when somebody sees a face or hears a voice. Its simulated behavior (for instance, for the sort of experiments outlined in fig. 2.1) will be indistinguishable from those of a human. But as long as the computer simulating this brain resembles in its architecture the von Neumann machine outlined in figure 13.3, it won’t see an image; it won’t hear a voice inside its circuitry; it won’t experience anything. It is nothing but clever programming. Fake consciousness—pretending by imitating people at the biophysical level».

A pragmatist says: what should we care about that! My causal power is that which… something something inputs-outputs; so long as the function describing this transformation is the same, surely it is preserved! A pragmatist more invested in the conversation would add: why, I've cracked open the book, and it seems this all depends on some weird axioms in chapters 7 and 8, kooky stuff like «consciousness exists intrinsically, for itself, without an observer» – so why accept them and not a much more convenient (or rather, observer-oriented) approach? Also, why not Dust Theory?

Koch's specific technical justifications have to do with IIT, which is substantially flawed. In time, a better theory will be developed. But I don't think one needs a theory to just see how confused the metaphysics of a Tool – of someone who cannot throw out the entire baggage of External Observers – is. One only needs taste. I do not hope nor intend to rectify your taste; you're free to revel in it, just as I am free to think it repulsive.

Can you even conceive, in a purely hypothetical way, of the possibility of a distinction between you existing for yourself, and "something that simulates you" existing for me, but not for itself?

If there is an entity that can simulate me with near perfect accuracy, I see no route through which it can achieve that without having an analogue of me inside it. Even if it's, say, an AGI that's attempting to mimic my behavior in an attempt to fool me into doing something I'd rather not, somewhere inside that tangled mass of computation is something faithfully recreating my cognitive structure.

That's an unavoidable consequence of consciousness being computational; there's no cheat code or hack to get it without running the numbers. I would obviously discount my approval of such entities that only have me as a tiny, not particularly authoritative part, in much the same manner I would prefer not to be in prison mumbling lines at gunpoint. Even something along the lines of a LUT, like a Chinese Room, necessitates someone, at some point, performing the computation and then saving it for lookup. You can play card tricks with it, shuffle it around, but you can't get rid of it.

Can I conceive of such an entity? Sure, humans can imagine plenty of things that don't exist or never can exist, such as an integer hiding between 7 and 8 in base 10. It's when you notice the obvious incongruities in that kind of degenerate assumption that you are forced to reconsider, tantamount to a proof by contradiction.

I don't think it's remotely likely or plausible, holding out only a tiny dollop of epistemic humility when neuroscience hasn't been solved.

Keep in mind I'm not quite sure what you mean by that quote and I'm responding to my best attempt to interpret it.

Not a strict p-zombie, perhaps, but something whose internal experience is different from the experience it computes, in a way that does not remotely hold for your current implementation?

I think p-zombies are incoherent, and that it's likely impossible to pull off what you propose. That's != inconceivable; I both conceive of it and dismiss it. If you're having trouble differentiating a potential p-zombie from the real deal, well, that sucks: you need to crack it open and see how it works, and once again, my proposed system of tests is intended to be so fine-grained that there's no way to trick it that's not simply running a high-fidelity simulation in one form or another. In other words, using the AGI analogy, the greater mind might have qualia entirely different from a subunit that's emulating me. I'm still being emulated somewhere in there. What of it? I'm in a system that includes the room I'm lying in. I'm sure if you want to torture ideas of qualia by asking what that might look like for a system that included the other people in the house, you can see it's the same thing.

I do not have to give you any reasons because your position, in its decisive dimensions, has zero empirical content, it is just metaphysics

Well, at least we both acknowledge that. If you think my position isn't violating any law of computation, information theory or physics, then it may well boil down to Fundamental Values Differences over its ramifications.

Two systems can be functionally equivalent, they can compute the same input–output function, but they don’t share the same intrinsic cause-effect form.

When an abacus and a graphing calculator add two numbers together, they, if operating correctly, provide the same output for the same input. They're both implementations of Turing Machines after all.

One can well prefer one particular implementation for practical considerations. As possible as it is to run GPT-4 on a system of a billion diligent monks moving abacus beads, it's a bad idea in practice.

Since I am more concerned with establishing an isomorphic relationship – in the same manner a TI-82 and an abacus are fundamentally isomorphic when it comes to doing arithmetic – a digital upload of me has many robust advantages over my biological form, even after enormous amounts of augmentation. Biology is great within its particular constraints, not when energy budgets can become notable on the Kardashev scale.

That is, they both do the same thing, but only one is for itself. […] Consciousness is not a clever algorithm. Its beating heart is causal power upon itself, not computation. And here’s the rub: causal power, the ability to influence oneself or others, cannot be simulated. Not now, nor in the future. It has to be built into the physics of the system

This is far from obvious to me; I can't parse it as anything but a non sequitur.

Why can't you simulate "causal power"? Just hook it up to an output that operates in the Real World™.

An entity locked into a digital simulation has less "causal power" than one that isn't, certainly, but why would it end up in such a state?

Consciousness seems to me like it can clearly be built into the physics of transistors on a chip when it can run on electrochemical spikes in wet tissue. How is that any less physical?

It can be nested arbitrarily deep, which isn't feasible in biology as we know it, but there's nothing stopping you from running it bare-metal with the same degree of differentiation from external reality as consciousness in vivo.

But as long as the computer simulating this brain resembles in its architecture the von Neumann machine outlined in figure 13.3, it won’t see an image; it won’t hear a voice inside its circuitry; it won’t experience anything. It is nothing but clever programming. Fake consciousness—pretending by imitating people at the biophysical level

The laws of physics can be simulated with arbitrary precision on a computer with a Von Neumann architecture.

The human brain is physical.

Ergo I see no reason that this distinction matters.

As for Dust Theory, it's been a while since I read half of Permutation City. But I fail to see how it changes anything: my subjective consciousness wouldn't notice if it was being run on abacuses, meat or a supercomputer, or asynchronously. It doesn't track objective time. Besides, I sleep and don't lose sleep over that necessity; the strict linear passage of time is of no consequence to me, as long as it doesn't impede my ability to instantiate my goals and desires.

Relevant XKCD

https://xkcd.com/505/

You started with outsider-oriented rubrics to test similarity between two black-box behavioral generators compared to a years-old exemplar (in x years every molecule changes etc. etc., as you say, and I just call bullshit on the idea that you're more like yourself in 70 years than like another similar guy of your age, but anyway it's irrelevant); then retreated to increasingly fine-grained circuit equivalence in white boxes

There is no "retreat" involved. A test that has meaningful results for external observers has plenty of utility in itself. What if I'm dead and being revived from a plastinated brain? If they have a record of what an objective battery showed prior to brain death, they can have robust confidence that the data transmission wasn't too lossy.

I am merely being exhaustive in covering all my bases, be it finetuning an LLM on me, different fidelities of brain scans, or more esoteric technology yet.

The black-box approach is a fallback for when a white box isn't available, or the results aren't interpretable – in the same way you could train an LLM on me and neither of us could perform mechanistic interpretability on it, being forced to rely on outputs.

Hmm… @faul_sname, you didn't get an invite to the Transhumanist Rumble; it got lost in the mental mail. Come get your comments in while the thread isn't entirely mouldy! Certainly I expect you to be better informed than me, and even potentially Dase.

As for Dust Theory, it's been a while since I read half of Permutation City. But I fail to see how it changes anything: my subjective consciousness wouldn't notice if it was being run on abacuses, meat or a supercomputer, or asynchronously. It doesn't track objective time. Besides, I sleep and don't lose sleep over that necessity; the strict linear passage of time is of no consequence to me, as long as it doesn't impede my ability to instantiate my goals and desires.

I've written a bunch, and deleted it (your response to the issue of causal power was decisive). The long and short of it is that, being who you are, you cannot see the problem with Dust Theory, and therefore you do not need mind uploading – in the Platonic space of all possibilities, there must exist a Turing machine which will interpret, with respect to some hypothetical decoding software at least, the bits of your rotting and scattering corpse as a computation of a happy ascended SMH in a Kardashev IV utopia. That this machine is not physically assembled seems to be no obstacle to your value system and metaphysics, which deny that physical systems matter at all; all that matters, according to you, is the ultimate constructibility of a computation. From the Dust Theory perspective, all conceivable agents have infinite opportunity to 'instantiate their goals and desires'. Seeing that, I would ask and indeed try to prevent you from wasting the valuable (for me, a finite physical being) negentropy budget on frivolous and wholly unnecessary locally computed and human-specified simulations which only add an infinitesimal fraction of your preferred computations to the mix.

I do not see why the existence of potential entities that "emulate" me in such a theoretical fashion precludes me from caring about the more prosaic/physical instantiations. My values do not particularly care that, for another example, I could potentially be reincarnated in a Boltzmann Brain, which, if I understand the physics involved, is nigh inevitable in an eternal universe.

Sure, that's nice to have. I also prefer more concrete representations of myself running around in the universe I am confident exists. Every decision theory I am aware of breaks down (or at least becomes indifferent between outcomes) when confronted with such infinities, but I am tentatively willing to chalk that up as a flaw in said decision theories rather than a reason to descend into apathetic nihilism.

As a matter of fact, whatever goes on in "Platonic space" matters just about nothing as far as I'm concerned. That's true for every computable entity or worldstate, hence it adds no reason to privilege or disprivilege my bumblings in more physical environs.

I do not see why the existence of potential entities that "emulate" me in such a theoretical fashion precludes me from caring about the more prosaic/physical instantiations.

That's because you fail to seriously ask yourself what the word "computation" means (and likewise for other relevant words). A given computation's outputs are interpreted one way or another with regard to a decoder, but your approach makes the decoder and in fact the decoding irrelevant: you claim, very confidently, that so long as some entity, no matter how inanely arranged, how fragmented in space and time, "computes you" (as in, is made up of physical elements producing events which can be mapped to bit sequences which, together with other parts of this entity and according to some rules, can be interpreted as isomorphic with regard to your brain's processes by some software), it causes you to exist and have consciousness – if in some subordinate fashion. Of course it is indefensible and ad hoc to say that it does not compute you just because we do not have a decoder ready at hand to make sense of and impose structure on its "output bits". It is insane to marry your beliefs to a requirement for some localized, interpretable, immediately causal decoding – that's just watered-down Integrated Information Theory, and you do not even deign to acquaint yourself with it, so silly it seems to you!

And well, since (for the purpose of your untenable computational metaphysics) entities and their borders can be defined arbitrarily, everything computes you all the time by this criterion! We do not need a Boltzmann brain or any other pop-sci reference, and indeed it has all been computed already. You, as well as every other possible mind, positively (not hypothetically, not in the limit of the infinite physics – your smug insistence on substrate independence ensures it) have always been existing in all possible states. As such, you do not get to ask for epsilon more.

Either concede that you have never thought about this seriously, or concede that you do not have a legitimate claim to any amount of control over the first-order physical substrate of the Universe since it is not meaningfully privileged for a strict computationalist. Or, really, we can just stop here. At least I will.

Once again, I do not care to enlighten you, you've been given enough to work with, only hubris and shit taste stops you from reading Koch or grown-up philosophy.

I invite you to show me any reason why "Platonic Space" exists as anything but a fun hypothetical. That is not the same as physical existence, which I happen to value too. A kilo of head cheese or a Matrioshka Brain are real in a way that hypotheticals are not, if you wish to club them together, find a new word for that superset.

If such a thing has some kind of timeless and transcendental existence, that's priced in, and doesn't inform my further desire for physical manifestation in this universe. It becomes utterly irrelevant for future planning, unless you, with your insistence on "causal" power, wish to claim that it exerts causal influence on the real universe.

Of course it is indefensible and ad hoc to say that it does not compute you just because we do not have a decoder ready at hand to make sense of and impose structure on its "output bits".

I do not claim that any arbitrary structure "does not compute me"; my entire response has been based on trying to assess whether or not we can even tell if that's the case. Whatever ridiculous definition of "everything" you wish to endorse, believe me that if someone handed me a frog I wouldn't claim it computes me. The boundaries are flexible, but they're not infinitely arbitrary.

your smug insistence on substrate independence ensures it

Smug? Hardly. I merely see no reason not to assume it as the default, and you have abjectly failed to sway me from that position. If there were evidence against it, even my merely theoretical aspirations towards Bayesianism would suffice to make me update.

Or, really, we can just stop here. At least I will.

Once again, I do not care to enlighten you, you've been given enough to work with, only hubris and shit taste stops you from reading Koch or grown-up philosophy.

Someone shit in your cereal today? Your unnecessary rudeness is unbecoming of a good-faith discussion with someone who is significantly like-minded in most regards. I will update my assessment of you accordingly, and I'm sure you can do the same.

It's a hilarious thing to get mad about, though. Ukraine war? Wokeness? AGI? No – philosophy of personal identity and computation.

Even if it's important, it's something that 99% of even the smartest people are horrendously confused about, whatever your perspective is, so it shouldn't be shocking that a random forum guy is.


I'm definitely not more informed than Dase here. Anyway I specified in my other comment an example of a quite simple computational system that almost certainly contains a faithful representation of you.

you didn't get an invite to the Transhumanist Rumble

Sounds fun but unfortunately my weak biological substrate requires regular periods of inactivity to maintain optimum performance. And one of those periods is scheduled for now.

Undoubtedly, I'm sure there's a copy of me, with as much metadata as you please, lurking in pi itself.

You're not going to get such a degenerate system to replicate me before black holes start evaporating, so I'll stick to my rough and ready operational definition for now, heh. After all, my proposal certainly implies active computation being performed; a backup in cold storage, while not worthless, isn't what it's meant to be handling – not until you can spin it up, at least!

you're one of the worst in this respect here, by the way

Grow up. Your argument is based entirely on feels, on smelling an aura of 'bugman' on uploading, not on any factual basis. This is just an esoteric version of those right-wing Twitter anons who smell 'bugman' on AI and denounce it as mere linear algebra, pattern-matching autocorrect with media buzz. They get a bad vibe and then look for reasons to disdain it, thinking emotionally rather than logically.

'Bad taste' is a cope. Reality doesn't have to be realistic, let alone tasteful or aesthetically pleasing (especially not to you in particular). The arrogance needed to say that your personal opinions on aesthetics are equivalent to physical/technical truth is unfathomable to me. If you were some peerless world genius in the field of AI, neuroscience and HBI then maybe you could get away with this. But you're not.

If you can gradually emulate a conscious being, you can also copy-paste it. There's nothing sophisticated about this concept.

Call a bugman a bugman and see how he recoils etc.

As I've said already, "sophistication" is not what is needed to see your failures here. Specifically, the distinction between copy-pasting and transposition. Indeed, this is very trivial, children get it, until they are gaslit with sloppy computationalist analogies.

Specifically, the distinction between copy-pasting and transposition.

There is a distinction and it's totally irrelevant to what I'm saying. Use reading comprehension.

Poor taste is irredeemable, and you're one of the worst in this respect here, by the way.

Oh Christ. I like you man, but sometimes I worry what will happen if you're the first to ascend into the Singularity. My tastes aren't that sophisticated either!

You're fine in my book. And 'sophistication' has very little to do with what I take to be their failures in taste.

That said, sadly you wouldn't have had much to worry about in any case; and I think the people most likely to ascend first have next to no taste.

I doubt that the conditions would align for any one single entity to solely ascend and create a unipolar post-Singularity world, but I also haven’t read that deeply into such debates. Have there been previous conversations around this on TheMotte?