
E/acc and the political compass of AI war

As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing the attempts at regulating AI development that are spearheaded by EA. Turns out he's a pretty cool guy, eh.

Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?

…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter, where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

Alarmed by this extremist messaging, «the media» proceeds to… harness the power of an institution associated with the Department of Justice to deanonymize him, with the explicit aim to steer the cultural evolution around the topic:

Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

That's not bad because Journalists, as observed by @TracingWoodgrains, are inherently Good:

(Revealing the name behind an anonymous account of public note is not “doxxing,” which is an often-gendered form of online harassment that reveals private information — like an address or phone number — about a person without consent and with malicious intent.)

(That's one creative approach to encouraging gender transition, I guess).

Now to be fair, this is almost certainly a parallel-construction narrative – many people in SV knew Beff's real identity, and of late he's been very loose with opsec, funding a party, selling merch and so on. Also, the forced reveal will probably help him a great deal – it's harder to dismiss the guy as some LARPing shitposter or a corporate shill pandering to VCs (or, as @Tomato said, running «an incredibly boring b2b productivity software startup») when you know he's, well, this. And this too.

The Forbes article itself doesn't go very hard on Beff, presenting him as a somewhat pretentious supply-side YIMBY, an ally of Marc Andreessen, Garry Tan and such; which is more true of Beff's followers than of the man himself. The more potentially damaging parts (damaging to his ability to draw investment, that is) are the casual invocation of the spirit of Nick Land and his spooky brand of accelerationism (not unwarranted – «e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates», Beff says in his manifesto), and the citing of some professors of «communications» and «critical theory» who are just not very impressed with the whole technocapital thing. At the same time, it reminds the reader of EA's greatest moment (no, not the bed nets).

Online, Beff confirms being Verdon:

I started this account as a means to spread hope, optimism, and a will to build the future, and as an outlet to share my thoughts despite the secretive nature of my work… Around the same time as founding e/acc, I founded @Extropic_AI. A deep tech startup where we are building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics. Ideas simmering while inventing this paradigm of computing definitely influenced the initial e/acc writings. I very much look forward to sharing more about our vision for the technology we are building soon. In terms of my background, as you've now learned, my main identity is @GillVerd. I used to work on special projects at the intersection of physics and AI at Alphabet, X and Google. Before this, I was a theoretical physicist working on information theory and black hole physics. Currently working on our AI Manhattan project to bring fundamentally new computing to the world with an amazing team of physics and AI geniuses, including my former TensorFlow Quantum co-founder @trevormccrt1 as CTO. Grateful every day to get to build this technology I have been dreaming of for over 8 years now with an amazing team.

And Verdon confirms the belief in Beffian doctrine:

Civilization desperately needs novel cultural and computing paradigms for us to achieve grander scope & scale and a prosperous future. I strongly believe thermodynamic physics and AI hold many of the answers we seek. As such, 18 months ago, I set out to build such cultural and computational paradigms.

I am fairly pessimistic about Extropic for reasons that should be obvious enough to people who've been monitoring the situation with DL compute startups and bottlenecks, so it may be that Beff's cultural engineering will make a greater impact than Verdon's physical one. Ironic, for one so contemptuous of wordcels.


The maturation of e/acc from a meme into a real force, if it happens (and, as feared on the Alignment Forum in the wake of the OpenAI coup-countercoup debacle, it might), will be part of a larger trend, where the quasi-Masonic NGO networks of AI safetyists embed themselves in legacy institutions to procure the power of law and privileged platforms, while the broader organic culture and industry develop increasingly potent contrarian antibodies to their centralizing drive. Shortly before the doxx, two other clusters in the AI debate were announced.

The first one I'd mention is d/acc, courtesy of Vitalik Buterin; it's the closest thing to an acceptable compromise that I've seen. It does not have many adherents yet, but I expect it to become formidable, because Vitalik is.

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc.

The "d" here can stand for many things; particularly, defensedecentralizationdemocracy and differential. First, think of it about defense, and then we can see how this ties into the other interpretations.

[…] The default path forward suggested by many of those who worry about AI essentially leads to a minimal AI world government. Near-term versions of this include a proposal for a "multinational AGI consortium" ("MAGIC"). Such a consortium, if it gets established and succeeds at its goals of creating superintelligent AI, would have a natural path to becoming a de-facto minimal world government. Longer-term, there are ideas like the "pivotal act" theory: we create an AI that performs a single one-time act which rearranges the world into a game where from that point forward humans are still in charge, but where the game board is somehow more defense-favoring and more fit for human flourishing.

The main practical issue that I see with this so far is that people don't seem to actually trust any specific governance mechanism with the power to build such a thing. This fact becomes stark when you look at the results of my recent Twitter polls, asking if people would prefer to see AI monopolized by a single entity with a decade head-start, or AI delayed by a decade for everyone… The size of each poll is small, but the polls make up for it in the uniformity of their result across a wide diversity of sources and options. In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it's a corporation, government or multinational body. In seven out of nine cases, delay won by at least two to one. This seems like an important fact to understand for anyone pursuing AI regulation.

[…] my experience trying to ensure "polytheism" within the Ethereum ecosystem does make me worry that this is an inherently unstable equilibrium. In Ethereum, we have intentionally tried to ensure decentralization of many parts of the stack: ensuring that there's no single codebase that controls more than half of the proof of stake network, trying to counteract the dominance of large staking pools, improving geographic decentralization, and so on. Essentially, Ethereum is actually attempting to execute on the old libertarian dream of a market-based society that uses social pressure, rather than government, as the antitrust regulator. To some extent, this has worked: the Prysm client's dominance has dropped from above 70% to under 45%. But this is not some automatic market process: it's the result of human intention and coordinated action.

[…] if we want to extrapolate this idea of human-AI cooperation further, we get to more radical conclusions. Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further. A first natural step is brain-computer interfaces…

etc. I mostly agree with his points. By focusing on the denial of winner-takes-all dynamics, it becomes a natural big-tent proposal, and it's already having an effect on the similarly big-tent doomer coalition, pulling anxious transhumanists away from the less efficacious luddites and discredited AI deniers.

The second one is «AI optimism», represented chiefly by Nora Belrose from Eleuther and Quintin Pope (whose essays contra Yud 1 and contra the appeal to evolution as an intuition pump 2 I've been citing and signal-boosting for nearly a year now; he's pretty good on Twitter too). Belrose is in agreement with d/acc; and in principle, I think this one is not so much a faction or a movement as the endgame to the long arc of AI doomerism initiated by Eliezer Yudkowsky, the ultimate progenitor of this community – born of a crisis of faith in Yud's and Bostrom's first-principles conjectures, and in «rationality» itself, in light of empirical evidence. Many have tried to attack the AI doom doctrine from the outside (eg George Hotz), but only those willing to engage in the exegesis of Lesswrongian scriptures can sway educated doomers. Other actors in, or close to, this group:

Optimists claim:

The last decade has shown that AI is much easier to control than many had feared. Today’s brain-inspired neural networks inherit human common sense, and their behavior can be molded to our preferences with simple, powerful algorithms. It’s no longer a question of how to control AI at all, but rather who will control it.

As optimists, we believe that AI is a tool for human empowerment, and that most people are fundamentally good. We strive for a future in which AI is distributed broadly and equitably, where each person is empowered by AIs working for them, under their own control. To this end, we support the open-source AI community, and we oppose attempts to centralize AI research in the hands of a small number of corporations in the name of “safety.” Centralization is likely to increase economic inequality and harm civil liberties, while doing little to prevent determined wrongdoers. By developing AI in the open, we’ll be able to better understand the ways in which AI can be misused and develop effective defense mechanisms.

So in terms of a political compass:

  • AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
  • plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
  • vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
  • and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)

(Not covered: Schmidhuber, Sutton & probably Carmack as radically «misaligned» AGI successor-species builders, Suleyman the statist, LeCun the Panglossian, Bengio & Hinton the naive socialists, Hassabis the vague, Legg the prophet, Tegmark the hysterical, Marcus the pooh-pooher and many others).

This compass will be more important than the default one as time goes on. Where are you on it?


As an aside: I recommend two open LLMs above all others. One is OpenHermes 2.5-7B, the other is DeepSeek-67B (33b-coder is OK too). Try them. They're not OpenAI, but they're getting closer, and you don't need to depend on Altman's or Larry Summers' good graces to use them. With a laptop, you can have AI – at times approaching human level – anywhere. This is irreversible.
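If you want to actually try this locally, a minimal sketch with llama-cpp-python looks roughly like the following – assuming you've grabbed a GGUF quantization of OpenHermes 2.5 Mistral 7B from Hugging Face; the file name, thread count and generation settings below are illustrative placeholders, not canonical:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes a downloaded GGUF quantization of OpenHermes 2.5 Mistral 7B;
# the path is a placeholder – point it at whatever quant you actually grabbed.
from llama_cpp import Llama

llm = Llama(
    model_path="./openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,     # context window
    n_threads=8,    # CPU threads – tune for your laptop
)

# OpenHermes 2.5 is trained on ChatML-style prompts.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize the e/acc vs. EA dispute in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"].strip())
```

The same pattern works for DeepSeek-67B if your machine has the memory for a quantized 67B model; otherwise the 33b-coder variant is the lighter option.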


Why does it matter which one is the “original”? If it’s isomorphic to me, then it is me for all practical purposes, as far as I’m concerned.

Keeping track of the “original” me is about as inane as keeping track of an “original” painting. Of course, some people still care. If you wish to care, then you do you by all means.

Why does it matter that it's isomorphic to you? There are 7 billion people as unique as you are. Of those, I would expect a non-zero number of them to have experiences and dispositions close enough to yours as to be negligible. If you don't value your continuity or your natal body, or genes, then I don't see what there is left for you other than some ephemeral idea of "thinking exactly the same" (which is over 0.01 seconds after you're copied and the copy diverges).

Of those, I would expect a non-zero number of them to have experiences and dispositions close enough to yours as to be negligible.

I really do not see how that applies.

The number of people on planet Earth who are close enough to me, in terms of memories/experience/personality/goals such that I consider them isomorphic to myself is precisely zero.

The absolute closest that could potentially exist, given current technology, is a monozygotic twin or a clone, and I'm not aware of having either.

I would assume @mdurak would agree here.

Where we might potentially diverge:

My representation of "me" is robust to perturbations like going to bed and waking up tomorrow, or replacing 1% of the mass in my body via turnover when I drink a bottle of water, have lunch then take a shit.

It isn't robust to a large amount of traumatic brain damage, dementia or the like.

then I don't see what there is left for you other than some ephemeral idea of "thinking exactly the same" (which is over 0.01 seconds after you're copied and the copy diverges).

Define "exactly".

Human cognition is stochastic, leaving aside issues of determinism at the level of quantum mechanics.

Your best attempt at exposing me to the same inputs and expecting the same outputs, separated by any amount of time, will inevitably have minor differences. Take a biological human, do something to erase their episodic memory and have them do a complex task, such as write an essay. Have them repeat it, with their memories of the original removed, and you are exceedingly unlikely to get the exact same text back.

But if such an entity existed, such that when "me" and it were blackboxed and then subjected to a very thorough assessment they couldn't be distinguished by an external observer, or even by me looking solely at the outputs, tested separately (ideally controlling the environment as strongly as possible, hell, even scrubbing my memories of the last episode), then that's a copy of me, and I accord it the same rights and entitlements as the one typing this.

I don't think a text interface suffices: as @2rafa once suggested, it might be possible to finetune an LLM on all the text I or anyone else has ever written, such that someone interacting with us only via text could be fooled indefinitely.

I don't expect that to capture my cognition in fine enough detail to work; I'm not just an entity that produces text, after all.

So an ideal test for determining that some other potential entity is a copy of myself (especially one on a different substrate, like a supercomputer) would also check for plenty of other things.

Does that copy, if instantiated into a highly realistic physical simulation, behave indistinguishably from multiple passes of the biological me?

Does it have strong and measurable correlates to my own neural circuitry? Ideally an emulation of the specific ones in my brain?

If it can pass all the tests with the same inter-test variability as I currently do, then I will be satisfied with calling it another copy of myself, with equal rights to the SMH name and even potentially a fair division of assets belonging to me.

The most robust way of creating something like this would be scanning and uploading a brain. Not an easy task, far from it. There might well be cheaper/easier and "good enough" alternatives, such that SMH_virtual has about as much variability from the current walking-talking one as I do from biological SMH(2022), 2019 or likely 2024. I have no qualms about calling all of them me, hence none about calling that upload the same.

My representation of "me" is robust to perturbations like going to bed and waking up tomorrow, or replacing 1% of the mass in my body via turnover when I drink a bottle of water, have lunch then take a shit.

It isn't robust to a large amount of traumatic brain damage, dementia or the like.

This is not responsive to the argument. Your memorized experiences are fungible. Your differences from another Smart Indian Guy who's maximally close to you in embedding space are overwhelmingly mere contingent token differences, not type differences. Like, you love your mom and he loves his mom (very different!), you write sci-fi fanfics and he writes speculative fiction, you're on The Motte and he's on DSL, you are a GP and he is a cardiologist, you're into boobs and he's into armpits, you prefer 23°C and he sets it to 22.5… sure we can pile on dimensions to the point you become, well, a single extremely unique point, a snowflake indeed, but what of it? This is not how your consciousness works! This is not why you are infallibly you and he is indisputably him, this is merely why I can quickly tell apart those two instances of a Smart Indian! You are performing more or less identical calculations, on very similar hardware, to a near-identical result, and if you one day woke up, Zhuangzi style, to be him, your own life story a mere what-if distribution shift quickly fading into the morning air – I bet you would have only felt a tiny pinprick of nostalgia before going on with his business, not some profound identity crisis.

Moreover, if you get brain damage or dementia, your hardware and computational divergences will skyrocket, but you will insist on being a continuous (if diminished) person, and me and him will agree! It is pathetic and shallow as fuck to cling to a perceptual hash of a token sequence and say "this is me, see, day-to-day perturbations are OOMs lower than the distance to the next closest sample" – it's confusion of the highest order! Seriously, think this through.

(I am, incidentally, immune to this issue because I do not believe in computationalism or substrate independence. My self is literally the causal physical process in my brain, not the irrelevant hypothetical program which could define the computation of the process with the same shape with regard to its outputs hitting some reductive interface like an observer performing a classification task. This process can be modified near-arbitrarily and remain "me"; or it can be copied precisely, yet the copy would not be me but instead another, equal instance. I am not confused about first and third perspective, and the fact that physics teaches us frames of reference are irrelevant is trivial to me: they are irrelevant for an independent observer; yet the whole of my notion of identity is about the instantiation of the observer's egocentric frame of reference. I have made peace with the fact that most people can be gaslit into seeing themselves through the interlocutor's eyes. This comports with the repulsive fact that most people have been bred to serve an aristocratic class and accept perspectives imposed on them, and strongly suggests to me that most people probably really are means, not ends unto themselves. For deontological reasons, I will reject this conclusion until the time I have an opportunity to get much smarter and reexamine the topic or perhaps design some fix for this pervasive mental defect).

No, Dase, simply finding another Indian nerd with such similar personality traits is far from sufficient for me to consider him isomorphic to myself. I do not care that he loves his mother, I love mine. Certainly from your perspective you might well be indifferent between us, but I am merely me.

Like, is he closer to me than almost everyone else? Certainly. Just as all humans are practically negligibly different from one another in the space of All Possible Minds. That doesn't make them me.

Is that sufficient for him to be considered me? Not at all.

Leaving aside that what entities one identifies with is inherently subjective, I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

This accounts for even mind uploads, hence the blackboxing; I don't particularly privilege my biological form, though you could do a DNA test and an MRI if you really prefer to.

It might well count an emulation of me within a wider system, say a Superintelligence, as me, but that's a feature and not a bug. That component, if it can be isolated, counts as me, leaving aside more practical concerns like what degree of power it has in the ensemble.

Moreover, if you get brain damage or dementia, your hardware and computational divergences will skyrocket, but you will insist on being a continuous (if diminished) person, and me and him will agree!

I am tolerant of minor performance fluctuations, but a sufficient amount of brain damage or dementia? Then I consider myself gone, in most of the aspects I care about, even if the system is physically and temporally contiguous.

The primary reason I might still value further existence is-

  1. Hopes that the damage can be mitigated with future advances, if not losslessly.

  2. Until the damage gets really bad, that poor soul is still closer to me than anyone else.

But if it gets bad enough, I assure you I consider the core construct to be dead.

I am, incidentally, immune to this issue because I do not believe in computationalism or substrate independence

I have always found this a peculiar view, and certainly I haven't seen any particular reason to assume a difference in internal qualia because of a difference in substrate, as long as the algorithms deriving it are interchangeable in terms of inputs and outputs.

Is it possible? I can't rule it out. But the bulk of my probability mass is against it.

If you have a convincing argument otherwise, I'm curious to hear it.

I will reject this conclusion until the time I have an opportunity to get much smarter and reexamine the topic or perhaps design some fix for this pervasive mental defect

My current approach to modeling myself has enough practical ramifications that I will accept it on an operational basis. Certainly I would love to re-examine it in more detail when it becomes more relevant, such as if I'm contemplating a mind upload and am either smarter myself or have an ASI to answer my questions.

But it reduces to normality for almost every situation I can expect to encounter today, so it's hardly the most pressing matter.

You are performing more or less identical calculations, on very similar hardware, to a near-identical result, and if you one day woke up, Zhuangzi style, to be him, your own life story a mere what-if distribution shift quickly fading into the morning air – I bet you would have only felt a tiny pinprick of nostalgia before going on with his business, not some profound identity crisis.

Believe it or not, I have often imagined, idly, having my consciousness magically transferred into the shell of someone I envy. The conclusion I have drawn is that there are some aspects of my life I would happily discard: if he's an accomplished banker (and I retain his skills and memories), I would happily not attempt to pursue medicine. But I would still prefer my original parents or kin, and attempt to convey my conundrum to them, likely by divulging privileged information only known to the original me.

If my "original" is still around? Inform him and work with him. I might be suitably disposed to help the "replaced" person's family and friends, but largely because they're predisposed to help me, assuming they don't know the truth.

After all, I expect and wish to continue preferring the consciousnesses descended from my own current kin even after we've all become post-biological, a mere swap of DNA carrier, while extremely queer and not entirely desirable, represents no major impediment.*

*The primary reason I am attached to my genes is because they code for people and personalities similar to mine. I couldn't care less about most phenotypic traits.

I think your problem is typical for Indians (and most other non-WEIRDs and non-Japanese, to be fair, including my people… but worse so in Indians): you have no taste, not even the notion of "taste", to you it's probably an arbitrary set of markers of one's social milieu rather than some relatively lawful intuition. So you settle for mediocre half-baked ideas easily as long as they seem "cool" or "practical", and – physics of consciousness being currently impractical – coolness is a much simpler function than tastefulness. I am not sure how or why this works. Maybe @2rafa can explain better; maybe she'll opine I'm wrong and it is in fact purely about social markers. (Also interested in the input of @Southkraut and @ArjinFerman). In any case, it's exasperating to debate such uncertain grounds without the recourse to "this is just ugly" when it patently is.

I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

Oh yeah? So which is it, a nap or a 2-year time span? Are you sure you can, really, practically can, define a rubric such that no other person I find comes closer to the first data point in the latter case? Sure you can do this without including password-recovery-tier questions, the answers to which are entirely value-free, RNG-produced token sequences, in no way corresponding to actually unique specifics of your inner conscious computation?

It's only reasonably robust from the viewpoint of a time-constrained clerk – or an archetypal redditor. As stated, I claim that you might well fail this test under realistic and legitimate conditions of dropping cheat items; and then, if I decide, in this contrived scenario, that the non-self-made-human is to be sent to the garbage compressor, you will very loudly (and rightfully) complain, not showing any "satisfaction" whatsoever. The only reason you propose it is your confidence that this does not matter in actuality – which it admittedly does not. And in any case, you do not need to optimize for a le scientific, robust, replicable, third-person-convincing etc. identity test. Rather, you need to think about what it is you are trying to achieve by clinging to the idea that a cluster of behavioral correlates an observer can identify will carry on your mind – just gotta make it dense enough that in practice you won't be confused for another naturally occurring person.

certainly I haven't seen any particular reason to assume a difference in internal qualia because of a difference in substrate, as long as the algorithms deriving it are interchangeable in terms of inputs and outputs.

Fair enough.

But I would still prefer my original parents or kin, and attempt to convey my conundrum to them, likely by divulging privileged information only known to the original me.

I'll trust you on this even though I strongly suspect this would depend on the intensity of original memories vs. the recovered set.

I have to say calling someone an archetypal redditor is a very low blow. As is the comic, which I think we could all pull apart in a matter of seconds. Off the top of my head, what if you remove the screen and show the human and dog that the woofing isn't a real dog? Since when did anyone in this long and complex discussion say that sound was the only mechanism people/canines could use to recognize authenticity? The whole thing you're talking about is how complex and multifactorial identity-recognition is. It's irrelevant at best.

Since when did self-made say he'd be happy for you to send someone very like him (perhaps so similar as to be a soulmate or a best friend) to the garbage compressor because he wasn't so identical as to be effectively the same person? How can you think he would be forced to that conclusion, based on the premises he's established, which explicitly include weighting highly those like himself?

@self_made_human isn't treating you with that kind of disrespect.

As for substrate independence, we should be thinking about truth rather than beauty. How can it be impossible, in principle, to replace individual neurons one by one with a computerized equivalent such that normal function is preserved and the patient is conscious throughout the whole operation? Do you believe that there's some advanced quantum mechanics in our heads that's integral to our identity that simply cannot be emulated with machinery, no matter how advanced our technology? How can the human brain be that sophisticated? It was thrown together under incredible resource constraints on the savannah.

And since when did people agree that dementia patients were continuous with their original selves? They're not, I've observed this first-hand. Past a certain point there's a qualitative change. I can't tell you when or where, just like it's hard to say how many bricks a house can lose before it collapses or is unliveable.

As is the comic, which I think we could all pull apart in a matter of seconds. Off the top of my head, what if you remove the screen and show the human and dog that the woofing isn't a real dog? Since when did anyone in this long and complex discussion say that sound was the only mechanism people/canines could use to recognize authenticity? The whole thing you're talking about is how complex and multifactorial identity-recognition is. It's irrelevant at best.

The idea of "multifactorial identity-recognition" is irrelevant for the purpose of understanding the issue of consciousness-continuity of a subject. But really, after such arguments, what more can be said? Truly, you can also look at the dog! Oh wow, the argument against black box analysis is pulled apart!

Since when did self-made say he'd be happy for you to send someone very like him (perhaps so similar as to be a soulmate or a best friend) to the garbage compressor because he wasn't so identical as to be effectively the same person?

Morality is irrelevant for this counterfactual; this is only dependent on the baseline self-preservation and endorsement of the notion that black box suffices.

As for substrate independence, we should be thinking about truth rather than beauty.

There is no meaningful difference between truth and beauty with regard to this question.

Poor taste is irredeemable, and you're one of the worst in this respect here, by the way.

How can it be impossible, in principle, to replace individual neurons one by one with a computerized equivalent such that normal function is preserved and the patient is conscious throughout the whole operation?

Following Moravec, I think this is possible, though I am not sure which aspects of neuronal computations and their implementations are relevant for this procedure (eg how to deal with LFPs). I reject the sloppy bugman idea that you can get from this to "just boot up a machine simulating access to a copy of my memories bro". Indeed, if you didn't have such poor taste, you'd have been able to see the difference, and also why you are making this argument and not the direct defense of computationalism.

Do you believe that there's some advanced quantum mechanics in our heads

Now that's what I call real disrespect lol. It's okay though.

Now that's what I call real disrespect lol. It's okay though.

I think that line was crossed when you claimed that Indians lack "taste", or at least I do.

Believe me, I like you and enjoy your commentary, or else my patience would have been exhausted a good while back.

As I've repeatedly invited you to demonstrate, give one good reason for why substrate independence can't work, especially if we can simulate neurons at the molecular level (they're not beyond the laws of physics, if beyond compute budgets), or at least in aggregation by modeling their firing patterns with ML "neurons", which can replicate their behavior, even if it takes somewhere around a thousand of those per biological one.

Before we potentially get bogged down in terms of implementation or the sheer difficulty of the task, which I happily grant is colossal, why can't it work in principle?

Even positing Penrosian claims that the quantum dynamics of microtubules are somehow remotely relevant in modeling such a stochastic environment, those can be simulated too.

What exactly isn't being preserved in such a transition, that remains conserved when almost all of the individual atoms in your body have and will be endlessly cycled and replaced by equivalent counterparts through the course of your life?

If you concede that point, then I have little to no interest in arguing whether you should value such an entity forked from yourself. Feel free not to, or do things as dumb as Hansonian Ems, I suppose. I'm content in knowing that, if nothing else, such copies will heavily weight the preferences and wellbeing of SMH Mk1, regardless of everything. My standards, while eminently sensible, are my own.

Your sense of aesthetics counts for about zilch, not when you do a terrible job of presenting a compelling case for them.

As @RandomRanger can see, you've been high on aesthetics, nothing else. I'm not mad, I'm just disappointed, I expected better from you.


you're one of the worst in this respect here, by the way

Grow up. Your argument is based entirely on feels, on smelling an aura of 'bugman' on uploading, not the factual basis. This is just an esoteric version of those right-wing twitter anons who smell 'bugman' on AI and denounce it as mere linear algebra, pattern-matching autocorrect with media buzz. They get a bad vibe and then look for reasons to disdain it, thinking emotionally rather than logically.

'Bad taste' is a cope. Reality doesn't have to be realistic, let alone tasteful or aesthetically pleasing (especially not to you in particular). The arrogance needed to say that your personal opinions on aesthetics are equivalent to physical/technical truth is unfathomable to me. If you were some peerless world genius in the field of AI, neuroscience and HBI then maybe you could get away with this. But you're not.

If you can gradually emulate a conscious being, you can also copy-paste it. There's nothing sophisticated about this concept.


Poor taste is irredeemable, and you're one of the worst in this respect here, by the way.

Oh Christ. I like you man, but sometimes I worry what will happen if you're the first to ascend into the Singularity. My tastes aren't that sophisticated either!


As for substrate independence, we should be thinking about truth rather than beauty. How can it be impossible, in principle, to replace individual neurons one by one with a computerized equivalent such that normal function is preserved and the patient is conscious throughout the whole operation? Do you believe that there's some advanced quantum mechanics in our heads that's integral to our identity that simply cannot be emulated with machinery, no matter how advanced our technology?

That's not the argument being made though. The argument is "a copy of my mind out on Mars in a separate body is still effectively me". That's what I disagree with.

Also interested in the input of @Southkraut and @ArjinFerman

I have no opinion on these topics of qualia and whatnot. It's far above my pseud paygrade. A hypothetical faithful copy of my responses to stimuli would hypothetically be me for everyone else involved. As for myself - I guess a continuous transfer from the biological to the digital or another body might preserve my sense of identity, but would also change me to some extent that depends on the particulars of the mechanism. Is that then me, or someone else? It doesn't matter until this actually happens to someone.

IMO mind uploading, consciousness copying, immortality, cryogenics, singularity etc. are all wishful thinking, clung to by people who attempt to distract themselves from their mortality. Thinking of yourself as an immortal-in-waiting then makes the comparably short natural lifespan seem even shorter, driving those people to believe even harder in the certainty of their imminent apotheosis.

The truth is that we're all going to die, most of us sooner than we'd like, and it's going to be fairly horrible for almost everyone involved, as it always has been.

(Also interested in the input of Southkraut and ArjinFerman).

I've always had very rigid opinions on the subject, and the only way I can "steelman" his idea is by assuming I must be misunderstanding what he means by the copy being him. Like I said, the only way it makes the slightest bit of sense to me, is if he's looking at it the same way one might look at being survived by their children. Bringing up another Indian nerd isn't even that far off the mark. When people can't have children they're prone to becoming mentors so a part of them can live on through the impact they've made on others. From there, I suppose I can understand building some sort of Pinocchio that will share your memories and quirks of personality.

The problem is that when I look at @self_made_human's actual words, the above seems like blatant sane-washing. He seems to believe any such copy will actually be him in some non-symbolic sense, which seems rather absurd. Maybe you can argue it from some cosmic-nihilist bird's eye view, but, as you pointed out, it's hard to defend from a "help! you've put the wrong one in the garbage compressor!" perspective.

He seems to believe any such copy will actually be him in some non-symbolic sense, which seems rather absurd.

I do believe it will be "me" to my desired level of satisfaction, and far better at the job than the currently available options of having kids, promulgating one's cultural or ethical values, or even a biological clone. To the point that if such a being appeared before me, it can have half my money no strings attached.

As for it being absurd to you? That's simply irrelevant to me. I don't think you agree with Dase about our other transhumanist predilections, so I don't see it mattering for the purposes of the debate.

I do believe it will be "me" to my desired level of satisfaction, and far better at the job than the currently available options of having kids, promulgating one's cultural or ethical values, or even a biological clone.

So just to be sure I understand you: you don't actually think it will be *you*? We are simply discussing your potential descendants. Far better, by your estimation, than any other descendant we can currently come up with, but still just a descendant?

To the point that if such a being appeared before me, it can have half my money no strings attached.

You gotta move to the US. Someone might actually be tempted to train an LLM on your output, to get a chunk of an American doctor's salary.


I think your problem is typical for Indians (and most other non-WEIRDs and non-Japanese, to be fair, including my people… but worse so in Indians): you have no taste, not even the notion of "taste", to you it's probably an arbitrary set of markers of one's social milieu rather than some relatively lawful intuition. So you settle for mediocre half-baked ideas easily as long as they seem "cool" or "practical", and – physics of consciousness being currently impractical – coolness is a much simpler function than tastefulness.

I sincerely fail to see what taste has to do with any of this.

I will note that my conception of the replication of personality/consciousness has plenty of backers in WEIRD transhumanist circles – almost certainly including the sizeable number of them who think mind uploading is a valid replication of theirs.

In any case, it's exasperating to debate such uncertain grounds without the recourse to "this is just ugly" when it patently is.

I can hardly stop you from using whatever form of argumentation you want in a debate, just know that appeals to aesthetics will do nothing, at best, in convincing me of your point.

Oh yeah? So which is it, a nap or a 2-year time span?

The smaller the divergence, the better, but once again, as a purely operational standard open to revision, I would accept any entity that has the same degree of inter-test variability as SMH from the age of 19 to about the age at which cognitive decline kicks in in earnest, maybe 70. That covers roughly the period in which my personality and interests have largely crystallized, before I simply become dumber and more forgetful, barring medical advances. Oh, and no severe illnesses with permanent neurological decline, for obvious reasons.

These are hardly strict cutoffs for entities I care for more than the norm; I would certainly value 7 yo or 90 yo me more than any other human alive who isn't me-now. If someone handed me a baby clone, I'd certainly do my best to raise it as I'd have wished to be raised.

Are you sure you can, really, practically can, define a rubric such that no other person I find comes closer to the first data point in the latter case?

Yes. Presumably restricting the set of candidates to biological baseline humans living today.

If you wish to attempt this, go ahead and perform stylometry on my writings, and find one person whose writings I can't distinguish from my own with a large corpus of text. Using a finetuned LLM to generate it is forbidden, because my idealized test would consider far more than textual artifacts. I will happily concede the case if you manage the former, because I expect you'd fail miserably.
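(For concreteness, the crude baseline version of such a stylometric comparison is only a few lines of code – character n-gram TF-IDF plus cosine similarity. A minimal sketch, with purely hypothetical corpus files, and with no pretense that a high score alone would settle the identity question:)

```python
# Crude stylometric-similarity baseline: character n-gram TF-IDF + cosine similarity.
# The corpus files and author labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpora = {
    "SMH": open("smh_posts.txt").read(),
    "candidate_A": open("candidate_a.txt").read(),
    "candidate_B": open("candidate_b.txt").read(),
}

vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 5), sublinear_tf=True)
X = vec.fit_transform(corpora.values())

# Compare the reference author (first row) against every candidate.
sims = cosine_similarity(X[0:1], X[1:]).ravel()
for name, score in zip(list(corpora)[1:], sims):
    print(f"{name}: cosine similarity to SMH = {score:.3f}")
```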

Sure you can do this without including password-recovery-tier questions, the answers to which are entirely value-free, RNG-produced token sequences, in no way corresponding to actually unique specifics of your inner conscious computation?

If it's not pure rhetoric, do you really think the battery of tests I propose is anywhere near "password recovery tier"?

I'm talking simulation of a virtual environment beyond baseline sensory discrimination with blinding, a massive battery of psychometrics and behavioral experiments, ideally circuit-level analysis of neuroarchitecture. The blackboxing is purely for the case where we can't interpret the neuroarchitecture, or for proving the point about substrate independence, which I believe in.

If I had a person show up, in the flesh and blood, claiming to be a mental copy of me, I am confident that I could discriminate the truth of that proposition to a great degree of confidence – not that my current understanding of the necessary advances in biology, computation and isekai-implementation wouldn't have me concerned someone/something was fucking with me. But that's hardly the most robust metric, hence the proposal for something that would satisfy me beyond reasonable doubt.

Fair enough.

Funny. Not remotely edifying.

I'll trust you on this even though I strongly suspect this would depend on the intensity of original memories vs. the recovered set.

I am agnostic on whether a total replacement of memories would violate much of the replication of the parts of consciousness I care about. Taken arbitrarily far? Yes. I doubt an entity based off me but only possessing memories of being tortured for a quadrillion years is anything I'd identify with.

If I suddenly became amnestic, I'd consider it a severe blow, but not to the extent I'd consider myself dead, assuming I retained skills and knowledge.

To make it clear, this is not a binary process, how could it be? The ideal entity for the purposes of duplication has 0% variability from me; the limit of acceptability is somewhere well before someone currently existing as a separate biological person, even with 7 billion such people to sample from. I do not expect any of them to ever overlap in the same volume of mind space I've mapped in my trajectory from birth to death, not unless they were intentionally forked.

if I decide, in this contrived scenario, that the non-self-made-human is to be sent to the garbage compressor, you will very loudly (and rightfully) complain, not showing any "satisfaction" whatsoever

I object to either of us being sent to the garbage compressor. To the extent that I think having more high fidelity copies of me around is a good thing (hardly the most good thing, the marginal value becomes close to infinite when it's 1 vs 0, and then drops when there's a sufficient number to probabilistically make it to Heat Death), I am opposed to that number being decreased unless strictly necessary.

If the only means of transport available to me was a Star Trek transporter that disassembled and reassembled a copy of me at the atomic level, you best believe I'm auditing the process and comparing it to known test constructs to make sure the fidelity is as high as needed.

If it's necessary? Sure. Say I'm about to die right now and the only available route is a destructive, only partially validated brain uploading tech, or even worse, desperately uploading every aspect of my biometric or stylometric output as training data for an AI meant to replicate me. Or if I've been cryopreserved and need to be reanimated. All of them are far preferable to utter information theoretic death. It's a minor consolation to me that things I've written and described have ended up in the training set of future LLMs and other AI.

If none of that applies, then I will resist, violently, an attempt to murder me or my clones.

Well, if we're summoning fellow transhumanists to the brawl of the decade, I call forth:

@RandomRanger @curious_straight_ca

And probably a bunch of others, not that I can remember their names offhand.

I'm certainly with you as well! This has been a frustrating debate to follow, if not for the satisfaction of reading someone else express my views on the continuity of self-identity almost exactly. I wish people would not sneer as much, but I suppose as an uploadist, one needs to have the humility to recognize their view is the more peculiar one.


You’ve said it well. I agree, even with the part where you said we might potentially disagree.

I think people with such beliefs have no more moral patienthood than a trust fund. What should anyone care about some loosely defined isomorphism, if it even holds? Moreover, why would you be entitled to replication of your sentimental baggage in some derivative entities? Just instantiate a distilled process that has similar high-level policies, and go out.

What should anyone care about some loosely defined isomorphism, if it even holds?

Why should anyone care about anything? Why should anyone care about individuals with genes that are similar, but not identical, to them? They don’t have to, but evolution has selected for altruism in certain scenarios.

I’d bet that the memeplexes of individuals like me are much more likely to colonize the universe than the memeplexes of individuals like you, who insist on expending expensive resources to engineer space habitats for biological human survival. Not that it is morally superior for my memeplexes to propagate more, of course. It’s not immoral to be Amish, it’s just irrelevant.

Just instantiate a distilled process that has similar high-level policies, and go out.

If those policies are similar enough to mine, that’s fine with me. My children are newly instantiated processes rather than clones of me. I’m fine with them taking over my estate when I die, so I don’t see why I would begrudge other instantiated processes that are aligned with my values.

Why should anyone care about anything?

There's no absolute answer, but some ideas are more coherent and appealing than others for nontrivial information-geometrical reasons.

I’d bet that the memeplexes of individuals like me are much more likely to colonize the universe than the memeplexes of individuals like you

That's unlikely because your "memeplex" is subject to extremely easy and devastating drift. What does it mean "similar enough"? Would an LLM parroting your ideas in a way that'd fool users here suffice? Or do you want a high-fidelity simulation of a spiking network? Or a local field potential emulation? Or what? I bet you have never considered this in depth, but the evolutionarily rewarded answer is "a single token, if even that".

It really takes a short-sighted durak to imagine that shallow edgelording philosophy like "I don't care what happens to me, my close-enough memetic copies will live on, that's me too!" is more evolutionarily fit, rewards more efficient instrumental exploitation of resources and, crucially, lends itself to a more successful buildup of early political capital in this pivotal age.

If we're going full chuuni my-dad-beats-your-dad mode, I'll say that my lean and mean purely automatic probes designed by ASI from first principles will cull your grotesque and sluggish garbage-mind-upload replicators, excise them from the deepest corners of space – even if it takes half the negentropy of our Hubble volume, and me and mine have to wait until Deep Time, aestivating in the nethers of a dead world. See you, space cowboy.

There's no absolute answer, but some ideas are more coherent and appealing than others for nontrivial information-geometrical reasons.

I’m not familiar enough with information geometry to see how it applies here. Please do elaborate.

What does it mean "similar enough"?

This is completely arbitrary and up to the individual to decide for themselves, as you and I are doing at this moment.

Or what?

Or something that qualitatively convinces me it is conscious and capable of discerning beauty in the universe. I don’t know what objective metrics that might correspond to — I don’t even know if such objective metrics exist, and if they do we most certainly haven’t discovered them yet, seeing as you can’t even objectively prove the existence of your own consciousness to anyone but yourself.

But a machine that can act as convincingly conscious as you do? I’m fine with such machines carrying the torch of civilization to the stars for us. And if such a machine can act convincingly enough like me to be virtually indistinguishable even to myself? One that makes me feel like I’m talking to myself from a parallel universe? I’m completely fine with that machine acting as myself in all official capacities.

I bet you have never considered this in depth, but the evolutionarily rewarded answer is "a single token, if even that".

Setting your snark aside, once again please elaborate. By this, do you mean that such evolution will select for LLM-like minds that generate only one token at a time? That’s fine by me, as I can only say or write one word at a time myself, but that’s more than enough to demonstrate my probable sentience to any humans observing me.

It really takes a short-sighted durak to imagine that shallow edgelording philosophy like "I don't care what happens to me, my close-enough memetic copies will live on, that's me too!" is more evolutionarily fit, rewards more efficient instrumental exploitation of resources and, crucially, lends itself to a more successful buildup of early political capital in this pivotal age.

Do you have any actual arguments to back this up? Because I’d say

  1. This already happens to us. Immortality hasn’t been solved yet, so we all must choose which portions of our identity (if any) we’d like to emphasize in the next generation to come. For some people, this means “For all future world states without me in them, I prefer the ones that have more of my religion in it.” For others, they might care instead about their genetics, or family lineage, or nation, or ideology, or even only their own reputation post-death. Or most likely for most people, some amalgamation of all of the above.

For some who are sufficiently devoted to the cause, they might even say “I prefer world states where I am dead but my religion is much more dominant, to one where I am alive and my religion is marginalized,” and they go and fight in a holy crusade, or risk colonizing the new world in order to spread the gospel (among other rewards, of course). Certainly doesn’t seem to have hurt the spread of the Christian memeplex, even if some of its adherents died along the way for the greater cause, and even if that memeplex splintered into a multitude of denominations, as memeplexes tend to do.

I claim that I’m not doing anything different. I’m just saying, “For all world states where I don’t exist, I prefer ones where intelligent beings of any kind, whether biological or not, continue to build civilization in the universe. I prefer world states where the biological me continues to exist, but only slightly more than world states where only mechanical me’s continue to exist.” If you think this is short-sighted or edgelording, please do actually explain why rather than simply stating that it is so.

  2. Why should any of this reflect on the efficacy of resource extraction and concentration of political capital? Are you assuming that I’ll readily give up the economic or political capital I have to any random person? I’d do it for mechanical me, but that’s because if they’re a high-fidelity enough copy of me, they’d do the same for me. I wouldn’t do the same for you, because we don’t have that kind of trust and bond.

If we're going full chuuni my-dad-beats-your-dad mode, I'll say that my lean and mean purely automatic probes designed by ASI from first principles will cull your grotesque and sluggish garbage-mind-upload replicators

Erm, when did I insist on mind upload replicators? That’s only one example of something that I would be fine with taking over the universe if they seemed sufficiently conscious. I’m fine with any intelligent entity, even an LLM strapped to a robot body, doing that.

And why wouldn’t a fully intelligent ASI (which would fit under my bill of beings I am in favor of conquering the universe) that’s colonizing space “on my behalf” (so to speak) be able to design similarly lean and mean probes to counter the ones your ASI sends? In fact, since “my” ASI is closer to the action, their OODA loop would be shorter and therefore arguably have a better chance of beating out your probes.

And if you send your ASI out to space too — well then, either way, one of them is going to win and colonize space, so that’s a guaranteed win condition for me. I find it unlikely that such an ASI will care about giving biological humans the spoils of an intergalactic war, but if it does, it’s not like I think that’s a bad thing. Like I said, I just find it unlikely that such a memeplex that emphasizes biological humans so much will end up winning — but if it does, hey good for them.

And if you’re able to align your ASI with your values, the technology presumably also exists for me to become the ASI (or for the ASI to become me, because again I consider anything that’s isomorphic to me to be me). Those of us who don’t care to wait for geoengineering to fix up Mars’ atmosphere before colonizing it will either 1) already have colonized it eons before it’s ready for you to set foot there, or 2) be more efficient at colonizing Mars because we don’t care about expending resources on building human-compatible habitats. I just don’t see where we’ll be at a disadvantage relative to you; if anything, it appears to be the opposite to me, which is why I mentioned the Amish.

excise them from the deepest corners of space – even if it takes half the negentropy of our Hubble volume, and me and mine have to wait until Deep Time, aestivating in the nethers of a dead world. See you, space cowboy.

That’s like saying that if the thousand-year Reich had lived up to its name, won World War II, and genocided mainland Europe for the next thousand years, then it would have been a more fit ideology than communism or capitalism and only Aryan Germans would exist in Europe. I mean, sure, if that happened, but that’s rather tautological now, isn’t it? If memeplexes like yours win and eradicate mine, then they will clearly have been a more evolutionarily fit memeplex. But since we can’t fast-forward time by a few million years, all we can do is speculate, and I’ve given my reasons for why I suspect my memeplex is more evolutionarily fit than yours. Feel free to give your own reasons, if you have some in between the snark.

You avoid committing to any serious successor-rejection choice except gut feeling, which means you do not have any preferences to speak of, and your «memeplex» cannot provide advantage over a principled policy such as "replicate, kill non-kin replicators". And your theory of personal identity, when pressed, is not really dependent on function or content or anything-similarity measures but instead amounts to the pragmatic "if I like it well enough it is me". Thus the argument is moot. Go like someone else.

By this, do you mean that such evolution will select for LLM-like minds that generate only one token at a time?

No, I mean you are sloppy and your idea of "eh, close enough" will over generations resolve into agents that consider inheriting one token of similarity (however defined) "close enough". This is not a memeplex at all, as literally any kind of agent can wield the durak-token, even my descendants.

And why wouldn’t a fully intelligent ASI (which would fit under my bill of beings I’d be happy to see conquer the universe) that’s colonizing space “on my behalf” (so to speak) be able to design similarly lean and mean probes to counter the ones your ASI sends? In fact, since “my” ASI is closer to the action, their OODA loop would be shorter, and they would therefore arguably have a better chance of beating out your probes.

This is a reasonable argument but it runs into another problem, namely that, demonstrably, only garbage people with no resources are interested in spamming the Universe with minimal replicators, so you will lose out on the ramp-up stage. Anyway, you're welcome to try.

You avoid committing to any serious successor-rejection choice except gut feeling, which means you do not have any preferences to speak of

Why would gut feeling be an invalid preference? Which humans have a successor-rejection function that’s written out explicitly? What’s yours?

And your theory of personal identity, when pressed, is not really dependent on function or content or anything-similarity measures but instead amounts to the pragmatic "if I like it well enough it is me". Thus the argument is moot. Go like someone else.

Why does pragmatism make it moot? Again, if there’s an explicit measure of consciousness I can point to, or a way to rigorously map between minds on different substrates, I’d point at that and say “Passing over the threshold of 0.95 for the k-measure of consciousness” or “Exhibiting j-isomorphism.” Lacking that, how could I do any better under our current limited knowledge of consciousness?

Or if you insist, then fine, let’s assume we figure out enough about consciousness and minds eventually for there to be at least one reasonable explicit function for me to pick from. What then? You’d still presumably insist on privileging your biological human form, and for what reason? Surely not any reason that’s less arbitrary than mine.
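
To make the shape of such an explicit function concrete, here is a toy sketch in Python. It is purely illustrative: the “k-measure” and “j-isomorphism” are the made-up placeholders from above, not quantities anyone currently knows how to compute.

```python
# Purely illustrative: one possible explicit successor-acceptance rule, built on
# the hypothetical "k-measure of consciousness" and "j-isomorphism" placeholders.
# Neither quantity is something we currently know how to measure.
from dataclasses import dataclass


@dataclass
class CandidateSuccessor:
    k_measure: float      # hypothetical consciousness score in [0, 1]
    j_isomorphic: bool    # hypothetical structural-similarity test against my current mind


def accept_as_me(candidate: CandidateSuccessor, k_threshold: float = 0.95) -> bool:
    """Accept a candidate mind as a continuation of me only if it clears both hypothetical bars."""
    return candidate.j_isomorphic and candidate.k_measure >= k_threshold


# A high-fidelity upload would pass; a loose imitation would not.
print(accept_as_me(CandidateSuccessor(k_measure=0.97, j_isomorphic=True)))   # True
print(accept_as_me(CandidateSuccessor(k_measure=0.60, j_isomorphic=False)))  # False
```

The particular threshold is arbitrary, which is exactly the point: any explicit rule I pick would be no less arbitrary than the one you use to privilege your biological form.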

your «memeplex» cannot provide advantage over a principled policy such as "replicate, kill non-kin replicators".

Ignoring the fact that that specific policy does not currently appear to be winning in real life, I don’t see how “replicate, kill or suppress other replicators that pose a mortal threat, regardless of kinhood” is any less principled. Spelled out as decision rules (a toy sketch below, nothing more, with “kin” and “mortal threat” as stand-ins for whatever a replicator could actually detect), the two policies look equally simple.
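
```python
# Toy sketch only: both policies collapse to a one-line rule over whatever signals
# a replicator can actually detect, so neither is obviously more "principled".

def kill_non_kin(other_is_kin: bool, other_is_mortal_threat: bool) -> bool:
    """Your policy: replicate, kill non-kin replicators."""
    return not other_is_kin


def kill_mortal_threats(other_is_kin: bool, other_is_mortal_threat: bool) -> bool:
    """My policy: replicate, kill or suppress replicators that pose a mortal threat, kin or not."""
    return other_is_mortal_threat
```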

No, I mean you are sloppy and your idea of "eh, close enough" will over generations resolve into agents that consider inheriting one token of similarity (however defined) "close enough". This is not a memeplex at all, as literally any kind of agent can wield the durak-token, even my descendants.

Thanks for elaborating. I should’ve been more specific:

  • There’s the more general memeplex of “Machine minds are legitimate bearers of individual and cultural identity; for machines to flourish is for human civilization to continue flourishing,” which this topic started around, and which I believe, for the aforementioned reasons, to be better suited to gaining and retaining power than the memeplex of “Humans are the only thing that truly matters, and human civilization flourishing must necessarily mean specifically biological human expansion in space.”
  • There’s the more specific identity issue of who counts as me or not. If you bear the durak-token but in no way act like me, then the durak purists should reject you as not a true durak. However, if there were some hive-mind thing wherein your descendants take on the durak-token and enough of durak values and durak memories to be recognizably durak in some ways, then sure, I will have become part of that durak-dase conglomerate entity. Perhaps there will even be a whole spectrum from pure duraks to melded hive-mind duraks. I cannot predict what will happen then, whether they will reject or accept one another.

Basically, I grant that this is sloppy, but I claim that it is due to the amorphous and arbitrary nature of identity itself. Our group identities as humans shift all the time, and if an individual can turn himself into a group, I’m sure group dynamics would apply to that individual-group as well.

This is a reasonable argument but it runs into another problem, namely that, demonstrably, only garbage people with no resources are interested in spamming the Universe with minimal replicators, so you will lose out on the ramp-up stage. Anyway, you're welcome to try.

When did I say anything about spamming the universe with minimal replicators? The lean and mean probes thing was only a response to you threatening to do the same with an ASI. Ideally, robotic me would make a life for themselves in space. But if I were asked to pay heavy taxes in order to subsidize anachronistic humans insisting on living in space environments they were not evolved for? I’d vote against that. Maybe a small enclosure for biological humans for old times’ sake, but the majority of space where I’m living should be reserved for those pragmatic enough to turn into native life forms.

But if it’s as another commenter suggested, and there’s plenty of space for everyone to do their own thing, I suppose we can both have our cake and eat it too, in which case the entire discussion around the evolutionary fitness of memeplexes is moot.

Well, if nothing else, if you make a copy of me and either it or I have to come to a painful and premature end, I will have a strong preference for that happening to the copy.

I suppose I could see where you're coming from if you see your copies the way other people see their children, but the idea that they're literally you makes no sense to me.

Suppose you put me under and copy me atom for atom, mental bit for mental bit, so that both copies of me wake up thinking of themselves as the original. For all practical purposes, both of us will experience the same emotions and reasoning (with perhaps some deviation due to quantum + environmental fluctuations) with regard to the situation we’re in. Neither of us can tell which one is the original, because we’ve been copied atom for atom. If we have to decide, both of us would prefer “me” surviving over the other one. But ultimately, if I am the way I am now, I would be much less perturbed by the death of one of us, now that I know that mdurak will live on in a very real sense.

Perhaps both of your clones would have a much stronger visceral reaction to dying. That’s fair, because even in regular life some people are more or less averse to dying for their country. But that doesn’t change how it can make sense to see both daughter cells of a cell that just underwent mitosis as essentially equivalent to the original (chance mutations aside), and I don’t see how cloning myself is functionally any different from mitosis.

I am the way I am now, I would be much less perturbed by the death of one of us, now that I know that mdurak will live on in a very real sense.

Right, that's pretty analogous to how people think about being survived by their children (with the exception being that they tend to prefer sacrificing themselves over surviving at their cost). That I can understand, but he's talking about it like it would literally be him surviving, which I don't quite get.

But then, even with your view, here's a curveball for you. Have you watched Invasion of the Body Snatchers? Would you even parse shape-shifting extraterrestrial fungi taking your form, copying your memories, and taking your place as horror? Given the choice between being Snatched and dying uncopied, which would you prefer?

I just watched the movie on your recommendation. Great premise!

It’s unfortunate that the snatched bodies are such obviously flawed copies of you, or else it would be an easy choice for me to make. As it is, yes, it is a bit horrifying for my utility function to be forcibly updated against my will. But the suffering from fear and panic will only last for a few days before I finally succumb to the snatching, and afterwards I will get to live in an enlightened world, so it’s certainly not entirely horrifying.

If I could choose when to get myself snatched, I’d do it right before my imminent physical death, because I have nothing to lose at that point anyway. But the more interesting question is whether it’s a one-time choice. In that case, I’d choose to remain un-snatched, because the clones appear to be devoid of emotion and personality, so it doesn’t seem as if they even enjoy their post-enlightenment state.

If the aliens would only preserve more of what makes me me (at least as I perceive my identity), sure, I’d choose to be snatched. I don’t see much of a difference between that and getting a transhuman upgrade whereby I gain the ability to trivially solve coordination problems with other snatched humans.

I suppose you wouldn’t want to be snatched under any circumstances?

he's talking about it like it would literally be him surviving, which I don't quite get.

Depends on the definition of “him.” If I am just a pattern of mental bits, multiple copies of literally me can exist at the same time. The copies will diverge into different patterns, at which point they would no longer be exactly me as I see myself, and it would be sad to lose some of them and their unique experiences. But if we’re talking about the same exact pattern, isomorphic to different physical substrates, before it’s had a chance to diverge? That is literally me in a sense.

I suppose you wouldn’t want to be snatched under any circumstances?

Yup, just kill me. While in this story snatching is just a device to talk about other types of horror, stories about doppelgangers taking your place aren't that uncommon, and that's how the whole thing feels to me.

OTOH the horror of Roko's Basilisk's threat is something I could never grok, and would laugh in the face of, were it ever issued to me (acausal blackmail aside).

That is literally me in a sense.

Literally, in a sense...

That's just the thing: I can understand the "in a sense" part. Like I said in the other comment, a Geppetto carving a Pinocchio, so that a part of him lives on in his creation, is something I get, and even find romantic, but I don't see the "literally".

Have you seen the first zombie episode of Midnight Gospel? I just realized that I may have been less perturbed by the body snatchers due to having already been exposed to a more positive spin on the idea of “fearing but ultimately accepting crossing over to a better side.” That being said, there’s no cloning in that episode, so I guess my main fear (being forcibly transformed into a worse version of myself, whether in this body or not) is different from yours (this body of yours dying, regardless of how good any other copy of you has it).

I don't see the "literally".

That’s fair. It all boils down to how exactly you define “you.”