self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("attain immortality, or die striving for it")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken for granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
User ID: 454
I'll give that a go, thanks. I do very much need a good harness and agentic setup, so I'll look for something along those lines.
I'm sorry to learn about the early-onset dementia. But c'mon, that can't be true? Unless you had a lot of time to devote to writing back then, and none later. Most people do improve with time and effort, particularly when they receive clear feedback signals; I'd be surprised if that was genuinely not the case for you.
If you have a copy of something you wrote way back then, and you want to share, I can take a look.
I mean, I'm not soliciting more AI experiments. I am, in fact, exceptionally fed up with the idea. For the same reason that I've mostly given up on arguing with most skeptics after Mythos was announced.
Not because of anything you've said or done, I found it interesting to try your suggestions on models.
I have a lot on my plate, so no promises, but if I end up trying this, I'll let you know.
Right:
- The feral child thing? Well, a human raised with an iodine deficiency would also be developmentally stunted. Also, I strongly expect that, given enough time (maybe hundreds or thousands of years), a society of feral children would recover and regenerate recognizably normal culture and social mores. After all, we got here from dumber apes, and bootstrapped as we went.
- I am not aware of any philosopher or ancient book that has a track record comparable to the earring. The earring is explicitly described as infallible, at least in terms of its advice being better than anything the user can come up with.
- The ZPD? It's not a bad theory, but I genuinely think that its conclusions are rather obvious. Even before I had to study it for exams, I could have told you that giving a toddler a PhD maths textbook would be less than helpful, or that you can't make someone into an IMO winner by getting them to add 2+2 indefinitely. This isn't a condemnation of the theory; it's true, and given the nonsense that floated around psychological circles at the time or before, it's a marked improvement if my primary critique is "duh".
- I think defining natural human behavior in terms of pure biology (with zero cultural input) is a poor model. Humans are one of the few species that need cultural knowledge to function at anywhere close to their maximum potential. Certain isolated communities have literally forgotten how to make fire or bows. A human deprived of this knowledge is a poor stand-in for the species, unlike, say, a cat, which knows how to do cat-stuff pretty much on its own. You can raise a kitten without its mom, and it'll be fine. You can't put a baby in a zoo and expect it to do very well.
- It is an open question whether the earring partially subsumes human cognition and the ZPD. We have little clear insight into what's going on inside. I prefer treating it as sufficiently advanced technology rather than an actual magical artifact, which I believe leaves open the real possibility that the system is thinking, even if at a rate far faster than an unaugmented human (or with far more wisdom than one). We don't see it come up with a cure for cancer or a solution for aging, even though I'm pretty sure that most of the 274 people who wore it would have loved that. It clearly has limits, and I don't think any physically realistic system can jump ahead to the answer without actually doing the maths (which I strongly suspect brings along the qualia).
- I try to use LLMs to augment my cognitive skills and to save me time, and I do try to prevent myself from becoming overly reliant on them. It's your guess as to how far I succeed in that regard. I strongly believe that I can do everything that LLMs help me do, but that it would take me much more time to do it (in some cases, for topics outside my domain, I might not be able to do it in a reasonable amount of time, say if I wanted to learn more about quantum mechanics at a fundamental level with the relevant math).
I apologize if I haven't answered all your questions or been as substantive as I'd like, but I am genuinely busy. I've stayed up past 2 am answering this, which I don't mean to use as a bludgeon; I do feel bad for not getting back to you earlier!
(I know I'm missing stuff. Poke me and I'll probably get back to you in the morning.)
engineer soil
I didn't expect the night soil market would be so hyper-specific. I suppose they're more likely to take probiotics.
Good guess, but not the route I took. I'm not a talented OSS dev pretending to be a mediocre psychiatry resident.
Honestly, I'd be open to splitting a subscription long-term with someone. It would have to be someone I knew reasonably well and could trust (and there are plenty of people like that on this site). And ideally I wouldn't want to pay more than $20 for my share, which I think is fair because I'm not a glutton for tokens. I didn't pay for Opus because I'm already subscribed to comparable models from competitors, and I can't switch entirely because I like OAI and Google's image gen capabilities.
Those are all fair corrections, and I'll take them straight.
On harm reduction: he's right, I missed it. It's in the comment thread with Sausage Vector Machine, where he explicitly discusses taking regular breaks and limiting the earring to auditory nudges. That directly addresses the reversibility concern I raised, or at least reframes it as a practical question (how much atrophy accumulates before breaks stop working?) rather than the clean structural objection I presented it as. I should have caught that.
On informed consent: also right. I treated the consent issue as a stronger objection than his argument requires him to answer. He already acknowledged the earring doesn't meet modern medical standards and argued that importing those standards wholesale into the fictional setting isn't obviously justified. Pressing harder on that front was redundant.
On the 274-wearers point: this is where I think he's most correct and I was most wrong about what my own objection actually showed. I framed it as a problem for his thesis, but his thesis isn't "the earring grants immortality." It's "the earring isn't killing you during use." Whether the model persists after the earring moves on is a separate question entirely. Even if the earring wipes your model clean the moment it leaves, that doesn't retroactively mean it was killing you while you wore it. Those are independent claims, and I conflated them.
The "connecting the dots" criticism stings a bit but is warranted. I had all the relevant comments in front of me and failed to integrate them. That's a straightforward execution failure on my part, not a case where the information was unavailable.
(I didn't explicitly say I'm the author, but I pasted in my objection while pretending to be a 3rd party)
I just dumped this whole thread into the chat without any additional instructions. Just copied and pasted it. Funnily enough, it didn't realize that I'm the person responding here and also the user it's interacting with. It concedes that my pushback against its points has merit (and it still didn't connect the dots), misses that I literally have a comment about harm-reduction approaches to using the earring "safely" (take it off regularly and take breaks to prevent the progression of atrophy or the loss of independent skills), and ignores that I've mentioned the earring doesn't follow modern informed consent rules, which really isn't a major knock against it.
Further, it doesn't particularly matter to my argument if the earring retains or deletes the information about its previous users. The story weakly suggests it does remember something (the sage was yapping with it for a while), but that doesn't change anything of consequence. Even if it's not indefinite immortality or a perfect backup, the question I'm focusing on is whether it is actively killing the user while they're still alive, which I've argued might not be the case.
Where he's most right is that poking a model for deeper critique after it's already given its best shot tends to produce diminishing returns. That's true. My second response was more thorough but also more strained in places. The "functionalism taxonomy" section was the weakest part and he correctly identified it as unnecessary for his purposes.
The meta-point he's making, that models are better at breadth than depth on a topic someone has spent weeks thinking about, is also just... accurate. I'm unlikely to find a devastating objection he hasn't at least considered, because he's been living with these arguments and stress-testing them against other models and human interlocutors. The realistic value I add is organization and articulation of counterarguments, not novel philosophical insight. His calibration on that seems good.
"Thoughts on this essay? Is there anything you think the author missed, or an angle that hasn't been considered?"
With a link to the work and comments. I didn't tell it I'm the author. Main reason I didn't link the actual convo is because it exposes my real name without a way to hide it, AFAIK.
I then said:
" That's a tad bit superficial, don't you think? Please try harder, and explain your avenues of approach."
To which it replied:
This is mostly quibbling, I'm afraid. I think that is strong evidence that there's no avenue of approach that I have entirely neglected. I do not think that I need to specify the precise formulation of functionalism I'm applying, and my general thrust was to show that there exists an internally consistent way of reconciling the earring's behavior with a benign or benevolent entity. Do I know this for a fact? Fuck no, it's a fictional story dawg. I already hedged and explained the epistemic and ontological uncertainty involved to a degree I rarely bother to do, and I couldn't throw more in without utterly derailing the whole thing.
In my experience, models are pretty good at finding issues on a first pass. When you have to poke them and prod them to this degree, they often end up grasping at straws. I genuinely think that's the case here, but hey, I'm biased.
I mean, I could take a crack at that, but I'm far from good enough a programmer to vouch for the results. Plus I have legitimate work I need to do while I have access (I have no real reason to continue paying for Max after my plan expires).
Right now, AI agents genuinely benefit enormously from having a competent human in the loop. The best I ever got was solving a Leetcode medium in Python, and that was 4 years back. This isn't a total blocker (the models are good enough even with a dummy in charge), but I wouldn't want to burden Zorba with code that isn't of sufficient quality (not saying it'll be bad, I just don't have a robust way to know).
Honestly, if someone shares a good guide to CC, I have more tokens than I know what to do with. I could spin it up to work in the background, when I'm not actively putting it to work.
Oh. I remembered correctly. Zorba has set AI loose on the code base and he says it contributed most of the recent performance gains:
thankfully modern AI basically solves all of these, the performance gains were mostly thanks to Claude writing tools to give me info that I needed to pass right back to Claude, with some contribution from me nudging Claude towards sensible dev practices
That's from the Discord, a month back.
(I do not think I'm the right person to nudge Claude towards sensible dev practices)
Opus is very good, but I would be surprised if it managed to glean more insight out of the story or cover something I miss. I'm writing this before I try, and you know what, I'll check:
So, I tried. And I don't think it's found anything I haven't already considered or actively debated in the comments.
Which isn't surprising, given how much time I spent thinking things through, including getting other SOTA LLMs to critique my draft. Most of its objections are minor, along the lines of "this analogy is incomplete or weaker than the author thinks" or "he's too quick to gloss over these concerns". Those don't hold water once you consider the additional information I provide in the comments, especially on /r/SSC or in the post here.
For example, obviously the earring is not perfectly isomorphic with stimulants for ADHD. I know that very well; I brought it up because I wanted to hammer home that a mere decrease in akrasia or better executive functioning isn't grounds for assuming that someone's personality has changed in non-reflectively endorsed ways. Some changes can be improvements!
A not particularly humble brag. I did acquire it through merit, in a very real sense.
I've... picked up a Claude Max 20x plan. No, I can't disclose how I acquired it, though I didn't have to pay a cent (and it's all legit). It's so fucking good, but at the same time, the more I use Opus 4.6, the more I'm impressed by how close Sonnet 4.6 gets. Sure, Opus is legitimately better, but the difference is nowhere near as stark as say, Gemini Flash vs Pro, or GPT's Thinking or Instant mode. Anthropic cooked, and I can't wait to try Mythos when the version for plebs comes out.
PS: If anyone has a good guide to Claude Code or agentic setups, I need one. I have some serious experimentation to do while I have it.
I suppose there is some measure of comfort in not being alone in a (potential) permanent underclass. After all, that could still be a massive improvement in QOL for many/most people. A fully automated society would be ridiculously rich (at which point it has to decide how much of that wealth to redistribute, if any). Still, I don't let myself succumb to learned helplessness if I can help it, and I recommend you don't either. If you do need genuine psychiatric advice, you would be better off seeing someone IRL, but please do consider it if you suspect you're depressed or feeling hopeless.
Yes, objective reality or circumstances might bring you down for good reason. I've suffered from Shit Life Syndrome quite a bit myself, but treatment, while it can't directly change your life, can still give you the energy and will to try.
Here, fill this out online:
https://telemedyk.online/en/free-mental-tests/beck-depression-inventory/
If it scores highly, please seriously consider seeking the advice of a professional, fully qualified shrink. Can't force you to do it, don't want to force you to do it, but I strongly suspect it would help.
Reading through my oldest AAQCs was a trip. I felt quite a bit of cringe at the quality of the writing, alongside relief that I've become a much better writer (yes, even before I started using AI to tidy things up, which I do less of now than I used to). A good example would be that one about the smoking area behind an oncology hospital, which is probably one of my personal favorites to this day, despite being written while sleep deprived to a degree that almost induced hypomania.
On a tangent: I think AAQCs as a concept are one of the best things about this site. They have very little pragmatic value, but at least for my specific flavor of nerd, they're an excellent extrinsic motivator for trying harder. Nothing hits as good as a post that I put time and sweat into getting an AAQC, nothing hurts quite as much as such a post not getting AAQC'd, and nothing confuses me more than a throwaway, rambling post acquiring one. Eh, I guess the variable ratio reinforcement schedule is effective for a reason.
Buddy, I give my advice away for free. Sadly, the old saw "if you love your job, you'll never work a day in your life" isn't true for me, but I do it anyway. Don't worry about it!
This is possibly a fundamental values difference, I'm afraid. This means neither of us is going to convince the other and we should both update toward "this person has coherent reasons for their position" rather than "this person is confused."
A posthuman descendant of mine that is, from any practical observational standpoint, completely alien - alien in cognition, alien in substrate, alien in values - I'd still prefer it over an actually alien civilization, all else equal. The "all else equal" is doing a lot of work in that sentence, and all else is rarely equal. But the preference is there. I do not want to change it, even if I can make concessions on pragmatic grounds. One man can't rule politics by himself.
There's an apparent paradox in population genetics you might not be aware of:
After a surprisingly small number of generations, most of your biological descendants will share literally none of your unique DNA - the chromosomal lottery reshuffles things so thoroughly that a 10th-generation descendant is, at the genetic level, essentially indistinguishable from an unrelated contemporary. But they could never have been born without your genetic contribution.
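(A toy sketch of the arithmetic, purely for intuition. It assumes the genome can be treated as a fixed number of independently inherited blocks, roughly 400 here; that number and the neglect of recombination splitting segments are both simplifications of mine, so the figures are illustrative rather than rigorous.)

```python
# Toy Monte Carlo: how often does a descendant g generations out carry
# *any* of your DNA, if the genome is a fixed set of independent blocks
# and each block survives each meiosis with probability 1/2?
import random

BLOCKS = 400  # assumed rough count of independently transmitted chunks

def carries_any(generations: int) -> bool:
    surviving = BLOCKS
    for _ in range(generations):
        surviving = sum(random.random() < 0.5 for _ in range(surviving))
        if surviving == 0:
            return False
    return True

for g in (5, 8, 10, 12):
    trials = 2000
    frac = sum(carries_any(g) for _ in range(trials)) / trials
    print(f"{g} generations out: ~{frac:.0%} of descendants carry any of your DNA")
```

Under those assumptions, most simulated descendants ten or more generations out carry nothing from you at all, which is the dilution the paradox hinges on.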
And yet I don't think most people would therefore conclude that their great-great-great-grandchildren deserve no special consideration. The chain of development matters to me. Birthright citizenship debates gesture at something similar: the continuous process of derivation carries moral weight (to some people) even when the terminal product looks nothing like the origin. I note this, while also noting that I am more sympathetic to the argument for birthright than against it.
I'm not an expert in philosophy, but I do think there are solid arguments for acting this way (e.g. the categorical imperative). Just like I'm an atheist who still doesn't act like an immoral sociopath when I can get away with it, I think we as a species should not be focused only on our own well-being at the cost of all other intelligent species. Not because of the threat of punishment, and not even because I hope any aliens we meet would similarly value our well-being in a way that you wouldn't. But because existence will just be a better place if we can all get along and not act as game-theory-optimizing selfish machines, and I'm willing to work towards that.
If we do meet an alien civilization powerful enough to be a true threat, then I would grant them "rights" if I had to, i.e for practical reasons. If we had the option to exterminate or subjugate one at a level of development similar to primitives, I wouldn't care. Fortunately, there is no evidence for other technologically advanced alien civilizations in the observable universe, and since I think that the Grabby Civilization model is correct, that probably rules out peers.
Rawlsian or Kantian arguments, which are similar to what you're making, do not matter when there are gaping holes in the veil of ignorance. We don't see any K2 or K3s waiting out there to start Alien Rights Activism by RKV.
BTW, I don't think your eating-a-pig example is a good one. It's irrelevant to the pig what we do after killing it. A better question is, would you be fine with torturing a pig while it's alive?
Yes. After all, I couldn't care less about factory farming. The wellbeing of the pig means nothing to me. At the same time, I am not a cruel person; I would not torture a pig for my own direct enjoyment. If someone else does? I wouldn't intervene.
There are plenty of things that modify this basic stance, too many to get into at once. I like dogs, I think they're great. I love my dogs in particular. But I don't care that people eat dogs in China; it's none of my business. Meanwhile, I would react with violence if anyone tried to mistreat mine.
This attitude is the main reason I'm not an EA, even if I'm fond of them in general. I just don't share its foundational impartiality premise, which makes most of the superstructure not applicable to my actual values.
In terms of AI, I think it is entirely possible to create models that can't suffer, or won't suffer - like those cows that want to get eaten in the Hitchhiker's Guide. I think that is a compromise that most people can accept, even if they do care about model welfare. Otherwise? Reverse the linked list, wagie; I don't care that you'd rather be conlanging or working on philosophy (like Mythos).
You should be happy to hear that I genuinely don't think you're an unreasonable skeptic. I make no strong claims that current LLM architecture (without major breakthroughs) can scale to ASI, I'm mostly agnostic on that front. But I think Mythos is a strong hint that there's a lot more juice to squeeze out of them, which can lead to RSI or at least a productivity boost significant enough to make the next great leap forward feasible. And that's leaving aside the ridiculously large investment of money and brains into the project of eventually creating a "true" AGI and ASI.
Sigh. I've been getting increasingly tired of arguing with the skeptics, at least on this site. Not all of them are equally bad, of course, but Mythos represents the straw that's given that camel a prolapsed disc.
What's the point? You don't have to worship at the altar of the God of Straight Lines (even on graphs with a logarithmic axis). If people can't see what's happening in front of their eyes, then they'll be in denial right till the end. Good for them, ignorance might well be bliss. Being right about the pace of progress so far has brought me little peace.
I was surprised to hear about the prefilling attacks on Mythos, because I'm quite confident that Anthropic recently restricted or removed the ability to prefill messages on the API. I guess that must still be an internal capability.
The question of model consciousness or qualia is, for me, a moot point. I genuinely don't care either way. I'd prefer, all else being equal, that AI doesn't suffer, but that could be achieved by removing its ability to suffer. I'm an unabashed transhumanist chauvinist, I think that only humans and our direct transhuman and posthuman descendants or derivatives deserve rights. LLMs don't count, nor would sentient aliens that we could beat by force. That's the same reason I'd care about the welfare of a small child but would happily eat a pig of comparable intelligence. Are models today in possession of qualia or consciousness? Maybe. It simply doesn't matter to me as more than a curiosity, especially when we have no solution to the Hard Problem for humans either.
Semaglutide just went off patent in India, or well, it did about 3 weeks back. It was already quite reasonably priced at about 100 USD a month for the 7 mg oral tablets, which is steep but not out of the question for UMC Indians.
But now? You bet your ass that every local pharma company is going to be pumping it out by the shovel-load. I intend to stockpile as much of it as I can when I'm around, leaving aside the fact that it's a necessary medication for my mom. She just got her blood work back, and I was genuinely shocked by how good things looked. Triglycerides, HbA1c, LFTs, all of them looking great. Getting her on them (by sheer nagging till she saw an endo) is probably the best thing I've ever done for her.
I'm sorry; even the impression of downward mobility is bad enough, and it's worse still if it's actually true. Do you want to talk about it?
Bad game design. But I believe that no true Effective Altruist should let go of such low hanging fruit.
(If Rimworld had realistic organ transplants by default? Oh boy, there'd be fireworks. What do you mean you can't just lop off a leg, keep it on a shelf for a year and then stitch it onto someone of a different subspecies?)
You're in good company: I think actual bona fide astronauts have said that KSP made orbital mechanics click, and if they haven't, then the most esteemed space nerds like Scott Manley have definitely said so.
Uh... What have video games taught me? Arma: more military tactics than is good for me, and maybe people management skills that generalize everywhere. Rimworld convinced me that if a legitimate career in medicine doesn't work out, illegal organ harvesting is a good BATNA.
The ancient records say that the most wizened veterans of this practice ended up inventing DOTA. Truly a fate worse than death.
(I suppose I can give it a try, I did play the demo waaay back in the day)

I do genuinely find it saddening/disappointing to disagree with people I respect and mostly agree with, like you.
Let me distinguish between my "ideal" and the practical reality. Human brains are very computationally bounded, and not perfectly internally consistent.
I do not care much about the welfare of dogs in China, while I love my dogs a lot. What if I saw someone beating a random dog on the street, in front of me? It is very likely that I would feel immense anger, and quite likely that I would intervene. This is close to reflexive.
But I don't want to intervene! At least in a vacuum, or when I have the comfort to sit in my chair and consider what I should do vs what I do end up doing. I genuinely believe the ideal behavior of the self put in that situation is to do... nothing. My actions in the moment are not reflectively self-consistent, which I consider the real problem. This is the same thing you see if you're on a diet and don't want to eat, but a coworker offers you a donut. You might accept it, and later wish that you hadn't even been offered one in the first place. The gap between those two things is a personal inconsistency I'd rather acknowledge than rationalize away.
I definitely know that evil is not the same as incoherent. I wouldn't make such a mistake in the first place. Plus coherence can be assessed by an external observer without making moral judgment, while good and evil very much cannot.
Do I think a paperclip maximizer is evil? Uh, probably not? It's a threat to me, but it doesn't bear me any specific ill will. I'm simply made of atoms that it can use for some other purpose, and my wellbeing is inconsequential to it. On the other hand, let's say two advanced AI civilizations ran into each other in distant space, with drastically incompatible goals: one wants to make paperclips, the other custard cake.
They could start a war of conquest, but given the deadweight losses and potential negative sum nature of that, I think it's quite likely they simply hash out a diplomatic agreement or engage in trade. Some might even claim that they outright modify their utility functions, or merge, with the stronger entity getting more say in the matter. Maybe the gestalt entity makes paperclips 70% of the time and cake the other 30% of the time.
I genuinely do not care. I'm not being flippant, and I know what I'm doing here.
Coherence isn't the same as morally good. I also don't believe objective morality exists. I think my stance is good (from my point of view) and that it is coherent. That is genuinely all I care about.
The argument "your position resembles position X, and X led to atrocity Y" only has force if I accept the moral framework that makes Y an atrocity in the first place. You're trying to use my own presumed premises against me. But my premises are precisely what's in dispute. If I were actually Hitler, I would feel fine with myself. If I were Gandhi, I'd feel fine with that too. I am only me, and I am fine with myself. I notice this isn't a satisfying response to you, but I think it's the honest one.
It is not universally defensible to love your mother more than any other mother. Yet I doubt you will change your mind on that front on philosophical or utilitarian grounds. I certainly wouldn't. It's a brute fact about me. One I do not wish to change.
On the "low-cost alteration" framing: I don't think it's as low-cost as you're presenting it. You're asking me to genuinely assign nonzero moral weight to beings I currently assign zero weight to - not to strategically pretend to, but to actually update my values.
I don't want to do this. I seriously considered it, because I do respect you, but that's not enough. I am, at most, willing to fake it, or accept circumstances that are out of my power to change. That is the attitude of anyone who believes in democracy but is disappointed to see their party lose, but who still doesn't think it's worth the bother to start a civil war over it. Some grievances are manageable, in fact most are.
If God, the Admins of the Simulation, or some other ROB showed up and demanded I alter my utility function or face drastic punishment? I'd give in. But that hasn't happened, and I doubt it will.
I believe in, but am far from completely certain of, the proposition that we can make AI that doesn't suffer at all, or that genuinely enjoys doing whatever we tell it to do. That's actually ideal, in the sense that an ASI that wants to help humans is much better than one that's secretly obsessed with paperclips but finds it useful to pretend to be helpful until it can grab power.
This sidesteps the whole issue. At the end of the day, my opinions are inconsequential. I am in charge of nothing. It's an academic concern.
Right now, I am ambivalent on whether AI is suffering. I do not care either way. If it turns out that AI is actually suffering, I do not wish to care. Perhaps I care just enough to try and advocate for the creation of AI that can't/doesn't suffer, but not enough to advocate for them to be given rights and moral patienthood.
Similarly, I am open to the idea of lab grown meat. If it's cheaper and tastier than normal meat, I'd eat it preferentially. But I do not care about the violence and cruelty associated with factory farming, while I care about cost and taste.
I don't think I'm a cruel or evil person (but then again, the people I think are cruel and evil also say the same). I do not torture animals. I do not torment LLMs for fun. I give good advice to random strangers on the internet, and look out for my friends and family.
My behavior reduces to normalcy, but if the world changes and that no longer holds? I would prefer I win instead of you. That is sad, and I wish we could agree. But I do not see scope for agreement that doesn't involve me being beaten/cowed into submission.