self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("Attain immortality, or die trying")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
User ID: 454
For what it's worth, I'm not being sarcastic when I say I have a low opinion of the Hippocratic oath.
Seriously, "do no harm"? Am I allowed to use a needle to prick skin? Oh, it shouldn't be taken at face value, and there's some kind of implicit utilitarian calculus involved? Why doesn't it just say so?
Similarly I will not give to a woman a pessary to cause abortion.
There's a reason very few institutions use the original oath, leaving aside the random injunction against operating on kidney stones.
Do you have second thoughts?
Not particularly! I've certainly never had anyone identify a particular person on the basis of a post. The closest was when I was almost geographically doxxed, but the person doing it was acting mostly out of curiosity. There's no way for a casual actor to identify anyone I've described, and it's far too late to deploy the kind of OPSEC that truly motivated actors would have issues cracking. In other words, pray for me and not for anyone else I've written an ink-portrait of.
Could be generational. You seem often to seek out the snark; I'm far more traditionalist.
Well, you're an unusually sincere person. I like to think that I'm usually sincere and honest, but yes, I do enjoy a helping of sarcasm. At the very least, British humor appeals to me on a spiritual level.
My apologies, while I didn't interpret it as a challenge, I was slightly snarky in my reply because of an unrelated internet argument.
When it comes to formal case reports or research publications, there are relatively bright lines doctors are expected to follow. This varies heavily from place to place, but for example, I can use a CT scan of a patient in a publication without their express consent, as long as I make sure that things like names or IDs are reasonably redacted.
When it comes to random writing on the internet, there is some grey, but mostly "nobody really cares." If I had mentioned actual names and provided very specific information (and someone then raised a complaint), the GMC could theoretically come knocking, assuming they could identify me in the first place. I doubt Reddit would care; they're not the UK government, even if they sometimes act like its attack dog.
I mean, if I was writing about the UK. They don't care what I do in India as long as I don't break local laws or get into trouble with the police/local regulators. If I was in the UK, there is a small but non-zero risk associated, but once again, depends on what exactly I say. The British equivalent of this story, as written, would be fine.
not the Hippocratic oath.
Never swore it. I'm not kidding. Some places don't hold particularly high opinions of some long dead Greek bloke who said that doctors shouldn't operate on kidney stones. Not even the modernized version. It's not legally binding anyway, there are actual laws and professional codes of conduct that supersede it.
Since none of this contains patient-identifiable information, I'm in the clear. And for all anyone else knows, this might be an entirely fictional scenario with all characters simply fractured fragments of my psyche. I am also a dog on the internet, woof!
Beyond that, it depends on the jurisdiction, and even the UK isn't anal enough to come after me for something so trivial and vague.
Love your posts bro.
Thanks <3, whatever level of homo is socially acceptable these days haha.
Write a book.
I do, but it's about a cyborg psychiatrist who does way cooler things than I do. Also on hiatus, because his not-as-cool creator has a lot going on.
If you want a non-fiction book or memoir, I don't think I've quite got the material yet. It usually takes a lifetime to build that up. My job is usually (and thankfully) quite boring and mundane most of the time. I seem to come across something worth writing about once every few months or so, and the majority of the time it makes more sense as an essay.
And come to America, specifically Florida, I want to read your multi part write up dealing with our insurance, our minorities, and our whites.
I would if I could! I still harbor hope of moving to the States one day, at this point I would happily trade all the headaches American doctors face for the ones I have, let alone the massively higher pay. If not, I'm sure I'll visit at some point, and I would happily swing by if you'd have me. What's a gator but a very ornery dog? I can handle those just fine.
E: absolutely insane that you still have a Reddit account that’s 11 years old. I find that sort of thing fascinating as well.
Eh, it's there, I mostly use it to lurk these days and occasionally post. The closest I came to violating Reddit's TOS was Motte-posting, and that hasn't been an issue since I migrated here with everyone else. My engagement levels dropped drastically. Even if I had something to say, there are few places I'd want to say it, or where I'd expect a good reception. Culture War? That's here. Less controversial stuff? I happily crosspost.
In general, I think I'm a pretty good citizen by Reddit standards. I've only once been banned, on /r/SSC of all places for tangentially referring to the Motte as the place for CW issues, and that was quickly overturned on polite appeal. For what it's worth, it's less self-censorship than it is the fact that I do not enjoy engaging with the average Redditor.
Thank you for taking the time to write that up! It aligns with what other neurologists have said on Reddit, and my attempts to dig deeper.
liked staying up late = maybe just maybe, he may have an inkling that the episodes are more common in night (=nocturnal seizures).
I didn't get that impression, but I'm not going to make strong claims either way, since this clinical assessment was far from ideal. If I had the time, I would have drilled deeper, specifically looking for any temporal patterns, but at least the mom didn't mention one. In her words, the boy just liked staying up late, and that's more likely to be because he's got a phone.
Call the Resident, if possible.
Sadly, that probably wouldn't help. It is very difficult to contact a patient like that (EMR? What EMR?) and nobody would bother short of an acute emergency. At least we arranged a followup in a month, and I expect that the other doctor will probably be there. I'll drop him a text anyway, just in case it makes a difference!
The child was quite extroverted and responsive when talking to me or my colleague. If he was the shy type, he's better at hiding it than I am haha.
I can't really comment on his articulacy. My Hindi is far from the best, and his mother was the primary informant. But he sounded... fine?
If this was a once off? Kids do dumb things for no good reason. So do we adults. But the repeated pattern and general picture points towards something in the DSM and not "just a rambunctious boy child". But what precisely? Impossible to answer authoritatively with the information I have at present. I hope I do get to see the followup and final diagnosis, but I wouldn't bet on it.
For what it's worth, you can use the contact us option in the sidebar to message (all) mods. But it's probably just faster to ping or DM us, I know that I rarely check the general mod mail.
Yup. I've let it out of the cage, @ControlsFreak
Aggression related to panic attacks?
Very unlikely! Even plain old panic attacks would be unusual at that age, let alone such a specific kind of aggression. They're also not usually associated with amnesia or dissociation, more like hyper-focus.
After I posted on /r/Medicine, I had a few actual senior neurologists show up. They lean towards my hypothesis that it's some kind of seizure activity, but there's no consensus on whether it's a temporal lobe one, a different kind of focal seizure such as one affecting the frontal lobe, or a different variant called absence seizures that might be causing the sleep issues and poor academic performance. The only real way to know would be an EEG, which will hopefully be ordered the next time they attend (I regret not insisting on it, but I was a guest and deferring to those with more local expertise).
He'd be dead, wouldn't he? Survival time is usually less than a week after symptoms appear, though I'm surprised to learn you can harbor rabies for months or years before symptoms show up.
My mention of rabies was mostly sarcasm. The kid would have a lot of other issues before they (might) end up biting people. It would have been glaringly obvious and even here, with less than perfect triage and routing, very unlikely to show up in the psych OPD. But yes, if it was rabies, he would be done for.
I was about to claim that it's impossible for rabies to be latent for years, but apparently there are a handful of claimed cases?
https://www.nejm.org/doi/full/10.1056/NEJM199101243240401
Rabies infection in these three patients did not originate in the United States but resulted from exposures in Laos, the Philippines, and Mexico. Since the three patients had lived in the United States for 4 years, 6 years, and 11 months, our findings suggest that the onset of the clinical manifestations of rabies occurred after long incubation periods.
I am not sure how much to trust them. Either way, it's rare. But funny excerpt:
The patient's father recalled that the child had been bitten by a neighbor's dog shortly before leaving the Philippines for the United States. The dog was said to have remained healthy and was eaten about a month later.
and the CCP looks like it’s actually going to stand up to him about that
I would like to know more. I've heard about the firings, but not about any signs of the rest of the party developing a backbone.
Oops. Thanks!
Hah. It's only fair that you make it your life's goal to educate me on Heidegger (without asking for consent, though I probably would have given it anyway), then notice something attributed to Heidegger come up in conversation, and then, with dawning dismay, realize that it was a misattribution. I can imagine the disappointment! I revel in schadenfreude!
I'm not competent enough a psychiatrist to answer that question.
I've been aware of this phrase for years, mostly from Reddit. Is there a canonical definition, however? I say this with genuine curiosity / bewilderment. Capitalism, to my mind, is an economic system bounded by certain conditions. I didn't know (and I am dubious) about there being a temporal aspect to it.
"Werner Sombart used the phrase Spätkapitalismus (literally "late capitalism") in his 1902 work Der moderne Kapitalismus. Sombart was developing a stage-theory of capitalism, arguing that the system passed through distinct historical phases: early, high, and late. His framework was descriptive and evolutionary, not necessarily apocalyptic."
https://en.wikipedia.org/wiki/Late_capitalism
In the 21st century era of the global Internet, mobile telephones and artificial intelligence, the idea of "late capitalism" is again used in left-wing political discussions about the decadence, degeneration, absurdities and ironies of contemporary business culture, often with the suggestion that capitalism is now getting near the end of its existence (or is already being transformed into a post-capitalism of some sort)
The gist of it is that it's a shibboleth and a cue to boo the outgroup on command.
If there's anything someone dislikes about modern consumerism or globalization, it's a convenient brush to paint with. Gentrification? Late stage capitalism. Rent too damn high? Late stage capitalism. Netflix enshittified its offerings? Late stage capitalism.
The unresolved questions were: "late" in what sense? In comparison to what? How do we know? What could possibly replace capitalism? The liberal economist Paul Krugman stated in 2018 that:
"I've had several interviews lately in which I was asked whether capitalism had reached a dead end, and needed to be replaced with something else. I'm never sure what the interviewers have in mind; neither, I suspect, do they."
Neuroplasticity, as you probably intuited, is basically the mechanism by which brains work at all. Reading rewires brains. Suffering rewires brains. Learning to juggle demonstrably changes cortical gray matter density in a way you can see on an MRI, and nobody is writing Substack posts about the demonic influence of juggling on children. When someone says "screens rewire brains," the word doing all the actual work is "rewires" in the pejorative sense, meaning "changes in bad ways that are hard to reverse," but that claim is being smuggled in without justification, under cover of a neuroscience fact that's technically true but completely uninformative. Everything that does anything to you rewires your brain. The question is whether the rewiring is bad, and repeating the neuroplasticity point louder doesn't answer that.

It's actually worse than uninformative, because it makes the arguer sound scientific while doing no scientific work whatsoever. The neuroplasticity framing is rhetorical judo: it borrows the authority of neuroscience while gesturing vaguely at harm it has not actually demonstrated.
This matters because it makes the claim unfalsifiable in practice. If a child improves at chess from watching chess videos, that's also rewiring their brain, but presumably Davidson isn't worried about that one. The rewiring point can't distinguish between the two cases, so it isn't doing any of the work it's being credited with. What it's actually doing is priming the listener to accept that harm has been established before the argumentative heavy lifting has begun. I'd rather the harm be argued directly, at which point it would be subject to actual scrutiny, than laundered through the vocabulary of neuroscience.
"Screen time," while far from ideal as terminology, is also far from the worst offense around. The deeper problem is that the category is wildly underdetermined. It seems to matter enormously what the screen displays. A child who spends three hours reading Wikipedia articles about the Byzantine succession crisis, watching a documentary about migratory birds, and then video-calling their grandmother is doing something categorically different from one who has spent those three hours cycling through TikTok thirst traps and casino-mechanic reward loops dressed up as games. Lumping these together under "screens" and then asking whether "screen time" is harmful is a bit like asking whether "food time" is healthy. The answer will depend almost entirely on what food we're talking about, and the aggregate will tell you almost nothing useful.
The medium-is-the-message people have a point that the delivery mechanism shapes the experience in ways content alone doesn't capture. But even granting McLuhan more than he's usually owed, there is still an enormous variance in what screens deliver that gets erased the moment we start talking about "screens" as a unified phenomenon. Calling slot machines "levers" would be a more accurate description than calling all interactive digital media "screens," because at least all levers share the mechanical property of force multiplication. What screens share is a glowing rectangle that displays imagery, which is not doing much analytical work.
A lot of the older empirical literature was also methodologically shabby in ways that should give us pause before crediting its conclusions. Much of it was observational, relied heavily on self-report (or parent-report, which introduces its own distortions), lumped television with TikTok with WhatsApp with gaming with educational apps, and then asked whether the aggregate was good or bad. The effect sizes, when statistically significant at all, were in many cases embarrassingly small. Jean Twenge's widely-cited work was criticized by Andrew Przybylski and Amy Orben, who used the same datasets and found that the association between screen time and adolescent wellbeing was approximately the same magnitude as the association between wearing glasses and adolescent wellbeing. Spectacle-wearing doesn't cause depression; it's a proxy for other things. The same concern applies to screen time, which correlates with socioeconomic status, parenting style, pre-existing behavioral difficulties, and a hundred other things that are doing the actual causal work.
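To give a sense of just how small those effect sizes are, here's a quick arithmetic sketch. The r values below are illustrative magnitudes in the ballpark described for the screen-time literature, not figures taken from any particular paper; the point is simply that a correlation of that size explains a fraction of a percent of the variance in the outcome.

```python
# Illustrative arithmetic only: these r values are hypothetical magnitudes,
# not numbers pulled from any real dataset or paper.
def variance_explained(r: float) -> float:
    """Fraction of outcome variance accounted for by a correlation of r (r squared)."""
    return r ** 2

screen_time_r = 0.05  # roughly the size of the glasses-wearing association
moderate_r = 0.30     # a textbook "moderate" correlation, for contrast

print(f"r = {screen_time_r}: {variance_explained(screen_time_r):.2%} of variance")
print(f"r = {moderate_r}: {variance_explained(moderate_r):.2%} of variance")
```

An r of 0.05 leaves more than 99% of the variance in wellbeing unaccounted for, which is why "same magnitude as wearing glasses" is such a damning comparison.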
I'd say that it's not worth losing sleep over, except that the most robust and consistent negative findings deal with sleep, specifically that device use near bedtime disrupts both sleep onset and sleep quality, probably through a combination of blue-light effects on melatonin and the obvious fact that you can't scroll and sleep simultaneously. This is worth taking seriously precisely because it's one of the few findings that replicates, has a plausible mechanism, and shows an effect size large enough to matter. The irony, not lost on me, is that "no phones in the bedroom at bedtime" is not a very interesting or monetizable policy conclusion, so it gets lost in the noise of more dramatic claims about societal collapse. Good luck enforcing that for the kids, with how their parents embrace their phones.
Jonathan Haidt thinks children shouldn't be able to post on social media or have smartphone access, and there's something to this if we're being specific about the "posting photos of yourself" piece. The performative identity-construction that social media incentivizes does seem like a weird thing to encourage in adolescents who are in the middle of figuring out who they are, and there's a reasonable case that the particular feedback loops involved are nastier than equivalent analogue experiences of social humiliation, which at least fade from memory. But "no smartphones" as a category encompasses an enormous amount of genuinely useful functionality, and "no posting photos" is a much more targeted and defensible intervention than "no smartphone," which tends to be what people actually mean.
I'm also skeptical of enforcement mechanisms. Not because I think children's online safety doesn't matter, but because I don't trust that the rules will land where the advocates for them seem to expect. Age verification regimes tend to produce either security theater or comprehensive surveillance infrastructure, and comprehensive surveillance infrastructure does not stay narrowly targeted at protecting children for very long. The same legislative sessions that produce "think of the children" bills about social media often produce other bills I would find considerably more alarming. The willingness to build the infrastructure is the thing that should worry us, independent of the stated justification.
I should be honest about my personal stake in this, because it seems relevant. When I was a kid, my ADHD predominantly manifested as inattention. I was notorious for reading novels under the desk in class, reading while walking, compulsively reading every newspaper and the labels on shampoo bottles and the copyright page of books and anything else that had text on it. My parents were extremely conservative about digital affordances during my childhood and adolescence: no broadband internet connection, no smartphone, until late in my teens.
This did nothing good for me. You do not treat ADHD with sensory deprivation. I was not going to pay more attention in class because I didn't have a phone handy; I was just more likely to zone out and stare at a water stain on the ceiling and construct elaborate fantasies about the history of civilizations I'd invented. I was bored, in a persistent and grinding way that I now recognize as one of the more unpleasant features of the condition, and I'm genuinely grateful that advances in technology have made that particular flavor of boredom substantially more optional. ADHD medication improved my academics and my functioning in the world. Austerity did not. The restriction removed a coping mechanism without addressing the underlying issue.
I'm aware that my case doesn't generalize. Plenty of kids are not managing a neurological attention deficit when they're scrolling, they're just enjoying an entertainment product, and there's a reasonable question about whether that entertainment product is well-calibrated for their long-term flourishing. But I'm suspicious of framings that assume the counterfactual to device use is some kind of improving, wholesome activity, rather than the much more realistic counterfactual of staring at the wall, or in my case, reading the back of a cereal box for the fourteenth time.
I've watched a teenage relative of mine scroll through Instagram Reels, and it was not a pleasant experience. None of it was erudite. Most of it was AI-generated, and obviously so to anyone over twenty-five, though apparently not to her. The content was a kind of undifferentiated slurry of dumb pranks, "interesting" facts that were wrong, and videos that seemed designed less to convey anything than to fill attention with sensation. I wanted to say something. I didn't, because it wasn't my call and the headache of saying something would have outweighed the benefit. Also, she isn't a particularly bright kid, as hard as that is to say about your own kin. But I felt, for a moment, what the "screens are demonic" people feel, and I think I understand why they reach for that language.
(Don't get me started on an elderly great-uncle and his consumption of the most ludicrously fake AI-slop on YouTube. I did my best to inform him, but wise words only get you so far at that age.)
The problem is that "demonic" and "insane" and "evil" are not diagnostic, they're expressive. They communicate that the speaker has had a visceral negative reaction, which I also had. What they don't do is tell you anything useful about what the actual harm is, what causes it, how it might be addressed, or how to distinguish between the things that caused the visceral reaction and the much broader category of digital media that gets swept up in the resulting policy proposals. Louise Perry's instinct to distinguish between fairy tales on a screen and watching another child play on YouTube seems right to me, not because one is "screens" and the other isn't, but because they're different things doing different things to a child's attention and social cognition. That distinction is worth making carefully, and the "screens" framing makes it harder rather than easier.
If I were forced to endorse a population-wide intervention, it would be this: device manufacturers and online services should be required to provide genuinely functional parental controls, to be set up at the convenience of the person making the purchase. Not draconian age-restriction policies that produce surveillance infrastructure and don't actually work. Just real tools that let parents do what parents are supposed to do, which is make situated judgments about their specific kid, in their specific circumstances, with their specific needs, rather than relying on either blanket permissiveness or blanket prohibition. A child's use of electronics is something that should be monitored in conjunction with their behavior and academic performance, the same way you'd monitor anything else in their life that was potentially impacting them.
The people most confident that they know the right policy for all children are usually people who have identified a single dimension of risk, optimized hard against it, and are not tracking the costs of their proposed solution. The costs are real. Restriction has costs. Surveillance has costs. Boredom has costs. Social exclusion from peer networks that now largely operate digitally has costs. A child who can't participate in the group chat is not being protected from social life, they're being excluded from it, and that exclusion has downstream consequences that are unlikely to show up in studies asking whether "screen time" correlates with self-reported wellbeing.
Not to mention that if childhood and adolescence are treated as a sort of preparatory phase for adult life: are the adults doing anything different? We live on our phones; there are few facets of modern living not mediated by transistors, light-emitting diodes and the internet. And I think that's great: I have a device in my hands that, for about my weekly wage, allows access to nearly the sum total of human knowledge and the ability to interact with people across the globe with milliseconds of latency. I use it to learn more, say more, do more, and yes, entertain myself. If you can't manage to use such capabilities in an ennobling manner, I'm tempted to declare a skill-issue. Don't try and dictate terms for the rest of us, mind your own kids.
I've had at least 3 women I've been in relationships with complain about the inconveniences associated with big breasts and their plans of getting a breast reduction at some point. I was never very happy about that prospect, and I offered to carry them in my palms to help relieve the burden. For some reason, they never were as keen on taking up that offer as I was extending it.
You have my condolences.
Same here.
My understanding is that even a few years of experience at SpaceX is an easy ticket to a much more cushy/comfortable position elsewhere in aerospace, especially for a newly minted engineer. These guys aren't idiots, they weren't coerced into anything. They made the conscious choice of opting-in to work in the most exciting and fast paced company in their field, and most of them saw that as a golden opportunity. If they don't like it, they can quit, and many have. Elon isn't a slave-driver, he's just a hardass boss.
If that's the "best" solution you've heard, I really don't want to know what the others were.
So who is better, Bezos or me? It is going to depend on whom you ask
You're welcome to use whatever criteria for "goodness", I've just used mine. I value intelligence, conscientiousness, ambition, drive etc etc, and at least by those metrics, I think the type of billionaire I've singled out is ahead of the both of us. There are obviously other factors I care about, these are just the ones where they're clearly above average.
Nor would it be reasonable to say that he has more moral worth than me (which is what the "all men are created equal" line means).
I happen to disagree with that. I think different people can and do have different moral worth. I think a billionaire contributes more than a peasant in Africa, or a hardworking middle class person deserves more moral consideration than a serial killer, and thus I value their existence more, and would make ethical decisions accordingly. So does the rest of the world, going by revealed preference.
Easy to claim, harder to prove. If you have studies saying so, I'll take a look.
I've updated my opinion of Elon considerably downward over the past few years. This isn't motivated reasoning or tribal updating, I think his early work genuinely represented some of the most impressive entrepreneurial achievement of the century, and I weighted that heavily for a long time. It's just that the account has been drawn down pretty substantially at this point.
my space autism getting triggered unimaginably by space-x getting fucking shackled to a corpse right before IPO
SpaceX exists because Elon Musk willed it into existence through what can only be described as an unreasonable application of personal capital, obsession, and tolerance for failure. The prior probability of a private company successfully developing orbital rockets from scratch was, charitably, very low. He is not a steward of someone else's vision. He is the vision, or at least was its necessary precondition. You can believe the xAI merger is strategically stupid (I have no strong view either way) while still acknowledging that "guy makes questionable decisions about his own company" is a fairly weak indictment. The stronger criticism would need to be about the employees or shareholders who signed on under different assumptions, which is at least a real argument.
On billionaires more broadly, I find myself in the uncomfortable position of being genuinely more sympathetic to them than the median person in my reference class seems to be, to the point I'd have gone on that Pro-Billionaire March in SF if I was eligible and there, and I've spent some time trying to figure out whether this is motivated reasoning or just correct.
Here's the thing about capitalism that I think gets systematically underappreciated: it's one of the only wealth-generation mechanisms in human history that is even structurally capable of being positive-sum. The historical alternatives (conquest, extraction, inheritance, political rent-seeking) are zero or negative sum almost by definition. Someone taking your grain is not creating value; they are relocating it, while destroying some in the process.
Whereas Bezos building a logistics network that genuinely makes it cheaper and faster to get goods to people is, at minimum, attempting to create value, and by revealed preference it mostly succeeds. I'm simplifying, obviously. There are real extraction dynamics in modern capitalism. But the baseline isn't "billionaires take from the poor"; the baseline is "billionaires accumulate disproportionately while also expanding the total pie," which is a meaningfully different situation that calls for different policy responses.
The talent point is one people find uncomfortable to make explicitly, but I think it's basically right and the discomfort is doing a lot of work to obscure valid reasoning. The self-made billionaires specifically, the ones who started from, say, merely-upper-middle-class and compounded from there, represent a genuinely unusual combination of intelligence, risk tolerance, executive function, and social skill. You can find individual counterexamples, and luck is real and important, but as a population they seem to have a lot of the thing that makes economies grow. Artificially capping that seems like exactly the kind of policy that makes a satisfying fairness intuition and a mediocre growth outcome.
These people genuinely are *better* than you and me. Smarter, more driven, more ambitious, and more willing to take risks. All men are categorically not made equal.
("If you're so smart, why aren't you rich?" is not an entirely invalid argument)
I'll steelman the opposition: you could argue that beyond some threshold, wealth primarily compounds through rent-seeking rather than value creation, and that extreme concentration creates political capture that undermines the positive-sum game entirely. I think this is the correct version of the billionaire critique, and it's pretty different from "they are morally bad people who should be humbled."
The relationship between wealth and virtue is much weaker than progressive discourse implies, and arguably points in the opposite direction. If you're selecting for "demonstrates poor impulse control, short time horizons, willingness to harm others for small personal gain," you will find that distribution spread pretty evenly across income levels, possibly with some concerning concentrations nowhere near the top of the wealth distribution.
(If it's not obvious, this is not my stance on all billionaires. It doesn't extend to Russian oligarchs, tinpot dictators etc.)
For context, the cousins in question were murderous, rapey assholes who had cheated Arjuna and his siblings out of their birthright and then tried to assassinate them multiple times.
Poor bastard still feels bad about killing them because of familial ties, hence the little pep talk from his Uber driver Krishna.
Absolutely and unironically based behavior. Good luck! Probably don't tell her about the spreadsheet or the applied mathematics, at least before she's hopelessly smitten.