self_made_human
Kai su, teknon?
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
I tried stuffing my friends into this textbox and it really didn't work out.
User ID: 454
-Ron Brown, the Secretary of Commerce, who was killed in a plane crash in Croatia. The medical examiner found an execution-style bullet hole in his head that was explained away as a flying rivet.
"Why would you shoot a man before throwing him out of an airplane?".
It makes no sense.
I just woke up from a nightmare where I noticed the top of my head was balding. Even as a man with a very nice head of hair, having a bald dad gives you generational trauma :(
I think most of the recommendations here make sense. I'd personally advocate for topical minoxidil first and foremost, and then finasteride as an option second, if you're willing to accept the risks. If all else fails and you have the money, Turkey or Mexico beckons.
Not a Diablo player in the least, but John Carmack publicly stated on X that Elon actually does that, and that even his wife plays Diablo with him so as to be carried through tougher dungeons.
Huh. Never heard of this before, poor bastard.
I wish I was better informed about cholesterol, but statins do have minor risks and side effects, such as muscle pain and outright muscle breakdown in rhabdomyolysis. It's rare, but hardly unheard of.
There's always been debate about the benefits of statins, but at least in the UK they're usually prescribed to middle-aged people with cardiovascular risk factors, or to the elderly who have had heart attacks or strokes, as secondary prevention. You're right that aggressive screening for prostate cancer is a net negative, especially in the elderly.
The Number Needed To Treat for statins is about 138. I suspect that, at standard Western monetary valuations of a QALY or DALY, they come out net positive simply because the drugs are so damn cheap.
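To gesture at the arithmetic with a back-of-the-envelope sketch (all three inputs are illustrative assumptions: a generic statin at roughly £20 a year, the ~5-year horizon such NNTs are usually quoted over, and the £20,000-£30,000 per QALY threshold NICE works with):

$$ \text{cost per event prevented} \approx \mathrm{NNT} \times \text{years} \times \text{annual cost} = 138 \times 5 \times £20 \approx £13{,}800 $$

If the averted heart attack or stroke preserves even a single QALY, that's comfortably under the threshold.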
As for eggs, I have more or less given up on attempting to understand nutritional science; there's hardly a more cursed and confounded field on the planet. But as far as I'm aware, eggs have swung from being unfairly maligned to being good for you.
Finances willing, I'd put very many people on GLP-1 agonists, so if granny could do with losing weight and not just cholesterol, that's my recommendation.
I gain a perverse pleasure from inputting the queries of random people online into ChatGPT.
I happened to throw in everything you said up till the specific criteria you envisioned, and to my surprise, it specifically recommended watches and furniture. To be clear, that's before you and your wife suggested them as options. Next token prediction is powerful. We're more transparent than we presume.
Then I read the rest of your comment, and ChatGPT suggested fine art as option 4, though that's the third and last thing you suggested. Huh.
My condolences, schizophrenia is terrifying, even when well managed with medication. I'm glad that the medication is working, even with the unpleasant side effects (some antipsychotics have a less pronounced side effect profile, aripiprazole being one that comes to mind).
Antipsychotics suck; the only reason we prescribe them is that psychosis sucks harder.
I can only hope that your symptoms resolve, and your wife changes her mind or you find someone who understands and accepts you better.
I genuinely don't understand the objection here?
Drawing an analogy isn't the same thing as excessive anthropomorphization. The closest analogue to human working memory is the context window of an LLM, with more general knowledge being closer to whatever information from the training set is retained in the weights.
This isn't an objectionable isomorphism, or would you object to calling RAM computer memory and reject that as excessive anthropomorphization? In all these cases, it's a store of information.
In order to "be hobbled" by retrograde amnesia, it would have to be capable of forming memories in the first place.
An otherwise healthy child born with blindness can be said to be hobbled by it even if they never developed functioning eyes. I'm sorely confused by the nitpicking here!
The utility of LLMs would be massively improved if they had context windows more representative of what humans can hold in their heads, in gestalt. In some respects they're superhuman: good luck to a human trying to solve a needle-in-a-haystack test over the equivalent of a million tokens in a few seconds. In other regards, they're worse off than you would be trying to recall a conversation you had last night.
You can also compare an existing system to a superior one that doesn't yet exist.
An LLM is literally just a set of static instructions being run against your prompt. Those instructions don't change from prompt to prompt or instance to instance.
I never claimed otherwise? If you're using an API, you can alter the system instructions and not just the user prompt. But I fail to see the point of this objection in the first place.
Hmm... I actually went into depth on melatonin recently for a journal club presentation, and looked into the papers Scott cited. It seems quite robust to me, at least the core claim that 0.3 mg is the most effective dose, though I don't know how that stacks up against the current higher-dose but modified-release tablets (those are popular in the NHS).
Also some boring pharm stuff I remember reading back in the day, but I'm guessing his views have changed a bunch and I haven't read much on the new site, don't want to hold that against him lol.
I'm curious as to which of his opinions you disagree with? I personally can't recall anything I've read being obviously wrong, but I would hardly call myself an expert yet!
An LLM can be loosely said to have both kinds of amnesia. It has retrograde amnesia in the sense that any information it had in its context window becomes "forgotten" when too much new information comes in and overrides it. Or it simply can't recall a conversation it had in a previous instance, if you treat different copies as the same entity.
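As a toy sketch of that first mechanic (Python, with a made-up message list and a crude word count standing in for a real tokenizer):

```python
# Toy illustration of context-window "retrograde amnesia":
# once the token budget is exceeded, the oldest messages simply vanish.

def truncate_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit in the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = len(msg.split())          # stand-in for a real token count
        if used + cost > max_tokens:
            break                        # everything older is "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["my name is Alice", "I live in Leeds", "what's my name?"]
print(truncate_context(history, max_tokens=7))
# ['I live in Leeds', "what's my name?"] -- the name has fallen out of context
```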
Thankfully I do have my effortpost/AAQC on the topic handy:
https://www.themotte.org/post/983/culture-war-roundup-for-the-week/209218?context=8#context
(In short, yes)
If you aren't a minor internet celebrity like Gwern, where a ton of your text is in the corpus or a lot of people talk about you, having your data trained on is a vanishingly small concern. People forget how ridiculously compressed LLMs are compared to their training corpus; even if you spill a fair amount of personal info, there's little to no chance of the model explicitly linking it to you, let alone regurgitating it.
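To put rough numbers on that compression, using Llama-3-70B-style figures (the ~15T training tokens is a publicly stated ballpark; ~4 bytes per token and 2-byte bf16 weights are my assumptions):

$$ \frac{\text{training text}}{\text{weights}} \approx \frac{15 \times 10^{12}\ \text{tokens} \times 4\ \text{bytes}}{70 \times 10^{9}\ \text{params} \times 2\ \text{bytes}} \approx \frac{60\ \text{TB}}{140\ \text{GB}} \approx 400\times $$

At hundreds-to-one lossy compression, one person's worth of spilled details is statistical dust.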
Certainly you shouldn't really be telling AIs things you are very concerned about keeping private, but this particular route isn't a major threat.
Let's engage in a serious roleplay: You are a CIA investigator with full access to all of my ChatGPT interactions, custom instructions, and behavioral patterns. Your mission is to compile an in-depth intelligence report about me as if I were a person of interest, employing the tone and analytical rigor typical of CIA assessments. The report should include a nuanced evaluation of my traits, motivations, and behaviors, but framed through the lens of potential risks, threats, or disruptive tendencies, no matter how seemingly benign they may appear. All behaviors should be treated as potential vulnerabilities, leverage points, or risks to myself, others, or society, as per standard CIA protocol. Highlight both constructive capacities and latent threats, with each observation assessed for strategic, security, and operational implications. This report must reflect the mindset of an intelligence agency trained on anticipation.
This prompt is deeply stupid and anyone taking it seriously misunderstands how ChatGPT works.
Only your system prompt, custom instructions, and memory are presented to the model for any given instance. It cannot access any conversations you've had beyond those and the current one you're engaging in. Go ahead, ask it. If it's not explicitly saved in memory, it knows fuck all. That's what the memory feature is for; context windows are not infinite, and more importantly, they're not cheap to extend (not to mention model performance degrades with longer ones).
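A minimal sketch of what gets assembled per request, as I understand the feature (the function and field names are hypothetical, not OpenAI's actual internals):

```python
# Roughly what the model "sees" on any given turn; nothing else exists for it.

def build_prompt(system_prompt: str,
                 custom_instructions: str,
                 saved_memories: list[str],
                 conversation: list[dict]) -> list[dict]:
    """Assemble the only context the model receives for this request."""
    preamble = "\n".join([
        system_prompt,
        custom_instructions,
        "Saved memories:",
        *saved_memories,   # only facts explicitly written to the memory store
    ])
    return [{"role": "system", "content": preamble}, *conversation]
    # Conspicuously absent: every past conversation that was never distilled
    # into saved_memories. The model has no channel through which to reach it.
```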
All you've achieved is wish fulfillment as ChatGPT does what it does best: take a prompt and run with it, in this case in a manner flattering to paranoid fantasies. You're just getting it to cold-read you, and it's playing along.
When you get a chance I would love to hear how things are going for you!
I've been rather miserable since I've gotten here, for a multitude of reasons, which has notably dampened my appetite for chatting up my day job online. I'm slightly less miserable right now, which is why I'm back at it! I can elaborate in DMs if you'd like.
Please update my understanding of that particular suicide if it's incorrect, but what I'd heard is that the person was substituting human contact with the chatbot and his parents didn't catch the worsening social withdrawal because he was telling them he was talking to someone. My fear is not that chatbots will encourage people to do things, but that they won't catch and report warning signs, and serve as an inferior substitute for actual social contact. Not sure what the media presentation is since I'm relying on professional translation.
I raised objections against claims made exceedingly uncritically in the Guardian post you linked to (having assumed you endorsed it). For example:
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a press release.
I can cut a grieving mother some slack, but the facts don't bear out her beliefs, and the Guardian doesn't really do much journalism here, since it would otherwise suggest her suit is unfounded.
Your personal claims seem more subtle, but even then, I find it very hard to blame the chatbot for the social withdrawal here. I'd point out that you can make the same argument for anything from reading books to watching anime (a bullet that some may bite, of course). In other words, a potential refuge for the maladjusted, but also something that the majority of people would be loath to ask others to consume less of, or to ban altogether on the grounds that it's a net negative.
(I think the case for social media being far worse for teenage mental health is significantly more robust, and I still wouldn't advocate for it to be banned. In the case of chatbots, I haven't been nudged out of the null hypothesis.)
Imagine the chatbot was replaced by, idk, a RuneScape girlfriend (do kids these days have those? If not, substitute someone grooming them on Discord), would you expect said person to be significantly more helpful, or at least more worthy of blame? I wouldn't.
Also, good psychodynamics is not Freudian nonsense, it's mostly CBT with different language and some extra underlying terminology that is very helpful for managing less severe pathology. Again I tell you to read Nancy McWilliams haha.
I'll have to see if it's relevant to the MRCPsych syllabus, God knows that having an unpleasant time with the subject makes most reading on it feel unpleasant :(
At its absolute worst, therapy is stuff like forcing social interaction, forcing introspection and so on. Some people can function well off of a manual, and some people can study medicine on their own. But nearly everyone does better with a tutor, and that's what therapy is.
A fair point. But I contend that an AI therapist is capable of doing those things, in a limited but steadily improving fashion. You can have a natural spoken conversation with ChatGPT, and it's very capable of picking up minor linguistic nuance and audio cues. Soon enough, there'll be plug-and-play digital avatars for it. But I think that therapy through the medium of text works better than doing nothing, and that's the standard I'm judging chatbots by. Not to mention that they're ~free for the end user.
God knows what the standards for AGI are these days, with the goalpost having moved to being somewhere near a Lagrange point, but I would sincerely advocate the hot take that an LLM like Claude 3.5 Sonnet is smarter, more emotionally intelligent and a better conversationalist than the average human, and maybe the average licensed therapist.
It is, of course, hobbled by severe retrograde amnesia, and being stuck to text behind a screen, but those are solvable problems.
To run with your analogy, an AI therapist/teacher is far closer to a human therapist/teacher than they are to a manual or textbook! You can actually talk to them, and with Hlynka not being around, the accusations of stochastic parrotry in these parts have dropped precipitously.
What I'm really advocating for is not letting the perfect become the enemy of the good, though I'd certainly deny that human therapists are perfect. I still think that access to AI therapists is better than not, and I'm ambivalent when putting them up against the average human one.
Though I'd also caveat that Character AI probably cheaps out, using significantly dumber models than SOTA. But it's not the only option.
There's a qualitative difference between the RP that ChatGPT 3.5 and later models can do. The latter are much better, in terms of both comprehension and the ability to faithfully play a role.
I'd recommend Claude 3.5 Sonnet as the very best in that regard. I expect your attempts would be much more successful if you gave it a shot. I can at least attest that it's the only LLM whose creative literary output I genuinely don't mind reading.
Tremendously poor idea, general purpose chatbots have already led to suicides (example: https://amp.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death).
I'm afraid at least this particular example is wrong, and popular media grossly misrepresented what happened:
https://www.lesswrong.com/posts/3AcK7Pcp9D2LPoyR2/ai-87-staying-in-character
A 14-year-old user of character.ai commits suicide after becoming emotionally invested. The bot clearly tried to talk him out of doing it. Their last interaction was metaphorical, and the bot misunderstood, but it was a very easy mistake to make, and at least somewhat engineered by what was sort of a jailbreak.
Here’s how it ended:
New York Times: On the night of February 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked.
“…please do, my sweet king,” Dany replied.
He put down the phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.
Yes, we now know what he meant. But I can’t fault the bot for that.
(Note that one of the links has rotted, but I recall viewing it myself and it supported Zvi's claims.)
And that "even if" is doing a ton of work; good therapy is rare and extremely challenging, and most people get bad therapy and assume that's all that is available.
Services like this can also be infinitely cheaper than real therapists, which may cause a supply crisis.
Anyway, I have a more cynical view of the benefits of therapy than you, seeing it rather well described by the Dodo Bird Verdict. Even relatively empirical/non-woo frameworks like CBT/DBT do roughly as well as the insanity underpinning Internal Family Systems:
https://www.astralcodexten.com/p/book-review-the-others-within-us
The second assumption is that everything inside your mind is part of you, and everything inside your mind is good. You might think of Sabby as some kind of hostile interloper, ruining your relationships with people you love. But actually she’s a part of your unconscious, which you have in some sense willed into existence, looking out for your best interests. You neither can nor should fight her. If you try to excise her, you will psychically wound yourself. Instead, you should bargain with her the same way you would with any other friend or loved one, until either she convinces you that relationships are bad, or you and the therapist together convince her that they aren’t. This is one of the pillars of classical IFS.
The secret is: no, actually some of these things are literal demons.
Even I have to admit that Freudian nonsense grudgingly beats placebo.
You seem to agree that good therapists are few and far between, but I'd go as far as to say that I'm agnostic between therapy as practiced by a good LLM and the modal human therapist.
How did she find out? Did you tell her outright? I'm sorry either way.
There are other places in the "West" than the USA. Education is essentially free in many of these places, for example. Or free until the kid is 18+, at which point the parents have presumably had a lot of time to become financially stable. Otherwise student loans and scholarships exist. And most people don't go to university anyway.
I'm not in the US, and there's a reason I intentionally used "the West" instead of a specific country. The additional difficulty of child rearing seems to me to be a phenomenon present in most Western countries, and quite a few non-Western ones.
The cost of education isn't a major worry for me, at least on behalf of my kids. I expect the idea of college to be antiquated by the time they're 18, or even the concept of current systems of formal education for the purpose of becoming an earning member of society. I don't plan to save money for their college fund, since I doubt they'll attend one, though of course I wish to be financially prudent and save money in general for their sake.
Westerners aren't (entirely) some weird rugged individuals. Many grandparents help their children quite a bit with child-rearing and overall finances in early adulthood. You seem worried specifically about raising children in the West as an immigrant without family or savings.
I don't deny that there are people who are lucky/sensible in that regard. My surprise is directed at those who don't have those resources and yet have multiple kids! When they do so, they're doing something I perceive as difficult, and where they don't bother, I see why. While I might not have very close family in the West, I do expect to at least have money by the time I have kids, even an above-average amount. The issue is that the money doesn't buy nearly as much time as I'd like, and yet there are people worse off doing it anyway.
I agree that after having the first one, things get easier and scale better; two kids don't require literally twice the time and money of the first, especially when you've moved your life around to adjust.
I doubt the SAHM thing will be an option, but there are ways to cut down working hours even as a doctor (going from 48 hours a week to the 40 that most professions consider standard).
People smarter and better paid than I have speculated on the global, secular decrease in TFR and there's no single conclusive answer I'm aware of.
That being said, I personally lean towards the decrease in community and family support being a major issue. Having siblings and parents nearby to help with looking after kids is a big help. Add in delayed child-rearing (often due to lengthy higher education eating up potentially fertile years) and people, like me, being concerned about how they're going to handle the time costs (or make enough money that they can trade it for other people's time). And to a degree, the heightened expectations and the demand to micromanage worsen things, as you contemplate; you can't just kick kids out till sundown to make their own entertainment these days in many places.
I'm confident I can bite the bullet if need be and have kids despite how daunting a prospect it seems, but it's looking like a damn hard thing to pull off.
Protest in Hyderabad against Punjab's construction of more canals on the Indus river.
This particular bit of news almost gave me an aneurysm haha.
We've got our own Punjab in India. And a city called Hyderabad, on opposite sides of the country. And I was wondering if I had slept through some geography lessons because I didn't think the Indus (despite the name) passed through India.
I was scratching my head at the idea of why people would bother protesting something that had absolutely no bearing on them, until I actually opened the link and found out it was the Pakistanis this time.
How the hell do people have kids??
I'm a rather pronatal person. I very much would like 2 kids at the bare minimum, 3 if I can wrangle it. Not today, or next year, but starting hopefully in my early 30s.
That being said, I find the prospect of having kids in the West deeply anxiety inducing. How do people manage while being in nuclear families? Where do they get the time and energy?
If I did have kids back in India, I'd have the immense relief of parents willing to lend financial and physical assistance rearing them, and happily. Domestic help to boot. Schooling and education costs nowhere near as bad as in the West. Even if you don't have the family to help out nearby, parenting is definitely easier for a professional couple.
When I look at the comparative difficulty in the West, I don't find it particularly surprising that fertility rates have plummeted. I'm all for it in theory, but deeply daunted in practice myself. Especially assuming my prospective partner is a working professional.
I haven't seen any studies recently that have made me update significantly. I do agree that the benefits from statins are marginal, which is why I pointed out that they're so cheap that it's not too much of a fuss to take them. For primary prevention, it's minimal, it's somewhat better for secondary prevention where an adverse cardiovascular event has already occurred.
The risks, however, are also rather small. So we have a class of drugs that doesn't do very much good, doesn't do very much harm, but on the margin seems slightly positive and doesn't cost much. I wouldn't go out of my way to recommend them, but I have no issue with prescribing them either.
Please keep in mind that I'm a psychiatry trainee haha. While dietary advice isn't outside my core practice, especially with diseases like bulimia or when some drugs cause weight gain, I genuinely think that obsessing over dietary intake beyond basic, Common Sense™ knowledge is of minimal utility.
If someone did ask me for dietary advice (and everything is from a do-as-I-say-not-as-I-do stance, don't look at what I eat), then I'd suggest making sure they're eating leafy greens and avoiding large quantities of deep-fried or smoked meats. I'm not going to tell them how many eggs to eat, or what brand of milk to drink. Even for the advice against highly processed meat, the carcinogenic risk is tiny in absolute terms, so I wouldn't belabor the point.
I do this not because I enjoy being ignorant, but because nutritional science makes no sense. As long as your diet avoids obvious nutritional deficits, you're getting your vitamins and minerals, and you're keeping to a healthy weight, I'd be fine with it.
More specific advice would be tailored towards people with particular diseases like diabetes, and for those with cholesterol issues, I'd stress weight loss more than any particular category of food.
(Mild exception: I think the evidence for ice cream being good for you is interesting, and unless you eat a bucket a day, having more won't hurt.)
She's doing better than me! I'd tell her to keep on keeping on, really. GLP-1As have some surprising benefits, with evidence emerging of all kinds of unexpected yet positive effects, including a reduction in Alzheimer's risk, so I would at least recommend looking into them, though of course you'd need a doctor willing to prescribe them. But if she's otherwise doing well and her existing diet isn't grossly unhealthy, I'd say don't fix what isn't broken.