
self_made_human

Kai su, teknon? ("You too, child?")

15 followers   follows 0 users   joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as a given for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

I tried stuffing my friends into this textbox and it really didn't work out.

User ID: 454


I'd say my ADHD is quite mild. You wouldn't be able to tell at a glance, and I'm used to working hectic and cognitively taxing jobs without my meds, though they do help.

I'd say Ritalin was 6/10 effective for me, on a scale where a platonic-ideal 10/10 would have me locking in and working till I drop. It does help me focus, and I simply can't study without meds. If it hadn't been for Ritalin, I'd have flunked med school, since my old habit of cramming at the last minute no longer cut it once the textbooks approached the dimensions and weight of a healthy newborn.

It's mostly the side effects and comedowns that put me off it. I get anxious and jittery, and even taken early in the day, it makes me insomniac enough to be debilitating. This is mostly just bad luck and idiosyncrasy; the same drug gives my brother, who has worse ADHD than mine, terrible headaches even at the lowest dose. He breaks his tablets in half just to try and get by with less.

If you read Scott's review of stimulants, the one that consistently draws rave reviews is Desoxyn, which is a polite way of saying meth. At actually therapeutic doses, it works wonders, but it's not available legally in the UK, and most psychiatrists are scared to prescribe it even in the States.

I wouldn't be so harsh on him, and I'm actually quite sympathetic. Let the doctor who hasn't made a spelling mistake or mistaken a dose cast the first stone, and I'm not chucking any. Nothing glaring, or lethal, thank god, but all the steps we take to avoid this only minimize the risk; they can't eliminate it.

He didn't get the name or dose of the medication wrong, and usually capsules versus tablets is an irrelevant detail. If he was doing it electronically, it would be constrained by the list of meds recognized in the system. With pen and paper? Much more scope to go wrong.

Dextroamphetamine isn't the first choice for ADHD here, probably somewhere around 2nd or 3rd line. I can understand why he might just look up the dose in something that wasn't the BNF and put that down.

In fact, when I called back today to get this sorted out, I learned he'd called in sick, so it's possible that he wasn't feeling well when I saw him.

The income of most humans in such a scenario would also be nearly zero. Cognitive and physical labor would be entirely devalued.*

I'd expect anyone with even a modest amount invested to see it soar, and even savings would be elevated in terms of purchasing power.

The question is whether this will be enough.

*Even the most minimal human existence requires about a hundred watts of power and raw biological feedstock. You can't lower your wages below this without dying, and every dollar spent on human food and shelter would be much better spent elsewhere. A comfortable existence would be significantly more expensive. In the worst-case scenario, humans would be killed outright; slightly less bad, but still awful, would be being outcompeted and left to die by an uncaring ASI; in the less bad scenarios, we'd merely be marginalized and stripped of meaningful agency.

Some of them, most notably ChatGPT, are explicitly trained and prompted not to reproduce potentially copyrighted work like song lyrics. Though OAI's recent Model Spec has been updated to a standard where the LLM is supposed to decline politely rather than lie and say it's incapable of reproducing them.

It's not what he was describing. It was my extension, a claim that sufficiently rigorous and exhaustive "prompt" programming is just regular programming.

I have written a few programs (they compiled! eventually...), and I am aware of all the other miscellaneous minutiae a competent programmer must keep in mind, like dependency trees, versioning, and accounting for spaghetti code and legacy code that will collapse if you sneeze at it wrong. That's what I meant.

it can't possibly fuck it up

Is what I was gesturing to, taking all of that into account. I should consider myself lucky that I've never had to grapple with legacy code bases.

It turns out that at least in outpatient settings, the rule is that controlled substances need a hand-written prescription. Which strikes me as odd given that in all my inpatient work, I just had to tick a few boxes and sign a physical copy when it came to those classes of drugs.

This is eventually going to become part of your bread and butter - you should feel very certain that amphetamines of all kinds are better than methylphenidate (or not!) and eventually be familiar with the considerations for use of one or the other (especially since it impacts your own personal life).

Unfortunately, that's going to take a while. My current placement is psychiatry of old age, and the next one ought to be General Adult. It's probably not till I do one for children and adolescents (or learning disabilities) that I would be personally prescribing any. I can only go off my own experience, having exhausted the options back in India, and what I read online for now.

I did do a literature review! (though given that I have ADHD and was unmedicated when I did it, it's not going to be published anytime soon haha)

The effect sizes for dexedrine vs methylphenidate were 0.9 vs 0.8 in adults, within spitting distance. My impression is that methylphenidate is better tolerated by some, but it's already been so unpleasant for me that I'm eager to try anything else. (Don't even ask what fucking atomoxetine did; it was highly NSFW to say the least.)

This is going to be important for a few reasons. One is that Scott often elides some of the practical concerns that we need to know about (like actual availability, as you ran into), and he cues into very specific old evidence bases at times, which is fine for what/who he writes about but misses new innovation (let's see... psych example... how about the conversation about Trazodone as a sleep aid?). And importantly, he isn't necessarily the standard of care - your attendings, billing processes, and potentially malpractice attorneys (yes yes UK) are going to look at you funny if you take him seriously.

He also has a tendency to miss or underemphasize some of the research errors (some spotted in this article! What they are is left to the learner lol).

It's exceptionally cruel for you to burden a neurodivergent trainee with additional research burdens :(

That being said, I do hold Scott in very high esteem. I don't consider him infallible, of course, but my presumption would be to defer to him unless I had overwhelming evidence of error. I certainly wouldn't formally cite him in my medical decisions, at least at the resident level, but thankfully consultants have significantly more leeway in that regard, and I hope I get to that point eventually.

(I'm aware of trazodone as a sleep aid being an occasional prescription decision; do I take this as you asking me to evaluate whether it's ineffective at that job? I've only heard weak evidence, and mirtazapine would be the first port of call for insomnia anyway)

IIRC Vyvanse is now generic in the U.S. but in short supply (as is basically everything else for ADHD), I don't know what it is like in the UK but for my money it is almost always the better choice if the patient can get it and afford it. Being a prodrug presents a ton of advantages and I'm mildly irked at the way Scott is minimizing it.

The UK is also grappling with a supply shortage. I think dexedrine is comparatively uncommon enough that I have better odds of getting it than the alternatives!

I've previously been on an extended release formulation of methylphenidate, and it did nothing good for me, thanks to the increased duration of action. I've never tried an immediate release variant of either, but I'm willing to try the devil I don't know at this point.

Telling a computer what you want it to do with such clear terminology and logical consistency that it can't possibly fuck it up is just programming.

AI companies have a strong incentive to make prompting easy, and they already have; I recall the days of using the GPT-3 base model and trying to get it to do anything useful. Right now, the models are significantly smarter, and in fact are quite proactive about asking clarifying questions and making useful suggestions the user hadn't thought of. In the limit, this makes prompting beyond formulating an initial request redundant.

We're not there yet, but we're close. Eventually the systems will just understand intent or outright demand clarification, and fancy prompting won't add much to the equation.

Thanks my dude. Luckily I did find a good couple of months' worth of my previous prescription languishing in a dark corner of my room, so I won't be entirely screwed over if there's a large delay in getting the prescription amended, but it's a pain either way.

I'm too scared to actually try the Dark Web, largely because I have more to lose than the average citizen (I could be deported!). I did once have a friend with an Adderall prescription he didn't use, but he turned out to be an asshole and I wouldn't reach out to him.

My absolute fallback would have been scheduling a flight home and bringing as much Ritalin back with me as I could, or just having my family send it over with extended friends and family coming back.

For now I'm hoping a few phone calls will sort this out, and if not, I'll just suffer a little longer from taking a suboptimal medication that beats nothing.

Absolutely nothing, except that they don't exist in the UK. It's tablets or nothing, as far as the NHS is concerned, and private prescriptions have the same issue AFAIK.

We do have e-prescriptions! They're the default, and while the psychiatrist didn't mention a rationale for the written one, it was likely because he did it in a hurry, or because he's the old-fashioned type.

https://www.astralcodexten.com/p/know-your-amphetamines

Pure d-amphetamine works better than l-amphetamine or a racemic mixture. And I read elsewhere that dexedrine beats methylphenidate in terms of pure efficacy in adults.

I was the one who suggested it, mainly because I'm sick of the side effect profile of methylphenidate. I hope some of what I experienced was idiosyncratic and that dexedrine will prove more tolerable. I'll probably ask for an extended release formulation during the next follow-up appointment.

"The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment."

Yes. But I only see concerns about alignment. Which really just kicks the can down the road: if we align AI so that even a smart person can't jailbreak it into making them a virus, how do we prevent that smart person from creating their own unaligned AI, etc.?

I believe the Term of Art would be a "pivotal act". The Good Guys, with the GPUs and guns, use their tame ASI to prevent anyone else from making another, potentially misaligned ASI.

The feasibility of this hinges strongly on whether you trust them, as well as the purportedly friendly ASI they're unleashing.

As @DaseindustriesLtd has said, this form of pivotal act might require things like nuking data centers or other hijinks that violate the sovereignty of nuclear powers. Some bite this bullet.

Moreover, you're defending two contradictory positions here.

On the one hand, you seem ready to concede to metaphysical skepticism and the idea that knowledge is impossible. On the other hand, you're using the Naive Empiricist idea that systems can only be considered to exist if they have measurable outcomes. These are not compatible.

If what you're doing is simply instrumentally using empiricism because it works, you must be ready to admit that there are truths that are possibly outside of its reach, including the inner workings of systems that contain hidden variables. Otherwise you are not a skeptic.

I am large, I contain multitudes. As far as I'm concerned, there is no inherent contradiction in my stance.

Knowledge without axiomatic underpinning is fundamentally impossible, due to infinite regress. Fortunately, I do have axioms and presumably some of them overlap with yours, or else we wouldn't have common grounds for useful conversation.

I never claimed "skeptic" as a label; that's your doing, so I can only apologize if it doesn't fit me. If there are truths beyond materialist understanding, regretfully we have no way of establishing them. What mechanism ennobles non-materialists, letting them pick out Truths inaccessible to materialism from the ether of all possible concepts? And how does it beat a random number generator that returns TRUE or FALSE for any conjecture under the sun?

Non functionalists disagree that it is analogous. So you need to actually make that argument beyond "it is obviously so because it is so from the functionalist standpoint".

I must then ask them to please demonstrate where a Chinese Room, presumably made of atoms, differs from human neurons, also made of atoms.

If computationalism is true, computationalism is true.

I reject your claim that this is a tautology. A Chinese Room that speaks Chinese is a look-up table. A Chinese Room that speaks Chinese while talking about being a Chinese Room is a larger LUT. Pray tell what makes the former valid, and the latter invalid. Is self-referentiality verboten? Can ChatGPT not talk about matrix multiplication?
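To make that concrete, here's a toy Python sketch (my own construction, purely illustrative; the phrases, the replies, and the `chinese_room` function are all made up for this comment) of a Chinese Room as a pure look-up table. Note that the self-referential entry is just another row in the same table:

```python
# A Chinese Room as a pure look-up table: every utterance, including
# questions about the Room itself, maps to a canned reply. The "man in
# the room" executing this knows no Chinese; the table does the talking.
LOOKUP = {
    "你好": "你好！",                        # "Hello" -> "Hello!"
    "图书馆在哪里？": "在街对面。",            # "Where is the library?" -> "Across the street."
    # Self-reference is just a bigger table, not a new mechanism:
    "你是一个中文房间吗？": "我只是按规则查表而已。",  # "Are you a Chinese Room?" -> "I just follow the table."
}

def chinese_room(utterance: str) -> str:
    """Return the scripted reply for an input; no understanding required."""
    return LOOKUP.get(utterance, "请再说一遍。")  # Fallback: "Please say that again."

print(chinese_room("你是一个中文房间吗？"))  # The Room "talks about" being a Room.
```

Whatever is supposed to grant the self-referential entry "intentionality" has to be something other than the mechanism, because the mechanism is identical for every row.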

Whatever one thinks of our epistemic position, I always recommend humility.

I'm all for epistemic humility, but I fail to see the relevance here. It's insufficient grounds for adding more ontologically indivisible concepts to the table than are strictly necessary, and Searle's worldview doesn't even meet necessity, let alone strictness.

There's epistemic humility, and there's performative humility: a hemming and hawing and wringing of hands that we just can't know that things are the way they seem, that there must be more, and that somehow this validates my worldview, despite it having zero extra explanatory power.

Virtual machines have been a thing since 1965; Searle wrote the Chinese Room in 1980 and his nonsense about intentionality in 1983.

If someone has the gall to claim to disprove the possibility of artificial intelligence, as he set out to do, it would help to have some understanding of computer science. But alas.

The answer is still "the man doesn't know Chinese, the system does".

I agree with you but Searle and his defenders wouldn't. As far as I'm concerned, it matters not a jot if the system is embedded inside a brain, up an arse, or in the room their arse is resting in.

I expect Prompt Engineering will turn out to be the world's shortest lived career.

The only real solution I'm aware of is some form of Universal Basic Income. In other words, if the economy explodes as human cognitive and physical labor is automated, then governments tax it and redistribute it.

This will likely prove unpopular with the people and entities being taxed on their newfound wealth, and it remains to be seen whether governments/democracies will listen to their anxious and unemployed populace over entrenched interests who now hold most of the money and power.

I don't think the likelihood of this happening is high enough for me to relax and take it for granted.

Even if UBI were a thing, that doesn't necessarily mean that inequality wouldn't be. The future uber-wealthy might well be the descendants of those who already had wealth, or at least shares in FAANG. I'd take this as acceptable if it meant I wouldn't starve to death.

Blue-collar work won't be safe for long either. We're seeing robotics finally take flight, there are commercial robo-taxis on the road, and cheap robo-dogs and even humanoids on the market. The software smarts are improving rapidly, and so is the hardware. Humans are going to end up squeezed every which way.

There are no reassuring answers or easy solutions, but at least hope isn't lost that we'll come out of this unemployed yet rich beyond our wildest dreams. It only takes a trivial share of the light cone to make billionaires of us all, assuming the current ones will deign to share.

You are right that AIs will more heavily weight ideas that show up in their corpus. I understand this, and hence don't go into detail that would aid a bad actor more than a cursory Google search (I'm already stretching my own qualifications to do so).

You point out that AI Doomers (I'm not a Doomer in the strict sense; my p(doom) is well below 100%) are often the first to point out and plot how AIs might concretely be hostile. This is unavoidable in the service of getting skeptics to take the ideas seriously! I don't know how much time you've spent browsing places like LessWrong, but I assure you that I have seen a dozen instances of people pointing out that they have inside knowledge that would accelerate AI development or cause other catastrophes, without revealing it. (And the majority of them were serious people with qualifications to match, not someone bullshitting about their awesome secret knowledge that they're too benevolent to divulge.)

The best way is to design AI that is intrinsically aligned (Asimov's positronic AIs that, most of the time, must follow the 3 laws). Barring that (or, I would say, in addition to it), humans need to be able to threaten to destroy an AI if it turns genocidal. This might not rule out AI "accidents", but as you say, you would expect an evil AI to understand self-preservation if it is sophisticated enough to do real damage. There are probably a lot of ways to do this, and it might be best if they aren't made completely public, so maybe they are already underway.

Stopping a misaligned superintelligence is no easy task, nor is killing it. But in general, I agree that it would be best if we create them aligned in the first place, and to a degree, these aren't entirely useless efforts already. Existing RLHF and censors do better than nothing, though with open models like R1, it only takes minimal effort to sidestep censorship.

For what it's worth, Skepticism, which I take to be your view if you're making this objection, is also unfalsifiable. As are all statements in metaphysics.

I happen to be a metaphysical skeptic myself, but this isn't an argument. We're talking about something more fundamental than notions of falsifiability or correspondence.

If we really want to get into this, then proving (and disproving) anything, absent axioms, is mathematically impossible.

This makes axioms necessary to be a functional sapient entity. Axioms are thus incredibly precious, not to be squandered or proliferated lightly.

To hold as axiomatic that there exists some elan vital of "intent" that the room lacks, but a clearly analogous system in the human brain itself possesses, strains credulity to say the least. If two models of the world have the same explanatory power, and do not produce measurable differences in expectation, the sensible thing to do is adopt the one that is more parsimonious.

(It would help if more philosophers had even a passing understanding of Algorithmic Information Theory)
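(To gesture at what I mean here, and this is my own gloss rather than anything rigorous: Algorithmic Information Theory formalizes "prefer the more parsimonious model" via the Solomonoff/universal prior, which weights a hypothesis $h$ by its Kolmogorov complexity $K(h)$:

$$P(h) \propto 2^{-K(h)}$$

So of two models with identical predictions, the one with the shorter minimal description strictly dominates. $K$ is uncomputable, so this is a guiding ideal rather than a procedure.)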

Unless the writer of the manual understands reasoning to a sufficient degree as to provide exhaustive answers to all possible questions of the mind, this isn't possible. And it certainly isn't within the purview of the thought experiment as originally devised.

Why not? What exactly breaks if we ask that the creator of the Room makes it so?

It is already a very unwieldy object; a pure look-up table that could converse in Chinese is an enormous thing. Or is it such an onerous ask that we go beyond "Hello, where is the library?" in Chinese? You've already shot for the moon, what burden the stars?

If the Room can be equipped to productively answer questions that require knowledge of the inner mechanisms of the Room, then the problem is solved.

We don't know yet. We may possibly never know. But we can observe the phenomenon all the same.

For consciousness? Maybe. I'd be surprised if we never got an answer to it, and a mechanistic one to boot. Plenty of mysterious and seemingly ontologically basic phenomena have crumbled under empirical scrutiny.

Lo and behold, after your kind blessings, I dug through a pile of belongings in the back of my closet and found two pristine boxes of Ritalin, just ripe for the taking. I knew there had to be some of the fuckers lying around haha

I have a reasonable plan in mind for what I'd do with the $10 million. I'd probably pivot away from my branch of medicine and ingratiate myself into an Infectious Disease department, or just sign up for a master's in biology.* The biggest hurdle would be the not-getting-caught part, but there's an awful lot of Legitimate Biology you can do that helps the cause, and ways to launder bad intent. Just look at the apologia for gain of function.

There's also certainly Knightian uncertainty involved, but there are bounds to how far you can go while pointing to unknown unknowns. I don't think I'd need $1 billion to do it, though I'm confident it couldn't be done for $3.50 and a petri dish.

And whatever the actual cost and intellectual horsepower + domain knowledge is, it only tends downwards, and fast!

*If you can't beat disease, join them

It kinda seems like we do live in a world where any attempt to kill everyone with a deadly virus would involve using AI to try to find ways to develop a vaccine or other treatment of some kind.

The downside to this is having to hope that whatever mitigation is in place is robust and effective enough to make a difference by the time the outbreak is detected! The odds of this aren't necessarily terrible, but do you want it to have come to that?

It's ironic, though - the people who are most worried about unaligned AI are the people most likely to use future AI training content to spell out plausible ways AI could kill everyone on Earth. Which means that granting, for the purposes of argument, that unaligned agentic AI is a threat, this increases the risk of unaligned agentic AI attempting to use a viral murder weapon, regardless of whether or not that is actually reliable or effective.

Sorry, side tangent. I don't take the RISKS of UNALIGNED AI nearly as seriously as most of the people on this board, but for the sake of hedging, I do sort of hope those people are considering implementing the unaligned-AI deterrence plans I came up with after reflecting on it for 5 minutes, instead of (or along with) posting HERE IS HOW TO KILL EVERY SINGLE HUMAN BEING over and over again on the Internet :p

I would expect (and hope) that a misaligned AI competent enough to do this would be intelligent enough to come up with such an obvious plan on its own, regardless of how often it was discussed in niche internet forums.

How would you stop it? The existing scrapes of internet text suffice. To censor it from the awareness of a model would require stripping out knowledge of loads of useful biology, as well as the obvious fact that diseases are a thing, and that they reliably kill people. Something that wants to kill people would find that connection as obvious as 2+2=4, even if you remove every mention of bioweapons from the training set. If it wasn't intelligent enough to do so, it was never a threat.

Everything I've said would be dead-simple, I haven't gone into any detail that a biology undergrad or a well-read nerd might not come up with. As far as I'm concerned, it's sufficient to demonstrate the plausibility of my arguments without empowering adversaries in any meaningful way. You won't catch me sharing a .txt file with the list of codons necessary for Super Anthrax to win an internet argument.

The hard part is what I was alluding to, when I mentioned that during the gene-editing, you could copy and paste sections of genomes from unrelated pathogens. Nature already does this, but to a limited extent (bacteria can share DNA, viral replication often incorporates bits of the host or previous viral DNA still lurking there).

I expect that a competent actor could merge properties like:

  1. Can spread through aerosols (influenza or rhinoviruses)

  2. Avoids detection by the immune system, or has a minimal prodrome that looks like Generic Illness (early HIV infection)

  3. Massive lethality (HIV or a host of other diseases, not just restricted to viruses)

The design space pretty much contains anything that can code for proteins! There's no fundamental reason that a disease can't be both extremely lethal and have an incubation period long enough for it to become widespread. The only reason we don't have this, as far as I can see, is that nobody has been insane (and resourceful) enough to try.

Holding the former constant, the resource requirement is dropping precipitously by the year. Anyone can order a gene editing kit off eBay, and the genetic codes of many pathogens are available online. The thing that remains expensive is a proper BSL-4 lab, to ensure time to tinker without releasing a half-baked product. But with AI assistance, the odds of early failure are dropping rapidly. You might be able to do a one-off print of the Perfect Pathogen and, as long as you're willing to die, spread it widely.

You're just a Functionalist, exactly the sort of people the argument is supposed to criticize. Or you're missing the point.

My response is that there isn't a point to miss.

Searle is a Biological Realist, which is to say that he believes that processes of mind are real things that emerge from the biochemical processes of human beings, and that language (and symbol manipulation in general) is a reflection of those processes, not the process in itself. He thinks thoughts are real things that exist outside of language.

That strikes me as the genuine opposite of what someone with a realistic understanding of biology would believe, but I guess people can call themselves whatever they like. It looks like unfalsifiable Cartesian Dualism or a close relative, and what was forwarded without evidence is worth no more time rebutting with evidence.

To wit, he argues that what the room is missing is "intentionality". It does not have the ability to do anything but react to input in ways that are predetermined by the design of the Chinese manual, and insofar as any of its components are concerned (or the totality thereof), they are incapable of reflecting upon the ideas being manipulated.

What is so mysterious about this "intentionality"? Give the Room a prompt that requires it to reason about being a Chinese Room. Problem solved.

Your brain does "speak Chinese", properly speaking, because it is able to communicate intentional thoughts using that medium. The mere ability to hold conversation does not qualify for what Searle is trying to delineate.

What is the mechanism by which a thought is imbued with "intentionality"? Where, from a single neuron, to a brain lobe, to the whole human, does it arise?

I think it's far from clear that AI mitigates the issue more than it currently exacerbates it. I agree that it's already technically possible, and we're only preserved by the modest sanity of nations and a lack of truly motivated and resource-imbued bad actors.

In a world with ubiquitous AI surveillance, environmental monitoring and lock-downs of the kind of biological equipment that modern labs can currently buy without issue, it would clearly be harder to cook up a world-ending pathogen.

We don't live in that world.

We currently reside in one where LLMs already possess the requisite knowledge to aid a human bad actor in following through with such a plan. There are jailbroken models that would provide the necessary know-how. You could even phrase it as benign questioning; a lot of advanced biotechnology is inherently dual-use, and even GOF adherents claim it has benefits, though most would say those don't match the risks.

This is a huge problem for ending life on Earth; living is 100% fatal, but humans keep having kids. If you set an incubation period that is too long, then people can just live through it. I also think a long incubation period would dramatically raise the chances that your murdercritter mutates to a less harmful form.

In a globalized world, a long incubation period could merely be a matter of months. A bad actor could book a dozen intercontinental flights and start a chain reaction. You're correct that over time, a pathogen tends to mutate towards being less lethal towards its hosts, but this does not strike me as happening quickly enough to make a difference in an engineered strain. The Bubonic Plague ended largely because all susceptible humans died and the remaining 2/3rds of the population had some degree of innate and then acquired immunity.

Look at HIV, it's been around for half a century, but is no less lethal without medication than when it started out (as far as I'm aware).

Prions would not be the go-to. Too slow, both in terms of spread and time to kill. Good old viruses would be the first port of call.

Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.

Seems nonsensical to me. I fail to see how this person could have that inside their brain and fail to speak Chinese. How is that even physically possible?

So, take throwing a ball. The brain's doing a ton of heavy lifting: solving inverse kinematics, adjusting muscle tension, factoring in distance and wind, all in real time, below the level of conscious awareness. You don't explicitly think, "Okay, flex the biceps at 23.4 degrees, then release at t=0.72 seconds." You just do it. The calculations happen in the background, and you'd be hard-pressed to explain the exact math or physics step-by-step. Yet, if someone said, "You can't throw a ball because you don't consciously understand the equations," you'd rightly call that nonsense. You can throw the ball - your ability proves it, even if the "how" is opaque to your conscious mind.

If Searle were to attempt to rebut this by saying, nah, you're just doing computations in your head without actually "knowing" how to throw a ball, then I'd call him a dense motherfucker and ask if he knows how the human brain works.