@self_made_human's banner p

self_made_human

amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi

14 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.

User ID: 454

Also reading Legend of the Galactic Heroes again.

I tried watching the anime after seeing it shared as an example of a "rational"(ish) anime.

The first episode (all that I bothered watching) disappointed me greatly. The so-called strategic genius won a fleet battle against all odds by using tactics obvious to a particularly bright seven-year-old. Someone tell me if it's worth persisting despite poor first impressions.

Sure, if we're being strict about things. But then there's everything else Watt says, which makes me feel justified in saying that was his subtext/implication. He comes out and says so!

I'm probably misremembering. I think I've read the book at least 5 times, but the last time was probably over a year ago.

The point still stands: we have limited insight into the actual degree of consciousness in a sleepwalking state. It's clearly abnormal, but our understanding of neuroscience can't confidently rule consciousness out. Since the ability to form long-term memories is largely disabled, any consciousness present couldn't be reported by the sleepwalker afterwards (the same reason you start forgetting a dream as soon as you wake up).

If you've ever lucid dreamed (I haven't, sadly), you've demonstrated the ability to be aware and at least partially conscious during REM sleep. Sleepwalking is NREM behavior, sure, but it's not possible to say that the sleepwalker is entirely unconscious. We just don't know.

Even if they're performing complex motor behaviors, I strongly suspect that overall performance is hampered. They might, in rare cases, drive a car, but I doubt they drive as well as they would fully awake. I could be wrong, but without the ability to subject an active sleepwalker to a battery of cognitive tests, I'll stand by that. It's a very tricky subject to study.

Eh, I have mixed feelings on the topic. Watts did his best to rationalize the concept with evobio, but that only gets you so far with vampires. It's kinda cool, but they're far from plausible organisms.

Oooh they're scary dangerous predators that would murderise us all if they could. Yeah, and so could great white sharks, with their dead shoe-button eyes.

Unlike sharks, vampires are depicted as both amoral/murderous, and more intelligent than us silly humans.

We're not going to be murdered by sharks any time soon, and the sentimentality around the way some people treat them accords perfectly well with the stupidity of, as you point out, letting the vampires walk around unfettered. I can easily believe some people would be greedy and stupid enough to think they could make pets out of vampires and use them for PROFIT. But the vampires themselves? There's nothing there, they're just automata. Or sharks, perfect killing machines but no higher goal than that.

The thing is, they don't roam around entirely unfettered! In-universe, they're recognized as highly dangerous, and mitigation measures are put in place:

  • The original vampires were highly territorial hypercarnivores who couldn't stand competition. The resurrected ones had those tendencies ramped up; they were described as murdering each other if allowed into close proximity. Think shoving two male tigers into the same enclosure.

  • Their handlers thought that this instinctual intolerance of their own kind would prevent scheming and conniving. They were very, very wrong. The depiction of how the vampires coordinated their rebellion is excellent, probably one of the best portrayals of the power of decision theories for modeling and coordination. They just imagined what they'd do if in the place of another vampire, and vice-versa, solved for the equilibrium, and acted, independently and simultaneously, without ever having to actively exchange information with their kin. Hats off.

  • The crucifix glitch was weaponized against them: the belief was that if they went off the reservation, they'd die as soon as the drugs that suppressed their painful, lethal seizures wore off.
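The coordination trick described above can be sketched as a toy (my own illustration, not from the book): every agent runs the *same* deterministic model of the shared situation, so each can predict what the others will conclude and take the complementary action, with zero information exchanged.

```python
# Toy sketch of coordination without communication. All names and the
# task set are invented for illustration; the point is that the shared
# model is deterministic, so every agent recomputes the same equilibrium.

def joint_plan(agent_ids, tasks):
    """The shared model: a deterministic assignment any agent can
    recompute independently (here, sorted ids paired with sorted tasks)."""
    return dict(zip(sorted(agent_ids), sorted(tasks)))

def decide(me, agent_ids, tasks):
    """Each agent simulates the whole equilibrium, then takes its own part."""
    return joint_plan(agent_ids, tasks)[me]

agents = {"red", "green", "blue"}
tasks = {"cut power", "open doors", "jam comms"}

# Each agent decides alone; together the choices cover all tasks exactly.
choices = {a: decide(a, agents, tasks) for a in agents}
```

No messages pass between agents, yet the joint behavior is coordinated, because each one is modeling the same situation with the same procedure.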

The humans weren't entirely complacent, but they were still unforgivably insufficiently paranoid about creatures smarter than them, which they knew to be hostile by default. The Vampires consistently use their superior physical prowess to murder normal humans, not just their brains.

So why even let them have that physical prowess? It doesn't take a genius to say that "hey, maybe we should give them the grip strength of an obese 4channer". The Vamps were kept around for their brains, not their brawn. It added nothing while making them a greater threat. This is, as far as I'm concerned, giving the humans an idiot ball. The ways the vampires circumvented their other shackles is understandably hard to predict without the benefit of hindsight. Tearing people apart with their bare hands isn't.

You know what? I don't think he is engaging with the article. The article specifically mentions GPT 5.2 Pro seven times, two of which seem, to my read, to imply that that's what he's using. There is one moment where he just says "GPT 5 Pro". Perhaps he just happened to leave off the ".X" in this one spot. Perhaps I'm reading the other seven mentions of GPT 5.2 Pro wrong, and the dirty secret is that he's using 5.0. I suppose he doesn't say in big bold highlighted words, "I'm definitely using 5.2 and not 5.0," so sure, maybe one could say that it would be nice to have a clear statement.

I checked, and this seems correct.

On that basis, I can't really disagree with your claim that @Poug didn't engage with the article. Being charitable, it's exceedingly common to see this happen in the wild, so he might have jumped to conclusions, but neither you, nor the author, seems to have made that kind of error and it's unfair to criticize you on those grounds.

Sure Rorschach is more advanced than humanity, but that obviously doesn't prove that consciousness is a drag any more than someone taller and balder than you indicates that hair is keeping you short.

Rorschach is explicitly described as a p-zombie/Chinese Room, and is used as an existence proof for superintelligence without qualia or consciousness. I struggle to separate in-universe speculation from author fiat, I doubt that Watts is the kind to devote that much screentime to an idea without partially endorsing it.

It's the most technologically advanced entity in Sol, it's doing very well for itself, and all without being conscious. I think that constitutes a claim that consciousness isn't particularly important.

Anyway, after writing this, I had GPT 5.2 Thinking check the version hosted on Archive for direct quotes:


From Siri’s internal monologue near the end (the book’s most on-the-nose anti-sentience passage):

“It begins to model the very process of modeling. It consumes ever-more computational resources, bogs itself down with endless recursion…”

“Metaprocesses bloom like cancer, and awaken, and call themselves I.”

“The system weakens, slows… advanced self-awareness is an unaffordable indulgence.”

“This is what intelligence can do, unhampered by self-awareness.”

That last line is basically your exact request in one sentence.

From the Notes and References, the back-matter discussion of consciousness as interference and nonconscious competence (Watts stepping partly out of “story voice”):

“Consciousness does little beyond taking memos… rubber-stamping them, and taking the credit for itself.”

“The nonconscious mind… employs a gatekeeper… to do nothing but prevent the conscious self from interfering…”

“It feels good… makes life worth living. But it also turns us inward and distracts us.”

“While… people have pointed out the various costs and drawbacks of sentience, few… wonder… if… it isn’t more trouble than it’s worth.”


It also found a full interview where Watts, out of universe, says:

It finally occurred to me that if consciousness actually served no useful function – if it was a side-effect with no adaptive value, maybe even maladaptive – why, that would be a way scarier punch-in-the-gut than any actual function I could come up with. It would be an awesome narrative punchline for a science fiction story. So I put it in.

Of course, not being any kind of neuroscientist, I had no doubt that I’d missed something really obvious, and that if I was lucky a real neuroscientist would send me an email setting me straight. At least I would have learned something. It never occurred to me that real neuroscientists would start arguing about whether consciousness is good for anything. In hindsight, I seem to have just blindly tossed a dart over my shoulder and hit the bullseye entirely by accident.

https://milk-magazine.co.uk/interview-peter-watts-sci-fi-novel-blindsight/

https://x.com/lauriewired/status/2020006982598685009?s=20

This is the closest I've ever come to seeing usage in the wild, and Laurie claims it's applied by some flavor of analyst. I suppose it's neat?

Well, I don't see myself crossing the bright line of actually posting my essay here and then begging for votes. I think simply soliciting suggestions and mentioning a rather extensive list of potential candidates I've come up with is probably fine. I don't think @ScottA would mind.

So you may want to avoid stating what your final decision on this topic is.

Fair enough, but I'm still in the concepts-of-a-plan stage.

Did you know that visualizing data in the form of faces is an actual technique?

https://en.wikipedia.org/wiki/Chernoff_face

Making them screaming faces? Subtlety is a lost art.
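The core of the Chernoff-face technique is just a mapping from data dimensions to facial geometry, one variable per feature. A minimal sketch of that mapping (the feature names and ranges are my own illustration, not any plotting library's API):

```python
# Chernoff-face idea in miniature: each dimension of a data row (scaled
# to [0, 1]) drives one facial feature. Hypothetical names throughout.

def face_params(row):
    """Map a data row to facial geometry parameters."""
    def lerp(lo, hi, t):
        # Linear interpolation, clamped so out-of-range data stays sane.
        return lo + (hi - lo) * max(0.0, min(1.0, t))
    return {
        "head_width":  lerp(0.6, 1.4, row["metric_a"]),
        "eye_radius":  lerp(0.05, 0.20, row["metric_b"]),
        "mouth_curve": lerp(-1.0, 1.0, row["metric_c"]),  # -1 frown, +1 smile
        "brow_slant":  lerp(-0.5, 0.5, row["metric_d"]),
    }

# A "screaming" face is just a row pushed to the extremes:
scream = face_params({"metric_a": 1, "metric_b": 1, "metric_c": 0, "metric_d": 0})
```

Render the resulting parameters however you like (matplotlib ellipses and arcs would do); the visualization insight is that humans are preternaturally good at spotting small differences between faces, so multivariate outliers pop out at a glance.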

I do not think it's fair to say that @Poug didn't engage with your post.

If you say:

It seems to me to be a balanced take. He's bullish and hopeful on the future, while trying to be accurate/realistic about current capabilities, while remaining somewhat concerned about possible problems

Then it is entirely fair to point out that the person you're using as an authority isn't using cutting-edge models that correctly capture "current capabilities". A few months is a very long time indeed when it comes to LLMs.

That is all I have to say, and I mean it. I'm not a professional mathematician, I can't attest to their peak capabilities as a primary source. The last time I was able to was when I got my younger cousin (a Masters student then, now postgrad in one of the more prestigious institutions here) to examine their capabilities in my presence.

"Is the one-point compactification of a Hausdorff space itself Hausdorff?" was a problem that I could actually understand, after he showed me the correct answer. The LLMs of the time were almost always wrong; 6 months later we got mixed results; but as early as a year ago, they were getting it right every time (restricting ourselves to reasoning models, and you shouldn't use anything else for maths).
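For the curious, the underlying fact is standard point-set topology (my gloss, not from the thread), and the answer is "not necessarily":

```latex
% One-point (Alexandroff) compactification: X^* = X \cup \{\infty\}
\[
X^* \text{ is Hausdorff} \iff X \text{ is locally compact and Hausdorff}.
\]
% Counterexample: X = \mathbb{Q} is Hausdorff but not locally compact,
% so \mathbb{Q}^* is compact yet fails to be Hausdorff.
```

Hausdorffness alone isn't enough; you also need local compactness to separate the point at infinity from the rest.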

Now? He went from being skeptical about my claims of near-term AI parity in mathematics to what I can only describe as grim resignation.

(Now being six months ago, last time I saw him.)


In the interest of fairness, I think @Poug is probably incorrect when he says:

But you will also notice the absence of issues you are facing

I'm not saying this with confidence, because that's just my recollection of what actual mathematicians say these days, including Tao himself. I just mention it to hopefully demonstrate that I'm trying very hard not to be a partisan about things.

It's excellent to see you living up to the latter half of your username. Here, have a cookie for good behavior.

Tell me about it. I was looking for published research on administering human IQ tests to LLMs, and the most recent example I could find is a preprint that tested cutting edge models like 4o and Sonnet 3.5. Damn thing hadn't even made it through peer review. I had to settle for a relatively niche website that independently administers the Mensa IQ test to the latest models, and while that's much better than nothing, it demonstrates that standard academia is entirely unable to keep up with the frontier.

Huh, I haven't heard of that one before, and up till this point, I thought I'd read pretty much everything he's ever written. Maybe it's even more misanthropic when translated to Polish? You guys aren't known for your sunny vibes and general optimism.

In general, I agree that Watts is deeply, borderline-fanatically misanthropic. I regularly check in on his blog, and a running theme is his sentiment that humans have Wrecked The Planet (ecological collapse, global warming), and we're going to pay for our sins/hubris by quite possibly going extinct. There is such a thing as overstating the seriousness of what is otherwise a real problem. Global warming is an eminently solvable problem, for very little money, should we get over our civilizational allergy to geoengineering. Of course, the idea of using technology to solve things instead of degrowth and industrial regression is deeply antithetical to his worldview. Recently, he's been slowly migrating to AI-bashing, which is a very modest directional improvement.

For now, he's busy writing polemics and giving talks at moderately populated scifi seminars. A retired academic in Canada has largely aged out of active terrorism, that's a young man's game.

I was considering submitting my review of Rejection

The collection of short stories? I agree that it would have been unlikely to win, but that's on the basis of general ACX-audience inclination, and not because of your chops as a writer (very real) or the quality of the book (I have no clue).

Incidentally if you'd like me to read over your draft and offer feedback, I'd be more than happy to.

Thank you again! This reminds me that I really need to apologize for asking you to send me your draft and never getting around to giving suggestions :(

If it's any consolation, I have consistently felt bad/embarrassed about it ever since. I try and keep my promises in general.

I can take an actual look this time, assuming you still want a second set of eyes on it.

Thanks! Out of curiosity, do you plan on throwing your hat in the ring?

I'd have to write the review from scratch, but if you want a TLDR:

  • Watts posits that consciousness is an evolutionary spandrel and that it's possible to have intelligence/superintelligence without consciousness. While not mentioned in the book, the usual supporting evidence is observation of sleepwalking humans or blackouts (in which case we haven't ruled out that the person is partially or fully conscious; they might simply lack the consolidation of long-term memory required to remember being conscious, which is pretty strongly evident in alcohol blackouts). Not only does he claim it's not strictly necessary, he posits that it's suboptimal, and a drag on performance.
  • Our best theories of consciousness, like IIT and GNWT, seem to be partially supported and partially discredited by recent research. That means it's possible to salvage Watts's claim, but there's no strong consensus either way.
  • We've found clear correlations between consciousness and statistical phenomena on the whole-brain scale. You could look up edge-of-criticality models for more. The gist is that what we perceive as normal consciousness, the type optimal for normal life, is a very fine balance in neuronal activity, with chaos on one side and rigidity on the other. This is actually a blow against the consciousness-as-epiphenomenon view Watts endorses. These models cash out in actual predictions: they can measure "degrees" of consciousness from stupor to full alertness using physical metrics.
  • LLMs are the first real xenointelligences. A few years ago, the case for them entirely lacking consciousness or internal qualia was the default. Now, we have very interesting evidence suggesting active ability to introspect and awareness of their internal cognition in a way not specifically trained into them:

https://www.anthropic.com/research/introspection

  • I still wouldn't go as far as to claim that LLMs are conscious, since we're awful at conclusively identifying consciousness in humans, let alone animals or AI, but they seem to possess at least some of the necessary elements.

  • I fucking hate the Chinese Room, it's an impoverished excuse for a thought experiment with an obvious answer: the room+human system speaks Chinese, even if no individual component does. You speak English, even if no single neuron in your brain does. I find it ridiculous that it's brought up today as if it means anything. The aliens in the story are specifically described as Chinese Rooms, and you can guess what I think of that. If I was writing a full essay, I'd add more about the sheer metaphysical implausibility of p-zombies in general, but those aren't original observations.

  • If I'm nitpicking (some very annoying nits), the baseline humans and their pet AGIs show suicidal incompetence in universe. You've got hyperintelligent autistic superpredators on the loose? And you let them walk around? Break their spines and put them in a wheelchair while on enough oestrogen to give them brittle bones/spontaneously manifest programming socks. The only reason that the primary safeguard was an aversion to straight lines intersecting at right angles is Watts trying to launder in the classical trope of vampires being averse to crucifixes. It's deeply dumb as an actual solution. Also, why didn't the supersmart AI actually do something about the vampire takeover? Are they stoopid?

Summing up: the case for the theories in Blindsight is weaker than at time of publication, even if no one can outright falsify them.

Edit: It's worth noting that I still love the book; it's in my top 10, maybe top 3. I even separate the art from the author. I'm not sure if Watts is terminally depressed or terminally misanthropic, but I suspect that the combination is the only thing preventing him from becoming a low-grade ecoterrorist (this is mostly a joke). I still highly recommend it to new readers, as long as they don't overindex on the existential crises.

Oops. I'll fix that brain fart, thanks.

Thanks! It would be the easiest option, since, well, I do have the essay written already.

vanilla/safe/on-the-nose choice for Scott's blog

Fair point, but in my defense, I'm a psychiatry resident, in the NHS, and also suffering from my residency while using humor as a coping mechanism. If anyone can get away with it, it should be me, goddammit!

Having read Blindsight, I also don't see what a review of it would add. The book, while being interesting in the way an academic paper is, is also dry and bloodless like one, and something something trying to squeeze juice from a brick

I genuinely disagree on the characterization of the book, it's one of my favorite novels for good reason, and that includes the sharp prose. Watts might be a depressed misanthrope who prays that humanity pays for its sins (any day now), but the man can write. Oh well, opinions can vary among gentlemen. I can think of a few things to discuss, I believe the reposted/updated version in the SSQ thread specifically mentions general cognitive neuroscience advancements as well as the clear example of LLMs as an inhuman intelligence that may or may not have qualia/consciousness.

I have some interest in participating in Scott's Book Review contest, but I'm having a very hard time figuring out what to review.

I have a few things already up my sleeve:

  • A detailed review of the Golden Oecumene series by John C. Wright, comparing and contrasting it with the Culture novels by Iain M. Banks. This is roughly complete, but needs a bit of polish before I'm ready to hit submit. In the domain of fiction, I feel like it's my best shot. I have thoughts.

  • A psychiatrist's take on Wuthering Heights, focusing on the obvious mental illness in most of the dramatis personae. Unfortunately, this would require me to re-read the damn novel, and that brings up PTSD flashbacks from a BPD ex who tried to force feed me Victorian period dramas.

  • Blindsight? Unfortunately it's very well known in SSC/Rat circles, and I'm not sure there's much to add beyond a discussion of some minor advances in cognitive neuroscience (and massive advances in LLMs).

  • An even more in-depth review of Reverend Insanity? Tempting, but niche.

  • This Is Going To Hurt, the memoirs of an ex-gynecology resident in the UK. An incredibly funny, poignant, and moderately depressing look into how the NHS functions. Written over 10 years ago, so you can only imagine how much worse things are today. Well, if I do write it, you'll no longer have to try very hard imagining.

  • The Denial of Death. Prime candidate for a transhumanist takedown, leaving aside that like many grand theories of the human psyche, it proves too much. Unfortunately, I've yet to read it, and I don't know if the views it espouses are still fashionable enough to be worth skewering.

I was 70% through my planned submission for the Anything But A Book Review contest last year, namely a comparative analysis of the NHS and the healthcare system in India informed by both data and my personal experience, but that was unfortunately derailed by a combination of depression and work/exam pressure. Oh well, perhaps I can salvage it for next year.

I'd appreciate suggestions! My main blocker is that I rarely read non-fiction or "Big Picture" books these days. Those are typically winners, from my analysis of past results, and I haven't read anything in the past few years that even remotely inspired me to engage in that level of analysis. Controversial take: I find that the most interesting material dealing with the real world is found in blogs or online essays, not in books. Sue me.

Edit: To be clear, I'd appreciate both suggestions on options I've already curated, as well as books you think might be a good fit (in terms of me having something useful to say, plus being suitable for the actual contest).

I think I understand AGP as a phenomenon (for a loose value of understand, nobody has a strong grasp on what causes it at a mechanical level). It seems like a good way to describe and conceptualize a large chunk of trans people, and I know many who willingly endorse it as an accurate model of their internal cognition.

What I'm confused by is MSM who prefer "feminine" men. Naively, you'd expect that they'd want the most masculine gay men they could find. If you like femininity that much, why not just sleep with women? Why seek out "passing" transwomen or ladyboys or twinks or...

Hmm. Now that I've articulated this, I can only shrug and say that human sexuality is messy and complicated. Firstly, we have bisexual men, who might be willing to sleep with both men and women, but find it easier to sleep with other men. Solve for the equilibrium.

Second, even straight men have diverse tastes in women. Some like girly-girls, others, like me, are Tomboy Respecters. If I was making the Perfect Woman in a lab, she'd be a man (in terms of personality and interests) who just happens to be in a woman's body. Well, +2-3 SD of being nurturing and caring, but the point stands.

All infantry heal back to full between missions. That's not the case for vehicles, you need dedicated vehicle repair bays for that (within an operation) and they're repaired to full between operations.

You are a brave man for going into the game completely blind; I played the demo and read the guides voraciously before EA. But you seem to be doing fine, and figuring these things out is half the fun!

Buying multiple dossiers at once for the discount is a good trick.

Which SLs? Some SLs are just crap (Wentworth, Ivy). The low supply cost doesn't make up for the fact that growth potential is useless. They'll never catch up with the heavy hitters till it's irrelevant.

That being said, I'm far from overly concerned with HP. AP and accuracy are far more important (and the 1 star SLs are subpar on both fronts). Most missions, my soldiers never even get hit.

I had an excellent non-special weapon for my biggest squad: the 100-credit SMG/assault rifle with a 5-shot burst. It killed groups of small aliens in a single action. I tried out the sniper rifle against the big armored alien and it didn't do much damage.

In the early game, the only reliable weapon against the larger aliens is some flavor of RPG. Unfortunately, the xenos are not known to drop weapons you can actually use, so you might need to fight the pirates for a bit till you get one, or buy one off the Black Market. You can get by with sheer volume of fire, especially since the Queen is pretty slow.

I'll try to hire Lim once I figure out how to get a 5th SL.

Black Market. You need to buy a dossier. Then spend authority on actually hiring them.

Please don't do that. If you're in the UK, I might be the doctor seeing you, and I have enough on my plate already. If you're elsewhere, I still have sympathy for the local doctors.