
self_made_human

Kai su, teknon? ("You too, child?")

16 followers   follows 0 users   joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

I tried stuffing my friends into this textbox and it really didn't work out.


User ID: 454

The same argument applies to signing up for experimental heart surgery.

You can do that right now if you care to.

Find a site like piaotian that has raw Chinese chapters. Throw it into a good model. Prompt to taste. Ideally save that prompt to copy and paste later.

I did that for a hundred chapters of Forty Millenniums of Cultivation when the English translation went from workable to a bad joke, and it worked very well.

(Blizzard arc was great. The only part of the book I recall being a bit iffy was the very start)
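For the curious, a minimal sketch of that workflow in Python, using Google's google-generativeai client. The model name, prompt wording, and chapter-fetching step are all assumptions to swap out for your own choices:

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name; prompt to taste

# Save this prompt somewhere so you can copy and paste it for later chapters.
PROMPT = """Translate the following Chinese web-novel chapter into fluent
English. Keep cultivation terms consistent, preserve paragraph breaks, and
aim for the register of a good fan translation rather than a literal gloss.

{chapter}"""

def translate(raw_chapter: str) -> str:
    # One chapter per call; raw text pasted straight from the source site.
    response = model.generate_content(PROMPT.format(chapter=raw_chapter))
    return response.text

# raw = ...  # however you pull the raw chapter text from the site
# print(translate(raw))
```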

You're correct that I'm being generous. Expecting a system as macroscopic and noisy as the brain to rely on quantum effects that go away if you look at them wrong is a stretch. I wouldn't say it's impossible, just very, very unlikely. It's the kind of thing you could present at a neuroscience conference without being kicked out, but everyone would just shake their heads and tut the whole time.

If this were true, then entering an MRI would almost certainly do crazy things to your subjective conscious experience. Quantum coherence holding up to a tesla-strong field? Never heard of it; at most the effect is incredibly subtle and hard to distinguish from people being suggestible (transcranial magnetic stimulation does do real things to the brain). Even the brain in its default state is close to the worst-case scenario for quantum-only effects with macroscopic consequences.

And even if the brain did something funky, that's little reason to assume it's a feature relevant to modeling it. As you've mentioned, there's a well-behaved classical model. We already know that we can simulate biological neurons ~perfectly with their ML counterparts.
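A toy illustration of that last claim, with everything invented for the example: a leaky integrate-and-fire neuron stands in for "biology" (the actual literature fits deep nets to detailed compartmental models), and a plain logistic regression over an input window serves as the classical surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Biology": a leaky integrate-and-fire neuron. A toy stand-in for detailed
# compartmental models; every parameter is an arbitrary illustrative choice.
def lif_spikes(inputs, tau=20.0, threshold=1.0):
    v, out = 0.0, []
    for x in inputs:
        v = v * (1.0 - 1.0 / tau) + x   # leaky integration of input current
        if v >= threshold:
            out.append(1)
            v = 0.0                     # reset after a spike
        else:
            out.append(0)
    return np.array(out)

inputs = rng.normal(0.05, 0.08, size=50_000)
spikes = lif_spikes(inputs)

# Classical ML surrogate: logistic regression over a sliding input window.
window = 30
X = np.lib.stride_tricks.sliding_window_view(inputs, window)
y = spikes[window - 1:]                 # spike at the window's final step

w, b, lr = np.zeros(window), 0.0, 0.5
for _ in range(500):                    # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

pred = (X @ w + b) > 0.0                # equivalent to p > 0.5
print("surrogate accuracy:", (pred == y).mean())
```

The point is just that a perfectly classical function approximator, fit on input/output data, recovers the neuron's behavior; nothing quantum required.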

I've done dozens of pages, even a hundred, with good results. An easy trick is to tell it to rip off the style of someone you like. Scott is easy pickings; ACX and SSC are all over the training corpus. Banks, Watts and Richard Morgan work too.

Taste is inherently subjective, and I raise an eyebrow all the way to my hairline when people act as if there's something objective involved. Not that I think slop is a useless term; it's a perfectly cromulent word that accurately captures low effort and an appeal to the lowest common denominator.

Then again, that same man recommended a Chinese web novel with atrocious writing style to people, so maybe his bar is lower than many.

Fang Yuan, my love, he didn't mean it! It's a good novel, this is the hill I'm ready to die on.

I've enjoyed getting Gemini 2.5 and Grok 3 to write a new version of Journey to the West in Scott Alexander's style. Needs an edit pass, but it's close to something you'd pay money for.

PS: You need to use @ instead of u/. The latter links to a Reddit account and doesn't ping.

This is highly speculative, and a light-year away from being a consensus position in computational neuroscience. It's in the "big if true" category, and far from being confirmed as true and meaningful.

It is trivially true that human cognition requires quantum mechanics. So does everything else. It is far from established that you need to explicitly model it at that level of detail to get perfectly usable higher-level representations that ignore it.

The brain is well optimized for what's possible for a kilo and change of proteins and fats in a skull at 37.8 °C, reliant on electrochemical signaling and a very unreliable clock for synchronization.

That is nowhere near optimal when you can have more space and volume, while working with designs biology can't reach. We can use copper cable and spin up nuclear power plants.

I recall @FaulSname himself has a deep dive on the topic.

I always keep an eye out for your takes. You need to be more on the ball so that I can count on you appearing out of a dim closet every time the whole AI thing shows up here.

I see no particular reason that a copilot for writing couldn't exist, but as far as I can tell it doesn't (unless you count something janky like loom).

I'm a cheap bastard, so I enjoy Google's generosity with AI Studio. Their interface is good, or at least more powerful and power-user-friendly than the typical chatbot app. I can fork conversations, regenerate responses easily, and so on. It doesn't hurt that Gemini 2.5 is great; the only other LLM I've used that I like as much is Grok 3.

I can see better tooling, and I'd love it. Maybe one day I'll be less lazy and vibe code something, but I don't want to pay API fees. Free is free, and pretty good.

And then instead of leveraging that we for whatever reason decided that the way we want to use these things is to train them to imitate professionals in a chat room who are writing with a completely different process (having access to tools which they use before responding, editing their writing before hitting "send", etc).

Gemini 2.5 reasons before outputting anything. This is annoying for short answers, but good on net. I'm a nosy individual and read its thoughts, and they usually include editing and consistency passes.

The stricter and stronger your Prune filter, the higher quality content you stand to produce. But one common bug is related to this: if the quality of your Babble is much lower than that of your Prune, you may end up with nothing to say. Everything you can imagine saying or writing sounds cringey or content-free. Ten minutes after the conversation moves on from that topic, your Babble generator finally returns that witty comeback you were looking for. You'll probably spend your entire evening waiting for an opportunity to force it back in.

I'm always glad that my babble usually comes out with minimal need for pruning. Some people can't just write on the fly; they need to plot things out, summarize and outline. Sounds like a cursed way to live.

This got a report, though I don't think it's mod worthy.

Why single out whites? I'm pretty sure that current SOTA models, in the tasks they're competent at, outperform the average of any ethnic group. I can have far more interesting conversations with them than I can with a 105 IQ redditor.

Cheer up, that particular scenario seems quite unlikely to me. Most things get cheaper with time, due to learning curves and the benefits of scale if nothing else. I expect that once we establish working life extension at all, it won't be too long before it's cheap and/or a human right. You'll probably live that long.

I agree with most of this, but there's a difference between FIRE, lean FIRE and fat FIRE. For most people who can retire early and survive indefinitely, it probably makes sense on the margin to work a bit longer and save more.

(If the person reading this is 75 and has twenty million in the bank, just quit your job already)

There's communism, and there's communism, even holding the fully automated luxury bit equal.

I can easily see the trajectory of our civilization leading to a situation where everyone is incredibly wealthy and happy by modern standards, but some people, by virtue of having more starting capital to invest into the snowball, own star systems while others make do with their own little space habitat. I'd consider this a great outcome, all things considered. Some might even say that by the standards of the past, much of the world is already here.

While it's controversial in certain spheres, richer people tend to be smarter and better educated.

All else being equal, the opinions of someone who fits that bill are worth more than those of someone who doesn't have their shit together.

I agree with much of your comment, but keep in mind that when you're already rich and powerful, a lot of the usual downsides of risky plays become minimal. The upsides here are things like potentially making out like a bandit, outperforming the competition that relies on Mk 1.0 humans, and so on. (I know you've said something similar downthread; I'm elaborating, not contesting this bit.)

A certain someone reported you for impersonating a mod. Unlike him, most of the mods have a sense of humor about such things.

[User was banned for this post]

Even if every AI researcher faced the wall today, and we were stuck at current SOTA, nobody is going to forget anything. Modern AI is entrenched, it is compelling, even if it's just for normies cheating on homework.

I grant that your observation is an important one; half of life's problems would be solved if we all thought so clearly about the correct reference class.

Is there that much demand for purely literary translation? I'd expect that like most romantic/artisanal fields, the bulk of the work is dry and boring. Here, make that washing machine's manual Spanish.

I would claim this as my joke, and it was probably my comment you recall, but it's been in circulation for probably longer than I've been alive. It's a good joke, stabs right at the gut.

This is the most reasonable AI skeptic take I've seen here, and that's high praise. I disagree on quite a few points, which add up, but I can see why an intelligent person who shares slightly different priors would come to your conclusion.

I presume you can't buy a Bugatti either. It's still an option that real living people can get for cash.

There's nothing standing in the way of Waymo rolling out an ever wider net. SF just happens to be an excellent place to start.

If you intended sarcasm, then this is an excellent example of Poe's law. There are people here who would unironically say the same thing, and have.

Consider this a warning; keep posting AI slop and I'll have to put on my mod hat and punish you.

But sir, I followed the rules and linked it off-site. Please put away that rod, I'm scared :(

Do you really think you can do that with existing technology? I'm not confident we've seriously tried to make a pathogen that can eradicate a species (mosquito gene drives? COVID expressing human prions, engineered so that they can't just drop the useless genes?) so it's difficult to estimate your odds of success. I can tell you the technology to make something 'with a lengthy incubation time and minimal prodromal symptoms' does not exist today. You can't just take the 'lengthy incubation time gene' out of HIV and Frankenstein it together with the 'high virulence gene' from ebola and the 'high infectivity' gene from COVID. Ebola fatality rate is only 50%, and it's not like you can make it airborne, so...

You're the domain expert here, not me. I'd hope I'm more informed than the average Joe, but infectious diseases and virology aren't my field. Though if you consider culture-bound illnesses or social contagion like anorexia...

A gene drive wouldn't work for humans. We could easily edit it out once discovered.

Even if we haven't intentionally exterminated a species with a pathogen (myxoma virus for rabbits in Australia came close), we have done so accidentally. A few frogs and rare birds have croaked.

(There are no mistakes, just happy accidents eh?)

And now we're talking about something on par with what a really motivated and misanthropic terrorist could conceivably do if they were well-resourced.

Which isn't the worst benchmark for a malevolent AGI that is very smart by human standards.

I'd be talking out of my ass if I claimed I knew for sure how to create the perfect pathogen. I'm >50% confident I could pull it off if someone gave me a hundred million dollars to do it. (I could just hire actual virologists; some people seem insane enough to do gain-of-function research even today, so it seems easy to repurpose "legitimate" work.)

I'm still voting against bombing the GPU clusters, and I'm still having children. We'll see in 20 years whether my paltry gentile IQ was a match for the big Yud, or whether he'll get to say I told you so for all eternity as the AI tortures us. I hope I at least get to be the well-endowed chimpanzee-man.

So am I; I don't want my new RTX 5080 blown up, not that I have a choice if the power connector fails. I also plan to have kids, because I think it's better to live than not, even if life were short. I don't expect them to have a "normal" life by modern standards.

We'll see how this plays out, but I think there's enough justification to take broader precautions, like saving lots of money. That's usually a good idea anyway.

I think calling artists and journalists "poor members of the upper classes", while not entirely wrong, isn't my preferred framing. They're semi-prestigious, certainly, but my definition of upper class would be someone like 2rafa. They're often members of the intelligentsia, and have a somewhat disproportionate impact on public affairs, but they're not upper class by most definitions. Poor but upper class is close to a contradiction in terms.

In any case, AI isn’t taking everyone’s job. There will be fewer software engineers, sure, but we don’t need so many of them. They should learn to fix toilets or dig coal or something. Previous increases in the productivity of white collar work have not led to the elimination of white collar employment.

I've already explained my stance in this thread: the previous expectations about how automation plays out no longer hold. Cognitive automation that replaces all human thought is a qualitatively different beast from the industrial revolution or computers.

A tool that does 99% of my work for me? Great, I'm a hundred times as productive! There might even be a hundred times more work to do, but I'll probably see some wage growth. There might be some turmoil in the employment market.

A tool that does 100% of the labor? What are you paying me for?

The whole point is that AI is approaching 100%, might already be there, or is so close that employers don't care and will fire you.

I agree. Even if a given nation wants to protect human jobs, there's enormous incentive to be the first to defect and embrace automation.

In the UK, Rishi Sunak had already seriously floated the proposal of automating doctors away. With the tech of the time, it wouldn't have gone all that well, but it's only a matter of time before someone bites as the potential gains mount.

We're all waiting for DeepSeek to release a new multimodal model and open source it. If it can generate pictures, you know there's gonna be porn.

If you're in the West? CS could have been great. Sadly, I'd have been just another programmer in India, competing with a million others for a green card.

I can tell myself I'd be a decent programmer, and I'd probably have gone into ML since I was following advances well before the hype. Even then, medicine seems like the right choice given the constraints I faced.

But a lot of people are like you, so these models will start to get used everywhere, destroying quality like never before.

I can however imagine a future workflow where these models do basic tasks (answer emails, business operations, programming tickets) overseen by someone that can intervene if it messes up. But this won't end capitalism.

This conveys to me the strong implication that in the near term, models will make minimal improvements.

At the very beginning, he said that benchmarks are Goodharted and given too much weight. That's not a very controversial statement; I'm happy to say it has merit. But I can also say that these improvements are noticeable:

Metrics and statistics were supposed to be a tool that would aid in the interpretation of reality, not supersede it. Just because a salesman with some metrics claims that these models are better than butter does not make it true. Even if they manage to convince every single human alive.

You say:

Besides which, your logic cuts both ways. Rates of change are not constant. Moore's Law was a damn good guarantee of processors getting faster year over year... right until it wasn't, and it very likely never will be again. Maybe AI will keep improving fast enough, for long enough, that it really will become all it's hyped up to be within 5-10 years. But neither of us actually knows whether that's true, and your boundless optimism is every bit as misplaced as if I were to say it definitely won't happen.

I think that blindly extrapolating lines on the graph to infinity is as bad an error as thinking they must stop now. Both are mistakes, reversed stupidity isn't intelligence.

You can see me noting that the previous scaling laws no longer hold as strongly. Diminishing returns mean that scaling models to the size of GPT 4.5, spending compute purely on more parameters and longer training on larger datasets, is no longer worth the investment.

Yet we've found a new scaling law: test-time compute, using reasoning and search, which has started afresh and hasn't shown any sign of leveling out.
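For concreteness, the pretraining regime I'm describing is usually summarized with a Chinchilla-style fit (this is the published functional form from Hoffmann et al. 2022; the constants are empirically fitted, so treat this as a sketch of the shape, not exact values):

```latex
% Pretraining loss as a function of parameter count N and training tokens D.
% E is the irreducible loss floor; A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Both power-law terms keep shrinking as N and D grow, just ever more slowly; test-time compute is a separate axis this formula doesn't capture at all, which is why it gets to start fresh.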

Moore's law was an observation of both increasing transistors per dollar and increasing transistor density.

The former metric hasn't budged, and newer nodes might even be more expensive per transistor. Yet density, and hence available compute, continues to improve. Newer computers are faster than older ones, and we occasionally get a sudden bump; for example, Apple and their M1.

Note that the doubling time for Moore's law was revised multiple times. Right now, transistors per unit area seem to double every 3-4 years. It's not fair to say the law is dead, but it's clearly struggling.
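Back-of-envelope, using that 3-4 year doubling time:

```latex
% Transistor density after t years with doubling period T:
d(t) = d_0 \cdot 2^{t/T}
% At T = 3\text{--}4 years, a decade buys you
2^{10/4} \approx 5.7\times \quad \text{to} \quad 2^{10/3} \approx 10\times
% versus roughly 32--100x per decade under the classic 18--24 month doubling.
```

Struggling, but still compounding.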

Am I certain that AI will continue to improve to superhuman levels? No. I don't think anybody is justified in saying that. I just think it's more likely than not.

  1. Diminishing returns != negative returns.
  2. We've found new scaling regimes.
  3. The models that are out today were trained using data centers that are now outdated. Grok 3 used a mere fraction of the number of GPUs that xAI has, because they were still building out.
  4. Capex and research shows no signs of stopping. We went from a million dollar training run being considered ludicrously expensive to companies spending hundreds of millions. They've demonstrated every inclination to spend billions, and then tens of billions. The economy as a whole can support trillion dollar investments, assuming the incentive was there, and it seems to be. They're busy reopening nuclear plants just to meet power demands.
  5. All the AI skeptics were pointing out that we're running out of data. Alas, it turned out that synthetic data works fine, and models are bootstrapping.
  6. Model capabilities are often discontinuous. A self-driving car that is safe 99% of the time has few customers. GPT 3.5 was too unreliable for many use cases. You can't really predict with much certainty what new tasks a model will be capable of by extrapolating the steadily decreasing loss, even though the loss itself is something we can predict very well (a sketch of that kind of extrapolation follows this list). Not that we're entirely helpless; look at the METR link I shared. The value proposition of a PhD-level model is far greater than that of one as smart as a high school student.
  7. One of the tasks most focused upon is the ability to code and perform maths. Guess how AI models are made? Frontier labs like Anthropic have publicly said that a large fraction of the code they write is generated by their own models. That's a self-spinning flywheel. It's also one of the fields that has actually seen the most improvement; people should see how GPT-4 compares to the current SOTA, it's not even close.
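The loss-extrapolation sketch mentioned in point 6, with made-up numbers standing in for real training runs:

```python
import numpy as np

# Hypothetical (training compute, eval loss) observations. These numbers
# are invented for illustration, not real benchmark data.
compute = np.array([1e20, 1e21, 1e22, 1e23])   # training FLOPs
loss = np.array([3.2, 2.8, 2.45, 2.15])        # evaluation loss

# Fit loss = a * C^b by linear regression in log-log space (b < 0).
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)
print(f"loss ~ {a:.1f} * C^{b:.3f}")

# The loss curve extrapolates smoothly one order of magnitude out...
print("predicted loss at 1e24 FLOPs:", a * 1e24 ** b)
# ...but nothing in the fit tells you which discrete capabilities
# (reliable agents, working code, PhD-level reasoning) appear at that loss.
```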

Standing where I am, seeing the straight line, I see no indication of it flattening out in the immediate future. Hundreds of billions of dollars and thousands of the world's brightest and best-paid scientists and engineers are working on keeping it going. We are far from hitting the true constraints of cost, power, compute and data. Some of those constraints once thought critical don't even apply.

Let's go like 2 years without noticeable improvement before people start writing things off.