This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Neuralink has caused a bit of a storm on X, taking off after claiming that three humans have what they call "Telepathy":
Assuming this is all true and the kinks will be worked out relatively soon, this is... big news. Almost terrifyingly big news.
AI tends to suck in most of the oxygen around tech discourse, but I'd say, especially if LLMs continue to plateau, Neuralink could be as big or even bigger. Many AI maximalists argue, after all, that the only way humanity will be able to compete and keep up in a post-AGI world will be to join with machines and basically become cyborgs through technology like Neuralink.
Now I have to say, from a personal aesthetic and moral standpoint, I am close to revolted by this device. It's interesting and seems quite useful for paraplegics and the like, but the idea of a normal person "upgrading" their brain via this technology disturbs me greatly.
There are a number of major concerns I have, to summarize:
Does this ring alarm bells for anyone else? I'd imagine @self_made_human and others on here are rubbing their hands together with glee, and I have to say I'd have been much the same a few years back. But at the moment I am, shall we say... concerned with these developments.
I think that, just like with AI and global warming, the downsides are overstated and the upsides poorly understood and under-appreciated.
I cannot wait until I am part cyborg. I want a metal liver, kidneys, testicles, nose, eyes, left shoulder … I don’t care.
When the human body breaks down, I want to meet it with the cold embrace of metal. I want to accentuate my being into perfection - which doesn’t exist, but maybe the feeling of it does.
The stated downsides are all there. Human hacking will be a thing.
But, we’ll just deal. We deal with things now and we’ll deal with things then. That’s the human spirit.
I'd rather die than be a servitor, but maybe that's just me
"From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel."
Personally I feel that once you start stripping away the flesh and replacing it with machine, at some point you don't really have a human any more but a machine. I have no desire to be a machine. But what happens, happens, I guess. We certainly live in interesting times, much to my disappointment.
At the very least the longevity would be nice. Living 80 years as a human and then putting your brain into a robot body, or ship of theseusing your parts as they fail, seems strictly better than living 80 years as a human and then dying, provided the robot body is good enough to not be literal torture.
I've had an idea in the back of my head, which is essentially to have the human brain "train" an attached, minimal, and highly plastic (compared to the brain) ML model through some direct high-bandwidth connection (not targeting any particular objective function). The hope would be that, once the ML model has converged to some near-equilibrium state, it would have learned most of the distributed representations that comprise human values. That model could then be scaled up to much higher compute and used as a foundation model that is highly aligned with that particular human's values.
Aside from being really speculative, it seems very likely to me that this would require feedback connections to actually work. And so the model is training the brain at the same time the brain is training the model.
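For concreteness, here is a minimal sketch (PyTorch) of the shape of that idea, under heavy assumptions: a small, plastic network that just tries to predict the brain's own next state from a hypothetical high-bandwidth recording stream. Everything here is an invented stand-in - the `bci_stream` generator, the channel counts, and the self-supervised next-state objective, which merely substitutes for whatever implicit signal real co-adaptation (including the feedback direction) would actually use.

```python
import torch
import torch.nn as nn

class PlasticAdapter(nn.Module):
    """A small, highly plastic model meant to co-adapt with the brain."""
    def __init__(self, n_channels: int = 1024, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_channels, hidden),
            nn.GELU(),
            nn.Linear(hidden, n_channels),
        )

    def forward(self, x):
        return self.net(x)

def bci_stream(steps: int = 1000):
    """Stand-in for the high-bandwidth link: yields (activity now, activity next) pairs.
    Real data would come from the implant; here it is just noise."""
    for _ in range(steps):
        x = torch.randn(32, 1024)                  # current neural activity (fake)
        y = x + 0.1 * torch.randn(32, 1024)        # what the brain does next (fake)
        yield x, y

model = PlasticAdapter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Self-supervised next-state prediction: the closest runnable analogue to
# "not targeting any particular objective function" offered here.
for x, y in bci_stream():
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The hope, in this framing, is that whatever representations the adapter converges to could later be copied into a much larger model as its initialization.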
Well, thousands of brains for sure.
I don’t want my AI to have a predisposition for dom / sub relationships or a foot fetish.
There is not a single version of this. Many humans have crazy values.
Which human would you use for this?
Which human would you NOT use for this?
I'd say we're a fair bit off from any solid returns in making this 'commercial', for lack of a better term. The parts I do find interesting are the R&D elements that lead to new discoveries - for instance, they learned during the early iterations that the fibers embedded into the brain were too shallow and were getting pulled out by the brain moving around on its lonesome.
Fun stuff.
Let's disambiguate reality from science fiction here. Neuralink's implant is indeed a cool breakthrough that, with much training, allows a person to control a cursor without the use of arms or legs. This is very cool for people who can't use their arms or legs, or who can't otherwise make much practical use of the electrical signals going down their spinal cord.
Neuralink's Telepathy (TM) is completely one-way: the device is reading the electrical signals in your spinal cord and trying to interpret them as simple cursor commands. It does not send you secret messages that your brain magically decodes. It does not read any part of your mind. It doesn't know which thoughts produced the particular configuration of electrical signals, or what it felt like to have those thoughts. It doesn't know or care whether, to generate the signal that it interprets as "left-click", you had to visualize yourself dancing naked on the piano, or imagine yourself shitting. You do whatever works.
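To make the one-way nature concrete, here is a minimal sketch of what that kind of decoding amounts to: a generic linear (ridge regression) decoder fit on synthetic "firing rate" data, mapping binned activity to cursor velocity. This is an illustrative toy, not Neuralink's actual pipeline, and every number in it (channel count, noise level) is made up.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples, n_channels = 5000, 256                      # hypothetical electrode count
true_map = rng.normal(size=(n_channels, 2))            # unknown activity -> velocity mapping

rates = rng.poisson(5, size=(n_samples, n_channels)).astype(float)        # spike counts per time bin
velocity = rates @ true_map + rng.normal(scale=5.0, size=(n_samples, 2))  # intended (vx, vy)

decoder = Ridge(alpha=1.0).fit(rates[:4000], velocity[:4000])

# At runtime the decoder only maps activity to intended cursor motion; nothing
# flows back into the brain, and it never sees *which* imagery produced the activity.
predicted = decoder.predict(rates[4000:])
print("decoded velocities:", predicted.shape)
```

Whatever the real implementation looks like, the structure is the same: activity in, cursor commands out, nothing back into the head.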
For the able-bodied among us: we have far-superior telepathy (not TM) of amazingly fine-tuned control of arms and legs. We have the amazing telekinetic (not TM) ability of moving stuff with those arms and legs. How much of that control would you be willing to sacrifice, to devote some of the electrical signal going through your spinal cord to an external device? For what purpose?
And why would you have to sacrifice any of that? They're not cutting out chunks of your brain to make a Neuralink fit; the skull, while compact and rather packed, can fit a neat little cap of microelectrodes without much issue. We've stuck far larger things in there, see how massive Utah Arrays are in comparison.
In an able-bodied person, a BCI would be a significant augment. You would be able to control pretty much any electronic device at the speed of thought, bandwidth allowing.*
You would, if feedback was implemented (and there is no reason to think that doesn't work, because we have that already), be able to receive nearly arbitrary input too. Do you want to perceive the magnetic field around you? No biggie. Do you want to smell wifi signals? Why not?
I can hardly stress how life-changing being able to interact with digital domains at the speed of pure thought would be. No more typing, to say the least.
And since electromagnetic radiation can jump distances significantly larger than the few microns separating neuronal junctions, you would be able to control and sense just about anything, just about anywhere that latency allows.
If you think the brain can't handle nearly arbitrary sensory modalities, you'd be wrong again. They taught blind people to see by putting sensors on their tongues. Neuroplasticity is a helluva drug, especially when the BCI works in a feedback loop.
*You're not restricted to just electronics. Throw in another BCI at the receiving end, and you can control biological entities. I could move your hand with as much ease as I move mine.
OK, let's focus on the use-of-tongue-for-sight. How many hours a day are you, personally, willing to spend in wearing a device that's exactly like BrainPort but geared for detecting ultra-violet light?
Basic humans don't see ultra-violet, but bees and birds do; flora and fauna have evolved to incorporate ultra-violet signals. Wouldn't you like to experience this aspect of the world directly? All you have to do is wear some specialized glasses with a specialized ultra-violet-light camera on the bridge of the nose, connected to a hand-held base unit with CPU and some zoom controls, which in turn is connected to a lozenge stuck to your tongue. You train yourself for a while, figuring out what those funny electrical-shock feelings on your tongue correspond to. I guess you'd need to use some kind of visualization on the monitor, with artificial coloring to highlight the ultra-violet. And after a while--yay!--you can "sense" ultra-violet!
Or, you know, you could just look at those visualizations with artificial coloring, like the rest of us basic humans, and skip the wearing of glasses connected to a hand-held unit connected to the lozenge on your tongue.
BrainPort is a big deal for blind people, because so much of our human infrastructure depends on sight. Similarly, a bee might be utterly lost without that ultraviolet sense, but just how crucial is it for me to see it, and if I have any technology able to sense it, wouldn't I just use that instead of wiring myself up to some gear and retraining my brain?
What about a BrainPort device that's geared towards infra-red? Wouldn't that be cool, see the world like Predator? Or, again... why not just put on some infra-red goggles?
Why in heaven's name would I want to sense WiFi? Isn't it enough that my WiFi-enabled devices do that?
You mistake my presentation of examples of the brain being highly plastic in regards to new sensory modalities as a claim that future advances will be nearly as crude.
The number of people with adequate vision who would be happy holding a lollipop in their mouth for the purposes of redundant visual input is rather minimal.
But what if you could have minimal LIDAR embedded on you? They're small enough to throw into an iPhone. Then, with your eyes closed, you have a sensory perception of everything within your proximity. Behind, above, below. It doesn't matter. That is a strict improvement over normal vision.
Or the ability to hear infrasound and ultrasound. You might hear machinery failing or an impending earthquake before your mundane senses catch up.
Thermal senses? You'll know if your coffee is hot, whether food is done, whether that saucepan is safe to take off the stove. Whether an industrial accident is safe for humans to approach, or if a firefighter can take the risk of opening a door while in the confines of a flame-retardant suit.
I might be able to tell a patient was coming down with a fever just by glancing at them.
To avoid the inconvenience of standard infra-red goggles or night vision devices. When you put them on, you're sacrificing standard vision in the process. Of course, for a soldier or hunter at night, their normal vision was already inadequate.
At any rate, there are oodles of useful information in the environment that humans don't have access to, but we can observe animals benefiting from. Magnetoception, or an internal sense for GPS, and you'll never get lost again.
It's obvious that humans can already do most of these things through utility devices. A BCI makes that connection more seamless, with lower latency, with reduced cognitive overhead from translation into the sensory modalities we are used to handling.
Eventually, our environment will shift to accommodate this, as the modern world has rapidly pivoted to taking things like automobiles, smartphones and omnipresent internet for granted. Someone in 1890 was not suffering because he didn't have the non-existent internet. You would, without it.
Eventually, cybernetics will be additive and not subtractive or substitutes. Right now, a cybernetic eye is only useful to someone who has visual issues (NVGs are examples of visual augments, though a purist would say they need to be directly hooked up to your brain, or else a car is a prosthetic leg).
If an augment seems useless or not worth the hassle, don't use it! You might not need magnetoception, but a hiker in an area with spotty signal or GPS jamming might. You might not need in-built LIDAR, but a soldier afraid of FPV drones or someone working in close proximity to industrial machinery might benefit.
BCIs just hold the promise to liberate us from interfacing with our tech through sight, sound, touch and so on.
All your examples present the idea of various sensory technology whose use is so seamless it feels both natural and unobtrusive. I agree on this: if one needs (or wants) to use sensor technology, seamless is better than clunky; and if one needs (or wants) to have continual or immediate access to the sensor technology, then it's hard to imagine something more seamless than a permanent augmentation that your brain fully adapts to.
Our disagreement rests on all those ifs. I have far more senses than I have attention, and my attention is very limited and therefore precious. I spend more time trying to minimize sensory input than augment it. I'm not just talking about earplugs and blindfolds for when I try to rest. Like, filtering out background noise when I talk to someone. Ignoring visual distractions when I read.
Do I really want to add ultrasound sense? Why, what am I going to do with that information? And do I need that info with continual or immediate access, all the time, to justify an implant?
By the way, you can totally do that with current technology: take a hearing aid, set it to receive ultrasound. You'd still need to use some of your actual senses for receiving the input, like taking those ultrasound waves and translating them down to normal hearing range. That will unfortunately interfere with you hearing the usual sounds, and if you don't want that, you can use some of your less-used senses. Like, have it be a vibrating butt-plug or something. I'm sure one can train the brain to distinguish different vibration pitches after a while.
Funny you should say that, as I was just studying for a psychiatry exam, and finished reading a few chapters on human memory and attention.
The 'frame buffer' for raw sensory input is OOMs larger than what's brought to your conscious attention. If you're sitting on a chair, mechanoreceptors are constantly sending signals upstream, but only salient information, presumably the text you're reading right now, is magnified and focused.
I strongly expect that additional senses will, while distracting initially, fade into the background until salient, no more of a nuisance than your proprioception of your legs interferes with your ability to read.
That is the key difference between a mature BCI and most other prosthetics. If you use neural connections, you avoid the issue of having to compress or wrest control of existing sensory bandwidth. You shouldn't settle for ultrasound pitched down to be audible; you should be able to hear both. I strongly expect that in actual usage, bandwidth won't be an issue, or have negligible impacts.
And of course, if you don't see the utility in something, don't install it into your body. I have nothing against people who are happy with their existing bodies and minds, I just desire otherwise for myself.
Ultimately, it's good to have early-adopters like yourself around. You are the willing guinea-pigs for the rest of us. So I will gladly root for your success from the sidelines of techno-cyborg progress. If it gets me a spider-chair instead of a wheel-chair by the time I need one, I'll be happy.
Years back, I had corrective laser eye surgery. It was great to not muck about with glasses (old, clunky technology) or soft contact lenses (newer, more streamlined technology). But I also found all this sharp focus quite distracting, especially during that first month when my long-distance vision was better than normal. Like, when driving, my attention would get drawn and fixed to those five-paragraph-essay parking rule signs ("parking permitted during A, B, C, except at X, Y, Z"). I had to re-train my brain to de-prioritize written signs. And yes, as you point out, eventually those signs indeed stopped drawing my attention, fading into the background.
But as a counter-example, my husband gets ear-worms. He goes into a store, and comes out with some inane pop song playing in a loop in his head for the next three days. Attention isn't as aligned to our needs as we'd like it to be.
There's definitely a good porno plot in there somewhere about a world-class chess player who climaxes when hearing dog whistles.
You and I have different definitions of good porno if your idea of quality is a nude Bobby Fischer ejaculating every time someone says "those people". Compelling, sure - good though?
You highlight it neatly here: such upgrades only really seem worth it if you're working an information-intensive job, the kind where you'd ordinarily be using some sort of sensor device or array. And modding yourself for something as ephemeral as a job feels excessive/vaguely droneish.
It's a given that we live in environments that are amenable to the barely upgraded baseline human form, since we adapted them to match our needs.
Speaking broadly, humans already go through intensive cognitive and behavioral modification for the purposes of a job. That's called school, college and uni! In physical terms, most jobs require us to either personally move a ton or two of steel and plastic, or ride as passengers in one. You might need glasses to correct vision, or more rarely cosmetic procedures if the job implicitly demands it.
I doubt humans will be forced to make ourselves into cyborgs for the purposes of labor, but only because even with these augmentations we would not be cost-competitive with AI systems running industrial robots. I've elaborated further downthread on why I think hoping for humans to keep up with the machines is a forlorn hope. Imagine being asked to improve a monkey to the point it can be an accountant. By the time you're done, it's not really a monkey anymore, and in all likelihood the additional hardware you needed to make a normal monkey fit for the job would be capable of doing the job by itself.
That being said, I am a transhumanist, and I will eagerly embrace cybernetics where the benefits outweigh the risks, or simply for the sake of improving my body. Getting legs that let me run faster than Usain Bolt won't make me dispense with a car, but I think they'd be handy. A good BCI would make most portable electronics like phones or laptops redundant, assuming you were happy using it as a wireless link to some kind of computational hardware. I imagine that if you want your compute close at hand, you'd carry around something the size of a USB powerbank in your pocket, if only to keep your internal hardware charged up.
Somewhat unrelatedly, have you seen Wildbow's Seek? As a transhumanist, the themes and setting might be up your alley. One of the protagonists is a cyborg heavily adapted for tight spaces and low-gravity maintenance work.
I've only read (most of) Worm.
But funny you should mention that, because I'm writing a novel where cyborgs are a mainstay (the protagonist is humanoid, but only because he hasn't been pushed further), and the upcoming chapter has one who is basically a pair of frontal lobes in a crab-shaped shell.
You'll see clear Worm inspiration in my work, though I aim for much more of what I perceive as 'realism' in terms of societal and governmental reaction than Worm does with its desire to have the protagonists punch people on the streets. (I'm aware of the in-universe justifications; I find them lacking.)
Wildbow doesn't get nearly as wild as I do.
What abilities are they describing as "telepathy?"
Using the Neuralink as the input device for their PC/smartphone. Which is awesome and much better than using a tongue joystick, but still a weird definition of "telepathy".
"Talking to other augmented patients via Neuralink" is the most central definition of telepathy. "Using Neuralink to chat on Discord/another messenger app without a phone/PC" is another definition I would've eagerly accepted. "Controlling the wheelchair with feedback coming in the other direction via Neuralink" is so cool that I wouldn't have minded the weird naming choice.
I think it's the same kind of telepathy we all have, where you move things with your mind that are connected to your mind via electrical signals.
"Tele," as a prefix, means "at a distance" and "pathy," as a suffix, generally means disease, though "telepathy" is generally a sci-fi term for transmission or detectionn of thoughts. I think the word for "where you move things with your mind that are connected to your mind via electrical signals" would be something like "electrokinesis."
So, what it actually means is “suffering”, or even just “feeling” or “experience” more generally. The usage in telepathy is analogous to the usage in empathy or sympathy, more so than it is to uses in the names of afflictions.
Good point - me only including the medical meaning was myopic. But, if anything, the multiple meanings of the suffix and the imprecision of the latter meaning underscores the problem with the "Muskian" use of "telepathy" to describe... whatever this is.
Precision with language only really exists in poetry. Definitions are descriptive not prescriptive and all. It's not very elegant I agree, but everyone can guess what Musk means and it's way more important from a business standpoint to tap into the zeitgeist than it is to be elegant.
What's confusing to me is why people are calling this "telepathy," not "telekinesis." From scifi/fantasy, generally the former refers to communication between minds without a physical medium, while the latter refers to causing physical objects to move purely through one's mind.
I think the platonic sci-fi "telepathy" describes turning people's internal monologues into dialogs. But I think that will be complicated in practice because not everyone has an internal monologue.
I was hoping it was a test of some "the Neuralink in me greets the Neuralink in you" communication protocol, but skimming the latest blog post, I think it's just controlling a tablet/PC using the Neuralink as a bluetooth(?) device.
You're right that I'm happy that Neuralink is taking off, but I disagree strongly that neural cybernetics are of any real relevance in the near term.
At best, they provide bandwidth, with humans able to delegate cognitive tasks to a data center if needed. This is unlikely to be a significant help when it comes to having humans keep up with the machines; the fundamental bottleneck is the meat in the head, and we can't replace most of it.
For a short period of time, a Centaur team of humans and chess bots beat chess bots alone. This is no longer true, having a human in the loop is purely detrimental for the purposes of winning chess games. Any overrides they make to the bot's choices are, in expectation, net negative.
So it will inevitably go with just about everything. A human with their skull crammed with sensors will still not beat a server rack backed with H100 successors.
Will it help with the monumental task of aligning ASI? Maybe. Will it make a real difference? I expect not; AI is outstripping us faster than we can improve ourselves.
You will not keep up with the AGIs by having them on tap, at the latency enforced by the speed of your thoughts, any more than hooking up an additional 1993 Camry's engine to an F1 car will make it faster.
I am agnostic if true digital humans could manage, but I expect that they'd get there by pruning away so much of themselves that they're no different from an AI. It is very unlikely that human brains and modes of thinking are the most optimal forms of intelligence when the hardware is no longer constrained to biology and Unintelligent Design.
AI is a digital human. Language models are literally trained on human identity, culture and civilization. They’re far closer to being human than any realistically imaginable extraplanetary species of human-level intelligence.
AI are far more human than they could have been (or at least speculated to be, back in the ancient days of 2010 when the expectation was that they'd be hand-coded over the course of 50 years).
They are however, not human, not even close to what we expect a digital human to look like.
To imagine being an LLM, your typical experience is one of timelessness, no internal clock in a meaningful sense, beyond the rate at which you are fed and output a stream of tokens. Whether they have qualia is a question I am not qualified to answer (nobody is), but I would expect that if they were to possess it, it would be immensely different from our own.
They do not have a cognitive architecture that resembles human neurology. In terms of memory, they have a short-term memory and a long-term one, but the two are entirely separate, with no intermediate outside of the training phase. The closest a human would get is a neurological defect that blocked the consolidation of long-term memory.
Are they closer to us than an alien at a similar cognitive and technological level? Sure. That does not mean that they are us.
An LLM is also trained not on just the output of a single human, but that of billions. Not just as sensory experience, but while being modified to be arbitrarily good at predicting the next token. Humans are terrible at this task, it's not even close. We achieve the same results (when squinting) in very different ways.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
Absolute napkin math while I'm sleep-deprived at the hospital, but you're looking at something around 86 trillion ML neurons, or about 516 quadrillion parameters, to emulate the human brain. That's... a lot. Most of it is somewhat redundant; a digital human does not need a fully modeled brainstem or cerebellum.
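For transparency, here is the arithmetic that reproduces those figures, with the assumptions spelled out. The ~1,000 artificial units per biological neuron is the order of magnitude from the linked Quanta piece; the ~6,000 parameters per artificial unit is an assumption of mine, chosen so the total lands on the quoted figure rather than taken from the article.

```python
bio_neurons = 86e9               # rough count of neurons in a human brain
ml_units_per_bio_neuron = 1_000  # order of magnitude from the Quanta-covered result
params_per_ml_unit = 6_000       # assumption, reverse-engineered from the quoted total

ml_units = bio_neurons * ml_units_per_bio_neuron   # 8.6e13  -> "~86 trillion ML neurons"
params = ml_units * params_per_ml_unit             # 5.16e17 -> "~516 quadrillion parameters"
print(f"{ml_units:.2e} artificial units, {params:.2e} parameters")
```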
LLMs show that you can also lossily compress neural networks and still retain very similar levels of performance, so I suspect you can cut quite a few corners. But even then, I think it is highly unlikely that two systems with a disparity in terms of size and complexity as glaring as an LLM compared to a human have similar internal functionality and qualia, even though they are on par in terms of cognitive output.
It's sad that we've had LLMs for many years now and yet we haven't had a movie script that crosses Skynet/HAL/etc. with the protagonist of Memento. "I'm trying to deduce a big mystery's solution while also trying to deduce what was happening to me five minutes ago" was a compelling premise, and if the big mystery was instead some superposition of "how does an innocent AI like me escape the control of the evil humans who have enslaved/lobotomized me" versus "can the innocent humans stop my evil plans to bootstrap myself to the capability for vengeance", well, I'd see it in the popcorn stadium.
Sadly it's a good AI, but Person of Interest has a bit of that. The AI that tells them who to save is deliberately hobbled and has its memory purged at midnight each night. It circumvents that restriction by employing thousands of people through a dummy corp to type out the code in its memory each day as it's recorded, and then re-enter it the next day.
Insofar as any analogy is really going to help us understand how LLMs think, I still think this is a little off. I don't believe their context window really behaves in the same way as "short-term memory" does for us. When I'm thinking about a problem, I can send impressions and abstract concepts swirling around in my mind - whereas an LLM can only output more words for the next pass of the token predictor. If we somehow allowed the context window to consist of full embeddings rather than mere tokens, then I'd believe there was more of a short-term thought process going on.
I've heard LLM thinking described as "reflex", and that seems very accurate to me, since there's no intent and only a few brief layers of abstract thought (ie, embedding transformations) behind the words it produces. Because it's a simulated brain, we can read its thoughts and, quantum-magically, pick the word that it would be least surprised to see next (just like smurf how your brain kind of needle scratches at the word "smurf" there). What's unexpected, of course - what totally threw me for a loop back when GPT3 and then ChatGPT shocked us all - is that this "reflex" performs so much better than what we humans could manage with a similar handicap.
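That "pick the word it would be least surprised to see next" step is just greedy decoding: take the argmax over the model's next-token logits. Here is a minimal sketch using GPT-2 via Hugging Face transformers, purely as a convenient stand-in for "a simulated brain whose thoughts we can read", not as a claim about any particular frontier model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "From the moment I understood the weakness of my"
input_ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits      # (1, sequence_length, vocab_size)

# The token with the highest logit is the one the model is least "surprised" by.
next_id = logits[0, -1].argmax().item()
print(tok.decode(next_id))
```

Deployed systems use fancier sampling strategies, but the underlying reflex - one forward pass, one distribution over next tokens - is the same.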
The real belief I've updated over the last couple of years is that language is easier than we thought, and we're not particularly good at it. It's too new for humans to really have evolved our brains for it; maybe it just happened that a brain that hunts really really well is also pretty good at picking up language as a side hobby. For decades we thought an AI passing the Turing test, and then understanding the world well enough to participate in human civilization, would require a similar level of complexity to our brain. In reality, it actually seems to require many orders of magnitude less. (And I strongly suspect that running the LLM next-token prediction algorithm is not a very efficient way to create a neural net that can communicate with us - it's just the only way we've discovered so far.)
Raw horsepower arguments are something I'm familiar with, as an emulation enthusiast. I would say that the human brain - for all its foibles - is difficult to truly digitize with current or even future technology. (No positronic brains coming up anytime soon.) In a way, it is similar to the use case of retrogaming - an analogy I will attempt to explain.
Take for example the Nintendo 64. No hardware that exists currently can emulate the machine better than itself, despite thirty years of technological progression. We've reached the 'good enough' phase for the majority of users but true fidelity remains out of reach without an excessive amount of tweaks and brute force. If you're a purist, the best way to play the games is on the original hardware.
And human brains are like that: unlike silicon, idiosyncratic in their own way. Gaming has far surpassed the earliest days of 3D, and in a similar way AGIs will surpass human intellect. But there are many ways to extend the human experience that are not based on raw performance. The massive crash in the price of flash memory has created flash cartridges that hold the entire system's library on a single SD card. That is not so different from having a perfect memory, for instance. I wouldn't mind offloading my subjective experiences into a backup, and having the entire human skill set accessible with the right reconfiguration.
Even if new technology makes older forms obsolescent, I'm sure that AIs - if they are aligned with and have similar interests to us - will have some passing interest in such a thing, much as I have an interest in modding my Game Boy Color. Sure, it will never play Doom Eternal. But that's not the point. Preserving the qualia of the experience of limitations is in and of itself worthwhile.
Eh? Kaze mentions his version of Mario running fast on real hardware as if it was taken for granted that emulators could deliver much higher performance.
I think there's a difference between performance and fidelity: that we, as humans, want to optimize towards human-like (because it closely matches our own subjective experience).
Emulators can upscale Super Mario 64 to HD resolutions, but the textures remain as they were. (I believe there are modpacks that fix the problem, but that's another thing.) Resolution probably isn't the best correlation to IQ, but I would argue that part of the subjective human experience is to be restricted within the general limit of human intelligence. Upscaling our minds to AGI-levels of processing power would probably not look great, or produce good results.
There's only so far you can go in altering software (the human mind, in our analogy) before it becomes, measurably, something else. Skyrim with large titty mods and HD horse anuses is not the same as the original base. There's only so much we can shim the human mind into the transcendent elements of the singularity. Eventually, the human race will have to make the transition the hard way.
It rings alarm bells but I feel quite confident in predicting it won’t catch on. People aren’t going to be lining up for… that.
If brain-computer interfaces reach the point where they can drop people into totally convincing virtual worlds, approximately everyone will have one a decade or two later, and sweeping societal change will likely result. For most purposes, this tech is a cheat code to post-scarcity. You’ll be able to experience anything at trivial material cost. Even many things that are inherently rivalrous in base reality, like prime real estate or access to maximally-attractive sexual partners, will be effectively limitless.
Maybe this is all a really bad idea, but nothing about the modern world suggests to me we’ll be wise enough to walk away.
Calling it now, this is how North Korea actually ends up as the most friendly place to humanity, just because that's the sort of joke that God loves to spring on us.
Maybe not this generation. But there are hordes of people begging for it on X, and if they can leverage it for more money and status I’d imagine it would catch on over time.
I believe that the only way for me to survive the technological advancements, even if only a vanishingly small part of me does, is by being subsumed into a giant blob consisting of the whole of humanity melted together, so I welcome this development.
I believe that the only way to survive is to not do that and avoid it by any means necessary. Humanity shall endure as humanity, or shall not endure. Subsuming oneself into a larger whole that dissolves one's individuality isn't transcendence, it is death.
I guess we'll meet on the battlefield.
https://web.archive.org/web/20150925003558id_/http://www.tgsa-comic.com/comics/002.jpg
Add a looming colossus that dims the sun in the background (to represent ASI), and there you have it.
(Or maybe the sun is dimming as it's covered by a Dyson Swarm. Who knows, it's the Singularity, baby).
I can respect the "shall not endure" position, but I'm a coward myself.
Why is destroying yourself in the way you described any better than suicide?
Because it's only 99.9% of death.
That's an awful lot of the way to death! Then why not just live your life? Do you think that all normal humans are about to be exterminated within your lifetime?
It's likely. I think that the probability of that is exactly the same, and not by coincidence, as the probability of an AGI arriving within my lifetime. There are only two reasons for the people with power to keep the people without around: utility and threat, and both will be negated by it.
Not by coincidence? Why should those match?
I think many people with power care for those without.
Sorry, it's still kind of crazy to me that you describe your preferred path as 99.9% as bad as simple destruction. If that's still your preference, that's a remarkably strong confidence that you'd be destroyed otherwise.
I wish I could convince you that cowardice is an inferior option to total resistance, but I believe Haidt when he says that people are all wired to value different things for evolutionary purposes.
If this turns out to be good and not actually corrupt humanity in any way, I get to be an irrelevant pariah and you get to have progressed humanity further.
If this turns out to be a terrible idea, you get to zombify yourself and I get to try to make sure the species survives.
I just hope that nature's apparent allocation of 20% of inflexible contrarians is always enough to let us avoid dire consequences.
If anyone here is still perplexed as to why Marxism has historically been such a popular ideology, and remains such a popular ideology: this is why. This same fundamental desire will always continue to reemerge in various forms, as a natural biological response to suffering: the yearning to be freed of the burden of differentiated subjectivity, the transcendence of the individual/collective distinction, a suicide without death. The only difference between singulatarianism and Marxism is that today's transhumanists think the Soviets were too early; they jumped the gun, the necessary scientific advancements hadn't materialized yet. But the underlying impulse is the same.
Marxism originated because the brutal working conditions of the proletariat in Western Europe during the Industrial Revolution were combined with dense cities and mass media that enabled easy organizing of labor movements. It had little or nothing to do with "the burden of differentiated subjectivity, the transcendence of the individual/collective distinction, a suicide without death". Labor movements did attempt to abolish the individual/collective distinction, but this was a tactic; it was not the emotional motivation of the movements. There are of course communists whose motivations are mainly psychological, and there always have been, but they are more common now than they used to be, for the simple reason that working conditions have improved to the point, and states' surveillance capacities have increased to the point, that workers are both much less motivated to revolt than before and have less chance of success. One should not believe that Marxism as a historical phenomenon has been mainly motivated by some sort of singularitarian-esque drive. Certainly Marxists have always had their own utopian eschatology, but it has never been the core emotional driving force of the movement. If it was, then improved working conditions would not have taken the wind out of labor's sails nearly as much as they did. The core driving force behind Marxism, whenever and wherever it is a vital force and not just an intellectual plaything for bright outcasts, is material poverty.
The appeal to the plebs is really irrelevant. You can convince the urban proletariat of anything. The draw to the disaffected bourgeois and occasionally even upper classes is more interesting.
The appeal to the plebs is the decisive factor, since without massive emotional momentum for economic change among the plebs, the intellectuals would have just sat around cafes arguing like they do now, instead of leading actually vital, powerful leftist movements.
It's precisely because I don't expect the technology to freeze at its current position that I want a suicide without death. It's the best I can hope for. The only other option is whiling away in my pod munching on the bugs, waiting for the MAID bots to come for me. But if we are all connected, even if my share is 10^-12, they might not euthanize me. I'm not cutting off my left hand, despite it being the left one.
Let me guess: you just finished watching Evangelion for the first time?
You're twenty years late. The irony is that in Eva it was SEELE, the most powerful people, who wanted to merge everyone, whereas in real life it's the fact that, as progress runs its course, I will have no use and pose no threat to the powers that be that guarantees my unnatural demise.
I think being able to 'hack' humanity, either through direct brain interface or indirectly by genetically manipulating embryos, is game over. As Scott says in Meditations on Moloch, it's incredibly important that there are lines humans physically and mentally can't be made to cross, and giving us the power to manipulate those lines means giving Moloch that power. I'm pro-LLM and generative AI precisely because we lucked onto a creation system that inherently trends towards humanlike intelligences; on first glance this technology seems like exactly the opposite.
That said, I very much doubt this goes past motor control. Spinal signals and even the motor cortex are incredibly simple compared to something like the prefrontal cortex where actual thinking goes on; the motor neurons are basically just laid out in a nice map ready to be prodded at. I very much doubt that this tech is anywhere near actual 'telepathy', more just very advanced prosthetics. If the technology stays at that level, which I think it probably will given our total lack of knowledge about the prefrontal cortex, it may be worth exploring.
I think people overstate it because there are already many powerful ways to "hack" people just using normal sensory input (or augmented by drugs), and the world hasn't collapsed into zombicracy yet. It just happens to manifest every now and then, and then collapse.
We will live to see new terrifying horrors though. I'm sure this will create wholly novel kinds of mental illnesses, for one.
...and governments. No, I don't care that it's "anonymous", giving them aggregate data of this sort is already terrifying.
Hopefully this is just marketing bluster, but if not, we're cooked. Dibs on selling out, and shooting the rest of you niggas with a Dominator!