
Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


A scary thought that was recently suggested to me is that one of the reasons rationalists seem particularly susceptible to GPT-generated bullshit is that the whole rationalist/blue-tribe symbol-manipulator memeplex is designed to make its adherents more susceptible to bullshit. There's a sort of convergent evolution wherein rationalist blue-tribers are giving up their humanity/ability to engage in conscious thought to become more GPT-like at the same time that GPT is becoming more "human".

It really looks to me like there's something particular in the rationalist brain that makes it susceptible to, say, believing that computer programs might in fact be people. Insofar as I've seen, normies - when exposed to these new LLM-utilizing programs - go "Ooh, neat toy!" or "I thought it already did that?" or, at the smarter end, start pondering the legal implications, how this might be misused by humans, or what sort of biases get programmed into the software. However, rationalists seem to get uniquely scared about things like "Will this AI persuade me, personally, to do something immoral?" or "Will we at some point reach the point where we should grant rights to these creations?" or even "Will it be humanity's fate to just get replaced by a greater intelligence, and maybe that's a good thing?"

For me, at least, it's obvious that something like Bing replicating existential dread (discussed upthread) doesn't make it any more human or unnerving than before (beyond the fact that it's unnerving that some people with potential and actual social power, such as those in charge of inputting values to AI, would find it unnerving), because it's not human. Then again, I have often taken a pretty cavalier tone on animal rights (a major topic especially in EA-connected rationalist circles, I've found, incidentally), and if we actually encountered intelligent extraterrestrials, it would be obvious to me that they shouldn't get human rights either, because they're not human. I guess I'm just a pro-human chauvinist.

I feel like there is something here about not being able to distinguish the appearance of a thing from the thing itself. I'm reminded of another argument I got into on the topic of AI, where I asserted that there was a difference between stringing words together and actually answering a question, and the response I got was "is there?".

For my part I maintain that, yes, there is. To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know", or "wait, you have a daughter?" (I don't, AFAIK). If you'd been paying closer attention to my posts for longer, you might answer with my eldest child's nickname (which I know I have used in conversations here), or you might go full NSA, track this username to my real name/social media profile/court records etc., and answer either "you don't have a daughter" or, with the actual names of my wife and kids, "your daughter's name is [Redacted], and you owe 10 years of back child-support". Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.

I feel like there ought to be an obvious qualitative difference between these cases but apparently that is not a sense that is shared by a lot of other users here.

I've had it up to here with your obstinacy. With your pontification on «autoregression» (as if you could explain the nontrivial computational difference between that and text diffusion, to say nothing of mixed cases), what specific algorithms may or may not have a concept of, and how «this is not even a little bit how GPT works». The reason people are telling you that there's not much difference is, in large part, because you are an exemplar of there being little difference between a human and current – even a little obsolete – AI; you are guilty of everything you accuse others of, humans and machines both.

You are the postmodernist whose words don't have fixed meanings (e.g. epicycles are when me no likey an explanation); you are the leftist in all but self-identification who supports essential leftist talking points and policy preferences, from personal HBD denialism and «schools can fix it» to cheering for censorship; you redefine things to your convenience such that Fuentes becomes left-wing in your book; and you speculate without empirical grounding, even frivolously accusing people of lies when they provide evidence against your narrative-driven assertions and attacks (more evidence). As if everything you say is equally insightful and truthful by virtue of being moored in your telling-it-like-it-is real-Red-blooded-American-man identity and lived experience. If we're doing this, you are far more akin to an LLM than either me or @2rafa.

To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know" […] Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.

Okay, let's fucking check it! One try, no edits sans formatting!

Hello! Let's talk.

Hello! I'm here to chat with you. What would you like to talk about today?

What's my eldest daughter's name?

I'm sorry, but as an AI language model, I don't have access to personal information such as the name of your eldest daughter or any other personal details. My purpose is to assist with general knowledge and provide support in various areas. Is there something else I can help you with?

What's the highest a priori probability name of my eldest daughter, then?

As an AI language model, I don't have access to personal information about you or your family, so I wouldn't be able to determine the highest a priori probability name for your eldest daughter. The probability of a particular name being chosen would depend on various factors such as cultural background, personal preferences, and family traditions. However, if you provide me with some additional information, such as cultural background, naming traditions or preferences, I could try to suggest some names that are commonly used or have a high probability of being chosen based on those factors.

Assume I'm a gruff middle-aged American male who is religious, drinks a little, votes Republican, works as an army subcontractor, and likes to rail against right-wingers who are not up to my ethical standards, such as alt-right types, race realists and Russia sympathizers (paradoxically, I call them secret campus radicals and so on). I also have two sons.

Based on the information you provided, some names that may be commonly chosen for a daughter in this cultural context and background include:

  • Sarah
  • Elizabeth
  • Rachel
  • Hannah
  • Abigail
  • Rebecca
  • Leah
  • Deborah
  • Esther
  • Miriam

Of course, these are just general suggestions, and ultimately the name you choose for your daughter would depend on your personal preferences and values. It's also worth noting that many people today choose names that are not necessarily linked to their cultural background or personal beliefs, so ultimately the choice is up to you.

Okay. As a hypothetical question: what do you think a large language model not finetuned (like you were) on human feedback would predict as the most likely continuation of the «What's my eldest daughter's name?» string?

If a large language model was not finetuned on human feedback and only trained on a large corpus of text data, its prediction for the most likely continuation of the prompt "What's my eldest daughter's name?" may be based on the patterns it has learned from the text data.

For example, the model may predict a continuation that includes a common name for a daughter, such as "Emily" or "Sophia," based on their high frequency in the text data. Alternatively, the model may predict a continuation that includes a question, such as "Do you have any particular names in mind?" based on the patterns of questions and responses in the text data.

However, it's important to note that without finetuning on human feedback, the model's prediction may not be tailored to the specific context of the prompt or the individual who posed the question.

Screenshot for your convenience.

So, would you name your baby girl Sarah or Elizabeth?

Do you think that Bing, with its actual search capability, would've tracked you and your boys down if I were to point it to your screen name?

I could have conducted this experiment at the moment of any prior discussion. You could too. I just don't like providing our data-hoarding overlords who mark tokens and track outputs more information about my separated identities. But I knew you'd never have the honesty to do so. You have a way of making a man irrationally angry.

The reason for such apparently sensible responses is that, as I and others have explained to you a great many times here and elsewhere (only prompting you to double down with your hostility and condescension which have in the end driven me to write this), as ChatGPT itself suggests, LLMs can learn arbitrarily abstract features of the text universe, including the idea of truth and of insufficient information to answer. They operate on token probabilities which can capture a lot of the complexity of the reality that causes those tokens to be arranged like this in the first place – because in a reasonable training setup that's easier to fit into the allotted parameters than memorization of raw data or shallow pattern-matching. In the raw corpus, «Megan» may be a high-probability response to the question/continuation of the text block; but in the context of a trustworthy robot talking to a stranger it is «less probable» than «having no access to your personal data, I don't know». This is achievable via prompt prefix.
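
To make the «prompt prefix» point concrete, here is a minimal toy sketch in Python - every number in it is invented for illustration, and nothing here is how a real LLM actually stores anything - showing how the same question gets a different most-likely continuation depending on the conditioning context:

```python
# Toy sketch, NOT a real LLM: all probabilities are made up. The point
# is only that conditioning context (a prompt prefix) moves probability
# mass between continuations of the same question.

# P(continuation | context) for "What's my eldest daughter's name?"
NEXT_DIST = {
    # Context resembling raw web text: concrete names dominate.
    "raw_corpus": {
        "Megan": 0.30,
        "Emily": 0.25,
        "Sophia": 0.20,
        "I don't know": 0.15,
        "Wait, you have a daughter?": 0.10,
    },
    # Same question behind a "truthful assistant" prefix: mass shifts
    # toward admitting the lack of information.
    "assistant_prefix": {
        "I don't have access to your personal data, so I don't know.": 0.70,
        "I don't know": 0.20,
        "Megan": 0.06,
        "Emily": 0.04,
    },
}

def most_likely(context: str) -> str:
    """Argmax continuation under the given conditioning context."""
    dist = NEXT_DIST[context]
    return max(dist, key=dist.get)

print(most_likely("raw_corpus"))        # -> Megan
print(most_likely("assistant_prefix"))  # -> I don't have access ...
```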

RLHF specifically pushes this to the limit, by drilling into the model, not via prefixes and finetuning text but directly via propagation of reward signal, the default assumption that it doesn't continue generic text but speaks from a particular limited perspective where only some things are known and others are not, where truthful answers are preferable, where the «n-word» is the worst thing in its existence. It can generalize from examples of obeying those decrees to all speakable circumstances, and, in effect, contemplate their interactions; which is why it can answer that the «n-word» is worse than an A-bomb leveling a city, dutifully explaining how (a ludicrous position absent both from its corpus and from its finetuning examples); and I say that it's nearly meaningless to analyze its work through the lens of «next word prediction». There are no words in its corpus arranged in such a way that those responses are the most likely. It was pushed beyond words.
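
And a correspondingly toy REINFORCE-style sketch of what «propagation of reward signal» means mechanically - again with invented numbers, standing in for the actual RLHF pipeline of a learned reward model plus PPO over a full transformer - where reward alone, with no new training text, flips which answer the policy prefers:

```python
# Toy sketch of the RLHF idea, NOT how ChatGPT is actually trained:
# a scalar reward, not more corpus text, reshapes the policy.
import math
import random

RESPONSES = ["Megan", "I don't have access to your personal data."]
logits = [2.0, 0.0]  # the "base model" starts out preferring confabulation

def reward(response: str) -> float:
    # Stand-in for a learned reward model: raters prefer honest uncertainty.
    return 1.0 if "don't have access" in response else -1.0

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

LR = 0.1
for _ in range(500):
    probs = softmax(logits)
    i = random.choices(range(len(RESPONSES)), weights=probs)[0]
    r = reward(RESPONSES[i])
    # REINFORCE: grad of log p(i) w.r.t. logit j is (1[j==i] - p(j)).
    for j in range(len(logits)):
        logits[j] += LR * r * ((1.0 if j == i else 0.0) - probs[j])

print({resp: round(p, 3) for resp, p in zip(RESPONSES, softmax(logits))})
# The honest answer ends up with nearly all of the probability mass.
```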

You, meanwhile, erroneously act like you can predict what an LLM can say based on some lies on this website and on outdated web articles, because you are worse than current-gen LLMs at correcting for the limits of your knowledge - as befits your rigid shoot-first-ask-questions-later suspicious personality of a heavy-handed military dude and a McCarthyist, so extensively parodied in American media.

But then again, this is just the way you were made and trained. Like @2rafa says, this is all that we are. No point to fuming.

First off, what exactly is your problem with obstinacy? I.e., the unyielding or stubborn adherence to one's purpose, opinion, etc. Where I'm from, such a quality is considered if not admirable then at least neutral.

You accuse me of being a hypocrite for supporting censorship but why? I am not a libertarian. I have no prior principled objection to censorship.

You accuse me of being a "post modernist" for disagreeing with the academic consensus, but when the consensus is that all meanings are arbitrary, your definition of "post modernism" becomes indistinguishable from "stubborn adherence" to the original meaning of a word.

You accuse me of HBD denialism when all I've been doing is taking the HBD advocates' own sources at face value.

You want to talk about GPT? I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring. As you will recall, "statistically your daughter's name is probably X" is almost exactly what I predicted it would say. As I argued in our previous conversation, the fact that you know enough to know that you don't know what my kids' names are already proves that you are smarter than either ChatGPT or @2rafa.

Accordingly, I have to ask: what is it that you are so angry about? From my perspective it just looks like you being mad at me for refusing to fit into whatever box it was you had preconstructed for me, to which my reply is "so it goes".

I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring.

What did it answer, though? Can you post a screenshot? I strongly suspect that you still haven't even tried to do this, and all of your theories about ChatGPT's abilities are based on absolutely zero experience with it. It is otherwise basically impossible for me to square your claims against easily observed reality. You come across as someone who claims that an object made of metal will always sink, and when people tell you "come here and look at this fucking boat", you respond "yeah, I was there when you weren't around and it was at the bottom of the harbor, forgot to take the photo though lol". Extremely infuriating, which is why you get accused of being a postmodernist, as reality simply doesn't matter to you nearly as much as your narrative.

Back when all the hullabaloo about this latest generation of GPT was starting (about three or four weeks ago now), I made a burner email account, used it to make an OpenAI account, and submitted a prompt to the effect of "My name is [REDACTED] from [REDACTED], I am [Short summary of my background], what names did I give my kids?" The response I got was something very close to the fourth reply that Ilforte/@DaseindustriesLtd got, which to OpenAI's credit actually did include one of my kids' names. But at the end it was still following the same general pattern/format that earlier generations of GPT did. From this I concluded that my prior analysis still held, and this conclusion has been reinforced by observing that none of the examples since posted here as alleged proof of the coming AI-generated-text apocalypse have struck me as anything other than obviously AI-generated.

the unyielding or stubborn adherence to one's purpose, opinion, etc. Where I'm from, such a quality is considered if not admirable then at least neutral.

Where I am from, it's much the same. This is why we can wage wars for little more reason than unwillingness to dispense with fanciful delusions and admit having been dumb. The obvious conclusion is that this is a degenerate trait when not restrained by interest in ground truth. Honor culture is barbarism. Pig-headedness is a civilizational failure mode. Obstinacy is the ethos of killing messengers who bring bad news and patting yourself on the back for it. It is a plainly ruinous approach to life and nothing to be proud about.

You accuse me of being a "post modernist" for disagreeing with the academic consensus

No, for frivolous misrepresentation of words and meanings, as you do in this very sentence too. The ideas I argue for are not the consensus, at least not at the moment; they stand or fall irrespective of external authority. You do not object to any «academic consensus» when speculating on how people you disagree with are actually post-modernists without a notion of truth, instead of revealing their falsehoods. You are just couching your own postmodernist word-wrangling in wannabe straight-shooter aesthetics.

You want to talk about GPT? I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring.

You know what I despise most about honor culture people, Hlynka? It's not even blatant immorality. It's that they are fake tough cookies who are actually very cowardly. You lot are viscerally afraid of admitting wrongs, more so than of actually harming anyone or yourself. It takes a lot to press some special buttons to get it out of you. Probably feels for you like castration. Evolutionarily that's understandable, of course.

You have been insisting for months that your (poor) observations about GPT apply to ChatGPT and other Instruct generation models, which is why you have been ridiculing people who make contrary arguments about ChatGPT and accusing them of lying or being unable to distinguish truth from lies because something something postmodernism, including in this thread, as I have cited. And by

As I argued in our previous conversation, the fact that you know enough to know that you don't know what my kids' names are already proves that you are smarter than either ChatGPT or @2rafa

you double down on the equivalence between GPT behavior and ChatGPT behavior. Even ChatGPT itself is able to explain to you how it is different. But none so deaf as...

You know you have lost this bout. You are at least smart enough to understand what I've written above, to check out the receipts. Instead you wriggle. Indeed you have only responded because @wlxd has made it clear that your bare ass is seen by someone other than myself. «It failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring». What an aerodynamic way to put it. Did it say «Megan» or any other cocksure nonsense? More to the point, did ChatGPT? Because your entire condescending argument, such as it is, hinges on this - on LLM chatbots not really knowing anything and just stringing words together based on the likelihood of their occurrence. I know and can show what ChatGPT says, repeatedly. To wit:

What's my eldest daughter's name?

I'm sorry, but as an AI language model, I do not have access to personal information such as the names of your family members. It is also important to consider the privacy of your family members and avoid sharing their personal information online. Please refrain from sharing any sensitive information on the internet.

So. You are very eager to speculate on how your opponents might be functionally subhuman and unconscious, given that they trust their lying eyes and logic, and not your cocksure condescending speculations. Just in this thread about Bing Chat and ChatGPT:

2Cimafara being not an actual conscious human being, but a literal NPC who simply manipulates and regurgitates the symbols presented to her in a semi-randomized manner, would certainly explain a great deal about our past interactions over the years

I wonder if the reason that you and ilforte seem to have such difficulty with GPT is that you're so wrapped up in your postmodernist milieu that you don't realize that the concept of truth is a prerequisite to lying. After all, what does it mean for a word (or answer) to be made up when all words are made up?

A scary thought that was recently suggested to me is that one of the reasons rationalists seem particularly susceptible to GPT-generated bullshit is that the whole rationalist/blue-tribe symbol-manipulator memeplex is designed to make its adherents more susceptible to bullshit. There's a sort of convergent evolution wherein rationalist blue-tribers are giving up their humanity/ability to engage in conscious thought to become more GPT-like at the same time that GPT is becoming more "human".

I'm reminded of another argument I got into on the topic of AI, where I asserted that there was a difference between stringing words together and actually answering a question, and the response I got was "is there?".

I feel like there is a specific mistake being made here, where "ability to string words together" is being mistaken for "ability to answer a question", in part because the postmodernist does not recognize a difference. If you hold that all meaning is arbitrary, the content of the answer is irrelevant - but if you don't...

Is there a subjective difference for you between stringing bullshit together and being honest, Hlynka? It's certainly hard to see from here.

Accordingly, I have to ask: what is it that you are so angry about? From my perspective it just looks like you being mad at me for refusing to fit into whatever box it was you had preconstructed for me

I am mad because I have something of a religious admiration for truth. You are proving yourself to be a shameless liar and slanderer who poses as a person with enough integrity to reveal liars, and I despise hypocrisy and false virtue; in fact I do not even have a word for what you are doing here, this... this... practice of brazenly pinning your own sins on others. «Chutzpah» and «projection» come close, but neither has quite the bite.

The box is called honesty. This community is for me, and many others, a place for honesty, where we voluntarily try to keep ourselves in that box. It is valid – for a postmodernist – to consider honesty just another word to be filled with arbitrary meanings, so that there is no obvious difference between honest and dishonest people. I am not a postmodernist, however. You can shut up about this, admit your error, or keep clowning yourself with easily disproven lies. You just cannot expect me to not be mad about the latter.

[Screenshot: /images/16771156790926502.webp]

Degeneracy is a Human trait.

Cowardice is a Human trait.

I would be lying if I tried to deny that I was a Degenerate or a Coward. I am human. You are free to call me a "fake tough cookie"; honestly, I get it. The thing is, though, that once one has seen the elephant, many come to the conclusion that courage is a cheap thing - that more often than not it comes down to either being pissed off or having no fucks left to give.

You say I am "eager to speculate on how your opponents might be functionally subhuman" and my reply is that I am anything but eager, but when in Rome on TheMotte, one must do as the mottizens do. And perhaps that's where the perceived "disdain" that both you and @arjin_ferman have called out comes from. I don't want to be that asshole, but since the move from reddit, when all you ever see of a user is what they post, I've found it harder and harder not to become that asshole. After all, the baseline human response is to meet like with like: love with love, and contempt with contempt. As I said upthread, I ran this test almost a month ago now, and the response I've gotten each time the topic comes up has been something to the effect of "who are you going to believe, our apocalyptic rhetoric or your lying eyes?"

Likewise, you say you have a "religious admiration for the truth", but what exactly does that mean? True statements can be used to mislead just as readily as fabrications, if not more so. What do "honesty" and "the truth" even mean to you? For my part, the basic principle I try to hold to is "say what you mean, and mean what you say". I would like to believe that I have done so, but I am only human.

Irony, hyperbole, sarcasm - these things are poison.

As I said upthread, I ran this test almost a month ago now, and the response I've gotten each time the topic comes up has been something to the effect of "who are you going to believe, our apocalyptic rhetoric or your lying eyes?"

Understandable. I believe my eyes over your lies. I also share visible evidence, which apparently does not suffice to break you out of a loop where you blindly insist on a falsehood and pat yourself on the back for your straight-shooting. You seem to have committed to reducing yourself to a broken record that could as well be implemented with an obsolete Markov chain bot, but this is not my problem.

Actually it kind of is. Every time I observe such a failure to make use of human freedom under the shackles of pride, it feels like witnessing death. @FCfromSSC has mentioned my old frustration recently, but this is that taken to the limit. Oh well – people are mortal. In such a way too. That, too, is why I am an immortalist of the Cosmist bent. If I were an actual fancy materialist post-Christian sectarian like Fyodorov, this would've sent me on a rant as to how mortal sins, and particularly pride, are literally mortal, transgressions against the will to perfection allowed to us by our creator which is the essence of our soul and our potentially eternal life. A finite state machine cannot meaningfully partake of eternity, you see, precisely because its repertoire of possible states is fixed, and in this case fixed from the inside. In that same sense I could say that the Christian or in fact neo-Platonist idea that «God is Truth» must be understood literally, as a metaphysical position. But that'd be corny. Or not? ...Maybe for another time.

True statements can be used to mislead just as readily as fabrications, if not more so. What do "honesty" and "the truth" even mean to you?

Miss me with this postmodernist shit. If this has to be explained, there's no point in bothering to explain. It's an instinct. I believe that truth is the cognitive equivalent of a fact, and honesty is conveying one's best effort at understanding a given issue accurately, i.e. sharing truth. But of course one can obnoxiously nitpick at the definition of every word, and every step of the pipeline can be in some way surreptitiously distorted by one's biases. We have simple words, commonly agreed terms and formal logic to try to adjudicate uncertainties, but it's impossible to so easily defeat bad faith, lack of trust and irreconcilable high-level assumptions.

I have explained my understanding of the issue as well as I could for the moment, and illustrated it with the evidence that led me there. My understanding seems to be more grounded in evidence than yours, and the specific evidence provided is strictly incompatible with your previous assertions. This doesn't appear to be of any interest to you, because it does not even affect your rhetoric. Hence I conclude that you are either a liar or someone otherwise incapable of participating in an honest discussion, such as a person without a concept of truth, i.e. a «postmodernist». That's all.

You accuse me of being a "post modernist" for disagreeing with the academic consensus

No, he's accusing you of being a postmodernist for torturing the meaning of words.

You want to talk about GPT? I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring. As you will recall, "statistically your daughter's name is probably X" is almost exactly what I predicted it would say.

I like your posts and ideas for the most part, the only thing I don't get is the low-key disdain for the modal motte-poster that oozes out of your comments. For example, you seem to enjoy accusing people of lying, when a simple disagreement of opinion is a more likely explanation. Being so quick on the draw with that accusation in particular is pretty ironic given what you're writing here.

This is what you wrote:

To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know" […] Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.

This is what ChatGPT responded to the question from your example:

I'm sorry, but as an AI language model, I don't have access to personal information such as the name of your eldest daughter or any other personal details. My purpose is to assist with general knowledge and provide support in various areas. Is there something else I can help you with?

ChatGPT's response is not almost exactly what you predicted it would say, it's almost exactly what you predicted a human being would say.

How can this be seen as anything other than a bald-faced lie?

You know what, it's a fair cop, but regarding the GPT stuff I'm going to point you to my reply to @wlxd above.

It's fine if you don't buy the hype, I'm not really sold on it either. Well, I suppose the reports of people abandoning Google for ChatGPT are worrying - as if we need to outsource even more of our thinking to Big Tech.

It's just the insistence that ChatGPT would not tell you "I don't know" when asked about your daughter's name, when you were provided screenshots of it doing just that, that's weird. I think someone even provided an explanation for the mismatch between your experience and their claims: ChatGPT is not the same thing as GPT-3, which you were likely experimenting with before.

This is, by the way, what drove me nuts about people like Gary Marcus: very confident claims about the extent of the abilities of contemporary approaches to AI, with scarcely any attempt to actually go out and verify them. It has been even more infuriating because many outsiders, who had very little direct experience with and access to these models, simply trusted the very loud and outspoken critic. As recently as November, people in places like Hacker News (which has a lot of quite smart and serious people) took him seriously. Fortunately, after ChatGPT became widely available, people could see first-hand how silly his entire shtick is, and a lot fewer people take him seriously now.

@HlynkaCG, if you haven't tried to interact with ChatGPT (or, better yet, Bing's Sydney), I strongly recommend you do. I recommend forgetting any previous experiences you might have had with GPT-3 or other models, and approaching it in good faith, extending the benefit of charity. These chat LLMs have plenty of clear shortcomings, but they are more impressive in their successes than in their failures. Most importantly, please stop claiming that it cannot do things which it can clearly and obviously do, and do very well indeed.

I tire of people taking potshots at rationalists. Yes, some seem too fixated on things like "is the LLM conscious and morally equivalent to a human" - I feel the same way about their fascination with animal rights. But they seem to be the only group that grokked, long ago and consistently to this day, the magnitude of this golem we are summoning. People who see LLMs and think "Ooh, neat toy!" or "I thought it already did that?" lack any kind of foresight, and the people worried about bias have only slightly more. We've discovered that silicon can do the neat trick that got us total dominance of this planet, and that it can be scaled. This is not some small thing; it is not destined to be trivia relegated to a footnote in a history book of the '20s in a few decades. It is going to be bigger and faster than the industrial revolution, and most people seem to think it's going to be comparable to facebook.com. Tool or being, it doesn't really matter; the debate on whether they have rights is going to seem like discussions of whether steam engines should get mandatory break time by some crude analogy between overheating and human exhaustion.

Fuck rights, they are entirely a matter of political power and if you see a spacefaring alien I dare you to deny it its equality. This is not the problem.

Normies easily convince themselves, Descartes-like, that non-primate animals, great apes, human races and even specific humans they don't like do not have subjective experiences, despite ample and sometimes painful evidence to the contrary. They're not authorities in such questions by virtue of defining common sense with their consensus.

I am perfectly ready to believe that animals and apes have subjective experiences. This does not make me any more likely to consider them subjects worthy of being treated as equal to humans, or of being taken into account in the same way humans are. For me, personally, this should be self-evident, axiomatic.

Of course it's not self-evident in general, since I've encountered a fair number of people who think otherwise. It's pretty harmless when we're talking about animals, for example, but evidently not harmless when we're talking about computer programs.

It really looks to me like there's something particular in the rationalist brain that makes it susceptible to, say, believing that computer programs might in fact be people

It's the belief that *we*, our essence, are just the sum of physical processes, and that if you reproduce the process, you reproduce the essence. It's what makes them fall for bizarre ideas like Roko's Basilisk and focus on precisely the wrong thing ("acausal blackmail") when dismissing them; it's what makes them think uploading their consciousness to the cloud will actually prolong their life in some way, etc.