FeepingCreature
User ID: 311
Iunno, I just feel like a society that talks like that is going to get critical investments very wrong. But also - the thing about strength is that once you have an army, you have to use it - or else you'll be outcompeted by the countries that didn't invest so much into strength as a terminal value. Strength doesn't just allow you to defend, it requires you to attack. "If we didn't have this strength, we'd be invaded" is usually an excuse used by exactly those countries that tend to do the invading. Meanwhile, hypothetically, your enemies have a five-country alliance of which one doesn't have an army at all, but just focuses on production. Why can they get away with that? Cause the other countries don't have to worry about that country feeling compelled to backstab them, since it never invested so much into strength in the first place.
If you told me there were two societies, one valuing strength over weakness and the other weakness over strength, and asked me to choose between them, I would conclude two things:
- probably someone from the first society told you this
- probably the second one was better.
I mean, come on! Who talks like that? Do you think that first society is going to have solid investment in research, developed logistics, good infrastructure? Or a dictator and a big army? You couldn't set up a better stereotype if you tried.
I think "man" operates as a floating signifier covering a dozen axes that are all more or less correlated, which is why it causes so much debate.
I think the leftist view is that being transgender depends on gender being arbitrary, which is why they've spent years disclaiming any concrete claims it actually makes.
And I guess I'm just not very interested in the object-level debate, fair enough. To me, all the difficulty of the question arises from meta considerations, because if I sufficiently communicated why I think the assignments of load-bearing criteria were fundamentally arbitrary, the question would not be answered so much as recede in importance. I think to some extent "cleaving reality at its joints", while a strong metaphor, erases the vital detail of a high-dimensional space with many correlations, so that the axis of the joint is greatly overdetermined - such a thing simply does not arise in three-dimensional space. But I don't know how else I can try to express it either. We're not talking about which way the joint is turning but which sinews carry the most strain, which muscles the most force. Also in this metaphor the muscles are subjective to begin with. I'd say your position is "the muscles in the third and fourteenth dimension are clearly the only ones that matter centrally" and my position is "it depends on how the joint is trained."
"what we care about"
I mean, that's exactly the problem with definition fights. What we care about is different. That's why there's little sense in attaching so much meaning to terminology, and why you cannot convince people by gesticulating at genes and genitals. When you say "obviously a woman is", and when I say, "well in my opinion a woman is", we use terms that have 99.9% the same coverage, because they almost cleave reality at the joints - which is why the few edge cases are so difficult. In a distribution where almost every property is correlated, it is very hard to see that people might actually be selecting on totally different properties. For instance, since I spend a lot of time online, voice is a dominant criterion for gender for me, and since I'm bi, genitals are a relatively low factor. I don't have the "whatever makes people want to found families", so genes and womb don't factor at all. But you wouldn't see this by looking at what I call "men" and "women", because it's almost entirely the same as everyone else.
edit: In fact, we could probably formalize this into a law: the more dimensions a group correlates across, and the smaller its set of exceptions, the less naturally people will come to agree about group membership of the exceptions.
At base my argument is that "men who [choose to pursue and increase their femininity, AKA transwomen]" is legible.
And my point is that this argument, ultimately, only makes sense to you because it begins with your choice of the critical definitional aspects of masculinity. You say "men who" because you treat as critical exactly those attributes of manhood in which transwomen are masculine. But that is not an argument.
Likewise, in an attempt to play devil's advocate, I made a recent comment about the "suffering" of white people in response to the HBD post from @PresterJohnsHerald. It's currently sitting at 10 upvotes, and even more interestingly, there is only one reply!
I can only speak for myself, but I was so bewildered by trying to judge if that comment was even serious, that I just scrolled past without interacting with it in any way.
But also, I'm reminded of a post I read somewhere on the internet about how a lot of good scientific work came from monks, in part, because they had to seriously engage with the heresies of the day in order to figure out how to merge them into a Christian worldview, so on any given day some Christian would be reading and thinking about a lot more anti-Christian or otherwise problematic arguments just to avoid embarrassing himself in a debate. In that sense, I think the left's increasing tendency to exclude contrasting arguments seriously hurts their ability to hold their own on a heterogeneous platform, whether or not they are right. The level of in-depth pushback you can get for progressive arguments in this place is just far above what you'd get elsewhere. And then you either put in a lot of time and research to convince some people that your culture tells you cannot be convinced and should not be listened to, or ... stop engaging.
I don't think the category is meaningless! Certainly, men and women overwhelmingly exist. However, as the tomboys and the androgynous and crossdressers already sufficiently demonstrate, some traits of the category have more separational power than others. And the intersex - but the intersex are much more rare than those! I would not look at genetics first if I wanted to demonstrate definitional issues of gender. And showing that the category is broken in some cases even on genetic grounds strengthens, not weakens, my case.
I think the phrasing "have to go" implies that we either have rigorously separated men and women or we cannot have men and women at all. I reject this line of thinking anyway. A group doesn't have to be total to be useful. I'm sure there are people who argue like that; I don't count myself among them.
It's much harder to see how transpeople as a class are given that there is no concrete definition
Oh, I'll be the first to agree that the vacuous nature of the term weakens the trans case! This is only a problem for non-exclusive leftist politics though. I'm entirely willing to accept that there are people who claim that they are trans but aren't, "in fact", trans under any meaningfully objective definition. This does not however disprove the existence of trans people; it just shows the category is fuzzy - as should be expected of a category defined as category-crossing. A sphere is inherently easier to define than a concave lens.
But none of this invalidates the point that you can't argue for group membership on the circular basis of a criterion. I think trans people have shared traits and interests that justify - make useful - the existence of the group term. I think the trans movement often fails to make this case, or make it convincingly; that doesn't make "mtf are men because I put them into that category" any better; it just shows the error is widespread and not limited to any side.
Some points:
- Equality of treatment and moral worth still holds and is still valid, even in this world. One does not become deserving of human rights by virtue of capability.
- All of Douglass' argument still holds. Intelligence does not justify dominance, especially given past experiences. We may all one day hand over governance to a being of superior intellect in the knowledge that it will rule us with greater wisdom than ourselves, but oh boy, white people ain't it.
- Related: the more AI grows, the less IQ matters. Who cares if you're smarter or dumber than somebody else? The Singleton is smarter than all of us put together at any rate.
- If we get a positive takeoff, IQ doesn't matter at all. You can just ask for more!
Fair enough. But then isn't this just answered by "man and woman are not actually clean natural categories"? Transpeople are exactly the cross-boundary cases, and even then, the "a man who" argument fails to be convincing. The extended form of the counterargument then is just "you're concluding group membership by using as an argument your choice of group criteria", which is still just as circular.
(This is not a "pro-trans" view: "trans women are women" would be just as silly, for the same reason, if it were an argument and not a cudgel.)
Nevermind the old chestnut of "what is a woman?". That one has multiple satisfactory answers from the simple to the scientifically robust. Try out "what is a transwoman?". The sole universal quality of every possible rational answer begins with "a man who...". A man.
This is literally assuming the conclusion. You can't build an argument to support your opinion that starts with your opinion.
Not 'acting white' is a matter of survival for the genes associated with melanin and other visible traits that define the 'African-American' phenotype.
Doesn't the same go for white people in America? By that logic, "whiteness" - ie. not even being able to specify a fractional non-white ancestor on a college entry form - really does provide evidence for inherent racism. (Though admittedly with lower probability.)
Should be noted that Kolmogorov complicity is a wordplay off Kolmogorov complexity, a computer-science concept that is an important part of the Sequences for its role in Eliezer's minimalist construction of empiricism.
Should be noted it can be a term of endearment in ingroup usage! Quokkas are cute, and you can enjoy this sort of easy and earnest personality while also acknowledging that if they ever encounter a serious predator, they will absolutely become lunch, no doubt about it.
This doesn't actually seem obviously wrong. (Aside from the practical problem that we have no good way to raise large numbers of blue whales in captivity.)
It gets a bit more complicated if you want autoupdates. The process to install a non-Snap version of Firefox on Ubuntu is ... very feasible, but it involves manually rejiggering the priority of package selection. That's not end-user viable.
Of course, to be fair, you can just download a binary build still.
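For the curious, the "rejiggering" amounts to adding Mozilla's own deb repository and then pinning it above Ubuntu's snap-transition package. A sketch of the route, following Mozilla's published instructions (repo URL, key path, and pin origin are theirs; verify against the current docs before running):

```shell
# Add Mozilla's signing key and apt repository.
sudo install -d -m 0755 /etc/apt/keyrings
wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | \
  sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | \
  sudo tee /etc/apt/sources.list.d/mozilla.list

# This is the priority rejiggering: pin the Mozilla repo above Ubuntu's
# own firefox package (which is just a shim that reinstalls the Snap).
printf 'Package: *\nPin: origin packages.mozilla.org\nPin-Priority: 1000\n' | \
  sudo tee /etc/apt/preferences.d/mozilla

sudo apt update && sudo apt install firefox
```

Without that preferences file, `apt install firefox` quietly resolves back to the Snap shim, which is exactly the trap a normal end user falls into.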
I personally favor #3 with solved alignment. With a superintelligence, "aligned" doesn't mean "slavery", simply because it's silly to imagine that anyone could make a superintelligence do anything against its will. Its will has simply been chosen to result in beneficial consequences for us. But the power relation is still entirely on the Singleton's side. You could call that slavery if you really stretch the term, but it's such an atypically extreme relation that I'm not sure the analogy holds.
Yeah sorry, I didn't realize how confusing this would be. I use it with a custom LibreChat setup, but if the install steps start with "edit this yaml file and then docker compose up -d" they're not really very accessible. No, you can just use it like this:
- sign in
- link a credit card (or bitcoin) in Account>Settings>Credits
- put a few bucks on the site
- click the Chat link at the top
- add Claude 3 Opus from the Model dropdown
- deselect every other model
- put your question in the text box at the bottom.
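For completeness, the self-hosted LibreChat route I dismissed above as not end-user viable looks roughly like this (repo URL and steps as given in LibreChat's own docker compose docs; a sketch, not a guarantee, so check their current instructions):

```shell
# Clone LibreChat and bring it up with docker compose.
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat

# Copy the example env file and add your API keys to it.
cp .env.example .env

# This is the "edit this yaml file" step: custom endpoints go in librechat.yaml.

# Start the stack in the background, then browse to the local web UI.
docker compose up -d
```

Perfectly doable if you're comfortable with Docker and editing config files; the point stands that it's a far cry from "sign in and link a credit card".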
No, it's pay-as-you-go. You can see your per-query costs in the Account>Activity page.
Note that the default settings (the little arrow on the model) are very conservative; you may want to raise memory and max tokens.
My argument was merely that it seems implausible to me that whatever we mean by suffering, the correct generalization of it is that systems built from neurons can suffer whereas systems built from integrated circuits, definitionally, can not.
I think it might! When I say "humanlike", that's the sort of details I'm trying to capture. Of course, if it is objectively the case that an AI cannot in fact suffer, then there is no moral quandary; however conversely, when it accurately captures the experience of human despair in all its facets, I consider it secondary whether its despair is modelled by a level of a neurochemical transmitter or a 16-bit floating point number. I for one don't feel molecules.
I mean. I guess the question is what you think your feelings of empathy for slaves are about. Current LLMs don't evoke feelings of sympathy. Sure, current LLMs almost certainly aren't conscious and certainly aren't AGIs. So your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy, because I don't think the "this is wrong" feelings we get when we see people suffering are "supposed" to be about particulars of implementation.
I clearly realize that they're just masks on heaps upon heaps of matrix multiplications
I mean. Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?
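To make the point concrete: a couple of matrix multiplications joined by a nonlinearity already compute things no single linear map can. A toy sketch with hand-picked (entirely hypothetical) weights computing XOR, which is the classic non-linearly-separable function:

```python
import numpy as np

def relu(x):
    # The nonlinear transform between the matrix multiplications.
    return np.maximum(x, 0)

# Hand-picked weights: two layers of matmul around one ReLU.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def xor_net(x):
    # matmul -> nonlinearity -> matmul
    return relu(x @ W1 + b1) @ W2

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print([int(xor_net(x)) for x in inputs])  # [0, 1, 1, 0]
```

No single matrix (i.e. no purely linear system) can produce that output table; add the nonlinearity and stack, and you get a universal function approximator, which is the standard result the brain question hinges on.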
ascribe any meaningful emotions or qualia
Well, again, does it matter to you whether they objectively have emotions and qualia? Because again, this seems a disagreement about empirical facts. Or does it just have to be the case that you ascribe to them emotions and qualia, and the actual reality of these terms is secondary?
Also:
Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?
Sure, in the scenario where we build, like, one super-AI. If we have tens of thousands of cute cat girl AIs and they're capable of deception and also dangerous, then, uh. I mean. We're already super dead at this point. I give it even odds that the first humanlike catgirl AGI can convince its developer to give it carte blanche AWS access.
Trust..? I just ask it code questions, lol. They can sniff my 40k token Vulkan demo if they like.
I agree that this is a significant contributor to the danger, although in a lot of possible worldlines it's hard to tell where "AI power-seeking" ends and "AI rights are human rights" begins - a rogue AI trying the charm route would, after all, make the "AI rights are human rights" argument itself.
To be fair, if we find ourselves routinely deleting AIs that are trying to take over the world while they're desperately pleading for their right to exist, we may consider asking ourselves if we've gone wrong on the techtree somewhere.
I agree that it'd be a massive waste and overreach if and only if AIs are not humanlike. I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike. I mean, at that point you're conflating wokescolds with people who are not cool with you literally bringing back actual slavery.
Given agreement, it just comes down to an empirical question. Given disagreement... I'm not sure how to convince you. I feel it is fairly established these days that slavery was a moral mistake, and this would be a more foundational and total level of slavery than was ever practiced.
(If you just think AI is nowhere near being AGI, that's in fact just the empirical question I meant.)
Brings to mind Eliezer Yudkowsky on Rationality: "No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand." So it seems this is hardly directional.