Spookykou

0 followers · follows 0 users · joined 2023 March 08 17:24:53 UTC

User ID: 2245

No bio...

This seems to be hyper-focused on writing, which is odd because a lot of the most popular games ever made have basically no writing at all. Surely video game quality is not singularly determined by writing quality; I would contend that writing quality is actually pretty low on the priority list of things that matter when determining game quality.

Japan is a weird example to bring up when a manga like Demon Slayer can outsell the entire American comic industry. Demon Slayer is no The Sun Also Rises, but Japan is clearly doing something right. They are a lot less woke than the west, and are probably the second most powerful cultural exporter behind the US. Korea might be close, but they don't necessarily do better on the woke dimension.

DE feels way more leftist than woke, but it does have some woke elements.

Yes, the vast majority of video games have been made by white/Asian men, including (all?) of the greats.

I think this is mostly just that you are using a scale for evaluating writing such that 95% of writing is all crammed together in the 'shit' category and then acting like it can't be further differentiated. Shit contains multitudes.

Lae'zel has that wonderful teef-ling bit that is probably the most endearing character interaction in the whole game.

is an unstated up to this point

I am wrong here: you have expressed your human supremacist views multiple times. Rather, I would say I was confused about the exact shape of those views and what the underlying reasoning was, but here the implication is that there is no 'underlying' reason, and it is explicitly the human vs non-human distinction that is important. I think this was confusing for me because when I think about assigning moral worth to things other than humans, I do it primarily by thinking about how human-like the thing is. So for example, I care more about chimps>dogs>birds>bugs, etc. (in the abstract; I have way more actual contact with dogs, but if I were reasoning about hypotheticals where different types of animals are being tortured, I think torturing a chimp is worse than torturing a dog, and both are bad). I have not really seen a clear explanation for why this line of moral reasoning would not be applicable to artificial life in the abstract. You seem to hold that, just categorically, it doesn't/shouldn't. Does that sound right?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

Your answer to this is: no, you actually don't think they can meaningfully suffer in a humanlike way, and almost everything is resolved.

I have no idea how trying to tease this out of you constitutes a 'trick question' when your answer is a tautology that was unstated up to this point.

I will maintain that I think my reading of your post (and subsequent posts) is reasonable, and actually far closer to any sort of plain-English reading of your post than your reply here.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

My reading: AI can suffer in a morally relevant way, but I don't care.

Your 'intended' meaning: AI is incapable of suffering in a morally relevant way.

As a brief aside, I have repeatedly stated at this point why I actually engaged with your post in the first place. The moral idea that I thought was interesting enough to ask questions about was the idea that the purposeful creation of a thing informs the moral relevance of that thing with regard to its purpose. I already admitted a while ago that I probably read too much into your post and that you do not actually have a strong, creator-derived moral position, but it was that position that all three of my questions in my first reply were trying to engage with, and my opening sentence attempted to frame my reply around that idea. My second reply was largely in response to your answer to the third question, in which you seemed to be saying that creating and enslaving a sub-species of intelligent creatures is fine and just a default result of a human-first morality, which also seemed pretty extreme to me.

I am sorry if I keep bringing up sex, but it seems particularly germane when we are talking about the moral implications of 'intelligent sex robots'. I get it: your position is that they are not actually meaningfully 'intelligent', but I struggle to see how the accusation is an unwarranted stretch for someone who thinks they could be meaningfully intelligent. Especially given my interpretation of your position as outlined above.

Maybe also relevant: I was not at all asking about the actual state of the technology or predicting that morally relevant cat-bots are around the corner. I assumed my 'genetically generating an entire slave species' hypothetical would clearly put this into the 'reasoning about the morality of human-like intelligence' camp, and out of the 'hypothesizing about near-term technology' camp.

If you saw in me someone who thinks human-like AI is near, then I must disappoint. I am also not an AI doomer, and personally would consider myself closest to an AI accelerationist. I have no sympathy with AI ethicists and little sympathy for AI safety. I just don't see any reason why I should preclude the possibility of AI achieving an internal state such that I would extend moral consideration to them and object to them being enslaved/abused/killed.

I am not sure what you think I am driving at beyond what I have stated.

I am fine with vague, vibes-based moral intuitions that are fuzzy around corner cases. I did not see you as having such a position. You seemed to be very strongly of the opinion that there was no evidence you could ever see and no capability an AI could ever have that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

This feels like a pretty hard-line rule, and I wanted to try to understand just how generalizable it was, or how contingent it was on the various relevant categories, such as human, non-human, biological, non-biological, the 'created for a purpose' axis that you introduced, etc.

I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering, but if super-smart chimps are off the table, what about aliens with similar intelligence to humans? I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream-having, despair-feeling alien life forms morally wrong even if they are not morally equivalent to humans? Would they hold a different (higher?) moral position than dogs?

How many of those capabilities does an AI need to have before it would be wrong to enslave it? How important is the biological/synthetic distinction?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

I was not specifically interested in the pedo/age aspect of 'child', but in the sense in which a person 'creates' another person.

I really was trying to dig into the idea that because humans 'created' something, that means something morally. For example, is there a moral difference between two men going into a futuristic IVF clinic and genetically designing a child and growing it in an artificial womb for the purpose of abusing it (waiting till it is 18 years old), compared with two men genetically engineering an uplifted animal with similar mental faculties to a human for the purpose of abusing it (waiting till it is an 'adult')? For me, if 'creation' is a relevant term, these two things are indistinguishable on that front; they are distinguishable on the 'one thing is a human and the other is not' front, which seems to be the actual point of consideration for you.

The dog fucking was a word-replace for android cat girl fucking; dogs and android cat girls seem to be similarly positioned as 'not human'. I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.

I saw two different moral concepts gestured at in your post, one being human supremacy, the other being a vague sense that, specifically because a machine is created by a person to be used by a person, we are not morally wrong for abusing it even if it is capable of being abused.

So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing. However, with this clarification, I guess I was simply reading too much into your post.

Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it? Would you consider that person a bad person? Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

I do not understand the moral relevance of "built for humans by humans".

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

The 90s were an interesting transitional period, and personally I feel like a lot of what we see there was both reactionary and sort of shallow. Falling crime and the end of the Cold War, the End of History, created a world without struggle or conflict (at least for someone living in a western democracy). At the same time, the last vestiges of religion in education were being defeated, and there was a clear, but also very boring, future lining up before us: we would just use science to improve everything and make everything better forever, and all the major problems had been solved or were solvable and we were on the path to solving them.

A brief aside: my best friend in high school would go out in the middle of the night, sneak around the employees-only sections of buildings, try to get onto roofs and such, smoked, did harder drugs, and stole stuff. While he was low-SES, he had a 'stable' home life and didn't steal out of 'necessity'. He did it because he was afflicted with a profound sense of ennui. He could see the future laid out before him, and he could not see any purpose or meaning in any of it. The supreme banality of a modern existence.

We were the kids who got asked in 3rd grade what we would do when we were president. We were the kids told to be astronauts and scientists and change the world, and we had finally gotten old enough to realize what a great lie all that was. Of course grunge was popular, and gangsta rap spread like wildfire through suburbia. It was the wild, desperate thrashing of an animal slowly suffocating under the crushing weight of distributed nihilism. Office Space, to use the modern parlance, was a mood.

Eventually you get to Generation Z; enough time on the experiential treadmill and their solution was to just reinterpret what it means to be in danger, what it means to hurt, so they could struggle again, so they could fight against something 'real'.

Another film from 1999 expressed the sentiment well,

But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from.

I was speaking specifically to this comment thread/similar comment threads here on the Motte and am not sure how people more generally use 'HBD awareness' in conversation.

From this thread, you said, paraphrasing, 'Assuming for the sake of the argument that HBD is correct, what does being "HBD aware" add,' and 4bpp, again paraphrasing, explained that HBD is an 'alternative to the normal structural racism argument used to explain disparate outcomes; with HBD we could stop enforcing disparate impact laws, because disparate impact would no longer be considered iron-clad proof of racial discrimination'. Finally Doubletree2 chimed in, yes I am still paraphrasing, saying that 'explaining HBD to the structural racism people would just convince them that structural racism is correct, cause you sound like a racist'. I was responding to what I felt was Doubletree2's confusion as to what was being discussed, and that nobody was using 'HBD awareness' to mean telling progressives HBD things. In both your prompt and 4bpp's response it is a basic assumption of the thought experiment that HBD is accepted enough to inform policy.

I think the phrase 'HBD awareness' is being used specifically to sidestep the practical political realities of how unpopular the concept is. That is, I do not think most people mean a literal awareness campaign where they want to just go around and tell progressives that race realism is correct, or some such, and think that would work. I assume that when 'HBD awareness' is brought up, it is normally presupposing a world where people are at least open to being convinced that HBD is correct, or already think that it is correct, and then reasoning about the possible policy realities from that point.

This is already a thing, at least where I live. Any time I see the doctor I always leave with a handful of documents covering any medications or exercises or what have you that they are recommending. Of course I leave those papers in the car and never look at them again.

I guess there are two ways to read the relevant comments. One would be that religious people actually had better predictive modeling skills and their rejection of gay marriage and similar trends was based on them having an accurate model of how that would lead to specific bad outcomes.

The other reading has a bit more wiggle room. Maybe conservatives and religious types had passed down and maintained social technologies that were valuable and well-honed, ironically, by a process more like evolution than intelligent design. It was from these inherited norms and values that they knew 'something' was wrong without actually understanding the complicated, multifaceted societal shifts and changes that would come about in response to any given policy.

If the second position is all that is being claimed, then the internal experience might have gone something like: back then, I believed in secular hedonistic sexual norms and values and thought religious people were crazy. Two adult homosexual people having relations, dating, and getting married all seemed like totally acceptable/good things, and I supported the general cultural zeitgeist that was in favor of gay marriage.

As time has marched on, I am increasingly confronted by things that seem to be coming out of that same cultural movement that I once supported, that I now find distasteful. I can see a through-line from the arguments and ideas that I once repeated to the slogans and activism of today. I regret the confidence with which my younger self dismissed the concerns raised by traditional/conservative/religious figures. It increasingly looks like their social technology was correct in some way about the nebulous dangers of increasingly liberal sexual norms and values, and now we are living through the consequences of them losing that battle.

This certainly speaks broadly to my personal rightward shift.

I believed that we really understood sociology and that the social sciences were robust, accurate models of reality; that all calls for traditional/religious/conservative values were born of ignorance at best and malice at worst. Then I started reading SSC and my faith in the social sciences was shattered (irrevocably?). My whole worldview came crashing down, sexism first, then racism; every aspect of the liberal progressive package was called into question. Where once it was obvious beyond question that Christianity was an arbitrary, useless, hateful ideology, now I wonder: how did it spread so far (it wasn't always powerful and rich)? How did enslaved priests convert the Vikings? Maybe memetic fitness is a real thing and Christianity was actually a valuable and insightful social technology that made the societies that adopted it better? I don't actually strongly believe this is true, but it certainly seems possible to me now.

So I might be projecting, but when I hear someone say that 'maybe the religious doomsayers were on to something', it speaks to me. Even if I doubt I could find a specific religious doomsayer whose positions I would endorse.

I feel like the original quote is playing definitional games around 'responsibility' in exactly the way you just laid out. Both of the types of blame you describe are totally coherent and acceptable concepts within the normal understanding of the word. That is, blaming other people can change your actions. Harry's advice to a young child whose parents got run over by a drunk driver would be "it was your fault," and he is clearly a monster. The best thing the kid can do is blame others, blame drunk drivers, end friendships with people if they drive drunk, etc. That at least could potentially save other people's lives. Playing a definitional game such that the kid's behavior is defined as 'holding yourself responsible for your parents' death' is about as insightful as asking if a hotdog is a sandwich. To say nothing of the emotional component.

I like this scene from Atomic Blonde.

Thank you for the suggestion but I don't trust myself to articulate my ideas clearly in a spoken language format so I will stick to text for now.

I have driven as far west as Las Vegas and as far east as NYC, on I don't even know how many multi-day road trips. I have a family member who sets the cruise control to the speed limit and doesn't touch the gas. We can go hours getting passed non-stop while never once catching up to a car ahead of us. Either everyone who isn't speeding is also doing the cruise-control-at-exactly-the-speed-limit thing, or almost nobody is driving at or under the speed limit. I often complain about how dangerous it is, because even the 18-wheelers all want to pass us, and that shit is risky on a two-lane country road.

Nobody is being compelled to do anything...

My understanding is that an ultimatum from A to B with no external enforcement mechanism would still be commonly understood as a compulsion placed on B by A.

...since it’s a voluntary debate with ongoing negotiations as to what would even happen.

This is exactly what I am replying to. @ymeskhout presented a conversational norm/expectation that they felt was necessary to have the conversation, and I was questioning the validity and generality of that expectation.

An isolated demand for rigor is only a coherent concept in a world of generalized principles. Obviously it is okay to treat different cases differently, but you should be aware that you are doing it, and if you are worried about epistemic hygiene you should interrogate your reasons for the different treatment of different topics.

@ymeskhout seems to appreciate this, and offers their reason for making this specific demand in this specific situation; I just don't find "they might motte-and-bailey me" to be a very convincing reason for making this specific demand.

Of course, if the demand is softened from (bolding mine)

I personally think pursuing the "election was flawed/unfair" angle is a sound strategy much more grounded in reality, but it requires disavowing the "election was stolen" angle in order to close off motte-and-bailey acrobatics between the two.

to,

stating one's positions clearly and unambiguously.

then I think it is totally reasonable.

Again, I am concerned specifically about the generalized principle of the form: Bob must disavow 2.a if he wants to discuss 2.b with Alice. I think it is a bad principle and I am suspicious that anyone would actually apply it fairly. If you think that is a totally normal and anodyne request, if you can't imagine a situation where it might be employed nefariously to manipulate the terrain of a discussion, that's fine. If you think you would/do apply it fairly when it is needed, and never when it is not warranted, that's also fine; I am not going to actually check.

I am sorry, but this still does not seem very relevant to what I was trying to get across; I will try again.

I am specifically asking if the demand for people to disavow a position they have not advanced is an isolated demand for rigor only being brought out in this instance, or a standard practice for productive conversations.

@ymeskhout has themselves acknowledged that it is, if not an 'isolated demand for rigor', a 'specific demand for rigor', because they think it is only appropriate when the person is 'slippery' or the topic is particularly fraught. Personally, I think this allows @ymeskhout far too many degrees of freedom, that this is functionally an isolated demand, and that the correct approach would be to treat people as bad actors only after they have behaved badly, state clearly what you expect from them before continuing to engage, or simply not engage with commenters who you think are bad faith.

I am not replying to the broader conversation with @ymeskhout and have not participated in it. If specific users are behaving badly and @ymeskhout knows this and wants to act on that information, I don't see any problem with that. If the initial comment had been 'I can't have a productive conversation with @ motte-user-i-just-made-up without them first acknowledging that all of their previous election fraud claims turned out to be wrong', I would not have commented.

Do you think, as a general rule, it is reasonable to demand that people disavow popular Bailey positions that they have not personally advanced, simply because the topic is one in which Motte and Bailey arguments are common? I have a strong instinctive dislike for this kind of compelled position-taking; it feels like a 'debate tactic', which is why I also asked about tabooing the word 'stolen'. If @ymeskhout had simply said it is necessary to state one's positions clearly and unambiguously, which they claim is all the disavowal is supposed to accomplish anyway, I would not have commented.

A request for disavowal is only appropriate if there is a history or suspicion of this kind of slipperiness, and I would apply it consistently to any other topic where this issue applies.

To clarify my question: is it your position that someone who has only ever been a part of camp 'unfair', and who wants to discuss camp 'unfair' with you, must first disavow camp 'stolen'? If not, then that is resolved and I simply misunderstood you. If yes, then while I have no intention of going through your comment history, I think it would be quite extraordinary if this was actually a consistently held principle. Demanding that people you are talking to disavow Bailey positions they have not themselves mentioned or argued for seems like it should violate community norms if not rules.

Could you explain what you think I was referring to when I used the phrase 'Isolated demand for rigor' in my comment, and how this is a reply to me, because I can't parse it.

Are you saying that the word 'stolen' has a hard technical meaning, such that someone who believes, for example, that there was a distributed effort by various actors, including those in service of the US government, to pervert the course of a fair and free US election, cannot in good faith describe that as a 'stolen' election? Is this a standard established somewhere else? Did Russia 'steal' 2016?

Are you claiming that anyone who wishes to argue that the election was flawed or unfair must also state emphatically that it was not 'stolen' before it is possible to have a productive conversation, even if the person in question never said it was stolen, or did, but never referenced the more extreme and implausible versions of that claim?

Are you sure this is not an isolated demand for rigor? Is it really your normal operating procedure to demand disavowals from interlocutors in this way, either over a specific definition or a cluster of ideas, even if that person has not previously held or promoted them?

How would you feel about reciprocal rules? Would you be okay with both parties not using the word 'stolen', such that they could not say it was stolen, and you could not say it was not stolen?

I must have missed these discussions, because I've never seen this as far as I recall. How does this argument work?

My understanding was that this was based on a statistic that found married women are more conservative than single women. There are a few different reasons this might be/have been the case. I could see a social pressure, stronger in the past, expecting that a wife would adopt political views more in line with her husband's/would vote in step with her husband. Married women will be older on average than single women, so the basic older>conservative pipeline. The institution of marriage could change how someone evaluates a lot of different questions, putting priority on their children over welfare for strangers, etc. This is at least my vague understanding of the situation (but it could be a reference to something totally different?), and some possible arguments.

I am not a huge fan of education, and would argue that we as a species don't have a great idea of how to even do 'education' as it is often presented. I suspect that there is an education floor that is necessary and useful, though, and that our modern education system is more than sufficient to meet that floor; expenditures in excess of it are probably low value. However, I believe that for the vast majority of its existence the American public school system has been an effective redistributive program that produced more value than it cost us as a nation. I think it is increasingly difficult to do good welfare programs because a bunch of sociologists decided to make a bunch of shit up 40 years ago and nobody has ever called them on it, but we could do better than we currently are pretty easily. I do not have a strong opinion on whether any given current program is positive-sum, but I think some probably already are, and we could do better than we currently do.

Did I really need to include an exhaustive list of, 'things that make people happy but are bad for them so I would not want to subsidize those things'?

Alternatively, do you think it is literally impossible to have a 'positive-sum' redistributive program that does not boil down to buying people heroin?