Spookykou
joined 2023 March 08 17:24:53 UTC · User ID: 2245

A brief defense of Mass Effect, and why I wish more games like Mass Effect would get made.

I grew up a nerd (reading Piers Anthony, playing Samurai Swords, D&D, MTG, etc.) who was socially adept enough to pass as a non-nerd. I dressed well, hung out with the cool kids, went to parties, did drugs, and had sex. It was all good fun. Sometimes I would also hang out with my nerd friends and go do nerd things. I remember one time going to a con, dressed well, hair on point, and seeing people walking around in dragon T-shirts and cargo shorts, poorly made cosplay, and the occasional Naruto headband. As I watched the pockmarked, sweaty nerds, a deep pit opened up inside me. I was jealous. My fashionable sneakers and my tight-fitting jeans were all lies, DAMNED LIES. I wanted to be like them, and I was just too scared to admit it, too scared to wear a dragon T-shirt. Well, not anymore.

I enjoy power fantasy. Yes, it is kind of cringy and lame and low-brow, but ima live my truth.

I want to be a kick ass hero who saves the galaxy and fucks hot alien chicks.

I feel like there are a few core concepts of liberalism that are very old and very consistent. The disconnect here is that most modern progressives don't realize that they have almost totally abandoned the ideological framework they were raised in, so they still hold onto the word 'liberal' despite abandoning the ideology.

It seems sort of amusingly illiberal to rewrite history so that 'liberal' is just the word the left uses to describe itself, and so liberals who are no longer in line with the modern left, despite being totally in line with liberalism, must be conservatives.

The reality is that the modern left is not liberal under any coherent understanding of the term; this is not even Ship of Theseus territory, it is an almost total abandonment of liberalism as an ideology. The principled liberals who used to be on the left were all collectively shocked (or shocked later, when they finally noticed) as the rug got pulled out from under them and their massive, widespread cultural support vanished overnight in the face of woke. As I vaguely gestured to above, I think this is mostly a politics-as-fashion thing, and all the people who would have smashed the like and retweet buttons on "I may not agree with what you say but I will defend to the death your right to say it" on a hypothetical 1995 Twitter ended up smashing the like and retweet buttons on "Freezepeach" on the real 2015 Twitter.

A great place to watch this in the wild, if you have the temperament for it, is any Destiny content. Destiny is basically a liberal, and when he talks to progressives he will make liberal arguments, and you can see the sort of confusion and cognitive dissonance as they try to square a vague background respect for an under-specified liberalism with their totally illiberal current positions and thinking.

Possible addition, On the gripping hand.

I am not sure I buy it.

It seems to me that almost every government that was able to pass pro-abortion laws did so directly in the face of this accusation, and under the exact same framing you outlined above. That is, they thought abortion was 'wrong' in some sense but the lesser of two evils, and advocated for it specifically by presenting it as a rational trade-off against other interests.

The recent spread of euthanasia laws seems to have come about under similar circumstances.

I think the 'abortion is a necessary evil' framing was pretty much universal until relatively recently, when the ever-ratcheting-up, US-centric culture war got to the point that pro-abortion advocacy was specifically calling for no-questions-asked, no shame or stigma attached, infinite access to abortion, in response to conservative states trying to limit access. If by 'real world' you mean the current moment, then I agree in the abstract that it would be hard to pass national abortion laws as restrictive as the median EU member state's (and I said as much), but I suspect this has almost nothing to do with the rhetorical tactic of accusing people of supporting murder.

I guess a lot of this hinges on what you mean by 'calling it murder', but the impression I get is that people are very good at, and comfortable with, using euphemisms for murder.

While I am generally in favor of consequentialist reasoning and am a fan of utilitarianism as a way to think about morality, I am pretty far from having rigorously mathed out my various moral/ethical beliefs.

Something like the formula you outline seems at least directionally similar, but insufficient. I tend to value women over men, children over adults (for reasons not fully captured by age), good people over bad people, etc. While I endeavor to formulate principles and consistency in my thinking around issues of morality, I often feel like the complexities of reality are such that I do not trust my ability to construct a formula that would properly capture the shape of my preferences.

One thing that bothers me in the abortion debate is that I personally see a lot of granularity within the worth of a human life. If I imagine a hypothetical where I have to pick between saving two eighty-year-old men or one eight-year-old boy, I will save the boy every time. Moreover, I would honestly think less of the two men if they advocated for their own lives while understanding the full situation. I do not see any incongruity between my moral intuitions as outlined above and the moral intuition that it would be wrong to kill one of those eighty-year-old men. Similarly, I think a fertilized egg is a human life in a very straightforward and technical sense, such that I think it is wrong to kill it, but I would not pick to save a fertilized egg over saving the eight-year-old boy either (I also wouldn't pick it over the eighty-year-old man). As such, I generally find most of the extreme claims about the implications of treating a fertilized egg as a life overblown. I am fine with having a category of thing where I think it is wrong to kill it but where I do not think our entire society must upend itself in an effort to protect it. Especially not when that protection would be against what would commonly be understood as the 'natural order' of things.

We currently think of full humans as being ... full humans, and yet 100% of them die. How much of our humanitarian efforts are dedicated to immortality research? I think your hypothetical reflects more than anything a poor understanding of how humans actually behave and the kinds of moral intuitions people are mostly running on. I would propose that a huge number of people would see nothing incongruous in holding a funeral for a miscarriage while simultaneously not donating 50% of their income to R&D on how to reduce the number of fertilized eggs that fail to attach.

Ultimately I find health-of-the-mother concerns to be valid, but I can understand why some would worry about the category being stretched too far. Beyond that, I think abortion is very popular, and the best-case real-world policy I could hope for would be something like 'safe, legal, and rare'.

And of course, I am a hypocrite who purchased a morning after pill for my girlfriend one time after a broken condom, such is life.

Oh, I didn't really get that it was supposed to be writing consultancy specifically. I feel like the two main complaints about woke in video games that I read here on TheMotte are ugly female character models, and then random woke signaling (trans characters, pride flags), specifically that it can be hard to have anti-woke mods that remove such things because the mod-hosting sites are all ideologically captured.

Still I think that woke ideas can and do make the 'writing' in video games worse in a number of ways.

One example might be illustrated by comparing Mass Effect and BG3, both games that do not have 'great' writing in the general sense, but I think woke impulses make BG3 a worse story in specific ways. Mshep is far and away the most common playthrough, and Garrus (who can't be romanced by Mshep) might be the most popular video game companion of all time. Meanwhile, people had to make mods for BG3 to turn off the entire approval-gain function, because it is literally impossible to be just friends with any of your companions; they are all romantic interests who tend to get very sexual, and often physical, with you from the very first approval cutscene. This does not mean that any given scene has worse writing, or that the overall plot is worse, and yet I think the story as a whole is weaker because of your character's inability to have deeper friendships.

Then there are the generic ways that woke writing is bad, as it often does things that are just broadly considered bad writing: being preachy, making the subtext text, and breaking suspension of disbelief by importing modern (American) issues into settings and situations where they do not organically fit the story.

There is a sense in which all video game writing is bad, so woke isn't the thing stopping video games from being literary masterpieces, but I am not sure how relevant that is compared with the general complaint that woke makes things worse.

This seems to be hyper-focused on writing, which is odd because a lot of the most popular games ever made have basically no writing at all. Surely video game quality is not singularly determined by writing quality; I would contend that writing quality is actually pretty low on the priority list of things that matter when determining game quality.

Japan is a weird example to bring up when a manga like Demon Slayer can outsell the entire American comic industry. Demon Slayer is no The Sun Also Rises, but Japan is clearly doing something right. They are a lot less woke than the West, and are probably the second most powerful cultural exporter behind the US. Korea might be close, but Korea doesn't necessarily do better on the woke dimension.

DE feels way more leftist than woke, but it does have some woke elements.

Yes, the vast majority of video games have been made by white/Asian men, including (all?) of the greats.

I think this is mostly just that you are using a scale for evaluating writing such that 95% of writing is crammed together in the 'shit' category, and then acting like it can't be further differentiated. Shit contains multitudes.

Lae'zel has that wonderful teef-ling bit that is probably the most endearing character interaction in the whole game.

is an unstated up to this point

I was wrong here; you have expressed your human-supremacist views multiple times. Rather, I would say I was confused about the exact shape of those views and what the underlying reasoning was, but here the implication is that there is no 'underlying' reason, and it is explicitly the human vs. non-human distinction that is important. I think this was confusing for me because when I think about assigning moral worth to things other than humans, I do it primarily by thinking about how human-like the thing is. So, for example, I care more about chimps > dogs > birds > bugs, etc. (in the abstract; I have way more actual contact with dogs, but if I were reasoning about hypotheticals where different types of animals are being tortured, I think torturing a chimp is worse than torturing a dog, and both are bad). I have not really seen a clear explanation for why this line of moral reasoning would not be applicable to artificial life in the abstract. You seem to hold that, just categorically, it doesn't/shouldn't. Does that sound right?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

Your answer to this is that no, you actually don't think they can meaningfully suffer in a humanlike way, and almost everything is resolved.

I have no idea how trying to tease this out of you constitutes a 'trick question' when your answer is an unstated-up-to-this-point tautology.

I will maintain that I think my reading of your post (and subsequent posts) is reasonable, and actually far closer to a plain-English reading of your post than your reply here.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

My reading: "AI can suffer in a morally relevant way, but I don't care."

Your 'intended' meaning: "AI are incapable of suffering in a morally relevant way."

As a brief aside, I have repeatedly at this point stated why I actually engaged with your post in the first place. The moral idea that I thought was interesting enough to ask questions about was the idea that the purposeful creation of a thing informs the moral relevance of that thing with regard to its purpose. I already admitted a while ago that I probably read too much into your post and that you do not actually have a strong, creator-derived moral position, but it was that position that all three of the questions in my first reply were trying to engage with, and my opening sentence attempted to frame my reply around that idea. My second reply was largely in response to your answer to the third question, in which you seemed to be saying that creating and enslaving a sub-species of intelligent creatures is fine and just a default result of a human-first morality, which also seemed pretty extreme to me.

I am sorry if I keep bringing up sex, but it seems particularly germane when we are talking about the moral implications of 'intelligent sex robots'. I get it: your position is that they are not actually meaningfully 'intelligent'. But I struggle to see how the accusation is an unwarranted stretch for someone who thinks they could be meaningfully intelligent, especially given my interpretation of your position as outlined above.

Maybe also relevant: I was not at all asking about the actual state of the technology, or predicting that morally relevant cat-bots are around the corner. I assumed my 'genetically generating an entire slave species' hypothetical would clearly put this into the 'reasoning about the morality of human-like intelligence' camp, and out of the 'hypothesizing about near-term technology' camp.

If you saw in me someone who thinks human-like AI is near, then I must disappoint. I am also not an AI doomer, and personally would consider myself closest to an AI accelerationist. I have no sympathy with AI ethicists and little sympathy for AI safety. I just don't see any reason why I should preclude the possibility of AI achieving an internal state such that I would extend to them moral considerations, such that I would object to them being enslaved/abused/killed.

I am not sure what you think I am driving at beyond what I have stated.

I am fine with vague, vibes-based moral intuitions that are fuzzy around corner cases. I did not see you as having such a position. You seemed to be very strongly of the opinion that there was no evidence you could ever see, and no capability an AI could ever have, that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.

If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

This feels like a pretty hard-line rule, and I wanted to try to understand just how generalizable it was, or how contingent it was on the various relevant categories: human, non-human, biological, non-biological, the 'created for a purpose' axis that you introduced, etc.

I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering, but if super-smart chimps are off the table, what about aliens with similar intelligence to humans? I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream-having, despair-feeling alien life forms morally wrong even if they are not morally equivalent to humans? Would they hold a different (higher?) moral position than dogs?

How many of those capabilities does an AI need to have before it would be wrong to enslave it? How important is the biological/synthetic distinction?

Again, for the argument here we are assuming that you are convinced the AI is really, meaningfully capable of feeling and experiencing suffering, not just that some AI ethics person is convinced that it is.

Or at least, it was that position, which I think is a fair reading of the quote/your post, that I was trying to engage with.

I was not specifically interested in the pedo/age aspect of 'child', but in the sense in which a person 'creates' another person.

I really was trying to dig into the idea that humans having 'created' something means something morally. For example, is there a moral difference between two men going into a futuristic IVF clinic, genetically designing a child, and growing it in an artificial womb for the purpose of abusing it (waiting till it is 18 years old), and two men genetically engineering an uplifted animal with similar mental faculties to a human for the purpose of abusing it (waiting till it is an 'adult')? For me, if 'creation' is the relevant term, these two things are indistinguishable on that front. They are distinguishable in that one thing is a human and the other is not, which seems to be the actual point of consideration for you.

The dog fucking was a word-replace for android cat girl fucking; dogs and android cat girls seem to be similarly positioned as 'not human'. I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.

I saw two different moral concepts gestured at in your post: one being human supremacy, the other a vague sense that, specifically because a machine is created by a person to be used by a person, even if it is capable of being abused we are not morally wrong for abusing it.

So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing. However with this clarification, I guess I was simply reading too much into your post.

Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it? Would you consider that person a bad person? Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

I do not understand the moral relevance of "built for humans by humans".

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

The 90s were an interesting transitional period, and personally I feel like a lot of what we see there was both reactionary and sort of shallow. Falling crime and the end of the Cold War, the End of History, created a world without struggle or conflict (at least for someone living in a Western democracy). At the same time, the last vestiges of religion in education were being defeated, and there was a clear, but also very boring, future lining up before us: we just use science to improve everything and make everything better forever, and all the major problems have been solved, or are solvable and we are on the path to solving them.

A brief aside: my best friend in high school would go out in the middle of the night, sneak around in the employees-only sections of buildings, try to get onto roofs and such, smoked, did harder drugs, and stole stuff. While he was lower SES, he had a 'stable' home life and didn't steal out of 'necessity'. He did it because he was afflicted with a profound sense of ennui. He could see the future laid out before him, and he could not see any purpose or meaning in any of it. The supreme banality of a modern existence.

We were the kids who got asked in 3rd grade what we would do when we were president. We were the kids told to be astronauts and scientists and change the world, and we had finally gotten old enough to realize what a great lie all that was. Of course grunge was popular, and gangsta rap spread like wildfire through suburbia. It was the wild, desperate thrashing of an animal slowly suffocating under the crushing weight of distributed nihilism. Office Space, to use the modern parlance, was a mood.

Eventually you get to Generation Z: after enough time on the experiential treadmill, their solution was to just reinterpret what it means to be in danger and what it means to hurt, so they could struggle again, so they could fight against something 'real'.

Another film from 1999 expressed the sentiment well:

But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from.

I was speaking specifically to this comment thread/similar comment threads here on the Motte and am not sure how people more generally use 'HBD awareness' in conversation.

From this thread, you said, paraphrasing, 'Assuming for the sake of the argument that HBD is correct, what does being "HBD aware" add?', and 4bpp, again paraphrasing, explained that HBD is an 'alternative to the normal structural racism argument used to explain disparate outcomes; with HBD we could stop enforcing disparate impact laws, because disparate impact would no longer be considered iron-clad proof of racial discrimination'. Finally Doubletree2 chimed in, yes I am still paraphrasing, saying that 'explaining HBD to the structural racism people would just convince them that structural racism is correct, because you sound like a racist'. I was responding to what I felt was Doubletree2's confusion as to what was being discussed, and that nobody was using 'HBD awareness' to mean telling progressives HBD things. In both your prompt and 4bpp's response, it is a basic assumption of the thought experiment that HBD is accepted enough to inform policy.

I think the phrase 'HBD awareness' is being used specifically to sidestep the practical political realities of how unpopular the concept is. That is, I do not think most people mean a literal awareness campaign where they want to just go around and tell progressives that race realism is correct, or some such, and think that would work. I assume that when 'HBD awareness' is brought up, it is normally presupposing a world where people are at least open to being convinced that HBD is correct, or already think that it is correct, and then reasoning about the possible policy realities from that point.

This is already a thing, at least where I live. Any time I see the doctor I always leave with a handful of documents covering any medications or exercises or what have you that they are recommending. Of course I leave those papers in the car and never look at them again.

I guess there are two ways to read the relevant comments. One would be that religious people actually had better predictive modeling skills and their rejection of gay marriage and similar trends was based on them having an accurate model of how that would lead to specific bad outcomes.

The other reading has a bit more wiggle room. Maybe, conservatives and religious types had passed down and maintained social technologies that were valuable and well-honed, ironically, by a process more like evolution than intelligent design. It was from these inherited norms and values that they knew 'something' was wrong without actually understanding the complicated multifaceted societal shifts and changes that would come about in response to any given policy.

If the second position is all that is being claimed, then the internal experience might have gone something like: back then, I believed in secular, hedonistic sexual norms and values and thought religious people were crazy. Two adult homosexual people having relations, dating, and getting married all seemed like totally acceptable/good things, and I supported the general cultural zeitgeist that was in favor of gay marriage.

As time has marched on, I am increasingly confronted by things that seem to be coming out of that same cultural movement I once supported, and that I now find distasteful. I can see a through-line from the arguments and ideas that I once repeated to the slogans and activism of today. I regret the confidence with which my younger self dismissed the concerns raised by traditional/conservative/religious figures. It increasingly looks like their social technology was correct in some way about the nebulous dangers of increasingly liberal sexual norms and values, and now we are living through the consequences of them losing that battle.

This certainly speaks broadly to my personal rightward shift.

I believed that we really understood sociology and that the social sciences were robust, accurate models of reality; that all calls for traditional/religious/conservative values were born of ignorance at best and malice at worst. Then I started reading SSC, and my faith in the social sciences was shattered (irrevocably?). My whole worldview came crashing down, sexism first, then racism; every aspect of the liberal progressive package was called into question. Where once it was obvious beyond question that Christianity was an arbitrary, useless, hateful ideology, now I wonder: how did it spread so far (it wasn't always powerful and rich)? How did enslaved priests convert the Vikings? Maybe memetic fitness is a real thing, and Christianity was actually a valuable and insightful social technology that made the societies that adopted it better? I don't actually strongly believe this is true, but it certainly seems possible to me now.

So I might be projecting, but when I hear someone say that 'maybe the religious doomsayers were on to something', it speaks to me. Even if I doubt I could find a specific religious doomsayer whose positions I would endorse.

I feel like the original quote is playing definitional games around 'responsibility' in exactly the way you just laid out. Both of the types of blame you describe are totally coherent and acceptable concepts within the normal understanding of the word. That is, blaming other people can change your actions. Harry's advice to a young child whose parents got run over by a drunk driver would be "it was your fault," and he is clearly a monster. The best thing the kid can do is to blame others, blame drunk drivers, end friendships with people if they drive drunk, etc. That, at least, could potentially save other people's lives. Playing a definitional game such that the kid's behavior is defined as 'holding yourself responsible for your parents' death' is about as insightful as asking if a hotdog is a sandwich. To say nothing of the emotional component.

I like this scene from Atomic Blonde.

Thank you for the suggestion but I don't trust myself to articulate my ideas clearly in a spoken language format so I will stick to text for now.

I have driven as far west as Las Vegas and as far east as NYC, on I don't even know how many multi-day road trips, etc. I have a family member who sets the cruise control to the speed limit and doesn't touch the gas. We can go hours getting passed non-stop while never once catching up to a car ahead of us. Either everyone who isn't speeding is also doing the cruise-control-at-exactly-the-speed-limit thing, or almost nobody is driving at or under the speed limit. I often complain about how dangerous it is, because even the 18-wheelers all want to pass us, and that shit is risky on a two-lane country road.

Nobody is being compelled to do anything...

My understanding is that an ultimatum from A to B with no external enforcement mechanism would still be commonly understood as a compulsion placed on B by A.

...since it’s a voluntary debate with ongoing negotiations as to what would even happen.

This is exactly what I am replying to. @ymeskhout presented a conversational norm/expectation that they felt was necessary to have the conversation, and I was questioning the validity and generality of that expectation.

An isolated demand for rigor is only a coherent concept in a world of generalized principles. Obviously it is okay to treat different cases differently, but you should be aware that you are doing it, and if you are worried about epistemic hygiene you should interrogate your reasons for the different treatment of different topics.

@ymeskhout seems to appreciate this, and offers their reason for making this specific demand in this specific situation; I just don't find "they might motte-and-bailey me" to be a very convincing reason for making this specific demand.

Of course, if the demand is softened from (bolding mine)

I personally think pursuing the "election was flawed/unfair" angle is a sound strategy much more grounded in reality, but it requires disavowing the "election was stolen" angle in order to close off motte-and-bailey acrobatics between the two.

to,

stating one's positions clearly and unambiguously.

then I think it is totally reasonable.

Again, I am concerned specifically about the generalized principle of the form: Bob must disavow 2.a if he wants to discuss 2.b with Alice. I think it is a bad principle, and I am suspicious that anyone would actually apply it fairly. If you think that is a totally normal and anodyne request, if you can't imagine a situation where it might be employed nefariously to manipulate the terrain of a discussion, that's fine. If you think you would/do apply it fairly when it is needed, and never when it is not warranted, that's also fine; I am not going to actually check.