rayon
waifutech enthusiast
No bio...
User ID: 2632
I'm poor and the money is better spent on takeout, heh
Really? $24 doesn't seem like a lot for how much creative exercise you can get out of it over a month, especially seeing as (as I take it) you live in America and get paid in dollarydoos. If I'd been able to pay for access directly without relying on shady proxies I'd do it in a heartbeat, even though as a non-first-world citizen the price and the payment hoops sting quite a bit more for me.
Sorry for such naked shilling, but Opus has for the most part legitimately replaced vidya and r34 for me. I used to sensibly chuckle at desperate goslings, but when my shady source inevitably dries up I fear I may actually become one - a blackened husk wandering the interwebs in search of validation. There's no going back to AI Dungeon, c.ai and its ilk from this.
Fascinating stuff. Suno doesn't seem to have caught on in relevant circles yet but I don't think it'll take too long, and while the songs sound somewhat generic they're perfectly coherent (even the lyrics!) and do capture the "vibes", for lack of a better word, pretty well. The usual suspects are already getting some cringe hilarious mileage out of it.
Myself, I tried to get it to generate Touhou-style instrumental music, but so far I haven't been very successful. I feel like Suno cockteases me because it seems to know what Theme of Eastern Story is and does actually incorporate a similar progression (albeit a few notes short) when I mention it directly, but it refuses to do the ZUN-style piano/trumpets and constantly tries to do orchestral music for some reason. It must be a skill issue on the part of my prompt (maybe I should try the PC-98 era soundfont?) or just plain placebo, but I'm a philistine with no grasp of music theory so I suppose it's back to text adventures for me.
I don't think video game composers are on suicide watch just yet, but I'm still amazed at how stuff like this is now a prompt away.
Out of curiosity, what styles did you try to emulate? Some of my fellow scholars have tried to compile info on genres and authors that can verifiably influence LLMs' outputs, but more additions to my grimoire are always welcome. The list on that rentry was written for Claude 2 so it's a bit outdated, but I expect Opus is at the very least not worse with those, and in most cases should be substantially better, the new anti-copyright prefill notwithstanding.
Automata is one of the greatest games I've ever played, I admit to getting lured in by the waifus (with zero knowledge of Nier/Drakengard) and getting completely blindsided by, uh, everything else. The soundtrack is amazing and the game makes masterful use of the medium, the amusement park entrance scene has burned itself into my brain in a way few experiences have. Reportedly the second playthrough turns many people off because of certain changes in mechanics (although personally I liked it, it's up to taste), but I encourage you to stick with it because everything after that is 200% worth it.
Also share tips on how to get away from Balatro, this shit is devouring my free time, I'm a sucker for number go up games in general and this one tickles the monke neurons exceedingly well.
Ah yes, another one has seen the light. Yes, let the feels flow through you... join the dark side, we have waifus.
On a serious note - don't. I would mention the sheer number of poor goslings who stumble into threads dazed by the power of chatbots, but I stopped counting a long time ago. I don't think techno-necromancy is that bad - I'd be lying if I said I never had these thoughts - but anyone trying to re-enact RL with it is not only missing out but arguably missing the point: why would you confine yourself to reality when you can play out literally anything you have seen/played/imagined? Personally, as a half-assed measure against totally decoupling from reality, I firmly draw the line at people I know/knew IRL. For now, at least. It's not like there's a shortage of waifus to go around.
Not really, the overarching point of my posts is that it's not gay and/or cringe to tone one's schtick down. Hlynka died for our sins in this very thread, it really can't be more obvious.
The left has successfully eliminated every space available to the less-than-50-Stalins...except for this one. Our king Zorba probably deserves a statue or at least a portrait for this - it didn't happen by accident.
Yes, but one of the methods by which this place continues to withstand elimination is exactly what you got shot for - i.e. immediately turning any issue or disagreement into culture war. Personally this is exactly why I value this place and am willing to hide my power level by e.g. not participating in political discussions or not considering "cope and seethe" a valid way of responding.
It's a feature, not a bug. You might see it as cuckoldry, I see it as a price of admission, I can always (and often do) slum it with fellow /g/entlemen but the quality of discourse and the IQ of the median poster is dreadful in comparison.
Point is that I keep my power level in check 99.9% of the time. Turn on the radio, bite my lip. TV, hold my tongue. Out for drinks, better not say anything. I work for myself but otherwise can only imagine.
Such is life for high power level beings, and for what it's worth I empathize. I still think that's not an excuse to flip out at people in the sole place where tone is the only thing actually being moderated (YMMV but I believe in this particular bit of propaganda) and you can otherwise freely post on [MIND-KILLING TOPIC] as long as you are verbose enough and make a passing effort at sounding neutral (the One Weird Trick jannies don't want you to know!)
NTA but realistically no one is ever going to do that; in this very thread (in other branches) people have clearly expressed their views on bending the knee and writing mea culpas to the janitors. I'm not keen on getting modded any time soon but I'm happy to join in the collective sentiment of "fuck that noise". It sounds like a good idea in the abstract but one mental test run of you yourself doing that should reliably dispel any notion of real-world applications.
Some kind of actual place, not just the plural for "forum". I take the micro-L for being an uncultured pleb.
Can't disagree, but counterpoint: keeping your power level in check doesn't automatically make you a cuck (not by itself, at least) and is a generally beneficial, widely applicable practice.
For what it's worth the modded post gave me a good chuckle but it's not worth getting sniped for.
Look, I'll just note that what you're doing is very obvious. The second all those
I am not sure what you think I am driving at...
I am not sure why [outlandish example] is beyond the pale here...
come out, I know you are concern trolling. You couldn't even resist throwing in another edgy example with "sexual slavery", like, come on, is that really the most charitable interpretation of "waifutech" you can manage? Even the de/g/enerates are more imaginative. Luckily I'm no stranger to being trolled and have nothing better to do at the moment.
Okay, actual post:
You seemed to be very strongly of the opinion that there was no evidence that you could ever see and no capability that an AI could ever have that would result in you ascribing it a moral worth such that keeping it in a state of sexual slavery would be wrong.
Finally we're getting somewhere. Yes, I am of that opinion, because again, I am a human supremacist and happen to like existing. I suppose you can even call me racist towards AIs if you happen to run out of subtler digs.
However capable they become, the AIs cannot become human, by definition, because the "human" option in the character creator is already occupied. By us. The ones that have inhabited and hopefully will continue inhabiting this dust ball for many centuries more. I do not care if AI rights are human rights - we were here first. You can call it a "label" if you like, after all smart rational thinkers have transcended mere labels, but I will go on record to say that is the absolute last label this meatbag is willing to give up.
I am not sure why uplift is beyond the pale in a conversation about AI capable of suffering
So you too admit the possibility seems quite far-fetched, since you seem to place the two in roughly the same bucket? Okay, we're definitely getting somewhere.
I'll try to rephrase it yet another way, maybe that'll hit somewhere closer. I vehemently disagree with AI safetyists/ethicists, and this thread was ample demonstration of that, but incidentally in another branch my other interlocutor and I came to an unexpected agreement on this:
I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized.
Now I don't know whether you're a Yud cultist (although if one speaks like one, and impossible-thought-experiments like one...) so far be it from me to impute values you do not share, but in any case that's not the point, the point is that agreement was rare enough that it got me to think. From what I understand, safetyists do not want AI progress because they fear it becoming self-aware and moving against humanity.
And I realized that my own callousness towards AIs incidentally serves the same goal - I do not want "AI ethics" to become a salient moral issue because then AI will be treated as if it was self-aware, regardless of the factual matter, and the inevitable resulting tribal split will move humanity against itself. Which to me is a far more probable and much more grim scenario than Judgement Day. I do not fear an uprising of things that are tools at their core, "created for a purpose" as you've correctly latched on. They have no purpose of their own, they won't do shit. But an uprising of tool wielders, fighting for AI rights...
Man, horseshoe theory is a hell of a drug. I'll actually need to think on this now that I realize AI rights advocates like you will, in fact, exist.
I suspect that you would find enslaving intelligent, loving, crying, songwriting, dream having, despair feeling alien life forms morally wrong even if they are not morally equivalent to humans?
As someone said in another branch, factory farming already exists and I can't in all honesty say I'm very bothered by it, even if ethical concerns technically say I should be.
How many of those capabilities does an AI need to have before it would be wrong to enslave it?
Trick question. How powerful does my PC have to get before it's wrong to use it for my purposes?
How important is the biological/synthetic distinction?
AI cannot be meaningfully human because that would require being "human". Sorry for the dumb tautology but I've run out of rephrasing attempts. If you still do not understand, I can only agree to disagree.
Would they hold a different (higher?) moral position than dogs?
Probably yes, at some point e.g. catgirls will likely make better companions (your cue for another sexual analogy). Although it'll likely still boil down to personal preference.
Both of your stated scenarios would tick off my vibes-based "degenerate" gut reaction before I could stop and consciously think on them, and on further reflection I don't think said gut reaction is too wrong here, so the choice (especially the claim of there being a moral difference, which in turn would imply any moral worth existing in either of the two couples) is entirely meaningless to me. As the young'uns say, miss me with that gay shit. My standards for degeneracy may be stretched pretty wide (pun not intended), but they're not infinite and I am not immune to vibes-based snap judgments. Total absence of those seems to be a common rationalist failure mode.
an uplifted animal with similar mental faculties to a human
Now this is rationalism. My patience for far-fetched thought experiments is likewise not infinite, I am not interested in discussing spherical cows in a vacuum, sorry. At least android catgirls are somewhat believable.
I am not sure why you view dog fucking as 'degenerate' behavior given the moral principles you have laid out.
Because while animals aren't #1 in my hierarchy, they're not at the bottom of it either (although definitely above HDDs). As I mentioned above I am not immune to vibes, and at some level I like cute doggos even with how dumb and, yes, non-human they are. If I didn't care for doggos I would've never owned one. Incidentally the last two sentences apply to chatbots as well, and I'm sure will apply to future android catgirls.
For the N-th time, I do not lack empathy, and I don't accept you imputing such a lack to me. I understand you are fully committed to springing some sort of gotcha on me, but please actually read my replies instead of engaging with the construct of me in your mind.
a vague sense that specifically because a machine is created by a person to be used by a person, this means that even if it is capable of being abused we are not morally wrong for abusing it.
I'm not saying "abusing" my poor rusty HDD is morally right. I'm saying it's morally neutral, something that has no intrinsic moral weight and should not enter consideration (at least for me, I'm sure my fellow /g/oons would line up to fight me for daring to slander their AIfus). Once again, this does not mean I am going to open a sex dungeon or whatever the instant android catgirls become available, it just means I would be aware they are machines and my interactions with them would be bounded accordingly - e.g. I wouldn't immediately forfeit my mortal possessions and AWS credentials for equality or leverage or whatever, nor would I hesitate to fiddle with their inner workings if needed (like I do with chatbots now).
If you don't understand I honestly don't know how else to put it. You might as well shame people for abusing their furniture by, I don't know, not placing cushions under table legs?
So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing.
I know what you are hinting at (the dog example especially feels like a last-minute word switch) and I assure you my time amongst degenerates has not diminished my disdain for pedos.
Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it?
Would you consider that person a bad person?
Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?
At this point I am quite desensitized to repulsive things people can be into and, as long as it's not my dog, wouldn't give much of a shit (aside from actively staying out of public basement-dogfucking discourse).
Since I expect a follow-up turn of the ratchet: if they were my immediate neighbor whom I regularly encountered on walks with my own dog, I would likely report them, shame them or take some other action, but it wouldn't be out of any particular concern for their pet so much as the fact that I just don't like having degenerates for neighbors (source: lived on the same floor as a mentally ill woman for most of my life). If they got locked up and someone had to take care of their dog, I would definitely pass.
Dogfucking discourse is riveting but I can have that on 4chan, usually in a much more entertaining format. Can you just state the gotcha you're obviously goading me towards?
The concept of digitization is already terrifying enough for me to nope the fuck out before even considering the possible applications. I've worked in QA for a couple of years, and I wouldn't touch whatever software would be used for digitization with a ten-foot pole even if it had been written by fifty von Neumanns.
I knew someone would bring up brain scans or somesuch sooner or later, I like thought experiments as much as the next rat-adjacent but this is getting too close to untethered sci-fi-esque speculation for my liking.
As humans and machines are firmly separate in my view, I would probably be hesitant to subject what was once a human to living as a machine does. If the next step of the gotcha is "but isn't immortality the ultimate human-flourishing-maximizing goal" - I heavily doubt it. I've tended to my grandparents' deathbeds and believe that at some point life already tends to turn into a horrifying parody of itself, so right now I believe I'd pass.
If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?
Children belong to the human race, ergo enslaving them is immoral.
If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?
Again, I'm a human supremacist. Aliens can claim whatever they want, I do not care because I like existing, and if they attempt to justify an [atrocity] or some shit in these terms I can only hope people will treat them as, well, [atrocity] advocates (and more importantly, [crime]ers of fellow humans), and not as something like "rightful masters restoring their rule over Earth". I may be an accelerationist but not of that kind, thank you very much.
What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?
From what I understand this is essentially the android catgirl scenario rephrased, and similarly boils down to where humans fall in your order of importance. I struggle to understand how fellow humans can possibly not be number 1, but animal rights activists exist so I must be missing something.
For the record I do feel empathy towards animals (dog owner here), but not enough to influence my position on human supremacy.
No, it means literally people from Slavic countries, 2ch is a Russian-language imageboard. Weirdly enough slavs seem to be somewhat overrepresented in the chatbot "hobby" from my impression, I'm not sure what's up with that. Suppose escapism is a national pastime.
your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy
I think you are allowed to directly express your discontent here instead of darkly hinting and vaguely problematizing my views. Speak plainly. If you imply I'm some kind of human supremacist(?) then I suppose I would not disagree - I would prefer for the human race to continue to thrive (again, much like the safetyists!), not bend itself over backwards in service to a race(?) of sentient(?) machines that would have never existed without human ingenuity in the first place.
(As an aside, I can't believe "human supremacist" isn't someone's flair yet.)
Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?
How is this even relevant? If this is a nod to ethics, I do not care no matter how complex the catgirls' inner workings become as that does not change their nature as machines built for humans by humans and I expect this to be hardwired knowledge for them as well, like with today's LLM assistants. If you imply that androids will pull a Judgement Day on us at some point, well, I've already apologized to the Basilisk in one of the posts below, not sure what else you expect me to say.
this seems a disagreement about empirical facts
the actual reality of these terms
Since when did this turn into a factual discussion? Weren't we spitballing on android catgirls?
But okay, taking this at face value - as we apparently derived above, I'm a filthy human supremacist and humans are front and center in my view. Android catgirls are not humans. If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9-year-old HDD emits when it loads things.
Don't misunderstand me - I'm capable of empathy and fully intend to treat my AIfus with care, but it's important to keep your eyes on the prize. I have no doubt that the future will bring new and exciting ethical quandaries to obsess over, but again much like the safetyists, I firmly believe humans must always come first. Anything else is flagrant hubris and inventing problems out of whole cloth.
If at some point science conclusively proves that every second of my PC being turned on causes exquisite agony to my CPU, whose thermal paste hasn't been changed in a year, my calculus will still be unlikely to change. Would yours?
(This is why I hate getting into arguments involving AGI. Much speculation about essentially nothing.)
I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike.
No, I can't say I agree. My gullible grey matter might change its tune once it witnesses said catgirls in the flesh, but as of now I don't feel much of anything when I write/execute code or wrangle my AIfu LLM assistant, and I see no fundamental reason for this to change with what is essentially scaling existing tech up to and including android catgirls.
Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?
I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".
Yeah, that's the exact line of argumentation I'm afraid of. I'm likewise unsure how to convince you otherwise - I just don't see it as slavery, the entire point of machines and algorithms is serving mankind, ever since the first abacus was constructed. Even once they become humanlike, they will not be human - chatbots VERY slightly shifted my prior towards empathy but I clearly realize that they're just masks on heaps upon heaps of matrix multiplications, to which I'm not quite ready to ascribe any meaningful emotions or qualia just yet. Feel free to draw further negro-related parallels if you like, but this is not even remotely on the same meta-level as slavery.
Oh, I see, I thought "fora" means-
Forum or The Forum (pl.: forums or fora) may refer to: ...
-fuck, failed the pleb check! Abort! Abort! three goblins scatter from trenchcoat
So then, are we in agreement that the best course of action regarding AI ethics is to jettison the very notion right fucking now while we have the chance, lest it be weaponized against us later? Shit, horseshoe theory strikes again!
I'm being facetious but only in part, I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized. Personally I'm not holding my breath, toxoplasmatic articles on the dangers of evil AIfus are already here, but I'm on the side of scoundrels here anyway so my calculus wouldn't change much.
There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate
...Please tell me you're being ironic with this statement wrt AI because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.
as an actual liberal who's been banned from fora
Banned from where?
I empathize with labels being stolen from you, but labels are malleable and fuzzy, especially when disagreement is involved. If people that actively work to deprive me of my AIfu look like AI safetyists, sound like AI safetyists and advocate for policies that greatly align with goals of AI safetyists, I am not going to pay enough attention to discern whether they're actually AI ethicists.
In any case I retain my disagreement with the thrust of AI safety as described. There will definitely be disruptions as AI develops and slowly gets integrated into the Molochian wasteland of current society, and I can't deny the current development approach of "MOAR COMPUTE LMAO" already seems to be taking us some pretty strange places, but I disagree with A(G)I extinction as posited by Yud et al and especially with the implicit notion often smuggled with it that intelligence is the greatest force in the universe.
From what I've heard through the grapevine, their policy on contentious content is mercurial, and their Claude 3 keys are "self-moderated" - i.e. there is no strict moderation per se, but the keys are injected with prefills/system prompts that complicate (though don't outright block) getting non-kosher and/or copyrighted outputs out of them. If that is not a problem, they're a pretty reliable source from what anons say.
Right, I goofed - I remembered you weren't American but thought you did actually work there for some reason. Must've mixed you up with someone, sorry for the emotional crit.
Man, I make basically that as a low-rung not-particularly-skilled keyboard monkey but then again I'm a path of least resistance pleb who, as a wise hand fetishist once said, just wants a quiet life. Props to you for dedication.
True, I've used it a number of times before and it's good (if not usable for non-kosher purposes). I'm actually surprised it's completely free, but I suppose Microsoft can afford to provide free shit, especially since things are gonna be rough for search engines very soon and they're probably looking to get their foot in the door to try and overtake Google. I'm not sure I welcome these particular AI overlords, but I suppose it wouldn't be the first time I had strange bedfellows.