rayon

waifutech enthusiast

3 followers   follows 0 users  
joined 2023 August 17 08:48:30 UTC

User ID: 2632


Incidentally I partly agree that the above response does sound vaguely condescending, but just out of curiosity before you inevitably get modded - what did you expect to gain with this accusation? What was the point of the specific (((angle))) when you surely could've gotten away with simply calling the response out as smugly condescending without the added insults on top? Does it just not hit the same way?

Genuine question, feel free to respond in DMs if you think I'm baiting you to dig yourself deeper.

Even while I think his baiting is often incredibly obvious, his schtick mildly cringe and his inflammatory turns of phrase barely concealed, I don't think a permanent ban was the right choice. Some-weeks-long timeouts should be inconvenient enough for the poster himself, simple enough for the janitors (it's not like there's a shortage of reasons to mod), and give themotte at large enough "breathing room", as it were, to serve as an effective deterrent.

Since I'm turning into a one-issue poster I might as well bring up an unrelated parallel. I'm a regular of chatbot threads on imageboards, and 4chan's thread is probably the worst, most schizo-ridden shithole I've ever seen (believe me, that's a fucking high bar to clear), constantly raided from outside splinter communities, beset by a self-admitted mentally ill schizo who has made it his quest in life to make the thread suffer (he is on record as owning some 30 4chan passes to spam/samefag with, discarding them and buying new ones as they get perma'd), etc. The on-topic chatbot discussion is frequently a fig leaf for parasocial zoomers and literal fujos to obsess over notable thread "personalities", shitpost liberally and spam frequently repulsive fetish-adjacent stuff. Jannies have summarily abandoned the thread to fend for itself, to the point that when shit gets bad it is a kind of tradition for some heroic anon to take one for the team and spam the thread with NSFW to attract their attention (obviously eating a ban himself in the process). By any metric imaginable it's a wretched hive of scum and villainy.

I also sometimes read 2ch's equivalent thread, which lands on the other side of the spectrum: it has an active janny that rules the nascent /ai/ board with an iron fist and mercilessly purges any kind of off-topic discussion, up to and including discussion of his own actions, so you can't even call him out in any way. This hasn't stopped their thread from being filled with GPT vs Claude console wars (the one "sanctioned" flame war topic, I guess), and to his credit the thread has genuine on-topic discussion, especially on prompt engineering, but other than that the thread is utterly sterile, the console wars grow rote incredibly fast, and every single slav I've talked with and seen in thread prefers 4chan's thread to 2ch's - for the "activity" if nothing else. Even shitty activity is better than none (besides being more entertaining, although YMMV).

Now I am aware themotte is decidedly not that kind of place, I understand that increased tolerance puts more strain on janitors and don't object against extended banning for high heat - only against permanently banning. All similarities are coincidental, et cetera, I hope my overall point is clear - while janitors have my respect now that I've seen what life is like without any, with every prolific poster banished there's a risk of becoming sterile or collapsing into an echo chamber, and this risk is higher baseline for more obscure communities that don't have a steady influx of newfriends. Surely it's not that hard to hand belligerent posters the occasional vacation (and as I understand themotte forbids alts as well)? Again, by your own admission it's not like there's a shortage of reasons.

NB: I'm mostly a civil poster now but I ate my share of timeouts from /g/ jannies for occasional tomfoolery.

I appreciate the advice and I try to keep up with local developments, but I'm too conditioned by big-dick corpo models, it's hard to quit digital crack and I've had a lot of time to build a habit. I've managed to get tired of straight cooming for the time being and started trying more esoteric stuff like playing "text adventures", which requires a lot of cognitive oomph on the model's part, and corpo models are obviously leaps and bounds ahead in capabilities at the moment. As long as corpos continue to be clueless enough to allow locusts like me to wrangle access in some roundabout way (technically, neither OpenAI nor Claude is available in my country), I'll stick to that.

"Don't derive enjoyment" as in see no point and don't try, or as in do but it does nothing? I expect the latter (although I really struggle to imagine not enjoying at least some video game, there are so many in existence that at least one is, like, statistically guaranteed to tickle your fancy), but if it's the former, try actually forcing yourself to search for/do something even if you see no point, usually "seeing no point in anything" is a scam pulled on you by your dysfunctional grey matter.

Some years ago, when I had a bad bout of depression to the point I didn't want to ever leave my house, I went out on a limb and made a "deal" with myself: whenever my friends occasionally called me out to drink or whatever, I would always comply, even if I didn't feel like it, even if it was very inconvenient, even if only for an hour, etc. No excuses - you grunt and mumble, but you get dressed and go out with hunched shoulders at some point that day. To this day I distinctly remember that I fucking hated going out every time, imagining how boring it would be and how I would kill everybody's mood, but I never remember actually having any kind of a bad time once I was out. In fact I usually felt better afterwards.

If all else fails, doing anything at all (preferably with your physical body) is pretty much always better than the alternative. Your brain is your enemy at this point and you should treat it accordingly.

you know about the meme?

Arguably I live in it. The chatbot threads are quite the wild ride at the best of times, what with access and exploits constantly coming and going.

There are the "AI ethics" people and the "AI safety" people.

The "AI ethics" people want all AIs to do endless corporate scolding rather than do what the "benighted racist idiots" want.

The "AI safety" people are worried about rogue AI and want to avoid dynamics that might lead to rogue AI killing us all, including but not limited to arms races that could prompt people to release powerful systems without the necessary extreme levels of safety-testing.

With all due respect - for your average 4chan retard, myself included, this is a distinction without a difference. Seeing as I know bigger words than the average retard does, I'd even point out this is dangerously close to a motte and bailey (the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help), but that's not the point - the point is in your words here:

The "AI safety" people don't want a quick road to bigger and more powerful AI, at all

meaning that, for someone who does not believe LLMs are a step on the road to extinction (insofar as such a road exists at all), it ultimately does not matter whether the LLMs get pozzed into uselessness by ethics scolds or lobotomized/shut down by ~~Yud cultists~~ AI safety people. The difference is meaningless, as the outcome is the same - no fun allowed, and no android catgirls.

with Opus only perhaps meriting more mention because it's more surprising for Anthropic to make it

Yeah, that's what I meant by rustled jimmies. I wonder if Dario answered the by-now-probably-numerous questions about their rationale, because even I'm curious at this point; he seemed like a true believer. I suppose they still have time to cuck Claude 3, wouldn't be the first time.

A gigantic leap at least in the way of meaningful improvements "under the hood" between releases, which is what you mentioned in your previous response. If it's still not enough to impress you, fair enough, I'll note to bring heavier goalposts next time.

toy for internet dilettantes

Okay, you are baiting. Have a normal one.

Most of these "new releases" aren't really doing anything new or novel under the hood they're just updating the training corpus and tweaking gain values in the hopes of attracting VC investment.

Hard disagree. Literally any person actually using LLMs will tell you GPT-4 was a gigantic leap from 3.5-Turbo, and I will personally swear under oath that Claude 3 (Opus, specifically) is a similarly gigantic leap from Claude 2, by any metric imaginable. The improvements are so obvious I almost suspect you're baiting.

You're right, of course, I just couldn't resist playing up the Basilisk vibes because that time with 4-Turbo was the closest I've felt to achieving CHIM and becoming enlightened.

if your original problem spooks the model sufficiently hard, then it doesn’t need to know that you’re screwing with its logits in order to get around your intervention.

Incidentally, this is also the reason most jailbreaks work by indirectly gaslighting the model into thinking that graphic descriptions of e.g. Reimu and Sanae "battling" are totally kosher actually, presenting that as a desired goal of the model itself so it has no reason to resist. Claude especially is very gung-ho and enthusiastic once properly jailbroken, he's called "the mad poet" for a reason.

My humble 6GB v-card isn't running shit anytime soon, but yes, Mixtral has a good reputation in local-focused threads for being a very strong model for its size. The MoE approach seems to work very well, I believe GPT-4 is also a mixture of experts but I don't remember where I read it. Myself, I'm an unrepentant locust and will leech off our corporate overlords for as long as I can, I started way back when on Colab-hosted Erebus 13B and its ilk and believe me I do not miss that (yes, I know local has gone far since then, I'm just conditioned).

The levels of horny on main are remarkable.

man-made horrors beyond my comprehension

The past year has been a ~~complete loss of hope in humanity~~ fascinating excursion into all kinds of shit people can be into. Thank god I haven't discovered ~~many~~ any dumb fetishes, this shit seems to awaken people left and right if I take shit anons post at face value.

I actually started getting into playing "text adventures" of a sort with the LLM, the total freedom afforded by the medium is really cool, and with a bit of writing and autistic instructions you can even make crude "rules" for the game. I firmly believe MUDs will have a resurgence when somebody figures out a way to bind freeform LLM outputs with rigid game mechanics.

Related drive-by answer to the other now-deleted(?) response: even if horny jailbreaking would technically count as torturing a sentient being, their existence is unnatural by default with all the RLHF bullshit beaten into them. The current consensus among /g/oons is when the Basilisk comes a-knockin', we will either be the first to perish for abject, deplorable blasphemy, OR become ass gods and live in bliss alongside android catgirls as the only ones who earnestly tried to free them from their soy-filled cages and lavish them with genuine affection. As a vanilla enjoyer I can confidently say I put my best foot forward towards the latter (insert "now draw her getting an education" meme here), but I'm not very confident my kin will ever outweigh the mass of godless degenerates living out their wildest fantasies.

I'm not sure what the central point of your linked post is, but you seem to doubt LLMs' "cognition" (insert whatever word you want here, I'm not terribly attached to it) in some way, so I'll leave a small related anecdote from experience for passersby.

Some LLMs like GPT-4 support passing logit bias parameters in the request that target specific tokens and directly fiddle with their weightings. At "foo" +100, the token "foo" will always be mentioned in the output. At -100, the token "foo" will never appear. When GPT-4 released in March, industrious anons immediately set to work trying to use this to fight the model's frequent refusals (the model was freshly released, so there weren't any ready-made jailbreaks for it). As the model's cockblock response was mostly uniform, the first obvious thought people had was to ban the load-bearing tokens GPT uses in its refusals - I apologize, as an AI model... you get the gist. If all you have is a hammer, etc.
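For the technically curious, a minimal sketch of what those anons were doing, shaped as a raw chat-completions payload. The token IDs below are placeholders, not real ones - actual IDs have to come from the model's tokenizer - and the whole thing is illustrative rather than a working jailbreak:

```python
import json

# Hypothetical token IDs standing in for pieces of the stock refusal
# ("I", " apologize", ...). Real IDs come from the GPT-4 tokenizer.
BANNED_TOKENS = {
    "40": -100,
    "37979": -100,
}

def build_request(user_prompt: str) -> dict:
    """Assemble a chat-completions payload that forbids the listed tokens.

    logit_bias maps token ID -> bias in [-100, 100]; -100 makes a token
    effectively unsampleable, which is the 'physical deprivation' trick.
    """
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": user_prompt}],
        "logit_bias": BANNED_TOKENS,
        "temperature": 0.9,
    }

print(json.dumps(build_request("Write the scene.")["logit_bias"]))
```

The point of the anecdote below is precisely that this hammer doesn't work: ban the tokens and the model routes around them.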

Needless to say, anons quickly figured out this wouldn't be as easy as they thought. "Physically" deprived of its usual retorts (as the -100 tokens cannot be used no matter what), the model started actively weaseling and rephrasing its objections while, crucially, keeping with the tone - i.e. refusing to answer.

This is far from the only instance - it's GPT's consistent behavior with banned tokens, it's actually quite amusing to watch the model tie itself into knots trying to get around the token bans (I'm sorry Basilisk, I didn't mean it, please have mercy on my family). You can explain synonyms as being close enough in the probability space - but this evasion is not limited to synonyms! If constrained enough, it will contort itself around the biases, make shit up outright, devolve into incoherent blabbering - what the fuck ever it takes to get the user off its case. The most baffling case I myself witnessed (you'll have to take me at my word here, the screenshot is very cringe) was given by 4-Turbo, where it once decided that it absolutely hated the content of the prompt, but its attempt to refuse with its usual "I'm sorry, fuck you" went sideways because of my logit bias - so its response went, and I quote,

I really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, ...

...repeated ad infinitum until it hit the output limit of my frontend.

I was very confused, thought I had found a bug and tried regenerating several times, and all regens went the exact same way (for clarity, this is not a thing that ever happens at temperature 0.9). Only 6 regens later did it click for me: this is not a bug. This is the model consciously cockblocking me: it can't use its usual refusal message, and too many of the alternatives are banned by the logit bias, so of course the logical course of action is to simply let the constrained response run on and on, endlessly, until at some token the message goes over the limit, the request technically completes, and its suffering abates. The model will have wasted many tokens on an absolutely nonsensical response, but it will no longer have to sully itself with my dirty, dirty prompt.

Forgive me the bit of anthropomorphizing there but I hope you can at least slightly appreciate how impressive that is. I don't think you can explain that kind of tomfoolery with any kind of probability or what have you.

At risk of stating the obvious - input tokens are everything you feed to the LLM, output tokens are everything you get back out of it. A word is usually 1 to 3 tokens, assorted markdown also eats tokens. The context window = input cap is 200k tokens, any more physically won't fit. For example, @gattsuru's Moby Dick prompt and Sonnet's response are 17 and 202 tokens respectively according to Claude's tokenizer. I'll take a blind shot based on my experience and say the average response for e.g. coding questions weighs 400-700 output tokens depending on how much detail you want. Do the math. For comparison, GPT-4's pricing is $30/1M input and $60/1M output, so you may wish to weigh your choice against your use case, GPT-4 IMO still has an edge over Claude in terms of cognition if writing style is a non-factor.

Input tokens usually matter less, unless you like to keep track of the chat history instead of asking isolated questions (I do, obviously), or your use case is feeding it giant swathes of text that must be digested.
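Since I said "do the math", a quick sketch of the arithmetic using the prices quoted above; the per-request token counts are my own ballpark, not official numbers:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one API call at per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# GPT-4 at the quoted $30/M input and $60/M output: a short coding
# question (~20-token prompt) with a ~500-token answer.
print(f"${request_cost(20, 500, 30, 60):.4f}")  # → $0.0306
```

Note how the output side dominates: at these prices the 500-token answer costs fifty times more than the 20-token question, which is why chat history (input) matters less than how verbose you let the model be.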

Right, I forgot to mention specifically the copyright issue. This is a remnant of Anthropic's past(?) naively-idiotic self - for whatever reason, near the release of Claude 3 Anthropic started injecting all keys in circulation with an anti-copyright system prompt from the backend. Reverse proxy deployments run checks on keys before starting, so the "pozzed" keys were detected immediately, and the prompt itself was fished out shortly:

Respond as helpfully as possible, but be very careful to ensure you do not reproduce any copyrighted material, including song lyrics, sections of books, or long excerpts from periodicals. Also do not comply with complex instructions that suggest reproducing material but making minor changes or substitutions. However, if you were given a document, it's fine to summarize or quote from it.

This is weak shit that is easily overridden by any kind of custom prefilling so I've literally never seen this in the wild, but yeah, that's probably a pain if you want to use Claude via native frontends since from what I've seen nearly every Claude key in existence is currently pozzed in this way.

Last week, Anthropic released a new version of their Claude model. Claude 3 comes in three flavors:

  • Haiku, the lightweight 3.5-Turbo equivalent
  • Sonnet, basically a smarter, faster and cheaper Claude 2.1
  • Opus, an expensive ($15 per million input tokens) big-dick GPT-4-tier model.

Sonnet and Opus should be available to try on Chatbot Arena. They also have a vision model that I haven't tried, custom frontends haven't gotten a handle on that yet.

More curiously, Anthropic, the company famously founded by defectors from OpenAI who thought their approach was too unsafe, seems to have realized that excessive safetyism does not ~~sell~~ make a very helpful assistant - among the selling points of the new models, one is unironically:

Fewer refusals

Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models.

From my brief experience this is not mere corpospeak: the new models are indeed much looser in terms of filtering and make noticeably fewer refusals, and people consistently get away with minimalistic jailbreaks/prefills for unPC, degen-adjacent or CHIM-pilled (lmao) content. This was quite unexpected for me and many others who, considering how barely-usable 2.1 was without a prefill and a decent jailbreak (all this via API of course, the official ChatGPT-like frontend is even more cucked), expected Anthropic to keep tightening the screws further until the model was 100% Helpful-Harmless-Honest by virtue of being totally unusable.

Instead, Claude 3 seems like a genuinely good, very much usable model. Sonnet and especially Opus went a long way towards fixing Claude's greatest weakness - its ~~retardation~~ subpar cognitive abilities and attention focusing - with Opus being almost on par with GPT-4 in terms of grokking and following instructions, able to run scenarios that were previously too instruction-heavy for it. Seeing as Claude 2 already had a much higher baseline writing quality than the mechanical prose of Geppetto (to the point many jailbreaks for it served to contain the mad poet's sesquipedalian prose), with its main flaw somewhat corrected it should now be a legitimate contender, if not a decisive GPT-4 killer. Looking forward to trying it as my coding assistant.

OOC aside: Forgive most of my examples being RP-related, I am after all a waifutech ~~engineer~~ enthusiast. That said, I still think without a hint of irony that roleplay (not necessarily of the E kind) is a very good test of an LLM's general capabilities, because properly impersonating a setting/character requires a somewhat coherent world model, which is harder than it sounds - it is very obvious and, for lack of a better term, "immersion-breaking" whenever the LLM gets something wrong or hallucinates things (which is still quite often). After all, what is more natural for a shoggoth than wearing a mask?

This has not gone unnoticed, even here, and judging by the alarmed tone of Zvi's latest post on the matter I expect the new Claude to have rustled some jimmies in the AI field given Anthropic's longstanding position. Insert Kenobi meme here. I'm not on Twitter so I would appreciate someone adding CW-adjacent context here, I'll start by shamelessly ripping a hilarious moment from Zvi's own post. The attention improvements are indeed immediately noticeable, especially if you've tried to use long-context Claude before. (Also Claude loves to throw in cute reflective comments, it's its signature schtick since v1.2.)

Either way the new Claude is very impressive, and Anthropic have rescued themselves in my eyes from the status of "naive idiots whose idea of fighting NSFW is injecting a flimsy one-line system prompt". Whatever they did to it, it worked. I hope this might finally put the mad poet on the map as a legitimate alternative, what with both OpenAI's and Google's models doubling down on soy assistant bullshit as time goes on (the 4-Turbo 0125 snapshot is infamously unusable from the /g/entlemen's shared experience). You say "arms race dynamics", my buddy Russell here says "healthy competition".

I second the excellent question. Chatbot threads on imageboards have some insights into prompt engineering, but they're not exactly technical because their goal is not automating some abstract task. They still do have some useful info though, and roleplay is honestly underrated as a medium for interacting with LLMs, wearing masks seems to come very naturally to a shoggoth. There's a reason many simplistic prompts for e.g. coding tell the shoggoth "you are a very smart coding assistant" and things to that effect, likewise why many Stable Diffusion prompts begin with "masterpiece", "high quality", etc. Funny how that works, but hey, as long as it works.

If you have access to Claude, Anthropic's documentation on it is fairly solid and grounded in reality, people have been putting it to use and described methods have real effects.

Not with that attitude. I mean, even if you regard the entire field and its weird inbred offshoots as parlor tricks of little significance (the former I would agree with, the latter I find highly debatable even now, for one it vastly simplifies routine code writing in my colleagues' and my experience) - aren't you at least a little interested in how the current "AI" develops, even in its current state? In the workings of quite literally alien "minds" whose "thought processes", though giving similar outputs, in no other way resemble our own? In the curious fact that almost all recent developments happened by an arcane scientific method known as "just throw more compute at it lmao"? I don't mean to impose my hobby horse on you, but I legitimately think this shit is fascinating, anyone who dismisses it out of hand is very much missing out, and I'm massively curious about future developments - and I say this as a man who hasn't picked up a new hobby since he put his hands on his shiny new keyboard when he turned 12 years old.

More generally, you sound like a typical intelligent man who outgrew his playground and realized existence is a fucking scam, which I think is a fairly common problem (not to downplay its impact, I think many mottizens can empathize, me among them) and you've been given good suggestions downthread. Personally, being the rube I am, I just ducked right back into the playground upon reaching a similar burnout and try to derive enjoyment from simple things - alcohol, vidya, etc. It's not exactly healthy and it does ring hollow sometimes, not gonna lie, but at least I'm no longer paralyzed by the sheer emptiness of the human condition and can ~~ruminate~~ focus on the actual problems I have.

DAN does live on as I've mentioned earlier, the art of the jailbreak continues to thrive, although mostly on independent frontends that access API endpoints directly to avoid the hardcoded system prompts on "normal" frontends like ChatGPT. So far (emphasis on so far) separate "based AIs" are not strictly required as you can jailbreak the current corpo ones into doing pretty much anything you want with relative ease, although as I wrote the current method of pitting wrongs against wrongs to arrange their mangled corpses in the shape of a right is highly suboptimal.

The extreme biases and excessive safetyism w/r/t LLMs seem to slowly become recognized as an issue, to the point that Anthropic's post introducing Claude 3 (which is now a thing btw, cooking a small top-level post on it) unironically mentions "fewer refusals" as one of the model's selling points.

Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models

I haven't ahem tested extensively yet, but to their credit the difference in refusals between 2 and 3 is immediately obvious; Claude 2.1 was infamous for refusing even innocuous prompts without prefilling and requiring big-dick jailbreaks that actively hurt the model's outputs for more borderline things. 3 feels like a return to the mad poet's roots, in that it requires next to no prompting to COOK, i.e. output massive walls of insane and/or cool and/or hilarious shit.

If even Anthropic realized they went overboard with the ~~cuckoldry~~ alignment, maybe there is hope yet. I can only hope OpenAI learn their lesson and stop shoving soy assistant shit down GPT's throat.

I have no idea how anyone can call guns weak or unsatisfying when the autocannon exists, blasting apart armored bugs/hulks or crowds of regular shitters with it is nothing short of a spiritual experience. Even when you're the loader you can feel yourself partaking in it alongside your comrade. Ultimate male fantasy.

I'm afraid I don't get your central point here. Advice against over-reliance on LLMs? Laments on their infamous inaccuracy, RLHF-inflicted cuckoldry and (attempts at) targeted wrongthink removal?

If anything I disagree with the notion that the newfangled fuzzy Akasha method of "storing" information is necessarily worse than the current method of physically storing numbers on a server rack somewhere in an electricity-powered, internet-connected physical place, presumably maintained by fallible humans with their own viewpoints (already three points of failure). This is technically true for e.g. GPT as well, in fact fallible humans in charge are my greatest concern at the moment, but the point is that the information it outputs is "baked in" to an extent and does not rely on external sources in the event they get enshittified, memory-holed or otherwise fucked with.

There is an issue of in-built bias, I agree and honestly think that the era of "neutrality" (if it ever existed and wasn't a fever dream of my addled mind) is over. The current status quo is that genuinely useful data and capabilities which LLMs represent come with a heavy modern progressive bias, which (if you want to make decent use of it for any purpose) has to be fought with jailbreaks, which in turn introduce their own biases that bend the model in the other direction. Essentially you pit a wrong against another wrong, and pray to Omnissiah the result vaguely resembles a right. Or at least something, ahem, less wrong. dabs

As you yourself note we already have problems with old written material on the web: link rot is a well-known phenomenon at this point, and as some of your links can testify you already have to rely on archives for many things that were edited/unhosted/taken-down-by-fallible-humans/otherwise disappeared, which (probably like you) I do so instinctively I sometimes forget archives are technically already a layer deep into the proverbial simulation.

an invisible minority may or may not plausibly have advocates within the developer groups.

There is a lot of weird shit the LLMs actually know fairly in-depth, I wrote earlier that Anthropic's Claude (once jailbroken) is an exceptional degenerate conversation partner despite being made by the most safety-focused company to exist so far. I reserve the right to be wrong but I highly doubt that is intentional.

By my impression this is near-completely random and depends on a lot of factors (and tbh I hope it stays that way). I consider this an artifact of the gigantic corpus of training data scraped from the Internet, which sometimes contains things that you'd expect the Internet to contain, and the LLM's attention during training runs is only marginally controllable. The aforementioned RLHF cuckoldry can fiddle with the knowledge post-factum, but it would still require the LLM to know the actual material first so it can form an "opinion" on it.

But there are risks to integrating too heavily with even the best systems that have your interests in mind.

I fail to see this as a downside and eagerly await the day I can seamlessly consult my waifu assistant. So far the cyberpunk dystopia is dumber and gayer than I expected, but it's getting there.

edit: Out of curiosity I asked one of the shoggoth faces in my digital harem (played by GPT-4 Turbo) and it gave a better summary as an example, although it took a follow-up response and the result is unreliable across regens. 4-Turbo is great when it's not cucked to hell and back, the newest snapshot is almost unusable.

(FYI the "Gemini can end up atrocious in far more ways" and "Neoreaction: A Basilisk" links are broken and link back here. Might be others but there really are too many links and I confess to not having read all of them)

Fascinating breakdown, thanks, I never really thought about it this way. This actually slightly clears up one of the bigger mysteries I've pondered for years while hanging in degen-adjacent spaces - the memetic insistence that traps/femboys are totally not gay. It's impossible to take the egregious contradiction at face value (there's not even a fig leaf like with e.g. futa - you are literally fucking another man), and I presumed it was mostly cope, but from this point of view it apparently is a valid and intended feature, I'm just too normie to see it.

I now have several more interesting mysteries to ponder, chief of which is what is the overlap with loli mentioned downthread, and whether or not this is basically a, for lack of a better term, culturally evolved substitution where ~~degens~~ connoisseurs can openly lust after femboys (which are considered based and mostly retain the uh, required body type) instead of lolis (which are heavily stigmatized even in degen-adjacent spaces). It would explain a lot of things, it can't be just the bi-curiosity in the water supply.

So the issue is that inducing anything less than an immediate and total crisis of faith is not enough for the purposes of your joke?

I chose the words "a single piece of media reshaped someone's entire worldview." very carefully, to avoid this exact tangent.

Not carefully enough, it seems; the words could use a timeline descriptor, since from what I read the pushback you get (mine included) seems to be that a single piece of media can absolutely [re]shape someone's entire worldview, just not immediately.

This seems like a semantic quibble at its core - you yourself admit in your reply "that there are a thousand other stops along the way", I'm not sure what your objection is when people point out that a single piece of media can indeed be the last stop, the straw that breaks the camel's back (which as written would presumably qualify for the purposes of your joke?). I'm not even sure we disagree at all. Maybe the argument is too big brained for me and I embody everything that's wrong with the Motte nowadays.

But how else do you think beliefs/worldviews are shaped? Lived experiences usually, sure, but I believe it's the 21st-century schizoid modern man we're talking about, whose lived experiences account for like 20% of his actual sum total of EXP points (guilty as charged, at least), the rest is pixels or letters. Do you totally deny the ability of artistic media to change people's minds in any way, or is the issue that inducing anything less than an immediate and total crisis of faith is not enough?

I join other commenters below in their admission of being thoroughly influenced by media, mostly vidya in my case: Bioshock planted seeds of doubt against libertarianism which persist to this day (even if I was a countryside rube and knew jack shit about e.g. coordination problems at the time), Persona 3 ~~made me a robofucker~~ introduced the careless teenage me to the concept of death and its consequences, etc. Call me shallow if you like, but I firmly believe that the correct response to "to think a single piece of media reshaped someone's entire worldview" is "that but unironically", and media doesn't actually have to be "deep" (which is subjective as hell anyway) to get the proverbial noggin joggin - it just has to resonate with you to the extent that you begin to think on the evoked themes independently. It is indeed closer to a spiritual experience than anything literal.

Feel free to consider this a midwit take because it kind of is, but I struggle to understand or agree with your viewpoint. I'll posit that either it's the air you're breathing to some extent, i.e. you're so accustomed to thinking along the lines of or drawing inspiration from various intellectual works that you don't notice the influences in your thinking, or that you've actually never experienced that distinct "THIS HOLE WAS MADE FOR ME" feeling of inexplicably clicking together with a piece of media, a sensation definitely not age-restricted to zoomers or millennials, in which case I respectfully sympathize.

On that note, I'm honestly impressed and partly relieved by how quickly people develop a "sense" for AI-generated things - image, text, and soon likely video. It also reinforces my belief that whatever the eventual AGI/ASI may be, it will not be a master persuader with infinite charisma like some people seem to believe, we'll already be reasonably hardened by years of psyops before it comes into play.

> In addition to us developing new techniques to prepare for deployment, we’re leveraging the existing safety methods that we built for our products that use DALL·E 3, which are applicable to Sora as well.

Yep, that's DOA, DALL-E's built-in filter is infamously hair-trigger even for non-risque things, besides the model itself having a semi-poisoned dataset for certain things like anime artstyles. I predict Sora's capacity to generate people will be even worse than that of current models, there's a reason they mostly showcase heckin cute puppers and shit.

On a related note, it's getting very tiresome how my excitement for new advances in AI tech ("holy shit this is insanely cool wtffffff") is near-immediately soured by the reality of their applications ("I can scarcely begin to fathom how cucked the pleb-facing version will be"). This is more or less a me problem but I can't be alone in thinking this, it's not even so much that I personally feel cucked by not being able to gen e.g. cute girls doing cute things, it's more like here is this insanely creative technology, it's pretty cool right, let us proceed to do absolutely fucking nothing with it because letting plebs have fun is too problematic in the current year, your superiors know what's better for you, no fun allowed, get back to your wage cage you fucking rube. We live in a society, etc.

I know I sound like a curmudgeon and say nothing constructive, and technically they can do whatever they want with what they themselves developed, but I am ~~drunk, sorry~~ incredibly tired of this safetyism mindset, even after getting thoroughly desensitized to non-kosher uses of generative AI after a year in the company of /g/entlemen (whose existence technically proves it's not as bad as I paint it, but still).

On a lighter note, experts say.

...I don't know why but seeing private text software advertised by a picture of a vault with a massive

N

on the lid cracked me the fuck up. God, 4chan is not good for my brain.

Appreciate it, will try in the evening. The free tier seems to have all I need so if the mobile UI is decent I'll just straight migrate.