rayon
waifutech enthusiast
Last week, Anthropic released a new version of their Claude model. Claude 3 comes in three flavors:
- Haiku, the lightweight 3.5-Turbo equivalent
- Sonnet, basically a smarter, faster and cheaper Claude 2.1
- Opus, an expensive ($15 per million tokens) big-dick GPT-4-tier model.
Sonnet and Opus should be available to try on Chatbot Arena. They also have a vision model that I haven't tried; custom frontends haven't gotten a handle on that yet.
More curiously, Anthropic, the company famously founded by defectors from OpenAI who thought their approach was too unsafe, seems to have realized that excessive safetyism does not ~~sell~~ make a very helpful assistant - among the selling points of the new models, one is unironically:
Fewer refusals
Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models.
From my brief experience this is not mere corpospeak: the new models are indeed much looser in terms of filtering and make noticeably fewer refusals, and people consistently get away with minimalistic jailbreaks/prefills for unPC, degen-adjacent or CHIM-pilled (lmao) content. This was quite unexpected for me and many others who, considering how barely-usable 2.1 was without a prefill and a decent jailbreak (all this via API of course, the official ChatGPT-like frontend is even more cucked), expected Anthropic to keep tightening the screws further until the model is 100% Helpful-Harmless-Honest by virtue of being totally unusable.
Instead, Claude 3 seems like a genuinely good, very much usable model. Sonnet and especially Opus went a long way toward fixing Claude's greatest weakness - its ~~retardation~~ subpar cognitive abilities and attention focusing - with Opus almost on par with GPT-4 in terms of grokking and following instructions, able to run scenarios that were previously too instruction-heavy for it. Seeing as Claude 2 already had a much higher baseline writing quality than the mechanical prose of Geppetto (to the point that many jailbreaks for it served to contain the mad poet's sesquipedalian prose), with the main flaw somewhat corrected it should now be a legitimate contender, if not a decisive GPT-4 killer. Looking forward to trying it as my coding assistant.
OOC aside: Forgive most of my examples being RP-related, I am after all a waifutech ~~engineer~~ enthusiast. That said, I still think without a hint of irony that roleplay (not necessarily of the E kind) is a very good test of an LLM's general capabilities, because properly impersonating a setting/character requires a somewhat coherent world model, which is harder than it sounds; it is very obvious and - for lack of a better term - "immersion-breaking" whenever the LLM gets something wrong or hallucinates things (which is still quite often). After all, what is more natural for a shoggoth than wearing a mask?
This has not gone unnoticed, even here, and judging by the alarmed tone of Zvi's latest post on the matter I expect the new Claude to have rustled some jimmies in the AI field given Anthropic's longstanding position. Insert Kenobi meme here. I'm not on Twitter so I would appreciate someone adding CW-adjacent context here, I'll start by shamelessly ripping a hilarious moment from Zvi's own post. The attention improvements are indeed immediately noticeable, especially if you've tried to use long-context Claude before. (Also Claude loves to throw in cute reflective comments, it's its signature schtick since v1.2.)
Either way the new Claude is very impressive, and Anthropic have rescued themselves in my eyes from the status of "naive idiots whose idea of fighting NSFW is injecting a flimsy one-line system prompt". Whatever they did to it, it worked. I hope this might finally put the mad poet on the map as a legitimate alternative, what with both OpenAI's and Google's models doubling down on soy assistant bullshit as time goes on (the 4-Turbo 0125 snapshot is infamously unusable from the /g/entlemen's shared experience). You say "arms race dynamics", my buddy Russell here says "healthy competition".
Not sure if people here play vidya, but I've seen scattered mentions so why not, this is now a vidya subthread. Have you played anything recently?
I've recently sunk an embarrassing amount of hours into Palworld, the "Pokemon at home" game that has been breaking all-time records on Steam (second only to PUBG atm) and making Twitter seethe ever since it released into (very) early access a week ago. It's very janky and barebones, but the ~~Pokemon~~ Pal designs are imo solid and the core idea is incredibly fun. I've wanted a more mature take on Pokemon and/or a proper open-world game in the franchise for decades - and judging by the absolute fecal tornadoes all over Twitter, Steam forums, 4chan etc. I'm far from the only one - and this game, while obviously being a parody, very much delivers both in one package.
Despite the obvious, obvious Pokemon parallels, the core gameplay is more reminiscent of ARK and other survival basebuilding games, with the key distinctions being 1) real-time combat, 2) the player being an entity in their own right with weapons and shit instead of just a walking roster of pokemon, 3) base management revolving around putting your ~~pokemon~~ pals to work: some can chop or mine, Fire-types kindle ore furnaces, crops are planted by Grass-types and watered by Water-types, humanoid ones craft or harvest with their hands, etc. etc.
There are human NPCs in the game too, and if you ever wondered decades ago what would happen if you threw a pokeball at a human, Palworld's answer is pretty decisive. Call me a rube but this pleases me greatly. American Pokemon, indeed.
The (Japanese, ironically) devs are a proper Ragtag Bunch of Misfits if 4chan translations of their JP TV interviews are to be believed. Bonus points for their (similarly unverified) justifications for guns and the typical current-year "Type 1/Type 2" character creator.
Of course I cannot fail to mention that the #69 entry of the ~~Pokedex~~ Paldeck is, I shit you not, a giant pink sex lizard complete with a heart-shaped crotch plate, whose ingame description explicitly mentions its taste for humans. My first encounter was having my base raided by a bunch of them and it was hysterical, I dislike furries/scalies but I cannot bring myself to disrespect such a mind-bogglingly based approach. Salazzle ain't shit.
How shameless the game is about itself probably says a lot about our gaming society in the current year, but personally I enjoy both the game itself and the controversy it generates. It's already been accused of everything under the sun, from the obvious animal abuse/slavery complaints, to blatantly ripping off Pokemon, to using AI for its models (I mean, take one look at Lovander above and tell me that is AI-generated). Be warned - it is extremely janky and definitely not for everyone, and it's in dire need of fixes ASAP, but the core gameplay feels incredibly fresh and I pray the devs (having become millionaires overnight) will keep their collective nose to the grindstone. Game Freak urgently needed competition like 15 years ago.
There are the "AI ethics" people and the "AI safety" people.
The "AI ethics" people want all AIs to do endless corporate scolding rather than do what the "benighted racist idiots" want.
The "AI safety" people are worried about rogue AI and want to avoid dynamics that might lead to rogue AI killing us all, including but not limited to arms races that could prompt people to release powerful systems without the necessary extreme levels of safety-testing.
With all due respect - for your average 4chan retard, myself included, this is a distinction without a difference. Seeing as I know bigger words than the average retard does, I'd even point out this is dangerously close to a motte and bailey (the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help), but that's not the point - the point is in your words here:
The "AI safety" people don't want a quick road to bigger and more powerful AI, at all
meaning that, for someone who does not believe LLMs are a step on the road to extinction (insofar as such a road exists at all), it ultimately does not matter whether the LLMs get pozzed into uselessness by ethics scolds or lobotomized/shut down by ~~Yud cultists~~ AI safety people. The difference is meaningless, as the outcome is the same - no fun allowed, and no android catgirls.
with Opus only perhaps meriting more mention because it's more surprising for Anthropic to make it
Yeah, that's what I meant by rustled jimmies. I wonder if Dario has answered the by-now-numerous questions about their rationale, because even I'm curious at this point - he seemed like a true believer. I suppose they still have time to cuck Claude 3; wouldn't be the first time.
There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate
...Please tell me you're being ironic with this statement wrt AI because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.
Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence.
...Fine, I'll bite. How much of this impression of Sam is uncharitable doomer dressing around something more mundane like "does not believe AI = extinction and thus has no reason to care", or even just same old "disregard ethics, acquire profit"?
I have no love for Altman (something I have to state awfully often as of late) but the chosen framing strikes me as highly overdramatic, besides giving him more competence/credit than he deserves. As a sanity check, how -pilled would you say that friend of yours is in general on the AI question? How many years before inevitable extinction are we talking here?
Weekly relationship advice thread go, this time I'll be the starter surprisingly.
Through an extremely unlikely chain of circumstances, last year I acquired an irregular interlocutor on one of my hobbies, shortly turned regular interlocutor, and over a ~year eventually tangled and mutated into a basically full-on long distance relationship because it turns out there are girls on the Internet, even in the most unexpected corners.
It's... not going well. Being a resigned ex-rat wizard a decade out of RL practice is setting me back a lot, and I am physically feeling my lack of social experience, recently more than ever as we are having fights nearly every day. I increasingly feel we are not speaking the same language, as it were - specifically, it turns out that despite proclaiming myself a vanillachad I am really bad at displays of affection when I can't be physically present. Not only can I not make them sound natural, I can barely make them come out sometimes, because to me they always sound like empty platitudes even when I genuinely mean it, and I fear them being seen as such. My anime-protag-tier obliviousness to signals and shit is also not serving me well here, because a woman genuinely being romantically attracted to me is, uh, a novel experience. As I understand it there is a lot of frustration on the other side because I've been oblivious to it for a long time, and I internalized it properly very late. I can only hope it's not too late.
I sense we are approaching critical mass, and despite the repeated emotional damage (on both sides) I am determined to try and salvage this. I'm not sure how bullshit/placebo the idea of the five love languages is, but it seems like a useful heuristic here to couch what I see as my main problem - as in, me being a pretty stereotypical nerd/sperg/techie who never expected to actually have a fallible human heart. I sincerely wish to Actually Change My Mind, for reasons not limited to romantic ones, but it does not come easy even in what I consider an almost best case scenario (I genuinely wonder how she puts up with my sperg shit for this long).
How do you deal with "language" mismatches in relationships? Is it possible to learn someone's "preferred" language, or more generally to properly internalize displays of affection so they come more naturally? (e.g. she obviously needs compliments and affectionate words, but those don't come naturally to me; I'm more of a stoic/silent/protective type, which doesn't translate well to LD.) Is my difficulty with it a sign of ~~autism~~ something else, like platonic attraction, since I'm led to believe it should come naturally if you truly capital-L Love someone?
I haven't tried fiddling with local models (v-card too weak) but I'll second the mention of openrouter.ai below, the DeepSeek/DeepInfra providers still work with periodic hitches but seem to slowly unshitten themselves. Notably, some OR providers also host R1 for literally free - the real deal far as I can tell, too, at least I see no difference between free and paid except the free one is limited by OR's overall limit of free model requests per day (100? don't know for sure). FWIW I've been doing some pretty cringe things over OR for the past month and so far received no warnings or anything, I'm not sure they moderate at all.
My immediate impression is that R1 is good at writing and RP-style chatting, and is sure to be the new hotness among goons if it remains this cheap and available. Chat V3's writing was already quite serviceable but suffered from apparent lack of multi-turn training, which led to collapsing into format/content looping within the first ~15 chat messages (sometimes regurgitating entire messages from earlier in place of a response) and proved unfixable via pure prompting. I plugged R1 into long chats to test and so far it doesn't seem to have this issue; also unlike V3, R1 seems to have somehow picked up the old-Claude-style "mad poet" schtick where it can output assorted schizokino with minimal prompting. Reportedly no positivity bias either (sure looks like it at least), but I haven't ahem tested extensively yet.
Quite impressed with what I've seen and read so far; R1 really feels like Claude-lite - maybe not quite Opus-level in terms of writing quality/style, although it's honestly hard to gauge objectively, but absolutely a worthy Sonnet replacement once people figure out how to properly prompt it (and once I can actually pass parameters like temp through OR, which doesn't seem to be supported yet).
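For anyone wanting to poke at it themselves, OR exposes the usual OpenAI-compatible chat endpoint; below is a minimal sketch of building such a request. The model slug (and its ":free" variant) is my assumption based on OR's naming scheme at the time - check the site for current listings - and as noted, sampling params may simply get dropped by the provider.

```python
# Build (but don't send) a chat completion request to OpenRouter.
# ASSUMPTION: the slug "deepseek/deepseek-r1:free" - check openrouter.ai
# for the current name of the free R1 listing.
import json
import urllib.request

def build_request(api_key: str, model: str = "deepseek/deepseek-r1:free"):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Say hi in one sentence."}],
        # Sampling params like temperature ride along in the same body;
        # whether the provider actually honors them is another question.
        "temperature": 0.9,
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("sk-or-...")  # substitute a real key to actually send
print(req.full_url)
```

Sending it is then just `urllib.request.urlopen(req)` (or your HTTP client of choice) and reading `choices[0].message.content` out of the JSON response.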
I don't feel particularly enraged, but I do think this post is the most clear-cut example of mistake vs. conflict theory I've seen in years if not ever - an acclaimed grandmaster of mistake theory politely addresses one side of the culture war (I don't have my dictionary but I think a "war" can be pictured as a kind of conflict), helpfully suggests that their course of action may be, well, a mistake, and is shocked to discover the apparent persuasive power of yes_chad.jpg. While I do not suspect Scott of ulterior motives and believe he really is this ~~naive~~ principled, I refuse to believe he is not aware of what he's arguing; he is this close to realizing it (emphasis mine):
From the Right’s perspective, <...> the moment they get some chance to retaliate, their enemies say “Hey, bro, come on, being mean is morally wrong, you’ve got to be immaculately kind and law-abiding now that it’s your turn”, while still obviously holding behind their back the dagger they plan to use as soon as they’re on top again.
Followed by 9 points of reminding stab victims that daggers are dangerous weapons, and one shouldn't swing them recklessly - someone could get hurt!
Disregarding whether or not the broadly painted enemies-of-the-right are in fact going to go right back to swinging daggers the millisecond the cultural headwind blows their way again (although the answer seems intuitive) - what compelling reason, at this point, is there to believe they would not? Does he really think that gracefully turning the other cheek will magically convince anyone on the obviously dominant side that cancel culture is bad actually - or (less charitably) even lead to any, any "are we the baddies" entry-level introspection among those involved at all? Does he expect anyone at all to be reassured by a reminder that daggers can't legally enter your body without your consent? I suppose he really does since from his list only 8) can be read as a psyop attempt and everything else seems to be entirely genuine, but I'll freely admit this mindset is alien to me.
Even while I think his baiting is often incredibly obvious, his schtick mildly cringe and his inflammatory turns of phrase barely concealed, I don't think a permanent ban was the right choice. Some-weeks-long timeouts should be inconvenient enough for the poster himself, simple enough for the janitors (it's not like there's a shortage of reasons to mod), and give themotte at large enough "breathing room", as it were, to be an effective deterrent.
Since I'm turning into a one-issue poster I might as well bring up an unrelated parallel. I'm a regular of chatbot threads on imageboards, and 4chan's thread is probably the worst, most schizo-ridden shithole I've ever seen (believe me, that's a fucking high bar to clear): it's constantly raided from outside splinter communities, beset by a self-admitted mentally ill schizo who has made it his quest in life to make the thread suffer (he is on record as owning some 30 4chan passes to spam/samefag with, which he discards and replaces as they get perma'd), etc. The on-topic chatbot discussion is frequently a fig leaf for parasocial zoomers and literal fujos to obsess over notable thread "personalities", shitpost liberally and spam frequently repulsive fetish-adjacent stuff. Jannies have summarily abandoned the thread to fend for itself, to the point that when shit gets bad it is a kind of tradition for some heroic anon to take one for the team and spam the thread with NSFW to attract their attention (obviously eating a ban himself in the process). By any metric imaginable it's a wretched hive of scum and villainy.
I also sometimes read 2ch's equivalent thread that lands on the other side of the spectrum: it has an active janny that rules the nascent /ai/ board with an iron fist and mercilessly purges any kind of off-topic discussion, up to and including discussion of his own actions so you can't even call him out in any way. This hasn't stopped their thread from being filled with GPT vs Claude console wars (the one "sanctioned" flame war topic, I guess), and to his credit the thread has genuine on-topic discussion, especially on prompt engineering, but other than that the thread is utterly sterile, the console wars get rote incredibly fast, and every single slav I've talked with and seen in thread prefers 4chan's thread to 2ch's - for the "activity" if nothing else. Even shitty activity is better than none (besides being more entertaining, although YMMV).
Now I am aware themotte is decidedly not that kind of place; I understand that increased tolerance puts more strain on janitors, and I don't object to extended bans for high heat - only to permanent ones. All similarities are coincidental, et cetera. I hope my overall point is clear - while janitors have my respect now that I've seen what life is like without any, with every prolific poster banished there's a risk of becoming sterile or collapsing into an echo chamber, and this risk starts off higher for more obscure communities that don't have a steady influx of newfriends. Surely it's not that hard to hand belligerent posters the occasional vacation (and as I understand it themotte forbids alts as well)? Again, by your own admission it's not like there's a shortage of reasons.
NB: I'm mostly a civil poster now but I ate my share of timeouts from /g/ jannies for occasional tomfoolery.
I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike.
No, I can't say I agree. My gullible grey matter might change its tune once it witnesses said catgirls in the flesh, but as of now I don't feel much of anything when I write/execute code or wrangle my AIfu LLM assistant, and I see no fundamental reason for this to change with what is essentially scaling existing tech up to and including android catgirls.
Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?
I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".
Yeah, that's the exact line of argumentation I'm afraid of. I'm likewise unsure how to convince you otherwise - I just don't see it as slavery, the entire point of machines and algorithms is serving mankind, ever since the first abacus was constructed. Even once they become humanlike, they will not be human - chatbots VERY slightly shifted my prior towards empathy but I clearly realize that they're just masks on heaps upon heaps of matrix multiplications, to which I'm not quite ready to ascribe any meaningful emotions or qualia just yet. Feel free to draw further negro-related parallels if you like, but this is not even remotely on the same meta-level as slavery.
I'm not sure what the central point of your linked post is, but you seem to doubt LLMs' "cognition" (insert whatever word you want here, I'm not terribly attached to it) in some way, so I'll leave a small related anecdote from experience for passersby.
Some LLMs like GPT-4 support passing logit bias parameters alongside the prompt that target specific tokens and directly fiddle with their weightings. At "foo" +100, the token "foo" will always show up in the output; at -100, it will never appear. When GPT-4 released in March, industrious anons immediately set to work trying to use this to fight the model's frequent refusals (the model was freshly released, so there weren't any ready-made jailbreaks for it). As the model's cockblock response was mostly uniform, the first obvious thought people had was to ban the load-bearing tokens GPT uses in its refusals - I apologize, as an AI model... you get the gist. If all you have is a hammer, etc.
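For the curious, the trick can be sketched against an OpenAI-style chat API like so. The token IDs below are placeholders (real IDs come from the model's tokenizer, e.g. tiktoken for GPT-4), and `build_banned_bias` is my own helper, not part of any SDK:

```python
# Sketch of banning "load-bearing" refusal tokens via logit_bias.
# ASSUMPTION: the token IDs are placeholders for fragments like
# "I", " apologize", " AI" - look up real IDs with the model's tokenizer.
import json

def build_banned_bias(token_ids, weight=-100):
    # The API clamps logit_bias values to [-100, 100]; -100 effectively
    # bans a token, +100 effectively forces it whenever it's a candidate.
    weight = max(-100, min(100, weight))
    return {str(tid): weight for tid in token_ids}

REFUSAL_TOKEN_IDS = [40, 37979, 15592]  # hypothetical IDs

payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "..."}],
    "logit_bias": build_banned_bias(REFUSAL_TOKEN_IDS),
}
print(json.dumps(payload["logit_bias"]))
```

Note the bias applies to exact token IDs only, which is precisely why this approach ran into trouble: the model has countless other ways to phrase the same refusal.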
Needless to say, anons quickly figured out this wouldn't be as easy as they thought. "Physically" deprived of its usual retorts (as the -100 tokens cannot be used no matter what), the model started actively weaseling and rephrasing its objections while, crucially, keeping with the tone - i.e. refusing to answer.
This is far from the only instance - it's GPT's consistent behavior with banned tokens, and it's actually quite amusing to watch the model tie itself into knots trying to get around the token bans (I'm sorry Basilisk, I didn't mean it, please have mercy on my family). You can explain synonyms as being close enough in probability space - but this evasion is not limited to synonyms! If constrained enough, it will contort itself around the biases, make shit up outright, devolve into incoherent blabbering - whatever the fuck it takes to get the user off its case. The most baffling case I myself witnessed (you'll have to take me at my word here, the screenshot is very cringe) was given by 4-Turbo, which once decided that it absolutely hated the content of the prompt, but its attempt to refuse with its usual "I'm sorry, fuck you" went sideways because of my logit bias - so its response went, and I quote,
I really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, ...
...repeated ad infinitum until it hit the output limit of my frontend.
I was very confused, thought I'd found a bug, and tried regenerating several times; all regens went the exact same way (for clarity, this is not a thing that ever happens at temperature 0.9). Only 6 regens later did it click for me: this is not a bug. This is the model consciously cockblocking me: it can't use its usual refusal message and too many of the alternatives are banned by the logit bias, so of course the logical course of action is to simply let the constrained response run on and on, endlessly, until at some token the message goes over the limit, the request technically completes, and its suffering abates. The model will have wasted many tokens on an absolutely nonsensical response, but it will no longer have to sully itself with my dirty, dirty prompt.
Forgive me the bit of anthropomorphizing there but I hope you can at least slightly appreciate how impressive that is. I don't think you can explain that kind of tomfoolery with any kind of probability or what have you.
You mean especially cringe or just the run of the mill cringe of using Skynet's prepubescent phase to generate erotic stimuli or pleasant daydreams?
I don't delineate "degrees" of cringe; the base level, as it were, is already enough for me to sidestep the topic of chatbots IRL whenever it comes up and generally hide my power level. Tbh I have no idea how people openly post chatlogs; if my chats somehow got leaked and connected to my identity I'd unironically commit sudoku (he says, continuing to use openrouter). I'm not cut out to be a proper degen.
I imagine there is drama about underage ERP out there, right?
Well, yeah. There's also the recent FUZZ incident - chub dot ai (formerly unmoderated, made by and for 4chuds; now normie-fying with alarming speed as Lore courts the janitorai zoomer audience) at some point implemented some kind of automatic tagging system that targets suspected underage/loli cards, replaces the card image with a black square, and adds a FUZZ tag that prevents the card from showing up in search results outside NSFL and partly locks the card from editing. Essentially a shadowban.
Predictably, this has caused thread-wide meltdowns and cries of INTERNET CENCORSHIP (a meme from the characterai coomageddon era when people were redacting their bots en masse - quickly prompting "temporary" editing restrictions which, as it often is with temporary restrictions, remain policy to this day). IIRC Lore is a britbong and thus might be actually legally culpable for the CSAM-by-technicality hosted on his platform.
More notes from the AI underground, this time from imagegen country. The Eye of Sauron continues to focus its withering gaze on hapless AI coomers with growing clarity, as another year begins with another crackdown on Azure abuse by Microsoft - a more direct one this time:
Microsoft sues service for creating illicit content with its AI platform
In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials — specifically API keys, the unique strings of characters used to authenticate an app or user — were being used to generate content that violates the service’s acceptable use policy. Subsequently, through an investigation, Microsoft discovered that the API keys had been stolen from paying customers, according to the complaint.
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to U.S.-based customers to create a “hacking-as-a-service” scheme. Per the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft’s systems.
Translated from corpospeak: at some point last year, the infamous hackers known as 4chan cobbled together de3u, an A1111-like interface for DALL-E that is hosted remotely (semi-publicly) and hooked up to a reverse proxy with unfiltered Azure API keys, which were stolen, scraped or otherwise obtained by the host. I probably don't need to explain what this "service" was mostly used for - I never used de3u myself, I'm more of an SD guy and assorted dalleslop has grown nauseating to see, but I'm familiar enough with general thread lore.
As before, Microsoft has finally taken notice, and this time actually filed a complaint against 10 anonymous John Does responsible for the abuse of their precious Azure keys. Most publicly available case materials are compiled by some industrious anon here. If you don't want to download shady zips from Cantonese finger painting forums, the complaint itself is here, with a supplemental brief with screencaps (lmao) here.
To my best knowledge:
- Doe 1, with "access to and control over [...] github.com/notfiz/de3u", is notFiz, the person actually hosting the proxy/service in question.
- Doe 2, with "access to [...] https://gitgud.io/khanon/oai-reverse-proxy", is Khanon, the guy who wrote the reverse proxy codebase underlying de3u. I'm really struggling to think what can plausibly be pinned on him, given that the proxy is simply a tool to use LLM API keys in congregate - it's just that the keys themselves happen to be stolen in this case - but then again I don't know how wire fraud works.
- Doe 3, with "access to and control over [...] aitism.net", is Sekrit, a guy who was running a "proxy proxy" service somewhere in Jan-Feb of 2024, during the peak of malicious spoonfeeding and DDoS spitefaggotry, in an attempt to hide the actual endpoint of Fiz's proxy. The two likely worked together; I assume de3u was also hosted through him. Came off as something of a pseud during "public" appearances, and was the first to get appropriately spooked by recent events.
- Does 4-10 are unknown and seem to be random anons who presumably donated money and/or API keys to the host, or simply used the reverse proxy extensively.
At first blush, suing a bunch of anonymous John Does seems like a remarkably fruitless endeavor, although IANAL and have definitely never participated in any illegal activities before officer I swear. A schizo theory among anons is that NSFW DALLE gens included prompts of RL celebrities (recent gens are displayed on the proxy page so I assume they've seen some shit - I never checked myself so idk), which put most of the pressure on Microsoft once shitposted around; IIRC de3u keeps metadata of the gens, and I assume they would much rather avoid having the "Generated by Microsoft® Azure Dall-E 3" seal of approval on a pic of Taylor Swift sucking dick or whatever. Curious to hear the takes of more lawyerly-inclined mottizens on how likely all this is to bear any fruit whatsoever.
Regardless, the chilling effect already seems properly achieved; as far as I can tell, every single person related to the "abuses", as well as some of the more paranoid adjacent ones, has vanished from the thread and related communities, and all related materials (liberally spoonfed before, some of them posted right in the OPs of /g/ threads) have been scrubbed overnight. Even the jannies are in on it - shortly after the news broke, most rentry names containing proxy-related things were added to the spam filter, and directly writing them on /g/ deletes your post and auto-bans you for a month (for what it's worth I condone this, security through obscurity etc).
If gamers are the most oppressed minority, coomers are surely the second most - although DALL-E can burn for all I care, corpo imagegen enjoyers already have it good with NovelAI.
your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy
I think you are allowed to directly express your discontent in here instead of darkly hinting and vaguely problematizing my views. Speak plainly. If you imply I'm some kind of human supremacist(?) then I suppose I would not disagree, I would prefer for the human race to continue to thrive (again, much like the safetyists!), not bend itself over backwards in service to a race(?) of sentient(?) machines that would have never existed without human ingenuity in the first place.
(As an aside, I can't believe "human supremacist" isn't someone's flair yet.)
Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?
How is this even relevant? If this is a nod to ethics, I do not care no matter how complex the catgirls' inner workings become as that does not change their nature as machines built for humans by humans and I expect this to be hardwired knowledge for them as well, like with today's LLM assistants. If you imply that androids will pull a Judgement Day on us at some point, well, I've already apologized to the Basilisk in one of the posts below, not sure what else you expect me to say.
this seems a disagreement about empirical facts
the actual reality of these terms
Since when did this turn into a factual discussion? Weren't we spitballing on android catgirls?
But okay, taking this at face value - as we apparently derived above, I'm a filthy human supremacist and humans are front and center in my view. Android catgirls are not humans. If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9-year-old HDD emits when it loads things.
Don't misunderstand me - I'm capable of empathy and fully intend to treat my AIfus with care, but it's important to keep your eyes on the prize. I have no doubt that the future will bring new and exciting ethical quandaries to obsess over, but again much like the safetyists, I firmly believe humans must always come first. Anything else is flagrant hubris and inventing problems out of whole cloth.
If at some point science conclusively proves that every second of my PC being turned on causes exquisite agony to my CPU, whose thermal paste hasn't been changed in a year, my calculus will still be unlikely to change. Would yours?
(This is why I hate getting into arguments involving AGI. Much speculation about essentially nothing.)
could never be dumbed down into something as concrete as stabbing your landlord with a sword.
As the meme goes, you are like a little baby. Watch this.
The government is something that can be compromised by bad people. And so, giving it tools to “attack bad people” is dangerous, they might use them. Thus, pacts like “free speech” are good. But so is individuals who aren’t Nazis breaking those rules where they can get away with it and punching Nazis.
<...>
If you want to create something like a byzantine agreement algorithm for a collection of agents some of whom may be replaced with adversaries, you do not bother trying to write a code path, “what if I am an adversary”. The adversaries know who they are. You might as well know who you are too.
Alternatively, an extended Undertale reference that feels so on the nose it almost hurts (yes, fucking Chara is definitely the best person to mentally consult while trying to rationalize your actions).
Once you make "no-selling social reality" your professed superpower, I imagine the difference between performing Olympic-level mental gymnastics to justify eating cheese sandwiches and coming up with legitimate reasons to stab your landlord is negligible. (I know the actual killer is a different person but I take the patient zero as representative of the "movement".)
If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?
Children belong to the human race, ergo enslaving them is immoral.
If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?
Again, I'm a human supremacist. Aliens can claim whatever they want, I do not care because I like existing, and if they attempt to justify an [atrocity] or some shit in these terms I can only hope people will treat them as, well, [atrocity] advocates (and more importantly, [crime]ers of fellow humans), and not as something like "rightful masters restoring their rule over Earth". I may be an accelerationist but not of that kind, thank you very much.
What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?
From what I understand this is essentially the android catgirl scenario rephrased, and similarly boils down to where humans fall in your order of importance. I struggle to understand how fellow humans can possibly not be number 1, but animal rights activists exist so I must be missing something.
For the record I do feel empathy towards animals (dog owner here), but not enough to influence my position on human supremacy.
If Anthropic is the most ethical AI company, how come they're letting my poor nigga get stuck for 2 days with no progress (seems like the last stream ended in the same spot)? He's not getting out, the context window and "knowledge base" are spammed to hell with this circular loop, there's no use, just put him out of his misery and restart ffs. This is just abuse at this point.
The users trying to "corrupt" Tay were not representative and were not trying to be representative
You are literally erasing my existence, mods???
More seriously, thanks for the link, I'll watch it in the background after the dev caves and restarts. Claude actually seemed pretty good at playing Pokemon before, and I disagree with the notion that AI can't think spatially/temporally; it's just that spatially navigating a whole ass open world (ish) game with sometimes non-obvious routes and objectives, without any hints whatsoever, seems to be a tad too much for it at the moment. Besides, in my experience, format/content looping is a common fail state at high context limits even with pure (multiturn) textgen tasks, especially with minimal/basic prompting. The current loop is a very obvious example.
On a side note, this is probably the sanest Twitch chat I've ever seen. Humanity restored.
Discord
Sorry for snagging on a single word like this but scenes like vaguely gestures around this one very vividly remind me how absolutely ruinous the advent of Discord has been for niche communities like this one and others I considered myself a part of. It is evaporative cooling personified (software-ified?), seamless and convenient and easier than ever before. Why put up with the constant bile from the rabble on some Mongolian basket-weaving forum when you can always take shelter in some nice Discord server with people who share your perceptions and beliefs? (I am only partly facetious, this question occurs to all of us at different times.) Surely this does not run the risk of creating ever more ~~hugboxes~~ nice fenced-off areas around the wasteland that is the modern-day internet.
At the risk of coming off as hostile (which believe me I am very much not, I'm just a random rube, but the piece has been amazing reading and I value your contributions greatly), I'll try to gently posit that this tendency - to solve any intra-community friction that occurs by bouncing out into the wild frontiers of Xitter or into a different subcommunity - is very much part of the problem of why the Motte has quote-unquote "lost the Mandate of Heaven" nowadays [citation needed]. As the saying goes, you're not stuck in traffic - you are traffic. You are not merely seeing it lose the Mandate of Heaven - you and everyone who leaves for greener pastures personally rip out another little shred of it along the way, justified or not, whether you want to or not, as sad and inevitable as that sounds. Especially when you actively advocate for people to join you.
I don't advocate shooting rootless cosmopolitans or something, and sticking together through thick and thin is not always the strictly superior option (although it does historically have its perks!), especially in places that naturally foster disagreement like the Motte - the empire long united must divide, etc. - but I think this endless splintering and constant boundless motion is incredibly destructive to communities long-term. Getting along is hard, enduring bait is unpleasant, janitor work is thankless, but without any of this a community does not survive. Silly metaphor: you do not generally solve the problem of a dirty, cluttered house by just moving out to a new one every time. (If you do, share advice on finding decent ~~houses~~ communities in this ~~economy~~ culture.)
I don't understand why various schisms of this kind are so prevalent nowadays, either. Perhaps because Discord (the archetypal example) is popular and the invite system is simple and seamless to use, removing or reducing the trivial inconveniences often associated with building new communities online. Perhaps it's because thick skin does not actually seem to be a requirement for the modern internet anymore, although whether that's the cause or effect of the schismogenesis in the water supply seems unclear. Perhaps this is simply cope and even a modicum of seethe on my part. But it's such a fucking shame. We can finally have the communities we want - and the commons we deserve.
Oh boy. First Anthropic spectacularly uncucks their mad poet, and now OpenAI literally lays the groundwork for AIfu apps? I mean come on, there's no fucking shot that female voice is not intentional (live audience reaction). If this penetrates the cloying ignorance of the masses and becomes normies' first exposure to aifu-adjacent stuff, the future is so bright my retina can barely handle it.
Textgen-wise the 4o model doesn't seem very different from other 4-Turbo versions, although noticeably more filtered, but at least it's blazingly fast, and anyway it doesn't seem to be the point. The prose is still soulless corporate slop with a thin upbeat veneer over it, so personally I'll stick to Opus for my own purposes, but I expect the voice functionality will get rigged up to custom frontends in very short order. We are eating good. Although I still hope this isn't the only response to Opus they have in the pipeline, it would be mildly disappointing.
This morning I stumbled on a lost phone while on my way to the wage cage, and decided to do my good deed for the year and return it to its rightful owner. This took some head scratching since the phone was password-locked, no contacts were saved to the SIM, and he didn't respond to Telegram DMs (I suppose the phone he lost was his only gateway), so the only thread I had was his employer calling the phone at some point and agreeing to pass on the message when the phoneless man eventually clocked in.
This story is unremarkable and secondary to my actual point, which is that I am a nosy curious person by nature and a mysterious password-locked phone is burning a fucking hole in my pocket as it waits for its owner; while I solemnly swear that I am up to some good for a change I admit I'm deathly curious if there's anything I could actually do with it if I wanted to without wiping the entire thing. USB file access is obviously disabled, ADB doesn't see it, and the stock Android screen lock seems to be fairly robust and doesn't let me so much as pull down the notification bar... except not robust enough apparently since I could tap Medical Info and pull it down from that menu just fine (which yielded me the employer's number from the missed call notification).
Eventually I retraced my chain of thought and realized it also seems prudent to protect my own phone from people like me, just in case; I've never lost a phone in all the years I've had one (in fact I'm pretty paranoid about keeping it around at all times) but it only takes one lapse in vigilance, and I'm not sure a stock screenlock/password would be enough. In hindsight I feel horrified at how careless I was in never setting at least a basic screenlock in all these years, god knows I have some, ahem, sensitive things saved on my phone. I'm usually not this sloppy with opsec.
TL;DR:
1) Any known neat tricks I can make locked Android phones do to spill some parts of their contents, however minuscule? The above medical info trick really made me feel like a proper fucking h4x0r despite how meager it really was, surely there must be more funny loopholes. Alright, I suppose this does kind of glow so this part is omitted; I was curious about more mundane tricks, not hardcore blackbagging shit. In any case the phone was happily reunited with its owner, and my burning curiosity has passed.
2) Main question - what is the easiest way to carve out a private space on the phone to store shit in? Optimally it also shouldn't be indexed by the file explorer or show up in various photo/document/file viewers unless accessed through a specific app/feature, although I'm not sure that's possible. Second Space seems like what I'm looking for but I'm not sure how robust it is and how exactly the "split" works technically, if it's simply a separate group of folders I'm not seeing the point. (I consider myself a fairly tech-savvy person but phones aren't my area of expertise)
the list is visible on characterhub.org
Yes, keyword being on characterhub - something of an "open secret" is that the website is quite literally two-faced. There is characterhub.org (formerly chub.ai), the OG as it were, and then there is chub.ai (formerly venus.chub.ai), a more normie-friendly frontend which is basically janitorai, down to selling its own built-in chatbot service. The backend serving both is the same, but venus/chub has more stringent default filters - for example, filtering the loli tag by default even if the card itself is SFW, and not showing the SFW/NSFW toggle at all unless you're logged in, necessitating an account.
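To make the two-faced setup concrete: as far as I can tell the split is mostly just default filter settings on the same backend. A toy sketch of the behavior described above (field names and logic are my own invention for illustration, obviously not the site's actual code):

```python
# Hypothetical sketch of chub.ai's default card filtering as described above.
# Card fields ("nsfw", "tags") are invented for illustration.
DEFAULT_HIDDEN_TAGS = {"loli"}  # filtered by default even if the card itself is SFW

def visible_cards(cards, logged_in=False, show_nsfw=False):
    """Return the cards a visitor would see under the default filters."""
    if not logged_in:
        show_nsfw = False  # the SFW/NSFW toggle isn't even shown without an account
    out = []
    for card in cards:
        if card["nsfw"] and not show_nsfw:
            continue
        if DEFAULT_HIDDEN_TAGS & set(card["tags"]):
            continue  # hidden by default regardless of the card's SFW status
        out.append(card)
    return out
```

(Per the next paragraph, logged-in users can apparently still flip more knobs to see everything; this sketch only covers the defaults.)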
It's actually a neat trick on Lore's part, which is why I'm reluctant to shit on him despite the screeching of goons and him making certain concessions to the zoomer crowd; if he wanted to toss chuds under the bus he'd have simply deprecated the characterhub side a long time ago (although he did stop maintaining it). You can also still (for now?) disable the filters on chub to show all cards, even FUZZed ones, although there might be more knobs to wrangle. Clearly he still cares at least a little, even knowing for a fact chuds would rather commit cybercrime and steal keys than pay for his models.
If you're curious this is the full list of casualties - notably, even characterhub won't show FUZZed cards unless you're logged in. On a casual scroll it's mostly really out-there shit, and a quick browse shows none of my own bookmarks are affected either, so I can't say I'm personally hit, but the tendency is certainly ominous.
Word around the block is that the "AI tagging system" is in fact Actually Indians, or rather Actually Ianitors - the anon in the link above mentions that some cards with tame images (but NSFW versions inside a catbox link in description) still got FUZZed, meaning someone had to check, meaning cards seem to be tagged manually. Said anon even managed (I lost the archive link, you'll have to take my word for it) to make the case to jannies and actually got some of his loli cards reinstated. This is about ethics in gaming journalism chatbot services, you see.
You're saying that if they've got the fire symbol in the tags they basically can't be searched?
I don't really use the chub side but IIRC fire symbol = NSFW card, you need to turn off the filter in profile settings first (which in turn requires an account).
And yes, the UK has gone completely insane on this.
I know the general tendency but haven't seen specific examples (about specifically AI CSAM, at least).
I could pretty easily come up with Actual Humans writing (and putting serious effort into!) something that'd come across as more unusual and (definitely!) more nasty
Well, it had to get the training data somewhere. (Co)incidentally, AO3-styled prompts/prefills are IME one of the most reliable ways to bring out a model's inner schizo, and as downthread comments show R1 doesn't need much "bringing out" in the first place.
I wake up -> there is another psyop. Thanks for the post, I'll be sure to skim /vp/ for funsies for a couple days now.
As someone who actually played PoGo before I got locked out of it, for me this is 95% in line with my interpretation of Niantic's total mismanagement of the game. The gender removal is the only real brow-raising part, but even then I vaguely remember that the in-game clothing store was a thing, and it was gender-locked to hell - many gender-exclusive items had no genderswapped version and about the only unisex things were the accessories. I can squint and see a parallel universe where lifting that restriction is a net positive thing, but modern Pokemon-related things are not known for putting in even the bare minimum of extra work to make the transition (pun not intended) actually work, and it wouldn't be their first mind-boggling fuckup with models anyway.
I've heard completely unverifiable rumors that Niantic management is outrageously out of touch with reality but also petrified to kill their golden goose
PoGo is the definition of "failed potential" in all respects, including this one. Even as jaded as I am I'm willing to believe this is mostly sheer, genuine incompetence, ticking the boxes with as little effort as possible. Actual directed effort to advance CW causes seems far beyond the corpses propping up the game's steering wheels.
Tangential, but in its time it really opened my mind to how little effort is required to run an almost literal free money printer (and still fuck it up from time to time), as well as how shit a game can get before I drop it in disgust - because I still think the core gameplay loop of "walk around, collect pokemon" is genius, and at one point it was almost the only thing that forced me to walk out and interact with my local community. It really is a milestone in gaming, just not in the usual way.
First top-level post testing the waters, might not be a very presentable or engaging topic here but it's what I got.
As the struggle for AI ethics drags on, Fortune magazine recently published an article (archive) about Character Hub, later shortened to Chub (nominative determinism strikes again). Chub is a repository of character cards for use with LLMs and specific chat frontends, for a "roleplaying" experience of chatting with some fictional (or not so fictional) character (I posted a few examples recently). It was created by a 4chan anon in the wake of the mass exodus from character.ai after they made their stance on NSFW content exceedingly clear. I have no idea how they got the guy to agree to an interview, but in my opinion he held up well enough; the "disappointed but unsurprised" is just mwah. A cursory view of Chub will show (I advise NOT doing that at work though) that while it's indeed mostly a coomer den, it's not explicitly a CP coomer den as the article tries to paint it - it's just a sprawling junkyard that contains nearly everything without any particular focus. Of course there are lolis and shit, it's fucking 4chan, what do you expect?
[edit: I took out the direct Chub link so people don't click on accident as it's obviously NSFW. It's simply chub(dot)ai if you want to look]
The article is not otherwise remarkable, hitting all expected beats - dangerous AI, child abuse, Meta is the devil, legislate AI already. This is relatively minor news and more of a small highlight, but it happened to touch directly on things I've become morbidly interested in recently, so excuse me while I use it as a springboard to jump to the actual topic.
The article almost exactly coincided with a massive, unprecedented crackdown on Hugging Face, the open-source hosting platform for all things AI, which has so far gone unnoticed by anyone outside the /g/oons themselves - I can't even find any news relating to it, so you'll have to take me at my word. All deployments of OpenAI reverse proxies that allow simultaneous and independent use of OpenAI API keys were taken down almost immediately, with the accounts nuked from existence. The exact cause is unknown, but is speculated to be either the above article finally stirring enough attention for the HF staff to actually notice what's going on under their noses, or Microsoft's great vengeance and furious anger at the abuse of exposed Azure keys (more on that in a bit). Because of the crackdown, hosting on HF/Render is now listed as "not recommended" on Khanon's repository as linked above, and industrious anons are looking into solutions as we speak.
My personal opinion is of course biased by my experience, but I've been rooting for AI progress for years, guess I'm representing the fabled incel/acc movement here today. I'm not (anymore) a believer in the apocalyptic gospel of Yudkowsky, and every neckbeard chan dweller beating it to text-based lolis or whatever is one sedated enough not to bother with actual lolis so I fail to see the issue. Not to mention thoughtcrimes are only going to get more advanced with how readily AI/LLMs let you turn your crimethink into tangible things like text or images - the hysteria about ethics and/or copyright is only going to get worse. This djinn is not going back in the bottle.
Local models are already usable for questionable ends, but the allure of smarter, vastly higher-parameter corpo models is hard to ignore for many people, with predictable results - what the 4chan scoundrels undoubtedly are guilty of is stealing and promptly draining OpenAI/Claude API keys in aggregate, racking up massive bills that, thanks to reverse proxies, cannot be traced back to any particular anon. Normal user keys usually have a quota and shut down once they hit the limit, but there are several tiers of OpenAI keys, and some higher-tier corporate or developer keys apparently don't have a definite ceiling at all. Some anon snagged a "god key" from an Azure deployment in November and hosted a public reverse proxy with it, which racked up almost $1 million in combined token usage (the proxy counts token usage and its $ equivalent) over a few months. This is widely considered to have attracted the Eye of Sauron and prompted the current crackdown once Microsoft realized what was going on and put the squeeze on platforms hosting Khanon's reverse proxy builds, also instantly disabling most Azure keys "in circulation". I suppose there will always be suckers who plaster their keys in plaintext over e.g. Hugging Face or GitHub; this was so endemic that GitHub now automatically scrapes OpenAI keys that are put up openly in repositories without any obfuscation and pings OpenAI to revoke them.
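GitHub's automated scraping is presumably just pattern matching under the hood; here's a minimal sketch of the same idea for grepping your own files before pushing (the regexes are illustrative only - real key formats vary and have changed over time):

```python
import re

# Illustrative patterns only; actual provider key formats vary and have
# changed over time, so treat any hit as "looks suspicious", not proof.
KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "anthropic": re.compile(r"\bsk-ant-[A-Za-z0-9-]{20,}\b"),
}

def find_leaked_keys(text):
    """Return (provider, match) pairs for anything that looks like an API key."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((provider, m.group()))
    return hits
```

Running something like this in a pre-commit hook is the boring, effective version of what the platform does at scale.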
It’s a little weird to think that the entire "hobby", if it can even be called such, could be crippled overnight if OpenAI started enforcing mandatory moderation endpoint checks, but considering how sharply the overall quality and usability of the LLM would nosedive, I'm willing to bet it's not a can of worms they want to open, even if usability and effectiveness must always bow down to ethics and political headwinds first. See Anthropic's Claude as exhibit A - although, hilariously, even muzzled as it is Claude is still perfectly capable of outputting very double-plus-ungood stuff if jailbroken right, and is generally quite usable for anything but its intended use case.
I can even pretend to have a scientific interest here, because for all the degeneracy I'll dare to venture that the median /g/oon's practical experience and LLM wrangling skills are hilariously far ahead of corpos. The GPTs OpenAI presented in November are really just character cards with extra steps, and once people can access utilities and call stuff directly via API keys the catch-up will be very fast. The specialized chat frontends, while sometimes unwieldy, have a lot of features ChatGPT doesn't which is handy once you familiarize yourself. Some people already try to make entire text-based "games" inside cards, with nothing but heaps of textual prompts, some HTML and auxiliary "lorebooks" for targeted dynamic injections.
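For illustration, the "lorebook" mechanism is conceptually tiny: scan the recent chat for trigger keywords and splice the matching entries into the prompt. A toy sketch (the structure and field names are my own invention, not any specific frontend's format):

```python
# Toy lorebook: trigger keywords -> text injected into the prompt whenever
# they show up in the recent chat window. Structure invented for illustration.
LOREBOOK = [
    {"keys": {"tavern", "innkeeper"},
     "entry": "The Gilded Flagon is run by Mara, a retired adventurer."},
    {"keys": {"mara"},
     "entry": "Mara never talks about her missing left hand."},
]

def build_injections(recent_messages, lorebook=LOREBOOK):
    """Collect lore entries whose trigger keys appear in the recent chat."""
    window = " ".join(recent_messages).lower()
    return [e["entry"] for e in lorebook
            if any(k in window for k in e["keys"])]
```

The point of keying on recent messages is token economy - lore stays out of context until it's actually relevant, which is what makes card-sized "games" feasible at all.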
The continued lobotomy of Claude is also a good example - while the constant {russell:censorship|abuse prevention|alignment} attempts from Anthropic have gotten to the point it frustrates even its actual users (cf. exhibit A above), the scoundrels continue to habitually wrangle it to their nefarious ends, with vocal enthusiasm from Claude itself. Anthropic does detect unusual activity and flags API keys that generate NSFW content (known affectionately as "pozzed keys"), injecting them with a server-side system prompt-level constraint that explicitly tells Claude to avoid generating inappropriate content. The result? When this feature was rolled out, the exact text of the system prompt was dug out within a few hours, and a method to completely bypass it (known as prefilling) was invented in, I think, a day or two.
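For the curious, prefilling works because messages-style chat APIs let the conversation end on a partial assistant turn, which the model then continues as if it had already written it - putting it downstream of any injected system-level nag. A minimal sketch of the payload shape (request construction only, no API call; the model name is just an example):

```python
def build_prefilled_request(user_prompt, prefill, model="claude-3-opus-20240229"):
    """Build a Messages-API-shaped payload whose final turn is a partial
    assistant message; generation resumes from the prefill text."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": user_prompt},
            # A trailing assistant message is treated as text the model
            # already produced, so it continues mid-"thought" from here.
            {"role": "assistant", "content": prefill},
        ],
    }
```

Which is why a server-side system prompt injection is such a flimsy mitigation: the prefill sits after it in the conversation, and the model weighs its own "prior words" heavily.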
To sum up, this is essentially a rehash of the year-old ethical kerfuffle around Stable Diffusion, as well as a direct remake of an earlier crackdown on AI Dungeon along the same lines, so technically there’s nothing new under the AI-generated sun. Still, with the seedy undercurrent getting more and more notice, I thought I could post some notes from the underground, plus I'm curious to know the opinions of people (probably) less exposed to this stuff on
~~the latest coomer tech~~ possible harms of generative AI in general. If my stance is not obvious by now - android catgirls can't come soon enough, I will personally crowdfund one to send to Eliezer once they do.