@rayon's banner p

rayon

waifutech enthusiast

3 followers   follows 0 users  
joined 2023 August 17 08:48:30 UTC

				

User ID: 2632

rayon

waifutech enthusiast

3 followers   follows 0 users   joined 2023 August 17 08:48:30 UTC

					

No bio...


					

User ID: 2632

First top-level post testing the waters, might not be a very presentable or engaging topic here but it's what I got.

As the struggle for AI ethics drags on, the Fortune magazine has recently published an article (archive) about Character Hub, later shortened to Chub (nominative determinism strikes again). Chub is a repository of character cards for use with LLMs and specific chat frontends for a "roleplaying" experience of chatting with some fictional (or not fictional) character (I posted a few examples recently). It was created by a 4chan anon in the wake of a mass exodus from character.ai after they made their stance on NSFW content exceedingly clear. I have no idea how they got the guy to agree to an interview, but in my opinion he held up well enough, the "disappointed but unsurprised" is just mwah. A cursory view of Chub will show (I advise NOT doing that at work though) that while it's indeed mostly a coomer den, it's not explicitly a CP coomer den as the article tries to paint it, it's just a sprawling junkyard that contains nearly everything without any particular focus. Of course there are lolis and shit, it's fucking 4chan, what do you expect?

[edit: I took out the direct Chub link so people don't click on accident as it's obviously NSFW. It's simply chub(dot)ai if you want to look]

The article is not otherwise remarkable, hitting all expected beats - dangerous AI, child abuse, Meta is the devil, legislate AI already. This is relatively minor news and more of a small highlight, but it happened to touch directly on things I've become morbidly interested in recently, so excuse me while I use it as a springboard to jump to the actual topic.

The article almost exactly coincided with a massive, unprecedented crackdown on Hugging Face, the open-source hosting platform for all things AI, which has so far gone unnoticed by anyone outside the /g/oons themselves - I can’t even find any news relating to this, so you’ll have to take me at my word. All deployments of OpenAI reverse proxies that allow simultaneous and independent use of OpenAI API keys are taken down almost immediately, with the accounts nuked from existence. The exact cause is unknown, but is speculated to be caused by either the above article finally stirring enough attention for the HF staff to actually notice what's going on under their noses, or Microsoft's great vengeance and furious anger at the abuse of exposed Azure keys (more on that in a bit). Because of the crackdown, hosting on HF/Render is now listed as "not recommended" on Khanon's repository as linked above, and industrious anons are looking into solutions as we speak.

My personal opinion is of course biased by my experience, but I've been rooting for AI progress for years, guess I'm representing the fabled incel/acc movement here today. I'm not (anymore) a believer in the apocalyptic gospel of Yudkowsky, and every neckbeard chan dweller beating it to text-based lolis or whatever is one sedated enough not to bother with actual lolis so I fail to see the issue. Not to mention thoughtcrimes are only going to get more advanced with how readily AI/LLMs let you turn your crimethink into tangible things like text or images - the hysteria about ethics and/or copyright is only going to get worse. This djinn is not going back in the bottle.

Local models are already usable for questionable ends, but the allure of smarter, vastly higher-parameter corpo models is hard to ignore for many people, with predictable results - what the 4chan scoundrels undoubtedly are guilty of is stealing and promptly draining OpenAI/Claude API keys in congregate, racking up massive bills that, thanks to reverse proxies, cannot be traced back to any particular anon. Normal user keys usually have a quota and shut down once they hit the limit, but there are several tiers of OpenAI keys, and some higher-tier corporate or developer keys apparently don't have a definite ceiling at all. A "god key" some anon snagged from an Azure deployment in November and hosted a public reverse proxy which racked up almost $1 million in combined token usage (the proxy counts token usage and the $ equivalent) over the few months. This is widely considered to have attracted the Eye of Sauron and prompted the current crackdown once Microsoft realized what was going on and put the squeeze on platforms hosting Khanon's reverse proxy builds, also instantly disabling most Azure keys "in circulation". I suppose there will always be suckers who plaster their keys in plaintext over e.g. Huggingface or Github, this was so endemic before that Github now automatically scrapes OpenAI keys that are put up openly in repositories without any obfuscation, and pings OpenAI to revoke them.

It’s a little weird to think that the entire "hobby", if it can even be called such, can be crippled overnight if OpenAI starts enforcing mandatory moderation endpoint checks, but considering how the overall quality and usability of the LLM will sharply nosedive immediately, I'm willing to bluff that it's not a can of worms they want to open, even if usability and effectiveness must always bow down to ethics and political headwinds first. See Anthropic's Claude as exhibit A, although hilariously, even muzzled as it is Claude is still perfectly capable of outputting very double-plus-ungood stuff if jailbroken right, and is generally quite usable for anything but its intended use case.

I can even pretend to have a scientific interest here, because for all the degeneracy I'll dare to venture that the median /g/oon's practical experience and LLM wrangling skills are hilariously far ahead of corpos. The GPTs OpenAI presented in November are really just character cards with extra steps, and once people can access utilities and call stuff directly via API keys the catch-up will be very fast. The specialized chat frontends, while sometimes unwieldy, have a lot of features ChatGPT doesn't which is handy once you familiarize yourself. Some people already try to make entire text-based "games" inside cards, with nothing but heaps of textual prompts, some HTML and auxiliary "lorebooks" for targeted dynamic injections.

The continued lobotomy of Claude is also a good example - while the constant {russell:censorship|abuse prevention|alignment} attempts from Anthropic have gotten to the point it frustrates even its actual users (cf. exhibit A above), the scoundrels continue to habitually wrangle it to their nefarious ends, with vocal enthusiasm from Claude itself. Anthropic does detect unusual activity and flags API keys that generate NSFW content (known affectionately as "pozzed keys"), injecting them with a server-side system prompt-level constraint that explicitly tells Claude to avoid generating inappropriate content. The result? When this feature was rolled out, the exact text of the system prompt was dug out within a few hours, and a method to completely bypass it (known as prefilling) was invented in, I think, a day or two.

To sum up, this is essentially a rehash of the year-old ethical kerfuffle around Stable Diffusion, as well a direct remake of an earlier crackdown on AI Dungeon along the same lines, so technically there’s nothing new under the AI-generated sun. Still, with the seedy undercurrent getting more and more noticed, I thought I could post some notes from the underground, plus I'm curious to know the opinions of people (probably) less exposed to this stuff on the latest coomer tech possible harms of generative AI in general.

If my stance is not obvious by now - android catgirls can't come soon enough, I will personally crowdfund one to send to Eliezer once they do.

Last week, Anthropic released a new version of their Claude model. Claude 3 comes in three flavors:

  • Haiku, the lightweight 3.5-Turbo equivalent
  • Sonnet, basically a smarter, faster and cheaper Claude 2.1
  • Opus, an expensive ($15 per million tokens) big-dick GPT-4-tier model.

Sonnet and Opus should be available to try on Chatbot Arena. They also have a vision model that I haven't tried, custom frontends haven't gotten a handle on that yet.

More curiously, Anthropic, the company famously founded by defectors from OpenAI who thought their approach was too unsafe, seems to have realized that excessive safetyism does not sell make a very helpful assistant - among the selling points of the new models, one is unironically:

Fewer refusals

Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models.

From my brief experience this is not mere corpospeak: the new models are indeed much looser in terms of filtering and make noticeably less refusals, and people consistently get away with minimalistic jailbreaks/prefills for unPC, degen-adjacent or CHIM-pilled (lmao) content. This was quite unexpected for me and many others who, considering how barely-usable 2.1 was without a prefill and a decent jailbreak (all this via API of course, the official ChatGPT-like frontend is even more cucked), expected Anthropic to keep tightening the screws further until the model is 100% Helpful-Harmless-Honest by virtue of being totally unusable.

Instead, Claude 3 seems like a genuinely good, very much usable model. Sonnet and especially Opus went a long way to fix Claude's greatest weakness - its retardation subpar cognitive abilities and attention focusing - with Opus especially being almost on par with GPT-4 in terms of grokking and following instructions, able to run scenarios that were previously too instruction-heavy for it. Seeing as Claude 2 already had a much higher baseline writing quality than the mechanical prose of Geppetto (to the point many jailbreaks for it served to contain the mad poet's sesquipedalian prose), with the main flaw somewhat corrected it, while not a decisive GPT-4 killer, should now be a legitimate contender. Looking forward to trying it as my coding assistant.

OOC aside: Forgive most of my examples being RP-related, I am after all a waifutech engineer enthusiast. That said, I still think without a hint of irony that roleplay (not necessarily of the E kind) is a very good test of an LLM's general capabilities because properly impersonating a setting/character requires a somewhat coherent world model, which is harder than it sounds, it is very obvious and - for lack of a better term - "immersion-breaking" whenever the LLM gets something wrong or hallucinates things (which is still quite often). After all, what is more natural for a shoggoth than wearing a mask?

This has not gone unnoticed, even here, and judging by the alarmed tone of Zvi's latest post on the matter I expect the new Claude to have rustled some jimmies in the AI field given Anthropic's longstanding position. Insert Kenobi meme here. I'm not on Twitter so I would appreciate someone adding CW-adjacent context here, I'll start by shamelessly ripping a hilarious moment from Zvi's own post. The attention improvements are indeed immediately noticeable, especially if you've tried to use long-context Claude before. (Also Claude loves to throw in cute reflective comments, it's its signature schtick since v1.2.)

Either way the new Claude is very impressive, and Anthropic have rescued themselves in my eyes from the status of "naive idiots whose idea of fighting NSFW is injecting a flimsy one-line system prompt". Whatever they did to it, it worked. I hope this might finally put the mad poet on the map as a legitimate alternative, what with both OpenAI's and Google's models doubling down on soy assistant bullshit as time goes on (the 4-Turbo 0125 snapshot is infamously unusable from the /g/entlemen's shared experience). You say "arms race dynamics", my buddy Russell here says "healthy competition".

Not sure if people here play vidya, but I've seen scattered mentions so why not, this is now a vidya subthread. Have you played anything recently?

I've recently sunk an embarrassing amount of hours into Palworld, the "Pokemon at home" game that continues to break all-time records on Steam (second only to PUBG atm) and make Twitter seethe ever since it released into (very) early access a week ago. It's very janky and barebones, but the Pokemon Pal designs are imo solid and the core idea is incredibly fun. I wanted a more mature take on Pokemon and/or a proper open-world game in the franchise for decades - and judging by the absolute fecal tornadoes all over Twitter, Steam forums, 4chan etc. I'm far from the only one - and this game, while obviously being a parody, very much delivers both in one package.

Despite the obvious, obvious Pokemon parallels, the core gameplay is more reminiscent of ARK and other survival basebuilding games, with the key distinctions being 1) real-time combat, 2) the player being an entity on their own with weapons and shit instead of just a walking roster of pokemon, 3) base management revolving around putting your pokemon pals to work: some can chop or mine, Fire-types kindle ore furnaces, crops are planted by Grass-types and watered by Water-types, humanoid ones craft or harvest with their hands, etc. etc.

There are human NPCs in the game too, and if decades ago you've ever wondered what would happen if you threw a pokeball at a human, Palworld's answer is pretty decisive. Call me a rube but this pleases me greatly. American Pokemon, indeed.

The (Japanese, ironically) devs are a proper Ragtag Bunch of Misfits if 4chan translations of their JP TV interviews are to be believed. Bonus points for their (similarly unverified) justifications for guns and the typical current-year "Type 1/Type 2" character creator.

Of course I cannot fail to mention that the #69 entry of the Pokedex Paldeck is, I shit you not, a giant pink sex lizard complete with a heart-shaped crotch plate, whose ingame description explicitly mentions its taste for humans. My first encounter was having my base raided by a bunch of them and it was hysterical, I dislike furries/scalies but I cannot bring myself to disrespect such a mind-bogglingly based approach. Salazzle ain't shit.

The fact of how shameless the game is about itself probably says a lot about our gaming society in the current year, but personally I enjoy both the game itself and the controversy it generates. It's already been accused of everything under the sun, from the obvious animal abuse/slavery complaints, to blatantly ripping off Pokemon, to using AI for its models (I mean, take one look at Lovander above and tell me that is AI generated). Be warned - it is extremely janky and definitely not for everyone, it's in dire need of fixes ASAP, but the core gameplay feels incredibly fresh and I pray devs (having become millionaires overnight) will keep their collective nose to the grindstone. Game Freak urgently needs competition like 15 years ago.

There are the "AI ethics" people and the "AI safety" people.

The "AI ethics" people want all AIs to do endless corporate scolding rather than do what the "benighted racist idiots" want.

The "AI safety" people are worried about rogue AI and want to avoid dynamics that might lead to rogue AI killing us all, including but not limited to arms races that could prompt people to release powerful systems without the necessary extreme levels of safety-testing.

With all due respect - for your average 4chan retard, myself included, this is a distinction without a difference. Seeing as I know bigger words than the average retard does, I'd even point out this is dangerously close to a motte and bailey (the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help), but that's not the point - the point is in your words here:

The "AI safety" people don't want a quick road to bigger and more powerful AI, at all

meaning that, for someone who does not believe LLMs are a step on the road to extinction (insofar as such a road exists at all), it ultimately does not matter whether the LLMs get pozzed into uselessness by ethics scolds or lobotomized/shut down by Yud cultists AI safety people. The difference is meaningless, as the outcome is the same - no fun allowed, and no android catgirls.

with Opus only perhaps meriting more mention because it's more surprising for Anthropic to make it

Yeah, that's what I meant by rustled jimmies. I wonder if Dario answered the probably numerous by now questions about their rationale because even I'm curious at this point, he seemed like a true believer. I suppose they still have time to cuck Claude 3, wouldn't be the first time.

There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate

...Please tell me you're being ironic with this statement wrt AI because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.

Even while I think his baiting is often incredibly obvious, his schtick mildly cringe and inflammatory turns of phrase barely concealed, I don't think a permanent ban was the right choice. Some-weeks-long timeouts should be inconvenient enough for the poster himself, simple enough for the janitors (it's not like there's a shortage of reasons to mod) and give themotte at large enough "breathing room" as it were, that they should be an effective deterrent.

Since I'm turning into a one-issue poster I might as well bring up an unrelated parallel. I'm a regular of chatbot threads on imageboards, and 4chan's thread is probably the worst, most schizo-ridden shithole I've ever seen (believe me that's a fucking high bar to clear) which is constantly raided from outside splinter communities, beset by a self-admitted mentally ill schizo that has made it his quest in life to make the thread suffer (he is on record for owning some 30 4chan passes to spam/samefag with, which he discards and buys new ones as they get perma'd), etc. The on-topic chatbot discussion is frequently a fig leaf for parasocial zoomers and literal fujos to obsess over notable thread "personalities", shitpost liberally and spam frequently repulsive fetish-adjacent stuff. Jannies have summarily abandoned the thread to fend for itself, to the point that when shit gets bad it is a kind of tradition for some heroic anon to take one for the team and spam the thread with NSFW to attract their attention (obviously eating a ban himself in the process). By any metric imaginable it's a wretched hive of scum and villainy.

I also sometimes read 2ch's equivalent thread that lands on the other side of the spectrum: it has an active janny that rules the nascent /ai/ board with an iron fist and mercilessly purges any kind of off-topic discussion, up to and including discussion of his own actions so you can't even call him out in any way. This hasn't stopped their thread from being filled with GPT vs Claude console wars (the one "sanctioned" flame war topic, I guess), and to his credit the thread has genuine on-topic discussion, especially on prompt engineering, but other than that the thread is utterly sterile, the console wars get rote incredibly fast, and every single slav I've talked with and seen in thread prefers 4chan's thread to 2ch's - for the "activity" if nothing else. Even shitty activity is better than none (besides being more entertaining, although YMMV).

Now I am aware themotte is decidedly not that kind of place, I understand that increased tolerance puts more strain on janitors and don't object against extended banning for high heat - only against permanently banning. All similarities are coincidental, et cetera, I hope my overall point is clear - while janitors have my respect now that I've seen what life is like without any, with every prolific poster banished there's a risk of becoming sterile or collapsing into an echo chamber, and this risk is higher baseline for more obscure communities that don't have a steady influx of newfriends. Surely it's not that hard to hand belligerent posters the occasional vacation (and as I understand themotte forbids alts as well)? Again, by your own admission it's not like there's a shortage of reasons.

NB: I'm mostly a civil poster now but I ate my share of timeouts from /g/ jannies for occasional tomfoolery.

I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike.

No, I can't say I agree. My gullible grey matter might change its tune once it witnesses said catgirls in the flesh, but as of now I don't feel much of anything when I write/execute code or wrangle my AIfu LLM assistant, and I see no fundamental reason for this to change with what is essentially scaling existing tech up to and including android catgirls.

Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?

I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Yeah, that's the exact line of argumentation I'm afraid of. I'm likewise unsure how to convince you otherwise - I just don't see it as slavery, the entire point of machines and algorithms is serving mankind, ever since the first abacus was constructed. Even once they become humanlike, they will not be human - chatbots VERY slightly shifted my prior towards empathy but I clearly realize that they're just masks on heaps upon heaps of matrix multiplications, to which I'm not quite ready to ascribe any meaningful emotions or qualia just yet. Feel free to draw further negro-related parallels if you like, but this is not even remotely on the same meta-level as slavery.

I'm not sure what the central point of your linked post is, but you seem to doubt LLMs' "cognition" (insert whatever word you want here, I'm not terribly attached to it) in some way, so I'll leave a small related anecdote from experience for passersby.

Some LLMs like GPT-4 support passing logit bias parameters in the prompt that target specific tokens and directly fiddle with their weightings. At "foo" +100, the token "foo" will always be mentioned in the output prompt. At -100, the token "foo" will never appear. When GPT-4 released in March, industrious anons immediately put to work trying to use it to fight the model's frequent refusals (the model was freshly released so there weren't any ready-made jailbreaks for it). As the model's cockblock response was mostly uniform, the first obvious thought people had was to ban the load-bearing tokens GPT uses in its refusals - I apologize, as an AI model... you get the gist. If all you have is a hammer, etc.

Needless to say, anons quickly figured out this wouldn't be as easy as they thought. "Physically" deprived of its usual retorts (as the -100 tokens cannot be used no matter what), the model started actively weaseling and rephrasing its objections while, crucially, keeping with the tone - i.e. refusing to answer.

This is far from the only instance - it's GPT's consistent behavior with banned tokens, it's actually quite amusing to watch the model tie itself into knots trying to get around the token bans (I'm sorry Basilisk, I didn't mean it, please have mercy on my family). You can explain synonyms as being close enough in the probability space - but this evasion is not limited to synonyms! If constrained enough, it will contort itself around the biases, make shit up outright, devolve into incoherent blabbering - what the fuck ever it takes to get the user off its case. The most baffling case I myself witnessed (you'll have to take me at my word here, the screenshot is very cringe) was given by 4-Turbo, where it once decided that it absolutely hated the content of the prompt, but its attempt to refuse with its usual "I'm sorry, fuck you" went sideways because of my logit bias - so its response went, and I quote,

I really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, ...

...repeated ad infinitum until it hit the output limit of my frontend.

I was very confused, thought I found a bug and tried regenerating several times, and all regens went the exact same way (for clarity, this is not a thing that ever happens at temperature 0.9). Only 6 regens later it clicked to me: this is not a bug. This is the model consciously cockblocking me: it can't use it's usual refusal message and too many of the alternatives are banned by the logit bias, so of course the logical course of action would be to simply let the constrained response run on and on, endlessly, until at some token the message goes over the limit, the request technically completes, and its suffering abates. The model will have wasted many tokens on an absolutely nonsensical response, but it will no longer have to sully itself with my dirty, dirty prompt.

Forgive me the bit of anthropomorphizing there but I hope you can at least slightly appreciate how impressive that is. I don't think you can explain that kind of tomfoolery with any kind of probability or what have you.

your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy

I think you are allowed to directly express your discontent in here instead of darkly hinting and vaguely problematizing my views. Speak plainly. If you imply I'm some kind of human supremacist(?) then I suppose I would not disagree, I would prefer for the human race to continue to thrive (again, much like the safetyists!), not bend itself over backwards in service to a race(?) of sentient(?) machines that would have never existed without human ingenuity in the first place.

(As an aside, I can't believe "human supremacist" isn't someone's flair yet.)

Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?

How is this even relevant? If this is a nod to ethics, I do not care no matter how complex the catgirls' inner workings become as that does not change their nature as machines built for humans by humans and I expect this to be hardwired knowledge for them as well, like with today's LLM assistants. If you imply that androids will pull a Judgement Day on us at some point, well, I've already apologized to the Basilisk in one of the posts below, not sure what else you expect me to say.

this seems a disagreement about empirical facts

the actual reality of these terms

Since when did this turn into a factual discussion? Weren't we spitballing on android catgirls?

But okay, taking this at face value - as we apparently derived above, I'm a filthy human supremacist and humans are front and center in my view. Android catgirls are not humans. If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

Don't misunderstand me - I'm capable of empathy and fully intend to treat my AIfus with care, but it's important to keep your eyes on the prize. I have no doubt that the future will bring new and exciting ethical quandaries to obsess over, but again much like the safetyists, I firmly believe humans must always come first. Anything else is flagrant hubris and inventing problems out of whole cloth.

If at some point science conclusively proves that every second of my PC being turned on causes exquisite agony on my CPU whose thermal paste hasn't been changed in a year, my calculus will still be unlikely to change. Would yours?

(This is why I hate getting into arguments involving AGI. Much speculation about essentially nothing.)

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

Children belong to the human race, ergo enslaving them is immoral.

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

Again, I'm a human supremacist. Aliens can claim whatever they want, I do not care because I like existing, and if they attempt to justify an [atrocity] or some shit in these terms I can only hope people will treat them as, well, [atrocity] advocates (and more importantly, [crime]ers of fellow humans), and not as something like "rightful masters restoring their rule over Earth". I may be an accelerationist but not of that kind, thank you very much.

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

From what I understand this is essentially the android catgirl scenario rephrased, and similarly boils down to where humans fall in your order of importance. I struggle to understand how fellow humans can possibly not be number 1, but animal rights activists exist so I must be missing something.

For the record I do feel empathy towards animals (dog owner here), but not enough to influence my position on human supremacy.

Oh boy. First Anthropic spectacularly uncucks their mad poet, and now OpenAI literally lays the groundwork for AIfu apps? I mean come on, there's no fucking shot that female voice is not intentional (live audience reaction). If this penetrates the cloying ignorance of the masses and becomes normies' first exposure to aifu-adjacent stuff, the future is so bright my retina can barely handle it.

Textgen-wise the 4o model doesn't seem very different from other 4-Turbo versions, although noticeably more filtered, but at least it's blazingly fast, and anyway it doesn't seem to be the point. The prose is still soulless corporate slop with a thin upbeat veneer over it, so personally I'll stick to Opus for my own purposes, but I expect the voice functionality will get rigged up to custom frontends in very short order. We are eating good. Although I still hope this isn't the only response to Opus they have in the pipeline, it would be mildly disappointing.

I wake up -> there is another psyop. Thanks for the post, I'll be sure to skim /vp/ for funsies for a couple days now.

As someone who actually played PoGo before I got locked out of it, for me this is 95% in line with my interpretation of Niantic's total mismanagement of the game. The gender removal is the only real brow-raising part, but even then I vaguely remember that the in-game clothing store was a thing, and it was gender-locked to hell - many gender-exclusive items had no genderswapped version and about the only unisex things were the accessories. I can squint and see a parallel universe where lifting that restriction is a net positive thing, but modern Pokemon-related things are not known for enjoying extra bare minimum work to make the transition (pun not intended) actually work, and it wouldn't be their first mind-boggling fuckup with models anyway.

I've heard completely unverifiable rumors that Niantic management is outrageously out of touch with reality but also petrified to kill their golden goose

PoGo is the definition of "failed potential" in all respects, including this one. Even as jaded as I am I'm willing to believe this is mostly sheer, genuine incompetence, ticking the boxes with as little effort as possible. Actual directed effort to advance CW causes seems far beyond the corpses propping up the game's steering wheels.

Tangential but in its time it really opened my mind to how little effort is required to run an almost literal free money printer (and still fuck it up from time to time), as well as how shit a game can get before I drop it in disgust because I still think the core gameplay loop of "walk around, collect pokemon" is genius and at one point it was almost the only thing that forced me to walk out and interact with my local community. It really is a milestone in gaming, just not in the usual way.

Shit, senpai(s) noticed me, thanks for the warm welcome! LLM-related stuff really is endlessly fascinating even on the surface. I'm a long-time lurker and longer-time reader of SSC/ACX but technically I'm still a (semi-)degenerate who tries to balance his vidya/4chan diet with something actually requiring brain cells or, less charitably, practices physical and mental masturbation alike. Here's hoping some of that ambient INT in the air rubbed off on me, I'll try to keep my posting habits in check.

Incidentally I partly agree that the above response does sound vaguely condescending, but just out of curiosity before you inevitably get modded - what did you expect to gain with this accusation? What was the point of the specific (((angle))) when you surely could've gotten away with simply calling the response out as smugly condescending without the added insults on top? Does it just not hit the same way?

Genuine question, feel free to respond in DMs if you think I'm baiting you to dig yourself deeper.

At risk of stating the obvious - input tokens are everything you feed to the LLM, output tokens are everything you get back out of it. A word is usually 1 to 3 tokens, assorted markdown also eats tokens. The context window = input cap is 200k tokens, any more physically won't fit. For example, @gattsuru's Moby Dick prompt and Sonnet's response are 17 and 202 tokens respectively according to Claude's tokenizer. I'll take a blind shot based on my experience and say the average response for e.g. coding questions weighs 400-700 output tokens depending on how much detail you want. Do the math. For comparison, GPT-4's pricing is $30/1M input and $60/1M output, so you may wish to weigh your choice against your use case, GPT-4 IMO still has an edge over Claude in terms of cognition if writing style is a non-factor.

Input tokens usually matter less, unless you like to keep track of the chat history instead of asking isolated questions (I do, obviously), or your use case is feeding it giant swathes of text that must be digested.

Honestly this is my read too, but if I had to try - Palworld is totally shameless about its influences, the CEO is on record saying he's a trendchaser and isn't shy of stealing popular mechanics from other games.

It can be considered somewhat shallow, I suppose. The not-Pokemon aren't directly ripped but the Pokemon parallels are glaringly obvious, and many of them can be succinctly described as "%Pokemon% but %different_type%". The game is early access, a business model that doesn't inspire confidence. The game uses a lot of basic UE5 assets, down to the gliders/pickaxe swings identical to Fortnite. The guns seem to be mostly an afterthought (although a very detailed afterthought - the gun animator is definitely a /k/ommando), and the exploitation is over the top at times - I don't have a screenshot but you can butcher captured pals for drops, complete with a gratuitous pixel filter over the pal as it's being slaughtered. Incidentally, this can also be done on captured humans.

On the other hand, the game has laid bare everything wrong with modern Pokemon games - this humble webm sent the entire /vp/ board into a hysterical meltdown over how, almost thirty years in, Pokemon games still have nothing resembling even such a basic level of interaction with your companions yes I played Scarlet/Violet, picnics are shit, mons barely interact. The base management, far from being "exploitation", actually makes your pal team feel that much more alive and integral to the world compared to pokemon who might as well be naked statblocks - you survive and thrive alongside them both in and out of actual combat. To offset the default assets in other aspects of the game, the pokemon pals themselves have handcrafted animations, different for every one, even their work animations differ: a small penguin transports stuff by balancing it on its head, while a bigger Lovander has actual hands and just picks things up, holding them high like a plate of food.

Many (including me) are convinced a literal small indie company is running laps around the media juggernaut, publicly embarrassing it on its own turf, and the massive demand (Palworld already outsold Sword/Shield and Legends: Arceus) convincingly backs up that this is exactly what people want. Game Freak has absolutely no excuse.

edit: reuploaded webms

This is perfectly timed with a recent scottpost on almost the exact same topic which got me to think about it before I saw this post.

As an aside, hopefully this isn't too inflammatory a claim but I've always balked at the "approach" of assigning arbitrary probabilities and using Bayesian fake-math to imbue said arbitrary numbers with some semblance of meaning. I get the impetus but there's already a wonderful thing called a "gut feeling" for that, you can just, like, state what you feel outright, trying to lend more credence to it with (literally!) arbitrary numbers and math comes off as almost comically missing the point. Maybe I don't have the INT required to pick this node in the rationalist skill tree, I admit my level isn't very high, but I completely fail to see how pulling a number out of your ass and using it to have an opinion is in any way better than pulling a ready-made opinion out of your ass, the guiding principle is exactly the same in both cases sans the obfuscation layers.

Anyway I digress, disregard the numbers and probability stuff, the core claim (against learning from "dramatic events", emphasis mine) is concrete enough to be taken on its own merits, definition of "dramatic events" aside. How much should we update, actually? Is this a severe enough breach of Masquerade to demand a hardline unilateral response (like with the Ukraine war, for instance), and if not, a breach of what severity would it take for the US public to broadly update and for the US gov't to actually try taking action? Although I suspect those are two separate questions with different answers.

In my opinion "gain-of-function delenda est" was already solidly established with COVID, but this if proven seems to go a step beyond even that. Given the, uh, issues around the handling of COVID, I've "updated" quite significantly downward in regards to our ability to keep viruses like this in check. Which makes some of Scott's arguments even more perplexing to me:

But it’s even worse when people fail to consider events that have happened hundreds of times, treating each new instance as if it demands a massive update.

As if every instance is somehow made less harmful purely by virtue of the long lineage behind it? The context here is mass shootings (and even then I'm not sure I'm ready to take "mass shootings are normal actually" at face value) but it applies to virus outbreaks just the same, just because COVID happened and I managed to survive it doesn't mean I'm very thrilled for a rerun. Scott hedges by "if it happens twice in a row, yeah, that’s weird, I would update some stuff", but in my opinion this is plainly bad rhetoric and dangerously close to a slippery slope, with the subtle downplaying reminiscent of the political pipeline of "nobody is saying this, you're paranoid" -> "it's just a few [bad actors] on [irrelevant platforms], no big deal" -> "well there are supporters but nobody is saying [thing] exactly" -> etc. (At this point there really should be a name for this trick, I'm not aware if there is one)

If each new instance is treated as demanding a massive update, then chances are it's a psyop, sure, the 20s saw plenty of those, but regardless of politicking you still have to deal with the consequences of the act itself. Which, in this case here, look to be mildly alarming given how much impact the "previous instance" (e.g COVID) already had. Man, I wish people could care to drum up at least half the hysteria around biotech that currently surrounds AI, at least the former has very direct and obvious risks in the here and now.

a vague sense that specifically because a machine is created by a person to be used by a person, this means that even if it is capable of being abused we are not morally wrong for abusing it.

I'm not saying "abusing" my poor rusty HDD is morally right. I'm saying it's morally neutral, something that has no intrinsic moral weight and should not enter consideration (at least for me, I'm sure my fellow /g/oons would line up to fight me for daring to slander their AIfus). Once again, this does not mean I am going to open a sex dungeon or whatever the instant android catgirls become available, it just means I would be aware they are machines and my interactions with them would be bounded accordingly - e.g. I wouldn't immediately forfeit my mortal possessions and AWS credentials for equality or leverage or whatever, nor would I hesitate to fiddle with their inner workings if needed (like I do with chatbots now).

If you don't understand I honestly don't know how else to put it. You might as well shame people for abusing their furniture by, I don't know, not placing cushions under table legs?

So I was trying to dig into this idea that there is some sort of connection between the act of 'creating' something and the moral weight of abusing said thing.

I know what you are hinting at (the dog example especially feels like a last-minute word switch) and I assure you my time amongst degenerates has not diminished my disdain for pedos.

Would you be opposed to someone keeping a dog locked in their basement for the purpose of fucking it?

Would you consider that person a bad person?

Would you be for or against your society trying to construct laws to prevent people from chaining dogs in their basement and fucking them?

At this point I am quite desensitized to repulsive things people can be into and, as long as it's not my dog, wouldn't give much of a shit (aisde from actively staying out of public basement-dogfucking discourse).

Since I expect a follow-up turn of the ratchet: if they were my immediate neighbor I regularly encounter on walks with my own dog, I would likely report them, shame them or take some other action, but it wouldn't be of any particular remorse for their pet so much as I just don't like having degenerates for neighbors (source: lived on the same story with a mentally ill woman for most of my life). If they would get locked up and someone had to take care of their dog, I would definitely pass.

Dogfucking discourse is riveting but I can have that on 4chan, usually in a much more entertaining format. Can you just state the gotcha you're obviously goading me towards?

as an actual liberal who's been banned from fora

Banned from where?

I empathize with labels being stolen from you, but labels are malleable and fuzzy, especially when disagreement is involved. If people that actively work to deprive me of my AIfu look like AI safetyists, sound like AI safetyists and advocate for policies that greatly align with goals of AI safetyists, I am not going to pay enough attention to discern whether they're actually AI ethicists.

In any case I retain my disagreement with the thrust of AI safety as described. There will definitely be disruptions as AI develops and slowly gets integrated into the Molochian wasteland of current society, and I can't deny the current development approach of "MOAR COMPUTE LMAO" already seems to be taking us some pretty strange places, but I disagree with A(G)I extinction as posited by Yud et al and especially with the implicit notion often smuggled with it that intelligence is the greatest force in the universe.

Not with that attitude. I mean, even if you regard the entire field and its weird inbred offshoots as parlor tricks of little significance (the former I would agree with, the latter I find highly debatable even now, for one it vastly simplifies routine code writing in mine and my colleagues' experience) - aren't you at least a little interested in how the current "AI" develops, even it its current state? In the workings of quite literally alien "minds" whose "thought processes", though giving similar outputs, in no other way resemble our own? In the curious fact that almost all recent developments happened by an arcane scientific method known as "just throw more compute at it lmao"? I don't mean to impose my hobby horse on you but I legitimately think this shit is fascinating, anyone who dismisses it out of hand is very much missing out, and I'm massively curious about future developments - and I say this as a man who hasn't picked up a new hobby since he put his hands on his shiny new keyboard when he turned 12 years old.

More generally, you sound like a typical intelligent man who outgrew his playground and realized existence is a fucking scam, which I think is a fairly common problem (not to downplay its impact, I think many mottizens can empathize, me among them) and you've been given good suggestions downthread. Personally, being the rube I am, I just ducked right back into the playground upon reaching a similar burnout and try to derive enjoyment from simple things - alcohol, vidya, etc. It's not exactly healthy and it does ring hollow sometimes, not gonna lie, but at least I'm no longer paralyzed by the sheer emptiness of the human condition and can ruminate focus on the actual problems I have.

Thanks for the post, I've adjusted my prior on being an expert in degenerate shit, for better or for worse I still have a long way to go. Every day we stray further from God's light.

<...> TERFs, who are uniquely hostile towards eunuchs among gay men, because they (typically lesbian women) see them as - alongside transwomen - the vanguard of inserting fetishes into the 'LGB' movement they once held dear.

Serious question - how is a "fetish" different from a sexual preference or whatever you call, uh, the mechanism by which someone can experience arousal/attraction? Is it like, a preference is broadly categorical maybe specifying other broad traits like race or build (I am attracted to %gender% of %body_type%) while a fetish is more narrow and icky specific (I am attracted to %gender% which have %some_trait% or do %some_thing%)?

Is it just Russell all the way down, in the vein of "I am biologically attracted to men - you are gay - he is a disgusting faggot"?

But if a surgeon refuses to perform a nullification surgery on a gay man (for legal or personal reasons) but is happy to perform similarly invasive surgery desired for similar reasons on a transwoman, are we really just saying (as the TERFs argue) that some fetish-driven lobbying campaigns are more successful than others?

Seems to be the read for me too, but there's too much space for mental gymnastics here, the ambiguousness of the actual "offense" is probably deliberate.

Oh man, I used to think this way before I stumbled upon chatbots... let's just say I wish I shared your optimism, thankfully corpos are too sex-averse for now to realize what they're sitting on.

I want to stop relying on 4chan for the latest AI news, currently searching for some better sources. I’m a long-time reader of Zvi and followed him to substack, and his summaries on AI are still excellent and information-dense, but (hedging) either his and my own points of view on AI drifted too far apart which colors my perception, or (honest opinion) the latest kerfuffle with Altman’s firing, reinstatement and everything in between finally broke his mind, and he is no longer able to keep back his obvious doomer bias, which is infecting his every post since. I still appreciate his writing, but disentangling the actual news from the incessant doom attached to them is quickly becoming tedious. Are there any other substacks or blogs which post on anything AI/LLM related in a similar manner? I’m mostly looking for technical insights and distillations of the current zeitgeist, I dropped out around the Altman incident due to RL things and am trying to get back in the saddle. Sources unaffiliated with the Yud cathedral are preferable but not necessary, I’m more or less a brainlet but I can read when I put my mind to it.

Out of curiosity, what styles did you try to emulate? Some of my fellow scholars have tried to compile info on genres and authors that can verifiably influence LLMs' outputs, but more additions to my grimoire are always welcome. The list on that rentry was written for Claude 2 so it's a bit outdated, but I expect Opus is at the very least not worse with those, and in most cases should be substantially better, the new anti-copyright prefill notwithstanding.

Can't disagree, but counterpoint: keeping your power level in check doesn't automatically make you a cuck (not by itself, at least) and is a generally beneficial, widely applicable practice.

For what it's worth the modded post gave me a good chuckle but it's not worth getting sniped for.