rayon
waifutech enthusiast
No bio...
User ID: 2632
Why rely on random anonymous compilations?
I just linked the first source I saw in the wild, thanks, this is better.
So far looks like no defendants or lawyers for any of them have made an appearance.
Is that not the norm for anonymous wire fraud or whatever charge they're levying here? I'm near-certain none of the Does (none of the major ones, at least) live in the US.
this way the most likely outcome is Microsoft secures a default judgement against them.
I'm a rube unfamiliar with the American legal system - what do the results of that typically look like in ghost cases like this? Does Microsoft get their damages, if yes then whence?
Not uncensored per se, afaik it still required some prompting (as mentioned in the erstwhile rentry) but the keys commonly used definitely had laxer filtering, nowhere near the hair-trigger user-facing model where you get dogged for the dumbest things. I'm not sure a totally uncensored model exists, in the current climate it sounds like something that'd require nuclear plant-level security clearance. But yes, this is basically how keys work, the entire point is that you can call the model from any source (including a reverse proxy) as long you do it through a valid key with a valid prompt structure - which most frontends, image- or textgen, take care of under the hood.
More notes from the AI underground, this time from imagegen country. The Eye of Sauron continues to focus its withering gaze on hapless AI coomers with growing clarity, as another year begins with another crackdown on Azure abuse by Microsoft - a more direct one this time:
Microsoft sues service for creating illicit content with its AI platform
In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials — specifically API keys, the unique strings of characters used to authenticate an app or user — were being used to generate content that violates the service’s acceptable use policy. Subsequently, through an investigation, Microsoft discovered that the API keys had been stolen from paying customers, according to the complaint.
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to U.S.-based customers to create a “hacking-as-a-service” scheme. Per the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft’s systems.
Translated from corpospeak: at some point last year, the infamous hackers known as 4chan cobbled together de3u, a A1111-like interface for DALL-E that is hosted remotely (semi-publicly) and hooked up to a reverse proxy with unfiltered Azure API keys which were stolen, scraped or otherwise obtained by the host. I probably don't need to explain what this "service" was mostly used for - I never used de3u myself, I'm more of an SD guy and assorted dalleslop has grown nauseating to see, but I'm familiar enough with general thread lore.
As before, Microsoft has finally took notice, and this time actually filed a complaint against 10 anonymous John Does responsible for the abuse of their precious Azure keys. Most publicly available case materials compiled by some industrious anon here. If you don't want to download shady zips from Cantonese finger painting forums, complaint itself here, supplemental brief with screencaps (lmao) here.
To my best knowledge,
- Doe 1 with "access to and control over [...] github.com/notfiz/de3u" is
notFiz, the person actually hosting the proxy/service in question. - Doe 2 with "access to [...] https://gitgud.io/khanon/oai-reverse-proxy" is Khanon, the guy who wrote the reverse proxy codebase underlying de3u. I'm really struggling to think what can be plausibly pinned on him given that the proxy is simply a tool to use LLM API keys in congregate - it's just that the keys themselves happen to be stolen in this case - but then again I don't know how wire fraud works.
- Doe 3 with "access to and control over [...] aitism.net" is Sekrit, a guy who was running a "proxy proxy" service somewhere in Jan-Feb of 2024 during the peak of malicious spoonfeeding and DDoS spitefaggotry, in an attempt to hide the actual endpoint of Fiz's proxy. The two likely worked together since, I assume de3u was also hosted through him. Came off as something of a pseud during "public" appearances, and was the first to get appropriately spooked by recent events.
- Does 4-10 are unknown and seem to be random anons who presumably donated money and/or API keys to the host, or simply extensively used the reverse proxy.
At first blush, suing a bunch of anonymous John Does seems like a remarkably fruitless endeavor, although IANAL and have definitely never participated in any illegal activities before officer I swear. A schizo theory among anons is that NSFW DALLE gens included prompts of RL celebrities (recent gens are displayed on the proxy page so I assume they've seen some shit - I never checked myself so idk), which put most of the pressure on Microsoft once shitposted around; IIRC de3u keeps metadata of the gens, and I assume they would much rather avoid having the "Generated by Microsoft® Azure Dall-E 3" seal of approval on a pic of Taylor Swift sucking dick or whatever. Curious to hear the takes of more lawyerly-inclined mottizens on how likely all this is to bear any fruit whatsoever.
Regardless, the chilling effect already seems properly achieved; far as I can tell, every single person related to the "abuses", as well as some of the more paranoid adjacent ones, have vanished from the thread and related communities, and all related materials (liberally spoonfed before, some of them posted right in the OPs of /g/ threads) have been scrubbed overnight. Even the jannies are in on it - shortly after the news broke, most rentry names containing proxy-related things were added to the spam filter, and directly writing them on /g/ deletes your post and auto-bans you for a month (for what it's worth I condone this, security in obscurity etc).
If gamers are the most oppressed minority, coomers are surely the second most - although DALL-E can burn for all I care, corpo imagegen enjoyers already have it good with NovelAI.
Wow, Musk really walked into the wrong neighbourhood here. His earlier D4 claim went mostly unquestioned (to my awareness) because frankly D4 bad nobody really gives enough of a shit, but with how zealous its fanbase is PoE was a bad choice to flex, and specifically PoE2 (brutal and borderline bullshit as it is) was a really bad choice. Other replies already mentioned it but you absolutely do not get this far (in HC to boot!) without considerable knowledge of the game, and the minor things like the item level gaffe instantly betray the lack of underlying knowledge. This whole charade distinctly feels like reading a "budget" starter build guide that has Mageblood or something as a required item. I will be very disappointed if there won't be a new meme unique item that does something with level requirements before the end of the year.
It warms my heart to see gamers(tm) continue to be the community least deceived by, or tolerant of, transparent bullshit. Truly the master race.
It personally sets my teeth on edge to read something that clearly wants to inspire strong emotions in the reader or perhaps persuade them of something but doesn't actually speak of anything that is happening to be excited about.
Ironically, this would also describe the writing of AI/LLMs themselves when you prompt them to show any sort of character or express a "personal" opinion. At this rate Sam could get replaced by an actual AI halfway through the singularity and literally nobody would notice.
It reads like a particularly opaque sort of intentional hype cycle that might be mostly designed to inspire us to transfer tons of wealth to them before AI progress stalls out for a while.
If I had to guess they feel the AGI competition, current Claude is near-strictly better already and the recent Deepseek V3 seems quite close while being orders of magnitude cheaper (epistemic status: haven't tested much yet). If I had no big-dick reveals in the pipeline I'd probably look to cut and run too.
Even if I agree with you that the West has fallen and billions must die (which to be fair I do)... I don't know how to put this but this just ain't it, chief. This is just a wall of brain-hijacking zealot rhetoric. You have allowed a higher power to overwrite your save file, it is literally visible when it speaks through you:
People will still hem and haw, and not accept violence RIGHT THIS SECOND is called for, and that we should feel anguish and moral scorn every second we're delayed by practical realities
In other words, everyone who does not reblog updoot the issue du jour is trash.
Classical Jewish psychology
This is where you sneak in obligatory tribute to said higher power, I assume, I actually do not understand the relation here.
you are faggots, cucks and race traitors who value you failed cuck discussion norms far more that the truth. Failed discussion norms taught to you by failed jews like Yudkowsky and Alexander who openly admit their ritualized cuckoldry and sexual depravity. In this you are a microcosm and exact continuation of the failed morality and intellectual norms that have led the west to this exact moment.
I'm away from home and can't ask Claude to flip your madlibs around into liberal negrolatry circa 2020, so that will be left as an exercise for the reader. The last sentence can even be left as is.
Any light produced without heat is an illusion, a trick cast on the wall, a fire in a film that illuminates only what the director chooses and warms nothing.
Sir, this is a Wendy's. Illusion or not, this is the entire, explicit point of this place, getting mad at this feels like that "I entered a thread full of things I do not like, and now I am mad. How did this happen to me?" meme. The fact that you're mostly getting measured responses instead of TL;DRs or "your hands are shaking btwbeit"-type dismissals only further proves this point; I even suspect that you know this and chose to post this here exactly so that people would actually 'engage' with you.
Since you use the same playbook the wokes do to get me to side with you on at least some of the issue - I agree that things like Rotherham conclusively prove that the Bri'ish cannot be saved. But in this particular case it seems quite beside the point. This brand of seethe vacuous righteous fury isn't picky regarding the exact excuses to unleash it, and contrary to what you seem to think, it actively dampens your point instead of strengthening it.
Tried a few of my comments here on a blank prompt; it's either a testament to my mimicry or a consequence of little substance but it mostly fails, especially memes and/or chudisms seem to throw it off and it defaults to American. Weirdly enough, the failure rate is lower when I paste multiple comments at once (even when individually it judges every comment as American), the main mechanism at work indeed seems to be pattern-matching. ...Man, an AI-driven police state would be some shit, huh?
It's still mildly spooky with some of my drafts and longer writeups - Claude has none of my shit and consistently guesses right across multiple regens, even standing its ground when I wink-wink-nudge-nudge it if it's really really sure. Its explanations are also sometimes funny:
The term "AIfu" (combining AI + waifu) suggests anime culture which has a notable following in Eastern Europe
Writing style shows high English proficiency but with subtle ESL markers
I think I just got dissed by a machine, send help.
References to "grey matter" literally translated (suggests Slavic background)
Really? I thought it's a common idiom, point taken.
For what it's worth, 4o indeed fails 100% of the time on the same prompts. Don't have o1 to try but 4o seems to get sidetracked by the content almost immediately so I don't think the CoT layers would help much.
The House Select Subcommittee on the Coronavirus Pandemic has published its final report on the results of their investigation (dated from December 4th for some reason). It's quite the whopper at 520 pages and I'm only starting to read through the thing, but they tackle one of the big scissor issues of the issue - the origin of the virus - right at the start so there is a good hook right away. I have not read the Fauci emails so some of this might be old news, but some of those include rather damning excerpts.
According to the report, what would eventually become Proximal Origin started on Feb 1 with a write-up by Kristian Andersen who has Noticed™ some concerning biological properties of the virus which did not strike him as natural. He contacted Jeremy Farrar over this, who acknowledged his concerns and referred him to Fauci; Fauci was appropriately alarmed and shortly arranged a conference call to discuss the findings. Andersen mentions that the talk they had before the call was his first time talking to Fauci, and that he "specifically suggests that if [Andersen] thinks this came from the lab, [he] should consider writing a scientific paper on it."
So he does - apparently encountering inconvenient difficulties along the way. Feb 8, in an internal email from Andersen (p.24):
A lot of good discussion here, so I just wanted to add a couple of things for context that I think are important - and why what we're considering is far from "another conspiracy theory", but rather is taking a valid scientific approach to a question that is increasingly being asked by the public, media, scientists, and politicians (e.g. I have been contacted by Science, NYT, and many other news outlets over the last couple of days about this exact question).
<...> Our main work over the past couple of weeks has been focused on trying to disprove any type of lab theory, but we are a crossroad where the scientific evidence isn’t conclusive enough to say that we have high confidence in any of the three main theories considered. <...>
Feb 20, in another email from Andersen as the work continues (p.25):
<...> just one more thing though, reviewer 2 is unfortunately wrong about "Once the authors publish their new pangolin sequences, a lab origin will be extremely unlikely". Had that been the case, we would of course have included that - but the more sequences we see from pangolins (and we have been analyzing/discussing these very carefully) the more unlikely it seems that they're intermediate hosts. They definitely harbor SARS-CoV-like viruses, no doubt, but it's unlikely they have a direct connection to the COVID-19 epidemic.
Unfortunately none of this helps refute a lab origin and the possibility must be considered as a serious scientific theory (which is what we do) and not dismissed out of hand as another ‘conspiracy’ theory. We all really, really wish that we could do that (that’s how this got started), but unfortunately it’s just not possible given the data.
Emphasis mine. There are already hints of a foregone conclusion, but it doesn't seem bad yet - however Jeremy Farrar, who referred Andersen to Fauci earlier, seems to have different concerns. Same page, email from Farrar (emphasis mine):
I hope there is a paper/letter ready this week to go to Nature (and WHO) which effectively puts to bed the issue of the origin of the virus.
I do think [it's] important to get ahead of even more discussion on this, which may well come if this spreads more to US and elsewhere, and other "respected" scientists publish something more inflammatory.
He later gets notified via email that "rumors of bioweaponeering are now circulating in China", to which his response is:
Yes I know and in US - why so keen to push out ASAP. I will push Nature
Same page, another email from Farrar to Andersen reviewing (some version of) the draft:
Sorry to micro-manage/microedit!
But would you be willing to change one sentence?
From "It is unlikely that SARS-CoV-2 emerged through laboratory manipulation of an existing SARS-related coronavirus."
To "It is improbable that SARS-CoV-2 emerged through laboratory manipulation of an existing SARS-related coronavirus."
That's... certainly one sentence, I suppose.
I'm still reading but from a cursory glance the report tackles many topics, including the government response, the lockdowns, economic impacts, etc. I think many people will find their hobby horse something of interest in here. Discussion thread go.
Yep it worked (at least the website says it did), thanks a lot! Now I can personally participate in crashing the servers at launch.
Kind of, but it's not as big a hurdle as you imagine it to be, though you do have to at least loosely keep up with new (= more filtered) snapshot releases and general happenings. It also depends on the exact things you do, you probably don't need the big-dick 2k token stuff for general conversation, ever since I burned out on hardcore degeneracy I haven't really been updating my prompts and they still mostly work on the latest GPT snapshots when I'm not doing NSFW shit.
As for jailbreaks, this list is a good place to start. Most jailbreaks come in the form of "presets" that rigidly structure the prompt, basically surrounding the chat history with lots of instructions. The preset's .json can be imported into frontends like SillyTavern with relatively little hassle, the UI can be intimidating at first but wrangling prompts is not actually difficult, every block of the constructed prompt has its own content and its own spot in the overall massive prompt you send to the LLM. Example. The frontend structures the prompt (usually into an RP format) for you, and during chat you only need to write your actual queries/responses as the user, with the frontend+preset taking care of the rest and whipping the LLM to generate a response according to the instructions.
Unless you're just talking to the "bare" LLM itself, this approach usually needs a character card (basically a description of who you're talking to), I mentioned those in passing elsewhere.
To contextualize all this, I unfortunately have no better advice than to lurk /g/ chatbot threads, it's smooth sailing once you get going but there's not really a single accessible resource/tutorial to get all this set up (maybe it's for the better, security in obscurity etc).
I've been on the fence about shelling out since the announcement, if you're offering I'd be glad to have it. How does it work, is it just a separate Steam key? None of my friends bought the higher-tier packs.
I'll echo the responses below and say that 3.5 is... suboptimal, much better and nearly as accessible alternatives exist. Claude 3 is the undisputed king of roleplay and I've sung it enough praises at this point, but it is much less accessible than GPT, and to be fair 4o is not actually that bad although it may require a decent jailbreak for more borderline things.
Aside from that, RP-related usage is best done through API (I believe you can still generate a GPT API key in your OpenAI account settings, not sure how you legitimately get a Claude API key) via specific frontends tailored for the task. This kills two birds at the same time - you get mostly rid of the invisible system prompts baked into the ChatGPT/claude.ai web interface, and chat frontends shove most of the prompt wrangling like jailbreaks, instructions and Claude prefills under the hood so you're only seeing the actual chat. Frontends also usually handle chat history more cleanly and visibly, showing you where the chat history cuts off in the current context limit. The context limit can be customized in settings (the frontend itself will cut off the chat accordingly) if you want to moderate your usage and avoid sending expensive full-context prompts during long chats, in my experience 25-30k tokens of context is the sweet spot, the model's long-term recall and general attention starts to slowly degrade beyond that.
Agnai has a web interface and is generally simple to use, you can feed it an API key in the account settings. SillyTavern (the current industry standard, as it were) is a more flexible and capable locally-hosted frontend, supporting a wide range of both local and corpo LLMs, but it may be more complicated to set up. Both usually require custom instructions/prompts as the default ones are universally shit, unironically /g/ is a good place to find decent ones. Beware the rabbit hole Feel free to shoot me a DM if you have any questions.
People forget how ridiculously compressed LLMs are compared to their training corpus, even if you spill an amount of personal info, it has little to no chance of explicitly linking it to you, let alone regurgitating it.
That is true of course, but I read @quiet_NaN's comment as less concerned about models having their data "baked into" newer models during training (nowhere on the internet is safe at this point anyway, Websim knows where we live), and more about the conversations themselves physically stored, and theoretically accessible, somewhere inside OpenAI's digital realm.
While I'm sure manually combing chatlogs is largely untenable at OpenAI's scale, there has been precedent, classifier models exist, and in any case I do not expect the Electric Eye's withering gaze to be strictly limited to degenerates for very long.
Considering OpenAI's extensive, ahem, alignment efforts, I think using GPT in its current state as a therapist will mostly net you all the current-year (or past-year rather, I think the cutoff is still 2023?) progressive updates and not much beyond that. Suppose you can at least vent to it. Claude is generally better at this but it's very sensitive to self-harm-adjacent topics like therapy, and you may or may not find yourself cockblocked without a decent prompt.
what do people think about therapy becoming AI?
I'm quite optimistic actually, in no small part because my shady source of trial Claude has finally ran dry last month and I hate to say I'm feeling its absence at the moment, which probably speaks to either my social retardation or its apparent effectiveness. I didn't explicitly do therapy with it (corpo models collapse into generic assistant speak as soon as you broach Serious Topics, especially if you use medical/official language like that CIA agent prompt downthread) but comfy text adventures are close enough and I didn't realize how much time I spend on those and how salutary they are until I went cold turkey for a month. Maybe the parable of the earring did in fact have some wisdom to it.
Despite my past shilling I'm so far hypocritically valiantly resisting the masculine urge to cave and pay OpenRouter, I don't think there's any kind of bottom in that particular rabbit hole once you fall in, scouring /g/ is at least more of a trivial inconvenience than paying the drug dealer directly.
"You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.
That strikes me as a remarkably arbitrary line in the sand to draw (besides being conveniently self-serving) - you can apply this to literally anything that is not 100% one-sided harmless improvement.
You shouldn't get to have a decision in education policy unless you have young children. You don't have enough stake.
You shouldn't get to have a decision in gov't spending unless you have young children. You don't have enough stake.
You shouldn't get to have a vote in popular elections unless you have young children. You don't have enough stake.
What is the relation of child-having to being more spiritual grounded and invested in the human race (the human race, not just their children)'s long-term wellbeing? I'm perfectly interested in the human race's wellbeing as it is, and I've certainly met a lot of shitty parents in my life.
I hope this isn't too uncharitable but your argument strikes me less as a noble God-given task for families to uphold, and more as a cope for those that have settled in life and (understandably) do not wish to rock the boat more than necessary. I'm glad for you but this does nothing to convince me you and yours are the prime candidates to hold the reins of power, especially over AI where the cope seems more transparent than usual. Enjoy your life and let me try to improve mine.
(Childless incel/acc here, to be clear.)
You are making an "argument from incredulity", i.e. the beliefs of Sam Altman are so crazy that they can’t be real. I don't think this is the case.
The idea that Sam Altman would literally want to destroy humanity to birth in a superior AI life form might sound ridiculous to you. But you don't know these people.
Besides this being a gossip thread, your argument likewise seems to boil down to "but the beliefs might be real, you don't know". I don't know what to answer other than reiterate that they also might not, and you don't know either. No point in back-and-forth I suppose.
There's a good chance (not 100%, but not 0% either) that we're going to build superintelligence while the "adults in the room" argue about GDP numbers or whatever. If this happens it could make some people (perhaps a single person) more powerful than anyone in history. Do you want Sam Altman to be that person? Because I sure as hell don't.
At least the real load-bearing assumption came out. I've given up on reassuring doomers or harping on the wisdom of Pascal's mugging, so I'll simply grunt my isolated agreement that Altman is not the guy I'd like to see in charge of... anything really. If it's any consolation I doubt OpenAI is going to get that far ahead in the foreseeable future. I already mentioned my doubts on the latest o1, and coupled with the vacuous strawberry hype and Sam's antics apparently scaring a lot of actually competent people out of the company, I don't believe Sam is gonna be our shiny metal AI overlord even if I grant the eventual existence of one.
Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence.
...Fine, I'll bite. How much of this impression of Sam is uncharitable doomer dressing around something more mundane like "does not believe AI = extinction and thus has no reason to care", or even just same old "disregard ethics, acquire profit"?
I have no love for Altman (something I have to state awfully often as of late) but the chosen framing strikes me as highly overdramatic, besides giving him more competence/credit than he deserves. As a sanity check, how -pilled would you say that friend of yours is in general on the AI question? How many years before inevitable extinction are we talking here?
It's a coder's model I think, not a gooner's model.
I have no hopes for GPT in the latter department anyway, but my point stands, I think this is a remarkably mundane development and isn't nearly worth the glazing it gets. The things I read on basket weaving forums do not fill me with confidence either. Yes, it can solve fairly convoluted riddles, no shit - look at the fucking token count, 3k reasoning tokens for one no-context prompt (and I bet that can grow as context does, too)! Suddenly the long gen times, rate limits and cost increase make total sense, if this is what o1 does every single time.
Nothing I'm seeing so far disproves my intuition that this is literally 4o-latest but with a very autistic CoT prompt wired under the hood that makes it talk to itself in circles until it arrives at a decent answer. Don't get me wrong, this is still technically an improvement, but the means by which they arrived at it absolutely reeks of crutch coding (or crutch prompting, rather) and not any actual increase in model capabilities. I'm not surprised they have the gall to sell this (at a markup too!) but color me thoroughly unimpressed.
... it just might work?
It might, hell it probably will for the reasons you note (at the very least, normies who can't be arsed to write/steal proper prompts will definitely see legit major improvements), but this is not the caliber of improvement I'd expect for so much hype, especially if this turns out to be "the" strawberry.
The cynical cope take here is - with the election on the horizon, OpenAI aren't dumb enough to risk inconveniencing the hand that feeds them in any way and won't unveil the actual good shit (IF they have any in the pipeline), but the vital hype must be kept alive until then, so serving nothingburgers meanwhile seems like a workable strategy.
One interesting thing is for the hidden thoughts, it appears they turn off the user preferences, safety, etc, and they're only applied to the user-visible response.
So o1 can think all kinds of evil thoughts and use it to improve reasoning and predictions
Judging by the sharp (reported) improvement in jailbreak resistance, I don't believe this is the case. It's much more likely (and makes more sense) to make the... ugh... safety checks at every iteration of the CoT to approximate the only approach in LLM censorship abuse prevention that has somewhat reliably worked so far - a second model overseeing the first, like in DALL-E or CAI. Theoretically you can't easily gaslight a thus prompted 4o (which has been hard to jailbreak already in my experience) because if properly "nested" the CoT prompts will allow it enough introspection to notice the bullshit user is trying to load it with.
...actually, now that I spelled out my own chain of thought, the """safety""" improvements might be the real highlight here. As if GPT models weren't sterile enough already. God I hate this timeline.
It's late and I'm almost asleep but let me get this straight: did they just basically take 4o, wired a CoT instruction or three under the hood as an always-on system prompt, told it to hide the actual thinking from the user on top of that, and shipped it? (The reportedly longer generation times would corroborate this but I'm going off hearsay atm) Plus ever more cucked wrt haram uses again because of course?
Sorry if this is low effort but it's legitimately the vibe I'm getting, chain of thought is indeed a powerful prompt technique but I'm not convinced this is the kind of secret sauce we've been expecting. I tuned out of Altman's hype machine and mostly stan Claude at this point so I may be missing something, but this really feels like a textbook nothingburger.
My claim is not about AI in general but only that OpenAI is no longer special.
That much is true, I agree.
But the next development could come from anywhere, even China. Two years ago this wasn't true. Back then, OpenAI was heads and shoulders above the competition.
I agree as well but I'll note the obvious rejoinder - the next development could indeed come from anywhere, even OpenAI. Sure, they get mogged left and right these days, whodathunk propping yourself up as a paragon/benchmark of LLM ability backfires when you drag your feet for so long that people actually start catching up. But this still doesn't negate their amassed expertise and, more realistically, unlimited money from daddy Microsoft; unless said money was the end goal (which to be fair there is nonzero evidence for, as you note downthread) they're in a very good position to throw money at shit and probe for opportunities to improve or innovate. Surely Altman can see the current strategy of resting on laurels is not futureproof right?
As regards Sora. In my mind, it was a neat demo but ultimately a dead end and a distraction. Where's the use case?
Fair enough but still counts as advancement imo, even though tech like that is guaranteed to be fenced off from plebs, no points for guessing what (nsfw?) the usual suspects try to make with "ai video" generators in this vein. I generally stopped looking for immediate use cases for LLMs, I think all current advancements (coding aside) mostly do not have immediate useful applications, until they suddenly will when multiple capabilities are combined at once into a general-purpose agentic assistant. Until one arrives, we cope.
I'm no fan of sama and the cabal he's built, but nonetheless I think it's still too early to write off any major company working on AI-related things right now. I'm not convinced all of the low-hanging fruit has already been picked wrt applications (even dumb stuff like "characterai but uncucked" alone is likely to be a smash hit), and besides most past/present developments were sufficiently arcane and/or undercover that you can't really predict where and what happens next - cf. Chinese LLMs being regarded as subpar until Deepseek, or Anthropic being safety fanatics with only a dumb soy model to their name until Claude 3(.5).
If Sora is anything to go by I think OpenAI still have some genuine aces up their sleeves, and while I don't believe they're capable of properly playing them to full effect, they at least have the (faded) first-mover advantage and Sam "Strawberry" Hypeman to exploit normies boost their chances.
Kinda late to this thread but I have watched and played some of Wukong last week so I will note down my own thoughts about the game itself, isolated from broader cultural context. (I actually managed to completely miss the DEI kerfuffle it reportedly had, gonna look that up)
The good: The presentation is, as the young'uns say, absolute cinema - easily on par with Western hits like God of War (in fact I think it's fair to call Wukong "God of War but Chinese", the parallels broadly hold in most aspects) and imho exceeding them at some points. Major fights in Wukong are exactly what an unsophisticated rube like me pictures in his head when he imagines xianxia - the prologue scene/fight solidly establishes that the sheer spectacle is the main draw of the game, and so far it does not disappoint while still having the difficulty to match, fuck White-clad Noble you are forced to git gud as soon as chapter 1. The game is gorgeous, the monster designs are consistently great, and I physically feel the cultural gap. After so many Western games that SUBVERT LE EXPECTATIONS, seeing a mythical power fantasy played entirely, shamelessly straight feels very refreshing.
The great: Special mention to the animated ending scenes for each chapter, with every one having a completely different art style, and an interactive in-game "tapestry" afterwards that serve as loosely-related loredumps to put things into context. Those are universally amazing, with incredible effort put into throwaway 5-minute segments that aren't even strictly speaking related to the game itself - I checked the credits out of curiosity and every cutscene has a separate fucking animation studio responsible for it! That is an insane level of dedication to your storytelling - foreign as the subject matter is to my uncultured ass, the sheer passion to share your culture and get your point across is still unmistakable. This right here should be the bar for every Chinese cultural export going forward.
The mid: I'm conflicted about combat. On one hand it feels a little floaty to my taste, especially the bread and butter light combos, and you do not get the stone form spell (the parry button of the game) until quite a bit into chapter 2 so the only reliable defensive option you have is dodge roll spamming. On the other heavy attacks are very satisfying to charge and land, and the frequent boss fights are almost universally great and engaging, with very diverse movesets for every one. There don't seem to be any bullshit boss immunities either, the Immobilize spell (which completely stops an enemy for a few seconds) works on pretty much every enemy and boss I've seen so far. Hitboxes and especially delayed enemy attacks can be frustrating at times though.
The bad: The exploration is worse than even Souls games; no map, sparse bonfires and very few notable landmarks scattered over the huge levels are not a recipe for convenient navigation. Maybe it's a skill issue on my part but it's been VERY easy to lose track of where you are (and especially where you were) and miss side content - of which there is a lot, adding to the frustration. To be fair, this is also why Souls games aren't my usual cup of tea.
Overall I think it is a very solid game, especially for the first game of the company, and I think that all the hand-wringing about Chinese bots or whatever is misplaced. It's not a masterpiece - it's just a good, solid game, and "organic" breakout hits of this scale are not unheard of, we had one just earlier this year.
- Prev
- Next
I'm quite sure no service would be willing to be declared the world's first public-use CP generator, which it will become 100% within 4 seconds of its release to the plebs (whether it would be actually deserved is entirely irrelevant). The possibility of genning anything that looks even remotely teenage remains a hard technical problem, as of yet unsolved; while open-source's answer can be "yes, and", I think this will not fly for anything corpo-adjacent. This was discussed earlier wrt textgen, and the same is doubly, triply, orders of magnitude more true of imagegen; doing it properly requires painstakingly curating the dataset of your model, and even then I imagine there will be no shortage of borderline cases from crafty
coomersproompters to incense the normies.More options
Context Copy link