Culture War Roundup for the week of January 13, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


More notes from the AI underground, this time from imagegen country. The Eye of Sauron continues to focus its withering gaze on hapless AI coomers with growing clarity, as another year begins with another crackdown on Azure abuse by Microsoft - a more direct one this time:

Microsoft sues service for creating illicit content with its AI platform

In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials — specifically API keys, the unique strings of characters used to authenticate an app or user — were being used to generate content that violates the service’s acceptable use policy. Subsequently, through an investigation, Microsoft discovered that the API keys had been stolen from paying customers, according to the complaint.

Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to U.S.-based customers to create a “hacking-as-a-service” scheme. Per the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft’s systems.

More articles here and here.

Translated from corpospeak: at some point last year, the infamous hackers known as 4chan cobbled together de3u, an A1111-like interface for DALL-E that is hosted remotely (semi-publicly) and hooked up to a reverse proxy with unfiltered Azure API keys that were stolen, scraped, or otherwise obtained by the host. I probably don't need to explain what this "service" was mostly used for - I never used de3u myself, I'm more of an SD guy and assorted dalleslop has grown nauseating to see, but I'm familiar enough with general thread lore.

As before, Microsoft has finally taken notice, and this time actually filed a complaint against 10 anonymous John Does responsible for the abuse of their precious Azure keys. Most publicly available case materials were compiled by some industrious anon here. If you don't want to download shady zips from Cantonese finger painting forums, the complaint itself is here, and the supplemental brief with screencaps (lmao) is here.

To the best of my knowledge:

  • Doe 1 with "access to and control over [...] github.com/notfiz/de3u" is not Fiz, the person actually hosting the proxy/service in question.
  • Doe 2 with "access to [...] https://gitgud.io/khanon/oai-reverse-proxy" is Khanon, the guy who wrote the reverse proxy codebase underlying de3u. I'm really struggling to think what can plausibly be pinned on him, given that the proxy is simply a tool for using LLM API keys in aggregate - it just happens that the keys themselves were stolen in this case - but then again I don't know how wire fraud works.
  • Doe 3 with "access to and control over [...] aitism.net" is Sekrit, a guy who was running a "proxy proxy" service somewhere in Jan-Feb of 2024, during the peak of malicious spoonfeeding and DDoS spitefaggotry, in an attempt to hide the actual endpoint of Fiz's proxy. The two have likely worked together since; I assume de3u was also hosted through him. Came off as something of a pseud during "public" appearances, and was the first to get appropriately spooked by recent events.
  • Does 4-10 are unknown and seem to be random anons who presumably donated money and/or API keys to the host, or simply extensively used the reverse proxy.
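The core job of a reverse proxy like the one described above is pooling many API keys behind a single endpoint and attaching one of them to each forwarded request. A toy sketch of that aggregation idea - all names and structure here are illustrative, not taken from the actual oai-reverse-proxy codebase:

```python
import itertools


class KeyPool:
    """Round-robin pool of upstream API keys.

    A minimal sketch of how a reverse proxy can aggregate many keys
    behind one endpoint; purely illustrative, not the real codebase.
    """

    def __init__(self, keys):
        if not keys:
            raise ValueError("need at least one key")
        self._cycle = itertools.cycle(keys)

    def next_key(self):
        """Return the next key to attach to an outgoing request."""
        return next(self._cycle)


def forward_headers(pool, request_headers):
    """Replace the client's auth with a real upstream key before forwarding."""
    headers = dict(request_headers)
    headers["api-key"] = pool.next_key()
    return headers
```

Clients never see the upstream keys; they only talk to the proxy, which is why the legal question of who "used" a stolen key gets murky.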

At first blush, suing a bunch of anonymous John Does seems like a remarkably fruitless endeavor, although IANAL and have definitely never participated in any illegal activities before officer I swear. A schizo theory among anons is that NSFW DALL-E gens included prompts of RL celebrities (recent gens are displayed on the proxy page, so I assume they've seen some shit - I never checked myself so idk), which put most of the pressure on Microsoft once shitposted around; IIRC de3u keeps metadata of the gens, and I assume they would much rather avoid having the "Generated by Microsoft® Azure Dall-E 3" seal of approval on a pic of Taylor Swift sucking dick or whatever. Curious to hear the takes of more lawyerly-inclined mottizens on how likely all this is to bear any fruit whatsoever.

Regardless, the chilling effect already seems properly achieved; as far as I can tell, every single person related to the "abuses", as well as some of the more paranoid adjacent ones, has vanished from the thread and related communities, and all related materials (liberally spoonfed before, some of them posted right in the OPs of /g/ threads) have been scrubbed overnight. Even the jannies are in on it - shortly after the news broke, most rentry names containing proxy-related things were added to the spam filter, and directly writing them on /g/ deletes your post and auto-bans you for a month (for what it's worth I condone this, security through obscurity etc).

If gamers are the most oppressed minority, coomers are surely the second most - although DALL-E can burn for all I care, corpo imagegen enjoyers already have it good with NovelAI.

The point of suing anonymous John Does is that the suit will allow the plaintiff to get a pre-discovery subpoena directed toward the relevant parties that could help identify them. Whether or not they'd actually be able to collect, or what the damages would even be, is an open question, though. Sometimes the point of a lawsuit isn't to win the judgment but to demonstrate your willingness to defend your rights.

—spend two billion dollars creating AI whose only commercial application is creating naughty pictures of Taylor Swift

—move heaven and earth so it can’t do that anymore

—go bankrupt

—???????

—PROFIT

The Fascist-Feminist synthesis that the majority of normies implicitly agree with is that men viewing explicit material is metaphysically damaging in some way, and thus it must be curtailed as much as possible. There's a bunch of laws that obliquely touch on these aspects (often relating to the "production of child porn"), as well as a ton of potential PR damage. That's why webhosts, credit card processors, sites like Patreon, etc. have always been weirdly prudish about any explicit material. We should expect the same thing to happen to image generators. It'll probably reach a similar steady state eventually, with explicit stuff existing on the periphery while facing periodic crackdowns.

Why expect that as opposed to a Pornhub model, where there's a separate image generator for porn? MindGeek gives 0 fucks, I'm sure.

MindGeek gives 0 fucks, I'm sure.

I'm quite sure no service would be willing to be declared the world's first public-use CP generator, which it will become 100% within 4 seconds of its release to the plebs (whether the label would actually be deserved is entirely irrelevant). Preventing gens of anything that looks even remotely teenage remains a hard technical problem, as yet unsolved; while open-source's answer can be "yes, and", I think this will not fly for anything corpo-adjacent. This was discussed earlier wrt textgen, and the same is doubly, triply, orders of magnitude more true of imagegen; doing it properly requires painstakingly curating your model's dataset, and even then I imagine there will be no shortage of borderline cases from crafty coomer proompters to incense the normies.

The…what?

Hosts and processors have generally erred on the side of prudishness because that’s the side of caution. It is harder to get sued or boycotted or arrested for not doing something than for doing it.

Why is explicit material so risky? Because most people recognize some sort of lazy deontology, and pornography triggers most of the common “boo” lights. For the spiritually inclined, that’s metaphysical damage. The rest of us have to dig for some physical justification. Harming children is a PRETTY GOOD REASON to criticize something. Thus, near-universal condemnation of the central examples, plus an umbrella of distaste for anything remotely related.

It sounds like we agree on almost everything here, we just use different language.

The only bit I'd raise an objection to is the "THINK OF THE CHILDREN" excuse being a good reason for much of anything.

I think describing this casual morality as a “fascist-feminist synthesis” is either very confused or very inflammatory.

Why rely on random anonymous compilations? CourtListener has the full docket and will almost certainly be updated as the case progresses. So far it looks like no defendants or lawyers for any of them have made an appearance. If the case continues this way, the most likely outcome is that Microsoft secures a default judgment against them.

Why rely on random anonymous compilations?

I just linked the first source I saw in the wild, thanks, this is better.

So far it looks like no defendants or lawyers for any of them have made an appearance.

Is that not the norm for anonymous wire fraud or whatever charge they're levying here? I'm near-certain none of the Does (none of the major ones, at least) live in the US.

this way, the most likely outcome is that Microsoft secures a default judgment against them.

I'm a rube unfamiliar with the American legal system - what do the results of that typically look like in ghost cases like this? Does Microsoft get their damages, if yes then whence?

Is that not the norm for anonymous wire fraud or whatever charge they're levying here? I'm near-certain none of the Does (none of the major ones, at least) live in the US.

Probably? Microsoft did secure subpoenas to various ISPs to try and determine the actual identities of the individuals involved. Whether that can be done remains unclear.

I'm a rube unfamiliar with the American legal system - what do the results of that typically look like in ghost cases like this? Does Microsoft get their damages, if yes then whence?

Microsoft is going to get a legal judgment from a US court that X individuals are responsible for Y damages. How likely they are to actually collect Y damages depends on the legal jurisdiction in which X individuals reside and its stance on enforcing the judgments of US courts. US courts, for example, won't respect foreign civil judgments regarding liability for speech where that speech would be protected by the First Amendment in the United States.

So the generation takes place in Azure, and there are sets of totally unrestricted API keys which give access to an uncensored model?

Not uncensored per se; afaik it still required some prompting (as mentioned in the erstwhile rentry), but the keys commonly used definitely had laxer filtering, nowhere near the hair-trigger user-facing model where you get dogged for the dumbest things. I'm not sure a totally uncensored model exists; in the current climate it sounds like something that'd require nuclear plant-level security clearance. But yes, this is basically how keys work: the entire point is that you can call the model from any source (including a reverse proxy) as long as you do it through a valid key with a valid prompt structure - which most frontends, image- or textgen, take care of under the hood.
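To illustrate "call the model from any source": in an Azure OpenAI-style setup, the only thing that differs between calling the service directly and going through a reverse proxy is the base URL - the key header and payload shape stay the same. A minimal sketch; the deployment name, api-version, and URLs here are hypothetical placeholders, not taken from the case materials:

```python
def build_image_request(base_url, api_key, prompt):
    """Assemble an Azure OpenAI-style image generation request.

    Sketch only: 'dalle3' and the api-version string are placeholder
    values. Swapping base_url between the real service and a proxy
    leaves the key and body untouched - which is why stolen keys plus
    a proxy "just work" from the frontend's point of view.
    """
    url = (f"{base_url}/openai/deployments/dalle3/"
           "images/generations?api-version=2024-02-01")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = {"prompt": prompt, "n": 1, "size": "1024x1024"}
    return url, headers, body


# Same key, same payload - only the endpoint differs:
direct = build_image_request("https://example.openai.azure.com", "SECRET", "a lighthouse")
proxied = build_image_request("https://some-proxy.example", "SECRET", "a lighthouse")
```

A frontend like de3u does exactly this assembly under the hood; the user only ever types the prompt.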

I imagine the model itself is no more or less censored than the one the public interacts with, but on top of the model's basic (un)suitability for generating naughty content, there's likely an additional layer of filtering on public interfaces and public API keys. And since that will interfere with some applications, I would guess there are API keys shared with trusted third parties that Microsoft/OpenAI expects to implement their own filtering, and those keys bypass the additional layers.