This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I honestly don't get a lot of the concern over AI-generated images; you'd think from all the apocalyptic rhetoric that 90% of the average Mottizen's income came from fulfilling online requests for furry porn. Somehow I don't think that's actually the case.
I think it's less interesting for the economic impact -- in addition to there just not being that many full-time furry porn artists, most of their commissioners are likely to want things more specific than or outside the realm of current ML generators, and there's a lot of social stuff going on that can't really be done by a Python script -- and more because it's an interesting case that allows exploration that isn't easy to do elsewhere. In order:
Furries are identifiable by other furries, but avoid a lot of the privacy and identity concerns relevant for most living people with large numbers of photos available, and while technically as protected by copyright as Mario, they are a lot less likely to result in a takedown request.
On-topic training data is limited, of sporadic quality, and unlikely to be used in professional environments. There's a lot of pre-tagged data on the various furry boorus! But it's probably on the order of 3-5m images, quite a lot of which won't meet the requirements used for LAION et al., it covers a pretty wide variety of topics, and there are a lot of reasons for ML researchers not to want to touch it. In retrospect, 'can AI realize your text string is a request to photoshop a wildlife shot onto a human's body' is pretty trivially obvious, but I don't think it was such a given three years ago.
On-topic enthusiast data is widely available: randos can and already have started doing everything from exploring the limits of textual_inversion to fine-tuning the model on personally-curated data sets. So we'll get a lot of that.
There are a lot of symbols with few or marginal referents. There might be a fuchsia otter in the training set, somewhere, or Ghibli-style foxes and rabbits, but they're probably pretty rare in the training data. There's a scale here from dragons to griffons to hippogryphs to Khajiit or tabaxi to avali, not in that the subjects are fictional, but in that a specific generated image is less and less likely to be pulled from a large class of input images, and instead reflects interaction at a trait level (to whatever extent these are different things) or something more complex. (As an example: StableDiffusion gives robot-like outputs for protogen, despite very clearly having no idea what they are. Which isn't surprising, given that 'proto' and 'gen' have those connotations, but it's not clear people three years ago would have assumed ML could find them out.) At the very least, this points toward an upper bound for the minimum number of examples for a class; at most, it may explore important questions about how much a neural net can intuit. While I don't expect image-generation solutions to these problems to generalize, that they exist is reason to think they at least can exist for other realms.
Outputs need be specific. This is partly just an extrapolation of the composition problem that Scott was betting on, but there are also matters elevated beyond simple connotation, in ways that a powerful AI will need to handle for a number of real-world situations, and that most current ML can't.
Outputs are not human-specialized, in the way that even minor facial abnormalities can trigger people, but defects are still obvious where they occur. StableDiffusion can't reliably do human faces, or even rabbit or weasel faces, but it can do tigers most of the time, and foxes and wolves often, and that says something interesting about what we expect and what ML (and even non-ML applications) may be able to do easily moving forward.
Inputs need be complex. Most current AI generators struggle with this, both because it's a generally hard problem and because a lot of the techniques they've used to handle other issues make this harder. I don't think the ability for a hypothetical 2025 version of StableDiffusion to handle this will be especially destructive on its own, but it will mean a pretty significant transformation that will impact a wide variety of other fields, and be very obvious here.
Much of that is an issue of data efficiency. I wonder how it'll be improved for really big models, but I expect general few-shot learning in an SD-type system scaled even 10x. Of course, a different architecture would help.
People are expecting/fearing the singularity, and everything that looks vaguely like a part of it is liable to be catastrophized. In fairness, it's a genuine win for the transhumanists- it wasn't so long ago when people could credibly claim that this would never happen, and it unquestionably will affect real lives.
Over at The Dispatch, I was mildly startled to see a caption under an image that went something like "(generated by Midjourney)".
I see it as a herald of things to come. Perhaps you feel that furries are scum and deserve what's coming to them. That's all well and good, but the broader point lies in the topic of job displacement in general.
"AI workers replace humans" used to be a prediction, not an accurate description of current reality. We now have (or are on the brink of having) a successful demonstration of just that. The reactions and policies and changes that arrive from the current ongoing chaos are going to set precedent for future battles involving first-world job replacement, and I am personally very interested in seeing what kind of slogans and parties and perhaps even extremism emerges from our first global experiment.
"Technology displaces workers" is not a new thing or a very controversial prediction that I am aware of anyone on the other side of. The contentious prediction is that AI would create structural persistent unemployment effects across the entire economy which every prior technological paradigm shift has yet failed to do. A few commission artists having to find jobs elsewhere in the service sector won't be evidence for that, nor would they really be the first to be impacted by AI in general (most translation work is now done by deep learning models, for example -- similar to AI art, a human in the loop is only necessary when the requirements are particularly complex or the quality demanded exceeds some nominal bar).
The part that you might not quite appreciate if you weren't monitoring every advance in this field is how quickly things have improved, which is to say how rapidly this disruption occurred.
We passed a point where computers became better at chess than any possible human a couple decades ago. Computers became better at Go about 6 years ago. This year they became better at producing art than 99.9% of humans, and they're certainly faster at it than any human could be. Most of the advances there occurred in the last 2 years.
And now there are models that can be applied to basically any game or task that can be effectively digitized, and can reliably train themselves to better-than-human levels in a matter of days, maybe weeks.
That's not to say that we're going to see unprecedented levels of 'hard' unemployment, but it is likely to sweep into unexpected places in very short order.
Completely true. Current advances do not guarantee the "no more jobs" dystopia many predict. My excitement is likely primarily a result of how much I've involved myself in observing this specific little burst of technological displacement.