
Culture War Roundup for the week of October 21, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Has anyone noticed how much vitriol there is towards AI-generated art? Over the past year it's slowly grown into something quite ferocious, though not quite ubiquitous.

I honestly think it's far closer to the opposite of ubiquitous, but it certainly is quite ferocious. But like so much of the ferocity you see online, I think it's a very vocal but very small minority. I spend more time than I should on subreddits specifically about the culture war around AI art, and (AFAIK) the primary anti-AI-art echo chamber subreddit, /r/ArtistHate, has fewer than 7k members, compared to the primary pro-AI-art echo chamber subreddit, /r/DefendingAIArt, which has 23k members. The primary AI art culture war discussion subreddit, /r/aiwars, has 40k members, and the upvote and commenting patterns indicate that a large majority of the people there like AI art, or at least dislike the hatred against it.

These numbers don't prove anything, especially since hating on AI art tends to be accepted in a lot of generic art and fandom communities, which leads to people who dislike AI art not finding much value in a community dedicated specifically to disliking it, but I think they at least point in one direction.

IRL, I've also encountered general ambivalence towards AI art. Most people are at least aware of it, most find it a cool curiosity, and none that I've encountered has expressed anything approaching hatred for it. My sister, who works in design, had no qualms about gifting me a little trinket with a design made using AI. She seems to take access to AI art via Photoshop for granted - though interestingly, I learned this as part of a story she told me about interviewing a potential hire whose portfolio looked suspiciously like AI art, which she confirmed by using Photoshop to generate similar images and finding that the style matched. She disapproved not out of hatred of AI art, but because the designers they hire need to have actual manual skills, and passing off AI art without disclosing it is dishonest.

I think the vocal minority that does exist makes a lot of sense. First of all, potential jobs and real status - from having the previously rather exclusive ability to create high-fidelity illustrations - are on the line. People tend to get both highly emotional and highly irrational when either is involved. And second, art specifically has a certain level of mysticism around it, to the point that even atheist materialists will talk about human-manually-made art (or novel or film or song) having a "soul" or a "piece of the artist" within it, and the existence of computers using matrix math to create such things challenges that notion. It wasn't that long ago that scifi regularly depicted AI and robots as having difficulty creating and/or comprehending such things.

And, of course, there's the issue of how the tools behind AI art (and modern generative AI in general) were created, which was by analyzing billions of pictures downloaded from the internet for free. Opinions differ on whether or not this counts as copyright infringement or "stealing," but many artists certainly seem to believe that it is; that is, they believe that other people have an obligation to ask for their permission before using their artworks to train their AI models.

My guess is that such people tend to be overrepresented in the population of illustrators, and social media tends to involve a lot of people following popular illustrators for their illustrations, and so their views on the issue propagate to their fans. And no technology amplifies hatred quite as well as social media, resulting in an outsized appearance of hatred relative to the actual hatred that's there. Again, I think most people are just plain ambivalent.

That, to me, is actually interesting in itself. So far, the culture war around AI art doesn't seem to have been subsumed by the larger culture wars that have been going on constantly for at least the past decade. Plenty of left/progressive/liberal people hate AI art because they're artists, but plenty love it because they're into tech or accessibility. I don't know so much about the right/conservative side, but I've seen some religious conservatives call it satanic, and others love it because they're into tech and dunking on liberal artists.

It wasn't that long ago that scifi regularly depicted AI and robots as having difficulty creating and/or comprehending such things.

There's a widely memed exchange in the movie I, Robot where Will Smith's character asks a robot something to the effect of "Can a robot paint a masterpiece? Compose a symphony?" and the robot replies "Can you?"

Nowadays he'd just respond yeschad.jpeg.

And second, art specifically has a certain level of mysticism around it, to the point that even atheist materialists will talk about human-manually-made art (or novel or film or song) having a "soul" or a "piece of the artist" within it

That hasn't been a tenable position for quite some time. Duchamp took a urinal and put it in an art gallery in 1917. Probably, he did not simultaneously impart a piece of his soul into it.

You are getting at something important though. I'd be a lot more interested in AI art if I had a reasonable degree of confidence that the AI was conscious, and that it created the piece with intent and drew from its conscious experiences as inspiration. I'd actually be very interested to learn what it's like to be an entity who has the entire internet memorized! What does it experience? What does it feel? I have nothing against that at all, even if it does put humans out of work. Losing the Darwinian competition to another conscious, feeling subject is not so bad. Losing the Darwinian competition to a horde of mindless replicators is horrific and should be avoided at all costs.

The AI art we have right now seems to me to be more akin to waves on the beach just so happening to etch very detailed pictures into the sand by random chance; this to me is lacking the principal features that make art interesting (communication between conscious subjects; wondering at what kind of subjectivity could have led to the present work).

That hasn't been a tenable position for quite some time. Duchamp took a urinal and put it in an art gallery in 1917. Probably, he did not simultaneously impart a piece of his soul into it.

I'm not sure how you justify the "probably" in the last sentence. If we posit that, say, Van Gogh left a piece of his soul in his famous self-portrait through the act of painting it, how can we deny that Duchamp left a piece of his soul in the urinal when he placed it in an art gallery? What's the mechanism here by which we can make the judgment call of "probably" or "probably not"?

The AI art we have right now seems to me to be more akin to waves on the beach just so happening to etch very detailed pictures into the sand by random chance; this to me is lacking the principal features that make art interesting (communication between conscious subjects; wondering at what kind of subjectivity could have led to the present work).

I think that's a perfectly reasonable way to determine whether a work of art is interesting. What I find confusing here, though, is that, by that standard, AI art is interesting! To take the beach metaphor, someone who types "big booba anime girl" into Midjourney on Discord and posts his favorite result on Twitter is akin to someone who hovers over this beach and snaps photos using a simple point-and-shoot, then publishes the resulting prints that he likes (if we stretch a bit, this is all nature photography or even street photography). In both cases, a conscious person is using his subjective judgment to determine the features of what gets shared. Fundamentally, this would be called "curation" rather than "illustration," and one can certainly argue that curation isn't interesting or that it's not an art. But by the standard that it requires a conscious being using his subjective judgment to communicate something through his choices in the results, curation fits just as well as any other work of art.

This is why I believe there's something more to it than that and alluded to the mysticism in my previous comment.

I feel that snapping pictures of a beach and choosing the best ones gets at something here. That doesn't really sound like art at all. It's an obvious thing to point a camera at, with little intention behind it - only a few more degrees of freedom than your anime example. The more care and thought that goes into the choice of view, the reasoning behind it, and the craft used to control the image, the closer it gets to art. Same with prompting. The more micro-decisions you make - curation, combination, and juxtaposition of what the AI gives you - the more you are moving in the direction of art.

I'm not sure how you justify the "probably" in the last sentence.

It was a joke.

Fundamentally, this would be called "curation" rather than "illustration,"

I agree that this is a valid point. It's not enough to outweigh the negatives for me, but I agree that that is at least something you can say in defense of AI.

It was a joke.

I see, it must have gone over my head, but that's not an unusual experience for me with jokes, unfortunately. So is it that you were just being ironic, and your meaning was the opposite - that the mysticism around art being imbued with a part of the artist's soul is still quite common in artists' circles, with a part of Duchamp's soul being in that toilet just as much as, e.g., part of Van Gogh's soul being in his self-portrait?

I'd be a lot more interested in AI art if I had a reasonable degree of confidence that the AI was conscious, and that it created the piece with intent and drew from its conscious experiences as inspiration.

Which invites the question of what it means to be conscious, and whether any of us are conscious in the sense you mean.

Have you ever felt pain? If yes, then you know what it means to have a conscious experience. It's that, and the other things like that (sensations more generally, the way things look, the way things sound, and the like).

Animals can presumably feel pain, but cannot create art (as we understand it, at any rate). AIs presumably cannot feel pain, but can create art. I don't understand the connection between consciousness and ability to create art.

Animals can presumably feel pain, but cannot create art

That's a good point.

I did specify though that there was more to it than just consciousness:

[...] and that it created the piece with intent and drew from its conscious experiences as inspiration.

So the bare fact of consciousness alone is a necessary, but not sufficient, condition for me to find value in a work.

So the bare fact of consciousness alone is a necessary, but not sufficient, condition for me to find value in a work.

But again, this just seems like a bit of a cheat. Supposing I presented you with the most movingly crafted novel ever composed, with vividly drawn characters, a delicately paced plot and subtle but resonant symbolism. You read it, it moves you to tears, you're thinking about it for weeks afterwards. Then I tell you that I just gave ChatGPT-5 (which has no upper character limit) the prompt "write me a literary novel which could win the Booker prize". How do you explain your relationship to this hypothetical novel? All the emotions it made you feel, all the thoughts it provoked - they weren't real, because the words were arranged on the page by an entity who wasn't conscious? Nothing about the arrangement of the words on the page has changed - you've only learned something new about the creator. (Asserting "ChatGPT could never do that" is refusing to engage with the terms of my hypothetical, not an actual response.)

I think everyone who has read a novel or watched a movie is familiar with the experience of information you learn later coloring your perception of what came before. Like, you're watching a movie, and in the beginning there are a lot of tantalizing clues about how the story might develop, and you're interested to see where it goes; but then the big twist at the end sucks, it doesn't stick the landing. So you end up concluding that the movie as a whole was bad and not worth the time. "Yeah, it was cool in the beginning, but it didn't go anywhere". Your knowledge of what the complete work looks like invalidates the excitement you felt in the beginning.

Or, to take a more extreme example: suppose you have a neighbor who you have had nothing but pleasant and friendly interactions with for years, and then one day you learn that he's actually been a serial killer this whole time, committing murders unbeknownst to you. You would immediately change your judgement of him and start thinking that he's a terrible person, regardless of how outwardly friendly he had been to you up until that point. Certainly, your previous pleasant interactions with him were real and are still real; the past isn't literally rewritten. It's just that the prior information you had about him is no longer relevant in your overall evaluation of his moral status, due to the overwhelming significance of the new information you've acquired.

Hopefully these analogies illustrate how it is conceivable that learning that a work was actually created by AI could shift your overall evaluation of it, even if you previously had a very positive evaluation based on your direct experience of the work. I agree with @DTulpa's assessment here: if I learned that my favorite album was actually AI, I wouldn't be able to look at it the same way again.

The AI art we have right now seems to me to be more akin to waves on the beach just so happening to etch very detailed pictures into the sand by random chance; this to me is lacking the principal features that make art interesting (communication between conscious subjects; wondering at what kind of subjectivity could have led to the present work).

I agree with you on this, but I probably feel differently from you from here onwards. Okay, so we have developed the technology for waves on the beach to produce aesthetically pleasing patterns that we can use as a really cheap source for many things that we currently use art for today. We will have cheap pleasing images for our books, advertisement, and maybe even in art museums, but these images will lack most of the meaning that an artist would otherwise have been trying to express.

I'm kind of okay with this state of affairs. It just changes the place art has in our society versus cheap pretty things. Once again I feel it's comparable to live musicians being replaced by CDs.

Edit: also note that I am a trained musician who really values live performance. I believe in the power of improvisation and connecting with an audience. I would have loved to make a living performing live music... But I can't, and a lot of that is because the monetary value of live music isn't worth that much to consumers, since for most use cases people can just use recorded music instead. Unfortunately that's simply the way of the world and I needed to accept it and move on.

We will have cheap pleasing images for our books, advertisement, and maybe even in art museums

Yeah, this does help clarify the disagreement more. I don't think that more "cheap pleasing images" is a necessary thing, or even a good thing. I think we already had quite enough as it is. There was already a supply glut; we didn't need more. And for me the negative of knowing that, say, a book cover might be AI outweighs any positives that might come from the technical quality of the image itself. I'd gladly trade quality for the guarantee that every image was produced by a human (I tend to have a very eccentric notion of what counts as a "pleasing" image anyway).

Let's explore this more. Do you feel the same way about how most clothes needed to be produced by a tailor, but now they're mass produced for orders of magnitude less money?

What about how bread used to be produced by bakers, but now you can get bread in the store, once again for orders of magnitude less?

I don't know if there's a wrong answer here, but there is a pattern: these things used to cost way more, but now there are cheaper options that have taken over part of the everyday niche they used to fill. You can still buy the original product, which is way more expensive, but those more expensive original versions have been relegated to luxury status.

Do you feel the same way about how most clothes needed to be produced by a tailor, but now they're mass produced for orders of magnitude less money?

What about how bread used to be produced by bakers, but now you can get bread in the store, once again for orders of magnitude less?

There certainly is something lost in both cases, yes.

I would prefer it if people formed more relationships with other individuals, rather than with anonymous corporations. That's not the world we live in and we're never going to go back to that world. But, if I were God, that's how I would set things up.

I've made this argument before. I suspect the overwhelming majority of people expressing public opposition to AI art are wearing shoes which were mass-produced in a factory with little human input. What's more, the machine that assembles these shoes was designed by a human who borrowed techniques developed by trained cobblers (analogous to how generative AI is trained on images created by artists). Everyone seems to have made their peace with this, and while of course they recognise that shoes made by hand and tailored to individual specifications will usually be of superior quality to shoes mass produced by machine, on the margin the mass-produced shoes are good enough for the ordinary consumer. Moreover, the fact that ordinary consumers can afford high-quality shoes is an unalloyed good, as shoes should not be so expensive that only the wealthy can own them.

I think a shoe factory is closer to digital art (like, Photoshop, the free limitless copying and distribution enabled by the internet, and all that those things enable) than it is to AI art. It's not a perfect analogy, but it's closer.

Comparing the differences in attitudes that artists have towards digital art and AI art is instructive. You'll find a few ultra-trads who think it's "cheating", but most people basically learned to live with it, even though it did bring about changes in how art is done and led to job downsizing in some cases. Similarly with a shoe factory: there's still at least one person who actually had to design the shoe in the first place. AI art is a different class of existential threat from digital or the camera because it's the first technology that cuts humans out of the loop entirely.

AI art is a different class of existential threat from digital or the camera because it's the first technology that cuts humans out of the loop entirely.

I'm not sure if this is really the case. You still need someone to write the prompts, which is a skill in itself (albeit a radically different one from the skill of being an artist, and one which is much faster and easier to learn).