
Culture War Roundup for the week of September 12, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


As an aside, SD can generate pictures of Mickey Mouse doing novel things, and the same goes for Marvel characters and so on. If I'm not allowed to release a new cartoon of Mickey Mouse (or Batman) acting out a new story, are the SD authors allowed to release this model?

(This is a distinct topic from style.)

The AI isn't itself a new cartoon of Mickey Mouse or Batman. It can be made to create a new cartoon of Mickey Mouse and Batman, in much the same way Photoshop or a pencil and paper can. Why should the model be regulated differently from paper or drawing software?

Photoshop does not contain anything specific to Mickey Mouse; I have to know what he looks like if I want to create a picture of him. SD, meanwhile, does know what Mickey looks like, so I don't have to. Even a blind man who has no idea what the mouse looks like can create images of him, because SD contains the information about what the character looks like.

I'd agree with you if I had to type in a full, detailed description of what Mickey Mouse looks like (colors, shapes, and so on), and SD only knew how to draw him after that.
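As an illustration of what "not having to describe him" means in practice, here is a rough sketch using the open-source diffusers library (the checkpoint name and prompt are just examples, not something from this thread):

```python
# Rough sketch with Hugging Face's diffusers library (assumed installed).
# The prompt supplies only a name; every visual fact about the character
# comes out of the model weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD v1 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Note that no colors, shapes or proportions are specified here.
image = pipe("Mickey Mouse riding a skateboard, cartoon style").images[0]
image.save("mickey_skateboard.png")
```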

SD pretty much contains a representation of Mickey Mouse in the model weights. I'm not allowed to release a textured 3D mesh of Mickey Mouse, even though the user first has to choose a viewing angle, a light source position and so on in order to render a picture of Mickey from that 3D asset. Similarly, with SD we don't have a 3D mesh, but we have something that can be controlled in a slightly different way and is still a representation. The fact that the format is neural weights instead of explicit 3D assets doesn't change much; the situation is very similar. Otherwise, what do you say about neural encodings of distance fields from which the surface can be recovered? What about NeRFs?
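To make the mesh analogy concrete, here is a minimal, self-contained sketch (the vertex data is a stand-in; a real asset would be loaded from a mesh file): the asset fully determines the shape, and the user only supplies the viewing parameters.

```python
# Sketch of the "3D asset plus user-chosen view" point. The vertices fully
# determine the character's shape; the viewer only picks an angle.
import numpy as np

vertices = np.random.rand(1000, 3)  # stand-in for a character mesh's vertices

def project(verts, yaw_deg):
    """Orthographically project vertices to 2D after a user-chosen yaw rotation."""
    yaw = np.radians(yaw_deg)
    rot = np.array([
        [np.cos(yaw),  0.0, np.sin(yaw)],
        [0.0,          1.0, 0.0        ],
        [-np.sin(yaw), 0.0, np.cos(yaw)],
    ])
    return (verts @ rot.T)[:, :2]  # drop depth for an orthographic view

# Different user input, same underlying (and possibly infringing) asset.
front_view = project(vertices, 0)
side_view = project(vertices, 90)
```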

Photoshop does not contain anything specific to Mickey Mouse; I have to know what he looks like if I want to create a picture of him. SD, meanwhile, does know what Mickey looks like, so I don't have to. Even a blind man who has no idea what the mouse looks like can create images of him, because SD contains the information about what the character looks like.

It contains the colors yellow, red, black and white. It contains curve tools that can represent lines of specific angles and specific thicknesses, and a raster grid on which they can be presented.

Neither Photoshop nor the AI contains an actual image of Mickey Mouse. They both contain the tools necessary to depict Mickey Mouse. Photoshop lacks the idea of Mickey Mouse, and so needs a human who does have that idea. The AI simply contains the idea. Not a picture of Mickey, the idea of Mickey.

Even a blind man who has no idea what the mouse looks like can create images of him, because SD contains the information about what the character looks like.

Even a blind man can create an image of Mickey in Photoshop; a custom UI would make it easier but is not actually necessary. Square canvas, new layer, circle tool > center of canvas > 150-pixel radius, set line width to 3 pixels, circle tool > center of canvas minus 80 px in x and y > 80-pixel radius, and so on.
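Scripted, the same point looks roughly like this (a sketch with the Pillow library; the coordinates are illustrative, not an actual recipe for the character): every piece of Mickey-specific information has to come from the person issuing the instructions, none of it from the tool.

```python
# Sketch with Pillow: the drawing tool knows nothing about the character;
# all shape information is supplied explicitly by the (possibly blind) user.
from PIL import Image, ImageDraw

canvas = Image.new("RGB", (512, 512), "white")
draw = ImageDraw.Draw(canvas)

def circle(cx, cy, r, width=3):
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], outline="black", width=width)

circle(256, 256, 150)  # head: circle at canvas center, 150-pixel radius
circle(176, 176, 80)   # left ear: center minus 80 px in x and y
circle(336, 176, 80)   # right ear
# ...and so on, instruction by instruction, until the silhouette appears.

canvas.save("hand_specified_mouse.png")
```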

SD pretty much contains a representation of Mickey Mouse in the model weights.

In much the same way that my brain contains a representation of Mickey Mouse, yes. In other senses, very much no. There is no picture, there is no mesh. There is no actualized output contained in the model. There is the idea, just as there is in my own mind. The AI is a rudimentary mind, not a collection of pictures. I'm pretty sure this can be proved mathematically, just by comparing the size of the final model with the size of the training set against the theoretical limits of data compression. The original pictures are not in there in any meaningful sense.
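The back-of-the-envelope version of that argument, using rough publicly reported figures for the SD v1 models (about a billion parameters, trained on roughly two billion images); the exact numbers don't matter, only the order of magnitude:

```python
# Order-of-magnitude check on the "the training images can't all be in there"
# argument. Figures are approximate ballpark numbers, not exact.
params = 1.0e9           # ~1 billion weights in the SD v1 models
bytes_per_param = 2      # fp16 storage
model_bytes = params * bytes_per_param

training_images = 2.0e9  # LAION-scale training set, order of magnitude
bytes_per_image = model_bytes / training_images

print(f"model size:       ~{model_bytes / 1e9:.0f} GB")
print(f"budget per image: ~{bytes_per_image:.0f} bytes")
# Roughly one byte of weight capacity per training image: far below any
# plausible image compression limit, so the weights cannot be storing the
# training images themselves.
```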

Otherwise, what do you say about neural encodings of distance fields from which the surface can be recovered? What about NeRFs?

I have no idea what this means. Elaborate?

In much the same way that my brain contains a representation of Mickey Mouse, yes.

Yes, but you can't release your brain. It's not an artefact or a tool. Humans and their minds have a very different standing under the law than inanimate objects and information-carrying media.

I have no idea what this means. Elaborate?

There are new ways of representing 3D scenes or 3D geometry using neural networks. They encode the properties of the 3D scene in neural network weights, and they can be used to create new images. But the representation has no notion of images, pixels, vertices, textures, etc.; it's all a bunch of "opaque" neural weights.

Here's one variant described: https://youtube.com/watch?v=T29O-MhYALw
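To give a feel for what that means, here is a deliberately tiny sketch of such a scene representation in PyTorch (a real NeRF or neural distance field adds positional encodings, view directions and a rendering loop on top): after fitting the network to posed photos of one specific scene, everything the system knows about that scene is spread across these weights, with no pixels, vertices or textures anywhere.

```python
# Toy neural scene representation: a network that maps a 3D point to a
# density and a color. The scene itself lives only in the trained weights.
import torch
import torch.nn as nn

class TinySceneField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, r, g, b) at the queried point
        )

    def forward(self, xyz):
        return self.net(xyz)

field = TinySceneField()
# Images are produced only later, by marching camera rays through this
# function and accumulating the returned densities and colors.
sample = field(torch.tensor([[0.1, 0.2, 0.3]]))
```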

The point is, law usually cares about intended use and how one interacts with the thing, not the implementation details. And nobody really knows how courts will treat these new methods. Existing laws were not written with such things in mind, so interpretations of their wider goals will have to guide the courts' work.

From a perusal of the video, this takes a whole lot of reference images of a specific thing and then uses the AI to generate an interpolation. The NN interpolates a 3D scene from the source images, but it doesn't actually generate novel scenes in any significant way; the triceratops skull won't suddenly change into a T-rex skull, for example, or even change color to pink with polka dots.

If I'm understanding it correctly, without the source images the NN contains no representation of any scene at all, only an understanding of how to interpolate a scene from a set of images arranged in a particular way. If you feed it Mickey Mouse, you'll get a 3D Mickey Mouse, but the copyright claim should be against the source images, not the neural net. Am I missing something here?

The point is, law usually cares about intended use and how one interacts with the thing, not the implementation details.

I think this would have to be an objection to AI itself, not to any specific implementation. And sure, that's a thing we can do if we want, but claiming it's an outgrowth of copyright seems wrongheaded. Again, I'm pretty sure it can be proven mathematically that the AI does not actually contain copyrighted images in any form, and the whole point of an AI is to be a machine that does things only a human could do previously. "Containing an understanding of previous visual media" is one of those things, and I see no way to object to that without objecting to AI as a whole.

There are all kinds of models that sit in between the two: they do the triceratops thing, but you can manipulate the scene, swap out objects, etc. And in some sense SD also "interpolates" between its source images, just in a very complex way.