
Culture War Roundup for the week of September 5, 2022

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


One possible model of the situation is that AI will be so disruptive that it should be thought of as being akin to an invading alien force. If the Earth were under attack from aliens, we wouldn't expect one political party to be pro-alien and one to be anti-alien. We would expect humanity to unite (to some degree) against their common enemy. There would be some weirdos who would end up being pro-alien anyway, but I wouldn't expect them to be concentrated particularly on either the left or the right.

In the short- and medium-term, your views on AI will be largely correlated with how strongly your personal employment prospects are impacted. As you point out, left-aligned artists and journalists aren't going to be too friendly to AI if it starts taking their jobs (especially if it leaves many right-coded industries unaffected), regardless of what other political priors they might have.

I wrote an essay on the old site about how techno-optimism and transhumanism fit more comfortably in a leftist worldview than a rightist worldview, and I still think there's some truth to that. But people can be quick to change their views once their livelihoods are on the line.

I would expect most non-religious freelance artists (religious art commissions work differently) to take a haircut, but aren’t most professional artists in basically 8-5 employment doing web design or advertising? I’d expect those people to stay employed doing largely what they were doing, just much faster.

Now in the long term it’s probably not good news for graphic design students or aspiring animators, but I’m under the impression their chances of actually making it were pretty low anyways.

I don't think this is going to be that big of a blow to the average artist. In fact, I think this will be much like other digital tools, which have allowed below-average artists to punch above their weight. AI will be quickly adopted by these folks. Their overall art will improve, and they'll be able to pump out a lot more content. But they'll likely suck at doing revisions, as the AI probably isn't going to be built with that in mind. So the average artist will be able to step in, using AI to create ideas and starting points, and then build off of that. AI will be the go-to for reference images.

And you'll have AI whisperers who are incredibly good at constructing prompts to get great results from AI.

I think artists largely fall into two camps: people who produce things that appeal to others, and people who produce things that appeal to themselves. Sometimes, in rare cases, the people who do their own art are able to appeal to the masses; and truly great artists can influence what appeals to the masses. When it comes to dealing with clients who are commissioning a work, some artists try to shove their vision on the client, while others are able to take what their clients want and replicate it perfectly. But the great artist is able to take what a client wants, filter it through themselves, and produce something the client didn't explicitly ask for, but really wanted. Or something like that.

Anyways, over the course of the next few years, I imagine there will be a few scandals, from niche to mainstream, of artists using AI but representing it as human-made. What I'm really looking forward to is a scandal of a web personality turning out to be a complete fabrication, and all their art/work being produced by AI. Because at the end of the day, most of the artists online are only popular because of the work they put into creating a name for themselves, cultivating an audience. It's largely marketing, with a small amount based on skill. Some of it, to be honest, is a woman having a pretty face and a prettier body. And so the real threat isn't a computer that can make great art; it's a computer that can connect with an audience in the same way an 'influencer' or 'content creator' can. The social skill needed to amass an audience, and retain them, is something that is far more valuable than drawing or any other skill. An AI that can replicate that is a direct threat to every 'influencer', whether they be an artist, streamer, Twitter journalist, etc. Though that will open the door for people with fewer social skills to do well, since they could leverage AI to create a social identity, but even if not, their inept social skills will come across as more 'authentic'.

Imagine if that happened with acting. In a couple of decades, the movies made with actual human actors in front of a camera could end up with atrocious acting, just so they seem more authentic.

It depends on how good the technology gets, and how quickly.

It’s pretty limited right now. By that I mean there’s a wide range of prompts and scenarios that simply don’t give good results at all (and aren’t helped very much by img2img, fine tuning, textual inversion, etc). That’s the main thing keeping artists’ jobs secure right now.

The better it gets, the more artists’ jobs will be on the chopping block.

Anyways, over the course of the next few years, I imagine there will be a few scandals, from niche to mainstream, of artists using AI but representing it as human-made.

Already here, technically:

https://www.washingtonpost.com/technology/2022/09/02/midjourney-artificial-intelligence-state-fair-colorado/

So the average artist will be able to step in, using AI to create ideas and starting points, and then build off of that. AI will be the go to for reference images.

The problem with this reasoning is that AI capabilities scale up FAST. Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.

And artists who use it as a tool are actually helping it learn to replace them, eventually! So this isn't like handing someone a tool which will make their life easier; it's hiring them an assistant who will learn how to do their job better and more cheaply and ultimately surpass them.

My favorite illustration of this is something called Centaur Chess.

Early chess engines would occasionally make dumb moves that were obvious to human players. Even when their overall level improved enough to beat the top human players they still often did things that skilled players could see were sub-optimal.

This meant that in the late 90s / early 00s the best "players" were human-computer teams. A chess engine would suggest moves, then a human grandmaster would make their move based on that - either playing the way the computer suggested, or substituting their own move if they saw something the computer had missed.

But as AI continued to develop the engine's suggestions kept getting better. Eventually they reached a point where any "corrections" were more likely to be the human misunderstanding what the computer was trying to do rather than a genuine mistake. Human plus computer became weaker than the computer alone, and the best tactic was to just directly play the AI's moves and resist the temptation to make improvements.
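The centaur workflow can be sketched as a simple decision rule (a toy illustration; `centaur_move`, the evaluation function, and the threshold are all invented for the example, not taken from any real centaur-chess protocol):

```python
def centaur_move(engine_move, human_move, engine_eval, threshold=0.5):
    """Toy centaur-chess policy: play the engine's suggestion unless the
    human's alternative evaluates clearly better by the engine's own metric.

    engine_eval: hypothetical function mapping a move to a score in pawns.
    """
    # As engines improved, this 'correction' branch fired less and less
    # often -- and when it did fire, it was usually the human being wrong.
    if engine_eval(human_move) > engine_eval(engine_move) + threshold:
        return human_move
    return engine_move

# Toy evaluations: the engine slightly prefers its own move, and the
# human's override doesn't clear the threshold, so the engine move wins.
evals = {"Nf3": 0.3, "e4": 0.2}
print(centaur_move("Nf3", "e4", evals.get))  # Nf3
```

The endgame of centaur chess is effectively this function with the threshold raised to infinity: the override branch never helps.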

Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.

https://xkcd.com/605/

Here's another relevant XKCD:

https://xkcd.com/1425/

8 years ago when this comic was published the task of getting a computer to identify a bird in a photo was considered a phenomenal undertaking.

Now, it is trivial. And further, the various art-generating AIs can produce as many images of birds, real or imagined, as you could possibly desire.

So my point is that I'm not extrapolating from a mere two data points.

And my broader point, that AI will continue to improve in capability with time, seems obviously and irrefutably true.

And my broader point, that AI will continue to improve in capability with time, seems obviously and irrefutably true.

I'll give a caveat, here. AI will certainly get better within its existing capabilities and within some set of new capabilities, but there are probably at least some capabilities that will require changes in type rather than degree, or where requirements grow very quickly.

These examples are easier to talk about in terms of text. GPT-3 is very good at human-like sentences, and GPT-4/5 will definitely be much better at that. It will very likely handle math questions better. It more likely than not will still fail to rhyme well. It is also unlikely to hold context for 50k tokens (eg, a novel) in comparison to GPT-3's ~2k (ie, a long post), because the current implementations go badly quadratic. There are some interesting possible alternative approaches/fixes -- that Gwern link is as much about them as the problem -- but they are not trivial changes to design philosophies.
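The "badly quadratic" point can be made concrete: standard self-attention compares every token with every other token, so the score matrix grows with the square of the context length. A back-of-the-envelope sketch (not any specific model's real memory numbers, which also depend on heads, layers, and precision):

```python
def attention_matrix_cells(context_len: int) -> int:
    """Each of n tokens attends to all n tokens, giving an n x n score
    matrix per head per layer -- the quadratic term in question."""
    return context_len * context_len

# A long forum post vs. a novel-sized context:
print(attention_matrix_cells(2_000))   # 4,000,000 cells
print(attention_matrix_cells(50_000))  # 2,500,000,000 cells -- 625x more
```

Going from a ~2k to a ~50k context is a 25x increase in length but a 625x increase in this term, which is why it isn't just a matter of turning a dial.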

Very interesting.

I do wonder if certain architectures/frameworks for machine learning will start to break as they exceed certain sizes, or at least see massively diminished returns that are only partially solved by throwing more compute at them, indicating there are issues with the core design.

It is interesting to consider that no human can hold the full text of a novel in their head: they make notes, they have editors to help, and obviously they can refer back to and refine the manuscript itself.

It more likely than not will still fail to rhyme well.

Well this, I'd assume, is because it can't have any way to know what 'rhyming' is in terms of the auditory noises we associate with words, because text doesn't convey that unless you already know the sounds of said words.

Perhaps there'll be some way to overcome that by figuring out how to get a text-to-speech AI and GPT-type AI to work together?

Well this, I'd assume, is because it can't have any way to know what 'rhyming' is in terms of the auditory noises we associate with words, because text doesn't convey that unless you already know the sounds of said words.

Unfortunately, it's a dumber problem than that. Neural nets can pick up a lot of very surprising things from their source data. StableDiffusion can pick up artists and connotations that aren't obvious from its input data, and GPT is starting to 'learn' some limited math despite not being taught what the underlying mathematical symbols are (albeit with some often-sharp limitations). GPT does actually have a near-encyclopedic knowledge of IPA pronunciation, and you can easily prompt it to rewrite whole sentences in phonetic pronunciation. And we're not talking about a situation where these programs try to do something rhyme-like and fail, like matching up words with a large number of overlapping letters without understanding pronunciation. Indeed, one of the few ways people have successfully gotten rhymes out of it involves prompting it to explain the pronunciation first. (Though note that this runs into, and very quickly fills up, the available Attention.) Instead, GPT and GPT-like approaches struggle to rhyme even when trained on a corpus of poetry or limericks: the information is in the training data, it's just inaccessible at the scope the model is working at. Either it transparently copies, or it doesn't get very close.

Gwern makes the credible argument that (at least part of) GPT's problem is that it works in fairly weird byte-pair encodings, which avoid hitting some of those massively diminishing returns as early as it would have had it been trained on phonetic or character-level minimum units, but at the cost of completely eliminating the ability to handle or even examine certain sub-encoding concepts. It's possible that we'll eventually get enough input data and parameters to just break these limits from an unintuitive angle, but the split from how we suspect human brains handle things may just mean that BPEs at this scope cause bad results in this field, and a better workaround needs to be designed (at least where you need these concepts to be examined).

((Other tools using a similar tokenizer have similar constraints.))
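A toy illustration of why byte-pair-style encodings can hide sub-word structure. The merge vocabulary below is invented for the example; real BPE vocabularies are learned from data and differ between models:

```python
# Invented toy vocabulary: 'light' and ' night' each compress to a single
# token ID, so their shared '-ight' ending never appears as a unit.
toy_vocab = {"light": 101, " night": 102, "ligh": 103, "t": 104}

def toy_tokenize(text, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return tokens

# 'light' and ' night' rhyme, but they encode to unrelated integer IDs
# with no shared pieces -- from the model's perspective, the common
# ending simply doesn't exist in the input.
print(toy_tokenize("light night", toy_vocab))  # [101, 102]
```

The rhyme information is recoverable from the raw characters, but the model never sees raw characters, only the opaque IDs.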

How does this work? My understanding was that the only "learning" that took place is when the model is trained on the dataset (which is done only once, requiring a huge amount of computational resources), and any subsequent usage of the model has no effect on the training.

I'm far from an expert here.

If they want to make the AI 'smarter' at the cost of longer/more expensive training, they can add parameters (i.e. variables that the AI considers when interpreting an input and translating it into an output), and more data to train on to better refine said parameters. Very roughly speaking, this is the difference between training the AI to recognize colors in terms of 'only' the seven colors of the rainbow vs. the full palette of Crayola crayons vs. at the extreme end the exact electromagnetic frequency of every single shade and brightness of visible light.

My vague understanding is that the current models are closer to the crayola crayons than to the full electromagnetic frequency.
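The crayon analogy can be made literal with a toy classifier: fewer "parameters" (here just the number of color bins, which is an invented stand-in; real parameters are learned weights) means coarser distinctions:

```python
# Toy version of the color analogy: a coarse 'model' can only resolve
# the visible spectrum into as many categories as it has bins.
RAINBOW = ["violet", "indigo", "blue", "green", "yellow", "orange", "red"]

def classify(wavelength_nm: float, bins: list) -> str:
    """Map a visible wavelength (~380-700nm) into one of `bins`,
    using equal-width bins (a simplification; real color boundaries
    are not equally spaced)."""
    span = (700 - 380) / len(bins)
    index = min(int((wavelength_nm - 380) / span), len(bins) - 1)
    return bins[index]

print(classify(530, RAINBOW))  # green
print(classify(690, RAINBOW))  # red
```

With seven bins, 530nm and 550nm are both just "green"; with thousands of bins they would be distinguishable. More parameters buy the model finer-grained distinctions in the same way.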

Tweaking an existing model can also achieve improvements; think in terms of GANs.

If the AI produces an output and receives feedback from a human or another AI as to how well the output satisfies the input, and is allowed to update its own internals based on this feedback, it will become better able to produce outputs that match the inputs.

This is how a model can get refined without needing to completely retrain it from scratch.
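A minimal sketch of "update from feedback" on a one-parameter model. This is a toy gradient step on a made-up objective, not how real fine-tuning or RLHF pipelines are implemented:

```python
def feedback_update(weight, inp, target, lr=0.1):
    """Nudge the weight so that weight * inp moves toward target.

    Here 'target' stands in for the feedback signal: how far the
    output was from what was asked for.
    """
    output = weight * inp
    error = output - target
    # Gradient step on squared error (error ** 2) with respect to weight.
    return weight - lr * error * inp

w = 0.0
for _ in range(50):
    w = feedback_update(w, inp=1.0, target=2.0)
print(w)  # converges toward 2.0
```

Each round of feedback shrinks the remaining error, which is the same basic loop (at vastly larger scale) that lets a deployed model be refined without retraining from scratch.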

Although with diffusion models like DallE, outputs can also be improved by letting the model take more 'steps' (i.e. run it through the model again and again) to refine the output as far as it can.
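Schematically, "more steps" in a diffusion-style loop looks like repeated partial denoising. The denoiser below is a stand-in (real models predict noise with a trained network and follow a specific noise schedule):

```python
def toy_denoise_step(noise_level: float) -> float:
    """Pretend each pass removes 20% of the remaining noise."""
    return noise_level * 0.8

def run_steps(steps: int, noise: float = 1.0) -> float:
    """Run the toy denoiser repeatedly, as a diffusion sampler runs
    its model once per step."""
    for _ in range(steps):
        noise = toy_denoise_step(noise)
    return noise

print(round(run_steps(10), 3))  # ~0.107 of the original noise remains
print(round(run_steps(50), 3))  # essentially noise-free
```

This is also why step count trades quality against compute: each extra step is another full pass through the model for diminishing reductions in noise.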

As far as I know there's very little benefit to manually tweaking the models once they're trained, other than to e.g. implement a NSFW filter or something.

And as we produce and concentrate more computational power, it becomes more and more feasible to use larger and larger models for more tasks.

We ran a natural experiment on the alien invasion thing recently and while nobody went explicitly pro alien, caring about the invaders was definitely blue coded and ignoring was red coded.

I fully expect that if actual aliens showed up, at least one of the tribes would decide that being ruled by the aliens would be strictly superior to being ruled by their political rivals, and so would become vehemently pro-alien.

Especially if the aliens are capable of exerting God-like power.

That's a big enough issue to completely reconfigure the tribes. "Our benefactors" can be super based if they want, I'm not living under alien rule. Especially if they have that level of power over us. Vigilo Confido.

The issue with the COVID analogy is that people had very different reasons to fall on either side. If the measures weren't coercive it would have played out very differently culturally.

Eh, that's not how I remember it.

At first, caring about the invaders was red coded, and blue tribe laughed and mocked them. When they weren't calling them racist. Blue tribe wanted the population to come out to super spreader events to show how not-racist they were.

Then half time was called, and the tribes switched sides of the field.

Now red tribe had decided all the measures to protect from the aliens weren't proportional to the threat the aliens posed. And blue tribe said red tribe was murdering people. And was still racist.

One possible model of the situation is that AI will be so disruptive that it should be thought of as being akin to an invading alien force.

I agree we'd be better off if everyone thought that way, but the way I see it is that anyone that defects from Team Humanity has a shit ton of power to gain in the short term. To extend your analogy, the "pro-alien weirdos" would also be getting Alien arms and supplies. And if it's not team Blue or team Red, I'm sure team CCP can pick up the slack.