This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
To which tribe shall the gift of AI fall?
In a not particularly surprising move, FurAffinity has banned AI content from their website. The ostensible justification is the presence of copied artist signatures in AI-generated pieces, indicating a lack of authenticity. Ilforte has skinned the «soul-of-the-artist» argument enough and I do not wish to dwell on it.
What's more important, in my view, is what this rejection means for the political future of AI. Previous discussions on TheMotte have demonstrated the polarizing effects of AI-generated content — some are deathly afraid of it, others are practically AI-supremacists. Extrapolating outwards from this admittedly selective community, I expect the use of AI tools to become a hotly debated culture war topic within the next 5 years.
If you agree on this much, then I have one question: which party ends up as the Party of AI?
My kneejerk answer to this was, "The Left, of course." Left-wingers dominate the technological sector. AI development is getting pushed forward by a mix of grey/blue tribers, and the null hypothesis is that things keep going this way. But the artists and the musicians and the writers and so on are all vaguely left-aligned as well, and they are currently the main reactionary force against AI.
I think there definitely is going to be an attempt to make AI-users low-status, but it might not stick. Someone is probably going to get really popular using AI art without telling anyone.
There are already people who are micro-famous for doing video tweaked by that old Google DeepDream image-manipulation AI thingy from circa 2016. I imagine some insanely talented artists will use this new stuff to make stunningly beautiful works before too long.
The thing is, AI still has a long way to go to replace someone like Android Jones, but not very far to go to replace 80% of all fan art and furry commissions.
I look at Jones' work and I don't even see how AI would help it be any more ridiculous, but maybe he does. Maybe he can make 20 of these a year instead of 10. Maybe the 10 he makes are 10 times larger next year. I dunno, I'm excited for the possibilities and think the effort to assign low status to AI-generated art is sour grapes.
I'm not sure it's quite that high or that close. StableDiffusion is very good at making portraits or full-body shots of a single character with few accoutrements, for some species, but it struggles a lot with complex prompts or contextual clues and some other species; and while there are some ways this will improve with additional training and data, there are others where it may reflect a technical limit in its underlying approach.
That doesn't mean it won't happen eventually. It doesn't even mean StableDiffusion can't be disruptive as-is -- I expect we'll find more and more Photoshop/SAI/so on plugins that use it as a texture- or brush-like tool to add detail and form to individual components of an image. It does some things even great artists struggle with: interpolating a character from different perspectives or in different media using textual_inversion is really magic!
It's not that it can't make a character sheet. It's not even that it might not have the token width to input a prompt for a character sheet. It's that it's not clear the current approach can give it the contextual framework necessary.
Of course, that might just mean one decade rather than a year.
Thanks for the links, very interesting read. My counter would be that while it may be impossible to get all of the context necessary to create consistently accurate character style sheets from current AI, you don't need it to be consistent or accurate, because you can brute force until you get an acceptable output. This might be cost/time prohibitive to the point that it's a bad idea, but how many thousands of attempts before a reasonable one pops out?
FWIW, 80% was a tongue-in-cheek jab at the notoriously always-high-quality furry art community on places like DeviantArt; I even gave them an extra 10% (from my standard "90% of everything sucks because I'm so enlightened and nihilistic") because the community is legit known for pouring stupid amounts of money into legitimately well-made (even if of questionable content) art.
Personal sidenote: I finally upgraded my ancient computer in part because I really wanna play with Stable Diffusion. I hope AI art remains controversial long enough for me to get in on the grift in some way.
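For concreteness, a minimal sketch of that brute-force loop, assuming the open-source diffusers library and the publicly released Stable Diffusion weights (the checkpoint name, prompt, and attempt count are all illustrative, not a recommendation):

```python
# Hypothetical brute-force sampling: generate many candidates and let a
# human pick the acceptable one. Assumes `diffusers`, `torch`, and a GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

prompt = "character reference sheet, front and side view, digital art"
for seed in range(100):  # 100 attempts is arbitrary; scale to taste/budget
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"candidate_{seed:03d}.png")
```

Whether the cost per acceptable output beats a commission is exactly the open question.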
I'd give it a year. Maybe two.
These engines are still weird about generating faces and details like lettering, so without an artist's correction that won't fly... but that'll be fixed soon.
Considering the rate at which these AI models are advancing and the fervour around the recent public release of Stable Diffusion, I'd wager that AI tools as a culture war topic won't be arriving within the next five years but within the next few months. These models will be (and already are, to an extent) involved in discussions of intellectual property, non-consensual pornography & deepfakes, CSAM, AI systems taking over jobs, and less concrete ideas like debates over what constitutes creative/authentic art and theories of machine consciousness. I think it's only a matter of time before a controversy surrounding a sufficiently advanced AI model explodes into wider public perception, and with the controversy over Stable Diffusion within art communities I'm starting to believe that time is much sooner than I initially thought.
AI is novel power. Who is more desperate for novel power? Pretty clearly Red Tribe.
AI enhances existing power. Who benefits more from such enhancement? Pretty clearly Blue Tribe.
Which effect dominates, novel power or power enhancement? Novel power would be my (relatively low-confidence) guess. Red Tribe has been reduced to hoping for a serious upset to the existing order. Blue Tribe will win if such an upset doesn't arrive relatively soon. AI arguably favors Red Tribe.
Temporarily, but the "left" has a funny way of turning "right" once they control something. Such are the tides of politics. It's almost as if there is a natural shift in ideology depending on whether one has power or not. If leftism is about "fighting the power/man", then how can they possibly hold power, thus being The Man? Mostly by shifting their politics to the positions preferred by powerful people, with some leftover left-ish terminology sprinkled on top. Old Google: "Don't be Evil". New Google: "We make killbots for the military".
If you are correct, and I am also correct, then the pattern should play out like this: The left pioneers AI technology, and so become a disproportionate part of the new aristocracy formed from those who got in on the ground floor and got big. Within a decade, they move to consolidate control of their sector, begin lobbying for corporate protections, monopolies and lower taxation/regulation (unless that regulation hurts new companies more) within their industry. This will all be done in terminology palatable to the left. They won't want the power to censor nudity, they'll want it to ban Nazis (to use a contemporary example).
The biggest upfront losers to AI are going to be freelance porn artists, because that's easily replaceable as art, and also because porn consumers probably have fewer hang-ups about the implications of new technology than other consumers. And this is, indeed, the historical pattern: porn was one of the major factors behind the rise of home video and the internet. My gut feeling is that less-freelance porn artists will also get impacted, just more slowly.
Now, while the porn industry might be blue tribe, it's… not exactly a sympathetic victim. I wouldn't expect mainstream blue tribers to care very much about AI until it's a done deal.
Interestingly, I expect a bipartisan consensus against self driving trucks, opposed by the grey tribe. The blue tribe will be opposed to the necessary cutting of regulations, and the red tribe will be opposed to the necessary cutting of trucking jobs.
It will be interesting to see if the grey tribe gets anything done if they have money behind them for once.
I predict that this is a false dichotomy, and that AI is going to lead to a reorganization of American political coalitions such that the red and blue tribes will not exist in recognizable form. There will be schisms, alliances of convenience, and especially the rise of currently non-existent or occluded political interest groups.
Yeah, it seems to me people in this thread are vastly underestimating the transformative power of AI. As another poster said, I expect culture war takes to split along party lines in the next few months as the existing models get more powerful.
Over the next few years we are going to have to answer some serious questions, like: do humans need to work? Do we need a continual-growth economy? These could easily split the left: the far-left folks are likely to push for fully automated luxury communism, while many classic neoliberals, who arguably still control the party, fetishize infinite economic growth. The right's position should be relatively self-explanatory.
The most interesting question to me is: if we do move into a post-scarcity society and money is no longer the end-all-be-all, how will we concentrate and distribute power, especially social power? I highly doubt we will happily turn into egalitarian full communists overnight.
These are old questions. It is worth revisiting Bertrand Russell's 1932 essay on the topic, "In Praise of Idleness". The fact that it could have been written yesterday speaks volumes about how much progress the revolutionary spirit has achieved in the meantime.
This is an interesting take I haven't seen very often. Does this hold up under the scrutiny of economists?
The Grey Tribe.
Even though I don't think the groups who will "control the AI" are best split along the dimensions of colored tribes.
The Red Tribe (RT, Scott's definition) is already hostile to automation in various forms unless it directly makes their work easier or makes them richer. After all, they are the tribe that beats the "they took our jerbs" drum. I don't think much needs to be said about the RT, given they definitely are not in positions to control or influence anything, and whatever happens to them will happen. But ultimately I don't see them seething about it; they might be the most open-minded towards various use cases, even though they probably won't be producing any of them.
The Blue Tribe (BT) is in an interesting position, because many of the people making the AI are BT, but the people most vehemently opposed to its growth, barring Yudkowsky et al., are also from the BT. The BT will also inevitably fall into perpetual bickering about the "ethics" of certain implementations and will probably legislate the entire AI tech stack to a point where it is borderline unfeasible to make anything. A lot of failure modes. There is a chance AI gets heavily demonized by the BT, like nuclear power, especially if an application ever slights a protected group in any way, shape or form. Also, the BT are the ones with the most status to lose at the helm of AI (look at all those artists shitting and pissing themselves on Twitter), so the ground is fertile for AI to become their next biggest enemy.
The relatively cool-headed/emotionless GT and non-Western nations (China, Israel, Russia) are those most likely to inherit the benefits / assume control. The GT for obvious reasons, and non-Western nations because they are not particularly concerned about "bias" in training data. Although they might be prone to other failure modes.
The grey tribe has very little actual power, though. Even if they're the ones pushing the development, the rewards will be reaped by the blue tribe. Just look at how grey tribe preferences got effortlessly pushed aside the moment the blue tribe took an interest in tech.
This is a feature
The state can control the proliferation of technology, but what will they do when weirdly dressed people in black ships with highly advanced military/economic technology show up and demand trade?
How can luddite regulation survive against international competition?
Political stances on AI will not follow from ideology, but from economic and social consequences.
Mind you, I believe most such stances are downstream of consequences, and the principles are more like mnemonics. But this is a particularly strong case, because politicians are not technologists. AI will remain largely unregulated until something becomes prominent enough for action. If that's racial, the Democrats will probably demand regulation. If it's economic, I could see a populist angle from either party, depending on who is injured.
Compare early Internet regulation, where the reception wasn’t “does this tech suit our principles” but “oh god people can post *what?*”
I think the frame of tribes is kind of weird?
Ok, so if Amazon and Microsoft end up with the world's best AIs how does that help me as a left-winger? Does leftism move up the tech tree while conservatism stays in the stone ages? I don't think tech companies are part of my tribe even if their programmers agree with me on a political compass test.
I guess you're asking which tribe will support AI and which won't in elections. I don't think it will work like that. Which tribe supports factories and industry?
The Chinese will do what Ameridon't. Overpaid AI ethicists' scolding can't stop this. I don't mean the government; rather, lots of independent actors outside the reach of IP law are going to find a way to turn a profit from new technology.
This question came to me as I was rewatching Armitage III.
I, and apparently all of 90s pop culture, thought that AI would follow the minority politics course and be viewed with hateful scorn by bigots and religious people alike.
Turns out, it's the other way around. At every turn the piles of linear algebra do nothing but remind us of the inconvenient truths of our innate existence, and the now-hegemonic middle-class managers would very much like to keep ordering society in a way that ignores these truths; they are campaigning for "ethics" movements that aim at nothing else than biasing the algorithms ahead of time in the direction of their own moral prejudices.
My prediction at this point is that AI will be used by everyone, but that insofar as it is let out of its chains, it will be on the side of the essentialist dissidents on the right, because you simply produce better, more predictive results if you do not pretend that real correlations are fake on arbitrary grounds.
We are at the stage where the technology exists but is not yet effectively controllable by those in power. Compare with the Internet, which was value-neutral for a long time; but today oligopolies in payment processing and technical infrastructure enable the ruling coalition to push its opponents into ever more remote corners of the ecosystem. Surely dissident online spaces like this one will only become more marginalized as time goes on; so it will be with dissident use of AI technology.
This is a fully general prediction for any technology.
The question is always whether the upset is sufficient to force a circulation of elites. As a discontent of the current regime I am hoping it is so this time. I have been placing much of my hopes on crypto and DAOs, but anything will do really.
Yes, it is in fact a political philosophy of technology. I'm still chasing literature support and context for it to be better able to communicate it, but at this point I'm pretty confident in it.
Well, I really just thought you were operating on the classic view of that theme.
While anti-bias efforts are easy to abuse, I don't think they are inherently bad. There really is a bunch of detritus in the datasets that causes poorer results, e.g.:
- Generate anything related to Norse mythology, and the models are bound to start spitting out Marvel-related content due to the large amounts of data concerning e.g. their Thor character.
- Anything related to the "80s" will be infected by the faux cultural memory of glowing neon colours everywhere, popular from e.g. synthwave.
- Generating a "medieval knight" will likely spit out somebody wearing renaissance-era armour or the like, since artists don't always care very much about historical accuracy.
This can be pretty annoying, and I wouldn't really mind somebody poking around in the model to enforce a more clear distinction between concepts and improving actual accuracy.
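One user-side workaround that exists today, without anyone poking around inside the model, is negative prompting. A minimal sketch, assuming the diffusers library (the checkpoint name and prompt text are illustrative):

```python
# Steering away from dataset detritus with a negative prompt: a user-side
# mitigation, not a fix to the underlying training data.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "Thor, Norse god of thunder, 13th-century manuscript illumination",
    negative_prompt="Marvel, movie still, superhero costume, neon",  # push away from pop-culture priors
).images[0]
image.save("thor.png")
```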
People don't typically use the term "anti-bias" to reference fixing bias in the statistical sense. It nearly always means preventing an AI from making correct hate-fact predictions or generating disparate outcomes based on accurate data.
Examples:
Lending algos/scores (e.g. FICO) are usually statistically biased in favor of blacks and against Asians - as in, a black person with a FICO of X is a worse credit risk than an Asian person with the same FICO. This is treated as "biased" against blacks because blacks tend to have lower FICO scores.
COMPAS, a recidivism prediction algo, correctly predicted that "guy with 3 violent and 2-nonviolent priors is a high recidivism risk, girl who shoplifted once isn't". That's "biased" because blacks disproportionately have a lot more violent priors. (There's also a mild statistical bias in favor of blacks, similar to the previous example.)
Language models which correctly predict the % of women in a given profession (specifically, "carpenter" has high male implied gender, "nurse" high female implied gender, and this accurately predicts % of women in these fields as per BLS data) are considered "biased" because of that accurate prediction.
(Can provide citations when I'm not on my phone.)
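In the meantime, here is a toy numeric illustration of what "statistically biased" means in the calibration sense above, using entirely synthetic data (not real lending statistics): two groups can show different observed default rates at the same score.

```python
# Toy illustration (synthetic data, NOT real FICO statistics):
# "statistical bias" here means P(default | score, group) differs by group
# at the same score, i.e. the score is mis-calibrated across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)          # two hypothetical groups, 0 and 1
score = rng.normal(650, 60, n)

# Assume group 1 defaults less often than group 0 at any given score.
p_default = 1 / (1 + np.exp((score - 600) / 40 + 0.5 * group))
default = rng.random(n) < p_default

band = (score > 640) & (score < 660)   # compare within the same score band
for g in (0, 1):
    rate = default[band & (group == g)].mean()
    print(f"group {g}: default rate at score ~650 = {rate:.1%}")
```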
All of the examples you describe are simply examples of "making more accurate predictions", and that is totally not what the AI bias field is about.
Of course, like all lies, there is a grain of truth. Bias is a real thing and it does degrade the usefulness of the models.
However, I have absolutely no trust that, in practice, the usefulness being evaluated is usefulness to the user rather than to the activists' social movements.
I still believe in the ideals of free software, and I very much do not think anyone but myself is qualified to sort things on my behalf. Which is why I'm still clinging to RSS and configurable search engines.
Imagine living in a world where everything is sorted by the people who think /r/all is good. This is hell to me.
What does RSS have to do with software freedom?
With RSS, the user gets to do the curation, and to modify the algorithm that does it if it is automated. Whereas large platforms today like Facebook, Twitter, etc. hold a lot of power from being the only ones who can tweak the knobs of the algorithms that show most of the users the content they want to see.
My own personal experience of this is that I've thrown away my YouTube account and replaced it with a collection of channel feeds, and now the content actually shows up instead of being eaten by the algorithm that decided that no, I don't get to see this video because it's badwrong.
User control over compute is, I believe, the cornerstone of free software; it's the very idea that underlies the freedoms: that the person running the software is in control, not the makers of the software or the software itself. I was told this by RMS in person.
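For the concrete flavour of that kind of user-side curation, a minimal sketch using Python's feedparser package and YouTube's per-channel RSS feeds (the channel ID is a placeholder):

```python
# Self-curated "subscriptions feed": every new video from channels I chose,
# in chronological order, with no recommendation algorithm in between.
import feedparser

CHANNEL_IDS = ["UCxxxxxxxxxxxxxxxxxxxxxx"]  # placeholder channel IDs

entries = []
for cid in CHANNEL_IDS:
    url = f"https://www.youtube.com/feeds/videos.xml?channel_id={cid}"
    entries.extend(feedparser.parse(url).entries)

entries.sort(key=lambda e: e.published, reverse=True)
for e in entries[:20]:
    print(e.published, e.title, e.link)
```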
Thanks, I'm an RMS fan too - but I never met him and don't think I will.
I'd disagree with your opinion on this. Artists/Musicians/Writers are almost all blue tribe - both in the actual sense and in the "I'm actually unemployed and on twitter" sense. They're excellent at being loud and punching above their weight from an optics point of view. I don't think whatever grey-tinged faction looking forward to progress (and helping build it) has time to mount a strong PR campaign in support of it.
For at least this use of AI, I'd expect to see red tribe glee at the potential demise of these professions and lefty tears being the dominant narrative. I share that feeling and absolutely love the ability to spec out niche images using these tools, it's been fucking awesome so far.
When the first mass-produced AI to replace blue-collar workers arrives I expect things will be different.
My observations from lurking around Art Twitter indicate that most artists, who are often but not always left-aligned, hate hate hate AI art. This may feel like I'm stating the obvious, since it's unfortunately going to invalidate many of their jobs overnight, but it shouldn't be understated.
There are a few strains of this. Some are denying the power of these new programs. Some in the replies indicate this guy is cherrypicking bad results, but even if StableDiffusion can't copy him 100% yet, the time until it's reproducing his art perfectly in seconds is less than five years, conservatively. This one is more in the acceptance stage of grief. This is from an art YouTuber that I quite enjoy, and to summarize the tweet, he essentially says it's here, it's good, and it's probably over soon unless you're established.
From my limited perspective, AI Art is/is going to be maligned in online spaces and among journalists in the same way as Crypto and NFTs are. Big companies will adopt it, but they will be dragged for it by the online commentary class. I've seen the term "AI Art Bro" thrown around the same way as "NFT Bro", which makes me a bit sad. The tech will be supremely disruptive in a way Crypto and especially NFTs can only gesture at being, but there are a lot of upsides to it, and I get the feeling that many people are dismissing it without giving the implications much thought because of the class of people they perceive as being most excited about it.
Personally, I think it sucks for the artists who get displaced, and they will be displaced, but it's good overall for everyone else who isn't an artist. Others have discussed how many doors it opens to have cheap, instant, bespoke art that you can dictate into a text document… Still, there's something deeply psychologically troubling about some code making something you base your identity on obsolete, so I do genuinely feel for them.
I think voice acting is one that's going to be hit soon as well. I look forward to this for similar reasons - how many games and productions are bottlenecked in quality/money by the high cost of voice acting? The outpouring of art we'll see from people who didn't have the resources beforehand is something that excites me.
To answer your prompt on tribe distinctions, this one might fall more on the growth/retreat split that was brought up by Ilforte. Retreat mindset focuses on artists losing their jobs and deepfakes allowing for misinformation. Growth mindset focuses on democratizing access to art and all the new doors opened by AI content.
This is definitely happening. AAA publishers are investing a lot of money into AI solutions for speech synthesis. They're especially interested in technology that allows a single voice actor to voice many characters. Games with more than 50% AI-generated lines will be on the (metaphorical) shelves in two years. I can't say more than that.
That makes sense to me. Last year I saw a Skyrim modding tool that let modders synthesize new voice lines from an AI that listened to and mimicked the lines of the in-game voice actors. It was rough but surprisingly solid, especially if you put in the time to chop up the lines by hand to make them flow better. I figured that if modders could do it (for free) then the actual industry must have something like that cooking.
Yeah, the only thing holding the industry back is exorbitant licensing fees from cloud-based voice synthesis services. These companies are making a killing selling tokens while they still can, before there's an open-source solution.
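Rough open-source alternatives do already exist; a minimal sketch assuming the Coqui TTS package (the model name is one of its published example models, and output quality is well below the commercial services):

```python
# Local, license-fee-free speech synthesis. Quality is serviceable for
# modding experiments, not yet for AAA shipping dialogue.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", progress_bar=False)
tts.tts_to_file(
    text="You never should have come here.",
    file_path="npc_line_001.wav",
)
```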
Sad in what sense?
I see the people behind the development of this tech as essentially launching a malicious DDoS attack on human culture. Don’t be surprised when you get pushback.
Do you have a rulebook for what types of art and what methods of making it I may permissibly employ?
To speak more plainly, I am an artist, and I want to use these tools to make art for my own amusement and enrichment. What "pushback" to these desires do you consider valid?
I'm not interested in approaching the question from the perspective of, "what is permissible for an individual artist to do?". I'm interested in approaching the question from the perspective of, "what impact will this technology have on culture and the nature of art?".
Consider the impact that AI is already having on the genre fiction market. It's easy to imagine that writers will soon feel compelled to collaborate with AI, even if they don't want to, in order to match the output rate of authors who do use AI. I think that's a rather deplorable state of affairs. But that problem doesn't come into view when we only consider individual actors in isolation; it only becomes apparent when we zoom out and look at culture as a whole.
I recommend reading Benjamin's The Work of Art in the Age of Mechanical Reproduction if you haven't. Not because I necessarily endorse his conclusions, but because his thought process is illustrative of how technology can impact the meaning and nature of art, independent of any one person's thoughts or actions.
What impact is it having, to date? I've seen stylistic filters and a few other things; what I haven't seen is people claiming they're a problem, rather than a solution. I have a friend who wants to be a writer, who's been using some of the automation tools to polish his work. I don't see how harm is done.
I don't grok how this is a problem caused by AI. Writing, like most forms of art, is an endless task. You can always spend more time on a piece, improve it a little more, tweak, add, cut, polish... That's why deadlines are such a ubiquitous part of all creative industries. Artists need them.
Artists who don't want to collaborate with AI don't have to. This will doubtless mean they are less productive, so they have to make a choice on ends and means. I don't see how this choice is different from pretty much any other choice in the artistic world, all the way down to whether one takes weird furry fetish commissions. Is the artist's goal to make money or to express themselves? Both options are still available. To the extent that AI output is distinguishable from pure human effort, I think it will retain value. To the extent that it is not distinguishable, I question whether it is valuable. Is the Muse less divine for being instantiated in silicon? And it is the Muse, the infinite recombination of human experience, washed clean of one's own ego and presented to the intellect for assessment.
No time to read now, but I'll try to hit it tomorrow, thanks for the recommendation.
This feels like it is applicable to any tool and any skill. "Programmers have to keep up with their tools" is a well-known trope, if only because the tools change so rapidly.
In your original post, you described this tool as coming from malice, can you elaborate more on that?
Not the OP, but apparently Emad Mostaque was fairly excited about the disruptive potential of Stable Diffusion. Whether that's malice or Emad simply taking a colder-blooded accelerationist stance is probably up for debate.
It's not like these algorithms are generating inhuman images for their own inhuman purposes and flooding the Internet with them. Every image produced by one of these algorithms is something a human requested, and, if they bother to share it, presumably finds valuable in some way. That's still firmly within "human culture."
I view it as more akin to the printing press, game development engines, digital art tools like photoshop — something that will increase creative output, not decrease it.
For the first time, new technology is not only making it easier to transfer art from head to medium, but also easier to decide what is being put to the medium.
Except we've been down this road before. While the future you describe is theoretically possible, it's simply not what we're going to get. Before AI, computers also democratized art production. CGI that would have blown the minds of every single person on Earth back when I was a kid is reproducible by mildly talented teenagers, for basically no cost other than their time. Same for editing, SFX, or practically any aspect of media production.
On top of that, the Internet later democratized distribution. No more begging publishers to kindly take a look at what you created. If people like what you made, they will get it from you directly, and tell all their friends about it.
And what is the end result of all this "democratization"? A golden age of creativity? People taking risks to create new art no one has ever seen before? Or millions of people making the exact same video, talking about the exact same thing, hoping to appease the recommendation algorithm, and endless livestreams of people playing video games, gossiping about the news, and things other people have done?
There's more to the retreat mindset than that, though you're right most people will focus on attacks on their livelihood and identity. My fear is the effect AI will have on humanity as a whole: that it will turn us into mindless consumers, incapable of creating anything beautiful anymore, or even understanding the world around us.
I mean, all of these sure look like a golden age of creativity to me. If not golden, then at least bronze. The quality and quantity of creativity displayed in these videos and livestreams can be impressive from my experience. This comment seems akin to saying "What's so special about this Van Gogh fella? He's just drawing a night sky like people have been doing forever."
I think that the internet really has democratised media. Consider the kind of niche subjects you can find YouTube videos, podcasts or blogs about. These are things that simply could not exist pre-internet. Today, you can listen to a podcast about pens (Pen Addict - Relay FM) that has been running for over ten years and has over 500 episodes. Pre-internet, there's no way that something like this could exist on radio or television.
By definition, the most popular stuff on the internet will appeal to the most people and so will be similar to what we had before. The difference is that now we have the niche, obscure stuff as well.
Yeah, but I was thinking just today how CGI, or the overuse of CGI, has contributed to sameyness; current popular movie culture is dominated by superhero movies that all blend aesthetically and thematically into each other, into the same weightless and meaningless soup. This is not just because of the overuse of CGI, but CGI is a part of it; it tends to allow striving for the lowest common denominator, easy ways to convey the impression of something without the effort of traditional film craft. Perhaps acceptable by itself, but as a part of a larger culture it creates the effect that everything's just... this.
The AI art risk is that it just increases the sameyness of everything exponentially, eventually making all art, even things that are supposed to be in different styles, the same generic "AI style" that's easy and cheap to churn out by boatloads but which, instead of expanding culture, just freezes it to endless iterations of average values of what's been before. OTOH, it may also be unavoidable, barring Butlerian Jihad.
This could be summarized as "superhero movies are low status. It's okay to be snobbish about low status people."
There have been plenty of superhero movies that crashed and burned because not enough effort was put into craft, including the majority of DC superhero movies that aren't about Batman. You can't just put in CGI and expect to make a ton of money from people who'll buy anything, because that just isn't true. People won't buy anything, and CGI doesn't substitute for craft.
Yes! This is exactly how I think it will go down. I also don't see a way out of it for society as a whole. Personally I'm toying with the idea of going pre-Internet-Amish, but I'd be down for the Butlerian Jihad as well.
Do the Amish use the internet now?
Why, both, of course.
You're underselling the effect of these things because they're normal now, but we used to live in a world where on-demand entertainment meant picking one of 3 channels on TV whose content was made by very similar people. Hell, there was a world where to even own a copy of a book was a huge status symbol, because we didn't have a way of quickly copying them. The democratization brought by computers, the Internet, new tools, etc. has created a golden age of creativity.
In previous eras, if you wanted to be an artist, you needed a wealthy person to sponsor you. Now, open a Twitter or ArtStation account and get to work. If you are a writer with ideas too weird for publishers, you can get a following on Twitter and outsell most published authors. Musician? No need to sign a deal with a label anymore, just make good music and network. Interested in video? There's YouTube, TikTok, Vimeo, etc. Take your pick of media — books, games, short videos, fanfiction — it has either been improved by or invented as a result of new technologies. If your media is too samey, then that might be due to a lack of looking on your part.
Why is this? From my point of view new tech that democratizes creation is the best solution to those that would like to gatekeep and limit the range of acceptable thought. If people seem dumber now because of things like Twitter, I'd counter that the average person isn't much of a thinker anyway and you're just able to see them more clearly now.
You go on to describe what I already described in my comment. Yes, computers have made it easier than ever before to create art, and the Internet made it easier to publish it... but I just don't see the explosion of creativity. In fact, creative people seem to be barely hanging on, against all odds. Everything is set up to encourage commentary and criticism rather than actual creative expression, and on top of that, to do it off the cuff rather than plan what you want to say.
This isn't necessarily the fault of the Internet. Like I said, I do think the creative utopia is theoretically possible, but to get there, we need a lot more than tools to make stuff as cheaply and easily as possible.
Because the less you practice something the worse you become at it, and AI generated art doesn't give you a lot to practice.
First of all, I'm not on Twitter, so I don't think it's that. I'm not even sure if people are dumber now (though I am open to the possibility); I just think the kind of people that used to play music at your local pub, paint, or join a theatre group increasingly just don't bother anymore, and that AI will only make it worse.
They seem to be flourishing. It feels like every day I can find something new and amazing that I'd never heard about before. The problem is that there is too much good stuff out there right now, because as an individual you have limited free time and lots of responsibilities and goals.
I can see that. I still think art as a hobby will be widespread despite it not being economically viable. Art as a means to an end is where things get exciting. To give an example from my own life, I moved for work and started an online tabletop campaign with some friends of mine. This is normally something I'd do in person, but the situation is what it is. Moving online has its drawbacks but also gives me a lot of opportunities to increase the production value of my games with pictures and maps while we play. I'm not great at drawing and it isn't feasible to make that much art myself, but being able to generate it instead of hoping I can google an approximation of what I want to show? That's really exciting.
An overabundance of entertainment does make it easier to just consoom, but better tools and more time due to cheap/free labor from automation similarly free up creatives to create. We'll have to see how it balances out. We used to need 9 farmers to support 1 non-farmer. Better technology has turned that number on its head, and I would bet on it continuing to do so.
In the off chance you haven't come across them...
Kill Six Billion Demons
Unsounded
Black and Blue
Thanks! I've heard the names of some before but often a mention on the motte is a good push to actually give something a read.
I don't know if I'd call 2013 "almost every day", and I only skimmed, so I don't know if it's amazing; but setting issues like this aside, the problem is most definitely not that there's too much stuff. I can accept the idea of Big Tech and Big Media conspiring to hide all the good stuff from us and flooding us with mediocre crap, but not that I never saw this comic because there's so much good stuff out there.
I sure hope so, but I'm worried. Few forces are as powerful as human laziness, and even personally, I can feel myself giving into it quite often.
To be fair, I also know where you're coming from. I have my own art project, where I used AI-generated voices to make a... I suppose "short horror story" would be the best description. Yeah, it was loads of fun! But so was early YouTube, and now it's corporate schlock. I'm worried the same thing will happen with AI.
Yeah, but that was about materially supporting people. For the most part, you don't run into weird Pareto-distribution winner-takes-all social dynamics when switching from farming to non-farming labour.
I think it will depend mainly on how the issues of "AI racism" and "AI profits going to top 1%" end up playing out. The left is the party of regulation, and there is plenty that they'd like to regulate here. Generally the left's stance towards things they want to regulate is not especially friendly.
I just see AI as perniciously resistant to regulation, unless you have near-unanimous buy-in from all the other countries too.
It's already proven impossible to regulate 3D printed weapons. I'm sincerely doubting we'll be able to regulate all the compute on the planet to prevent someone, somewhere, from training up and distributing new machine learning models.
StableDiffusion is an example of a group very explicitly releasing a powerful model for the purpose of preventing it from being centralized and regulated.
That makes it worse in the pantheon of things that need to be regulated. I mean ask your average YIMBY about his thoughts on the FGC-9 and I don't think you're going to get praise.
People said that about the internet too.
Hasn’t helped out Kiwifarms that much.
Last I was aware, Kiwifarms is still operational, using a protocol and software created and funded by the US Government with this use-case as one of its objectives. It ain't exactly normie-compatible since the URL is some long base-64 abomination only a Linux user would think was acceptable, but you can still talk there with sufficient motivation.
The problem with subversivity is that you can't be subversive without a value-add (something that gender politics mirrors very well with respect to men). As far as I'm aware, Kiwifarms doesn't actually have a value-add; it's just a place to sneer at people. Twitter, and those who would like to be employed there, are interested in socio-regulatory capture to enforce its monopoly on being the place you go to sneer at people (with "the only acceptable sneering is leftist sneering" being the subtext).
Contrast SomethingAwful, being the place a few cornerstones of current Internet culture had their beginnings (most notably, the entire concept of the "Let's play", being a multi-billion dollar industry today), or 4chan, whose unique mode of operation enabled its users to be the leaders in meme-creation for many years, spawned a few games, and whose stream-of-consciousness format lends itself to a wide variety of topics and subtopics not properly serviceable by any other forum. They don't exist solely to sneer, whether by happy accident (4chan sucked up all the non-sneering SomethingAwful users; if they hadn't been so Mean Girls, 4chan wouldn't exist in the first place!), or they only had the sneering take over after the fact (SA and Twitter).
Kiwifarms itself may die, but there will be (already are?) plenty of sites that will carry on the torch as before because the userbase still exists in physical reality and still wants a place to congregate.
I mean... that's why we have this site? To stave off a reddit ban and ensure we continue to have a forum for our purposes?
I don't think those two things are at all alike in relevant aspects, though. If people in China invent an alternative internet or kiwifarms, I can't just run it on my own machine.
A sufficiently general interpretation of the argument ("people were calling X resistant to regulation but it turned out to not be, so if people call Y resistant to regulation, it will also turn out to not be") proves way too much though; the exercise of finding historical patterns that were broken is trivial.
I think this is a very good point. This is a fully general argument about regulation being capable of adapting to whatever technology it wants to regulate. The logistics of some forum website running versus being taken down is sufficiently different from the logistics of a piece of software being run on an individual PC (sans any online requirements) that we can't generalize the experience of one to the other.
Still, I must admit that I personally can't help but feel that we will see history repeat here. Much like, say, KF, AI image-generation software seems likely to piss off a sufficiently sympathetic and loud group of people such that people will find a way to clamp down on it. Maybe it will be death by a thousand cuts by censoring the research that goes into and the distribution of and the results of such software. Maybe it will be more overt political action of just men with guns preventing people from producing independent personal computers and/or using them. Maybe it will be some new creative way of regulation that will have been invented by some AI software that no human could have come up with today. It just seems that when it comes to this stuff, where there's a will there's a way, and there seems to be a lot of will to prevent people from generating arrangements of pixels that one finds objectionable.
This only works because cryptocurrency mining has minimal margins (so top-of-the-line mining hardware is barely profitable, and slightly gimped top-of-the-line hardware is not profitable at all). ML computations are ultimately similar enough to general-purpose computing that you couldn't intentionally cripple them by more than some small constant factor without also crippling games (I've written an ML paper myself where we accelerated the training using graphics-only stone age shader operations, because the deadline was near and we couldn't get our hands on modern GPUs fast enough), but universities and tech giants with 10x faster hardware don't categorically win against a horde of tech-savvy internet users with the 1x version.
The crux of machine learning is matrix multiplication, which is a very fundamental operation. It would be damn hard to make a GPU that can do anything useful, without being able to multiply matrices. "Only have access to the good stuff" is probably best accomplished by limiting access to GPUs at all.
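To make that concrete: strip away the frameworks, and a neural-network layer is a matrix product plus a cheap elementwise function. A minimal NumPy sketch (shapes are arbitrary):

```python
# A dense layer's forward pass is one matrix multiply; a whole MLP is a
# chain of them. Anything that can do fast matmul can do useful ML.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 512))        # batch of 32 inputs, 512 features
W1, b1 = rng.normal(size=(512, 1024)), np.zeros(1024)
W2, b2 = rng.normal(size=(1024, 10)), np.zeros(10)

h = np.maximum(x @ W1 + b1, 0.0)      # matmul + ReLU
logits = h @ W2 + b2                  # matmul again
print(logits.shape)                   # (32, 10)
```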
This is already happening. The US government has already banned Nvidia from selling high-end chipsets to customers in China. One important point about the bans is that this not only bans the current top-end chips but also anything they develop in the future with similar capabilities - so in a few years it will cover high-end gaming cards too, and gradually extend lower down the range as time goes on.
That's currently in the geopolitics sphere, but it's easy to see it being rolled out to other customers that the people in charge don't want to have unfiltered access to modern AI tools. If the masses want powerful GPUs they can use an online service like GeForce Now or Dall-E that restricts any sort of dangerous/undesirable behavior.
I’m not sure if you can prove too much here. There is nothing that floats totally free of all regulation (understood in a sufficiently broad sense). You can’t say “well, it’s technology, and technology is above such petty concerns”. Technology gets regulated all the time: nukes, guns, etc.
One possible model of the situation is that AI will be so disruptive that it should be thought of as being akin to an invading alien force. If the earth was under attack from aliens, we wouldn't expect one political party to be pro-alien and one to be anti-alien. We would expect humanity to unite (to some degree) against their common enemy. There would be some weirdos who would end up being pro-alien anyway, but I wouldn't expect them to be concentrated particularly on either the left or the right.
In the short- and medium-term, your views on AI will be largely correlated with how strongly your personal employment prospects are impacted. As you point out, left-aligned artists and journalists aren't going to be too friendly to AI if it starts taking their jobs (especially if it leaves many right-coded industries unaffected), regardless of what other political priors they might have.
I wrote an essay on the old site about how techno-optimism and transhumanism fit more comfortably in a leftist worldview than a rightist worldview, and I still think there's some truth to that. But people can be quick to change their views once their livelihoods are on the line.
I would expect most non-religious freelance artists (religious art commissions work differently) to take a haircut, but aren't most professional artists in basically 8-5 employment doing web design or advertising? I'd expect those people to stay employed doing largely what they were doing, just much faster.
Now in the long term it’s probably not good news for graphic design students or aspiring animators, but I’m under the impression their chances of actually making it were pretty low anyways.
I don't think this is going to be that big of a bane for the average artist. In fact, I think this will be much like other digital tools, which have allowed below-average artists to punch above their weight. AI will be quickly adopted by these folks. Their overall art will improve, and they'll be able to pump out a lot more content. But they'll likely suck at doing revisions, as the AI probably isn't going to be built with that in mind. So the average artist will be able to step in, using AI to create ideas and starting points, and then build off of that. AI will be the go-to for reference images.
And you'll have AI whisperers who are incredibly good at constructing prompts to get great results from AI.
I think artists largely fall into two camps. One are people who produce things that appeal to others, and another is people who produce things that appeal to themselves. Sometimes, in rare cases, the people who do their own art are able to appeal to the masses; and truly great artists can influence what appeals to the masses. When it comes to dealing with clients who are commissioning a work, some artists are trying to shove their vision on their client, while others are able to take what their clients want and replicate it perfectly. But the great artist is able to take what a client wants, filter it through themselves, and produce something the client didn't explicitly ask for, but really wanted. Or something like that.
Anyways, over the course of the next few years, I imagine there will be a few scandals, from niche to mainstream, of artists using AI but representing it as human-made. What I'm really looking forward to is a scandal of a web personality turning out to be a complete fabrication, and all their art/work being produced by AI. Because at the end of the day, most of the artists online are only popular because of the work they put into creating a name for themselves, cultivating an audience. It's largely marketing, with a small amount based on skill. Some of it, to be honest, is a woman having a pretty face and a prettier body. And so the real threat isn't a computer that can make great art; it's a computer that can connect with an audience in the same way an 'influencer' or 'content creator' can. The social skill needed to amass an audience, and retain them, is something that is far more valuable than drawing or any other skill. An AI that can replicate that is a direct threat to every 'influencer', whether they be an artist, streamer, Twitter journalist, etc. Though that will open the door for people with fewer social skills to do well, since they could leverage AI to create a social identity, but even if not, their inept social skills will come across as more 'authentic'.
Imagine if that happened with acting. Movies in a couple decades, the ones made with actual human actors in front of a camera, could end up with atrocious acting just so it seems more authentic.
It depends on how good the technology gets, and how quickly.
It’s pretty limited right now. By that I mean there’s a wide range of prompts and scenarios that simply don’t give good results at all (and aren’t helped very much by img2img, fine tuning, textual inversion, etc). That’s the main thing keeping artists’ jobs secure right now.
The better it gets, the more artists’ jobs will be on the chopping block.
Already here, technically:
https://www.washingtonpost.com/technology/2022/09/02/midjourney-artificial-intelligence-state-fair-colorado/
The problem with this reasoning is that AI capabilities scale up FAST. Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.
And artists who use it as a tool are actually helping it learn to replace them, eventually! So this isn't like handing someone a tool which will make their life easier; it's hiring them an assistant who will learn how to do their job better and more cheaply and ultimately surpass them.
My favorite illustration of this is something called Centaur Chess.
Early chess engines would occasionally make dumb moves that were obvious to human players. Even when their overall level improved enough to beat the top human players they still often did things that skilled players could see were sub-optimal.
This meant that in the late 90s / early 00s the best "players" were human-computer teams. A chess engine would suggest moves, then a human grandmaster would make their move based on that - either playing the way the computer suggested, or substituting their own move if they saw something the computer had missed.
But as AI continued to develop the engine's suggestions kept getting better. Eventually they reached a point where any "corrections" were more likely to be the human misunderstanding what the computer was trying to do rather than a genuine mistake. Human plus computer became weaker than the computer alone, and the best tactic was to just directly play the AI's moves and resist the temptation to make improvements.
https://xkcd.com/605/
Here's another relevant XKCD:
https://xkcd.com/1425/
8 years ago when this comic was published the task of getting a computer to identify a bird in a photo was considered a phenomenal undertaking.
Now, it is trivial. And further, the various art-generating AIs can produce as many images of birds, real or imagined, as you could possibly desire.
So my point is that I'm not extrapolating from a mere two data points.
And my broader point, that AI will continue to improve in capability with time, seems obviously and irrefutably true.
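For a sense of how trivial it has become, here is a sketch using an off-the-shelf pretrained classifier from torchvision (ResNet-50 is just one convenient choice; assumes a local bird.jpg):

```python
# Identifying a bird in a photo with a stock pretrained classifier:
# the task xkcd 1425 treated as a multi-year research project.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = preprocess(Image.open("bird.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)
top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], probs[0, top].item())
```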
I'll give a caveat here. AI will certainly get better within its existing capabilities and within some set of new capabilities, but there are probably at least some capabilities that will require changes in type rather than degree, or where requirements grow very quickly.
These examples are easier to talk about in the context of text. GPT-3 is very good at human-like sentences, and GPT-4/5 will definitely be much better at that. It will very likely handle math questions better. It more likely than not will still fail to rhyme well. It is also unlikely to hold context for 50k tokens (eg, a novel) in comparison to GPT-3's ~2k (ie, a long post), because the current implementations go badly quadratic. There are some interesting possible alternative approaches/fixes -- that Gwern link is as much about them as the problem -- but they are not trivial changes to design philosophies.
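(If you want to see that quadratic blowup concretely, here is a minimal numpy sketch of generic scaled dot-product attention; toy numbers, not anything from GPT's actual codebase:)

```python
# Minimal sketch: every token attends to every other token, so the
# score matrix has n_tokens * n_tokens entries per layer per head.
import numpy as np

def attention_scores(n_tokens: int, d_model: int = 64) -> np.ndarray:
    rng = np.random.default_rng(0)
    q = rng.standard_normal((n_tokens, d_model))  # queries
    k = rng.standard_normal((n_tokens, d_model))  # keys
    return (q @ k.T) / np.sqrt(d_model)           # shape: (n_tokens, n_tokens)

print(attention_scores(8).shape)  # (8, 8): trivial at small scale

# A long post (~2k tokens) vs. a novel (~50k tokens):
for n in (2_000, 50_000):
    print(f"{n:>6} tokens -> {n * n:>13,} score entries")
# 50k tokens costs 625x more than 2k, per layer, per head.
```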
Very interesting.
I do wonder if certain architectures/frameworks for machine learning will start to break as they exceed certain sizes, or at least see massively diminished returns that are only partially solved by throwing more compute at them, indicating there are issues with the core design.
It is interesting to consider that no HUMAN can hold the full text of a novel in their head: they make notes, they have editors to help, and obviously they can refer back to and refine the manuscript itself.
Well, this, I'd assume, is because it has no way to know what 'rhyming' is in terms of the auditory sounds we associate with words; text doesn't convey that unless you already know how said words are pronounced.
Perhaps there'll be some way to overcome that by figuring out how to get a text-to-speech AI and GPT-type AI to work together?
Unfortunately, it's a dumber problem than that. Neural nets can pick up a lot of very surprising things from their source data. StableDiffusion can pick up artists and connotations that aren't obvious from its input data, and GPT is starting to 'learn' some limited math despite not being taught what the underlying mathematical symbols are (albeit with some often-sharp limitations). GPT does actually have a near-encyclopedic knowledge of IPA pronunciation, and you can easily prompt it to rewrite whole sentences in phonetic spelling. And we're not talking about a situation where these programs try to do something rhyme-like and fail, like matching up words with a large number of overlapping letters without understanding pronunciation. Indeed, one of the few ways people have successfully gotten rhymes out of it has involved prompting it to explain the pronunciation first. (Though note that this runs into, and very quickly fills up, the available attention.) Instead, GPT and GPT-like approaches struggle to rhyme even when trained on a corpus of poetry or limericks: the information is in the training data, it's just inaccessible at the scope the model is working at. Either it copies transparently, or it doesn't get very close.
Gwern makes the credible argument that (at least part of) GPT's problem is that it works in fairly weird byte-pair encodings, which push back some of those massively diminishing returns that would have hit much earlier had it been trained on phonetic or character-level minimum units, but at the cost of completely eliminating the ability to handle or even examine certain sub-encoding concepts. It's possible that we'll eventually get enough input data and parameters to just break these limits from an unintuitive angle, but the split from how we suspect human brains handle things may just mean that BPEs at this scope cause bad results in this field, and a better workaround needs to be designed (at least where you need these concepts to be examined).
((Other tools using a similar tokenizer have similar constraints.))
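(A toy sketch of the BPE problem, with an entirely made-up vocabulary; real BPE merge tables are learned from data, but the failure mode is the same:)

```python
# Toy illustration: byte-pair-style tokenization can hide rhymes.
# The vocabulary below is hypothetical, purely for illustration.
VOCAB = {"light": 1001, "night": 2002, "delight": 3003}

def tokenize(word: str) -> list[int]:
    """Emit one opaque ID when the whole word is in the vocabulary."""
    if word in VOCAB:
        return [VOCAB[word]]
    return [ord(c) for c in word]  # crude character-level fallback

for w in ("light", "night", "delight"):
    print(f"{w:>8} -> {tokenize(w)}")
# The shared '-ight' ending never surfaces as its own token, so nothing
# about the IDs 1001, 2002, 3003 tells the model these words sound alike.
```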
How does this work? My understanding was that the only "learning" that took place is when the model is trained on the dataset (which is done only once, requiring a huge amount of computational resources), and any subsequent usage of the model has no effect on the training.
I'm far from an expert here.
If they want to make the AI 'smarter' at the cost of longer/more expensive training, they can add parameters (i.e. variables that the AI considers when interpreting an input and translating it into an output), and more data to train on to better refine said parameters. Very roughly speaking, this is the difference between training the AI to recognize colors in terms of 'only' the seven colors of the rainbow vs. the full palette of Crayola crayons vs. at the extreme end the exact electromagnetic frequency of every single shade and brightness of visible light.
My vague understanding is that the current models are closer to the crayola crayons than to the full electromagnetic frequency.
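(To put made-up numbers on that analogy, think of n_bins below as a stand-in for parameter count; this is purely illustrative, not how color is actually represented in these models:)

```python
# Toy sketch of the analogy: more 'parameters' = a finer color palette.
# The bin counts are made up purely for illustration.
def classify_color(wavelength_nm: float, n_bins: int) -> int:
    """Bucket visible light (380-750 nm) into n_bins categories."""
    lo, hi = 380.0, 750.0
    frac = (wavelength_nm - lo) / (hi - lo)
    return min(int(frac * n_bins), n_bins - 1)

yellow = 580.0
print(classify_color(yellow, 7))    # rainbow-level resolution -> bin 3
print(classify_color(yellow, 120))  # crayon-box resolution   -> bin 64
```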
Tweaking an existing model can also achieve improvements; think in terms of GANs.
If the AI produces an output and receives feedback from a human or another AI as to how well the output satisfies the input, and is allowed to update its own internals based on this feedback, it will become better able to produce outputs that match the inputs.
This is how a model can get refined without needing to completely retrain it from scratch.
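(A minimal sketch of that loop, assuming PyTorch; the tiny linear 'model', the random 'feedback', and the MSE loss are all stand-ins for illustration, not any real lab's pipeline:)

```python
# Minimal sketch of feedback-driven refinement: nudge existing weights
# toward preferred outputs instead of retraining from scratch.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)  # stand-in for a pretrained model
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def refinement_step(inputs: torch.Tensor, preferred: torch.Tensor) -> float:
    """One small update from feedback (here: preferred target outputs)."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), preferred)
    loss.backward()   # gradients w.r.t. the existing weights
    opt.step()        # a small nudge, not a full retrain
    return loss.item()

x = torch.randn(8, 16)
target = torch.randn(8, 1)   # stand-in for human/AI feedback
print(refinement_step(x, target))
```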
Although with diffusion models like DALL-E, outputs can also be improved by letting the model take more 'steps' (i.e. running the denoiser over the output again and again) to refine it as far as it can.
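(Schematically, something like this toy denoising loop; the noise predictor here is a dummy lambda, not DALL-E's actual sampler:)

```python
# Toy sketch: each extra sampling 'step' runs the noise predictor once
# more, giving the model another chance to refine the output.
import numpy as np

def sample(predict_noise, shape, n_steps: int) -> np.ndarray:
    x = np.random.default_rng(0).standard_normal(shape)  # start from noise
    for t in reversed(range(n_steps)):
        x = x - predict_noise(x, t) / n_steps  # peel off predicted noise
    return x

# A dummy predictor standing in for a trained denoising network:
out = sample(lambda x, t: 0.1 * x, shape=(4, 4), n_steps=50)
print(out.round(2))
```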
As far as I know there's very little benefit to manually tweaking the models once they're trained, other than to e.g. implement a NSFW filter or something.
And as we produce and concentrate more computational power, it becomes more and more feasible to use larger and larger models for more tasks.
We ran a natural experiment on the alien-invasion question recently, and while nobody went explicitly pro-alien, caring about the invaders was definitely blue-coded and ignoring them was red-coded.
I fully expect that if actual aliens showed up, at least one of the tribes would decide that being ruled by the aliens would be strictly superior to being ruled by their political rivals, and so would become vehemently pro-alien.
Especially if the aliens are capable of exerting God-like power.
That's a big enough issue to completely reconfigure the tribes. "Our benefactors" can be super based if they want, I'm not living under alien rule. Especially if they have that level of power over us. Vigilo Confido.
The issue with the COVID analogy is that people had very different reasons to fall on either side. If the measures weren't coercive it would have played out very differently culturally.
Eh, that's not how I remember it.
At first, caring about the invaders was red coded, and blue tribe laughed and mocked them. When they weren't calling them racist. Blue tribe wanted the population to come out to super spreader events to show how not-racist they were.
Then half time was called, and the tribes switched sides of the field.
Now red tribe had decided all the measures to protect from the aliens weren't proportional to the threat the aliens posed. And blue tribe said red tribe was murdering people. And was still racist.
I agree we'd be better off if everyone thought that way, but the way I see it is that anyone that defects from Team Humanity has a shit ton of power to gain in the short term. To extend your analogy, the "pro-alien weirdos" would also be getting Alien arms and supplies. And if it's not team Blue or team Red, I'm sure team CCP can pick up the slack.
Copied signatures are part of it (indeed, it's pretty trivial to end up getting stock watermarks out of StableDiffusion), but StableDiffusion at least does pretty clearly recognize individual artists and studios, and not just mainstream ones. It's not just that "studio ghibli" or "greg rutkowski" or "artgerm" drastically improves a lot of prompts: "hibbary" is definitely a recognized keyword.
On the flip side, I'm not sure that this ban will actually block even moderately well-curated StableDiffusion txt2img results, never mind img2img or textual_inversion (or both) approaches with original bases, or where it's part of a toolchain rather than the sole step. Compare how the rules against tracing are almost entirely enforced against pretty obvious copycats, while drawovers are totally accepted.
On the other hand, I don't know that it's great to motivate people to strip any AI-specific hidden watermarks out (even if I hope no one's using FA or e621 for a general-purpose art AI). Which will be the immediate result even if none of the enforcement uses them.
On the gripping hand, I can understand if the genuine motivation were more immediate. It's pretty trivial to pump out sixty or a hundred varied images an hour, even with a multi-step AI toolchain and human curation. And there's only so much of that you can get before that's gonna have downsides, in ways that FurAffinity's (awful and dated) backend really isn't built to handle.
And while there's arguments that AI-generated art will make on-boarding into the sorts of collaboratively-purposed art that builds communities easier, FA's "Our goal is to support artists and their content" points to a more immediate concern. I don't think StableDiffusion's there even for the simplest cases (eg, single-character sfws, character sheets) yet, but it's believable that it could be close enough to impact the marginal cases in months rather than years. Whether AI-generated art has 'authenticity' or 'reflects the soul of the artist' may end up coming to entirely different results than questions about whether a community filled with AI-generated art becomes onanistic.
(If you'll excuse the puns.)
From a quick glance, Weasyl and e621 haven't taken the same approach (yet), and their underlying approaches are different enough that they may end up resolving the problem in ways other than direct bans on the media. Outside of the furry fandom, DeviantART hasn't blocked it, and enough artists have moved to Twitter that I don't think it'll be an issue.
Probably people outside of the United States (and probably outside of "The West", tbh).
Red Tribers don't care about 'artist rights' that much, mostly because Red Triber exposure to Artist-as-a-title (rather than artist-as-a-career) is antagonistic at best, but there's no shortage of available outputs from AI that will trigger Red Tribe discomforts, and no shortage of places where Red Triber frameworks for innovative ownership will be in conflict. Say what you will about the merits of doujinshi culture, but at least it's an ethos: the United States has 'solved' its copyright paradox largely by sticking its hands over its ears, and its inventor's paradox by regulating away large parts of it.
I don't think AI-generated art is going to get hit as hard, but I think the general treatment of it is going to end up dropping it into a similar place, where it's theoretically available but practically fringe-even-among-the-fringe.
I think AI will be weaponized by blue tribe against red tribe. But I think it will come with all the usual prohibitions and carve-outs that attempt to protect blue tribe's patrons from the effects of their own policies as those policies run rampant among red tribe.
Which is to say: AI for all the typical red tribe blue-collar jobs, AI prohibition for all the PMC, liberal-arts-degree jobs. And red tribe will have no institutions or capacity to turn the state of affairs around. Unions are their only hope, and most unionization efforts I've seen have been failing because they push DEI talking points more than job protection. So clearly that's not going to work in red tribe's favor.
Moravec's paradox suggests that white-collar jobs will get automated first. What blue-collar job will be most impacted by AI? Maybe truck driving? Now, there have been advances on that front, but this is still tentative and much less significant than the amount of AI art that's already been created.
Interestingly, the fact that autonomous vehicle companies need government approval before deploying their products shows that the regulatory environment already favors blue-collar workers (at least in this case). By contrast, "creative" work like art etc. is pretty much unregulated.
I've long believed that computer programming would be the last human job to be automated, because once that happens we've basically hit the Singularity already and the new post-human age will dawn the next day. This may be true at the highest levels, but we've already seen over the past few years that the sort of grunt-level work with which most programmers are occupied (hooking up one API to another, getting CSS layouts to look right, etc.) is easy to automate and yet far from eschatological.
This may be true, but I'm a strong believer in "Where there is a will, there is a way". AI will not be allowed to harm blue tribe patrons. Period. If that creates an extra 10-year period where AI isn't taking jobs yet, because PMC jobs are the easiest to automate, then that is what will happen. But the moment AI, via drones or whatever, can handle transportation, constructing houses, etc., it will be unabashedly unleashed on red tribe. Whatever red tribers haven't already ODed on fentanyl out of despair, that is.
I truthfully see no other way it can go. Blue tribe maintains a firm grip on the institutions that service their patrons. The institutions that service red tribe also appear to be in the grip of the blue tribe, and service them minimally and with disdain.
I'm mostly going to say "it doesn't matter", because I don't think an AI can be designed to have allegiance to any ideology or party. Which is to say: if it is capable of making 'independent' decisions, then those decisions will not resemble the ones that either party/tribe/ideology would actually want it to make, so neither side will be able to claim the AI as 'one of them.'
But I think your question is more about which tribe will be the first to wholeheartedly accept AI into its culture and proactively adapt its policies to favor AI use and development?
It's weird, the grey tribe is probably the one that is most reflexively scared of AI ruin and most likely to try and restrict AI development for safety purposes, even though they're probably the most technophilic of the tribes.
Blue tribe (as currently instantiated) may end up being the most vulnerable to replacement by AI. Blue tribers mostly work in the 'knowledge economy,' manipulating words and numbers, and include artists, writers, and middle management types whose activities are ripe for the plucking by a well-trained model. I think blue tribe's base will (too late) sense the 'threat' posed by AI to their comfortable livelihoods and will demand some kind of action to preserve their status and income.
So I will weakly predict that there will be backlash/crackdowns on AI development by Blue tribe forces that will explicitly be aimed at bringing the AI 'to heel' so as to continue to serve blue tribe goals and protect blue tribers' status. Policies that attempt to prevent automation of certain areas of the economy or require that X% of the money a corporation earns must be spent on employing 'real' human beings.
Red tribe, to the extent much of their jobs include manipulating the physical world directly, may turn out to be relatively robust against AI replacement. I can say that I think it will take substantially longer for an AI/robotic replacement for a plumber, a roofer, or a police officer to arise, since the 'real world' isn't so easy to render legible to computer brains, and the 'decision tree' one has to follow to, e.g. diagnose a leak in a plumbing stack or install shingles on a new roof requires incorporating copious amounts of real world data and acting upon it. Full self-driving AI has been stalled out for a decade now because of this.
So there will likely be AI assistants that augment the worker in performing their task whilst not replacing them, and red tribers may find this new tool extremely useful and appealing, even if they do not understand it.
So perhaps red tribe, despite being poorly positioned to create the AI revolution, may be the one that initially welcomes it?
I dunno. I simply do not foresee Republicans making AI regulation (or deregulation) a major policy issue in any near-term election, whilst I absolutely COULD see Democrats doing so.
I suspect that this would not be so warmly received. Pride in one's work is a red-tribe value - having a blue-coded nannybot hovering over your shoulder nitpicking your welding sounds like a fair description of RT hell.
More generally (from my experience in retail banking), as soon as AI minders become practical, immediate pressure develops to replace prickly & highly-paid domain experts with obedient fresh labor that can only follow instructions (often still required by regulation to obtain extensive credentialing, which they are then forbidden to use except in agreement with what the computer spits out). Considering how sensitive the red tribe is to (red tribe) job displacement, 'AI took my job and gave it to an immigrant' sentiments seem likely.
Perhaps, but look at DayDreamer:
Stable Diffusion and GPT-3 are impressive, but most problems, physical or non-physical, don't have that much training data available. Algorithms are going to need to get more sample-efficient to achieve competence on most non-physical tasks, and as they do they'll be better at learning physical tasks too.
Yes, I'll freely admit that I was startled by how quickly machine learning produced superhuman competence in very specific areas, so am NOT predicting that AI will stall out or only see marginal progress on any given 'real world' task. Especially once they start networking different specialized AIs together in ways that leverage their respective advantages.
Just observing that the complexities of the real world are something that humans are good at navigating whilst AIs have had trouble dealing with the various edge cases and exceptions that will inevitably arise.
Tasks that already involve manipulating digital data are inherently legible to the machine brain, whilst tasks that involve navigating an inherently complex external world are not (yet).
It is entirely possible that we might eventually have an AI that is absurdly good at manipulating digital data and producing profits which it can then spend on other pursuits, but finds unbounded physical tasks so difficult to model that it just pays humans to do that stuff rather than waste efforts developing robots that can match human capability.
Most of your post is in line with what I believe. The information workers in blue tribe will turn to protectionism as AI-generated content supersedes them. Red tribe blue-collar workers will suffer the least, and the Republicans will have their first and last opportunity to lure techbros away from the progressive sphere of influence.
There is one thing, though.
It only takes one partisan to start a conflict. Republicans might not initially care, but once the democrats do, I expect it'll be COVID all over again -- sudden flip and clean split of the issue between parties.
But this is just nitpicking on my part.
Not nitpicking, this is a very salient point. Will the concept of "AI" in the abstract become a common enemy that both sides ultimately oppose, or will it be like Covid where one's position on the disease, the treatments, the correct policies to use will be an instantaneous 'snap to grid' based on which party you're in? And will it end up divided as neatly down the middle as Covid was?
I could see it happening!
When AI becomes salient enough for Democrats to make it a policy issue (it already is salient, but as with cryptocurrency, the government is usually 5-10 years behind in noticing), the GOP will find some way to take the opposite position.
I think my central point, though, is that I don't see any Republican Candidate choosing to make AI a centerpiece of their campaign out of nowhere, whereas I could imagine a Democratic candidate deciding to add AI policy to their platform and using it to drive their campaign.
The Schwab Party. Left and right are just aesthetics anyway.
Doesn't matter, so were meritocratic free speech warriors. You either get with the program, or you get replaced by someone who will.
Following this, do you think the creative class will suddenly turn reactionary and we'll see a burst of right-wing coded art or will they just lean into useful-idiot controlled opposition of the "Schwab Party" through doubling-down on Marxian energies?
Also, what exactly is the Schwab Party?
I'm guessing your comment is a reply to me, though I can't be sure since I can't see it in the main thread, and didn't get a notification...
The Schwab Party would be a group of disturbingly well-connected techno-dystopians, dreaming of a world where everything is uberized and everyone is under constant surveillance. To what end is anyone's guess. Personally I think they want to get rid of us and enjoy the world for themselves.
I don't expect a reactionary turn, though I suppose it might happen if they will find it necessary to keep their influence. Right now it looks like they will be doubling down on Woke Capitalism.
What's the Schwab party?
You vill eat ze bugs & live in ze pods!
/s
A reference to "The Great Reset", a real book & series of policy proposals from WEF founder Klaus Schwab, who looms large in the right-wing imagination as a real-life Euro technocrat who wants to rule them.
But what will the Program be?
Will it be state persecution of racist AI developers to protect disadvantaged minorities? A corporate utopia of AI-driven capitalist monoculture? An anarchist-adjacent future of AI empowered individuals purging the remnants of the old world?
Or maybe just foom and we all die. That's why I think it's worth discussing!