Wouldn't the obvious stance be "we aren't the progressives of the past"? Residential schools in Canada are amply documented, and were certainly pushed by what would've been a progressive mindset back in the day.
The issue is that "we aren't the progressives of the past" is the stance of the progressives of today. So saying that doesn't escape one from repeating the mistakes of the past; it's how you repeat the mistakes of the past.
A question that intrigues me as well. My guess is that it will be entirely forgotten the same way that the pedo rights movement of the '70s was. Sure, every once in a while someone will dig out some receipts, and it will be seen as that weird thing that apparently happened in the past, but it will not be something pinnable on the progressive movement.
I wonder about this. Unlike the 70s or any time before the 21st century, the dialogue and commentary around this is largely done on the internet, which is very easily accessible. Memory holing something that can be looked up with a single click of a hyperlink on your phone is harder than doing so for something you'd have to look up old newspapers or journals in a library.
Yet it certainly seems doable. Stuff like the Internet Archive can be attacked and taken down, or perhaps captured, thus removing credible sources of past online publications. People could also fake past publications in a way that hides the real ones through obscurity. Those would require actual intentional effort, but the level of effort required will likely keep going down due to technological advancements. More than anything, the human tendency to be lazy and indifferent about things that don't directly affect them in the moment seems likely to make it easy to make people forget.
I wonder how much people in the 20th century and before were saying "We're on the right side of history" as much as people have been in the past 15 years. Again, people saying that has never been as well recorded as it has now. It'd be interesting to see in the 22nd century and later some sort of study on all instances of people saying "this ideology is on the right side of history" and seeing how those ideologies ended up a century later.
It was a joke.
I see, it must have gone over my head, but that's not an unusual experience for me with jokes, unfortunately. So is it that you were just being ironic, and that your meaning was the opposite, that the mysticism around art being imbued with a part of the artist's soul is still quite common in artists' circles, with a part of Duchamp's soul being in that toilet just as much as, e.g. part of Van Gogh's soul being in his self portrait?
If your view is that we need to redefine what 'stealing' is in order to specifically encompass what AI does then yes, you can make the argument that AI art is stealing, but if you do that you can make the argument that literally anything is stealing, including things that blatantly aren't stealing.
The issue here is that when we're talking about "stealing" in the copyright/IP law sense, the only way something is "stealing" is by legally defining what "stealing" is. Because from a non-legal perspective, there's just no justification for someone having the right to prevent every other human from rearranging pixels or text or sound waves in a certain order just because they're the ones who arranged pixels or text or sound waves in that order first.
So if the law says that it is, then it is, and if it says that it isn't, then it isn't, period.
So the question is what does the law say, and what should the law say, based on the principles behind the law? My non-expert interpretation of it is that the law is justified purely on consequentialist grounds, that IP law exists to make sure society has more access to better artworks and other inventions/creations/etc. So if AI art improves such access, then the law ought to not consider it "stealing." If AI art reduces it, then the law ought to consider it "stealing."
My own personal conclusions land on one side, but it's clearly based on motivated reasoning, and I think reasonable people can reasonably land on the other side.
My brother once put it to me this way: Imagine you have a favorite band with several albums of theirs on your top-faves list. You've followed them for years, or maybe even decades. It's not even necessary for this thought experiment, but for a little extra you've even watched or read interviews with them, so you have a sense of their character, history, etc. And then one day it is revealed to you that all of it was generated by an AI instead of human beings. How would you feel?
I think I would feel a profound sense of loneliness. I would never revisit those albums again. And I don't think this basic feeling can be hacked through with some extra applications of rationalism or what have you. This feeling precedes thinking on a very deep level for me.
I think differing intuitions on this are exactly what make this such a heated and fascinating culture war topic. My response to this thought experiment is that I'd be mostly neutral, with a bit of positivity merely for it being just incredibly cool that all this meaning that I took out of this music, as well as the backstories of the musicians who created it, was able to be created with AI sans any actual conscious or subconscious human intent.
In fact, this thought experiment seems similar to one that I had made up in a comment on Reddit a while back about one of my favorite films, The Shawshank Redemption, which I think isn't just fun or entertaining, but deeply meaningful in some way in how it relates to the human condition. If it had turned out that, through some weird time travel shenanigans, this film was actually not the work of Stephen King and Frank Darabont and Morgan Freeman and Tim Robbins and countless other hardworking talented artists, but rather the result of an advanced scifi-level generative AI tool, I would consider it no less meaningful or powerful a film, because the meaning of a film is encoded within the video and audio, and the way that video and audio is produced affects that only inasmuch as it affects those pixels (or film grains) and sound waves. And my view on the film wouldn't change either if it had been the case that the film had been created by some random clerk accidentally tripping while carrying some film reels and somehow damaging them in a way as to make the film.
This is the way I see it as well. When people say "stealing," they actually mean "infringing on IP rights," and that raises the issue of what are IP rights and what justifies them. As best as I can tell, the only justification for IP rights is that they allow for us as a society to enjoy better and more artworks and inventions by giving artists and creators more incentive to create such things (having exclusive rights to copy or republish their artworks allows greater monetization opportunities for their artworks, which obviously means greater incentive). The US Constitution uses this as the justification for enabling Congress to create IP laws, for instance.
Which is why, for instance, one of the tests for Fair Use in the US is whether or not the derivative work competes against the original work. In the case of AI art and other generative AI tools, there's a good argument to be made that the tools do compete with the original works. As such, regardless of the technical issues involved, this does reduce the incentives of illustrators by reducing their ability to monetize their illustrations.
The counterargument that I see to this, which I buy, is that generative AI tools also enable the creation of better and more artworks. By reducing the skill requirements for the creation of high fidelity illustrations, it has opened up this particular avenue of creative self expression to far more people than before, and as a result, we as a society benefit from the results. And thus the entire justification for there being IP laws in the first place - to give us as a society more access to more and better artworks and inventions - becomes better fulfilled. I recall someone saying the phrase "beauty too cheap to meter," as a play on the whole "electricity too cheap to meter" quote about nuclear power plants, and this clearly seems to be a large step in that direction.
That hasn't been a tenable position for quite some time. Duchamp took a urinal and put it in an art gallery in 1917. Probably, he did not simultaneously impart a piece of his soul into it.
I'm not sure how you justify the "probably" in the last sentence. If we posit that, say, Van Gogh left a piece of his soul into his famous self portrait through the act of painting it, how can we deny that Duchamp left a piece of soul into the urinal when he placed it in an art gallery? What's the mechanism here by which we can make the judgment call of "probably" or "probably not?"
The AI art we have right now seems to me to be more akin to waves on the beach just so happening to etch very detailed pictures into the sand by random chance; this to me is lacking the principal features that make art interesting (communication between conscious subjects; wondering at what kind of subjectivity could have led to the present work).
I think that's a perfectly reasonable way to determine whether a work of art is interesting. What I find confusing here, though, is that, by that standard, AI art is interesting! To take the beach metaphor, someone who types in "big booba anime girl" into Midjourney on Discord and posts his favorite result on Twitter is akin to someone who hovers over this beach and snaps photos using a simple point and shoot, then publishes the resulting prints that he likes (if we stretch a bit, this is all nature photography or even street photography). In both cases, a conscious person is using his subjective judgment to determine the features of what gets shared. Fundamentally, this would be called "curation" rather than "illustration," and one can certainly argue that curation isn't interesting or that it's not an art, but by the standard that it requires a conscious being using his subjective judgment to communicate something through his choices in the results, curation fits just as well as any other work of art.
This is why I believe there's something more to it than that and alluded to the mysticism in my previous comment.
That would have to depend on the specific principle at hand. If it's, say, that training an AI model on public data is stealing, then perhaps the evidence would be if they approve of AI art tools confirmed to have been trained only on authorized images, even if that approval causes them to face the ire of their peers who still disapprove, or even costs them commissions.
Fundamentally, this has existed for about 2 years, though the software to make it easy to do is more recent. I haven't used Photoshop, but I believe it essentially does that with Firefly, and for free tools, the Krita (free and open-source) extension for Stable Diffusion does this pretty well. However, actually getting a "good looking" picture out of it is still not likely to be a one-step process, but rather one requiring iterations and intentional inpainting.
What you're talking about is a version of what's referred to as IMG2IMG, which is exactly what it sounds like; in fact, it's the same process as TXT2IMG, except that instead of starting with random noise, you start with the image you sketched. Early on, keeping the structure of the original image was a major struggle, but something like 1.5 years ago, a tech referred to as "ControlNet" was developed, which allows the image generation to be guided by further constraints beyond just the text prompt and settings. Many different versions of ControlNet exist, including edge detection, line art, depth map, normal map, and human pose, among others. In each, those particular details from the original image can be used to constrain the generation so that objects you might draw in the foreground don't blend into the background, or so that the person you drew in a certain pose comes out as a human in the exact same pose. It's possible to run multiple of these at the same time.
Again, in practice, these aren't going to be one-step solutions; there are various issues and weaknesses that need manual work or further iterations to make the result actually look like a good work of art. But in terms of turning, say, a crude mess of blobs into something that looks somewhat realistically or professionally rendered while following the same composition, it's quite doable.
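To make the IMG2IMG/TXT2IMG relationship concrete, here's a toy sketch in plain Python. To be clear, this is my own illustration and not a real diffusion scheduler (the linear blend, the fixed 50 steps, and the function name are all simplifications I made up); it just shows the one difference between the two modes: the starting latent and how many denoising steps get run.

```python
import random

def generation_start(strength, init_latent=None, size=64, seed=0):
    """Toy illustration of the IMG2IMG vs. TXT2IMG starting point.

    strength=0.0 keeps the sketch untouched; strength=1.0 discards it
    entirely, which is equivalent to TXT2IMG's pure-noise start.
    """
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(size)]
    if init_latent is None:
        # TXT2IMG: start from pure noise and run every denoising step.
        return noise, 50
    # IMG2IMG: partially noise the user's image. Higher strength throws
    # away more of the original structure, and only that fraction of the
    # denoising steps is run, which is why low strength preserves the
    # sketch's composition.
    start = [(1.0 - strength) * x + strength * n
             for x, n in zip(init_latent, noise)]
    return start, int(round(strength * 50))

sketch = [1.0] * 64  # stand-in for an encoded rough sketch
start, steps = generation_start(0.3, init_latent=sketch)
```

At strength 1.0 the two modes coincide exactly, which is the sense in which IMG2IMG "is the same thing as TXT2IMG."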
But there are also many non-artists who don't like AI art. Also, people who have objections to AI painting also tend to have objections to AI music and AI voice acting, even if those areas don't overlap with their personal skill set. Which is evidence that the objections are principled rather than merely opportunistic.
I don't think this follows. The only way some behavior is evidence that some belief in a principle is sincere is if that behavior is costly to the person, e.g. giving up food for some religious holiday or even the Joker setting money he stole on fire in The Dark Knight. I don't think making this kind of objection is costly to these people; if anything, it seems gainful in terms of status within their social groups. At best, it's evidence that they understand the logical implications of the principle they're espousing.
Has anyone noticed how much vitriol there is towards AI-generated art? Over the past year it's slowly grown into something quite ferocious, though not quite ubiquitous.
I honestly think it's far closer to the opposite of ubiquitous, but it certainly is quite ferocious. But like so much ferocity that you see online, I think it's a very vocal but very small minority. I spend more time than I should on subreddits specifically about the culture war around AI art, and (AFAIK) the primary anti-AI-art echo chamber subreddit, /r/ArtistHate, has fewer than 7K members, in comparison to the primary pro-AI-art echo chamber subreddit, /r/DefendingAIArt, which has 23K members. The primary AI art culture war discussion subreddit, /r/aiwars, has 40K members, and the upvote and commenting patterns indicate that a large majority of the people there like AI art, or at least dislike the hatred against it.
These numbers don't prove anything, especially since hating on AI art tends to be accepted in a lot of generic art and fandom communities, which leads to people who dislike AI art not particularly finding value in a community specifically made for disliking it, but I think they at least point in one direction.
IRL, I've also encountered general indifference towards AI art. Most people are at least aware of it, with most finding it a cool curiosity, and none that I've encountered actually expressing anything approaching hatred for it. My sister, who works in design, had no qualms about gifting me a little trinket with a design made using AI. She seems to take access to AI art via Photoshop for granted - though interestingly, I learned this as part of a story she told me about interviewing a potential hire whose portfolio looked suspiciously like AI art, which she confirmed by using Photoshop to generate similar images and finding that the style matched. She disapproved of it not out of hatred of AI art, but rather because designers they hire need to have actual manual skills, and passing off AI art without specifically disclosing it like that is dishonest.
I think the vocal minority that does exist makes a lot of sense. First of all, potential jobs and real status - from having the previously rather exclusive ability to create high fidelity illustrations - are on the line. People tend to get both highly emotional and highly irrational when either is involved. And second, art specifically has a certain level of mysticism around it, to the point that even atheist materialists will talk about human-manually-made art (or a novel or film or song) having a "soul" or a "piece of the artist" within it, and the existence of computers using matrix math to create such things challenges that notion. It wasn't that long ago that scifi regularly depicted AI and robots as having difficulty creating and/or comprehending such things.
And, of course, there's the issue of how the tools behind AI art (and modern generative AI in general) were created, which was by analyzing billions of pictures downloaded from the internet for free. Opinions differ on whether or not this counts as copyright infringement or "stealing," but many artists certainly seem to believe that it is; that is, they believe that other people have an obligation to ask for their permission before using their artworks to train their AI models.
My guess is that such people tend to be overrepresented in the population of illustrators, and social media tends to involve a lot of people following popular illustrators for their illustrations, and so their views on the issue propagate to their fans. And no technology amplifies hatred quite as well as social media, resulting in an outsized appearance of hatred relative to the actual hatred that's there. Again, I think most people are just plain indifferent.
That, to me, is actually interesting in itself. So far, the culture war around AI art doesn't seem to have been subsumed by the larger culture wars that have been going on constantly for at least the past decade. Plenty of left/progressive/liberal people hate AI art because they're artists, but plenty love it because they're into tech or accessibility. I don't know so much about the right/conservative side, but I've seen some religious conservatives call it satanic, and others love it because they're into tech and dunking on liberal artists.
The Democrat says "Come with me and you won't have to go to NASCAR races and eat McDonald's any more. You can be just like me! Wouldn't that be great?". It shows a real lack of understanding about the working class and what they value. They don't do these things because they have to. They like McDonald's!
This reminds me of the narrative I bought into about 20 years ago, when the left was pushing the idea that everyone, including those in the Middle East, just wanted liberal democracy (even if they weren't aware of it). So once freed from the religious oppressive forces keeping them down, they'd gravitate towards such a system like in America. Same for immigrants from such cultures, whose kids would see how awesome liberal democracy is and thus adopt its values. I particularly recall a (more recent, but still like a decade old, I think?) 5-hour long conversation between Cenk Uygur and Sam Harris about this kind of stuff, where Cenk was smugly telling Sam about how suicide bombers and other similar Muslim terrorists could just be won over with the benefits of Western liberal values.
I think the number of epicycles that have been required to explain the various failures and speedbumps that such a narrative has encountered in the past 2 decades shows that, no, it was rather that the people who pushed such a narrative largely just lacked the ability or willingness to appreciate the true diversity of thought that exists in humans. I don't put much weight on any sort of sociological study anymore, but I suspect that the findings that liberals in America have a hard time modeling how conservatives think in a way that doesn't exist in reverse might be pointing at something that's true. Likewise for the cliche that "liberals think conservatives are evil; conservatives think liberals are stupid."
All I know comes from West Wing and I have a feeling that the reality is way more regarded than the typical mass media depiction.
I recall talking to someone in the industry in some social event like a decade ago and being told that real life is much closer to Veep than to West Wing, except that Veep depicted everyone as far more competent than the real-life versions. I imagine they were being facetious, but I chose to take it at face value and believe it unironically, and the older I get, the more I think that was correct.
I actually had a friend of mine confuse this a couple of years ago. He had never heard the term before, and when I explained to him the generation it was a label for, he commented how stupid it was to make the label based on a video chat app. I had to inform him that the term predated 2020 and came from a combination of "Generation Z" and "boomer."
I'm just mystified by the idea that Harris is so certain that young men, especially young black men, would benefit from greater availability of recreational marijuana, that she has made it a highlight of her campaign.
I don't think either Harris or Trump or any particular politician that's running for office has any reason to care if policies they propose would actually benefit anyone. I think the implication of Harris making this a highlight of her campaign isn't that young black men would benefit from greater availability of recreational marijuana, but rather that pushing for greater availability is more likely to cause young black men, as well as people who believe that young black men are disproportionately likely to go to prison for marijuana use, to give her their votes.
This epitomizes general differential expectations of conservatives and liberals. Conservatives are regarded (and to a shocking degree, regard themselves) as lacking in agency to the point of being almost animalistic.
Liberals, though. They're supposed to be better, smarter, more accountable. Apparently.
They're supposed to be adults in the room.
As a liberal (both in the classical sense and in the liberal/conservative dichotomy sense), I feel like this is exactly the correct state of things. Because the only good justification I see for picking a particular side is if one believes that that side is, in some real meaningful sense, better than the other side. And liberals being actually responsible for getting decisions right, being the adults in the room who think through their ideas and the consequences of implementing them, and conservatives being animalistic emotional creatures following their base whims and needing faith and tradition and religion to keep them from falling to their base impulses, is one of the most meaningful ways to differentiate the former as better than the latter.
Because if you get rid of that, then what are we left with - just that this set of ideas labeled L is better than this other set of ideas labeled C? But how could I justify holding such a belief, if the process by which those L ideas were produced wasn't, in some meaningful way, better than the process by which those C ideas were produced? Because I've reasoned to myself that those L ideas are better than those C ideas? Why should anyone, especially myself, who grew up in an environment that was biased heavily towards L ideas and away from C ideas, trust that my reasoning on this preference is sound, when the more likely explanation is that I have a set of preferences inculcated in me by my society, which I've used motivated reasoning to justify as "correct" in my mind?
Now, I've seen enough to recognize that most people on any side are just tribalists blindly following their animalistic urges, but even so, in the world of ideology and politics, I'll always insist on double standards, where my side is held to a higher standard than the other one, so as to make myself feel more secure that I've actually chosen the correct side. Otherwise, it's basically guaranteed that I've just chosen the side that happens to match up with my preferences and reasoned my way backwards that it's the correct one (even with double standards, this isn't off the table, but it at least helps to make me feel somewhat more secure in it).
The econ 101 explanation, which I believe is pretty much correct, is that we always want some inflation and never want deflation. Deflation means that cash just naturally accrues value - the $100 in my hand today is worth more tomorrow if I don't spend it. So I don't spend it unless absolutely necessary. And if everyone does this, then the economy slows down, there's less growth, and everyone suffers from the fewer goods being produced and services being rendered. Inflation has the opposite effect, where currency constantly loses value; so the most value you can get out of that $100 is right now, which puts pressure on you to look for the right opportunity to trade it for a useful product or service right now.
Of course, too much inflation is well known to cause serious issues. Income tends to be sticky and laggy - most people's salaries aren't pegged to inflation, but rather get raised once every period of time, and so if inflation is very high, then most workers suffer. Which is kinda what happened the last few years. This is why the US Federal Reserve's dual mandate includes hitting a 2% inflation rate - it's positive, but not so positive as to cause issues, at least most people seem to think so.
So eventually, yeah, a loaf of bread will cost a million dollars. If a loaf of bread were to cost $2 today, then at 2% inflation, that'd take a little over 660 years. Much like how movie tickets cost a literal nickel like a century ago and cost $15 today. But the idea of inflation is that, when a loaf of bread costs $1,000,000, the average income will also be (1,000,000/2)*(average income today), so things will remain the same in relative terms. In practice, it doesn't work that perfectly.
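The compounding claims above can be checked in a few lines. A quick sketch (the $2 loaf, the 2% rate, and the nickel-to-$15 ticket are the figures from the paragraphs above; the 100-year span for tickets is my own rough assumption):

```python
import math

# Years for a $2 loaf to reach $1,000,000 at a constant 2% annual inflation:
# solve 2 * 1.02**n = 1_000_000  =>  n = log(500_000) / log(1.02)
years = math.log(1_000_000 / 2) / math.log(1.02)
print(round(years, 1))  # ~662.7, i.e. "a little over 660 years"

# The movie-ticket aside: a nickel to $15 over roughly a century implies
# an average annual price growth of (15 / 0.05)**(1/100) - 1.
ticket_rate = (15 / 0.05) ** (1 / 100) - 1
print(f"{ticket_rate:.1%}")  # ~5.9% per year, well above the 2% target
```

Note that ticket prices compounding faster than 2% is consistent with the point that individual goods don't track the overall index perfectly.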
Sticking only to the sports aspect, I personally don't like the use of AI or non-AI computer tech to make officiating decisions more accurate. I see sports as fundamentally an entertainment product, and a large part of the entertainment is the drama, and there's a ton of entertaining drama that results from bad officiating decisions, with all the fallout that follows. It's more fun when I can't predict what today's umpire's strike zone will be, and I know that there's a chance that an ace pitcher will snap at an umpire and get ejected from the game in the 4th inning to be replaced with a benchwarmer. It's more fun if an entire team of Olympians with freakish genes and even freakier work ethic who trained their entire waking lives to represent their nations on the big stage have their hopes and dreams for gold be dashed by a single incompetent or corrupt judge. It's more fun if a player flops and gets away with it due to the official not recognizing it and gets an opponent ejected, resulting in the opposing team's fans getting enraged at both the flopper and the official, especially if I'm personally one of those enraged fans.
Now, seeing a match between athletes competing in a setting that's as fair as possible is also fun, and making it more fair makes it more fun in that aspect, but I feel that the loss of fun from losing the drama from unfair calls made by human officials is too much of a cost.
That just puts the cart before the horse. Trump has every incentive not to be fair or unbiased, not only because he could keep some small hope of actually overturning the result, but also to muddy the waters and retain his clout within the Republican party ("I'm not a loser, I'm a victim!").
Right, and that's exactly why if Trump himself were to say that he lost fair and square, that would convince the vast majority of Republicans, I believe. Very few people outside of him and others vetted by him have the credibility to certify a Trump loss as legitimate, and I don't really see a way for Democrats or other elements of the government to gain that kind of credibility in a short enough time frame to matter.
I think if the reins of any election audit were handed over to Trump himself, with the final report requiring his sign-off, this would convince all but the most hardcore of Republican partisans/conspiracy theorists that a Trump loss in the election was the correct result. I don't know if such a thing would be unconstitutional, though; if it isn't, then Congress should be able to pass whatever laws necessary to allow such an audit to happen.
Au contraire, I think if this guy were placed in a position by Harris to ask her the exact same question for a full hour, it would be of great benefit to all American voters.
I had a vague recollection of the same thing, but I thought I might be confusing it with Romney's 47% remark that was surreptitiously recorded and released. From my Googling, Time doesn't mention any secret recordings for Hillary's remark. Says it was at a fundraiser, but it doesn't seem like it was a closed event, and a full transcript of her speech is also in the article, which points to it not being surreptitiously recorded.
What if we take the hardball metaphor seriously, and the interviewer tells the interviewee straight-up, after each non-answer, "Okay, so you just struck out. Want to try again?" And as the interview progresses, the interviewer brings up the scorecard and reminds the interviewee of their performance so far and perhaps their need to hit a grand slam now if they want to win?
More realistically, when a politician non-answers a question like in some of these clips, I'd like it if the interviewer explicitly called it out and refused to move on until it was answered in a way that a reasonable layman would understand as "answered." There are probably too many incentives against any interviewer in a position to interview anyone who matters actually doing this sort of thing, though.
This is the question I always have in mind when I see ideologues of any stripe popularize poll numbers that show their preferred candidate for an upcoming election in the lead and denigrate polls that show the opposite. There's certainly a "celebrate the home team winning" aspect that I understand - when the Red Sox have a good record or have a big lead against the other team, I, too, like to remind other Red Sox fans of this if the context is appropriate, and we both have a positive experience from it.
But unlike professional sports, elections are things that the fans actually have pretty direct input on the outcome of. So one would think that a committed fan of a particular politician or party would try to behave in a way as to increase the odds of their preferred team winning. Which, I think, would lead one to the exact opposite of the abovementioned behavior; present the polls that show your favorite team losing as the most dependable, most reliable polls that everyone needs to be paying attention to all the time, and present the other ones as fake and flawed and maybe even part of a conspiracy theory to keep people on your side complacent.
But I also see an argument for the opposite case, that, as one Osama Bin Laden said, IIRC, "When people see a strong horse and a weak horse, they instinctively side with the strong horse." So presenting your favored politician as "stronger" in the polls could actually lead other people into learning that they genuinely, in good faith, agree with that politician's ideology, and thus they become more likely to vote for them.
I admit I haven't looked hard, but I've yet to see any empirical evidence for which factor is stronger and by how much. As it is, the fact that it feels really really good to celebrate your favorite team politician being in the lead (certainly it feels far better than "celebrating" the opposite) makes me highly skeptical of any evidence or arguments that such celebration also conveniently helps your favorite team politician win in the upcoming election.
Genuinely having a good-faith belief that XYZ is right and then fighting for XYZ is how someone takes orders from a hivemind, though, whether that be progressive or conservative or any other ideology or way of thinking. And this, to me, is the sticking point of the issue I have as a progressive with the movement that's called progressive: the point of progressivism is progress, which means moving forward, not just moving in some direction and then declaring that direction as forward. Doing the former, instead of doing the latter while honestly but mistakenly believing that it's the former, requires actually acknowledging this risk and finding ways to mitigate it. A risk which can never be reduced to zero, or even all that close to zero, but which can still be reduced through things like empiricism and discourse.
As you say, though, this is useless unless progressives are convinced that they actually took orders from a progressive hivemind, or at least acknowledge the very real risk that they are taking such orders, which seems about as likely as a snowball's chance in hell right now. The fact that this is the state of things seems pretty insane to me, akin to a world in which, say, Muslims can't be convinced that there is only one god who is called Allah, or Christians can't be convinced that Jesus Christ is the son of God and died for our sins.