Well, I know there's some sort of "law" some political commentator coined, that says that any organization that's not explicitly right wing eventually becomes left wing. There are certainly enough examples that calling it a "law" doesn't seem obviously ridiculous.
The part I don't understand quite so well is why it happens to such an extreme extent as that 40%-to-4% shift you say happened in journalism. From a purely cynical, selfish perspective, knowing the opposition better allows one to defeat them better - there's even a cliche saying, "Keep your friends close, but your enemies closer," that alludes to this. So if I'm cynically running a left-wing organization in order to crush the right wing, then I'm going to want to populate it with at least enough right-wingers to learn from. From a good faith perspective of wanting to make the world a better place through leftist values and policies, it's obvious that blind spots develop when you're surrounded by people with similar values and beliefs. So if I'm a bright-eyed idealist running a left-wing organization in order to improve the world, then I'm going to want to populate it with at least enough right-wingers to provide real, substantive criticism of the weaknesses and pitfalls of our values that I and people who agree with me can't recognize.
Which leads me to conclude that there are no real adults in the room, and everyone's just cynically aiming for the betterment of their own careers and status among peers, and if that results in their organization becoming ineffective or evil, then, well, hopefully that'll be after they've retired and the younger generations can deal with that.
This is the thought I had from seeing a related phenomenon in the field of entertainment, where over the past couple of years we've seen companies burn 8-10 figures producing works like the films Indiana Jones 5 and The Marvels, the TV shows Rings of Power and The Acolyte, and the video games Concord, Star Wars: Outlaws, and Unknown 9: Awakening. I would have expected that the cynical, selfish, greedy decisionmakers at the top would have put a stop to it before all that money was sunk. But, well, it's not like it's their money - it's their investors' money - and even if they were to get fired, they at least gained status among their peers by greenlighting such things. That's the best I've come up with.
I kind of agree here which is what makes this move so baffling. They know they’re not going to affect the outcome with this move, and they know that this kind of stupid reporting is only going to hurt their credibility.
Do they? What's the evidence that they're aware of the fact that this kind of thing would hurt their credibility? I remember as far back as 2016, during Trump's first campaign, I was among a tiny minority of Democrats complaining that there are more than enough honest ways to criticize and denigrate Trump, and that constantly reaching for hyperbole or even just lies would only hurt our ability to make any criticisms of him and other politicians in the future. We were shut down for "tone policing" or just ignored, and, sure enough, over the following 4 years of his presidency and continuing for 4 years after that, we've seen trust in media keep going down. And the explanation for this has always been to add more epicycles about disinformation, Russian propaganda, low-information voters, and the like, instead of just owning up to the fact that when you don't speak credibly, your credibility declines in the eyes of the audience. At some point, when someone just keeps making the same obvious mistake over and over again in a way that harms them, one has to conclude that, somehow, that mistake isn't that obvious or even understood by the person.
I also have to wonder if there's an evaporative cooling effect going on, where the journalists who could recognize the constant self-inflicted injuries to credibility that much of mainstream media engages in quit and went off to do their own thing, and thus the ones remaining are only the most deluded.
I don't know the sequence of steps, but it seems like you'd want to train a LoRA, which I believe can be done using the popular Automatic1111 UI. As best as I can tell, Stable Diffusion 1.5 is still the most popular base model on which to build LoRAs, but I think the tech exists for the more recent versions too. There are a decent number of resources in the Stable Diffusion subreddit about this sort of stuff.
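For what it's worth, here's a minimal sketch of what using an already-trained LoRA on top of SD 1.5 looks like in code, assuming Hugging Face's diffusers library rather than the Automatic1111 UI; the LoRA path and the prompt are placeholders.

```python
# Hedged sketch: assumes the Hugging Face diffusers library (an alternative to the
# Automatic1111 UI) and a LoRA that has already been trained and saved to ./my_lora.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # SD 1.5, still a common base for LoRAs
    torch_dtype=torch.float16,
).to("cuda")

# Apply the trained LoRA weights on top of the base model.
pipe.load_lora_weights("./my_lora")  # placeholder path to the trained LoRA

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("out.png")
```

The training itself would be done separately (via the UI or a training script); the point is just that the result is a small set of weights applied on top of the base checkpoint at generation time.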
They’ve already been calling Trump Hitler for 8 years. Why would it finally stick?
Indeed. I remember that, probably as far back as 2016 and certainly by 2018, joking about Democrats calling Trump "Giga-Hitler" was already considered trite. I don't know whether to even believe that this is something that counts as an "October surprise," given how banal it is. If it's an actual attempt at a coordinated smearing, it speaks to an incredible level of incompetence in the Democratic party and its media supporters - a level of incompetence so great that I wouldn't have believed it was possible until this year. Unfortunately, after what I've seen this year, it actually seems depressingly plausible that the top decisionmakers for a movement that believes this election will literally determine the life or death of democracy in America thought that this would be an effective tactic.
I have no great overarching theory, but a couple of thoughts. One is that, at least since the 90s, and I'm guessing earlier, the idea that "separate is not equal" was taught as dogma to kids due to the history of the US, i.e. Plessy v. Ferguson and Brown v. Board of Education. We took that to heart. That meant that any difference at all in how people were treated - i.e. being "separate" - was, definitionally, unequal. So treating transwomen as literally indistinguishable from women in every single way, i.e. in their sex and not just in their "gender identity," became a moral imperative.
Another is the success of the gay rights/gay marriage movement on the idea that it was an innate "born this way" thing. I remember back in high school, a friend of mine dated a girl who came out as gay after they broke up; when I talked about how he dated her back when she was straight, my friend "corrected" me by telling me that she was already gay when she dated him, she just didn't know it yet (I bought it at the time, but now I wonder how I could have taken this on faith when it's obvious that such a definitive statement about how sexual orientation works would require absolute mountains of empirical evidence to prove - I was very good at coming up with epicycles for this kind of stuff, I think). The movement to normalize trans people took the same tactic, hence the claim that, say, Bruce Jenner was a woman when "she" won the men's decathlon gold medal or Ellen Page was a man when "he" was nominated for best actress for Juno. This reinforced the idea that someone's "transness" is not tied to anything in physical reality but rather entirely up to the individual's personal judgment, which meant that autogynephiles were encouraged to and celebrated for transitioning, and such people absolutely want access to female-only spaces, and so discriminating against them on the basis that their sex was male despite their gender being otherwise became verboten.
I wish I had a simple answer, but I think there isn't a pole star to follow other than the vague notion of making things "better" in some real sense by increasing prosperity and reducing suffering for each and every individual. One obvious problem there is that these things are highly idiosyncratic and difficult to measure, but I think e.g. getting rid of anti-sodomy laws or making gay marriage a thing helps to achieve that by benefiting gay people, or having progressive taxes and welfare and socialized health care helps to achieve that by benefiting poor people. I think stuff like "equality" or "freedom" are decent enough slogans for supporting bringing up people who were considered lesser than others or who were granted fewer rights than others, but they only exist as end goals in some far-flung future where we have so much prosperity that each individual is equally free to create a literal heaven in reality for themselves. In the here and now, I think the immediate goals include figuring out which existing systems can be dismantled for easy gains (I think treating individuals on the basis of group identity is one such system that needs dismantling, which is where I diverge greatly from the modern progressive movement), or figuring out how to maintain economic growth so as both to uplift the poorest of us and to bring about that scifi post-scarcity future, or figuring out how better to advance knowledge so that we can build the tech needed to free us from our physical constraints (this, too, is where I disagree heavily with modern progressivism, as they seem all too happy to play-act at knowledge generation through a cargo cult of academics).
From a high level view, perhaps you can say that the goal is to bootstrap our way into figuring out what the metaphorical pole star is, since we've been forced to contend with the reality that the pole stars our civilization used to follow - and still follows to a great extent - were merely mirages that happened to be useful in certain contexts but also greatly harmful in certain others.
I've also said before that a progressive is someone who read Brave New World by Huxley and thought, "Hey, this seems like a pretty cool society to live in" like I did, and I think that's generally true, though that specific world probably isn't a realistic end state goal.
I think there’s room for a stable equilibrium, and it probably involves distinguishing sex from gender.
I think this was the equilibrium 10-25 years ago when I was growing up/young adult, and it's been proven to be unstable. I think the only stable equilibrium at this point would be far future scifi where literal sex change is possible.
Are you hoping for them to have an epiphany that the progressive hivemind previously ordered them to fight for things that they now know were bad, and realise that this might be happening again? (Useless without persuading them that they themselves and past progressives actually took marching orders from a progressive hivemind, as opposed to fighting for what they themselves believe to be right.)
Genuinely having a good faith belief that XYZ is right and then fighting for XYZ is how someone takes orders from a hivemind, though, whether that be progressive or conservative or any other ideology or way of thinking. And this, to me, is the sticking point of the issue I have as a progressive with the movement that's called progressive; the point of progressivism is progress, which means moving forward, not just moving in some direction and then declaring that direction to be forward. Doing the former, instead of doing the latter while honestly but mistakenly believing that it's the former, requires actually acknowledging this risk and finding ways to mitigate it. A risk which can never be reduced to zero or even all that close to zero, but which can still be reduced through things like empiricism and discourse.
As you say, though, this is useless unless progressives are convinced that they actually took orders from a progressive hivemind, or at least acknowledge the very real risk that they are taking such orders, which seems about as likely as a snowball's chance in hell right now. The fact that this is the state of things seems pretty insane to me, akin to a world in which, say, Muslims can't be convinced that there is only one god who is called Allah or Christians can't be convinced that Jesus Christ is the son of God and died for our sins.
Wouldn't the obvious stance be "we aren't the progressives of the past?" Residential schools have plenty of evidence towards their existence in Canada, and were certainly pushed by what would've been a progressive mindset back in the day.
The issue is that "we aren't the progressives of the past" is the stance of the progressives of today. So saying that doesn't escape one from repeating the mistakes of the past; it's how you repeat the mistakes of the past.
A question that intrigues me as well. My guess is that it will be entirely forgotten the same way that the pedo rights movement of the '70s was. Sure, every once in a while someone will dig out some receipts, and it will be seen as that weird thing that apparently happened in the past, but it will not be something pinnable on the progressive movement.
I wonder about this. Unlike the 70s or any time before the 21st century, the dialogue and commentary around this is largely done on the internet, which is very easily accessible. Memory holing something that can be looked up with a single click of a hyperlink on your phone is harder than doing so for something you'd have to look up old newspapers or journals in a library.
Yet it certainly seems doable. Stuff like the Internet Archive can be attacked and taken down or perhaps captured, thus removing credible sources of past online publications. People could also fake past publications in a way that hides the real ones through obscurity. Those would require actual intentional effort, but the level of effort required will likely keep going down due to technological advancements. More than anything, the human tendency to be lazy and indifferent about things that don't directly affect them in the moment seems likely to make people forget.
I wonder whether people in the 20th century and before said "We're on the right side of history" as much as people have in the past 15 years. Again, people saying that has never been as well recorded as it is now. It'd be interesting to see, in the 22nd century and later, some sort of study on all instances of people saying "this ideology is on the right side of history" and how those ideologies ended up a century later.
It was a joke.
I see, it must have gone over my head, but that's not an unusual experience for me with jokes, unfortunately. So is it that you were just being ironic, and that your meaning was the opposite - that the mysticism around art being imbued with a part of the artist's soul is still quite common in artists' circles, with a part of Duchamp's soul being in that urinal just as much as, e.g., part of Van Gogh's soul being in his self-portrait?
If your view is that we need to redefine what 'stealing' is in order to specifically encompass what AI does then yes, you can make the argument that AI art is stealing, but if you do that you can make the argument that literally anything is stealing, including things that blatantly aren't stealing.
The issue here is that when we're talking about "stealing" in the copyright/IP law sense, the only way something is "stealing" is by legally defining what "stealing" is. Because from a non-legal perspective, there's just no justification for someone having the right to prevent every other human from rearranging pixels or text or sound waves in a certain order just because they're the ones who arranged pixels or text or sound waves in that order first.
So if the law says that it is, then it is, and if it says that it isn't, then it isn't, period.
So the question is what does the law say, and what should the law say, based on the principles behind the law? My non-expert interpretation of it is that the law is justified purely on consequentialist grounds, that IP law exists to make sure society has more access to better artworks and other inventions/creations/etc. So if AI art improves such access, then the law ought to not consider it "stealing." If AI art reduces it, then the law ought to consider it "stealing."
My own personal conclusions land on one side, but it's clearly based on motivated reasoning, and I think reasonable people can reasonably land on the other side.
My brother once put it to me this way: Imagine you have a favorite band with several albums of theirs on your top-faves list. You've followed them for years, or maybe even decades. It's not even necessary for this thought experiment, but for a little extra you've even watched or read interviews with them, so you have a sense of their character, history, etc. And then one day it is revealed to you that all of it was generated by an AI instead of human beings. How would you feel?
I think I would feel a profound sense of loneliness. I would never revisit those albums again. And I don't think this basic feeling can be hacked through with some extra applications of rationalism or what have you. This feeling precedes thinking on a very deep level for me.
I think differing intuitions on this are exactly what make this such a heated and fascinating culture war topic. My response to this thought experiment is that I'd be mostly neutral, with a bit of positivity merely for it being just incredibly cool that all this meaning that I took out of this music, as well as the backstories of the musicians who created it, was able to be created with AI sans any actual conscious or subconscious human intent.
In fact, this thought experiment seems similar to one that I had made up in a comment on Reddit a while back about one of my favorite films, The Shawshank Redemption, which I think isn't just fun or entertaining, but deeply meaningful in some way in how it relates to the human condition. If it had turned out that, through some weird time travel shenanigans, this film was actually not the work of Stephen King and Frank Darabont and Morgan Freeman and Tim Robbins and countless other hardworking talented artists, but rather the result of an advanced scifi-level generative AI tool, I would consider it no less meaningful or powerful a film, because the meaning of a film is encoded within the video and audio, and the way that video and audio is produced affects that only inasmuch as it affects those pixels (or film grains) and sound waves. And my view on the film wouldn't change either if it had been the case that the film had been created by some random clerk accidentally tripping while carrying some film reels and somehow damaging them in a way as to make the film.
This is the way I see it as well. When people say "stealing," they actually mean "infringing on IP rights," and that raises the issue of what are IP rights and what justifies them. As best as I can tell, the only justification for IP rights is that they allow for us as a society to enjoy better and more artworks and inventions by giving artists and creators more incentive to create such things (having exclusive rights to copy or republish their artworks allows greater monetization opportunities for their artworks, which obviously means greater incentive). The US Constitution uses this as the justification for enabling Congress to create IP laws, for instance.
Which is why, for instance, one of the tests for Fair Use in the US is whether or not the derivative work competes against the original work. In the case of AI art and other generative AI tools, there's a good argument to be made that the tools do compete with the original works. As such, regardless of the technical issues involved, this does reduce the incentives of illustrators by reducing their ability to monetize their illustrations.
The counterargument that I see to this, which I buy, is that generative AI tools also enable the creation of better and more artworks. By reducing the skill requirements for the creation of high fidelity illustrations, they have opened up this particular avenue of creative self-expression to far more people than before, and as a result, we as a society benefit from the results. And thus the entire justification for there being IP laws in the first place - to give us as a society more access to more and better artworks and inventions - becomes better fulfilled. I recall someone using the phrase "beauty too cheap to meter," as a play on the "electricity too cheap to meter" quote about nuclear power plants, and this clearly seems to be a large step in that direction.
That hasn't been a tenable position for quite some time. Duchamp took a urinal and put it in an art gallery in 1917. Probably, he did not simultaneously impart a piece of his soul into it.
I'm not sure how you justify the "probably" in the last sentence. If we posit that, say, Van Gogh left a piece of his soul in his famous self-portrait through the act of painting it, how can we deny that Duchamp left a piece of his soul in the urinal when he placed it in an art gallery? What's the mechanism here by which we can make the judgment call of "probably" or "probably not?"
The AI art we have right now seems to me to be more akin to waves on the beach just so happening to etch very detailed pictures into the sand by random chance; this to me is lacking the principal features that make art interesting (communication between conscious subjects; wondering at what kind of subjectivity could have led to the present work).
I think that's a perfectly reasonable way to determine whether a work of art is interesting. What I find confusing here, though, is that, by that standard, AI art is interesting! To take the beach metaphor, someone who types in "big booba anime girl" into Midjourney on Discord and posts his favorite result on Twitter is akin to someone who hovers over this beach and snaps photos using a simple point and shoot, then publishes the resulting prints that he likes (if we stretch a bit, this is all nature photography or even street photography). In both cases, a conscious person is using his subjective judgment to determine the features of what gets shared. Fundamentally, this would be called "curation" rather than "illustration," and one can certainly argue that curation isn't interesting or that it's not an art, but by the standard that it requires a conscious being using his subjective judgment to communicate something through his choices in the results, curation fits just as well as any other work of art.
This is why I believe there's something more to it than that and alluded to the mysticism in my previous comment.
That would have to depend on the specific principle at hand. If it's, say, that training an AI model from public data is stealing, then perhaps it would count if they approve of AI art tools confirmed to have been trained only on authorized images, even if that causes them to face the ire of peers who still disapprove, or even if it causes them to lose out on commissions.
Fundamentally, this has existed for about 2 years, though the software that makes it easy to do is more recent. I haven't used Photoshop, but I believe it essentially does this with Firefly, and among free tools, the Stable Diffusion extension for Krita (which is itself free) does this pretty well. However, actually getting a "good looking" picture out of it is still unlikely to be a one-step process; it tends to require iterations and intentional inpainting.
What you're talking about is a version of what's referred to as IMG2IMG, which is exactly what it sounds like; in fact, it's essentially the same thing as TXT2IMG, except that instead of starting with random noise, you're starting with an image that you sketched. Early on, keeping the structure of the original image was a major struggle, but something like 1.5 years ago, a technique referred to as "ControlNet" was developed, which allowed the image generation to be guided by further constraints beyond just the text prompt and settings. Many different versions of ControlNet exist, including edge detection, line art, depth map, normal map, and human pose, among others. In each, those particular details from the original image can be used to constrain the generation, so that objects you draw in the foreground don't blend into the background, or so that the person you drew in a certain pose comes out as a human in the exact same pose. It's possible to run multiple of these at the same time.
Again, in practice, these aren't going to be one-step solutions; there will be various issues and weaknesses that need manual work or further iterations to make the result actually look like a good work of art. But in terms of turning, say, a crude mess of blobs into something that looks somewhat realistically or professionally rendered while following the same composition, it's quite doable.
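To make the ControlNet idea concrete, here's a minimal sketch assuming Hugging Face's diffusers library rather than the Krita or Automatic1111 front-ends; the file names and prompt are placeholders, and the Canny edge-detection variant is just one of the ControlNet types mentioned above.

```python
# Hedged sketch of the ControlNet workflow described above, using diffusers.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Edge-detect the rough sketch so ControlNet can hold the composition in place.
sketch = np.array(Image.open("rough_sketch.png").convert("L"))  # placeholder file
edges = cv2.Canny(sketch, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # SD 1.5 base
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt steers the rendering; the edge map constrains the layout.
image = pipe(
    "a professionally rendered landscape, detailed, soft lighting",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("rendered.png")
```

Other ControlNet variants (depth, pose, line art) slot in the same way by swapping the conditioning model and the preprocessing step.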
But there are also many non-artists who don't like AI art. Also, people who have objections to AI painting also tend to have objections to AI music and AI voice acting, even if those areas don't overlap with their personal skill set. Which is evidence that the objections are principled rather than merely opportunistic.
I don't think this follows. The only way some behavior is evidence that some belief in a principle is sincere is if that behavior is costly to the person, e.g. giving up food for some religious holiday or even the Joker setting money he stole on fire in The Dark Knight. I don't think making this kind of objection is costly to these people; if anything, it seems gainful in terms of status within their social groups. At best, it's evidence that they understand the logical implications of the principle they're espousing.
Has anyone noticed how much vitriol there is towards AI-generated art? Over the past year it's slowly grown into something quite ferocious, though not quite ubiquitous.
I honestly think it's far closer to the opposite of ubiquitous, but it certainly is quite ferocious. But like so much of the ferocity you see online, I think it's a very vocal but very small minority. I spend more time than I should on subreddits specifically about the culture war around AI art, and (AFAIK) the primary anti-AI-art echo chamber subreddit, /r/ArtistHate, has fewer than 7K members, in comparison to the primary pro-AI-art echo chamber subreddit, /r/DefendingAIArt, which has 23K members. The primary AI art culture war discussion subreddit, /r/aiwars, has 40K members, and the upvote and commenting patterns indicate that a large majority of the people there like AI art, or at least dislike the hatred against it.
These numbers don't prove anything, especially since hating on AI art tends to be accepted in a lot of generic art and fandom communities, which leads to people who dislike AI art not finding much value in a community specifically made for disliking it, but I think they at least point in one direction.
IRL, I've also encountered general ambivalence towards AI art. Most people are at least aware of it, with most finding it a cool curiosity, and none that I've encountered actually expressing anything approaching hatred for it. My sister, who works in design, had no qualms about gifting me a little trinket with a design made using AI. She seems to take access to AI art via Photoshop just for granted - though interestingly, I learned this as part of a story she told me about interviewing a potential hire whose portfolio looked suspiciously like AI art, which she confirmed by using Photoshop to generate similar images and finding that the style matched. She disapproved of it not out of hatred against AI art, but rather because designers they hire need to have actual manual skills, and passing off AI art without specifically disclosing it like that is dishonest.
I think the vocal minority that does exist makes a lot of sense. First of all, potential jobs and real status - from having the previously rather exclusive ability to create high fidelity illustrations - are on the line. People tend to get both highly emotional and highly irrational when either is involved. And second, art specifically has a certain level of mysticism around it, to the point that even atheist materialists will talk about manually made human art (or a novel or film or song) having a "soul" or a "piece of the artist" within it, and the existence of computers using matrix math to create such things challenges that notion. It wasn't that long ago that scifi regularly depicted AI and robots as having difficulty creating and/or comprehending such things.
And, of course, there's the issue of how the tools behind AI art (and modern generative AI in general) were created, which was by analyzing billions of pictures downloaded from the internet for free. Opinions differ on whether or not this counts as copyright infringement or "stealing," but many artists certainly seem to believe that it is; that is, they believe that other people have an obligation to ask for their permission before using their artworks to train their AI models.
My guess is that such people tend to be overrepresented in the population of illustrators, and social media tends to involve a lot of people following popular illustrators for their illustrations, and so their views on the issue propagate to their fans. And no technology amplifies hatred quite as well as social media, resulting in an outsized appearance of hatred relative to the actual hatred that's there. Again, I think most people are just plain ambivalent.
That, to me, is actually interesting in itself. So far, the culture war around AI art doesn't seem to have been subsumed by the larger culture wars that have been going on constantly for at least the past decade. Plenty of left/progressive/liberal people hate AI art because they're artists, but plenty love it because they're into tech or accessibility. I don't know so much about the right/conservative side, but I've seen some religious conservatives call it satanic, and others love it because they're into tech and dunking on liberal artists.
The Democrat says "Come with me and you won't have to go to NASCAR races and eat McDonald's any more. You can be just like me! Wouldn't that be great?". It shows a real lack of understanding about the working class and what they value. They don't do these things because they have to. They like McDonald's!
This reminds me of the narrative I bought into about 20 years ago, when the left was pushing the idea that everyone, including those in the Middle East, just wanted liberal democracy (even if they weren't aware of it). So once freed from the oppressive religious forces keeping them down, they'd gravitate towards such a system like in America. Same for immigrants from such cultures, whose kids would see how awesome liberal democracy is and thus adopt its values. I particularly recall a (more recent, but still like a decade old, I think?) 5-hour-long conversation between Cenk Uygur and Sam Harris about this kind of stuff, where Cenk was smugly telling Sam about how suicide bombers and other similar Muslim terrorists could just be won over with the benefits of Western liberal values.
I think the number of epicycles that have been required to explain the various failures and speedbumps such a narrative has encountered in the past 2 decades shows that, no, it was rather that the people who pushed such a narrative largely just lacked the ability or willingness to appreciate the true diversity of thought that exists in humans. I don't put much weight on any sort of sociological study anymore, but I suspect that the findings that liberals in America have a hard time modeling how conservatives think, in a way that doesn't exist in reverse, might be pointing at something true. Likewise for the cliche that "liberals think conservatives are evil; conservatives think liberals are stupid."
All I know comes from West Wing and I have a feeling that the reality is way more regarded than the typical mass media depiction.
I recall talking to someone in the industry in some social event like a decade ago and being told that real life is much closer to Veep than to West Wing, except that Veep depicted everyone as far more competent than the real-life versions. I imagine they were being facetious, but I chose to take it at face value and believe it unironically, and the older I get, the more I think that was correct.
I actually encountered a friend of mine confusing this a couple of years ago. He had never heard the term before, and when I explained to him the generation it was a label for, he commented on how stupid it was to make the label based on a video chat app. I had to inform him that the term predated 2020 and came from a combination of "Generation Z" and "boomer."
I'm just mystified by the idea that Harris is so certain that young men, especially young black men, would benefit from greater availability of recreational marijuana, that she has made it a highlight of her campaign.
I don't think either Harris or Trump or any particular politician that's running for office has any reason to care whether the policies they propose would actually benefit anyone. I think the implication of Harris making this a highlight of her campaign isn't that young black men would benefit from greater availability of recreational marijuana, but rather that pushing for greater availability is more likely to cause young black men, as well as people who believe that young black men are disproportionately likely to go to prison for marijuana use, to give her their votes.
This epitomizes general differential expectations of conservatives and liberals. Conservatives are regarded (and to a shocking degree, regard themselves) as lacking in agency to the point of being almost animalistic.
Liberals, though. They're supposed to be better, smarter, more accountable. Apparently.
They're supposed to be adults in the room.
As a liberal (both in the classical sense and in the liberal/conservative dichotomy sense), I feel like this is exactly the correct state of things. Because the only good justification I see for picking a particular side is if one believes that that side is, in some real meaningful sense, better than the other side. And liberals being actually responsible for getting decisions right - being the adults in the room who think through their ideas and the consequences of implementing them - while conservatives are animalistic, emotional creatures following their base whims and needing faith, tradition, and religion to keep them from falling to their base impulses, is one of the most meaningful ways to differentiate the former as better than the latter.
Because if you get rid of that, then what are we left with - just that this set of ideas labeled L is better than this other set of ideas labeled C? But how could I justify holding such a belief if the process by which those L ideas were produced wasn't, in some meaningful way, better than the process by which those C ideas were produced? Because I've reasoned to myself that those L ideas are better than those C ideas? Why should anyone, especially myself, who grew up in an environment that was biased heavily towards L ideas and away from C ideas, trust that my reasoning on this preference is sound, when the more likely explanation is that I have a set of preferences inculcated in me by my society, which I've used motivated reasoning to justify as "correct" in my mind?
Now, I've seen enough to recognize that most people on any side are just tribalists blindly following their animalistic urges, but even so, in the world of ideology and politics, I'll always insist on double standards, where my side is held to a higher standard than the other one, so as to make myself feel more secure that I've actually chosen the correct side. Otherwise, it's basically guaranteed that I've just chosen the side that happens to match up with my preferences and reasoned my way backwards that it's the correct one (even with double standards, this isn't off the table, but it at least helps to make me feel somewhat more secure in it).
I think skill/intelligence/expertise and having at least some sense of ethics are somewhat different with respect to salary levels. Certainly, it's easier to be ethical if you're well compensated, but I don't think having at least some sense of ethics when doing journalism requires some generous salary that is beyond the capabilities of these companies to pay now, and plenty of not-very-well compensated journalists (and other workers in general) can be and have been known to behave with at least some sense of ethics. If the lower budget for salaries means compromising on skills, intelligence, expertise, and ethics, among other things, they could have decided to compromise less on ethics at the cost of compromising more on other things, so that their journalists would have at least some sense of ethics, even if they weren't up to the same level of skill, intelligence, or deep knowledge of some beat as the other options.
I do think this is likely the biggest factor. It's hard to say exactly what determines how ethical any given journalist or individual in general would behave, but I think the leadership and company culture likely has a lot of influence, moreso than the budgets available for salaries.