Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
LLM-generated images and videos are controversial, with some arguing they should be banned as they take work away from talented artists. But I've thought of one use case for them that I think even the most ardent opponent of them might begrudgingly concede is reasonable.
There's an early episode of Friends in which Joey secures a gig as a stock photo model, only to later discover to his horror that his photo is being used in a PSA encouraging people to get tested for STDs, after which no woman will go near him with a barge pole. It's common enough to have its own trope, and there have been some real-life examples. I remember a case a few years ago in which a guy was hired as a stock photo model, only to later find that his photo was being used in Reductress articles with titles like "Hero! This Man Watched Porn That Wasn’t About Flushing A Woman’s Head In A Toilet!" - he went on Twitter to assure people that he respects women and had no say in how his likeness was used. What got me thinking about it was an ad campaign I saw the other day advising people that even threatening to share revenge porn is itself a criminal offense, accompanied by a still image of a man at home being visited by two police officers. I feel kind of bad for the guy in the ad - sure, he agreed to do it, but a nonzero number of women in his future may well think he's some kind of creep.
So this strikes me as a use case for AI images almost everyone could get behind: if you need a still image to depict a fictional character who is extremely unsympathetic (particularly in PSAs, government ad campaigns and similar), and you're concerned that the actor you hire to portray that character will be mistaken for the real thing by some significant proportion of the public and face abuse, harassment and damage to their career as a result. What do you think? Particularly interested in hearing from people who are very opposed to most applications of AI-generated images.
I don't think this is a very strong argument. Playing devil's advocate for the other side (with whom I strongly disagree in principle), they could argue that the ethics are more easily solved by better informed consent for models and actors. If someone wants to use your image for an unsympathetic role, they need to advertise the position that way so models and actors can make an informed decision. Since these roles carry social ramifications, they will have to pay extra to compensate, and people who feel the tradeoff is worth it can earn extra money in exchange for sacrificing their reputation, just as a sewer worker earns extra money to compensate for the fact that the job stinks and is unpleasant. Supply and demand will handle the details for you, as long as people are informed.
I personally don't think that people have a right to jobs that can be automated, that's just strictly inferior to welfare. But if you did believe that, then replacing these jobs is potentially more damaging than average because they are (or ought to be) higher paying jobs than average.
But they're only higher-paying than average because they have unusually high disutility. So in terms of net utility gain to the model, it shouldn't be any different.
I wonder, if this became a well-known trend, whether it would bleed out into similar scenarios. For example, would especially unsympathetic villains be portrayed by AI instead of humans? Could it start a euphemism treadmill in which humans are only associated with less and less offensive things, and AI is used for anything remotely negative?