Tbf to Amadan, the use of 'generative AI' as a description of use case rather than of design is a pretty common one from anti-AI artists and writers.
Hm, I was not aware of that. I'd thought most of such people at least ostensibly maintained a principled objection against generative AI for its training methods, rather than one based on pure protectionism.
That's fair, perhaps this "mania," as you call it, might be the immovable object that matches up to the irresistible force of wokeness. I just think that, sans a definitive proof, any denial of LLM usage from an author deemed sufficiently oppressed would be accepted at face value, with any level of skepticism deemed Nazi-adjacent and appropriately purged.
Now I'm imagining a scandal where someone publishes a sort of postmodern scifi novel that they claim to be the unedited ChatGPT log where they had it write a novel piece by piece, publishing all the prompts they input between segments and all, but it comes out that, actually, the author fraudulently crafted the novel, writing each and every word the old-fashioned way like a novelist in the pre-LLM era. Bonus points if it was written by hand, as revealed by a notebook with the author's handwriting showing the rough drafts.
Bonus bonus points if it's then revealed later on that the handwritten manuscript was actually created by an advanced 3D printer working off a generative AI based on a prompt written by the author.
I see a couple of issues with that scenario.
One is that there will almost always be plausible deniability with respect to LLM usage. There would have to be a slip-up of the sort of including meta-text that chatbot-style LLMs provide - something like "Certainly! Here is the next page of the story, where XYZ happens." - for it to be definitive proof, and I'd expect that the audience and judges would pick up on that early enough to prevent such authors from becoming high status. That said, it could still get through, and also someone who did a good enough job hiding this early on could slip up later in her career, casting doubt on her original works.
But the second, bigger issue, is that even if this were definitively proven, with the author herself outright claiming that she typed in a one-word prompt into ChatGPT 10 to produce all 70,000 words of her latest award-winning novel, this could just be justified by the publishing industry and the associated awards on the basis of her lacking the privilege that white/straight/cis/male authors have, and this LLM usage merely ensures equity by giving her and other oppressed minorities the writing ability that privileged people are just granted due to their position in this white supremacist patriarchal society. Now, you might think that this would simply discredit these organizations in the eyes of the audience, and it certainly will for some, but I doubt that it would be some inflection point or straw that breaks the camel's back. I'd predict that, for the vast majority who are already bought in, this next spoonful would be easy to swallow.
This generalized antipathy has basically been extended to any use of AI at all, so even though the WorldCon committee is insisting there has been no use of generative AI
(Emphasis added).
If they admit to using ChatGPT, how can they claim they didn't use generative AI? ChatGPT and all LLMs are a type of generative AI, i.e. they generate strings of text. ChatGPT, I believe, is also trained on copyright-protected works without permission from the copyright holders, which is the criterion many people who hate AI consider to qualify as the generative AI "stealing" from authors and artists.
Just based on this description, it sounds like these WorldCon people are trying to thread a needle that can't be threaded. They should probably just say, "Yes, we used generative AI to make our lives easier. Yes it was trained on copyright protected works without permission. No, we don't think that's 'stealing.' Yes, this technology might replace authors like you in the future, and we are helping to normalize its usage. If you don't like it, go start your own AIFreeWorldCon with blackjack and hookers."
I'm a Catholic, and not a particular fan of Trump, and I found the picture both inevitable and mildly amusing.
Only mildly amusing? I found it holy amusing!
A part of this that hadn't occurred to me until I saw it pointed out is that there seems to be a sort of donation duel between this lady's case and that of Karmelo Anthony, a black teen charged with murdering a white teen at a track meet by stabbing him in the heart during a dispute over seating. I think there was a top-level comment here about this incident before, but there was a substantial amount of support on social media for Anthony on racial grounds, including fundraising for his defense. I get the feeling that a lot of the motivation to donate to this lady comes from people who feel that the support Anthony has been getting on racial grounds has been unjust, and that supporting her is a way of "balancing the scales," as it were. This isn't the instantiation of "if you tell everyone to focus on everyone's race all the time in every interaction, eventually white people will try to play the same game everyone else is encouraged to" that I foresaw, but it sure is a hilarious one.
Now, one conspiracy theory that I hope is hilariously true, is that the guy who recorded this lady was in cahoots with the lady herself and staged the whole thing in order to cash in on the simmering outrage over the Anthony case. But I doubt that anyone involved has the foresight to play that level of 4D chess.
I don't think either are particularly moral, and it's a cultural battle to be waged against both. I don't think we'll ever convince fellow humans to stop lying to manipulate people, but I can at least imagine a world where we universally condemn media companies who publish AI slop.
So I do think there's a big weakness with LLMs in that we don't quite have a handle on how to robustly or predictably reduce hallucinations like we can with human hallucinations and fabrications. But that's where I think the incentive of the editors/publishers come into play. Outlets that publish falsities by their human journalists lose credibility and can also lose lawsuits, which provide incentives for the people in charge to check the letters their human journalists generate before publishing them, and I see similar controls as being effective for LLMs.
Now, examples like Rolling Stone's A Rape on Campus article show that this control system isn't perfect, particularly when the incentives for the publishers, the journalists, and the target audience are all aligned with respect to pushing a certain narrative rather than conveying truth. I don't think AI text generators exacerbate that, though.
I also don't think it's possible for us to enter a world where we universally condemn media companies who publish AI slop, though, unless "slop" here refers specifically to lies or the like. Given how tolerant audiences are of human-made slop and how much cheaper AI slop is compared to that, I just don't see there being enough political or ideological will to make such condemnation even a majority, much less universal.
Personally: AI-hallucinated quotes are worse than fabricated quotes, because the former masquerades as journalism whereas the latter is just easily-falsifiable propaganda.
AI-hallucinated quotes seem likely to be exactly as easy to falsify as human-fabricated quotes, and easily-falsifiable propaganda seems to be an example of something masquerading as journalism. These just seem like descriptions of different aspects of the same thing.
Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?
I'm not Dean, but I would agree with this. I didn't have a meaningful opinion on Yahoo Entertainment, but, assuming that that article was indeed entirely AI-generated, the fact that it was produced that way wouldn't reflect negatively or positively on them, by my view. Publishing a falsehood does reflect negatively, though. As for Silver (is it not Nate?), I don't expect pundits to fact-check every part of an article before linking it, especially a part unrelated to the point he was making, and so him overlooking the false quote doesn't really surprise me. Though, perhaps, the fact that he chose to link a Yahoo Entertainment article instead of an article from a more reputable source reflects poorly on his judgment; this wouldn't change even if Yahoo Entertainment hadn't used AI and the reputable outlet had.
Actions speak louder than words. The fact they forcibly butted him aside due to the age concerns should be enough proof.
All that is proof of is that they believed that Biden, given the emperor-has-no-clothes moment at the debate, was less likely to garner more electoral votes against Trump than an alternative. The action of taking your hand out of the cookie jar after you're caught with your hand in it isn't proof of any sort of owning up to screwing up by trying to steal the cookies in the first place.
I agree, though, that actions do speak louder than words. If all the White House staff and journalists that ran cover for Biden's infirmity had actively pointed spotlights at the past words and articles that they had stated and published that had misled people, followed by resigning and swearing never to pursue politics or journalism again, those actions would be proof enough in my view. Actions that don't go quite as far could also serve as proof, depending on the specifics, but it would have to be in that ballpark.
If you believe I broke a rule, I encourage you to report me.
Those rules are so vague they can apply to anyone. And when you're facing a hostile community, they apply to you.
I don't think those rules are that vague, except by stretching what "vague" means to such an extent that all rules everywhere can be declared "so vague they can apply to anyone." If you don't think that his comments were pretty obviously unkind and failing to make reasonably clear and plain points, on top of making extreme claims without proactively providing evidence, then I don't take your judgment seriously.
The 'they're obviously not interested in debate' talking point is an absurd, but very common, justification for censoriousness.
I don't care if he was or wasn't interested in debate. What matters is that he was posting text that wasn't conducive to, and actually quite deleterious to, debate.
Well, if he's really not interested in debate, let him leave, don't ban him (or threaten to ban him). Call it keeping the moral high ground.
I don't see how not enforcing against blatant rule violations is keeping the moral high ground. The rules are right there on the right sidebar, and he refused to follow the ones around things like speaking clearly, being no more obnoxious than necessary, or proactively providing evidence, despite being given ample opportunity to do so. Letting the forum be polluted with the type of content that the forum was specifically set up to prevent seems, if anything, to be immoral, since it makes the forum worse for the rest of the users, who come here because of the kind of discussion that those rules, when enforced, foster (though I'd argue that there's no real moral dimension to it regardless). I don't know if Millard is a reasonable person, but he certainly did not post reasonable comments and, more importantly, posted comments that broke the forum's rules in a pretty central way.
I mean, if there were people with enough understanding of engineering and astronomy to even understand the very concept of what a "rocket to the moon" meant in 2000 BC, I think it would've been pretty cool if they'd started working on it then.
From a cursory search through the auto-generated transcript, it does seem to me like that quote was made up. That does seem worth caring about. It's too bad that it's not defamatory, since it probably won't trigger some lawsuit or other major controversy, but perhaps a controversy could be created if someone decided to publicize this.
Seems like Yahoo's fact checking/editing department isn't built to handle its writers using LLMs. I still don't see why I would care about LLM usage if a journalism outlet had the proper controls for factual information. The problem isn't that it's AI generated, it's that it's false.
I probably wouldn't have guessed that this article was almost purely generated by AI if I hadn't been primed on it beforehand. Looking at it with that priming, I'm still not convinced that it was a pure copy-paste GPT job, though it's certainly filled with phrasing that, now that I've been primed, strikes me as being from an LLM, such as "While some applauded the self-deprecating humor, others criticized the segment for reinforcing cultural stereotypes" or "As speculation mounts over the 2028 Democratic field, Walz offers a glimpse into his political philosophy for the years ahead." Is there any direct evidence of it being LLM-generated?
But more to the point, I don't see why most people would care if this was purely AI generated, other than perhaps this author Quincy Thomas's employers and his competitors in the journalism industry. Particularly for what seems to be intended to be a pretty dry news article presenting a bunch of facts about what some politicians said. This isn't some personal essay or a long-form investigative report or fiction (even in those, I wouldn't care if those were purely LLM-generated as long as they got the job done, but I can see a stronger case for why that would matter for those). This kind of article seems exactly like the kind of thing that we'd want LLMs to replace, and I'd just hope that we could get enough of a handle on hallucinations such that we wouldn't even need a human like Quincy Thomas to verify that the quotes and description of events actually matched up to reality before hitting "publish."
Once some fairly reputable news outlet gets sued for defamation for publishing some purely LLM-generated hallucination and failing to catch it with whatever safeguards are in place, I'd be interested to see how that plays out. At the rate things are going, I wouldn't be surprised if it happened in the next 5 years.
I'm probably around your age or a little younger, as I had very recently graduated college in 2008, and most of my peers were around my age. We were in Massachusetts, which had already legalized gay marriage by that point, and our perception was that gay marriage was so obviously a human right that mainstream Dem politicians who were against it and for civil unions were either making cynical, calculated decisions to misrepresent their true beliefs so as not to scare off the superstitious/bigoted conservatives (including the more conservative/religious Democratic voters), or were just superstitious/bigoted themselves due to clinging to religion. It was vanishingly rare to encounter people socially who didn't agree with this, and the few times we did, that person was usually socially ostracized by people within my circle (I was never enough of a social butterfly to have much influence over, or feel much impact of, these decisions).
For Obama specifically, we almost definitely projected a lot of our own values onto him as the avatar of Hope and Change who would lead us out of the dark Bush 2 years. With gay marriage, we thought it was basically an open secret that he was cynically lying about his opposition to it, and plenty of us, including myself, also had a lot of confidence that he was actually an atheist cynically lying about his faith in Christianity.
Huh, I either hadn't heard of or had forgotten about him opposing gay marriage in 2010, after his election. The running narrative among Democrats in my sphere was that Obama had cynically lied in 2008 about his opposition to gay marriage in what turned out to be a successful bid to gain voters in his presidential election. This was an openly stated belief during the 2008 campaign before he got elected, it still seemed to be the common belief the last time I encountered the topic among my peers a few years ago, and I generally leaned in the direction of believing it, but now I'm wondering if he really was stating his honest beliefs, which then genuinely changed over time.
The one person this reminded me of, which I'm guessing is coincidental and unrelated, is someone on the Motte subreddit (IIRC - might've been the SlateStarCodex one) arguing that it was wrong for banks to demand he pay back money he borrowed if he spent that money at a store and the store deposited the money back into the bank, under the reasoning that the bank had gotten its money back. There was more to it than that, and I'm probably remembering the details wrong. It was pretty fascinating trying to wrap my head around how someone could attempt to logically justify such a deranged belief about money and property, and I kinda wish I could find the thread again.
The typical retort is that what there is, is a chance of survival for the human race in the event of total catastrophe befalling the Earth
This idea seems to come from scifi geeks thinking space is really cool, and trying to come up with some sort of justification for exploring it.
I don't understand this perspective. I'm not an astronomy or physics expert, but I did study it in school, and as best as I can tell, there is a scientific consensus that the Earth will become uninhabitable to humans due to the Sun expanding within the next 5 billion years. Which means that, if we want humanity to survive beyond that, we will have to figure out some way to sustainably live off of Earth (and likely off of the Solar System) between now and then. This, to me, has always been the justification for figuring out space exploration.
I'm partial to the argument that undertaking this project in the year 10^9 AD or even 10^6 AD might be a better use of resources than in the year 2025 AD. But I'm also partial to the argument that technology doesn't just progress through time alone, that we can always come up with excuses for why this would be easier or more efficient to tackle later, and as such, we might as well start working on it now.
Self-sustaining habitats in Antarctica or the sea floor or underground seem like decent short-term projects for catastrophes in the short term (as well as good settings for steampunk-inspired video games), but I don't see any way around space exploration for long term human survival, outside of even more outlandish things like time travel or portals.
So you're implying that these stable societies (stable for whom, exactly -- the precariat? https://en.wikipedia.org/wiki/Precariat) aren't comprised of a majority of people who experience incessant instability and poverty?
I probably implied it in that comment, and in this comment I'm explicitly stating it: yes, only a minority of people in societies that have private property live in poverty.
More than what? The USA benefits more people on average than which countries? Compared to which periods in history?
For a current-time example, I'd say the USA compares favorably against North Korea, though perhaps South Korea vs North Korea would be a better example, as the USA is only one specific and rather idiosyncratic example of a society that has private property rights, and South Korea is probably more similar to North Korea than the USA is.
Trick questions, actually, because there's a fundamental flaw in your argument no matter how you'd answer them: better-than-worse does not substitute for as-good-as-better. Those are two mutually exclusive orientations. Yours is the former. No matter how much better than others an example might be, it says nothing about how good it realistically could be.
Well, the problem here is that you also say nothing about how good it realistically could be. So, how good could it realistically be? I'm all ears. As of yet, you've described the current system with words like "psychopathic" done by "paranoid" people, which I agree with you are completely morally neutral. As such, I have no desire to overthrow this non-immoral system which keeps giving us very good results, unless there's some other system in store for something even better to replace it. So what are those other ideas?
AI is a question of fundamental possibility: by contrast, with AI, there is no good reason to think we can create AI sufficient to replace OpenAI-grade researchers within foreseeable timelines/tech.
From my mostly layman/hobbyist's view of the tech, I agree that there's no good reason to believe that AGI, ASI, or even AI that can substitute for OpenAI's researchers strictly within the narrow use case of doing research/development/engineering/etc. that OpenAI wants to do are right around the corner. But where I disagree is that I don't think it's a question of fundamental possibility; rather, just like with a Mars base, I see it as a question of logistics, and I suspect that that disagreement is the source of your confusion.
Recent LLM tech has proven that we can create machines that produce text in response to text really really well, and I think getting to AGI or even ASI is just a matter of making a machine that produces text really really really well. My perception of the tech right now is that it's not progressing fast enough that we'll cross that threshold into AGI, for however we want to define it, any time soon. But I think one reasonable possibility is that I'm just ignorant of the details and most recent developments that people predicting otherwise are privy to, and those details could give them confidence that we're right around the corner from crossing that threshold.
I won't speculate on exactly what you think AGI or ASI is, but certainly many people believe that AGI requires something more than producing text really really really well, which I've seen lead to disagreements about how close we are to achieving it, or if it's achievable at all.
No, "paranoid", "not sharing", and "psychopathy" have zip-all to do with morality.
I'd generally agree that these aren't moral concepts. Given that they are neither moral nor immoral, and that this system of "psychopathy with a makeover" that makes sense to "paranoid" people "who don't understand the concept of sharing" keeps leading to stable societies with people leading prosperous lives, when instability and poverty have been the norm for most lives anywhere, I have to conclude that "psychopathy" and "paranoia" and "not sharing" are really cool things that I want more of, both for my own benefit from living in a stable and prosperous society and for the good feelings I get from believing that I support a system that benefits more people in general. Why would I want to come up with an alternative?
A common speculation is that the tools were buried because the dead person would need them in the afterlife. That doesn't require that the person had owned the tool in life.
But it does require that the person own the tool in death. Burying it with that person is certainly depriving other living people of usage of that tool, which I guess is the relevant portion of "ownership" in this context.
The steelman would probably be that they've transitioned from one gender to no gender, rather than transitioning from one gender to another gender.
The true reason is probably that logic is an oppressive cis-heteropatriarchal construct, and this person ended up genuinely feeling like they're whatever identities were most useful and convenient for them in this context, which in this case happened to be both agender and trans.