Given hypergamy, I wouldn't be surprised if a woman's wealth - or at least her earnings - is positively correlated with how important she considers it that her partner be gainfully employed and lack a criminal record (which might not lower status in all contexts, but which would pose a greater risk to the man's ability to keep earning money).
Now, I've had a few people acknowledge this point, and accept that, sure, some asymptotic limit on the real-world utility of increased intelligence probably exists. They then go on to assert that surely, though, human intelligence must be very, very far from that upper limit, and thus there must still be vast gains to be had from superhuman intelligence before reaching that point. Me, I argue the opposite. I figure we're at least halfway to the asymptote, and probably much more than that — that most of the gains from intelligence came in the amoeba → human steps, that the majority of problems that can be solved with intelligence alone can be solved with human level intelligence, and that it's probably not possible to build something that's 'like unto us as we are unto ants' in power, no matter how much smarter it is. (When I present this position, the aforementioned people dismiss it out of hand, seeming uncomfortable to even contemplate the possibility. The times I've pushed, the argument has boiled down to an appeal to consequences; if I'm right, that would mean we're never getting the Singularity, and that would be Very Bad [usually for one or both of two particular reasons].)
This seems like a potentially interesting argument to observe play out, but it also seems close to a fundamental unknown unknown. I'm not sure how one could meaningfully measure where we are along this theoretical asymptote in the relationship between intelligence and utility, or whether there really is an asymptote. What arguments convinced you both that this relationship would be asymptotic, or at least have severely diminishing returns, and that we are at least halfway to that asymptote?
I absolutely think they should be. Now, maybe it's not practical to check each student's individual political preferences and craft bespoke assignments for them on that basis (which could be gamed anyway). Rather, humanities-based courses should test students on their ability to defend a wide variety of different, highly offensive and ideally "dangerous" ideas in whatever topics are at hand, to stimulate actually learning how to think rather than what to think.
Hard to say if that will work, though; teaching students how to think seems to be one of those things that people in education have been trying to do forever, without any noticeable progress whatsoever. I just know that that was how I was educated, and it seemed to work for me and my classmates (but of course I'd think that, so my belief that it seemed to work should count for approximately nothing), and even if it did work, that doesn't mean it's generalizable.
I can't be 100% sure, but I think even if I hadn't been told, I would have pegged this as LLM-produced. It has the exact sort of "how do you do, fellow human kids?" energy that I'd expect from an LLM that was prompted to create a post that sounded casual, especially the very first paragraph.
The steelman would probably be that they've transitioned from one gender to no gender, rather than transitioning from one gender to another gender.
The true reason is probably that logic is an oppressive cis-heteropatriarchal construct, and this person ended up genuinely feeling like they're whatever identities were most useful and convenient for them in this context, which in this case happened to be both agender and trans.
Tbf to Amadan, the use of 'generative AI' as a description of use case rather than of design is a pretty common one from anti-AI artists and writers.
Hm, I was not aware of that. I'd thought most such people at least ostensibly maintained a principled objection to generative AI for its training methods, rather than one based on pure protectionism.
That's fair; perhaps this "mania," as you call it, might be the immovable object that matches up against the irresistible force of wokeness. I just think that, sans definitive proof, any denial of LLM usage from an author deemed sufficiently oppressed would be accepted at face value, with any level of skepticism deemed Nazi-adjacent and appropriately purged.
Now I'm imagining a scandal where someone publishes a sort of postmodern scifi novel that they claim to be the unedited ChatGPT log where they had it write a novel piece by piece, publishing all the prompts they input between segments and all, but it comes out that, actually, the author fraudulently crafted the novel, writing each and every word the old fashioned way like a novelist in the pre-LLM era. Bonus points if it was written by hand, as revealed by a notebook with the author's handwriting showing the rough drafts.
Bonus bonus points if it's then revealed later on that the handwritten manuscript was actually created by an advanced 3D printer working off a generative AI based on a prompt written by the author.
I see a couple of issues with that scenario.
One is that there will almost always be plausible deniability with respect to LLM usage. There would have to be a slip-up, such as including the meta-text that chatbot-style LLMs provide - something like "Certainly! Here is the next page of the story, where XYZ happens." - for there to be definitive proof, and I'd expect that the audience and judges would pick up on that early enough to prevent such authors from becoming high status. That said, it could still get through, and someone who did a good enough job hiding this early on could slip up later in her career, casting doubt on her original works.
But the second, bigger issue is that even if this were definitively proven, with the author herself outright claiming that she typed a one-word prompt into ChatGPT 10 to produce all 70,000 words of her latest award-winning novel, this could just be justified by the publishing industry and the associated awards on the basis of her lacking the privilege that white/straight/cis/male authors have, with this LLM usage merely ensuring equity by giving her and other oppressed minorities the writing ability that privileged people are just granted due to their position in this white supremacist patriarchal society. Now, you might think that this would simply discredit these organizations in the eyes of the audience, and it certainly will for some, but I doubt that it would be some inflection point or straw that breaks the camel's back. I'd predict that, for the vast majority who are already bought in, this next spoonful would be easy to swallow.
This generalized antipathy has basically been extended to any use of AI at all, so even though the WorldCon committee is insisting there has been no use of generative AI
(Emphasis added).
If they admit to using ChatGPT, how can they claim they didn't use generative AI? ChatGPT and all LLMs are a type of generative AI, i.e. they generate strings of text. ChatGPT, I believe, is also trained on copyright-protected works without permission from the copyright holders, which is the criterion that many people who hate AI say qualifies generative AI as "stealing" from authors and artists.
Just based on this description, it sounds like these WorldCon people are trying to thread a needle that can't be threaded. They should probably just say, "Yes, we used generative AI to make our lives easier. Yes, it was trained on copyright-protected works without permission. No, we don't think that's 'stealing.' Yes, this technology might replace authors like you in the future, and we are helping to normalize its usage. If you don't like it, go start your own AIFreeWorldCon with blackjack and hookers."
I'm a Catholic, and not a particular fan of Trump, and I found the picture both inevitable and mildly amusing.
Only mildly amusing? I found it holy amusing!
A part of this that hadn't occurred to me until I saw it pointed out is that there seems to be a sort of donation duel between this lady's case and that of Karmelo Anthony, a black teen charged with murdering a white teen at a track meet by stabbing him in the heart during a dispute over seating. I think there was a top-level comment here about this incident before, but there was a substantial amount of support on social media for Anthony on racial grounds, including fundraising for his defense. I get the feeling that a lot of the motivation to donate to this lady comes from people who feel that the support Anthony has been getting on racial grounds has been unjust, and supporting her is a way of "balancing the scales," as it were. This isn't the instantiation of "if you tell everyone to focus on everyone's race all the time in every interaction, eventually white people will try to play the same game everyone else is encouraged to" that I foresaw, but it sure is a hilarious one.
Now, one conspiracy theory that I hope is hilariously true, is that the guy who recorded this lady was in cahoots with the lady herself and staged the whole thing in order to cash in on the simmering outrage over the Anthony case. But I doubt that anyone involved has the foresight to play that level of 4D chess.
I don't think either are particularly moral, and it's a cultural battle to be waged against both. I don't think we'll ever convince fellow humans to stop lying to manipulate people, but I can at least imagine a world where we universally condemn media companies who publish AI slop.
So I do think there's a big weakness with LLMs in that we don't quite have a handle on how to robustly or predictably reduce hallucinations like we can with human hallucinations and fabrications. But that's where I think the incentives of the editors/publishers come into play. Outlets that publish falsities by their human journalists lose credibility and can also lose lawsuits, which provides incentives for the people in charge to check the text their human journalists generate before publishing it, and I see similar controls as being effective for LLMs.
Now, examples like Rolling Stone's A Rape on Campus article show that this control system isn't perfect, particularly when the incentives for the publishers, the journalists, and the target audience are all aligned with respect to pushing a certain narrative rather than conveying truth. I don't think AI text generators exacerbate that, though.
I also don't think it's possible for us to enter a world where we universally condemn media companies who publish AI slop, though, unless "slop" here refers specifically to lies or the like. Given how tolerant audiences are of human-made slop and how much cheaper AI slop is compared to that, I just don't see there being enough political or ideological will to make such condemnation even a majority, much less universal.
Personally: AI-hallucinated quotes are worse than fabricated quotes, because the former masquerades as journalism whereas the latter is just easily-falsifiable propaganda.
AI-hallucinated quotes seem likely to be exactly as easy to falsify as human-fabricated quotes, and easily-falsifiable propaganda seems to be an example of something masquerading as journalism. These just seem like descriptions of different aspects of the same thing.
Can I extend this to your view on the OP being that it doesn't matter at all that the article that Adam Silver reposted is AI slop, versus your definition of "slop" in general? It doesn't move your priors on Adam Silver (the reposter), X (the platform), or Yahoo Entertainment (the media institution) even an iota?
I'm not Dean, but I would agree with this. I didn't have a meaningful opinion on Yahoo Entertainment, but, assuming that that article was indeed entirely AI-generated, the fact that it was produced that way wouldn't reflect negatively or positively on them, by my view. Publishing a falsehood does reflect negatively, though. As for Silver (is it not Nate?), I don't expect pundits to fact-check every part of an article before linking it, especially a part unrelated to the point he was making, and so him overlooking the false quote doesn't really surprise me. Though, perhaps, the fact that he chose to link a Yahoo Entertainment article instead of an article from a more reputable source reflects poorly on his judgment; this wouldn't change even if Yahoo Entertainment hadn't used AI and the reputable outlet had.
Actions speak louder than words. The fact they forcibly butted him aside due to the age concerns should be enough proof.
All that proves is that they believed that Biden, given the emperor-has-no-clothes moment at the debate, was less likely to garner electoral votes against Trump than an alternative candidate would be. The action of taking your hand out of the cookie jar after you're caught with your hand in it isn't proof of any sort of owning up to screwing up by trying to steal the cookies in the first place.
I agree, though, that actions do speak louder than words. If all the White House staff and journalists who ran cover for Biden's infirmity had actively pointed spotlights at the misleading words and articles they had stated and published, followed by resigning and swearing never to pursue politics or journalism again, those actions would be proof enough in my view. Actions that don't go quite as far could also serve as proof, depending on the specifics, but it would have to be in that ballpark.
If you believe I broke a rule, I encourage you to report me.
Those rules are so vague they can apply to anyone. And when you're facing a hostile community, they apply to you.
I don't think those rules are that vague, except by stretching what "vague" means to such an extent that all rules everywhere can be declared "so vague they can apply to anyone." If you don't think that his comments were pretty obviously unkind and failing to make reasonably clear and plain points, on top of making extreme claims without proactively providing evidence, then I don't take your judgment seriously.
The 'they're obviously not interested in debate' talking point is an absurd, but very common, justification for censoriousness.
I don't care if he was or wasn't interested in debate. What matters is that he was posting text that wasn't conducive to, and actually quite deleterious to, debate.
Well, if he's really not interested in debate, let him leave; don't ban him (or threaten to ban him). Call it keeping the moral high ground.
I don't see how not enforcing against blatant rule violations is keeping the moral high ground. The rules are right there on the right sidebar, and he refused to follow the ones around things like speaking clearly, being no more obnoxious than necessary, and proactively providing evidence, despite being given ample opportunity to do so. Letting the forum be polluted with the type of content that the forum was specifically set up to prevent seems, if anything, immoral, in that it makes the forum worse for the rest of the users who are here because of the types of discussion that are fostered by those rules being enforced (though I'd argue that there's no real moral dimension to it regardless). I don't know if Millard is a reasonable person, but he certainly did not post reasonable comments and, more importantly, posted comments that broke the forum's rules in a pretty central way.
I mean, if there were people with enough understanding of engineering and astronomy to even understand the very concept of what a "rocket to the moon" meant in 2000 BC, I think it would've been pretty cool if they'd started working on it then.
From a cursory search through the auto-generated transcript, it does seem to me that that quote was made up. That does seem worth caring about. It's too bad that it's not defamatory, since it probably won't trigger a lawsuit or other major controversy, but perhaps a controversy could be created if someone decided to publicize this.
Seems like Yahoo's fact-checking/editing department isn't built to handle its writers using LLMs. I still don't see why I would care about LLM usage if a journalism outlet had the proper controls for factual information. The problem isn't that it's AI-generated; it's that it's false.
I probably wouldn't have guessed that this article was almost purely generated by AI if I hadn't been primed on it beforehand. Looking at it with that priming, I'm still not convinced that it was a pure copy-paste GPT job, though certainly it's filled with phrasing that, having been primed, strikes me as being from an LLM, such as "While some applauded the self-deprecating humor, others criticized the segment for reinforcing cultural stereotypes" or "As speculation mounts over the 2028 Democratic field, Walz offers a glimpse into his political philosophy for the years ahead." Is there any direct evidence of it being LLM-generated?
But more to the point, I don't see why most people would care if this was purely AI generated, other than perhaps this author Quincy Thomas's employers and his competitors in the journalism industry. Particularly for what seems to be intended to be a pretty dry news article presenting a bunch of facts about what some politicians said. This isn't some personal essay or a long-form investigative report or fiction (even in those, I wouldn't care if those were purely LLM-generated as long as they got the job done, but I can see a stronger case for why that would matter for those). This kind of article seems exactly like the kind of thing that we'd want LLMs to replace, and I'd just hope that we could get enough of a handle on hallucinations such that we wouldn't even need a human like Quincy Thomas to verify that the quotes and description of events actually matched up to reality before hitting "publish."
Once some fairly reputable news outlet gets sued for defamation for publishing some purely LLM-generated hallucination that slipped past whatever safeguards are in place, I'd be interested to see how that plays out. At the rate things are going, I wouldn't be surprised if it happened in the next 5 years.
I'm probably around your age or a little younger, as I had very recently graduated college in 2008, and most of my peers were around my age. We were in Massachusetts, which had already legalized gay marriage by that point, and our perception was that gay marriage was so obviously a human right (it was vanishingly rare to encounter people socially who didn't agree with this, and the few times we did, that person was usually socially ostracized by people within my circle - I was never enough of a social butterfly to have much influence over or feel much impact from these decisions) that mainstream Dem politicians who were against it and for civil unions were either making cynical, calculated decisions to misrepresent their true beliefs for the purpose of not scaring off the superstitious/bigoted conservatives (including the more conservative/religious Democratic voters) or were just superstitious/bigoted themselves due to clinging to religion.
For Obama specifically, we almost definitely projected a lot of our own values onto him as the avatar of Hope and Change who would lead us out of the dark Bush 2 years. With gay marriage, we thought it was basically an open secret that he was cynically lying about his opposition to it, and plenty of us, including myself, also had a lot of confidence that he was actually an atheist cynically lying about his faith in Christianity.
Huh, I either hadn't heard of or had forgotten about him opposing gay marriage in 2010, after his election. The general running narrative among Democrats in my sphere was that Obama had cynically lied in 2008 about his opposition to gay marriage in what turned out to be a successful bid to gain voters for his presidential election. This was an openly stated belief during the 2008 campaign before he got elected, and it seemed to be the common belief the last time I encountered the topic among my peers a few years ago. I generally leaned in the direction of believing it, but now I'm wondering if he really was stating his honest beliefs, which actually truly changed over time.
The one person this person reminded me of, which I'm guessing is coincidental and unrelated, is someone on the Motte subreddit (IIRC - might've been the SlateStarCodex one) talking about how it was wrong for banks to demand he pay back money he borrowed if he spent that money at a store and the store deposited the money at the bank, under the reasoning that the bank got the money back. There was more to it than that, and I'm probably remembering the details wrong. It was pretty fascinating trying to wrap my head around how someone could attempt to logically justify that rather deranged belief about money and property, and I kinda wish I could find the thread again.
I almost feel a bit sorry for the assassin. Sans any evidence, my speculation is that he saw the love and adoration Mangione was receiving and decided he wanted some of that by pulling off another senseless ideological murder. But he's just not good looking enough, and the victims not suitably high up on the food chain, for him to garner anywhere near the same level of following, IMHO. There's something almost funny about this, him copying Mangione with a cargo cult understanding of the phenomenon, when Mangione himself seemed to have a cargo cult understanding of how assassinations are supposed to work for effecting change.
Then again, I could be completely off about this, and he could be a truly devout and deranged ideologue. Or he could end up garnering even more adoration than Mangione. Time will tell, I suppose.