This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
I asked Gemini 2.5 Pro Thinking to solve it. It claimed to have a solution. I asked for the most concise summary it could provide:
Okay, here's a concise summary of the proof, avoiding technical jargon:
Here's the raw answer (minus reasoning trace):
https://rentry.org/5s6q6nxe
Hmm, it's maybe coming close to something that works, but seems to fuck up at the important junctures. After a couple of paragraphs where it doesn't find anything useful, in paragraph 7 it concludes that we can break down n^(k - i - 1) into q*2^(i+1) + r_i, where r_i = n^(k - i - 1) mod 2^(i+1). But then later it declares that r_i = n^(k - i - 1) and the proof follows from there. Unfortunately, I don't think this would get any points, although maybe it could figure something out if you kept telling it where it fucks up.
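To make the error concrete, here's a quick numerical check (the values of n, k, i are arbitrary illustrations, not from the original problem): the remainder r_i is valid in the decomposition, but it generally does not equal the full power, so the model's later substitution is invalid.

```python
# Illustrative check that r_i = n**(k-i-1) mod 2**(i+1) is generally
# NOT equal to n**(k-i-1) itself, so substituting one for the other
# (as the model's proof does) breaks the argument.
n, k, i = 3, 5, 1            # arbitrary example values
power = n ** (k - i - 1)     # 3**3 = 27
modulus = 2 ** (i + 1)       # 2**2 = 4
q, r_i = divmod(power, modulus)
assert power == q * modulus + r_i   # the legitimate decomposition
print(r_i, power)                    # 3 27 -- r_i != power
```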
/images/17435421010140572.webp
I copied your comment, and it insisted it was correct. I then shared the image, and it seems to think that the issue is imprecise terminology on its part rather than an actual error.
Here's the initial response:
https://rentry.org/yzvh9n47
After putting the image in:
https://rentry.org/c6nrs385
The important bit:
This happens because while the model has been programmed by some clever sod to apologize when told that it is wrong, it doesn't actually have a concept of "right" or "wrong". Just tokens with different correlation scores.
Unless you explicitly tell/program it to exclude the specific mistake(s) it made from future iterations (a feature typically unavailable in current LLMs without a premium account), it will not only continue to make those mistakes but "double down" on them, because whatever most correlates with the training data must, by definition, be correct.
This is wrong! It would have been a reasonable claim to make a few years back, but we know for a fact this isn't true now:
https://www.anthropic.com/research/tracing-thoughts-language-model
There was other relevant work which shows that models, if asked if they're hallucinating, can usually find such errors. They very much have an idea of true versus false, to deny that would be to deny the same for humans, since we ourselves confabulate or can be plain old wrong.
Gemini 2.5 Pro Thinking in particular is far more amenable to reason. It doesn't normally double down and will accept correction. At least ChatGPT has the option to add memories about the user, so you can save preferences or tell it to act differently.
I'm slightly disappointed to catch it hallucinating, which is why I went to this much trouble instead of just accepting that as a fact the moment someone contested it. It's still well ahead of the rest.
Sorry, I meant to reply to @yunyun333's comment about "doubling down", but I can assure you that we do not "know that for a fact", and I feel the need to caution you against believing everything you read in the marketing materials.
The "hallucination problem" can not realistically be "solved" within the context of regression based generative models as the "hallucinations" are an emergant property of the mechanisms upon which those models function.
A model that doesn't hallucinate doesn't turn your vacation pictures into a Hayao Miyazaki frame either, and the latter is where the money and publicity are.
Developers can adjust the degree of hallucination up or down and tack additional models, interfaces, and layers on top to smooth over the worst offenses, as Altman and co continue to do, but the fundamental nature of this problem is why many people who are seriously invested in machine learning/autonomy dismiss models like GPT (from which Claude is derived) as an evolutionary dead-end.
Anthropic is a reputable company on the cutting edge of AI, so I'd ask you for concrete disagreements rather than generic advice for caution.
Here are other relevant studies on the topic:
https://openreview.net/forum?id=KRnsX5Em3W
https://openreview.net/forum?id=fMFwDJgoOB
https://aclanthology.org/2023.findings-emnlp.68/, an older paper from 2023.
This applies the same standard about the ability to differentiate truth from fiction that is used to justify that belief in humans.
Further, as models get larger, hallucination rates have consistently dropped. I recently discussed a study on LLM use for medical histories which found 0% and ~0.1% hallucination rates. As I've said before, humans are not immune to hallucinations or confabulations, I'd know since I'm a psych trainee. That's true even for normal people. The only barrier is getting hallucination rates to a point where they're generally trustworthy for important decisions, and in some fields, they're there. Where they're not, even humans usually have oversight or scrutiny.
There is a difference between hallucination and imagination. That is just as true for LLMs as it is for humans. Decreasing hallucination rates do not cause a corresponding decrease in creativity, quite the opposite.
Anthropic is a Silicon Valley start-up currently seeking investors that was spun out of OpenAI by friends of Sam Bankman-Fried.
From this we can infer things about the motives, politics, ethics, and thought processes of the founders/upper management. I think that a heavy dose of skepticism is warranted towards any claims they make, especially when said claim is regarding something they are trying to get you to invest in.
I skimmed the studies you linked and while the first makes the strongest case it is also the weakest version of the claim that an LLM "knows when It’s lying".
That "LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors." is trivially true but I would argue that the use of the word "truthfulness" here is in error. What the students are actually discussing in this study are the confidence intervals generated as part of the generative/inference process. The analysis and use of CIs to try and reduce hallucinations/error rates is not a novel insight or approach, it is almost as old as machine learning itself.
As such, I took the liberty of looking into the names associated with your 3 studies and managed to positively identify the professional profiles of 10 of them. Of those 10, none appear to hold any patents in the US or EU or have their names associated with any significant projects. Only 3 appear to have done much (if any) work outside of academia at the time the linked study was posted. Of those 3, only 1 stood out to me as having notable experience or technical chops. Accordingly, I am reasonably confident that I know more about this topic than the people writing or reviewing those studies.
There may be a difference between hallucination and imagination in humans. But I assure you that no such difference exists within the context of an LLM. When you examine the raw output of the generative model (i.e. what the algorithm is generating, not what is presented to the consumer), "hallucination rates" and "creativity" are almost 100% correlated. This is because "creative decisions" and "hallucinations" in a regression model are both essentially deviations from the training corpus, and the degree of deviance you're prepared to accept is a key consideration in both the design and evaluation of an ML algorithm.
I encourage anyone who is sincerely interested in this topic to watch this video. The whole thing is excellent, but for those with limited time/attention the specific portion relevant to this thread runs from 8 minutes 23 seconds to just over the 17 minute mark.
What does "solving" the hallucination problem look like, though? Humans also hallucinate all the time - in fact, arguably, this is one of the core reasons for the existence of this website and specifically this CW roundup thread - and it is something we've "solved" through various mechanisms of checking and verifying and holding people accountable, with none of them getting anywhere near perfect. Now, human hallucinations are more well understood than LLM ones, making them easier to predict in some ways, but why couldn't we eventually get a handle on the types of hallucinations that LLMs tend to have, allowing us to create proper control mechanisms for them such that the rate of actual consequential errors becomes lower than those caused by human hallucinations? If we reach that point, then could we say that hallucinations have been "solved?" And if not, then what does it matter if it wasn't "solved?"
"What does solving the hallucination looks like?" is a very good question. A major component of the problem is defining the boundaries of what constitutes "an error" and then what constitutes an acceptable error rate. Only then can you begin to think about whether or not that standard has been met and the problem "solved".
In short, the answer to that question is going to look very different depending on the use case. The requirements of the average white-collar office drone looking to translate a news article are going to be very different from the requirements of a cyber-security professional at a financial institution, or an industrialist looking to automate portions of their process.
When I'm giving my intake speech to interns and new hires, I talk about "the 9 nines". That is, in order to have a minimally viable product we must meet or exceed the standards of "baseline human performance" with 99.9999999% reliability. Imagine a test with a billion questions where one additional incorrect answer means a failing grade.
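As a back-of-the-envelope illustration of what "9 nines" implies (my own arithmetic, not the commenter's):

```python
# "9 nines" reliability leaves an error budget of about 1 in a billion.
reliability = 0.999999999            # 99.9999999%
error_rate = 1 - reliability         # ~1e-9 (floating-point fuzz aside)
questions = 1_000_000_000            # the billion-question test
expected_errors = error_rate * questions
print(round(expected_errors))        # roughly one expected error: one more fails
```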
In this context "Humans also hallucinate" is just not an excuse. Think about how many "visual operations" a person typically performs in the process of going about thier day. Ask yourself how many cars on your comute this afternoon, or words in this comment thread have you halucinated? A dozen? None? I you think you are sure, are you "9 nines" sure?
A lot of the current refinement and iteration work on generative machine learning models revolves around adding layers of checks to catch the most egregious errors (not unlike with humans, as you observed) and giving users the ability to "steer" them down one path or another. While this represents an improvement over the previous generation, such solutions are difficult/expensive to scale and actively deleterious to autonomy. The thinking being that "a robot" that requires a full-time babysitter might as well be an employee. This is why you can't buy a self-driving car yet.
I'm not sure what "baseline human performance" means in practice, but regardless of what actual objective criterion that means, we just have to get the error rate to be under 1/10^9 to be effective as a product, right? I don't understand how that, or any other rate you might choose, couldn't be reached, in principle.
Unimportant aside: I don't think 1 mistake in a billion is reasonable for any human or any tool, but, again, I don't know exactly what you're talking about where the rubber meets the road - do you have any examples of interns who fail this or are just on the threshold, where you calculated that they fail at 1.1/10^9 or 0.9/10^9, to better illustrate this concept? But regardless, the exact number is unimportant.
It's not meant to be an excuse. I'm actually not sure how many 9s sure I am that I didn't hallucinate anything in my commute today, and I'm not sure that anything in my life exceeds 9 nines certainty. I'm not sure what point this exercise is supposed to make, though. Could you explain how my not being 9 nines certain that I'm not hallucinating things like this very conversation (I'd guess I'm 3 or 4 nines sure at most?) affects the point about an LLM's ability to be useful as intelligent, semi-autonomous tools if we lower their error rate to be beneath that of a typical human serving a similar role?
I kinda respect it doubling down, but it's scrambling to cover its ass. Also, I noticed it forgot the "mod 2n" part of c_i, which also throws a wrench into things.
Ah... I get it now. Thank you! I'm disappointed to see hallucination and confabulation here, but if you're inclined, do keep trying out Gemini 2.5 Pro Thinking in particular. It's a good model.