This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Anthropic is a reputable company on the cutting edge of AI, so I'd ask you for concrete disagreements rather than generic advice for caution.
Here are other relevant studies on the topic:
https://openreview.net/forum?id=KRnsX5Em3W
https://openreview.net/forum?id=fMFwDJgoOB
https://aclanthology.org/2023.findings-emnlp.68/, an older paper from 2023.
This applies the same standard about the ability to differentiate truth from fiction that is used to justify that belief in humans.
Further, as models get larger, hallucination rates have consistently dropped. I recently discussed a study on LLM use for medical histories which found hallucination rates of 0% and ~0.1%. As I've said before, humans are not immune to hallucinations or confabulations; I'd know, since I'm a psych trainee. That's true even for normal people. The only barrier is getting hallucination rates to a point where the models are generally trustworthy for important decisions, and in some fields they're already there. Where they're not, even humans usually have oversight or scrutiny.
There is a difference between hallucination and imagination. That is just as true for LLMs as it is for humans. Decreasing hallucination rates do not cause a corresponding decrease in creativity, quite the opposite.
Anthropic is a Silicon Valley start-up, currently seeking investors, that was spun out of OpenAI by friends of Sam Bankman-Fried.
From this we can infer things about the motives, politics, ethics, and thought processes of the founders/upper management. I think that a heavy dose of skepticism is warranted towards any claims they make, especially when said claim is regarding something they are trying to get you to invest in.
I skimmed the studies you linked, and while the first makes the strongest case, it also defends the weakest version of the claim that an LLM "knows when it's lying".
That "LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors" is trivially true, but I would argue that the use of the word "truthfulness" here is in error. What the authors are actually discussing in this study are the confidence intervals generated as part of the generative/inference process. The analysis and use of CIs to try to reduce hallucination/error rates is not a novel insight or approach; it is almost as old as machine learning itself.
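For anyone who wants the idea made concrete: the classic confidence-based filtering being alluded to can be sketched in a few lines. This is a toy illustration of the general technique, not any particular paper's method; the function names, the 4-way logit vectors, and the 0.5 cutoff are all invented for the example. It treats the model's top-token softmax probability as a per-token confidence score and flags positions where that confidence dips below a threshold:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def flag_low_confidence(token_logits, threshold=0.5):
    """Flag positions where the top-token probability falls below
    `threshold` -- a crude proxy for 'the model is unsure here'."""
    flags = []
    for logits in token_logits:
        probs = softmax(logits)
        flags.append(max(probs) < threshold)
    return flags

# Toy example: three 'tokens', each with a 4-way logit vector.
# The second position has nearly flat logits, i.e. low confidence.
logits_per_token = [
    [5.0, 0.1, 0.0, -1.0],   # confident
    [0.2, 0.1, 0.0, 0.1],    # uncertain
    [4.0, 1.0, 0.5, 0.0],    # confident
]
print(flag_low_confidence(logits_per_token))  # [False, True, False]
```

Real systems layer much more on top of this (calibration, probes on hidden states, semantic entropy over multiple samples), but the underlying signal is the same per-token uncertainty.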
As such, I took the liberty of looking into the names associated with your 3 studies and managed to positively identify the professional profiles of 10 of them. Of those 10, none appear to hold any patents in the US or EU or have their names associated with any significant projects. Only 3 appear to have done much (if any) work outside of academia at the time the linked study was posted. Of those 3, only 1 stood out to me as having notable experience or technical chops. Accordingly, I am reasonably confident that I know more about this topic than the people writing or reviewing those studies.
There may be a difference between hallucination and imagination in humans, but I assure you that no such difference exists within the context of an LLM. When you examine the raw output of the generative model (i.e. what the algorithm is generating, not what is presented to the consumer), "hallucination rates" and "creativity" are almost 100% correlated. This is because "creative decisions" and "hallucinations" in a regression model are both essentially deviations from the training corpus, and the degree of deviation you're prepared to accept is a key consideration in both the design and evaluation of an ML algorithm.
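To make the "degree of deviation you're prepared to accept" point concrete: in sampling-based decoding, temperature is exactly this knob. The sketch below uses my own toy numbers, not anything from the linked studies. It shows that raising the temperature moves probability mass from the modal continuation onto the long-shot tokens; the same setting that buys surprising output also buys more off-distribution output.

```python
import math

def temperature_softmax(logits, temperature):
    """Scale logits by 1/T before softmax. T > 1 flattens the
    distribution (more deviation from the modal continuation);
    T < 1 sharpens it (more conservative output)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

# One dominant continuation and two long-shot ones.
logits = [4.0, 2.0, 0.0]
cold = temperature_softmax(logits, 0.5)
hot = temperature_softmax(logits, 2.0)

# Probability mass on the non-modal tokens grows with temperature.
print(sum(cold[1:]))   # small tail at low temperature
print(sum(hot[1:]))    # much larger tail at high temperature
```

There is no flag in the sampler that distinguishes a "creative" low-probability token from a "hallucinated" one; that label is assigned after the fact by the reader.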
I encourage anyone who is sincerely interested in this topic to watch this video. The whole thing is excellent, but for those with limited time/attention the specific portion relevant to this thread runs from 8 minutes 23 seconds to just over the 17 minute mark.