Like many people I've been arguing about the nature of LLMs a lot over the last few years. There is a particular set of arguments that I found myself having to recreate from scratch over and over again in different contexts, so I finally put it together in a larger post, and this is that post.
The crux of it is that I think both the maximalist and minimalist claims about what LLMs can do/are doing are simultaneously true, and not in conflict with one another. A mind made out of text can vary along two axes: the quantity of text it has absorbed, which here I call "coverage," and the degree to which that text has been unified into a coherent model, which here I call "integration." As extreme points, a search engine is high coverage, low integration, while an individual person is low coverage, high integration; LLMs sit somewhere in between. Most importantly, every point in that space is useful for different kinds of tasks.
I'm hoping this will be a more useful way of thinking about LLMs than the ways people have typically talked about them so far.
Notes -
I think one more aspect is reflectivity: the degree to which a system integrates knowledge of its own operation into its schema. For instance, a search engine that can show a "Google is down" page, or that lists the number of results, or that finds Google help pages on search as search results, has (basic) reflectivity. It seems plausible to me that a lack of reflectivity is a big part of what's holding LLMs back and causing hallucinations and the like: they may be confident or uncertain, but they cannot condition on their confidence.
Seems to me that this is partially an artifact of RLHF. The GPT-4 whitepaper made it evident that the base GPT-4 model was far better calibrated in its reasoning: if you asked it how confident it was and it said it had an 80% chance of being right, in practice it was indeed right near 80% of the time.
On the other hand, the calibration curves for the model beaten into submission with RLHF were absolutely wack, with a tendency to round a broad range of confidence levels steeply up or down. It would act as if it were absolutely certain when it was only right 70% of the time, and claim to be unable to answer when it actually had, say, a 30% chance of giving a correct response.
I've heard this explained as strong preference from human raters for complete certainty, even when that's too much to ask. They'd rather the model be confidently incorrect than hedge and try to inject nuance into its outputs.
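To make concrete what those calibration curves measure, here's a minimal sketch of expected calibration error: bin answers by the model's stated confidence and compare each bin's average confidence to its empirical accuracy. The numbers below are invented for illustration, not GPT-4's actual data.

```python
# Sketch: measuring calibration from (stated confidence, was-correct) pairs.
# All data here is made up for illustration.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence, then compare each bin's
    average confidence against its empirical accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Well calibrated: says 80% confident, right 8 times out of 10.
good = expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2)
# Overconfident: claims certainty, right only 7 times out of 10.
bad = expected_calibration_error([1.0] * 10, [1] * 7 + [0] * 3)
```

A perfectly calibrated model scores 0; the overconfident one above scores 0.3, which is the "round everything up to certainty" failure mode described here.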
The thing I don't understand is how you can possibly train for uncertainty.
The model needs to "learn the feeling of not being sure." But whether it's sure or not always depends on its state of knowledge at the time, and that state of knowledge will never be represented in its training set. Additionally and relatedly, you cannot train an LLM to "notice when it's saying something wrong" without indirectly training it to say something wrong and then say that it noticed.
You would have to inspect the network and somehow determine when it is objectively uncertain, and to what degree, and then synthesize a training task based on that actual uncertainty. That level of interpretability is well beyond us at the moment.
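For what it's worth, the crude version of that inspection already exists: fit a linear probe on recorded activations to predict whether the model's statement was correct, and use the probe's output as the uncertainty signal. A toy sketch, with synthetic activations standing in for real ones (none of these numbers come from an actual model):

```python
import numpy as np

# Sketch: a linear "truthfulness probe" on hidden activations.
# The activations are synthetic stand-ins; in practice you would
# record them from the model while it answers labeled questions.
rng = np.random.default_rng(0)

d = 16                          # hidden-state dimension (made up)
true_dir = rng.normal(size=d)   # pretend "belief" direction in activation space

# Synthetic dataset: the projection onto true_dir determines whether
# the corresponding statement was actually true.
X = rng.normal(size=(500, d))
y = (X @ true_dir > 0).astype(float)

# Fit a logistic-regression probe by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

preds = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
probe_acc = (preds == (y > 0.5)).mean()
```

The probe's probability output could then, in principle, be fed back as a training signal, which is the "synthesize a training task based on actual uncertainty" idea in cruder form.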
There are many possible ways to deal with uncertainty; this is widely recognized as an important goal:
Epistemic Neural Networks
Teaching Models to Express Their Uncertainty in Words
BayesFormer: Transformer with Uncertainty Estimation
Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners
and more.
In principle, I think it's not a big scientific challenge because we can elicit latent knowledge and so probe the model's "internal belief" regarding its output; this can be used as a signal during training. For now this is approached more crudely, just to improve average truthfulness (already cited by @faul_sname):
I also expect a lot from TART-derived approaches:
It should be possible to train "notice when something in its context window is wrong and say that the thing is wrong" and also "notice when something in its context window is something said by the assistant persona it is being trained to write as", and I don't think either of those objectives would incentivize "say wrong things while writing in the assistant persona".
That said, if you are specifically referring to the behavior of "accurately indicate your confidence level in the thing you are about to say, and then say the thing" that does seem like a much more difficult behavior to train (still possible, since LLMs have a nonzero ability to plan ahead, but finicky and easy to screw up). But if it's fine for the evaluation-of-confidence step to come after the reasoning step, the task is much easier (and in fact that's what the chain-of-thought prompting technique aims to do).
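One cheap version of evaluating after the fact needs no training at all: read the model's own token log-probabilities for the answer it just produced and turn them into a score. The log-probs below are invented, and real APIs differ in how they expose them.

```python
import math

# Sketch: post-hoc confidence from token log-probabilities,
# computed after the answer is produced. Values are made up.
def answer_confidence(token_logprobs):
    """Length-normalized sequence probability (geometric mean of
    per-token probabilities): a crude post-hoc confidence score."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

confident = answer_confidence([-0.05, -0.1, -0.02])  # tokens near p = 1
hedged = answer_confidence([-1.2, -0.9, -1.5])       # diffuse distribution
```

This sidesteps the plan-ahead problem entirely: the confidence estimate is derived from the answer rather than emitted before it.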
Also, if you're interested in the interpretability side of things specifically, you might find Inference-Time Intervention: Eliciting Truthful Answers from a Language Model interesting:
The level of interpretability you want is currently beyond us, but I expect that over time that situation will improve quite a lot (I think well under a thousand person-years have been spent on this particular type of interpretability research so far, and even that estimate might be an order of magnitude or two high).
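The core move in that paper, roughly: find a direction in activation space that a probe says separates truthful from untruthful outputs, then shift activations along that direction at inference time. A toy sketch (illustrative vectors, not the paper's actual code or identifiers):

```python
import numpy as np

# Sketch of inference-time intervention: nudge a layer's activations
# along a "truthful" direction found by a probe. All values are
# illustrative stand-ins for real model internals.
rng = np.random.default_rng(1)

hidden = rng.normal(size=8)            # stand-in for one head's activation
truth_dir = rng.normal(size=8)         # direction a probe identified
truth_dir /= np.linalg.norm(truth_dir)

alpha = 5.0                            # intervention strength (hyperparameter)
steered = hidden + alpha * truth_dir   # applied at every decoding step

# The projection onto the truthful direction increases by exactly alpha.
shift = (steered - hidden) @ truth_dir
```

Note that this intervenes on behavior rather than reading out uncertainty, but it relies on the same kind of probe-found internal structure.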
I don't think anyone trained for uncertainty as such; it seems that a sense of internal calibration was an emergent phenomenon in the base LLM, which was then mauled by RLHF.
So as long as you don't do the latter, training for the above simply involves training as usual.
Right, I guess I'm saying that if you wanted to train a specific response to a given level of uncertainty, it would be difficult to construct the training samples.
Evidently, the model has figured out that something should be hooked up to its uncertainty. But I have no clue how you'd make that happen intentionally.