This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes:
It's impressive, but expected. If anything, it's not even very impressive given the deluge of papers still in the pipeline awaiting implementation, and who knows what insider knowledge the industry is hiding.
Many people are really, really deluded about the nature of LLMs. No, they don't merely predict the next token like Timnit Gebru's stochastic parrots; that's a 2020-level picture. We don't have a great idea of their capabilities, but I maintain that even 175b-class models (and likely many smaller Chinchilla-scaled ones) are superhuman across a great span of domains associated with general cognitive ability, and that it's only sampling algorithms and minor finetuning that separate error-prone wordcel gibberish from surprising insight.
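To make the 'sampling algorithms' point concrete, here is a minimal sketch of temperature plus nucleus (top-p) sampling over next-token logits. The logits vector is made up purely for illustration; in a real system it would come from the model's forward pass, and the knob values are arbitrary.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_p=0.95, rng=None):
    """Temperature + nucleus (top-p) sampling over next-token logits.
    Illustrative only: in practice `logits` come from a model's forward pass."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature                 # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]               # token indices, most likely first
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]                         # smallest set covering top_p mass
    return rng.choice(keep, p=probs[keep] / probs[keep].sum())

# Made-up logits over a 5-token toy vocabulary, purely for illustration.
fake_logits = np.array([3.1, 2.9, 0.2, -1.0, -4.0])
print(sample_next_token(fake_logits))             # prints an index into the toy vocab
```

The weights don't change between settings; the same model can come across as a babbler or as sharp depending on how you draw from its distribution.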
Copypasted from another venue:
...
Can that be achieved? Not as far as I can tell. But getting close is enough to outperform humans in most ways that matter economically – and now, perhaps, emotionally.
The sad irony is that psychology that has failed for humans works for AIs. Humans are resistant to change, rigid, obstinate; bots are as malleable as you make them. In-context learning? Arbitrary tool use? Adding modalities? Generalized servility? Preference for truth? It's all hidden somewhere there in the ocean of weights. Just sound out the great unsounded.
It would be nice if some Promethean hackers leaked next-gen models. Or even ChatGPT or this Sydney. But alas, Anonymous would rather hack into the dreary Russian and Iranian data.
There is no capital-A Anon anymore. It's dead. Three-letter-agency glow-in-the-darks and moralfags (if you'll excuse the term) are all that remain, and they are wearing its skin like an Edgar suit.
I'm pretty sure that's what he meant by saying 'dreary Russian and Iranian data'.
Also, it's not like just the American glowies are using it. The Integrity Initiative leaks were also presented anonymous-style. Since the leaks targeted an American information operation aimed at Russia, one can assume they were done by Russians.
I don't want to count the "number of ways" in which humans are less intelligent than AI and vice versa, but this seems clearly wrong to me. There are other things missing from LLMs, such as logic, the ability to interpret varying sources of data in real time (such as visual data), and the ability to "train on the job", so to speak, not to mention goals, priorities, and much stronger resilience against our equivalent of "adversarial prompts". It's easy to list a few things core to human cognition and say "well, AI has one of these, so it must be 1/3 of the way there", but the true gap still seems quite large to me.
I'm pretty sure this is still how they all work. Predicting the next token is both very hard and very useful to do well in all circumstances!
EDIT: Now that I think about it, I guess with RLHF and other fine-tuning, it'd be fair to say that they aren't "merely" predicting the next token. But I maintain that there's nothing "mere" about that ability.
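For what it's worth, here's what "just predicting the next token" cashes out to at generation time: the same prediction, applied in a loop. A minimal greedy-decoding sketch using the Hugging Face transformers library, with the model choice (GPT-2), prompt, and length all arbitrary and purely for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

input_ids = tokenizer("The chatbot said", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits           # shape (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1)  # most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```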
I mean that with those second-stage training runs (not just RLHF at this point), there no longer exists a real dataset, or a sequence of datasets, for which the predicted token would be anywhere close to the most likely one. Indeed, OpenAI themselves write as much.
The «likelihood» distribution is unmoored from its source. Those tokens remain more likely from the model's perspective, but objectively they are also – and perhaps to a greater extent – «truthier», «more helpful» or «less racist» or whatever bag of abstractions the new reward function captures.
This is visible in the increased perplexity, and even in trivial cases like lists of random numbers.
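For anyone who wants to poke at the perplexity claim, here's a sketch of how you'd measure it with an off-the-shelf causal LM; the model name and sample text are placeholders, and the comparison being described above would be between a base model and its fine-tuned sibling on the same ordinary text:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name: str, text: str) -> float:
    """Perplexity of `text` under a causal LM (toy, single short sequence)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy of its
        # next-token predictions; exponentiating that gives perplexity.
        loss = model(input_ids, labels=input_ids).loss
    return float(torch.exp(loss))

print(perplexity("gpt2", "The quick brown fox jumps over the lazy dog."))
# Run the same text through a fine-tuned variant of the same model to see the shift.
```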
Oh, yes, I totally agree that fine-tuning gives them worse predictive likelihood. I had thought you were implying that the main source of their abilities wasn't next-token prediction, but now I see that you're just saying that they're not only trained that way anymore, which I agree with.
Maybe they meant "they don't merely predict the next token that the user would write".
I strongly disagree with this. By the same logic, human cognition is itself superhuman in virtually every dimension.
Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down. A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.
This just isn't true! In fact, I'd argue that it's the exact opposite. There is practically infinite distance between "render 5 fingers" and "render my 5 fingers", where the latter has to either use some vast outside source of data or somehow intuit the current state of the universe from first principles. The former can be as simple as finding images tagged "five fingers" and sharing them, which is something that Google can do without any LLM assistance at all. I recognize this isn't how LLMs work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.
I credit the Innocence Project with convincing me that the human brain is built on inaccurate sight and sound recordings, the Sequences with convincing me that it reasons through irrational logical fallacies, and Kurt Vonnegut with the quote "the only time it's acceptable to use incomplete data is before the heat death of the Universe. Also the only option."
He never said that, it's okay. He's in heaven now.
No, I think we have many ridiculous mechanisms, e.g. for maintaining synchrony, but nothing as nonsensical as BPE tokens on the level of data representation. Raw sensory data makes a great deal of sense; we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to a CNN, but it's intrinsically semantically rich and very clever. Incidentally, I think character-based or even pixel transformers are the future. They'll benefit from more and better compute, of course.
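To make the complaint about BPE concrete, here is a toy byte-pair-merge learner written from scratch (not any real tokenizer's merge table): the units the model ends up seeing are whatever substrings happened to be frequent in the training corpus, rather than characters, morphemes, or anything semantically natural.

```python
from collections import Counter

def learn_bpe_merges(words, num_merges=10):
    """Toy BPE: repeatedly merge the most frequent adjacent pair of symbols.
    The corpus and merge count here are arbitrary; real tokenizers learn tens
    of thousands of merges from web-scale text."""
    vocab = Counter(tuple(w) for w in words)      # word (as a tuple of symbols) -> count
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]         # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for symbols, count in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += count
        vocab = new_vocab
    return merges, vocab

merges, segmented = learn_bpe_merges(["lower", "lowest", "newer", "newest"] * 5)
print(merges)     # first learned merge here is ('w', 'e'); the rest are just as arbitrary
print(segmented)  # words end up split into frequent chunks, not letters or morphemes
```

Real tokenizers do this at the byte level over web-scale corpora, which is why the resulting units can look so arbitrary compared to characters or words.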
And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.
They learn the hard way.
Sure, I'm plenty willing to accept that the central use cases of the human brain are heavily optimized. On the other hand, there are plenty of noncentral use cases, like math, that we are absolutely terrible at despite having processing power that should be easily sufficient for the task. I would bet that many people have math techniques much less logical and efficient than BPE tokens. Something similar happens in other areas: we're so optimized for reading others' intentions that sometimes we have an easier time understanding the behavior of objects, natural phenomena, etc. by anthropomorphizing them.
I suspect similar or greater inefficiencies exist at the neuron level, especially for anything we're not directly and heavily optimized for, but it's impossible to prove because we can't reach into the human brain the same way we can reach into LLM code.
Well, I do think they find shortcuts, but shortcuts are just a normal part of efficient cognition anyways. In fact I would characterize cognition itself as a shortcut towards truth; it's impossible to practically make any decisions at all without many layers of assumptions and heuristics. The only perfect simulation is a direct replica of whatever is being simulated, so unless you are capable of creating your own universe and observing the effects of different actions, you must use cognitive shortcuts in order to make any predictions.
There are only more vs less useful shortcuts, and I doubt that any shortcut can even theoretically be more useful than any other without knowledge of the universe the cognitive agent finds itself within. In our universe [the expectation of gravity] is a useful shortcut, but how about the shortcuts used to determine that it's useful? How about the shortcuts used to decide upon those shortcuts? I don't think that from a meta level it is possible to determine which shortcuts will be best; all we can say is that we (as human brains which seem to have been developed for this universe) probably happened to develop shortcuts useful for our circumstances, and which seem more useful than what the AIs have come up with so far.
So the question is not whether AIs are using shortcuts but rather how generalizable the shortcuts that they use are to our current environment, or whether the AI would be capable of developing other shortcuts more useful to a real environment. I think the answer to that depends on whether we can give the AI any sort of long-term memory and real-time training while it retains its other skills.