This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
A sample-efficiency difference between radically dissimilar substrates would be a very small hill to die on when arguing for a conceptual limitation. But anyway: LLMs acquire «grammar» at about the same pace as humans.
Here's a fascinating new paper: Modern language models refute Chomsky’s approach to language:
Consider that children are exposed to about 6-20k words per day. So in 3 years, they can realistically process tens of millions of words. And that's augmented with all our truly innate social hooks, hypothesis-testing and multimodality that GPTs have been devoid of.
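As a quick sanity check on that range (taking the 6-20k words/day figure above at face value, and a flat three-year window as a simplifying assumption):

```python
# Back-of-the-envelope estimate of a child's total word exposure,
# using the 6,000-20,000 words/day range cited above over three years.
days = 3 * 365

low, high = 6_000 * days, 20_000 * days
print(f"{low:,} to {high:,} words")  # 6,570,000 to 21,900,000 words
```

So 'tens of millions' is about right – and still orders of magnitude below the hundreds of billions of tokens in a modern LLM pretraining corpus.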
It's a long-deserved hatchet job. The statistical-learning paradigm isn't just shown to be more useful in engineering, or even closer to the biological truth, than generative linguistics – it's more epistemologically mature, philosophically profound and, yes, elegant; as often happens when people hone their thinking by challenging reality rather than in adversarial ivory-tower circlejerks.
I'm not sure what specifically @2rafa meant – and Chomsky is lost in his mirror labyrinth of mottes and baileys. In any case, she's exactly right.
Interesting. If it holds up, I'm updating significantly against universal grammar. (I still see some grounds for skepticism: in my experience the LLaMAs often make conspicuous grammatical mistakes in languages such as German, which were represented in their training set well in excess of the word counts discussed above; and from my limited look at the grammatical evaluation sets in that battery, they tend to suffer from a certain American laconicity that may make them insufficient for evaluating understanding of recursive structure – e.g. center-embedded sentences like 'The rat the cat the dog chased bit ran away'.)
I'll probably come back with more commentary once I've had time to read the whole of it, but I already have an issue – possibly a nitpick, possibly a portent of a more general methodological criticism – right on the second page:
This line of argumentation seems wrong in a way that suggests sloppiness about something that should be a core concern of such a paper. LLMs, among many other things, are lossy compression algorithms with respect to their training set. An output not being an exact reproduction therefore does not imply that it is not a reproduction at all, any more than "I searched the internet for images with the same first 20 pixels and found no matches" implies that a given JPEG is an original creation.
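A toy sketch of why exact-match search is the wrong test here (the strings and the 20-character prefix are made up for illustration):

```python
import difflib

# A "training" sentence and a lossily degraded copy of it, standing in
# for an LLM output that reproduces training data without matching it verbatim.
original = "The quick brown fox jumps over the lazy dog near the riverbank."
degraded = "the quick brown fox jumped over a lazy dog near the river bank"

# Exact-prefix search (analogous to "same first 20 pixels"): finds nothing,
# because even the first character differs in case.
corpus = [original]
print([s for s in corpus if s.startswith(degraded[:20])])  # []

# A similarity measure tells the real story: the two are ~90% identical.
ratio = difflib.SequenceMatcher(None, original, degraded).ratio()
print(f"{ratio:.2f}")  # ~0.9 – clearly a reproduction, not an original creation
```

The same logic applies at JPEG scale: lossy re-encoding perturbs the first 20 pixels, so a failed prefix search proves nothing about originality.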