This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Apologies for the abrasive tone: I take issue with your method and authorities, but would prefer you not to take it as a personal attack.
Funny that you talk about peeking under the hood, but later refer to Hofstadter. I'm sick of hearing about this guy – and cringe at the whole little adoring culture that nerds and «hackers» have built around his books. GEB and strange loops, this intellectual isomorphism of autofellatio, self-referential kabbalistic speculation detached from all non-synthetic evidence and loudly, proudly spinning its wheels in the air. «Dude, imagine thinking about… thinking! Isn't that, like, what programmers do? Woah…» It is pretty sad when a decently smart brain belongs to a man who happens to build a brand out of a single shower thought and gets locked by incentives into inflating it endlessly! Even worse when others mistake that for an epiphany, generations of poorly socialized kids looking for the promised deeper meaning, and establishment journalists respectfully asking the matured stoner for his Expert Input. (Then again, I may simply be envious).
The last time I saw Doug speak about machine learning was July 2022, when he was smug about having tricked GPT-3 into hallucinations (The Economist):
Narrator's voice: «they were 4 months away from ChatGPT». Today, pretty much the same text-only GPT-3, just finetuned in a kinda clever way for chat mode, can not only recognize absurd inputs but also explain the difference between itself and the previous version better than Hofstadter himself seems to understand it. This was done on top of the earlier InstructGPT tuning, which was likewise misrepresented by Experts On AI with what is basically a tech-illiterate boomer's conspiracy theory:
Today we know that LLMs have what amounts to concepts. Today, people have forgotten what they had expected of the future, and this Sci-Fi reality feels to them like business as usual. It is not.
Every little hiccup of AI, from hallucinations to poor arithmetic, gets put into the spotlight by its critics and explained by there not being any real intelligence under the hood – the sort they themselves have. The obvious intellectual capacity of LLMs demonstrated by, e.g., in-context learning is handwaved away as a triviality. Now, as Boretti says, implementing «frames or symbols or logic or some other sad abstraction completely absent from real brains» – that would be a «big theoretical breakthrough». We don't know if any of that exists in minds in any substantial, non-metaphorical sense, or is even rigorously imaginable, but some wordcels have made nice careers out of pontificating on those subjects. Naturally, if they can be formalized, it wouldn't be much of an engineering task to add them to current ML – the problem is that such reification of schemes only makes things worse. The actual conceptual repertoire developed by humble engineers and researchers over decades of their quest is much more elegant and expressive, and more deserving of attention today.
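To make the handwaved part concrete: in-context learning means the task is defined only by a couple of examples inside the prompt, and the model applies the pattern to a new input with no weight update at all. A minimal sketch, assuming the pre-1.0 `openai` Python client and an API key in the environment; the model name, prompt, and output are illustrative, not taken from anywhere in particular:

```python
import openai  # assumes the pre-1.0 openai client and OPENAI_API_KEY set in the environment

# In-context learning: the "training data" for this task exists only inside the prompt.
prompt = (
    "Label the sentiment.\n"
    "Review: loved every minute of it -> positive\n"
    "Review: the plot was a mess -> negative\n"
    "Review: a surprisingly touching film ->"
)

resp = openai.Completion.create(
    model="text-davinci-003",  # illustrative choice of an instruction-tuned GPT-3 model
    prompt=prompt,
    max_tokens=3,
    temperature=0,
)
print(resp["choices"][0]["text"].strip())  # typically "positive"; no gradient step was taken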
Do you really think that the idea of «predict the next word in a sentence» provides sufficient insight into the under-the-hood intelligence of LLMs, when it is trivial to change the training objective to blank-filling, and RLHF guarantees that there exists no real dataset for which the «predicted» – or rather, chosen – token is in fact the most likely one in that context?
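For what it's worth, the «training objective» really is that thin a layer. A toy PyTorch sketch, with random logits standing in for a model (nothing here comes from any particular codebase): in the simplest framing, next-token prediction and blank-filling differ only in which positions count as targets for the same cross-entropy loss.

```python
import torch
import torch.nn.functional as F

vocab, seq_len = 100, 8
logits = torch.randn(1, seq_len, vocab)          # stand-in for a model's output
tokens = torch.randint(0, vocab, (1, seq_len))   # a toy token sequence

# Next-token objective: at position t, predict the token at position t+1.
next_token_loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),
)

# Blank-filling objective: blank out a few positions and predict only those.
mask = torch.zeros(1, seq_len, dtype=torch.bool)
mask[0, [2, 5]] = True                           # hypothetical blanked positions
blank_fill_loss = F.cross_entropy(
    logits[mask],                                 # predictions at the blanked slots
    tokens[mask],                                 # the original tokens there
)
```

The framing describes the supervision signal, not the machinery that learned from it.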
Or that the process of self-attention, which actually undergirds those «predictions», is not «reasoning about reasoning» (because it cannot attend to… itself, the way you can attend to patterns of neuron spikes, presumably)?
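For reference, the operation in question is small enough to write out in full. A minimal single-head sketch in PyTorch, with toy dimensions and random weights rather than anything from a real model:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention: every position scores and mixes every position."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5   # pairwise relevance
    weights = F.softmax(scores, dim=-1)                     # where each token "looks"
    return weights @ v                                      # context-mixed representations

d = 16
x = torch.randn(1, 10, d)                         # 10 toy token embeddings
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)            # shape (1, 10, 16)
```

(A real causal LM also masks out future positions; that's omitted here for brevity.)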
Or that recurrence is hard to tack onto transformers? (Or, for that matter, specialized cognitive tools, multimodality, etc.?)
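And «recurrence» here is nothing exotic either: the Transformer-XL-style trick is simply to cache the previous segment's states and let the current segment attend over both. A rough sketch under the same toy setup as above (the single head and shapes are illustrative, not any specific architecture):

```python
import torch
import torch.nn.functional as F

d = 16
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))

def attend(queries, context):
    """Current-segment tokens attend over [cached states + current segment]."""
    q, k, v = queries @ w_q, context @ w_k, context @ w_v
    weights = F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return weights @ v

cache = None
for _ in range(3):                                 # three consecutive text segments
    segment = torch.randn(1, 8, d)                 # toy embeddings for this segment
    context = segment if cache is None else torch.cat([cache, segment], dim=1)
    out = attend(segment, context)
    cache = out.detach()                           # carried state: a crude form of recurrence
```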
And so on and so forth. But ultimately, after years and years of falsified forecasts and blatant displays of ignorance by skeptics, it is time to ask oneself: isn't this parlor trick of a stochastic parrot impressive enough to deserve more respect, at least, than the gimmicks of our fraudulent public intellectuals? It does, after all, make more sense when it talks, and is clearly more able to grapple with new evidence.
@2rafa reasonably observes that human intelligence may not be so different from next-word prediction. Indeed, if I close this tab (in Obsidian) and return to it in half an hour, I may forget where I was going with this rant and start with the most likely next word; and struggling for words in an intense conversation is an easy way to see how their statistical probabilities affect reasoning (maybe I'm projecting my meta-awareness, lol). But even if the substrate is incompatible: why do we think ours is better? Why do we think it supports «real reasoning» in a way that multiplying matrices, estimating token likelihood, or any other level of abstraction for LLM internals does not?
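«Estimating token likelihood» is, after all, a level of description you can literally watch. A small sketch using GPT-2 via Hugging Face transformers as a stand-in for any causal LLM (assumes `torch` and `transformers` are installed; the prompt is arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("If I close this tab and return in half an hour, I may", return_tensors="pt")
with torch.no_grad():
    logits = model(**ids).logits                  # (1, seq_len, vocab)

probs = torch.softmax(logits[0, -1], dim=-1)      # distribution over the next token
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p:.3f}")     # the model's top guesses for what comes next
```

Whether that counts as «real reasoning» is exactly the question; but it is a level of description, the same way «patterns of neuron spikes» is.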
It is not obvious that the human brain is anywhere near optimal for producing intelligent writing, or for much of anything except being itself. We didn't evolve to be general-purpose thinkers; we are just monkeys who had our brains scaled up under selective pressures in some limited range of environments, with stupid hacks like the phonological loop and an obsession with agency. Obviously LLMs are not optimal either (we are only pursuing them out of convenience), but they might still be better at producing our own text.
A plane doesn't flap its wings, but it definitely flies – and even though it's less efficient per unit of mass, in the absolute sense it does something no bird could. We do not understand birds well enough to replicate them from scratch, nor do we need to. Birds could never achieve truly general flight, the «move anything across the Earth» kind.
We, too, aren't enough for truly general intelligence. It remains to be shown that LLMs aren't a better fit for that purpose.