This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
That's the point: he is invited NOW, after "suddenly" shipping a model at Western frontier level.
Seven months ago, I said:
Presumably, this was true and this is him succeeding. As I note here.
As for how it used to be when he was just another successful quant fund CEO with some odd interests, I direct you to this thread:
So I stand by my conjectures.
So you recognize that the run itself as described is completely plausible, underwhelming even. Correct.
What exactly is your theory then? That it's trained on more than 15T tokens? 20T, 30T, what number exactly? Why would they need to?
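For a sanity check, here's a back-of-envelope compute estimate in Python. This is a rough sketch under stated assumptions: the standard ~6·N·D FLOPs approximation for one training run, the publicly reported ~37B activated parameters, and my own guesses for per-GPU throughput and utilization (neither figure comes from this thread):

```python
# Rough training-compute estimate. All inputs are assumptions or public
# figures, not numbers taken from the linked papers:
#   - 6*N*D FLOPs is the standard approximation for one training run
#   - N is the *activated* params per token (MoE), publicly reported ~37B
#   - peak throughput and MFU are my guesses for H800-class hardware
N_ACTIVE = 37e9       # activated parameters per token
TOKENS = 15e12        # training tokens (the 15T discussed above)
PEAK_FLOPS = 990e12   # assumed per-GPU peak BF16 FLOP/s
MFU = 0.35            # assumed model FLOPs utilization

total_flops = 6 * N_ACTIVE * TOKENS              # ~3.3e24 FLOPs
gpu_hours = total_flops / (PEAK_FLOPS * MFU) / 3600

print(f"total compute : {total_flops:.2e} FLOPs")
print(f"GPU-hours     : {gpu_hours / 1e6:.1f}M at {MFU:.0%} MFU")
```

That works out to roughly 2.7M GPU-hours, in the same ballpark as DeepSeek's own reported ~2.8M H800 GPU-hours: the boring 15T-token version of the run already fits the budget, so a secret 20-30T-token run explains nothing.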
Here's a Western paper corroborating their design choices [Submitted on 12 Feb 2024]:
Here's DeepSeek's paper from a month prior:
As expected, they kept scaling and increasing granularity. As a result, they predictably reached roughly the same loss at the same token count as LLaMA-405B. Their other tricks also helped with downstream performance.
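To make "increasing granularity" concrete, here is a toy sketch of fine-grained routing (a minimal illustration, assuming a plain top-k softmax router and ReLU FFN experts; the dimensions and names are mine, not the papers' exact architecture). Splitting each expert into m narrower ones and activating m*k of them keeps per-token FLOPs roughly fixed while the number of reachable expert combinations grows enormously:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

d_model = 64
n_coarse, top_k, m = 8, 2, 4        # m = granularity factor
n_fine = n_coarse * m               # 8 coarse experts -> 32 fine ones
k_fine = top_k * m                  # activate m*k, so FLOPs stay fixed
d_hidden = 128 // m                 # each fine expert is m-times narrower

# One tiny ReLU FFN per fine-grained expert (illustrative sizes).
W_up = [0.1 * rng.normal(size=(d_model, d_hidden)) for _ in range(n_fine)]
W_down = [0.1 * rng.normal(size=(d_hidden, d_model)) for _ in range(n_fine)]
router = 0.1 * rng.normal(size=(d_model, n_fine))

def moe_layer(x):
    """Route one token to its k_fine highest-scoring fine experts."""
    logits = x @ router
    chosen = np.argsort(logits)[-k_fine:]            # top m*k experts
    gates = np.exp(logits[chosen] - logits[chosen].max())
    gates /= gates.sum()                             # softmax over chosen
    out = np.zeros(d_model)
    for g, i in zip(gates, chosen):
        out += g * (np.maximum(x @ W_up[i], 0.0) @ W_down[i])
    return out

print(moe_layer(rng.normal(size=d_model)).shape)          # (64,)
print(comb(n_coarse, top_k), "vs", comb(n_fine, k_fine))  # 28 vs 10518300
```

With these toy numbers, the coarse layer has C(8,2) = 28 possible expert combinations per token, while the fine-grained layer has C(32,8) of roughly 10.5 million at the same activated parameter count, which is the basic reason granularity helps.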
There is literally nothing to be suspicious about. It's all simply applying best practices and not fucking up, almost boring. The reason people are so appalled is that the American AI industry is bogged down in corruption covered with tasteless mythology, much like the Russian military pre-February 2022.
It's pretty weird: there's nothing there that any of the big labs in the West should have trouble replicating a hundred times over, and DeepSeek still managed to make something that can trade blows with them (and subjectively win, more often than not).
Might it really be just clarity of purpose leading to focusing on what matters? About a week ago, Claude lectured me, apropos of nothing, about how it's best to buy from local bookstores instead of online retailers, in response to my asking what kind of textbook would be used for a particular course. I've never experienced DeepSeek doing anything even close to that, and it makes me wonder if the extraneous post-training being lathered on is the real difference here. Western models get distracted and are pulled in a thousand different directions, while DeepSeek can focus on what's relevant.
I'm not impressed by "they work in a field censured by the state, therefore they have no state connections". Jack Ma was also (personally!) censured by the state, and he's certainly connected. In the US, the DOJ seeks to break up Google. The Sacklers got sued into oblivion. All these people are connected - getting rekt by government action is an occupational hazard of being Noticed by the government, and those who are Noticed typically try to ingratiate themselves.
Thanks for the links about the model training, that's interesting reading.