
Small-Scale Question Sunday for June 23, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


OK, this is the last straw: I'll write up the state of open source in AI in detail, as I promised. Mods, would that work better in the roundup or as a separate post? I don't have a preference.

I happen to know a bit about this specific issue.

For now, in short:

Deepseek-Coder is, as far as anyone can tell, for real, and a bigger deal than Meta's Llama-3-70B. Claims to the contrary are mostly red-faced nationalistic sputtering and cope, in the vein of "Unitree robots are CGI, Choyna fakes and steals everything". (Indeed, we're at the stage where Stanford students, admittedly of Indian extraction, steal from Chinese labs.) It even caused Zvi to update. Aran, the main librarian of the whole field, says that "It has the potential to solve Olympiad, PhD and maybe even research level problems, like the internal model a Microsoft exec said was able to solve PhD qualifying exam questions."

It arguably, but pretty credibly, reaches parity with SoTA models like GPT-4 in the most utilitarian application of LLMs so far, which is code completion. It's comparably good at math and reasoning (even on benchmarks released after it was uploaded to Hugging Face, from Gaokao to open-ended coding workloads). It's substantially more innovative than any big Western open source release (small ones like SigLIP, Florence-2 etc. can compete), more open and more useful; it's so damn innovative we haven't figured out how to run it properly yet, despite very helpful papers. Design-wise, I'd say it's one year ahead of Western open source (not in raw capabilities, though). It's been trained on maybe 60% more compute than Llama-3-8B, while being 30 times bigger and significantly more capable, and it might well be only 2x more expensive to run.
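
For the curious, here's a back-of-envelope sketch of where the "60% more compute, only 2x to run" intuition comes from, using the standard C ≈ 6ND rule of thumb. The parameter and token counts below are the publicly reported figures as I recall them, so treat them as assumptions rather than official specs:

```python
# Back-of-envelope training-compute comparison via the C ~= 6 * N * D rule of
# thumb (N = parameters active per token, D = training tokens). All figures
# below are assumptions based on publicly reported numbers, not official specs.

def train_flops(active_params: float, tokens: float) -> float:
    """Approximate training FLOPs: C ~= 6 * N * D."""
    return 6 * active_params * tokens

# Llama-3-8B: 8B dense params, ~15T training tokens (reported)
llama3_8b = train_flops(8e9, 15e12)

# DeepSeek-Coder-V2: ~236B total params, but an MoE with ~21B *active*
# params per token, trained on ~10.2T tokens in total (reported)
dsc_v2 = train_flops(21e9, 10.2e12)

print(f"Llama-3-8B:        {llama3_8b:.2e} FLOPs")
print(f"DeepSeek-Coder-V2: {dsc_v2:.2e} FLOPs")
print(f"training ratio: {dsc_v2 / llama3_8b:.2f}x")  # ~1.8x, same ballpark as "60% more"

# Inference cost scales roughly with *active* params, not total: 21B vs 8B
print(f"active-param ratio (rough run-cost proxy): {21 / 8:.1f}x")  # ~2.6x
```

The MoE design is the whole trick: 236B total is "30 times bigger" than 8B, but only the ~21B active parameters show up in the per-token compute bill, which is why the run-cost multiplier lands near 2-3x rather than 30x.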

The issue of inference economics is unclear, but if their papers do not lie (and they don't seem to: the math makes sense, the model fits the description, and at least one respected scientist took part in developing this part and confirms everything), they can serve at those market-demolishing prices with a healthy margin, actually around 50% (if we ignore R&D costs, at least). Their star developers seem very young. A well-connected account, one that leaked Google's Gemini project and the Google Brain/DeepMind merger months before they were announced, joked (in the haha-kidding-not-kidding way) that "deepseek's rate of progress is how US intelligence estimates the number of foreign spies embedded in the top labs".
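
And a minimal sketch of the margin arithmetic, since "50% margin at those prices" sounds implausible until you write it down. Every number here (price point, GPU rental rate, throughput) is a hypothetical placeholder of mine, just to show the shape of the calculation:

```python
# Minimal sketch of the serving-margin arithmetic. Every number is a
# hypothetical placeholder, not from their papers or price list.

price_per_m_tokens = 0.14   # USD per million tokens (assumed price point)
gpu_hour_cost = 2.00        # USD per GPU-hour (assumed rental rate)
throughput_tok_s = 8_000    # assumed aggregate tokens/sec per GPU at high batch

tokens_per_gpu_hour = throughput_tok_s * 3600
cost_per_m_tokens = gpu_hour_cost / (tokens_per_gpu_hour / 1e6)

margin = 1 - cost_per_m_tokens / price_per_m_tokens
print(f"cost:   ${cost_per_m_tokens:.3f} per M tokens")
print(f"price:  ${price_per_m_tokens:.2f} per M tokens")
print(f"margin: {margin:.0%}")  # ~50% under these assumptions
```

The load-bearing assumption is the throughput line: thousands of tokens per second per GPU is only conceivable because a ~21B-active MoE batches so cheaply, which loops back to the design point above.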

We don't understand the motivations of Deepseek or the quant fund High-Flyer that sponsors them, but one popular hypothesis is that they are competing with better-connected big-tech labs for government support, given American efforts to cut off China's chip supply. After all, the Chinese share the same priors about their own labs' trustworthiness, so you have to be maximally open to Western evaluators to win the Mandate of Heaven.

Interested to see your thoughts. I also saw the Unitree robots; I thought they were real but couldn't really tell. On reflection, Chinese CGI has a certain artificial look to it that was missing here.