
Friday Fun Thread for January 31, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


I'm horrified that there are guides that lead people to use a 4090 to barely run a 14B model at 0.6 t/s.

For clarity, with a single 4090 you should be able to run a 14B at 8-bit (near-flawless quality) and probably get more than 40 t/s, with tons of space left over for context. But you'd be better off running a 32B at 4-5 bit, which should still have low quantization loss and massively better quality simply because the model is larger. You can even painfully squeeze a 70B in there, but the quality loss at the required ~2 bit is probably not worth it. All of those should run at 20-30 t/s minimum.
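The back-of-envelope math behind those claims is just parameters times bits per weight. A rough sketch (the numbers are approximate: real quant formats like GGUF or exl2 add per-block scale overhead, and the KV cache and activations need extra room on top of the weights):

```python
def weight_vram_gb(params_billion, bits_per_weight):
    """Approximate GiB of VRAM needed for the model weights alone."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

# The three configurations above, on a 24 GB card:
for params, bits in [(14, 8), (32, 4.5), (70, 2)]:
    print(f"{params}B @ {bits}-bit ~ {weight_vram_gb(params, bits):.1f} GiB")
```

All three land comfortably under 24 GiB for weights, which is why the 4090 has headroom left for context.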

I think vllm is meant for real production use on huge servers. For home use I'd start with koboldcpp (really easy), llama.cpp (command line only), or ooba/tabbyapi with exl2. The latter is faster on pure GPU, but has the downside that you have to deal with Python instead of a pure standalone binary.

Oh, I should have added that we just followed the vllm quickstart guide and tried to run the deepseek-qwen 14B R1 on it with default settings.

I guess the error was that by default it loads 16-bit weights, which don't fit into VRAM, right?
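That's the usual failure mode. A quick sketch of the arithmetic, assuming fp16/bf16 (2 bytes per weight), which is what most engines default to when you don't specify a quantization:

```python
# 14B parameters at 2 bytes each, before any KV cache or activations.
params = 14e9
fp16_gb = params * 2 / 1024**3
print(f"fp16 weights alone: {fp16_gb:.1f} GiB")
```

That's roughly 26 GiB for the weights alone, already past the 24 GB on a 4090 before the engine even allocates cache.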


Then we started trying to run the Docker images of huggingface's TGI and to get unsloth's models that use dynamic quantisation running on it. Couldn't get that to work; it kept doing weird things, such as insisting it was out of memory even though the model was a lot smaller than the VRAM.

Anyway, this was mostly my dad futzing around with it. After I finish with some writing I need to get done I'll give koboldcpp a try.

I've only got a 4070 so it'll have to be a 14B model with some 4-5bit quantisation.