Friday Fun Thread for January 31, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

Oh, I should have added: we just followed the vLLM quickstart guide and tried to run the DeepSeek R1 Qwen 14B distill on it with default settings.

I guess the error was that by default it loads 16-bit weights, so the model doesn't fit into VRAM, right?
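For what it's worth, the back-of-envelope arithmetic supports that guess. A rough sketch (weights only; real usage also needs room for the KV cache and runtime overhead, so the true footprint is higher):

```python
# Approximate VRAM needed just for the weights of a 14B-parameter model.
# Illustrative arithmetic only, not a real memory profile.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory for model weights alone, in GB."""
    return n_params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(14e9, 2)  # 16-bit weights: 2 bytes per parameter
print(f"fp16 weights alone: ~{fp16_gb:.0f} GB")  # ~28 GB, before KV cache
```

So 16-bit weights alone already blow past any consumer card's VRAM, which would explain the out-of-memory failure with default settings.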


Then we started trying to run the Docker images of Hugging Face's TGI and to get unsloth's dynamically quantised models running on it. Couldn't get that to work; it kept doing weird things, such as insisting it was out of memory even though the model was a lot smaller than the VRAM.

Anyway, this was mostly my dad futzing around with it. After I finish with some writing I need to get done I'll give koboldcpp a try.

I've only got a 4070, so it'll have to be a 14B model with 4- or 5-bit quantisation.
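Same arithmetic as above says that should just about work on the 4070's 12 GB, assuming the quantised weights are the dominant cost (context/KV cache eats into what's left, so headroom at 5-bit is tight):

```python
# Rough fit check for a 14B model on a 12 GB RTX 4070 at low-bit quantisation.
# Weights only; KV cache and overhead come out of the remainder.

def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate memory for quantised model weights, in GB."""
    return n_params * bits_per_param / 8 / 1e9

VRAM_GB = 12  # RTX 4070
for bits in (4, 5):
    gb = weight_memory_gb(14e9, bits)
    print(f"{bits}-bit: ~{gb:.2f} GB weights, "
          f"~{VRAM_GB - gb:.2f} GB left for KV cache etc.")
```

At 4-bit that's about 7 GB of weights with roughly 5 GB to spare; at 5-bit it's about 8.75 GB, which is getting snug but still fits.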