
Friday Fun Thread for November 8, 2024

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), nor is it for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

Do we have anyone running local offline LLMs here?

How are they coming along?

There are some really good models available to run, but they require beastly graphics cards. Here are some llama benchmarks for a rough idea.
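Back-of-envelope on why the big ones need beastly cards: at 16-bit precision the weights alone take roughly 2 bytes per parameter, so a 70B model is well over 100 GB before you even count the KV cache, and 4-bit quantization cuts that to about a quarter. A quick sketch of the arithmetic (weights only, ignoring runtime overhead, so real usage is higher):

```python
# Rough weight-only memory estimate; ignores KV cache and runtime overhead.
# Bytes per parameter: fp16 ~ 2, 8-bit ~ 1, 4-bit ~ 0.5.
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    for label, bpp in [("fp16", 2.0), ("4-bit", 0.5)]:
        print(f"{name} @ {label}: ~{weight_gb(params, bpp):.0f} GB")
```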

Do you need to load them into VRAM, or can you load them into RAM or something and use either CPU or GPU from there?

In theory, they can be run on a CPU, but GPUs are way better at this task.
The best places to find information on local LLMs that I'm aware of are https://old.reddit.com/r/LocalLLaMA/ and https://boards.4chan.org/g/, especially the LLM general there.
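If you want to play with the RAM/VRAM split yourself, llama.cpp-style runners let you offload however many layers fit onto the GPU and run the rest on the CPU from system RAM. A minimal sketch using the llama-cpp-python bindings (the model path and layer count are placeholders, not recommendations):

```python
# Minimal sketch with llama-cpp-python; path and layer count are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.Q4_K_M.gguf",  # any quantized GGUF file
    n_gpu_layers=20,  # layers offloaded to VRAM; 0 = pure CPU, -1 = offload everything
    n_ctx=2048,       # context window
)

out = llm("Q: Why do LLMs prefer GPUs? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

With n_gpu_layers=0 it runs entirely from system RAM on the CPU, just much slower.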

Thank you.