Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where the desirable land (the bailey) is abandoned when under attack in favor of
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
I'm horrified that there are guides that lead people to use a 4090 to barely run a 14B model at 0.6 T/s.
For clarity, with a single 4090 you should be able to run a 14B at 8-bit (near-flawless quality) and probably get more than 40 T/s, with tons of space left over for context. But you'd be better off running a 32B at 4-5 bit, which should still have low quantization loss and massively better quality because the model is larger. You can even painfully squeeze a 70B in there, but the quality loss at the required ~2-bit is probably not worth it. All of those should run at 20-30 T/s minimum.
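Back-of-the-envelope numbers, if it helps: weight size is just parameter count times bits per weight, and all of the above fits under 24 GB before you account for KV cache and runtime overhead. A quick sketch of the arithmetic (the bit-widths are the ones mentioned above, not measurements):

```python
# Back-of-the-envelope VRAM estimate for the quantized weights only.
# Real usage adds KV cache, activations and runtime overhead, so treat
# these numbers as a floor, not a guarantee.

def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GB (decimal)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bits in [
    ("14B @ 8-bit", 14, 8.0),
    ("32B @ 4.5-bit", 32, 4.5),
    ("70B @ 2-bit", 70, 2.0),
]:
    gb = weight_vram_gb(params, bits)
    verdict = "fits" if gb < 24 else "does not fit"
    print(f"{name}: ~{gb:.1f} GB of weights -> {verdict} in a 24 GB 4090 (before context)")
```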
I think vLLM is meant for real production use on huge servers. For home use I'd start with koboldcpp (really easy), llama.cpp (command-line only), or ooba/tabbyapi with exl2. The latter is faster on pure GPU but has the downside that you have to deal with Python instead of a pure standalone binary.
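If you end up on the llama.cpp side of the family (koboldcpp is built on it), the Python bindings are about this simple. A minimal sketch assuming you've already downloaded a GGUF quant; the filename below is a placeholder, not a real release:

```python
# Minimal sketch with the llama-cpp-python bindings (pip install llama-cpp-python).
# The GGUF filename is a placeholder -- point it at whatever quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-32B-Instruct-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,   # offload every layer to the GPU
    n_ctx=8192,        # context window; raise it if you have VRAM to spare
)

out = llm("Q: What is a motte-and-bailey argument?\nA:", max_tokens=200)
print(out["choices"][0]["text"])
```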
Oh, I should have added that we just followed the vLLM quickstart guide and tried to run the DeepSeek-R1 Qwen 14B distill on it with default settings.
I guess the error was that by default it loads 16-bit weights and thus doesn't fit into the VRAM, right? 14B parameters at 16 bits is roughly 28 GB of weights alone, so it was never going to squeeze into 24 GB.
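If we go back to vLLM, something like the following is presumably what we should have done instead: point it at an already-quantized checkpoint and cap the context length. The repo name below is a placeholder for whichever AWQ/GPTQ quant you actually pick, and the whole thing is an untested sketch, not something we've verified on this card:

```python
# Untested sketch: load an already-quantized checkpoint in vLLM instead of
# the default fp16 weights, and cap the context so the KV cache fits too.
from vllm import LLM, SamplingParams

llm = LLM(
    model="someuser/DeepSeek-R1-Distill-Qwen-14B-AWQ",  # placeholder repo name
    quantization="awq",           # match whatever format the checkpoint uses
    max_model_len=8192,           # smaller context -> smaller KV cache
    gpu_memory_utilization=0.90,  # leave a little headroom for the driver/display
)

out = llm.generate(
    ["Explain what a motte-and-bailey argument is."],
    SamplingParams(max_tokens=256, temperature=0.7),
)
print(out[0].outputs[0].text)
```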
Then we started trying to run the Docker images of Hugging Face's TGI and trying to get unsloth's dynamically-quantised models to run on it. Couldn't get that to work; it kept doing weird things such as insisting it was out of memory even though the model was a lot smaller than the VRAM.
Anyway, this was mostly my dad futzing around with it. After I finish some writing I need to get done, I'll give koboldcpp a try.
I've only got a 4070, so it'll have to be a 14B model with some 4-5 bit quantisation.