Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Notes -
I believe large language models take much more VRAM for generation than image models do. For example, the open model BLOOM requires 352 GB. So it's not realistic to do it on your local machine at the moment.
The only project I've seen along these lines is https://github.com/LAION-AI/Open-Assistant, but I don't think it's real yet.
Amazing... I had seen a number like that elsewhere but I assumed that was for training models--not for hosting local instances of them. Based on that thread, I have to wonder whether OpenAI made some kind of tremendous breakthrough such that they could host so many conversations to the public, or whether they just happen to have dedicated an ungodly amount of compute to their public demonstrations.
Training consumes far more matmuls than inference. LLM training runs at batch sizes in the millions of tokens, so if you aren't training a new model, you already have enough GPUs lying around to serve millions of customers.
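To put rough numbers on that, here is a back-of-the-envelope sketch using the common approximations of ~6*N*D FLOPs for training and ~2*N FLOPs per generated token for inference; the 175B-parameter and 300B-token figures for GPT-3 are assumptions for illustration, not anything OpenAI has confirmed:

```python
# Back-of-the-envelope: training vs. inference compute for a GPT-3-sized model.
# Approximations: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per generated token.
N = 175e9   # parameters (GPT-3 scale, assumed)
D = 300e9   # training tokens (illustrative)

train_flops = 6 * N * D    # ~3.2e23 FLOPs for the full training run
flops_per_token = 2 * N    # ~3.5e11 FLOPs per generated token

print(f"training run ~ {train_flops:.1e} FLOPs")
# One training run costs about as much as generating ~9e11 tokens:
print(f"equivalent inference ~ {train_flops / flops_per_token:.1e} tokens")
```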
Inference of LLMs isn't that much of a problem, because batch size scales rather cheaply, and there are tricks to make it even cheaper at production volume. OpenAI is still burning through millions a month (we aren't sure of the exact figure), but it's probably less expensive than their current training runs. This is one of the relevant papers.
Also, contra @xanados, I'd say they probably don't run those models in bf16; at 8-bit precision that 352 GB comes down to roughly 180 GB.
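The arithmetic behind that, as a minimal sketch (weights only, ignoring activations and KV cache; the 176B parameter count is BLOOM's):

```python
# Approximate memory needed just to hold the weights of a ~176B-parameter model.
params = 176e9
for precision, bytes_per_param in [("bf16/fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"{precision:>9}: ~{params * bytes_per_param / 1e9:.0f} GB")
# bf16 ~ 352 GB, int8 ~ 176 GB -- which is where the ~180 GB figure comes from.
```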
@Porean investigated a similar question recently.
Re: your question – I also don't know of any convenient one-click-install local LLM-based applications (at least for natural-language-focused models). There are not-terrible models you can run locally (nothing close to ChatGPT, to be clear), but you'll have to fiddle around in the CLI for now. There's no rush to develop such apps because, again, models of practical size won't fit on any consumer hardware. Text is vastly less redundant than images.
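If you do want to fiddle, it looks roughly like this. A sketch only, assuming the Hugging Face transformers library and a smallish open model such as EleutherAI/gpt-neo-1.3B; both are illustrative picks, not recommendations:

```python
# Minimal local text generation with the transformers library.
# A 1.3B-parameter model needs roughly 5-6 GB of RAM in fp32; quality is far below ChatGPT.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Running a language model locally requires"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```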
I hope people soon realize the risk of offloading their brain extensions to Microsoft's cloud, and learn to pool resources to rent servers for community-owned LLMs at a reasonable cost, or something along those lines.
For comparison, Stable Diffusion has 890 million parameters and GPT-3/ChatGPT has 175 billion, so about 200x. I think they probably have really good mechanisms to distribute their queries rather than a breakthrough in efficiency of inference, but I'm not super knowledgeable about this topic.
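Checking that ratio (pure parameter counts, nothing about architecture or memory layout):

```python
sd_params = 890e6    # Stable Diffusion
gpt3_params = 175e9  # GPT-3 / ChatGPT's base model
print(gpt3_params / sd_params)  # ~197, i.e. roughly 200x
```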