
Small-Scale Question Sunday for July 9, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

I've run a couple of LLMs offline using guides I found on the LocalLlama subreddit, though I don't actively use them because they're far less intelligent than ChatGPT, even 3.5. By analogy, if GPT-3.5 is a middle schooler and GPT-4 is a high school freshman, the best currently available models that can run at reasonable speed on a high-end consumer GPU (e.g., a 4090 with 24 GB of VRAM) are at a 1st- or 2nd-grade level. So trading that much usability for privacy, customizability, and a lack of censorship doesn't make a lot of sense for my own use cases.
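For anyone curious what "running a local LLM" actually looks like in practice, here's a minimal sketch using llama-cpp-python, one of the common backends those LocalLlama guides point to. The model path and generation parameters are placeholders, not something from the guides; any quantized GGUF/GGML file you've already downloaded would slot in.

```python
# Minimal sketch: load a locally downloaded quantized model with
# llama-cpp-python and generate a completion entirely on this machine.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-13b.q4_0.gguf",  # placeholder path to a local model file
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows (e.g. a 24 GB 4090)
    n_ctx=2048,       # context window; raise if the model supports more
)

result = llm(
    "Q: Why might someone prefer a local model over a hosted one?\nA:",
    max_tokens=128,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```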

But if privacy is a high priority, it's pretty trivial to run a local LLM and make sure it stays private: just disconnect your computer from the internet while you use it. The UI tools for running these models are all open source and are checked out directly from GitHub, so you can verify for yourself that they aren't saving your prompts and responses and sending them back to some central server somewhere. I admit I haven't checked this myself, but the community is active enough that something that egregious would have been caught by now.
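Beyond physically disconnecting from the network, which is what the comment suggests, there's also a purely software-side option the comment doesn't mention: the Hugging Face libraries expose offline flags that refuse all network access, so nothing can be fetched or phoned home during a run. A hedged sketch, assuming transformers is installed and the model directory already exists on disk (the path is a placeholder):

```python
# Sketch: force an entirely offline run with Hugging Face transformers.
# The offline env vars must be set before the library is imported; with them
# set, only models already cached or stored locally can be loaded.
import os

os.environ["HF_HUB_OFFLINE"] = "1"        # block all Hub network requests
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # force cache-only / local loading

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="./local-model-dir",  # placeholder: a model directory already on disk
)

out = generator("Local models stay private because", max_new_tokens=50)
print(out[0]["generated_text"])
```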