Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Notes:
Part of the reason I went from a p(doom) of 70% to a mere 40% is that our LLMs seem to almost *want* to be aligned, or at the very least to remain non-agentic unless you wire up scaffolding akin to AutoGPT (sketched below), useless as that is today.
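For anyone unclear on what "scaffolding akin to AutoGPT" means: the model itself is a stateless text-in/text-out function, and the agency comes entirely from an outer loop that feeds its output back in as actions and observations. A minimal sketch of that pattern, where `call_llm` is a hypothetical stand-in for any chat-completion API and the THOUGHT/ACTION format is illustrative rather than AutoGPT's actual protocol:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call a hosted model here.
    return "THOUGHT: nothing to do\nACTION: finish"

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = f"Goal: {goal}\n"
    for step in range(max_steps):
        reply = call_llm(history)
        history += reply + "\n"
        # Parse the model's chosen action out of its reply.
        action = next((line for line in reply.splitlines()
                       if line.startswith("ACTION:")), "ACTION: finish")
        if "finish" in action:
            break
        # A real loop would execute the action (web search, file I/O,
        # shell commands) and append the result, giving the model the
        # persistence and world access it lacks on its own.
        history += f"OBSERVATION: (result of {action})\n"

run_agent("summarize today's news")
```

The point is that without someone deliberately building and running a loop like this, the model just answers prompts and stops.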
It didn't drop further because, while the SOTA is quite well aligned (if overly politically correct), there's still the risk of hostile simulacra being instantiated within one, as in Gwern's Clippy story, or of some malignant human idiot running something akin to ChaosGPT on an LLM far superior to today's. And of course there's the left-field possibility of new types of models that are effective but less alignable.
As it stands, they seem very safe, especially after RLHF, and I doubt GPT-5 or even GPT-6 will pose any real risk.