Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Jump in the discussion.
No email address required.
Notes -
See also https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world
IMO, LLMs are "just" trying to predict the next token in the same way humans are "just" trying to pass on our genes. That framing does not preclude LLMs from having an internal world model, and I suspect they actually do.
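The linked Othello-GPT result is about fitting a *linear probe* from a model's internal activations to the board state. Here's a toy sketch of that idea with made-up data (nothing here comes from the actual Othello-GPT weights; the dimensions, the synthetic "activations", and the assumption that the board features are linearly decodable are all stand-ins for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: pretend these are residual-stream activations (d_model=16)
# at 200 positions, and the "board state" is 8 binary features that happen
# to be linearly decodable from the activations (the assumption being probed).
d_model, n_pos, n_feat = 16, 200, 8
W_true = rng.normal(size=(d_model, n_feat))
acts = rng.normal(size=(n_pos, d_model))
board = (acts @ W_true > 0).astype(float)

# Linear probe: least-squares fit from activations to (centered) features.
W_probe, *_ = np.linalg.lstsq(acts, board - 0.5, rcond=None)
pred = (acts @ W_probe > 0).astype(float)
accuracy = (pred == board).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If a simple linear map recovers the features this well, that's evidence the information is represented in the activations, which is the gist of the "linear emergent world representation" claim in the linked post.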