It bears mentioning that so far ChatGPT has shown no ability to discern factual correctness in its responses. No capability appears to exist in the architecture to be able to differentiate between a reasonable response and utter horseradish.
Any appearance of correctness is because the model happens to contain content that correlates with the input text vector. The machine is simply returning a bunch of tokens from its model that match the input with high probability with some random variance. The machine cannot problem solve, because problem solving requires predicting whether a change will make a solution more or less correct. The machine may be able to predict an outcome but it chooses randomly not based on correctness.
While it can appear to perform adequately for subjective tasks (blogs, marketing, social media), the illusion crumbles when presented a novel challenge that requires some degree of correctness. Humans, dogs, rats, and yes even birds, seem to be able to adapt to novel conditions estimating their competence and adjusting their behaviour. The machine appears to have no clue which change is better or worse and so it just picks randomly, by design.
With this is mind, it’s amusing when users and developers attempt to instruct the model, especially so when it is expected to restrict some behaviour, such as ChatGPTs system prompts. The machine has no concept of whether its output meets the expectations of its prompt.
If code generated by ChatGPT happens to perform competently at a task, it’s because the human has engineered a vector that points to a region in the model that contains tokens from an existing example that solves the users problem. Any problem solving ability is still firmly encapsulated in the human operator’s brain.
It bears mentioning that so far ChatGPT has shown no ability to discern factual correctness in its responses. No capability appears to exist in the architecture to be able to differentiate between a reasonable response and utter horseradish.
Any appearance of correctness is because the model happens to contain content that correlates with the input text vector. The machine is simply returning a bunch of tokens from its model that match the input with high probability with some random variance. The machine cannot problem solve, because problem solving requires predicting whether a change will make a solution more or less correct. The machine may be able to predict an outcome but it chooses randomly not based on correctness.
While it can appear to perform adequately for subjective tasks (blogs, marketing, social media), the illusion crumbles when presented a novel challenge that requires some degree of correctness. Humans, dogs, rats, and yes even birds, seem to be able to adapt to novel conditions estimating their competence and adjusting their behaviour. The machine appears to have no clue which change is better or worse and so it just picks randomly, by design.
With this is mind, it’s amusing when users and developers attempt to instruct the model, especially so when it is expected to restrict some behaviour, such as ChatGPTs system prompts. The machine has no concept of whether its output meets the expectations of its prompt.
If code generated by ChatGPT happens to perform competently at a task, it’s because the human has engineered a vector that points to a region in the model that contains tokens from an existing example that solves the users problem. Any problem solving ability is still firmly encapsulated in the human operator’s brain.
More options
Context Copy link