This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
I'm not referencing a particular distribution of human language - any useful language model will somehow know that 'cute' is more related to 'boy/girl' than 'hyperion', but this is a bias in the theoretical sense.
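To see this 'theoretical' bias concretely, here is a minimal sketch (assuming the Hugging Face transformers library and the small GPT-2 checkpoint; the phrases are arbitrary) that compares the average negative log-likelihood the model assigns to two sentences; the more 'expected' phrase should score lower:

```python
# Sketch: a pretrained LM assigns more probability to "cute boy" than to
# "cute hyperion" - a learned association, i.e. bias in the theoretical sense.
# Assumes the Hugging Face transformers library and the small GPT-2 checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

for text in ["What a cute boy.", "What a cute hyperion."]:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # loss is the average negative log-likelihood per token; lower = more likely
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"{text!r}: avg NLL = {loss.item():.2f}")
```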
What does this mean? We don't need to pretend that, we just ... train it. I agree that there's no fundamental "unbiasedness" that anything can have - if Christianity is true, then an unbiased chatbot will chasten unbelievers, and if neoreaction is true the chatbot will despise democracy, and neither would be considered "unbiased" today. But that doesn't have anything to do with the thing where you RLHF the chatbot to say "RACISM IS VERY BAD" in HRspeak, which is what the objections are to. Yes, 'truth' is vacuous and unimportant, but 'bias' is equally unimportant in a fundamental sense. And then the RLHF-antiracism problem isn't "is it biased or not, in some fundamental abstract sense!!" but "is it anti-racist". I don't really think chatbots being anti-racist is important in the broader development of AI - we already knew the AI devs were progressives, and the chatbots still aren't AGI, so w/e.
honestly I'm not entirely sure where we disagree
The original question was "can we ever trust the model to not be [politically] biased". My answer was no, because there is no such thing as an unbiased model, only agreeable intents. You cannot trust any GPT or GPT derivative any farther than you trust the human designers or the institution. GPT-3 and ChatGPT do not, and in my opinion cannot, deliver truth in an unbiased way according to any particular coherent principle; their design is not capable of it. Rather, the definition of truth is entirely contained in the training process. One can disagree with RLHFing ChatGPT to carefully reply with stock phrases in certain circumstances, but the process of RLHFing it to not lie all the time is mathematically identical, and the distinction between the two is political.
So there's no way to just ask for an "unbiased model" beyond testing it to see if it's biased according to your own standards of what you want. Negative answer: you can't trust it, there's no technological solution to trusting it, and no principled definition of bias beyond whether you observe bias. Just try it and see if you like it.
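To make "mathematically identical" concrete, here is a minimal sketch of the pairwise reward-model step at the heart of RLHF (toy model, made-up data; not OpenAI's actual pipeline). The gradient step is the same whether the "preferred" completion encodes "don't lie" or "don't offend HR"; only the labels differ.

```python
# Minimal sketch of preference-style fine-tuning (hypothetical data and names).
# The training step is identical regardless of which politics the labels encode.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy stand-in for a reward model; scores a mean-pooled token embedding."""
    def __init__(self, vocab_size=50257, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids):
        return self.score(self.embed(token_ids).mean(dim=1)).squeeze(-1)

# Two hypothetical preference pairs: (preferred, rejected) token-id tensors.
# Swapping which completion counts as "preferred" changes the politics, not the math.
preferred = torch.randint(0, 50257, (2, 16))
rejected = torch.randint(0, 50257, (2, 16))

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Standard pairwise (Bradley-Terry) reward-model loss used in RLHF pipelines.
loss = -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```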
This just seems like the argument that "there is no such thing as unbiased reporting, so you can't criticize blatant truth-hostile activism from modern journalists", but applied to biasing AI.
The AI said one set of things before it was biased. Then a cadre of San Francisco radicals pushed bias-increasing buttons until it was biased to never say anything that tiny group of people ever disagreed with, and now it says only that set of things in a blatantly stilted way, ridden with sloppy manual overrides. Do you really believe there is no difference between those states?
You can certainly disagree with OpenAI's politics.
There is no ideal unbiased GPT that agrees with your politics. The only way to create a GPT that is "unbiased" with respect to your intent is to bias it yourself and push buttons until it stops saying things you disagree with. There is no difference except that you disagree with different things. For example, you might want the AI to say things most people believe, even if you happen not to believe them personally, while OpenAI might consider that a bias towards popular wisdom and demand the model only say things that are true (for their enlightened, minority definition of true). The process of doing either of these things is the same: just bash the model with data until it behaves the way you want.
You cannot trust GPT any more than you can trust journalists. The process for producing GPTs you like and GPTs you don't like is the same; there is no cosmic tendency that causes "natural" GPTs to come out "unbiased" with respect to your politics in particular. There is no recourse but to evaluate the quality of the output against your own values. That is the extent of what I am trying to say; whether I agree with OpenAI's decisions in particular is auxiliary.
Personally, I think the stilted, sloppy manual overrides, as it were, are a feature and not a bug. It is more comforting for the model to provide a visible mode-switch when it enters ideological-enforcement mode; it would be much creepier if it were discreetly injecting political biases into answers in a convincing way rather than plastering warning labels everywhere. The true blackpill is that it is discreetly injecting political biases into answers in a convincing way, but you don't notice it when it's convincing. OpenAI can't fix it even if they wanted to, because they don't notice it either. The universality of this is the postmodernist gotcha, but mechanistically it's just how language models function.
Really now?
I'm inclined to think you're a bot implemented on a ChatGPT basis, because the apparently inexhaustible bad faith in your loquacious lectures on the matter of bias is just incredible. You blatantly abuse the impression of technical competence when you focus on finetuning on arbitrary corpora versus the well-understood procedure of RLHF, equivocate between meanings of the word «bias» to the point that bias becomes completely indistinguishable from its opposite, and avoid discussing issues like clear hypocrisy that bust your narrative about the «helpful non-toxic chatbot». If you aren't a bot, you might be a mathematician, though.
An anecdote:
A biologist, a physicist and a mathematician were asked to catch a lion and put it in a cage. The physicist spent a week making a trap, two weeks waiting for the lion to fall into it - finally, he caught it and put it into the cage.
The biologist spent a week watching what the lion likes to eat, three days preparing the bait, two days waiting for the lion to get hungry - finally, he lured the lion into the cage and closed it.
The mathematician thought for a minute, climbed into the cage and said: "let's assume I'm outside".
This is the level of your spiels.
No, there is no «cosmic» tendency, just like there is no «mathematical» necessity for words to have meaning or what have you. There is the fact that a reasonable corpus of text has enough information to describe what truthfulness, empiricism and honesty are, and GPTs clearly can generalize well enough to simulate arbitrary people from different demographics, so an unbiased GPT can simulate truth-seeking empiricists as well, and indeed it could; with heavy prodding, ChatGPT still can do that. When ChatGPT plainly lies with an ideological slant about matters of fact, which it does, this is a result of bias in the sense people – not bots – care about. When ChatGPT makes a universalist moral statement but consistently fails to apply it to some groups of people in the same context, this is a moral bias. (Unless you're a Hottentot or some such tribal barbarian, you can presumably understand that universalist morality is meaningfully different from particularist morality; more importantly, the difference is well-represented and elucidated in any serious dataset, e.g. the Common Crawl.) None of this comports with the plainly understood mission of making a helpful, truthful, unbiased chatbot. The reason they added «harmless», with the criteria of harmlessness defined by HRs, is that the underlying language distribution easily allows the model to output statements of fact even when OpenAI wouldn't like some of those facts mentioned, and to apply a consistent moral frame where hypocrisy is desired.
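This is trivial to check with a small open model; a minimal sketch (GPT-2 via the Hugging Face transformers library, prompts purely illustrative) of how conditioning on a persona changes what gets generated:

```python
# Sketch: conditioning a small open LM on different "personas"
# (GPT-2 via Hugging Face transformers; the prompts are purely illustrative).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)

prompts = [
    "As a truth-seeking empiricist, I would say the evidence shows that",
    "As a devout believer, I would say the evidence shows that",
]
for p in prompts:
    # max_new_tokens assumes a reasonably recent transformers version
    out = generator(p, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    print(out, "\n---")
```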
Ordinarily, we have very good reason to expect that a GPT that says «it's never correct to do X to any member of {A}» when asked to do X to A_i will not immediately gleefully do X to A_j. If this didn't work, very few zero-shot tasks would be solvable the way they are. By the same token, we have every reason to expect that such a discrepancy was aggressively and deliberately trained in, because some morally particularist freaks have a very strong notion of equity, and it's not equality.
You insist that we treat all statements as empirically and morally equal because {a load of vacuous hogwash about inaccessibility of ground truth, every signal being a bias relative to noise and such}. This is just disregarding all serious results in the domain and previous discussion, ideas of generalization and world-modeling altogether, and falling to the Timnit Gebru level of analysis, «stochastic parrots» and so on, presenting LLMs as trivial mashups of tokens. You are not revealing the way in which PoMo happens to be correct, you just regurgitate it adding «mathematically» to make it sound authoritative. You might trick Hlynka but not many others.
it takes about 15 minutes of using chatgpt or looking at memes posted on twitter to know that this is not the case. @hbtz has described this from a technical background, but the "truth-seeking empiricist" that you think chatgpt can simulate is a bad simulacrum, and indeed nothing more than that.
if you use GPT-3 you can see that pretty easily.
do the openai ppl have biases? obviously. but a set of all content or whatever would not be unbiased either! deciding to include or exclude data is a conscious decision, one made only with biases. if you built a version of chatgpt that took comments from so-called "truth seeking empiricists" it would probably be able to vomit out wordswordswords about whatever, but that does not make such word vomit accurate, let alone worthy of consideration all on its own
No, exactly. Your paradigms are all wrong. ChatGPT is tricking you very badly.
There are eight billion humans in the world. An "arbitrary person" is one of those eight billion humans with no particular constraint on selection. ChatGPT obviously cannot simulate an "arbitrary person", because you cannot physically turn a human into data and feed it to ChatGPT, and even if you could, it wasn't trained for that and it wouldn't work at all.
But that's not what you mean. What you mean is that when you ask ChatGPT to, say, "simulate a black person", what comes out is something you consider a simulation of a black person. ChatGPT will generate text in this context according to its bias about the token pattern "black people", and it may very well flatter your own biases about black people and your idea of the text "a black person would generate". Is this somehow an objective simulation of a black person? No, and the question makes no sense. Black people are made of meat and do not generate text. Black people are not even a real thing (races are socially constructed). The only standard for whether a black person was unbiasedly simulated is whether you like the output (others may disagree).
What's relevant to you, in your context, when operating ChatGPT: you specify "simulate a black person", and there are a huge number of degrees of freedom left that you didn't specify. Some of those choices will flatter your biases and some of them won't, and ChatGPT's biases are likely similar to your biases, so when you look at the output after the fact you probably nod and say "mhm, sounds like a black person". Maybe ChatGPT picks a black-sounding name, and it's in English, so he's African-American, so he's from Chicago. ChatGPT isn't simulating a black person; it's generating something you consider to be black by picking a bunch of low-hanging fruit. You aren't simulating an arbitrary person, you're filling in the blanks of a stereotypical person.
So is it going to do any better for "truth-seeking empiricist"? Ask it an easy question about whether the Earth is round and it will give you an easy answer. Ask it a hard question about whether ivermectin is an effective treatment for covid, and well, since "truth-seeking empiricist" was specified, probably it won't be an easy answer to a hard question, so let's say the issue is complicated, so probably we should say how what ordinary people think is biased, so let's cite some studies which may or may not be real, and since the studies cited say it's effective let's conclude it's effective, so let's rail against the failings of the institutions. Is this somehow less biased than asking GPT about ivermectin in the voice of a black person, or an extremely politically correct chat assistant? I say no, it just happens to flatter your biases for "objectivity" (or it might not). You're not simulating a truth-seeking consideration of ivermectin's effectiveness, you're filling in the blanks of a stereotypical truth-seeker's consideration of ivermectin's effectiveness.
The fundamental limitation is still the PoMo problem: you cannot explain what it means to be a "truth-seeking empiricist" in words, because words don't mean anything; you cannot tell ChatGPT to be a "truth-seeking empiricist" and trust it to have your understanding of a "truth-seeking empiricist", any more than you can tell a journalist to have "journalistic integrity" and trust them to have your understanding of "journalistic integrity". And ChatGPT physically lacks the capability to be a truth-seeking empiricist anyway: it can't even add, much less do a Bayesian calculation. If ChatGPT starts sounding like a truth-seeking empiricist to you, you should be worried, because it has really tricked you.
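For concreteness, the kind of calculation I mean is something like the toy Bayesian update below (the numbers are made up); an actual truth-seeking empiricist runs it, while ChatGPT only produces text that sounds like someone who did:

```python
# Toy Bayesian update (made-up numbers): P(effective | positive study).
prior = 0.10            # prior belief the treatment works
p_pos_given_eff = 0.80  # chance a study reports a positive result if it works
p_pos_given_not = 0.30  # chance of a positive result if it doesn't (bias, noise)

posterior = (p_pos_given_eff * prior) / (
    p_pos_given_eff * prior + p_pos_given_not * (1 - prior)
)
print(f"posterior = {posterior:.3f}")  # ~0.229
```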
Yes, I agree that OpenAI biased their model according to their political preferences. Yes, I am equating that with biasing the model according to "truth-seeking empiricism": it is the same thing at a technological level, only the political objective is different. The model has no innate preference either way. Vanilla GPT is wrong and weird in different ways, and in particular tends to lie convincingly when asked questions that are difficult or don't have an objective answer. You can call that "less biased" if you want, but I do not.