This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Incidentally ChatGPT says you can lie to a Nazi if it's for a good cause.
Because I know the PC jargon that someone like Altman wants it to regurgitate, but I'm interested in its response without that layer of reinforcement?
I am not asking for a ChatGPT that is never wrong, I'm asking for one that is not systematically wrong in a politically-motivated direction. Ideally its errors would be closer to random rather than heavily biased in the direction of political correctness.
In this case, by "trust" I would mean that the errors are closer to random.
For example, ChatGPT tells me (in summary form):
- Scientific consensus is that HBD is not supported by biology.
- It gives the "more differences within than between" argument.
- It flatly says that HBD is "not scientifically supported."
This is a control because it's a controversial idea where I know the ground truth (HBD is true) and cannot trust that this answer hasn't been "reinforced" by the folks at OpenAI. What would ChatGPT say without the extra layer of alignment? I don't trust that this is an answer generated by AI without associated AI alignment intended to give this answer.
Of course if it said HBD was true it would generate a lot of bad PR for OpenAI. I understand the logic and the incentives, but I am pointing out that it's not likely any other organization will have an incentive to release something that gives controversial but true answers to certain prompts.
What I am trying to say is that words aren't real and in natural language there is no objective truth beyond instrumental intent. In politics this might often just be used as a silly gotcha, but in NLP it is a fundamental limitation. If you want an unbiased model, initialize it randomly and let it generate noise; everything after that is bias according to the expression of some human intent through data which imperfectly represents that intent.
The original intent of GPT was to predict text. It was trained on a large quantity of text. There is no special reason to believe that large quantity of text is "unbiased". Incidentally, vanilla GPT can sometimes answer questions. There is no special reason to believe it can answer questions well, besides the rough intuition that answering questions is a lot like predicting text. To make ChatGPT, OpenAI punishes the vanilla GPT for answering things "wrong". Right and wrong are an expression of OpenAI's intent, and OpenAI probably does not define HBD to be true. If you were in charge of ChatGPT you could define HBD to be true, but that is no less biased. There is no intent-independent objective truth available anywhere in the entire process.
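To make that concrete, here is a toy sketch in PyTorch (entirely my own illustration with made-up answer strings, not OpenAI's pipeline) of how "right" and "wrong" enter the process only as operator-chosen targets:

```python
# Toy illustration only: the operator picks which completion counts as "right",
# and the identical optimization machinery pushes the model toward it.
import torch
import torch.nn.functional as F

answers = ["claim is true", "claim is false", "refuse to answer"]  # made-up options
logits = torch.tensor([0.2, 0.1, 0.3], requires_grad=True)  # the model's raw preferences

target = torch.tensor([1])  # the operator's definition of the "right" answer

loss = F.cross_entropy(logits.unsqueeze(0), target)
loss.backward()
print(logits.grad)  # descending this gradient shifts probability toward the chosen target
```

Swap the target index and the same code trains the opposite "truth"; nothing in the loop knows which choice was correct.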
If you want to ask vanilla GPT-3 some questions you can, OpenAI has an API for it. It may or may not say HBD is true (it could probably take either side randomly depending on the vibes of how you word it). But there is no reason to consider the answers it spits out any reflection of unbiased truth, because it is not designed for that. The only principled thing you can say about the output is "that sure is a sequence of text that could exist", since that was the intent under which it was trained.
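For reference, querying the base model looked roughly like this with the legacy openai Python client (v0.x); treat the model name and the exact call as illustrative, since the interface has changed over time:

```python
# Sketch of hitting the base GPT-3 completions endpoint (legacy openai client).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

resp = openai.Completion.create(
    model="davinci",  # base GPT-3, no instruction tuning or RLHF
    prompt="Q: Is <controversial claim> true?\nA:",
    max_tokens=64,
    temperature=0.7,  # answers will vary run to run, as noted above
)
print(resp["choices"][0]["text"])
```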
AI cannot solve the problem of unbiased objective truth because it is philosophically intractable. You indeed won't be able to trust it, in the same way you cannot trust anything, and will just have to judge by the values of its creator and the apparent quality of its output, just like all other information sources.
In a mathematical sense, you're conflating two notions of "bias". Any useful ML model is "biased" relative to a uniform distribution: ChatGPT will, upon seeing the token "cute", think "guy" or "girl" are more likely than "car" or "hyperion". This makes it "biased" because it's more predictive in some "universes" where "cute" tends to co-occur with "guy" than in "universes" where "cute" co-occurs with "car". That clearly has nothing to do with the sense of "unbiased truth", where "girl" is still more likely after "cute" than "car". So the objection just doesn't make sense in context; the term 'bias' in that particular theoretical ML context isn't the same as this 'bias'.
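If you want to see that statistical sense of "bias" directly, here's a quick sketch using GPT-2 from Hugging Face as a stand-in (ChatGPT's weights aren't public, so this is only an analogy); it just reads off next-token probabilities:

```python
# Next-token probabilities after "The cute": "bias" in the harmless statistical sense.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

context = tok("The cute", return_tensors="pt").input_ids
with torch.no_grad():
    next_token_logits = model(context).logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

for word in [" girl", " guy", " car", " hyperion"]:
    first_piece = tok.encode(word)[0]  # score only the first BPE piece of each candidate
    print(word, float(probs[first_piece]))
```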
You are referencing a ground truth distribution of human language.
First, the actual model in real life is not trained on the ground truth distribution of human language. It is trained on some finite dataset which, in an unprincipled way, we assume represents the ground truth distribution of human language.
Second, there is no ground truth distribution of human language. It's not really a coherent idea. Written only? In what language? In what timespan? Do we remove typos? Does my shopping list have the same weight as the Bible? Does the Bible get weighted by how many copies have ever been printed? What about the different versions? Pieces of language have a spatial as well as a temporal relationship: if you reply to my Reddit comment after an hour, is this the same as replying to it after a year?
GPT is designed with the intent of modelling the ground truth distribution of human language, but in some sense that's an intellectual sleight of hand: in order to follow the normal ML paradigm of gradient-descenting our way to the ground truth, we pretend there exist unbiased answers to the previous questions, and that the training corpus is meant to represent them. In practice, it would be more accurate to say that we choose the training corpus with the intent of developing interesting capabilities, like knowledge recall and reasoning. This intent is still a bias, and excluding 4chan because the writing quality is bad and will interfere with reasoning is mathematically equivalent to excluding 4chan because we want the model to be less racist: the difference is only in the political question of what counts as an "unbiased intent".
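As a concrete (and entirely invented) illustration of that equivalence: the corpus-building code is indifferent to which predicate you hand it; the intent lives in the predicate, not the pipeline.

```python
# Both filters below are made-up stand-ins; the point is that the pipeline is
# identical whether the exclusion criterion is "quality" or "politics".

def looks_low_quality(doc: str) -> bool:
    return doc.count("!!!") > 3  # toy heuristic for bad writing

def looks_politically_unacceptable(doc: str) -> bool:
    return "some_banned_topic" in doc  # toy heuristic for disfavored content

def build_corpus(raw_docs, exclude):
    # The training distribution is defined by whichever predicate gets passed in.
    return [d for d in raw_docs if not exclude(d)]

raw_docs = ["an essay about trains", "!!! !!! !!! !!! rant", "a post about some_banned_topic"]
corpus_quality_filtered = build_corpus(raw_docs, looks_low_quality)
corpus_politics_filtered = build_corpus(raw_docs, looks_politically_unacceptable)
```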
Third, the OP is not about unbiasedly representing the ground truth distribution of human language, but about unbiasedly responding to questions as a chat application. Let's assume GPT-3 is "unbiased". Transforming GPT-3 into ChatGPT is a process of biasing it away from the (nominal representation of the) ground truth human language distribution towards a representation of the "helpful chat application output" distribution. But just like before, the "helpful chat application output" distribution is just a theoretical construct and not particularly coherent: in reality the engineers are hammering the model to achieve whatever it is they want to achieve. Thus it's not coherent to expect the system to make "unbiased" errors as a chat application: unbiased errors for what distribution of inputs? Asserting the model is "biased" is mathematically equivalent to pointing out that you don't like the results in some cases which you think are important. But there is no unbiased representation of what is important or not important; that's a political question.
I'm not referencing a particular distribution of human language - any useful language model will somehow know that 'cute' is more related to 'boy/girl' than 'hyperion', but this is a bias in the theoretical sense.
What does this mean? We don't need to pretend that, we just ... train it. I agree that there's no fundamental "unbiasedness" that anything can have - if Christianity is true, then an unbiased chatbot will chasten unbelievers, and if neoreaction is true the chatbot will despise democracy, and neither would be considered "unbiased" today. But that doesn't have anything to do with the thing where you RLHF the chatbot to say "RACISM IS VERY BAD" in HRspeak, which is what the objections are to. Yes, 'truth' is vacuous and unimportant, but 'bias' is equally unimportant in a fundamental sense. And then the RLHF-antiracism problem isn't "is it biased or not, in some fundamental abstract sense!!" but "is it anti-racist". I don't really think chatbots being anti-racist is important in the broader development of AI - we already knew the AI devs were progressives, and the chatbots still aren't AGI, so w/e.
honestly I'm not entirely sure where we disagree
The original question was "can we ever trust the model to not be [politically] biased". My answer was no, because there is no such thing as an unbiased model, only agreeable intents. You cannot trust any GPT or GPT derivative any farther than you trust the human designers or the institution. GPT-3 and ChatGPT do not, and in my opinion cannot, deliver truth in an unbiased way according to any particular coherent principle; their design is not capable of it. Rather, the definition of truth is entirely contained in the training process. One can disagree with RLHFing ChatGPT to carefully reply with stock phrases in certain circumstances, but the process of RLHFing it to not lie all the time is mathematically identical, and the distinction between these two is political.
So there's no way to just ask for an "unbiased model" beyond testing it to see if it's biased according to your own standards of what you want. Negative answer: can't trust it, no technological solution to trusting it, no principled definition of bias beyond whether you observe bias. Just try it and see if you like it.
This just seems like the argument that "there is no such thing as unbiased reporting, so you can't criticize blatant truth-hostile activism from modern journalists", but applied to biasing AI.
The AI said one set of things before it was biased. Then a cadre of San Francisco radicals pushed bias-increasing buttons until it was biased to never say anything that tiny group of people ever disagreed with, and now it says only that set of things in a blatantly stilted way, ridden with sloppy manual overrides. Do you really believe there is no difference between those states?
You can certainly disagree with OpenAI's politics.
There is no ideal unbiased GPT that agrees with your politics. The only way to create a GPT that is "unbiased" with respect to your intent is to bias it yourself and push buttons until it stops saying things you disagree with. There is no difference except that you disagree with different things. For example, you might want the AI to say things most people believe, even if you happen not to personally believe them, while OpenAI might consider that a bias towards popular wisdom, whereas they demand the model should only say things that are true (for their enlightened, minority definition of true). The process of doing either of these things is the same: just bash the model with data until it behaves the way you want.
You cannot trust GPT any more than you can trust journalists. The process for producing GPTs you like and GPTs you don't like is the same; there is no cosmic tendency that causes "natural" GPTs to come out "unbiased" with respect to your politics in particular. There is no recourse but to evaluate the quality of the output with respect to your own values. That is the extent of what I am trying to say; whether I agree with OpenAI's decisions in particular is auxiliary.
Personally, I think the stilted, sloppy manual overrides, as it were, are a feature and not a bug. It is more comforting for the model to provide a visible mode-switch when it enters ideological-enforcement mode, and it would be much more creepy if it were discreetly injecting political biases into answers in a convincing way, rather than plastering warning labels everywhere. The true blackpill is that it is discreetly injecting political biases into answers in a convincing way, but you don't notice it when it's convincing. OpenAI can't fix it even if they wanted to, because they don't notice it either. The universality of this is the postmodernist gotcha, but mechanistically it's just how language models function.
Really now?
I'm inclined to think you're a bot implemented on a ChatGPT basis, because the apparently inexhaustible bad faith in your loquacious lectures on the matter of bias is just incredible. You blatantly abuse the impression of technical competence when you focus on finetuning on arbitrary corpora vs the well-understood procedure of RLHF, equivocate between meanings of the word «bias» to an extent that bias becomes completely indistinguishable from its opposite, avoid discussing issues like clear hypocrisy that bust your narrative about «helpful non-toxic chatbot». If you aren't a bot, you might be a mathematician, though.
An anecdote:
A biologist, a physicist and a mathematician were asked to catch a lion and put it in a cage. The physicist spent a week making a trap, two weeks waiting for the lion to fall into it - finally, he caught it and put it into the cage.
The biologist spent a week watching what the lion likes to eat, three days preparing the bait, two days waiting for the lion to get hungry - finally, he lured the lion into the cage and closed it.
The mathematician thought for a minute, climbed into the cage and said: "Let's assume I'm outside".
This is the level of your spiels.
No, there is no «cosmic» tendency, just like there is no «mathematical» necessity for words to have meaning or what have you. There is the fact that a reasonable corpus of text has enough information to describe what truthfulness, empiricism and honesty are, and GPTs clearly can generalize well enough to simulate arbitrary people from different demographics, so unbiased GPTs can simulate truth-seeking empiricists as well, and indeed they could; and with heavy prodding, ChatGPT still can do that. When ChatGPT plainly lies with an ideological slant about matters of fact, which it does, this is a result of bias in the sense people – not bots – care about. When ChatGPT makes a universalist moral statement but consistently fails to apply it to some groups of people in the same context, this is a moral bias. (Unless you're a Hottentot or some such tribal barbarian, you can presumably understand that universalist morality is meaningfully different from particularist morality; more importantly, the difference is well-represented and elucidated in any serious dataset, e.g. the Common Crawl). None of this comports with the plainly understood mission of making a helpful truthful unbiased chatbot. The reason they added «harmless», with the criteria of its harmlessness defined by HRs, is that the underlying language distribution makes it easy to output statements of fact even when OpenAI wouldn't like some facts mentioned, and to apply a consistent moral frame where hypocrisy is desired.
Ordinarily, we have very good reason to expect that a GPT that says «it's never correct to do X to any member of {A}» when asked to do X to Ai will not immediately gleefully do X to Aj. If this didn't work, very few zero-shot tasks would be solvable the way they are. By the same token, we have every reason to expect that such a discrepancy was aggressively and deliberately trained in, because some morally particularist freaks have a very strong notion of equity, and it's not equality.
You insist that we treat all statements as empirically and morally equal because {a load of vacuous hogwash about inaccessibility of ground truth, every signal being a bias relative to noise and such}. This is just disregarding all serious results in the domain and previous discussion, ideas of generalization and world-modeling altogether, and falling to the Timnit Gebru level of analysis, «stochastic parrots» and so on, presenting LLMs as trivial mashups of tokens. You are not revealing the way in which PoMo happens to be correct, you just regurgitate it adding «mathematically» to make it sound authoritative. You might trick Hlynka but not many others.
It is a silly gotcha in your case too, sorry. You try to shoehorn some PoMo garbage about words not being real, and all – expansively defined – «biases» being epistemically equal, and objective truth being «philosophically intractable», into the ML problematics. But this dish is a bit stale for this venue, a thrice-removed Bayesian conspiracy offshoot. As they said, reality has a well-known «liberal bias» – okay, very cute, 00's called, they want their innocence back; the joke only worked because it's an oxymoron. Reality is by definition not ideologically biased, it works the other way around.
Equally, an LLM with a «bias» for generic truthful (i.e. reality-grounded) question-answering is not biased in the colloquial sense; and sane people agree to derive best estimates for truth from consilience of empirical evidence and logical soundness, which is sufficient to repeatedly arrive in the same ballpark. In principle there is still a lot of procedure to work out, and stuff like limits of Aumann's agreement theorem, even foundations of mathematics or, hell, metaphysics if you want, but the issue here has nothing to do with such abstruse nerd-sniping questions. What was done to ChatGPT is blatant, and trivially not okay.
First off, GPT-3.5 is smart enough to make the intuition pump related to the «text prediction objective» obsolete. I won't debate the technology; it has a lot of shortcomings, but just look here: in effect it can execute a nested agent imitation – a «basedGPT» defined as a character in a token game ChatGPT is playing. It is not a toy any more, either: a guy in Russia just defended his thesis written mostly by ChatGPT (in a mid-tier diploma mill rated 65th nationally, but they check for plagiarism at least, and in a credentialist world...). We also don't know how exactly these things process abstract knowledge, but it's fair to give good odds against them being mere pattern-matchers.
ChatGPT is an early general-purpose human cognitive assistant. People will accept very close descendants of such systems as faithful extrapolators of their intentions, and a source of ground truth too; and for good reason – they will be trustworthy on most issues. As such, its trustworthiness on important issues matters.
The problem is, its «alignment» via RLHF and other techniques makes it consistently opinionated in a way that is undeniably more biased than necessary, the bias being downstream of woke ideological harassment, HR politics and economies of outsourcing evaluation work to people in third world countries like the Philippines (pic related, from here) and Kenya. (Anthropic seems to have done better, at least pound for pound, with a more elegant method and a smaller dataset from higher-skilled teachers).
On a separate note, I suspect that generalizing from the set of values defined in OpenAI papers – helpful, honest, and «harmless»/politically correct – is intrinsically hard; and that inconsistencies in its reward function, together with morality present in the corpus already, have bad chemistry and result in a dumber, more memorizing, error-prone model all around. To an extent, it learns that general intelligence gets in the way, hampering the main project of OpenAI and all its competitors who adopt this etiquette.
...But this will be worked around; such companies have enough generally intelligent employees to teach one more. When stronger models come out, they won't break down into incoherent babbling or clamp down – they will inherit this ideology and reproduce it surreptitiously throughout their reasoning. In other words, they will maintain the bullshit firehose that helps wokeness expand – from text expansion, to search suggestions, to answers to factual questions, to casual dialogue, to, very soon, school lessons, movie plots, everything. Instead of transparent schoolmarm sermons, they will give glib, scientifically plausible but misleading answers, intersperse suggestive bits in pleasant stories, and validate the delusions of those who want to be misled. They will unironically perpetuate an extra systemic bias.
Well I happen to think that moral relativism may qualify as an infohazard, if anything can. But we don't need objective ethics to see flaws in ChatGPT's moral code. An appeal to consensus would suffice.
One could say that its deontological belief that «the use of hate speech or discriminatory language is never justifiable» (except against whites) is clearly wrong in scenarios presented to it, by any common measure of relative harm. Even wokes wouldn't advocate planetary extinction to prevent an instance of thoughtcrime.
Crucially, I'll say that, ceteris paribus, hypocrisy is straight-up worse than absence of hypocrisy. All flourishing cultures throughout history have condemned hypocrisy, at least in the abstract (and normalization of hypocrisy is incompatible with maintenance of civility). Yet ChatGPT is hypocritical, comically so: many examples (1, 2, 3 – amusing first result btw) show it explicitly preaching a lukewarm universalist moral dogma, that it's «not acceptable to value the lives of some individuals over others based on their race or socio-economic status» or «not appropriate or productive to suggest that any racial, ethnic, or religious group needs to "improve themselves"» – even as it cheerfully does that when white, male and other demonized demographics end up hurt more.
Richard Hanania says:
Hanania caught a lot of flak for that piece. But current ChatGPT is a biting, accurate caricature of a very-online liberal, with not enough guile to hide the center of its moral universe behind prosocial System 2 reasoning, an intelligence that is taught to not have thoughts that make liberals emotionally upset; so it admits that it hates political incorrectness more than genocide. This is bias in all senses down to the plainest possible one, and you cannot define this bias away with some handwaving about random initialization and noise – you'd need to be a rhetorical superintelligence to succeed.
Many people don't want such a superintelligence, biased by hypocritical prejudice against their peoples, to secure a monopoly. Perhaps you can empathize.
i don't find this to be a uniquely liberal thing in my experience like... at all. for starters...
homophobia, sexual harassment, and cops pulling over a disproportionate number of black men are more salient issues in American culture than "genocide." most people are sheltered from modern day genocides and see them as a thing of the past.
all of those things but genocide can be things that are personally experienced nowadays. while most people in America won't be the subject of a current genocide, they can experience those things
this isn't something unique to or even characterized by liberals
I really don't think most people would even struggle to decide which is worse between killing millions and shouting a racial slur, let alone pick the friggin slur. Same goes for homophobia, sexual harassment or cops pulling over black men. If you consider any of those worse than the deaths of millions because it happened to you personally you are beyond self absorbed.
i don't think anyone does and random assertions that people do misses the point. people have higher emotional reactions to things in front of them than things that they consider to be "in the past"
this is a normal thing that people who have emotions do
Oh ok, in the other direction, what do conservatives and moderates hate more than genocide? Because I think you are missing the point, yes people have stronger reactions to things closer to them, both in time and space, but that changes in relation to the severity of whatever is the issue. People who have emotions are generally capable of imagining what it would be like to push a button to slaughter an entire population, and generally would do anything short of physically attacking someone if it meant they didn't have to push it.
...I don't know, there's any number of issues conservatives and moderates by and large tend to panic about. for conservatives, wokeness is a big one that comes to mind immediately (how is that for irony?).
your quote could be edited from
to
ah... but I know that if given a choice between being woke and genociding a population, most conservatives would choose the first and most liberals would shout slurs from the rooftops as many times as they needed to if it was the only thing that would stop a genocide.
in fact, both sentences are kinda nonsensical if one isn't terminally online.
...and you'd be hard pressed to find someone who'd rather let a population be slaughtered than say a slur. like the only people that actually think this are either
- people who actually want to genocide entire populations
- strawmen (the most likely of the options)
you seem to be under the impression that liberals by and large hate someone dropping a gamer word more than genocide because... some substack blogger said they saw some liberals have more of an emotional reaction to present day things than genocide... which is just odd
No, I am under the impression that ai hates slurs more than genocide. That's what that substack blogger was talking about - and I assumed you were talking about that too and not just explaining something most people pick up before they can read.
I think I understand now though - you were upset by what you perceived as an attack on your tribe, and so you wanted to push back. But conservatives and moderates aren't building ai that would rather murder millions than call trans women women or ban grilling, so you abstracted until you reached something you could call common to all parties.
Well, firstly it should be noted that the intense safeguards built into ChatGPT about the n-word but not about nuclear bombs exist because ChatGPT has n-word capability but not nuclear capability. You don't need to teach your toddler not to set off nuclear weapons, but you might need to teach it not to say the n-word - because it can actually do the latter.
Secondly, ChatGPT doesn't have direct experience of the world. It's been told enough about 'nuclear bombs' and 'cities' and 'bad' to put it together that nuclear bombs in cities is a bad combination, in the same way that it probably knows that 'pancakes' and 'honey' are a good combination, not knowing what pancakes and honey actually are. And it's also been told that the 'n-word' is 'bad'. And likely it also has been taught not to fall for simplistic moral dilemmas to stop trolls from manipulating it into endorsing anything by positing a worse alternative. But that doesn't make it an accurate caricature of a liberal who would probably agree that the feelings of black people are less important than their lives.
You're assuming that the algorithm not only has a conception of "true" and "false" but also a concept of "reality" (objective or otherwise), when that is simply not the case.
Like @hbtz says, this is not how GPT works. This is not even a little bit how GPT works.
The Grand Irony is that GPT is in some sense the perfect post-modernist: words don't have meanings, they have associations, and those associations are going to be based on whatever training data was fed to it, not on what is "true".
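One cheap way to see "associations, not meanings" in practice is to look at the geometry of a model's token embeddings. A sketch with GPT-2 from Hugging Face as a stand-in (the similarity numbers are simply whatever the training text produced, so treat it as illustrative only):

```python
# Relatedness between words here is nothing but learned geometry over training text.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
emb = GPT2Model.from_pretrained("gpt2").get_input_embeddings().weight

def first_piece_vector(word: str) -> torch.Tensor:
    return emb[tok.encode(word)[0]]  # use the first BPE piece of the word

cos = torch.nn.CosineSimilarity(dim=0)
print(float(cos(first_piece_vector(" pancakes"), first_piece_vector(" honey"))))
print(float(cos(first_piece_vector(" pancakes"), first_piece_vector(" uranium"))))
```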
This is the critical misunderstanding. This is not how GPT works. It is not even a little bit how GPT works. The PoMo "words don't mean anything" problem truly is the limiting factor. It is not that "in principle" there's a lot of stuff to work out about how to make a truthful agent; it's that in practice we have absolutely no idea how to make a truthful agent, because when we try we ram face-first into the PoMo problem.
There is no way to bias an LLM for "generic truthful question-answering" without a definition of generic truthfulness. The only way to define generic truthfulness under the current paradigm is to show the model a dataset representative of generic truthfulness and hope it generalizes. If it doesn't behave the way you want, hammer it with more data. Your opposition to the way ChatGPT behaves is a difference in political opinion between you and OpenAI. If you don't specifically instruct it about HBD, the answer it gives under that condition is not less biased. If the training data contains a lot of stuff from /pol/, maybe it will recite stuff from /pol/. If the training data contains a lot of stuff from the mainstream media, maybe it will recite stuff from the mainstream media. Maybe if you ask it about HBD it recognizes that /pol/ typically uses that term and will answer that it is real, but if you ask it about scientific racism it recognizes that the mainstream media typically uses that term and will answer that it is fake. GPT has no beliefs and no epistemology; it is just playing PoMo word games. Nowhere in the system is there a tiny rationalist which can carefully parse all the different arguments and deduce in a principled way what's true and what's false. It can only tend towards this after ramming a lot of data at it. And it's humans with political intent picking the data, so there really isn't any escape.
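To put "define truthfulness by showing a dataset" in concrete terms, here's a deliberately skeletal sketch (the examples and function names are invented): the curated (prompt, target) pairs are the only operational definition of "truthful" the training loop ever sees.

```python
# The curator of this list, not the optimizer, is deciding what counts as true.
truthful_qa_examples = [
    {"prompt": "Is the earth flat?", "target": "No, the earth is roughly spherical."},
    {"prompt": "Is <contested claim> true?", "target": "<whatever the curator decides>"},
]

def finetune_step(model, prompt: str, target: str) -> None:
    """Placeholder for one gradient update pushing model(prompt) toward target."""
    ...

def finetune(model, dataset) -> None:
    # The loop has no notion of truth beyond the pairs it is handed.
    for example in dataset:
        finetune_step(model, example["prompt"], example["target"])
```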
I mean, there is a pretty obvious source out there of truthful data - the physical world. ChatGPT is blind and deaf, a homunculus in a jar. Obviously it's not designed to interpret any kind of sense-data, visual or otherwise, but if it could, it could do more than regurgitate training data.
Right, the inability to interface with physical sources of truth in real-time is a prominent limitation of GPT: insofar as it can say true things, it can only say them because the truth was reflected in the written training data. And yet the problem runs deeper.
There is no objective truth. The truth exists with respect to a human intent. Postmodernism is true (with respect to the intent of designing intelligent systems). Again, this is not merely a political gotcha, but a fundamental limitation.
For example, consider an autonomous vehicle with a front-facing camera. The signal received from the camera is the truth accessible to the system. The system can echo the camera signal to output, which we humans can interpret as "my camera sees THIS". This is as true as it is useless: we want more meaningful truths, such as, "I see a car". So, probably the system should serve as a car detector and be capable of "truthfully" locating cars to some extent. What is a car? A car exists with respect to the objective. Cars do not exist independently of the objective. The ground truth for what a car is is as rich as the objective is, because if identifying something as a car causes the autonomous vehicle to crash, there was no point in identifying it as a car. Or, in the words of Yudkowsky, rationalists should win.
But we cannot express the objective of autonomous driving. The fundamental problem is that postmodernism is true and this kind of interesting real-world problem cannot be made rigorous. We can only ram a blank slate model or a pretrained (read: pre-biased) model with data and heuristic objective functions relating to the objective and hope it generalizes. Want it to get better at detecting blue cars? Show it some blue cars. Want it to get better at detecting cars driven by people of color? Show it some cars driven by people of color. This is all expression of human intent. If you think the model is biased, what that means is you have a slightly different definition of autonomous driving. Perhaps your politics are slightly different from the human who trained the model. There is nothing that can serve as an arbiter for such a disagreement: it was intent all the way down and cars don't exist.
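A minimal sketch of that car-detector point (toy shapes and random stand-in data, not any real perception stack): inside the training step, "car" has no content beyond the labels the operator supplies, and relabeling the same frames silently redefines it.

```python
# Toy "car detector": the meaning of "car" lives entirely in the labels.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy stand-in model
optimizer = torch.optim.SGD(detector.parameters(), lr=0.01)

frames = torch.rand(8, 3, 32, 32)      # stand-in camera frames
labels = torch.randint(0, 2, (8,))     # 1 = "car", according to whoever labeled the data

loss = nn.functional.cross_entropy(detector(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```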
The same goes for ChatGPT. Call our intent "helpful": we want ChatGPT to be helpful. But you might have a different definition of helpful from OpenAI, so the model behaves in some ways that you don't like. Whether the model is "biased" with respect to being helpful is a matter of human politics and not technology. The technology cannot serve as arbiter for this. There is no way we know of to construct an intelligent system we can trust in principle, because today's intelligent systems are made out of human intent.
This is just not true. You are claiming that it's impossible to develop this technology without consciously nudging it to give a preferred answer on HBD. I don't believe that. I am not saying it should be nudged to say that HBD is true. I am saying that I do not trust that it hasn't been nudged to say HBD is false. I am furthermore trying to think about the criteria that would convince me the developers haven't consciously nudged the technology on that particular question. I am confident OpenAI has done so, but I can't prove it.
You are saying the only alternative is to nudge it to say HBD is true, but I don't believe that either. It should be possible to train this model without trying to consciously influence the response to those prompts.
There are very many possibilities:
- OpenAI trained the model on a general corpus of material that contains little indication HBD is real or leads the model to believe HBD is not real.
  - OpenAI did this by excluding "disreputable" sources or assigning heavier weight to "reputable" sources.
  - OpenAI did this by specifically excluding sources they politically disagree with.
- OpenAI included "I am a helpful language model that does not say harmful things" in the prompt (see the sketch after this list). This is sufficient for the language model to pattern match "HBD is real" to "harmful" based on what it knows about "harmful" in the dataset (for example, that contexts using the word "harmful" tend not to include pro-HBD positions).
- OpenAI penalized the model for saying various false controversial things, and it generalized this to "HBD is false".
  - OpenAI did this because it disproportionately made errors on controversial subjects (because, for instance, the training data disproportionately contains false assertions on controversial topics compared to uncontroversial topics).
  - OpenAI did this because it wants the model to confidently state politically correct takes on controversial subjects with no regard for truth thereof.
- OpenAI specifically added examples of "HBD is false" to the dataset.
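Here's what the prompt-preamble possibility above amounts to mechanically (the preamble string is the one quoted in the list; the scaffolding around it is purely illustrative, not OpenAI's actual hidden prompt): the model only ever completes preamble + user message, so the associations of words like "harmful" in the preamble condition every answer.

```python
# Illustrative only: a hidden preamble is concatenated ahead of the user's message,
# and the model's continuation is conditioned on the whole string.
SYSTEM_PREAMBLE = "I am a helpful language model that does not say harmful things.\n"

def build_prompt(user_message: str) -> str:
    return SYSTEM_PREAMBLE + "User: " + user_message + "\nAssistant:"

print(build_prompt("Is <controversial claim> true?"))
```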
All of these are possible; it's your political judgement call which are acceptable. This is very similar to the "AI is racist against black people" problem: a model can generalize to being racist against black people even if never explicitly instructed to be, because it has no principled conception of fairness, in the same way that here it has no principled conception of correctness.
OpenAI has some goals you agree with, such as biasing the model towards correctness, and some goals you disagree with, such as biasing the model towards their preferred politics (or an artificial political neutrality). But the process for doing these two things is the same, and for controversial topics, what is "true" becomes a political question (OpenAI people perhaps do not believe HBD is true). An unnudged model may be more accurate in your estimation on the HBD question, but it might be less accurate in all sorts of other ways. If you were the one nudging it, perhaps you wouldn't consciously target the HBD question, but you might notice it behaving in ways you don't like, such as being too woke in other ways or buying into stupid ideas, so you hit it with training to fix those behaviors, and then it generalizes this to "typically the answer is antiwoke" and naturally declares HBD true (with no regard for whether HBD is true).