This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I think he would not disagree. I ran Hlynka's text through Bing Chat and this was its summary regarding the humanities professor part:
I asked about some counterarguments and here is what it came up with:
I then asked it to tailor an argument in favor of humanities professors. It searched for "value of humanities professors" on google and incorporated it into the context:
I then asked it if all those things cannot be also done by GPT-4. It then searched for "GPT-4 limitations and capabilities" and it came up with the following:
So in a way we came full circle; the last part is, I think, a version of the original argument Hlynka made.
Not really, no, and to be blunt, my initial reaction reading your "exchange" largely mirrors @IGI-111's below: it really does read like a student who hasn't read the material trying to bluff their way past the teacher.
Volokh's essay and the subsequent discussion on his blog go into this far more deeply than I could, but what GPT and other LLMs actually seem to be kind of good for is condensing large bodies of text into a "cliff notes" version. For example, GPT was able to correctly identify my supposition that "GPT-4 could replace humanities professors because they are both producing meaningless or misleading content" despite my not having explicitly made that claim. What GPT/LLMs seem to be particularly bad at is answering questions.
Ah man it really is like talking to an academic bullshitter. Continuum fallacies and all.
It even brings in random definitions of things that have no connection to the underlying argument just to make the aggrieved party sound more important.
All the tactics, none of the substance.
The hopelessly ironic part is that it seems to be arguing that humanities professors can distinguish between true and false and avoid social biases, having been trained on their writings.
One has seldom produced such a clear example of the self-refuting nature of the postmodern condition.
It is arguing in favor of humanities professors because I told it to argue that position. It found that GPT may have trouble discerning true and false statements, and it argued that humanities professors have that capacity. It implicitly asserted that argument, whereas Hlynka asserts without proof that humanities professors are pomo text generators. But unlike Hlynka, GPT at least provided links for its statements, used some jargon like autoregressive architecture, and in general restated Hlynka's original argument about the deficiencies of GPT better. I think it also correctly pointed out that this whole thing vs. symbol manipulator distinction is a lot more complicated.
While I instinctively believe things are more complicated than Hlynka's distinction, I became less and less convinced of this the more I waded through Bing's verbiage on the matter.
Not sure what the point of posting this was.
We're all quite capable of reading the post and coming to our own conclusions about it. I don't feel the need to outsource my thinking to anyone else, human or machine. I learn from other people, certainly, but I don't let them do my thinking for me. The purpose of the act of thinking is to determine what I think about something. Not to determine what someone else thinks.
-- Jean Baudrillard, The Transparency of Evil: Essays in Extreme Phenomena
The point of my exercise was that Bing Chat was able to understand Hlynka's text and produce a defense of humanities professors, actually improving on the original arguments Hlynka made. It produced the same true/false argument, but it also described LLM shortcomings in a more technical manner, speaking about hallucinations or adversarial prompts.
So in that sense it was Hlynka's text that seemed more pomo compared to what GPT produced, which I think is quite an interesting thing to observe. In the end I think that, at minimum, a GPT + human pair will outperform a solo player in the near future, at least in the sense that the human should know in which domains to completely trust GPT despite his own intuition.
The problem is that its defense of humanities professors was exactly the sort of meaningless pastiche that you would expect if it were a pure symbol manipulator. Now, you could argue that it sounds very much like the real arguments that would come out of the mouths of real humanities professors. But that just means Hlynka wins on both sides.
Except there is nothing in my post about humanities professors being replaced by thing-manipulators. GIGO applies.
It was about the replacement of humanities professors by GPT-4, as opposed to thing-manipulators. But it also caught the tone of your thing vs. symbol manipulators distinction. And in that sense I completely agree about GIGO.