This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I'll second this one. Learning about epistemology in college was extremely helpful for me. It seems pretty core to what we think of as critical thinking. Who is telling me this information? Why are they telling it to me? Why do I believe X? What makes X true? These are all important questions to ask, and to be able to answer, if you want to understand the world around you. I especially appreciate the distinction between why you believe a thing and what would need to be the case for the thing to be true.
I am not sure about teaching Bayes' Theorem or specific fallacies, but I think teaching students the ability to reflect on their beliefs and how they formed them would be very valuable. School itself is rife with opportunities for this, since most of the time you learn things by trusting the testimony of a teacher or some other expert source rather than by direct firsthand experience of the facts that establish something as true.
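For what it's worth, the Bayes' Theorem piece is small enough to show in a few lines. Here's a toy sketch of the kind of belief-updating exercise I mean, with all the probabilities invented purely for illustration:

```python
# Toy Bayes' Theorem exercise: updating a belief after hearing testimony.
# All probabilities here are made up for the sake of the example.

def posterior(prior, p_testimony_if_true, p_testimony_if_false):
    """P(claim is true | source asserted it), via Bayes' Theorem."""
    numerator = p_testimony_if_true * prior
    denominator = numerator + p_testimony_if_false * (1 - prior)
    return numerator / denominator

# Start 50/50 on some claim. A teacher who asserts true things 90% of the
# time (and false things 10% of the time) tells you the claim is true.
belief = posterior(prior=0.5, p_testimony_if_true=0.9, p_testimony_if_false=0.1)
print(f"After one testimony:   {belief:.2f}")  # 0.90

# A second, independent source saying the same thing pushes it higher.
belief = posterior(prior=belief, p_testimony_if_true=0.9, p_testimony_if_false=0.1)
print(f"After two testimonies: {belief:.2f}")  # ~0.99
```

The point of the exercise isn't the arithmetic; it's making explicit that "how much should this testimony move me?" depends on how reliable the source is, which is exactly the reflection skill described above.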
Yes, there's an irony in that if you do really well at traditional academics, you're basically training yourself to accept the word of an authority figure as truth.
"Teacher lectured on this topic, the textbooks confirmed their teachings, and then I was tested on my ability to accurately recite the teachings! What a useful way to discover the truth!"
It'd be interesting, for example, if teachers explicitly told students that they'd be slipping occasional falsehoods into the lessons and teaching them as true, and that there was extra credit available to anyone who could not only identify the falsehoods, but explain exactly how they figured out each was false.
It's a good exercise to test one's epistemics AND to teach that authority figures occasionally (lol) outright lie to you!
I'd say that's a good idea, and what should have been done, but these days what will happen is that someone will copy-paste the lecture transcript into an LLM and have it find the errors and explain them, lol. I suppose that does still deserve points for diligence.
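Something like this, presumably. A rough sketch, assuming the official openai Python package; the model name, prompt, and transcript filename are illustrative placeholders, not a recommendation:

```python
# Hypothetical sketch: ask an LLM to flag planted falsehoods in a lecture
# transcript. Assumes the official `openai` package (v1+); the model name
# and filename below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_falsehoods(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "The following lecture transcript contains deliberately "
                    "planted factual errors. List each suspected falsehood "
                    "and explain how you know it is false."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

with open("lecture_transcript.txt") as f:  # placeholder filename
    print(find_falsehoods(f.read()))
```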
I recall a professor back in med school who did do that, and I'm particularly proud of catching several of the bugs myself, even if I suspect a handful were simply him misspeaking from memory instead of being intentional heh.
Actually, medicine might be a bad field for such tricks: plenty of things are outright counter-intuitive, or edge cases where we need to memorize where the heuristics fail. If you try this before the students have a good fundamental underpinning of theory and some practice, they might well never figure out where they went astray unless they crack open a textbook and pore over every claim.
Or the more traditional pre-Internet failure mode: the student knows better than the teacher, finds "intentional" errors that are unintentional and just the teacher not knowing better, and gets punished for it.
Also probably true, but I'll add that if we have LLMs that are reliably able to spot and correct actual falsehoods, we're probably in a world that is a little epistemically safer for the average person than our current one.
But this tips into my other concern: that people will become utterly reliant on AI tools for information, and thus almost ALL of their knowledge will ultimately rest on an appeal to authority. "The AI says it's true, no need to question it."
And finally, I do think relying on authority is not the worst thing people can do! If you've found an actually reliable source of information, then you can choose to simply take most of what they say as accurate! I have a handful of people I follow on Twitter who I believe are making a good-faith effort to be correct about complex issues, so when they summarize things or make a prediction, I lend them a lot of credence. Because I don't have the energy to assess every single claim I encounter for accuracy myself.
But there's gotta be some bedrock somewhere where the person(s) making certain claims actually care about getting it right.
Entirely dependent on your standards for "reliability," in my eyes. I have found SOTA models adequate for that purpose in almost everything I've cared to try, and I have checked whether the corrections they provided had a basis in objective fact. It's not perfect, but I'd say we're past "good enough" for catching anything a teacher says that they'd already expect diligent students to notice.
I broadly agree with the rest of your comment. I'm happy to defer to Scott for most things, even if I do disagree with certain things he's said, and there are certainly plenty of crime-thinkers on Twitter I follow because I trust them to give me information that's both true and suppressed because it's outside the general Overton Window.
If, say, we had an aligned AGI that proved itself to be smarter and more capable in terms of answering questions I had of it, including taking into account my values where relevant, I'd have few qualms about eventually handing over my decision making to it. But if I had a route to improving my own cognition to the point where I didn't need it, being able to match it myself, I'd prefer that.
I think we should probably continue exercising caution with current LLMs due to their propensity to hallucinate, especially if given a prompt that encourages such.
And since they're able to do internet searches now, we're hinging some of their reliability/truthfulness on the accuracy of the internet at large which... well, you know why we're here on THIS site rather than on Reddit.
I suspect that I won't be ready to accept LLMs as 'oracles'/truth-sayers until they've got the ability to tap into the real world directly and explain the reasoning behind their answers.
If I ask ChatGPT, "Is the sun currently shining right now?",
I don't want it to just say, "Based on your location data (which I scraped from your browser) I figured out what your time zone is, and based on weather reports for your zip code it appears that it is a bright and sunny day!"
I'd want to hear something like "I've checked several camera feeds from various locations around the globe and it appears the sun is shining brightly in the following areas []."
This is definitely the future I want, but I ain't sure I'm gonna get it.
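Though for what it's worth, a crude version of "tap into the real world directly and show your work" is already computable without camera feeds: you can calculate whether the sun is even above the horizon at a given spot. A rough sketch using an approximate solar-elevation formula (it ignores the equation of time and refraction, and, crucially, says nothing about cloud cover, which is exactly why the camera feeds would still matter):

```python
import math
from datetime import datetime, timezone

def solar_elevation(lat_deg: float, lon_deg: float, when: datetime) -> float:
    """Approximate solar elevation angle in degrees at a UTC datetime."""
    day = when.timetuple().tm_yday       # day of year
    hour = when.hour + when.minute / 60  # fractional UTC hour

    # Approximate solar declination for that day of the year.
    decl = -23.44 * math.cos(math.radians(360 / 365 * (day + 10)))

    # Hour angle: degrees away from local solar noon (equation of time omitted).
    solar_time = hour + lon_deg / 15
    hour_angle = 15 * (solar_time - 12)

    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    elevation = math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    )
    return math.degrees(elevation)

# Greenwich, right now:
now = datetime.now(timezone.utc)
elev = solar_elevation(51.48, 0.0, now)
print(f"Solar elevation: {elev:.1f} deg -> sun is {'up' if elev > 0 else 'down'}")
```

That gives you a verifiable chain of reasoning ("here's the math, check it yourself") rather than an appeal to scraped authority, which is the distinction I care about.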