
Culture War Roundup for the week of November 27, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


> For a brief period in time, we have cheap, scalable models that are better in almost every way than the average school teacher available for personalized 1:1 tutoring for every kid alive.

That's not a bad thing, if we can be sure the AI isn't going to hallucinate "this author wrote this poem" (no they didn't) or that the word "potato" contains the letter "b".

But the major problem is the same one bedeviling conventional education today - kids who want to learn will, whether from a teacher, homeschooling, or AI. Kids who don't want to, won't. Johnny playing on his phone instead of listening to the fun! interactive! personal tutor! AI is not going to learn. And how do you make Johnny pay attention?

> That's not a bad thing, if we can be sure the AI isn't going to hallucinate "this author wrote this poem" (no they didn't) or that the word "potato" contains the letter "b".

You studiously refrain from even using any LLM, so take it from me that hallucinations aren't a debilitating problem, and presumably teachers are going to check their students' work in the first place. I don't deny it's an issue, but every new and improved model suffers from it less and less, and I don't think even you can claim that every statement made by a human teacher in class can be accepted as gospel truth.

They're already anal enough about not considering Wikipedia a "source", leaving aside that it comes with its own citations for claims.

> But the major problem is the same one bedeviling conventional education today - kids who want to learn will, whether from a teacher, homeschooling, or AI. Kids who don't want to, won't. Johnny playing on his phone instead of listening to the fun! interactive! personal tutor! AI is not going to learn. And how do you make Johnny pay attention?

This isn't a problem that can be entirely solved today, far from it. But an AI has the benefit of much more leeway in organizing a tailored curriculum, assessing mastery, rephrasing or reformulating knowledge, and so on, than a harassed teacher grappling with a class of 30 students.

Don't let the perfect be the enemy of the better and all that jazz.

Look, friend, we've had decades of exciting! new! educational! reform! that were going to absolutely alter all the fusty old ways of teaching, engage every child, lift every boat, mend the hole in your bucket, and I don't know what else.

iPads for the classroom were the last iteration of "new fancy computer tech will entice kids to learn", and how is that going?

> While the use of iPads enabled the teacher to work closely with a small group of students, that did not mean that the students using the iPads independently were always on-task. One of the most striking findings during our observations was how frequently children appeared to be off-task, not engaged with their device, distracting peers, or finding reasons to be out of their seat. This was especially prevalent when we observed students using apps that were a regular and consistent part of their day, including ST Math (https://www.stmath.com), RazzKids (https://www.raz-kids.com), and IXL (https://www.ixl.com) (see Table 3). These were apps that many teachers used on a very frequent (often daily) basis with the students to support differentiated learning or to provide small group instruction. It was during these daily rotation times when we observed the most concentrated instances of behavior issues and potential for problematic use of iPads.

> This finding was further supported when we were able to observe a class during both a rotational period and a time when students were working on a project with a novel app (e.g., iMovie, Trailers, PicCollage). We noticed a contrast in students’ behavior, engagement, and enjoyment between these two types of activities; students were more actively engaged while working on novel projects. However, as the novelty of using these specific apps diminished, behavior problems were more likely to be observed. For example, one student was not only off-task from the work she was supposed to be doing on her iPad (using an app they use regularly), but also was disruptive to her classmates and resistant to the teacher's requests to work on her iPad. The disruptive behaviors that were observed included one student engaging the other students at their table in conversation, moving to other chairs and tables to talk to another student or show them something on her iPad, and even poking or touching other students to get their attention. These types of examples occurred frequently, and teachers were regularly observed managing student behavior during student use of these frequently used apps.

> In our interviews with teachers, some teachers acknowledged that behavior problems, potentially out of boredom, could arise during iPad use. For example, one teacher (113) said, “I think the phonics app they're bored...it's frustrating for them if they're having a hard time.”

Somehow I'm still seeing stories about standards dropping like a stone, kids graduating without being able to read, gross misbehaviour in class, etc.

AI individual tutoring will work for the kids who want to learn. For the kids who can't, and especially the ones who don't want to learn, it'll be no more use than anything else. The kid who five minutes into the class starts throwing chairs around, either because of psychological problems or because he's a budding criminal, how is AI going to deal with that?

> AI individual tutoring will work for the kids who want to learn. For the kids who can't, and especially the ones who don't want to learn, it'll be no more use than anything else. The kid who five minutes into the class starts throwing chairs around, either because of psychological problems or because he's a budding criminal, how is AI going to deal with that?

Taps sign: "Don't let the perfect become the enemy of the good (or at least better)"

Existing, presumably 100% real human (not that they act like it) US teachers are largely powerless in the same situation; correct me if I'm wrong, but they don't even have the leeway to kick the little shit out of class, let alone rap them on the knuckles. This is the same line of thinking that prevents gifted students from entering fast-track programs, because imagine the plight of those left behind!

So if it helps those who can be helped, while not really making things worse for those who can't be fixed short of a good caning, what's there to lose?

There are plenty of intermediate situations between purely human teachers and purely artificial ones: you could simply have the devices each kid already carries come loaded with the AI, so they can ask it questions, during class or after, about things they couldn't quite get. Or have it tell the human teacher where the kid is struggling.

I think GPT-4 is a better tutor than the average human teacher, certainly if given a curriculum to follow and metrics to track, but even if it's worse, it seems to be clearly additive to the capabilities of having the human around while not costing much.
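To be clear about how little machinery that actually needs, here's a minimal sketch in Python of the sort of wrapper I have in mind. The call_llm() helper, the topic list, and the mastery threshold are made-up placeholders rather than any real product's API; the point is only that "follow a curriculum, track mastery, flag weak topics to the teacher" is a small amount of glue around whatever model you have.

```python
# Illustrative sketch only: call_llm() is a hypothetical stand-in for whatever
# chat-model API is actually available; nothing here is tied to a real product.
from dataclasses import dataclass, field

CURRICULUM = ["fractions", "decimals", "percentages"]  # example topic list


@dataclass
class TutorSession:
    student: str
    scores: dict[str, float] = field(default_factory=dict)  # topic -> mastery estimate (0..1)

    def ask(self, topic: str, question: str) -> str:
        # Pin the model to the current topic and a teaching register.
        system = (
            f"You are a patient one-to-one tutor. Current topic: {topic}. "
            "Explain at school level and end with one short check question."
        )
        return call_llm(system=system, user=question)  # hypothetical helper

    def record_result(self, topic: str, correct: bool) -> None:
        # Crude exponential moving average as a stand-in for real mastery tracking.
        prev = self.scores.get(topic, 0.5)
        self.scores[topic] = 0.8 * prev + 0.2 * (1.0 if correct else 0.0)

    def report_for_teacher(self, threshold: float = 0.4) -> list[str]:
        # Topics weak enough that the human teacher should step in.
        return [t for t in CURRICULUM if self.scores.get(t, 0.5) < threshold]


def call_llm(system: str, user: str) -> str:
    raise NotImplementedError("Plug in whichever model/API you actually have.")
```

A teacher's dashboard then only needs report_for_teacher() per student; the model never has to be trusted as the final arbiter of whether the kid has learned anything.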

> they don't even have the leeway to kick the little shit out of class, let alone rap them on the knuckles

See, that's the thing. If your individually tailored AI is - let's be blunt here - run along the lines of streaming or segregation or the other methods whereby children were sorted out into classes based on academic ability, it might work.

That doesn't mean the less academic kids are condemned to the 'stupid pile'; AI tuition for woodworking and metalwork classes (I think materials technology is the fancy new name) and practical subjects would be great, also.

But that does mean Johnny with the anger management issues needs to be in a separate class, and probably still needs human teacher supervision. That's not even the main problem, because if Johnny gets proper support, he may do as well as he can.

What we need - and what nobody wants to do - is separate out the troublemakers. Forget the bleeding-heart stuff about the "school to prison pipeline"; if Johnny or Jamal or Jesús is smashing up stuff, dealing drugs, assaulting other students and teachers, and not paying a second's attention to any attempt to teach him something useful, then boot him out. Schools would love to do this, but for a combination of reasons - the political lack of will, the bureaucratic social-worker belief in not doing anything because of culture or not imposing values, the legal requirement that children get an education, which means that parents will go to court to force some unlucky school to take their little budding Al Capone for daycare because they don't want him hanging round at home - they can't.

I've seen early school leavers programmes, and the result there depends on the same thing: motivation. A kid who drops out of ordinary school but does want to turn their life around will work in that environment. A kid who is bored, surly, and violent, and only interested in weed, sex, and easy money, hence moving towards crime all of their own volition, won't do anything except waste time until they're finally old enough to be sent to adult prison.

AI on its own won't work, if the expectations are "this will solve the problem of dropping out, lack of attainment, illiteracy, crime, and so forth" - and those expectations will be loaded onto it! - because the roots of those are in social conditions as much as intelligence or academic ability. And until we're willing to look that Gorgon in the eye - that Jamal or Johnny may not be salvageable - then all the happy techno-optimism that this time this fix will work for sure - well, you know what I think there.

What solutions do you prefer instead?