Culture War Roundup for the week of February 17, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Weirdly, I think the biggest non-Google winners are the OEMs that make widgets that need some UI. I'm thinking of the infotainment systems in cars, thermostats, all of the IoT randomness that's around. It's a lot easier to start with a base that at least has a head start on drawing to a screen and accepting inputs than it is to start from base Linux. Even if you start with a Raspberry Pi, you still have a bunch more work to do if you want to not only output bitmaps, but also accept user input.

I'm in basically the same boat as you are. But I'm mostly commenting on the "normal" people using Android.

My stance is that the base OS itself, Linux, is open. AOSP is a pretty thin veneer on top. It's helpful for some OEMs, phone and otherwise.

I totally agree on everything else. Having toyed with the idea of procuring chips, then looking at the lead times of some unique ones... I didn't really go too far on that side other than buying way too many USB jacks that I doubt I'll ever use.

But the last bit is kind of interesting. AI is going to be replacing things that, up until now, humans were uniquely suited for.

I'll use a current side project as an example. It's heavily reliant on AI: without getting too far into the weeds, it's a system that allows "smart" replaying of sets of related API calls to enable better testing. I could get 90% of the way to success pretty easily without any AI help. Sure, things like variables would get named "var1," "var2," etc., but it worked. Then there are weirder cases where you need to take some context cues to figure out how to select the right value to use in a replay. For example, a response might look like this:

{
  "documents": [
    {
      "type": "ticket",
      "number": 123456
    },
    ...
  ]
}

On a subsequent call, we need to find the ticket number, but it might not be in the same place every time. (The API I'm working with is... meh.) You may have multiple documents and they may not be in the same order (think JSONPath selectors to get the document's number where the type is "ticket").
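To make that concrete, here's a minimal sketch of the selection step in Python, assuming the illustrative "documents" shape above (the function name and sample data are mine, not from the real API):

# The equivalent JSONPath would be roughly:
#   $.documents[?(@.type == 'ticket')].number

def find_ticket_number(response: dict) -> int | None:
    # Scan the documents without relying on their order and return the
    # number of the first one whose type is "ticket".
    for doc in response.get("documents", []):
        if doc.get("type") == "ticket":
            return doc.get("number")
    return None

# Works regardless of where the ticket sits in the list:
resp = {"documents": [{"type": "invoice", "number": 999},
                      {"type": "ticket", "number": 123456}]}
assert find_ticket_number(resp) == 123456

The hard part, of course, is producing that selector automatically when the shape varies; that's where the AI (or a human) comes in.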

Now, in lieu of AI, I can present that to a human to make the decision. The end result is the same: I have the inputs that went to either the AI or the human, and I have the answer provided. In neither case do I have an audit trail of how the answer was arrived at. If anything, I'd argue you might get more of an audit trail from the AI, because the "reasoning" it produces along the way can be logged. I know I can't audit how I arrived at the answer myself, other than "it felt right."
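If you wanted that audit trail, one approach is to persist the inputs, the answer, and whatever reasoning text accompanies each decision. A hypothetical sketch (the record fields and file name are invented for illustration):

import datetime
import json

def log_decision(inputs: dict, answer: str, reasoning: str | None,
                 source: str, path: str = "decisions.jsonl") -> None:
    # Append one decision record per line. "reasoning" holds whatever
    # free-text justification the model (or human) supplied; it's None
    # for the human "it felt right" case.
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": source,  # "ai" or "human"
        "inputs": inputs,
        "answer": answer,
        "reasoning": reasoning,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

Whether the model's stated reasoning faithfully reflects how it actually got the answer is a separate question, but it's at least more than I can offer for my own gut calls.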

My prediction: even in OSS, there will be an increasing reliance on AI, both in the generation of the code itself and in the actual invocation of services from that code.