This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Hoo boy. Speaking as a programmer who uses LLMs regularly to help with his work, you're very, VERY wrong about that. Maybe you should go tell Google that the 20% of their new code that is written by AI is all garbage. The code modern LLMs generate is typically well-commented, well-reasoned, and well-tested, because LLMs don't take the same lazy shortcuts that humans do. It's not perfect, of course, and not quite as elegant as what an experienced programmer can manage, but that's not the standard we're measuring by. You should see the code that "junior engineers" often get away with...
I use AI a lot at work. There is a huge difference between writing short bits of code that you can test or read over to see how they work, and completing a task of moderate complexity or one where you need to give more than a few rounds of feedback and corrections. I cannot get an AI to do a whole project for me. I can get it to do a small, easy task where I can check its work. This is great when it's something like a very simple algorithm that I can explain in detail but that's in a language I don't know very well. It's also useful for explaining simple ideas that I'm not familiar with and would otherwise have to look up and spend a lot of time finding good sources for. It is unusable for anything much more difficult than that.
The main problem is that it is really bad at developing accurate, complex abstract models of things. It's as if it has memorized a million heuristics, which works great for common or simple problems, but it means it has no understanding of anything abstract and moderately complex that isn't similar to something it has seen many times before.
The other thing it is really bad at is trudging along, trying again and again to get something right that it couldn't do at first. I can assign a task to a low-level employee even if he doesn't know the answer, and he has a good chance of figuring it out after some time. If an AI can't get something right away, it is almost always incapable of recognizing that it's doing something wrong and employing problem-solving skills to work out a solution. It will just get stuck and start blindly trying things that are obviously dead ends. It also needs to be continuously pointed in the right direction, and if the conversation goes on too long, it keeps forgetting things that were already explained to it. If more than a few rounds of this go on, all hope of it figuring out the right solution is lost.
Thanks, it's clear that (unlike the previous poster, who seems stuck in 2023) you have actual experience. I agree with most of this. I think there are people working on giving LLMs some sort of short-term memory for abstract thought, and also on making them more agentic so they can work on a long-form task without going off the rails. But the tools I have access to definitely aren't there yet.
So, yeah, I admit it's a bit of an exaggeration to say that you can swap a junior employee's role out for an LLM. o3 (or Claude 3.5 Sonnet, which I haven't tried, but which does quite well on the objective SWE-bench metric) is almost certainly better at writing small bits of good working code - people just don't understand how horrifically bad most humans are at programming, even CS graduates - but it lacks the introspection a human has to keep it from doing dangerously stupid things sometimes. And neither is going to be able to manage a decently sized project on its own.
I'm a programmer too, and I'm perfectly willing to tell Google that their 20% code is garbage. Honestly, you shouldn't put them on a pedestal in this day and age; we are long past the point where they were nothing but top-tier engineers doing groundbreaking work. They are just another tech company at this point, and they sometimes do stupid things just like every other tech company does.
If you are willing to accept the use of a tool that gives you code which doesn't even work 10% of the time, let alone solve the problem, that's your prerogative. I say that such a tool sucks at writing code, and we can simply agree to disagree on that value judgement.
The vast majority of that "code being written by AI" at Google is painfully trivial stuff. We're not talking about writing a new Paxos implementation or even a new service from scratch. It's more along the lines of autocompleting the rest of the line "for (int i = 0".
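To make that concrete, here is a minimal hypothetical sketch (mine, not anything from Google's codebase) of the kind of completion being described: the developer types the opening of a routine loop and the autocomplete fills in the rest.

```cpp
#include <iostream>
#include <vector>

// Hypothetical illustration only: the developer types "for (int i = 0" and the
// AI autocomplete fills in the remainder of an entirely routine loop.
int main() {
    std::vector<int> values = {3, 1, 4, 1, 5};
    int sum = 0;
    for (int i = 0; i < static_cast<int>(values.size()); ++i) {  // <- the part the model "wrote"
        sum += values[i];
    }
    std::cout << "sum = " << sum << "\n";
    return 0;
}
```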