Culture War Roundup for the week of March 31, 2025

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I asked Gemini 2.5 Pro Thinking to solve it. It claimed to have a solution. I asked for the most concise summary it could provide:

Okay, here's a concise summary of the proof, avoiding technical jargon:

The Problem: Imagine you have a way of writing numbers that isn't based on powers of 10 (like our usual 1s, 10s, 100s), but on powers of $2n$, where $n$ is some odd number (like powers of 6, if $n=3$, or powers of 10, if $n=5$). We look at the number $n^k$ ($n$ multiplied by itself $k$ times). We want to show that if you pick $n$ to be a really large odd number, then when you write $n^k$ in this "base $2n$" system, all the "digits" you use will be bigger than some target value $d$.

How the Proof Works:

  1. Figuring out the Digits: The proof first finds a way to calculate the digits used to write $n^k$ in the base $2n$ system. Let's call the digits $c_0, c_1, c_2, \dots$.
  2. The Last Digit ($c_0$): It turns out the very last digit ($c_0$, the "ones" place in this system) is simply $n$ itself. So, to make this digit bigger than $d$, we just need $n$ to be bigger than $d$.
  3. The Other Digits: The proof finds a formula for the other digits ($c_1, c_2,$ etc.). It shows that each digit $c_i$ is calculated by taking $n$, multiplying it by some specific odd number (which depends on $k$ and which digit it is), dividing by a power of 2 (like 2, 4, 8, ...), and then rounding down.
  4. Making Digits Large: Since the odd number multiplier is at least 1, each digit $c_i$ is at least roughly $n$ divided by some power of 2 (like $n/1, n/2, n/4, \dots, n/2^{k-1}$).
  5. The Threshold: If we want all digits to be bigger than $d$, we need even the smallest of these estimates (which involves dividing $n$ by the largest power of 2, $2^{k-1}$) to be bigger than $d$. The proof calculates that if $n$ is larger than a specific value $N$ (which depends on $k$ and $d$, specifically $N = 2^{k-1}(d+1)$), then every digit is guaranteed to be greater than $d$.
  6. Conclusion: Therefore, such a threshold $N$ exists. Pick any odd number $n$ larger than this $N$, and all the digits of $n^k$ in base $2n$ will be larger than $d$.
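For what it's worth, the statement (though not the proof) can be spot-checked numerically against the model's claimed threshold. This is a quick sketch for one choice of $k$ and $d$; `base_digits` is a throwaway helper, and passing here obviously doesn't validate the argument:

```python
def base_digits(x, b):
    """Digits of x in base b, least significant first."""
    digits = []
    while x > 0:
        digits.append(x % b)
        x //= b
    return digits

k, d = 4, 5
N = 2 ** (k - 1) * (d + 1)  # the summary's claimed threshold N = 2^(k-1) * (d+1)
results = []
for n in range(N + 1, N + 41, 2):  # odd n just above that threshold
    digs = base_digits(n ** k, 2 * n)
    # last digit should be n itself, and every digit should exceed d
    results.append(digs[0] == n and all(c > d for c in digs))
```

For these values every check passes, e.g. $49^4$ in base $98$ has digits $49, 24, 12, 6$ (least significant first), all greater than $5$.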

Was it Induction?

The main argument of the proof is not primarily by induction. It's a direct proof that calculates the digits and then finds a condition on $n$ (making it large enough) to satisfy the requirement.

However, induction was used for one small intermediate step: proving that $n^k$ always leaves a remainder of $n$ when divided by $2n$ (which showed that the last digit $c_0$ is indeed $n$). But the overall structure connecting the size of $n$ to the size of all digits didn't rely on induction.
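That intermediate lemma, at least, is easy to sanity-check by brute force (the one-line congruence argument in the comment is mine, not the model's):

```python
# For odd n, n^(k-1) is odd, say 2m + 1, so n^k = 2mn + n ≡ n (mod 2n).
# Brute-force check of that congruence over a range of odd n and small k:
checks = [n ** k % (2 * n) == n for n in range(3, 200, 2) for k in range(1, 8)]
```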

Here's the raw answer (minus reasoning trace):

https://rentry.org/5s6q6nxe

Hmm, it's maybe coming close to something that works, but seems to fuck up at the important junctures. After a couple of paragraphs where it doesn't find anything useful, in paragraph 7 it concludes that we can break down n^(k - i - 1) into q*2^(i+1) + r_i, where r_i = n^(k - i - 1) mod 2^(i+1). But then later it declares that r_i = n^(k - i - 1) and the proof follows from there. Unfortunately I don't think this would get any points, although maybe it could figure something out if you keep telling it where it fucks up.
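To make the alleged slip concrete (arbitrary small values, purely illustrative): under the model's own definition, r_i and n^(k - i - 1) are generally very different numbers, so substituting one for the other is not a cosmetic move:

```python
n, k, i = 7, 4, 1
power = n ** (k - i - 1)      # n^(k-i-1) = 7^2 = 49
r_i = power % 2 ** (i + 1)    # r_i = 49 mod 4 = 1
# Later declaring r_i = n^(k-i-1), as the proof does, would equate 1 with 49.
```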

/images/17435421010140572.webp

I copied your comment, and it insisted it was correct. I then shared the image, and it seems to think that the issue is imprecise terminology on its part rather than an actual error.

Here's the initial response:

https://rentry.org/yzvh9n47

After putting the image in:

https://rentry.org/c6nrs385

The important bit:

The proof never claims $r_i = n^{k-i-1}$. It uses $r_i = n^{k-i-1} \pmod{2^{i+1}}$ and the derived property $r_i \ge 1$.

Conclusion: The confusion likely arises from either the slightly ambiguous notation in the highlighted sentence (which should explicitly state "fractional part of ... is ...") or a misreading of the later step where the lower bound $r_i \ge 1$ is applied. The mathematical logic itself appears sound.

This happens because while the model has been programmed by some clever sod to apologize when told that it is wrong, it doesn't actually have a concept of "right" or "wrong". Just tokens with different correlation scores.

Unless you explicitly tell/program it to exclude the specific mistake(s) it made from future iterations (a feature typically unavailable in current LLMs without a premium account), it will not only continue to make those mistakes but "double down" on them, because whatever most correlates with the training data must, by definition, be correct.

This happens because while the model has been programmed by some clever sod to apologize when told that it is wrong, it doesn't actually have a concept of "right" or "wrong". Just tokens with different correlation scores

This is wrong! It would have been a reasonable claim to make a few years back, but we know for a fact this isn't true now:

https://www.anthropic.com/research/tracing-thoughts-language-model

It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.

There was other relevant work which shows that models, if asked if they're hallucinating, can usually find such errors. They very much have an idea of true versus false; to deny that would be to deny the same for humans, since we ourselves confabulate or can be plain old wrong.

Unless you explicitly tell/program it to exclude the specific mistake(s) it made from future iterations (a feature typically unavailable in current LLMs without a premium account), it will not only continue to make those mistakes but "double down" on them, because whatever most correlates with the training data must, by definition, be correct.

Gemini 2.5 Pro Thinking in particular is far more amenable to reason. It doesn't normally double down and will accept correction. At least ChatGPT has the option to add memories about the user, so you can save preferences or tell it to act differently.

I'm slightly disappointed to catch it hallucinating, which is why I went to this much trouble instead of just accepting that as a fact the moment someone contested it. It's still well ahead of the rest.

This is wrong! It would have been a reasonable claim to make a few years back, but we know for a fact this isn't true now

Sorry, I meant to reply to @yunyun333's comment about "doubling down", but I can assure you that we do not "know that for a fact", and I feel the need to caution you against believing everything you read in the marketing materials.

The "hallucination problem" can not realistically be "solved" within the context of regression based generative models as the "hallucinations" are an emergant property of the mechanisms upon which those models function.

A model that doesn't hallucinate doesn't turn your vacation pictures into a Hayao Miyazaki frame either, and the latter is where the money and publicity are.

Developers can adjust the degree of hallucination up or down and tack additional models, interfaces, and layers on top to smooth over the worst offenses, as Altman and co continue to do, but the fundamental nature of this problem is why people seriously invested in machine learning/autonomy often dismiss models like GPT (of which Claude is a branch/derivative) as an evolutionary dead-end.

feel the need to caution you against believing everything you read in the marketing materials

Anthropic is a reputable company on the cutting edge of AI, so I'd ask you for concrete disagreements rather than generalized advice to be cautious.

Here are other relevant studies on the topic:

https://openreview.net/forum?id=KRnsX5Em3W

https://openreview.net/forum?id=fMFwDJgoOB

https://aclanthology.org/2023.findings-emnlp.68/, an older paper from 2023.

These apply the same standard for distinguishing truth from fiction that we use to justify attributing that ability to humans.

Further, as models get larger, hallucination rates have consistently dropped. I recently discussed a study on LLM use for medical histories which found 0% and ~0.1% hallucination rates. As I've said before, humans are not immune to hallucinations or confabulations, I'd know since I'm a psych trainee. That's true even for normal people. The only barrier is getting hallucination rates to a point where they're generally trustworthy for important decisions, and in some fields, they're there. Where they're not, even humans usually have oversight or scrutiny.

A model that doesn't hallucinate doesn't turn your vacation pictures into a Hayao Miyazaki frame either and the latter is where the money and publicity are.

There is a difference between hallucination and imagination. That is just as true for LLMs as it is for humans. Decreasing hallucination rates do not cause a corresponding decrease in creativity, quite the opposite.

The "hallucination problem" can not realistically be "solved" within the context of regression based generative models as the "hallucinations" are an emergant property of the mechanisms upon which those models function.

What does "solving" the hallucination problem look like, though? Humans also hallucinate all the time - in fact, arguably, this is one of the core reasons for the existence of this website and specifically this CW roundup thread - and it is something we've "solved" through various mechanisms of checking and verifying and holding people accountable, with none of them getting anywhere near perfect. Now, human hallucinations are more well understood than LLM ones, making them easier to predict in some ways, but why couldn't we eventually get a handle on the types of hallucinations that LLMs tend to have, allowing us to create proper control mechanisms for them such that the rate of actual consequential errors becomes lower than those caused by human hallucinations? If we reach that point, then could we say that hallucinations have been "solved?" And if not, then what does it matter if it wasn't "solved?"

"What does solving the hallucination looks like?" is a very good question. A major component of the problem is defining the boundaries of what constitutes "an error" and then what constitutes an acceptable error rate. Only then can you begin to think about whether or not that standard has been met and the problem "solved".

In short, the answer to that question is going to look very different depending on the use case. The requirements of the average white-collar office drone looking to translate a news article are going to be very different from those of a cybersecurity professional at a financial institution, or an industrialist looking to automate portions of their process.

When I'm giving my intake speech to interns and new hires, I talk about "the 9 nines". That is, in order to have a minimally viable product we must meet or exceed the standards of "baseline human performance" with 99.9999999% reliability. Imagine a test with a billion questions where one additional incorrect answer means a failing grade.

In this context, "humans also hallucinate" is just not an excuse. Think about how many "visual operations" a person typically performs in the course of going about their day. Ask yourself how many of the cars on your commute this afternoon, or words in this comment thread, you have hallucinated. A dozen? None? If you think you are sure, are you "9 nines" sure?

A lot of the current refinement and iteration work on generative machine learning models revolves around adding layers of checks to catch the most egregious errors (not unlike with humans, as you observed) and giving users the ability to "steer" them down one path or another. While this represents an improvement over the previous generation, such solutions are difficult/expensive to scale and actively deleterious to autonomy. The thinking being that "a robot" that requires a full-time babysitter might as well be an employee. This is why you can't buy a self-driving car yet.

I kinda respect it doubling down, but it's scrambling to cover its ass. Also, I noticed it forgot the "mod 2n" part of c_i, which also throws a wrench into things.

Ah... I get it now. Thank you! I'm disappointed to see hallucination and confabulation here, but if you're inclined, do keep trying out Gemini 2.5 Pro Thinking in particular. It's a good model.