This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Proof or Bluff? Evaluating LLMs on 2025 USA Math Olympiad
Background: The 'official' American competitive high school math circuit has several levels, progressing from the AMC 10/12 (25 questions, multiple choice, 75 minutes total) to the AIME (15 questions, 3 hours, answers are integers from 000 to 999) to the USAMO (6 proof-based questions over 2 days, 3 questions per 4.5-hour day), with difficulty increasing commensurately with the decrease in the number of questions. While most AIME questions can be ground out using a standard set of high school/introductory college level math knowledge and tricks, the USAMO requires more depth of understanding and specialized techniques. For example, problem 1 (theoretically, the easiest) is as follows:

Let k and d be positive integers. Prove that there exists a positive integer N such that for every odd integer n > N, the digits of the base-2n representation of n^k are all greater than d.
This problem can be solved fairly simply using induction on k.
I've also noticed this when plugging grad-level QM questions into Gemini/ChatGPT. No matter how many times I tell it that it's wrong, it will repeatedly apologize and make the same mistake, usually copied from some online textbook or solution set without being able to adapt the previous solution to the new context.
Relevant update: the authors of the paper, which didn't include Gemini 2.5, just added its results to MathArena.
https://matharena.ai/
@self_made_human may be interested in this, since he was trying to evaluate 2.5 himself.
Tl;dr: the top-line number is a step improvement over all existing models, but it mostly comes from being able to complete the first problem. You can click on the first result cell to see its responses and the grader's scoring rubric. There's some hypothetically higher risk of contamination since it's newer.
https://x.com/mbalunovic/status/1907436704790651166
Gemini 2.5 Pro was released on the same day as the benchmarks, so data contamination seems rather unlikely. You'd expect contamination on all the questions, and not just two.
Thanks for the ping. As I've always said, getting models to do any better than chance is the biggest hurdle; once they're measurably better than that, further climbs up the charts are nigh-inevitable.
Agreed. What's nice is that this benchmark is now in a sweet spot. If models consistently hover around the floor or ceiling, there's no signal for whether your model is improving. Once it gets into the middle area, though, model quality can be measured and compared easily, and progress proceeds quickly. I expect this benchmark to be saturated early 2026 at the latest.
It's not a proof, and in order to be "a bluff" there would've had to have been an intent to deceive.
Last week @2rafa asked "When will the AI penny drop?" and the answer I would have liked to give at the time was "when the footprint of a decent tokenizer gets small enough to run organically" or "when the equipment available to the hobbyist and semi-pro community catches up with the requirements of a decent tokenizer".
Until that happens, specific questions will be doomed to be answered unspecifically.
The broad consensus (which i agree with) within the robotics and machine learning communities is that the existing generative models are ill-suited for any task requiring autonomy or rigor and that this is not a problem that can be fixed by throwing more FLOPs at it.
Why are you talking about the footprint of a tokenizer? Tokenization is cheap compared to actually evaluating the LLM.
Processing tokens is cheap. Generating tokens is expensive.
Evaluating a model can range from relatively cheap to cripplingly expensive depending on the metrics chosen and level of rigor required.
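For rough intuition, the gap can be put in numbers. The figures below are assumed order-of-magnitude estimates on my part, not measurements: a BPE tokenizer does on the order of thousands of operations per token, while a dense transformer forward pass costs roughly twice its parameter count in FLOPs per generated token.

```python
# Order-of-magnitude sketch (assumed figures, not measurements).
# BPE tokenization is string/table work: call it ~1e4 ops per token.
# A dense transformer forward pass costs ~2 * params FLOPs per token.
params = 70e9                      # hypothetical 70B-parameter model
tokenize_ops_per_token = 1e4       # generous estimate of tokenizer work
generate_flops_per_token = 2 * params
ratio = generate_flops_per_token / tokenize_ops_per_token
print(f"{ratio:.1e}")              # generation is ~1e7x the work of tokenizing
```

Under these assumptions, generating a token is millions of times more expensive than tokenizing one, which is why the tokenizer-footprint framing above is unusual.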
I agree with everything you wrote in this reply. But your reply seems to have nothing to do with your message I originally replied to. Why were you mentioning the cost of tokenization?
Because the collection and tokenization of reference material is currently a significant bottleneck. The democratization of it, or the ability to do it organically, would make a number of different approaches substantially more feasible. It also introduces the possibility of a Jobs or Zuckerberg type bootstrapping an AI in their garage.
Gave it a shot with Claude with a proof that I think works. In this case it seems that Claude is quite bad at math. I wonder if other LLMs are better.
https://claude.ai/share/e1408d35-76b4-4157-9a7f-dbe59b13c027
Edit: Oops, I'm an idiot. I looked at the answer key here: https://artofproblemsolving.com/wiki/index.php/2025_USAMO_Problems/Problem_1
And they just go straight for the closed form of any digit in the base representation. Then from the closed form it's ezpz to prove the final answer. My proof probably still works but it's waaay convoluted compared to what is necessary.
I asked Gemini 2.5 Pro Thinking to solve it. It claimed to have a solution. I asked for the most concise summary it could provide:
Okay, here's a concise summary of the proof, avoiding technical jargon:
Here's the raw answer (minus reasoning trace):
https://rentry.org/5s6q6nxe
Hmm, it's maybe coming close to something that works, but seems to fuck up at the important junctures. After a couple of paragraphs where it doesn't find anything useful, in paragraph 7 it concludes that we can break down n^(k - i - 1) into q*2^(i+1) + r_i, where r_i = n^(k - i - 1) mod 2^(i+1). But then later it declares that r_i = n^(k - i - 1) and the proof follows from there. Unfortunately I don't think this would get any points, although maybe it could figure something out if you keep telling it where it fucks up.
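The decomposition itself is easy to sanity-check numerically; the values of n, k, i below are arbitrary small choices for illustration, not taken from the proof:

```python
# Check the decomposition n^(k-i-1) = q*2^(i+1) + r_i, with r_i the remainder.
# n, k, i are arbitrary illustrative values.
n, k, i = 5, 4, 1
power = n ** (k - i - 1)              # 5^2 = 25
q, r_i = divmod(power, 2 ** (i + 1))  # 25 = 6*4 + 1, so q = 6, r_i = 1
# The later step in the trace amounts to claiming r_i == n^(k-i-1),
# which already fails in this small case:
print(power, q, r_i, r_i == power)    # 25 6 1 False
```

So the declared r_i = n^(k - i - 1) only holds in the degenerate case where the power is already smaller than 2^(i+1).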
/images/17435421010140572.webp
I copied your comment, and it insisted it was correct. I then shared the image, and it seems to think that the issue is imprecise terminology on its part rather than an actual error.
Here's the initial response:
https://rentry.org/yzvh9n47
After putting the image in:
https://rentry.org/c6nrs385
The important bit:
This happens because while the model has been programmed by some clever sod to apologize when told that it is wrong, it doesn't actually have a concept of "right" or "wrong", just tokens with different correlation scores.
Unless you explicitly tell/program it to exclude the specific mistake(s) it made from future iterations (a feature typically unavailable in current LLMs without a premium account), it will not only continue to make those mistakes but "double down" on them, because whatever most correlates with the training data must, by definition, be correct.
This is wrong! It would have been a reasonable claim to make a few years back, but we know for a fact this isn't true now:
https://www.anthropic.com/research/tracing-thoughts-language-model
There was other relevant work which shows that models, if asked if they're hallucinating, can usually find such errors. They very much have an idea of true versus false, to deny that would be to deny the same for humans, since we ourselves confabulate or can be plain old wrong.
Gemini 2.5 Pro Thinking in particular is far more amenable to reason. It doesn't normally double down and will accept correction. At least ChatGPT has the option to add memories about the user, so you can save preferences or tell it to act differently.
I'm slightly disappointed to catch it hallucinating, which is why I went to this much trouble instead of just accepting that as a fact the moment someone contested it. It's still well ahead of the rest.
Sorry, I meant to reply to @yunyun333's comment about "doubling down", but I can assure you that we do not "know that for a fact", and I feel the need to caution you against believing everything you read in the marketing materials.
The "hallucination problem" can not realistically be "solved" within the context of regression based generative models as the "hallucinations" are an emergant property of the mechanisms upon which those models function.
A model that doesn't hallucinate doesn't turn your vacation pictures into a Hayao Miyazaki frame either, and the latter is where the money and publicity are.
Developers can adjust the degree of hallucination up or down and tack additional models, interfaces, and layers on top to smooth over the worst offenses, as Altman and co continue to do, but the fundamental nature of this problem is why many people who are seriously invested in machine learning/autonomy dismiss models like GPT (from which Claude is derived) as an evolutionary dead-end.
Anthropic is a reputable company on the cutting edge of AI, so I'd ask you for concrete disagreements rather than generic advice for caution.
Here are other relevant studies on the topic:
https://openreview.net/forum?id=KRnsX5Em3W
https://openreview.net/forum?id=fMFwDJgoOB
https://aclanthology.org/2023.findings-emnlp.68/, an older paper from 2023.
This applies the same standard about the ability to differentiate truth from fiction that is used to justify that belief in humans.
Further, as models get larger, hallucination rates have consistently dropped. I recently discussed a study on LLM use for medical histories which found 0% and ~0.1% hallucination rates. As I've said before, humans are not immune to hallucinations or confabulations, I'd know since I'm a psych trainee. That's true even for normal people. The only barrier is getting hallucination rates to a point where they're generally trustworthy for important decisions, and in some fields, they're there. Where they're not, even humans usually have oversight or scrutiny.
There is a difference between hallucination and imagination. That is just as true for LLMs as it is for humans. Decreasing hallucination rates do not cause a corresponding decrease in creativity, quite the opposite.
Anthropic is a Silicon Valley start-up, currently seeking investors, that was spun out of OpenAI by friends of Sam Bankman-Fried.
From this we can infer things about the motives, politics, ethics, and thought processes of the founders/upper management. I think that a heavy dose of skepticism is warranted towards any claims they make, especially when said claim is regarding something they are trying to get you to invest in.
I skimmed the studies you linked, and while the first makes the strongest case, it is also the weakest version of the claim that an LLM "knows when it's lying".
That "LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors" is trivially true, but I would argue that the use of the word "truthfulness" here is in error. What the students are actually discussing in this study are the confidence intervals generated as part of the generative/inference process. The analysis and use of CIs to try to reduce hallucinations/error rates is not a novel insight or approach; it is almost as old as machine learning itself.
As such, I took the liberty of looking into the names associated with your 3 studies and managed to positively identify the professional profiles of 10 of them. Of those 10, none appear to hold any patents in the US or EU or have their names associated with any significant projects. Only 3 appear to have done much (if any) work outside of academia at the time the linked study was posted. Of those 3, only 1 stood out to me as having notable experience or technical chops. Accordingly, I am reasonably confident that I know more about this topic than the people writing or reviewing those studies.
There may be a difference between hallucination and imagination in humans. But I assure you that no such difference exists within the context of an LLM. When you examine the raw output of the generative model (i.e., what the algorithm is generating, not what is presented to the consumer), "hallucination rates" and "creativity" are almost 100% correlated. This is because "creative decisions" and "hallucinations" in a regression model are both essentially deviations from the training corpus, and the degree of deviance you're prepared to accept is a key consideration in both the design and evaluation of an ML algorithm.
I encourage anyone who is sincerely interested in this topic to watch this video. The whole thing is excellent, but for those with limited time/attention the specific portion relevant to this thread runs from 8 minutes 23 seconds to just over the 17 minute mark.
What does "solving" the hallucination problem look like, though? Humans also hallucinate all the time - in fact, arguably, this is one of the core reasons for the existence of this website and specifically this CW roundup thread - and it is something we've "solved" through various mechanisms of checking and verifying and holding people accountable, with none of them getting anywhere near perfect. Now, human hallucinations are more well understood than LLM ones, making them easier to predict in some ways, but why couldn't we eventually get a handle on the types of hallucinations that LLMs tend to have, allowing us to create proper control mechanisms for them such that the rate of actual consequential errors becomes lower than those caused by human hallucinations? If we reach that point, then could we say that hallucinations have been "solved?" And if not, then what does it matter if it wasn't "solved?"
"What does solving the hallucination looks like?" is a very good question. A major component of the problem is defining the boundaries of what constitutes "an error" and then what constitutes an acceptable error rate. Only then can you begin to think about whether or not that standard has been met and the problem "solved".
In summary, the answer to that question is going to look very different depending on the use case. The requirements of the average white-collar office drone looking to translate a news article are going to be very different from those of a cyber-security professional at a financial institution, or an industrialist looking to automate portions of their process.
When I'm giving my intake speech to interns and new hires, I talk about "the 9 nines": that in order to have a minimally viable product, we must meet or exceed the standard of "baseline human performance" with 99.9999999% reliability. Imagine a test with a billion questions where one additional incorrect answer means a failing grade.
In this context, "humans also hallucinate" is just not an excuse. Think about how many "visual operations" a person typically performs in the course of going about their day. Ask yourself how many of the cars on your commute this afternoon, or words in this comment thread, you have hallucinated. A dozen? None? If you think you are sure, are you "9 nines" sure?
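Written out as arithmetic, the standard is stark; the figures below are just the "9 nines" definition, not data:

```python
# "9 nines" as an error budget: 99.9999999% reliability over a billion
# operations leaves room for roughly one mistake, total.
reliability = 0.999999999
operations = 1_000_000_000
error_budget = operations * (1 - reliability)
print(error_budget)  # ~1.0: a single extra wrong answer is failure
```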
A lot of the current refinement and iteration work on generative machine learning models revolves around adding layers of checks to catch the most egregious errors (not unlike with humans, as you observed) and giving users the ability to "steer" them down one path or another. While this represents an improvement over the previous generation, such solutions are difficult/expensive to scale and actively deleterious to autonomy. The thinking is that "a robot" that requires a full-time babysitter might as well be an employee. This is why you can't buy a self-driving car yet.
I kinda respect it doubling down, but it's scrambling to cover its ass. Also, I noticed it forgot the "mod 2n" part of c_i, which also throws a wrench into things.
Ah... I get it now. Thank you! I'm disappointed to see hallucination and confabulation here, but if you're inclined, do keep trying out Gemini 2.5 Pro Thinking in particular. It's a good model.
This does turn out straightforward once you get the idea to induct on k. But that wasn't my first thought. Since I'm out of practice, I probably wouldn't have thought of that myself for a long time.
Of course, proof by induction is the classic solution to this sort of problem, so it's only my fault for failing. I wonder if slopgpt can solve it if you tell it to induct on k.
Could you confirm the exact models used? Both Gemini and ChatGPT, through the standard consumer interface, offer a rather confusing list of options that's even broader if you're paying for them.
I just used the free public-facing ones (Gemini 2.0 Flash, GPT-4o). You can try asking it for the decay time for the 3p-1s transition in hydrogen. It can do the 2p-1s transition, since this question is answered in lots of places, but struggles to extrapolate.
I will note that Gemini 2.0 Flash and GPT-4o are significantly behind the SOTA! The latter got a very recent update that made it the second best model on LM Arena, but they're both decidedly inferior in reasoning tasks compared to o1, o3 or Gemini 2.5 Pro Thinking. (Many caveats apply, since o1 and o3 have different sub-models and reasoning levels)
I asked two instances of Gemini 2.5 Pro:
Number 1:
Final answer: 5.27 ns
Second iteration:
Final answer: 5.28 ns
I wasn't lying to it, I'd enabled its ability to generate and execute code. Neither instance had access to Google Search, which is an option I could toggle. I made sure it was off. If you read the traces closely, you see mention of "searching the NIST values", but on being challenged, the model says that it wasn't looking it up, but trying to jog its own memory. This is almost certainly true.
I've linked to dumps of the entire reasoning trace and "final" answer:
First instance- https://rentry.org/cqty47r2
Second instance- https://rentry.org/2oyx24sa
I certainly don't know the answer myself, so I used GPT-4o with search enabled to evaluate the correctness of the answer. It claimed that both were excellent, and the correct value is around 5.4 ns according to experimental results (the decay time for the hydrogen 3p state).
I also used plain old Google, but didn't find a clear answer. There might be one in: https://link.springer.com/article/10.1007/s12043-018-1648-4?
But it's pay walled. I don't know if ChatGPT GPT-4o was able to access it despite this impediment.
Edit:
DeepSeek R1 without search claimed 1.2e-10 seconds. o3-mini without search claims 21 ns.
The correct answer is about 5.98 ns when applying the spontaneous emission formula, so Gemini 2.5 Pro got it correct, although it had to reference NIST when its original formula didn't work. It looks like it copies the correct formula, so I'm not sure where the erroneous factor of 4 comes from.
Thank you.
What do you mean by "reference NIST"? I think I've already mentioned that despite its internal chain of thought claiming to reference NIST or "look up" sources, it's not actually doing that. It had no access to the internet. I bet that's an artifact of the way it was trained, and regardless, the COT, while useful, isn't a perfect rendition of inner cognition. When challenged, it apologizes for misleading the user, and says that it was a loose way of saying that it was wracking its brains and trying to find the answer in the enormous amount of latent knowledge it possesses.
I also find it very interesting that the model that couldn't use code to run its calculations got a very similar answer. It did an enormous amount of algebra and arithmetic, and there was every opportunity for hallucinations or errors to kick in.
For the first calculation dump at least, it comes up with a value 6.63 × 10⁸ s^-1, then compares it to the expected value from the NIST Atomic Spectra Database 1.6725 × 10⁸ s⁻¹, then spends half the page trying to reconcile the difference, before giving up and proceeding with the ASD value.
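For what it's worth, the two figures floating around this thread (5.98 ns and 5.27 ns) may both be defensible: 1/A(3p→1s) gives the partial lifetime against that single decay channel, while the total 3p lifetime also includes the competing 3p→2s channel. The A-coefficient for 3p→2s below is my own recollection of the tabulated value and should be treated as an assumption:

```python
# Partial vs total lifetime of the hydrogen 3p state (A-values in s^-1).
A_3p_1s = 1.6725e8   # quoted in the trace as the NIST ASD value
A_3p_2s = 2.2449e7   # assumed value for the competing 3p -> 2s channel
partial_ns = 1e9 / A_3p_1s            # lifetime if 3p -> 1s were the only decay
total_ns = 1e9 / (A_3p_1s + A_3p_2s)  # total 3p lifetime, all channels
print(round(partial_ns, 2), round(total_ns, 2))  # ~5.98 vs ~5.27
```

If those A-values are right, Gemini's 5.27 ns is the total 3p lifetime, while the 5.98 ns figure is the partial lifetime against 3p→1s alone, so the disagreement may be about which quantity the question is asking for.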
Hmm. I think that's likely because my prompt heavily encouraged it to reason and calculate from first principles. It's a good thing that it noted that those attempts didn't align with pre-existing knowledge, and accurately recalled the relevant values, which must be a nigh-negligible amount of the training data.
At the end of the day, what matters is whether the model outputs the correct answer. It doesn't particularly matter to the end user if it came up with everything de-novo, remembered the correct answer, or looked it up. I'm not saying this can't matter at all, but if you asked me or 99.999% of the population to start off trying to answer this problem from memory, we'd be rather screwed.
Thanks for the suggestion and looking through the answer, I've personally run up to the limits of my own competence, and there are few things I can ask an LLM to do that I can't, while still verifying the answer myself.
At the end of the day, that's not really what matters, because nobody is going to need to solve a problem in physics with a known solution. A good portion of tests that I had as an undergraduate and in graduate school were open book, because simply knowing a formula or being able to look up a particular value wasn't sufficient to be able to answer the problem. If I want a value from NIST, I can look it up. The important part is being able to correctly engage in the type of problem solving needed to answer questions that haven’t ever been answered before.
I've had some thoughts about what it actually means to be able to do "research level" physics, which I'm still convinced no LLM can actually do. I've thought about posing a question as a top level post, but I'm not really an active enough user of this forum to do that and don't want to become one.
Finally, I want to say that for the past 18 months, I've continually been getting solicitations on LinkedIn to solve physics problems to train LLMs. The rate they offer isn't close to enough to make it worth it for me, even if I had any interest, but it would probably seem great to a grad student. I wouldn't be surprised if these models have been trained on more specific problems than we realize.
They quote the full model names in appendix A.1, but it's really such a short paper that it's worth at least scrolling through before discussing.
While the performance is surprisingly poor, it's not entirely out of line with my own experience experimenting with this class of models. They do seem to hallucinate at a very high rate on problems requiring subtle but extremely tight reasoning.
Thank you for listing out the models in the paper, but I was more concerned with the ones you've personally used. If you say they're in the same tier, then I would assume that you mean o3-high, o1 pro but not Claude 3.7 Sonnet Thinking (since you didn't mention Anthropic). I will note that R1, QWQ and Flash 2.0 Thinking are worse than those two, even if they're still competent models.
The best that Gemini has to offer is Gemini 2.5 Pro Thinking, which is the state of the art at present (in most domains). Is that the one you've tried? If you're not paying, you're not getting it on the app. I use it through AI Studio, where it's free. For ChatGPT, what was the best model you tried?
If you don't want to go to the trouble of signing up to AI Studio yourself (note that it's very easy), feel free to share a prompt and I'll try it myself and report back. I obviously can't judge the quality of the answer on its own merits, so I'll have to ask you.
Ah, I'm not OP. I've tried O3 High, O1 Pro, and QwQ. For the paper they have the prompts and grading scheme on the corresponding github. USAMO questions are hard enough you definitely need some expertise to grade them accurately. I'm far from being capable of judging them accurately.
Very qualitatively, the current crop of LLMs impresses me with the huge breadth of topics they can talk about. But "talking" to them does not give the impression they are better at reasoning than anyone I know who has scored >50% on the USAMO, IMO, or the Putnam.
They are still improving very quickly, and I don't see the rate of improvement leveling off. Gemini 2.5 recently answered with ease a test question of mine that Gemini 2.0 (and, honestly, everything prior to Claude 3.5) had been utterly confused by. But I admit that they're definitely lacking in reasoning skills still; they're much better at retrieval and basic synthesis of knowledge than they are at extrapolating it to anything too greatly removed from standard problems that I'd expect were in their training data sets.
Still, can we take a step back and look at the big picture here? The USAMO is an invitation-only math competition where they pick the top few hundred students from a bigger invitation-only competition winnowed from an open math competition, and the median score on it is still sub-50%. The Putnam has thousands of competitors, but they're typically the most dedicated undergrad math majors and yet the median score on it is often zero! How far have we moved the goal posts, to get to this point? It's the "Can a robot write a symphony?" "Can you?" movie scene made manifest.
I don't think I know anyone who:
I think my younger cousin was an IMO competitor, but he didn't win AFAIK, even if he's now in a very reputable maths program.
I'm personally quite restricted in my ability to evaluate pure reasoning capability, since I'm not a programmer or mathematician. I know they're great at medicine, even tricky problems, but what makes medicine challenging is far more the ability to retain an enormous amount of information in your head than an unusually onerous demand on fluid intelligence. You can probably be a good doctor with an IQ of 120, if you have a very broad understanding of relevant medicine, but you're unlikely to be a good mathematician producing novel insights.
I did for all three, but it was many years ago, and I think I'd struggle with most IMO problems nowadays. Pretty sure I'm still better at proofs than the frontier CoT models, but for more mechanical applied computations (say, computing an annoying function's derivative) they're a lot better than me at churning through the work without making a dumb mistake. Which isn't that impressive, TBH, because Wolfram Alpha could do that too, a decade ago. But you have to consciously phrase things correctly for WA, whereas LLMs always correctly understand what you're asking (even if they get the answer wrong).