This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Sam Altman Is Super Excited for a Great 2025
Link to blog post
Yesterday, Sam Altman posted this short personal blog post. The material takeaway is summarized in this paragraph:
"AGI is right around the corner. Seriously, we mean it this time." Okay, I'll believe it when I see it and if that means I'm not worried enough about "alignment" and "safety" that's fine. Our robot overlord will smile upon me or he wont.
Sam's explicit assertion here will be debated on all the normal forums and tweet ecosystems. Thought pieces will be written by breathless techno-bros, techno-phobes, and all others. LessWrong is going to get out the Navel Gazer 6000.
None of that is particularly alarming to me.
What is: the first two-thirds of Sam's blog post.
This is because it is an amazing amalgam of personal-corpo speak straight out of a self-congratulatory LinkedIn post. Here are some highlights (lowlights?):
These three were particularly triggering for me:
I think one of the points of near consensus on The Motte is a general hyper-suspicion of this kind of disingenuous kumbaya style of writing. It's "everyone love everyone", "we're all in this together", "we made mistakes but that's okay because we care about one another."
This is exactly the kind of corpo-speak that both precedes and follows a massive round of brutal layoffs based on the cold equations of a balance sheet. Or some sort of change in service that is objectively worse for customers. I am deeply surprised that Sam seems to have truly adopted this at his most personal level. This was not a sanitized press release from OpenAI, but something he posted on what appears to be his personal blog. Sure, many personal blogs become just as milquetoast as corporate press releases if/when a person gets famous enough, but in the tech world a personal blog or twitter account is usually the last bastion for, you know, actual real human communication.
I had another post a few months ago about OpenAI. One of the things that came out of the comments was a sort of "verified rumor" that Sam Altman is a pure techno-accelerationist without any sort of moral, theological, or virtuous framework. He simply wants to speedrun to the singularity because humans are kind of "whatever" in his eyes. This blog post, to me, provides some more evidence for that. He's using the universal language of "nice to everybody", which is recognized - correctly - as the sound the big machine makes right before it thrashes you. This follows a pattern. OpenAI was a non-profit until it wasn't. Mr. Altman went to Congress in 2023 to beg for totally-not-regulatory-capture for his own company but for, like, you know, the good of everyone.
The technical merits and viability of AGI aside, the culture war angle here is that while many other groups are having meaningful, open discussions about the future of economic, political, and social life with AIs/AGIs, Altman (and a few others like him) is using the cloaked, closed, and misleading language that has become the preferred dialect of the PMC. As I said, that dialect is especially abundant right before they screw you over.
Matt Levine's latest reminds us that there is no objective definition of AGI, but that "deciding" they've achieved AGI allows them to trigger certain contractual provisions. So, on the one hand, such talk can be meant to pump up the hype cycle... but it could also be laying groundwork and making threats for a hardball negotiation.
If you read that closely, Microsoft was careful - OpenAI can't declare AGI until they've returned $100bil to them from profits.
My read of the text was not that. There is an economic interest in OpenAI and a commercial deal. The economic interest is just that, whatever happens, AGI or not, Microsoft makes up to ~$100B if OpenAI makes a bunch of money. The commercial deal ends when OpenAI "decides" that they've reached AGI.
Frankly, it would be even weirder if it were the other way around. People still kind of think that "achieving AGI" is some sort of factual state of affairs, even if it's ill-defined. It sort of doesn't make sense to define AGI in terms of revenue/profit. I mean, I guess someone could, but I don't think that's what they've done here.
I don't think it makes sense to treat Altman as a PMC apparatchik, though I agree that the blog post is written in that dialect. He just doesn't want to scare the hoes, the hoes here being normie investors and consumers excited about being able to cheat on their homework easier. That dialect is meant to be comforting and create a sense of normalcy.
One thing to understand about the folks at OpenAI et al. is that they've thoroughly drunk the Kool-Aid. Any communication coming from them has to be assumed to be adversarial, and looking for honesty about intention or scope, if that honesty would interfere with achieving that goal, is a fool's errand.
Phrasing it as the precursor to the "screw you over" step is kind of right, but potentially misleading. Altman isn't hoping for the conventional "take your money before riding off into the sunset laughing at the rube" kind of screwing; he's thinking about dominating the light cone and paperclipping it with his values.
Recent concerns about an AI Winter (just prior to o1) have been greatly exaggerated. That being said, I expect AGI to come somewhere between 2025-2033 (70% CI) and by 2029 (50% CI) despite everything Altman says.
The man can't be trusted farther than you can throw him. Leaving aside his tone in this blog post that also raises my hackles (especially the section regarding his coup and counter-coup, which is an excellent example of using Many Words to say Nothing), he's a born politician with no clear convictions who seems particularly adept at outmaneuvering sincere nerds.
Zvi, over at LessWrong, has quite a few posts on Sam's duplicity and insincerity, but anyone with eyes can see the facts on the ground regarding his pivot with OAI, going from an open-source non-profit to, uh... whatever it is now (Microsoft's bitch).
As much as it is super important that we must never diagnose someone with a psychological condition without first paying a licensed psychologist, I've believed for a long time that Altman is a sociopath. I think this is further evidence.
Nobody in AI spaces talks like this, and he is very much "hiding his power level" in order to try to manipulate the midwits who run our country. He's done it before too. Altman recognizes that his best bet of becoming god-emperor, or whatever it is he wants, rests on having the US government make competition illegal. DeepSeek recently trained a near competitor for 6 million dollars (allegedly). The advantage that OpenAI has over its many competitors is precarious, and AI is unlikely to take off fast enough for one company to dominate.
But I'm less pessimistic than you about the possibility of near-term ASI. I think it's probable that AGI/ASI is less than 10 years away. The critics increasingly resemble the critics of evolution, worshiping the "god of the gaps" for the increasingly small things that AI can't do. The progress in the last year alone has been staggering.
Altman is, first and foremost, a bullshit artist. I don't think he's "hiding his power level" as much as he's just trying to hustle credulous VCs and Substack readers for influence, attention, and funding.
As I pointed out the last time Altman came up as a topic, there are legitimate applications for LLMs that OpenAI is well positioned to deliver (and make bundles of money on in the process), but when it comes to pushing the boundaries of machine learning and perhaps developing true AGI, Sam Altman is not that guy, and OpenAI (at least in its current form) is not that company.
There are serious limitations to OpenAI's model that are not going to be solved by throwing more petaflops and training data at the problem. The latter especially, as the training data becomes increasingly polluted with OpenAI's own output.
I find it very unlikely that Altman wants money. He may want power, but I don’t think he’s truly driven by wanting to rule the world, at least that’s my impression from people who have known him. I think, like the first post suggests, he’s just gunning for the singularity and fuck the consequences. In a way, I respect it. Come what may, I’d rather we burn out in a glorious immolation led by a successor intelligence than in the mundanity of a GoF’d smallpox accidentally released by a Chinese lab or in nuclear MAD built on 1950s tech. We’re better than that, at least.
Immolation would be great, we all dream of a quick extinction for our children. But we're going to get Allied Mastercomputer - deep down you know it.
The nice thing about building AI via training on human text is that it increases the odds that the resulting superintelligence will care too much about humanity to just let us die.
The scary thing about building AI via training on human text is that it increases the odds that the resulting superintelligence will care too much about humanity to just let us die.
On one hand we can create AM, with all the baggage that entails; on the other we could create Mother from the movie I Am Mother. I honestly hope for the second one, as it would at least bring about a better humanity.
What's currently stopping AI from contributing to improvements in AI tech, do you think?
Nothing. AI is already in the loop. Over time the percentage of code written by AI will increase until it is doing essentially all the important work.
Note that my prediction is >50% chance over 10 years so that's a relatively long timeline.
Here's what a short timeline looks like.
I don't think hobbyists and H1Bs using ChatGPT as a substitute for substack really counts as "AI is already in the loop".
I'm also skeptical that a meaningful percentage, if any, of the code being written by AI constitutes "important work", though who knows, some people get really into their Gacha Games.
Yes and no. I use AI when coding AI, but it’s ultimately a souped-up StackExchange. It presents known information in more useable form. Right now, I wouldn’t say it’s contributing to improvements in AI tech in any meaningful way.
If it was, then AGI would already be here. Nobody is making that claim. I'm certainly not.
But if it makes existing human researchers 10% more efficient, then it's making a difference. Next year, 20%. Then 50%, 100%, etc... until human researchers are no longer necessary.
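Read literally as compounding, the "10%, then 20%, then 50%..." argument looks like the toy sketch below. All the numbers (headcount, starting boost, and the assumption that the boost doubles each year) are made up for illustration, not anyone's actual forecast.

```python
# Toy sketch of the compounding-efficiency argument above.
# Every concrete number here is an illustrative assumption.
human_researchers = 1000          # assumed constant headcount
ai_gain = 0.10                    # year 1: researchers are 10% more efficient

for year in range(1, 8):
    multiplier = 1 + ai_gain
    effective_output = human_researchers * multiplier
    ai_share = 1 - 1 / multiplier  # fraction of output attributable to AI assistance
    print(f"year {year}: {effective_output:,.0f} researcher-equivalents, "
          f"{ai_share:.0%} of the work done by AI assistance")
    ai_gain *= 2                  # assumption: the boost itself doubles yearly
```

Under those assumptions the human share of research output falls below half within about five years, which is the intuition behind "until human researchers are no longer necessary."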
What I mean is, it’s a difference in kind rather than degree. If you have an AI that can code anything that’s been coded before with increasing speed and correctness, that will make human researchers more efficient but will never obviate the need for human researchers. For the same reason, it cannot foom, because increasing experiment speed is important but human ingenuity is still the bottleneck.
Code is a nearly solved problem, and I regularly see the leading models create correct output on the first try for things that haven't existed before, so long as you give them a reasonable spec.
That "reasonable spec" bit is a pretty big caveat, but the coding portion can be fully automated even today.
But producing ‘novel’ standard code is essentially interpolation in a very densely populated area of data. Research, by definition really, is extrapolation of thought into unpopulated space and as far as I’m aware LLMs can’t really do it.
I'd mostly agree.
It personally sets my teeth on edge to read something that clearly wants to inspire strong emotions in the reader, or perhaps persuade them of something, but doesn't actually point to anything happening that's worth being excited about.
It's the difference between saying "We've received and analyzed the test results which have provided us with an unparalleled depth of understanding regarding the intricate nuances of your overall health and physiological dynamics. The insights gained from this process are both illuminating and inspiring, offering an exciting roadmap for continued progress and optimization. We are deeply committed to partnering with you on this transformative journey, leveraging future interactions to refine our approach, enhance the granularity of our feedback, and will ensure you remain in top condition in the coming decades!"
(ironically, I used ChatGPT to generate the most corpo-speak version of that sentence possible)
vs. "The test came back negative. You're cancer free, congrats!"
The first just desperately wants you to feel good without delivering the information you actually would like to hear that would make you happy. The second actually gives you the reason to be happy because there's a tangible fact about the world that is 'good,' and you just needed to hear it said.
I also note that there are no concrete examples of their products having already improved productivity at any company. Either the examples they have are underwhelming, or maybe they aren't allowed to discuss them? Otherwise, why not talk about tangible achievements?
I'm increasingly annoyed when the AI 'insiders' speak reverently about how they're instantiating a Godhead that will relieve us of all the miserable burdens of our mortal existence in the near future, but get hugely cagey about how that's actually being done or why we can trust them to do this correctly. They talk in religious/spiritual terms when telling us what the future holds, but hew to corpo-speak and remain businesslike when asked about present status.
It reads like a particularly opaque sort of intentional hype cycle that might be mostly designed to inspire us to transfer tons of wealth to them before AI progress stalls out for a while.
Ironically, this would also describe the writing of AI/LLMs themselves when you prompt them to show any sort of character or express a "personal" opinion. At this rate Sam could get replaced by an actual AI halfway through the singularity and literally nobody would notice.
If I had to guess, they feel the AGI competition: current Claude is near-strictly better already and the recent DeepSeek V3 seems quite close while being orders of magnitude cheaper (epistemic status: haven't tested much yet). If I had no big-dick reveals in the pipeline I'd probably look to cut and run too.
It does, but at least with some prompt engineering you can get them to distill down to the actual informational content.
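That "distill down to the informational content" trick is easy to sketch. This is a minimal example, assuming the OpenAI Python client; the model name and prompt wording are illustrative choices, not anything from the post.

```python
# Minimal sketch: strip corpo-speak down to its factual content with an LLM.
# Assumes the OpenAI Python client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def distill(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text as a bullet list of concrete, "
                        "verifiable claims. Drop all sentiment, hedging, and "
                        "self-congratulation. If there are no concrete claims, "
                        "reply with 'No factual content found.'"},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(distill("We are deeply committed to partnering with you on this "
              "transformative journey toward unprecedented value creation."))
```

On text like the health-report example above, a prompt along these lines tends to return either the one real fact or an admission that there isn't one, which is exactly the test being proposed.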
Perhaps that's the joke. This is him asking the latest GPT model to spit out his annual report and see if anybody calls him on it.
I already got suspicious when they released Sora to the wild: an impressive model, but one that arrived late to the table in terms of publicly-available video generation capabilities.
OpenAI had what seemed like a 'comfortable' lead for about a year there, but if progress had remained exponential or even linear from the past models, they would be running away with the game. Instead the other close competitors seem to be steadily chomping away at their lunch.
"When you see exponential, think logistic" seems to remain a useful rule-of-thumb. (I'm not sure of the source; I find only me when I search but I know I didn't originate it)
Maybe O'Reilly's "It's not exponential, it's sigmoidal"? https://web.archive.org/web/20240114184321/http://radar.oreilly.com/archives/2007/11/sigmoidal-not-exponential.html
I feel like I've seen your snappier version elsewhere, though. Maybe it's an echo of "When you hear hoofbeats, think horses, not zebras."
The tricky bit seems to be that it's very difficult to know where you are on a logistic curve until you're past the midpoint. Though with the limits of pre-training people started running into last year, the claim that we're still clearly on the left side is more tenuous.
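That ambiguity is easy to demonstrate numerically: samples from the left half of a logistic curve fit an exponential almost perfectly, so "it looks exponential so far" tells you very little about the ceiling. A rough sketch with made-up parameters:

```python
# Rough sketch: early samples from a logistic curve fit an exponential almost
# perfectly, so the data alone can't reveal the ceiling. Parameters are arbitrary.
import numpy as np

t = np.linspace(0, 4, 50)                  # "early" times, well before the midpoint t0=10
L, k, t0 = 100.0, 1.0, 10.0                # arbitrary ceiling, growth rate, midpoint
logistic = L / (1 + np.exp(-k * (t - t0)))

# Fit y = a * exp(b * t) by linear regression on log(y)
b, log_a = np.polyfit(t, np.log(logistic), 1)
exp_fit = np.exp(log_a) * np.exp(b * t)

relative_error = np.max(np.abs(exp_fit - logistic) / logistic)
print(f"worst relative error of the exponential fit: {relative_error:.2%}")
# prints a fraction of a percent: nothing in the early data hints at the ceiling L
```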
Yep. I may be wrong but I seem to recall that there was a brief period of time where a lot of folks in the space did genuinely think improvements would continue to follow the exponential curve even if individual jumps between new models were a little smaller.
Or at least were willing to hype it that way. I'm prepared to be corrected if my memory is faulty there.
There was certainly a 'vibe' that we might have activated the fast takeoff scenario.
For what it's worth, this is still the vibe, indeed more than ever, and I don't understand what change you're implying you've noticed. After o3, the consensus of all top lab researchers seems to be "welp, we're having superintelligence in under 5 years".
I guess I'd call it a bifurcation.
I read the material that suggests all the pieces are in place to achieve superintelligence.
But I'm also reading reports that the most recent training runs are seeing diminishing returns. So making the models BIGGER isn't giving the same results.
Which certainly explains why OpenAI hasn't pushed GPT-5 out the door, if it can't demonstrate as significant an improvement as the jump from 3 to 4 was.
So while improvements and tweaks to existing models are giving us gains in the meantime, it isn't very clear to me where the quantum leap that will enable true AGI/superintelligence is hiding. Which is more a me issue; I'm certainly not an insider. I'm just seeing two sides: those who think moar compute is good enough, and those who think it's going to take some tricky engineering.
And Altman sure isn't telling us what he's seeing. So my question is whether he's playing cards close to the vest to avoid popping the hype bubble or because he really thinks he's going to blow us away with the next product. Possibly blow us away in the most literal meaning of the word.
He is a startup founder with a record-high burn rate and a product that isn't good enough to be commercial. He has to bring in lots of customers by promising it will soon be better. He wants to speedrun to a point where he can actually generate revenue that covers his costs.
ChatGPT is not good enough to replace developers, lawyers, or any other qualified profession. It is an alternative to Google without the ability to insert advertising. GitHub Copilot isn't great for overall productivity.
ChatGPT is facing scaling laws. The bigger the model, the more power it draws. The size of model required to be useful is too large for today's hardware and power costs. They can no longer make a model orders of magnitude larger. The datasets are too large to fine-tune manually.
Sam Altman is pretty much begging for a nuclear reactor and enough GPUs to swallow all that juice.
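To put very rough numbers on the power point, a common back-of-the-envelope is ~2 FLOPs per parameter per generated token for inference. Every concrete figure in the sketch below (parameter count, token volume, GPU throughput and power draw) is an assumption for illustration, not OpenAI's actual numbers.

```python
# Back-of-the-envelope inference energy using the ~2 FLOPs/parameter/token rule of thumb.
# All concrete numbers here are illustrative assumptions, not real OpenAI figures.
params = 1.8e12                 # assumed parameter count for a frontier-scale model
tokens_per_day = 1e11           # assumed daily token volume across all users
flops_per_token = 2 * params

gpu_flops = 1e15                # assumed effective throughput per GPU (FLOP/s)
gpu_power_kw = 0.7              # assumed power draw per GPU, kW

gpu_seconds = tokens_per_day * flops_per_token / gpu_flops
energy_mwh = gpu_seconds / 3600 * gpu_power_kw / 1000
print(f"~{energy_mwh:,.0f} MWh/day just for inference under these assumptions")
# Scale the parameter count up 10x and the bill scales with it -- hence the
# interest in dedicated power plants and cheaper-per-token models.
```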
Also, OpenAI had to raise prices to keep themselves from losing money on ChatGPT.
I recently asked it to define "Chinook Trough" and it told me directly that nothing of the sort exists. When I explained that it does in fact exist (I've spent too much time staring at the skymap over the Pacific), it changed its tune, apologized, and gave me a relatively clear definition. I only mention this because ChatGPT has usually been good at this sort of straight explanation query.
That's weird. Usually LLMs have exactly the opposite problem. I would find them infinitely more useful if the worst case output was "I've never heard of that" rather than confident-but-wrong hallucinations. I guess "that doesn't exist" is still in confident-but-wrong territory, but it's not the usual "yes, and" improv ad-libbing I see.
I second this take. Full disclosure: I'm a middling noob learning programming and math post-college, and I'm not very smart.
Sam has been making similar announcements for a while, since they serve as good PR and boost your firm's morale. Today Anthropic's models seem to perform better, and LLMs aren't the only thing in AI. One of the people who originally played a role in LLMs getting where they are is Jeremy Howard of fast.ai fame, who worked on ULMFiT, and his comments have been fairly skeptical; I usually lean on his takes for most things AI. I did the first few lessons on fast.ai before ChatGPT blew up, and he has a good description of the training process within transformers.
We'll see I guess. DeepSeek trained a GPT-4 level AI for $6 million (admittedly employing existing LLMs). They have also made huge efficiency gains in inference, charging just $0.14 per million tokens as compared to $3 per million output tokens with a comparable Claude model.
Software is becoming more efficient much more quickly than hardware. We might not need those terawatt scale data centers until after AGI is achieved.
On a theoretical level, absent some sort of woo about quantum computation in the human brain, there's no reason why silicon shouldn’t be vastly superior to synapses eventually.
I'm a huge DeepSeek fan so will clarify.
Those are their own LLMs, and they collectively bump that up to no more than $15M, most likely (we do not yet know the costs of R1 or anything about it, will take a few more weeks; V2.5 is ≈2.2M hours).
$0.14/1M input, $0.24/1M output vs $3/$15, to be clear. There are nuances, like $0.014 per 1M input in the case of cache hits, opt-in paid caching on Anthropic, and the price hike to come in February.
But crucially, they've published model and paper. This is most likely done because they assume top players already know all these techniques, or are close but work on another set that'll yield the same effect.
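To make the gap concrete, here's a quick sketch using the per-million-token prices quoted above; the monthly workload numbers are hypothetical.

```python
# Quick arithmetic on the per-million-token prices quoted above.
# The workload volumes are made-up illustrative numbers.
deepseek = {"input": 0.14, "output": 0.24}
claude   = {"input": 3.00, "output": 15.00}

input_mtok, output_mtok = 500, 100   # assumed monthly volume, in millions of tokens

def monthly_cost(prices):
    return input_mtok * prices["input"] + output_mtok * prices["output"]

print(f"DeepSeek: ${monthly_cost(deepseek):,.0f}/mo, "
      f"Claude: ${monthly_cost(claude):,.0f}/mo, "
      f"ratio ~{monthly_cost(claude) / monthly_cost(deepseek):.0f}x")
```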
I think the phrase "quantum woo" vastly understates the potential impact of quantum computing on machine learning. The quantum algorithm zoo, for example, lists a number of quantum machine learning algorithms. Several of these get exponential speedups over classical algorithms, but even the quadratic speedup of Grover's algorithm would be game-changing at the scale frontier models operate on.
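For the unfamiliar: Grover's algorithm finds a marked item among N possibilities with on the order of √N oracle queries instead of ~N classically, so "only quadratic" still matters enormously at large N. A toy comparison (the search-space sizes are just illustrative):

```python
# Toy comparison of classical vs Grover-style query counts for unstructured search.
# Grover needs on the order of sqrt(N) oracle queries versus ~N classically.
import math

for n in (1e6, 1e9, 1e12):      # illustrative search-space sizes
    classical = n                # worst case ~N queries (expected ~N/2)
    grover = math.sqrt(n)        # ~ (pi/4) * sqrt(N) queries
    print(f"N = {n:.0e}: classical ~{classical:.0e} queries, "
          f"Grover ~{grover:.0e} queries ({classical / grover:.0e}x fewer)")
```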
I agree that most normie use of quantum in the brain is "woo". And I also agree that it's not been established that the brain relies on any quantum effects. But there is actual legitimate research in these directions and it seems wrong to offhandedly dismiss it.
Viable quantum computing dropping today (or even in the next decade or two) would also break almost all extant (asymmetric) cryptography. Yeah, NIST just recently published specs for post-quantum crypto, but I expect it'll be a decade before those are universally supported. Maybe less if it happens: SSL everywhere happened fairly fast once it became a real priority almost overnight. But if quantum were something any well-funded startup could do, nation-state actors could throw some impressive wrenches into any secure networks for a while.