This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
As much as it is super important that we never diagnose someone with a psychological condition without first paying a licensed psychologist, I've believed for a long time that Altman is a sociopath. I think this is further evidence.
Nobody in AI spaces talks like this, and he is very much "hiding his power level" in order to manipulate the midwits who run our country. He's done it before, too. Altman recognizes that his best bet of becoming god-emperor, or whatever it is he wants, rests on having the US government make competition illegal. DeepSeek recently trained a near-competitor for (allegedly) 6 million dollars. The advantage that OpenAI has over its many competitors is precarious, and AI is unlikely to take off fast enough for one company to dominate.
But I'm less pessimistic than you about the possibility of near-term ASI. I think it's probable that AGI/ASI is less than 10 years away. The critics increasingly resemble critics of evolution, worshiping a "god of the gaps" made of the ever-smaller set of things AI can't do. The progress in the last year alone has been staggering.
Altman is, first and foremost, a bullshit artist. I don't think he's "hiding his power level" so much as he's just trying to hustle credulous VCs and Substack readers for influence, attention, and funding.
As I pointed out the last time Altman came up as a topic, there are legitimate applications for LLMs that OpenAI is well positioned to deliver (and make bundles of money on in the process), but when it comes to pushing the boundaries of machine learning and perhaps developing true AGI, Sam Altman is not that guy, and OpenAI (at least in its current form) is not that company.
There are serious limitations to OpenAI's model that are not going to be solved by throwing more petaflops and training data at the problem. The latter especially, as the training data becomes increasingly polluted with OpenAI's own output.
I find it very unlikely that Altman wants money. He may want power, but I don’t think he’s truly driven by wanting to rule the world, at least that’s my impression from people who have known him. I think, like the first post suggests, he’s just gunning for the singularity and fuck the consequences. In a way, I respect it. Come what may, I’d rather we burn out in a glorious immolation led by a successor intelligence than in the mundanity of a GoF’d smallpox accidentally released by a Chinese lab or in nuclear MAD built on 1950s tech. We’re better than that, at least.
Immolation would be great; we all dream of a quick extinction for our children. But we're going to get Allied Mastercomputer - deep down you know it.
The nice thing about building AI via training on human text is that it increases the odds that the resulting superintelligence will care too much about humanity to just let us die.
The scary thing about building AI via training on human text is that it increases the odds that the resulting superintelligence will care too much about humanity to just let us die.
On one hand we could create AM, with all the baggage that entails; on the other we could create Mother from the movie I Am Mother. I honestly hope for the second one, as it would at least produce a better humanity.
What's currently stopping AI from contributing to improvements in AI tech, do you think?
Nothing. AI is already in the loop. Over time the percentage of code written by AI will increase until it is doing essentially all the important work.
Note that my prediction is >50% chance over 10 years, so that's a relatively long timeline.
Here's what a short timeline looks like.
I don't think hobbyists and H1Bs using ChatGPT as a substitute for Stack Overflow really counts as "AI is already in the loop".
I'm also skeptical that a meaningful percentage, if any, of the code being written by AI constitutes "important work", though who knows, some people get really into their gacha games.
Yes and no. I use AI when coding AI, but it's ultimately a souped-up StackExchange. It presents known information in a more usable form. Right now, I wouldn't say it's contributing to improvements in AI tech in any meaningful way.
If it were, then AGI would already be here. Nobody is making that claim. I'm certainly not.
But if it makes existing human researchers 10% more efficient, then it's making a difference. Next year, 20%. Then 50%, 100%, and so on, until human researchers are no longer necessary.
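A toy sketch of that compounding, with made-up yearly multipliers (illustrative assumptions, not a forecast):

```python
# Toy model: if AI multiplies researcher efficiency by a growing factor
# each year, effective research output compounds quickly.
# All numbers here are illustrative assumptions, not measurements.

speedups = [1.10, 1.20, 1.50, 2.00, 3.00]  # hypothetical year-over-year multipliers

output = 1.0  # baseline: one "researcher-year" of progress per year
for year, s in enumerate(speedups, start=1):
    output *= s
    print(f"Year {year}: {output:.2f}x baseline research output")

# Year 5 lands at ~11.9x baseline: the point is that even modest-looking
# multipliers stack multiplicatively, not additively.
```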
What I mean is, it’s a difference in kind rather than degree. If you have an AI that can code anything that’s been coded before with increasing speed and correctness, that will make human researchers more efficient but will never obviate the need for human researchers. For the same reason, it cannot foom, because increasing experiment speed is important but human ingenuity is still the bottleneck.
Code is a nearly solved problem, and I regularly see the leading models create correct output on the first try for things that haven't existed before, so long as you give them a reasonable spec.
That "reasonable spec" bit is a pretty big caveat, but the coding portion can be fully automated even today.
But producing 'novel' standard code is essentially interpolation in a very densely populated region of data. Research, almost by definition, is extrapolation of thought into unpopulated space, and as far as I'm aware, LLMs can't really do that.
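A crude way to see the interpolation/extrapolation gap is a toy curve fit (made-up data; an analogy, not a claim about LLM internals):

```python
# Fit a model on a dense region of data, then query it inside and
# outside that region: interpolation works, extrapolation falls apart.
# Data, target function, and polynomial degree are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)                 # densely sampled region
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.size)

coeffs = np.polyfit(x, y, deg=9)            # flexible polynomial fit

for query in (0.5, 3.0):                    # inside vs. far outside the data
    pred = np.polyval(coeffs, query)
    true = np.sin(3 * query)
    print(f"x={query}: predicted {pred:+.2f}, true {true:+.2f}")

# At x=0.5 (interpolation) the fit tracks the true curve closely; at
# x=3.0 (extrapolation) the degree-9 polynomial is wildly off.
```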