This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
"When you see exponential, think logistic" seems to remain a useful rule of thumb. (I'm not sure of the source; when I search I find only myself, but I know I didn't originate it.)
Maybe O'Reilly's "It's not exponential, it's sigmoidal"? https://web.archive.org/web/20240114184321/http://radar.oreilly.com/archives/2007/11/sigmoidal-not-exponential.html
I feel like I've seen your snappier version elsewhere, though. Maybe it's an echo of "When you hear hoofbeats, think horses, not zebras."
The tricky bit seems to be that it's very difficult to know where you are on a logistic curve until you're past the midpoint. Though with the limits of pre-training people started running into last year, the claim that we're still clearly on the left side is more tenuous.
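The "hard to tell where you are" point can be made concrete with a quick numerical sketch: before the midpoint, a logistic curve is nearly indistinguishable from a pure exponential with the same growth rate. The parameters below (growth rate, carrying capacity) are purely illustrative, not drawn from any real scaling data.

```python
import math

K = 0.5        # illustrative growth rate
L = 1000.0     # illustrative carrying capacity (ceiling)
T0 = math.log(L) / K   # midpoint chosen so both curves start at ~1

def exponential(t):
    """Unbounded exponential growth: exp(K * t)."""
    return math.exp(K * t)

def logistic(t):
    """Logistic growth with ceiling L and midpoint T0."""
    return L / (1.0 + math.exp(-K * (t - T0)))

# Well before the midpoint, the two are within a fraction of a percent:
ratio_early = logistic(0.0) / exponential(0.0)      # ~0.999

# At the midpoint, the logistic has already fallen to half the exponential:
ratio_mid = logistic(T0) / exponential(T0)          # 0.5
```

The early-phase ratio stays near 1 because for t well below T0 the logistic reduces to approximately exp(K·t); only near the midpoint does the ceiling start to bite, which is exactly why the left half of a sigmoid "looks exponential" from the inside.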
Yep. I may be wrong but I seem to recall that there was a brief period of time where a lot of folks in the space did genuinely think improvements would continue to follow the exponential curve even if individual jumps between new models were a little smaller.
Or at least were willing to hype it that way. I'm prepared to be corrected if my memory is faulty there.
There was certainly a 'vibe' that we might have activated the fast takeoff scenario.
For what it's worth, this is still the vibe, indeed more than ever, and I don't understand what change you're implying you've noticed. After o3, the consensus among top lab researchers seems to be "welp, we're having superintelligence in under 5 years".
I guess I'd call it a bifurcation.
I read the material that suggests all the pieces are in place to achieve superintelligence.
But I'm also reading reports that the most recent training runs are seeing diminishing returns. So making the models BIGGER isn't giving the same results.
Which certainly explains why OpenAI hasn't pushed GPT-5 out the door, if it can't demonstrate as significant an improvement as the jump from 3 to 4 was.
So while improvements and tweaks to existing models are giving us gains in the meantime, it isn't clear to me where the quantum leap that will enable true AGI/superintelligence is hiding. Which is more a me issue; I'm certainly not an insider. I'm just seeing two sides: those who think moar compute is good enough, and those who think it's going to take some tricky engineering.
And Altman sure isn't telling us what he's seeing. So my question is whether he's playing his cards close to the vest to avoid popping the hype bubble, or because he really thinks he's going to blow us away with the next product. Possibly blow us away in the most literal sense of the phrase.