Culture War Roundup for the week of September 23, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Paul Graham is the most honest billionaire (low bar) in Silicon Valley. Paul groomed Sam, gave him a career, and eventually fired him. Paul is the most articulate man I know. Read what Paul has to say about Sam, and you'll see a carefully worded pattern. Paul admires Sam, but Sam scares him.

Before I write a few lines shitting on Sam, I must acknowledge that he is scary good. Dude is a beast. The men at the top of Silicon Valley are sharp and ruthless. You don't earn their respect, let alone their fear, if you aren't scary good. Reminds me of Kissinger in his ability to navigate himself into power. I've heard similar things about David Sacks. Like Kissinger, many in YC will talk fondly about their interactions with him. Charming, direct, patient, and a networking workhorse. He could connect you to an investor, a contact, or a customer faster than anyone in the valley.

But Sam's excellence appears untethered to any one domain. Lots of young billionaires have a clear "vision -> insight -> skill acquisition -> solve hard problems -> make ton of money" journey. But unlike other young billionaires, Sam didn't have a baby of his own. He has climbed his way to it, one strategic decision at a time. And given the age by which he achieved it, it's fair to call him the best ladder climber of his generation.
Sam's first startup was a failure. He inherited YC, like Sundar inherited Google, and Sam eventually got fired. He built OpenAI, but the core product was a thin layer on top of an LLM. Sam played no part in building the LLM. I had acquaintances joining DeepMind/OpenAI/FAIR from 2017-2020; no one cared about Sam. Greg and Ilya were the main pull. Sam's ability to fundraise is second to none, but GPT-3 would have happened with or without him.

I personally struggle to trust people I consider untethered: MBA types, lawyers turned CEOs, politicians. Top 0.1 percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating Power. Power for power's sake. Sam is a perfect archetype.

Moreover, Sam being a gay childless tech-bro means he isn't naturally incentivized to see the world improve. None of those things are bad on their own. But they don't play well with top 0.1 percentile autists. Straight men soften up over time, learning empathy from their wives through osmosis. Gay communities don't get that. Then you have Silicon Valley tech culture, which is famously insular and lacks a certain worldliness (even when it is racially diverse). I'll take Sam being married to a 'gay white software engineer' as evidence in favor of my hypothesis. Lastly, he is childless. This means no inherent incentive to make the world a better place. IMO, top 0.1 percentile autists will devolve into megalomania without a grounding soft touch to keep them sane. Sam is no exception, and he is the least grounded of them all. Say what you want about Mark Zuckerberg, but a wife and kids have definitely played a role in humanizing him. Not sure I can say the same for Sam.

I personally struggle to trust people I consider untethered: MBA types, lawyers turned CEOs, politicians. Top 0.1 percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating Power. Power for power's sake. Sam is a perfect archetype.

You know, I feel almost exactly the same way. I just have a seemingly inborn 'disgust' reaction to those persons who have fought their way to the top of some social hierarchy while NOT having some grounded, external reason for doing so! Childless, godless, rootless, uncanny-valley avatars of pure egoism. "Struggle to trust" makes it sound like a bad thing, though. I think it's probably, on some level, a survival instinct, because trusting these types will get you used up and discarded as part of their machinations, and not trusting them is the correct default position. Don't fight it!

I bought a house in a neighborhood without an HOA because I don't want to have to fight off the little petty tyrants/sociopaths who will inevitably devote absurd amounts of their time and resources to occupying a seat of power that lets them harangue people over having grass 1/2 inch too tall or the wrong color trim on their house.

That's just an example of how much I want to avoid these types.

Only recently have I noticed that either my ability to spot these people is keen enough that I can consistently clock them inside of one <30 minute interaction, or I'm somehow surrounded by them and have deluded myself into thinking I can detect them.

One of the 'tells' I think I pick up on is that these types of people don't "have fun." I don't mean they don't have hobbies or do things that are 'fun.' I mean they don't have fun. The hobbies are merely there to expand and enable their social group, they don't slavishly follow any sports teams, they don't watch any schlocky T.V. series, and they probably also don't do recreational drugs (not counting, e.g., Adderall or other 'performance enhancers'), although they can probably hold a conversation on such topics if the situation required it.

(Side note: this is why I was vaguely suspicious of SBF back when he was getting puff pieces written prior to the FTX crash. A dude who has that much money and yet lives an ascetic lifestyle? Well, he's gotta be motivated by something!)

In social settings they're always present, schmoozing, facilitating, and bolstering their status... but you notice they never suggest activities for the group to engage in or expend effort bolstering other group members' status.

Because, I assume, they are there solely to leverage the social network to get something else that they want. And if it's not 'fun,' if it's not 'money,' and it isn't even 'sex' or 'admiration and praise'... then yeah, power for its own sake is probably their objective.

SO. What does Sam Altman do for fun?

I don't know the guy, but I did notice that he achieved his position at OpenAI not because of any particular expertise in the field or his clear devotion to advancing AI tech itself... but mostly by maneuvering his funds around so that he could hop into the CEO spot without much resistance. Yes he was a founder, but why would he take a specific interest in THAT company of all of them, to turn it into his own little fiefdom?

I think he correctly spotted the position at OpenAI as the best bet for being at the center of a rising power base as the AI race kicked off. Had things developed differently he might have hopped to one of the various other companies he has investments in instead.

Finagling his way back into the position of power after the Nonprofit board tried to pull the plug was a sign of something.

I admit, then, that I'm confused about why he would push to convert to a for-profit structure and to collect $10 billion if he's not inherently motivated by money.

My theory of him might be wrong or under-informed... or he just plans to use that money to leverage his next moves. That would fit with the accusation that OpenAI is running out of impressive tricks and LLMs are going to fail to live up to the hype, so he needs to prepare to skedaddle. It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed: if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.

Anyhow, to bring this to a head: yeah. Him not having children, him being utterly rootless, him having no obvious investment in humanity's continued survival (unlike Elon) - I don't think he has much skin in the game that would allow 'us' to hold him accountable if he did something truly disastrous or utterly anti-civilizational. Who is in any position to rein him in? What consequences dangle over his head if he misbehaves? How much power SHOULD we trust him with when his apparent impulses are to remove impediments to his authority? The corporate structure of OpenAI was supposed to be the check... and that is going away. One would think it should be replaced with something that has a decent chance at ensuring good behavior.

It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed: if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.

Nobody with a clue thinks that is imminent. All that exists is trained on data, and there's not enough high quality data. Maybe synthesizing it will work, maybe not.

Even the most optimistic people in the know say stuff like "maybe we'll be able to replace senior engineers and good-but-not-great scientists in 5 years' time." 'Godhead' and superintelligence are just conjecture at this point, though of course an aligned set of cooperating AIs with ~130 IQ individually could give a good impression of superintelligence. Or be wholly dysfunctional, given the internal dynamics.

I dunno, I've read the case for hitting AGI on a short timeline just based on foreseeable advances and I find it... credible.

And if we go back 10 years, most people would NOT have expected machine learning to have made as many swift jumps as it has. It's hard to overstate how 'surprising' it was that we got LLMs that work as well as they do.

And so I'm not ruling out future 'surprises.'

That said, Sam Altman would be one of the people most in the know, and if he himself isn't acting like we're about to hit the singularity, well, I notice I am confused.

Human-level AGI that can perform any task that humans can will resolve almost any issues posed by demographic decline in terms of economic productivity and maintaining a globalized, civilized world.

Aschenbrenner is a smart charlatan, he's probably going to do very well in the politics of AI.

My opinion is that the way he has everyone fooled and the way he has zeroed in on the superpower competition aspect makes it clear what he is after. Power. Has he gotten US citizenship yet? He'll need that.

There's going to be an enormous growth in computing power, plus possible hardware improvements (e.g. the Beff Jezos guy has some miniaturised parallel analog computer that's supposedly going to be great for AI stuff...). But IIRC, the models can't really improve easily because there isn't enough good data to pretrain them on, so now everyone is trying to figure out how to automatically generate good synthetic data and use that to train better models, and how to combine different modalities (text/images etc.). All stuff that's hardly comprehensible to outsiders, so people like Leopold can go around and say stuff with confidence.

Likely, yes, but how computationally and energy expensive it's going to be matters a whole lot. Like e.g. aren't they basically near hitting physical limits pretty soon? That'd cap lowering power costs, right?

And scaling up chip production to 1000x isn't as easy as it sounds either. Especially if the Chinese get scared and start engaging in sabotage.

It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.

There's an existence proof in the sense that human intelligence exists and if they can figure out how to combine hardware improvements, algorithm improvements, and possibly better data to get to human level, even if the power demands are absurd, that's a real turning point.

A lot of smart people and smart orgs are throwing mountains of money at the tech. In what ways are they wrong?

It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.

To sum it up, to train superhuman performance you need superhumanly good data. Now, I'd be all okay with the patient, obvious approach there - eugenics, creating better future generations.

I'll quote Twitter again:

The Synthetic Data Solves ASI pill is a bit awkward:

  • Our AI plateaus at ≈the intelligence level of expert humans because it's trained on human data
  • to train a superhuman AI, we need a superhuman data-generating process based on real-world dynamics, not Go board states - ...fuck

In what ways are they wrong?

I'd not say they're wrong. Even present day polished applications with a lot of new compute could do a lot of stuff. They're betting they'll be able to make use of that compute even if AGI is late.

And remember, the money is essentially free for them. Those power stations will be profitable even if the datacenters aren't, and the datacenters will generate money even if taking over the world isn't a ready option. And there are no punishing interest rates for the big boys. That's for chumps with credit cards.

To sum it up, to train superhuman performance you need superhumanly good data.

It isn't clear we need superhumanly good data. Humans can make novel discoveries if they have a sufficiently good understanding of existing data and sufficiently good mental horsepower to use that data, i.e. extrapolate from their set of 'training data' and accurately test those extrapolations to discover new, useful data.

It seems like we just need to get an AI to approximately von Neumann level, and if it starts making good contributions to various fields at that point, we can have it solve problems that hold up AI development. We're seeing hints of this now with AlphaFold 3 and AlphaProteo.

Right now, the one thing that appears to be a hard hurdle for AIs is navigating real-world environments, where there is far more chaos and there are variables that don't interact with each other linearly.

It can be difficult to see a true innovation coming when every single company starts slapping "AI Powered!" as a feature on their products, but I think the case that AI will make surprising leaps in the next few years is stronger than the case that it will inexplicably stagnate.

It isn't clear we need superhumanly good data.

It is.

Humans can make novel discoveries if they have a sufficiently good understanding of existing data and sufficiently good mental horsepower to use that data

LLMs and similar systems aren't human, not in the slightest.

It seems like we just need to get an AI to approximately von Neumann level

They're nowhere near that. People are happy they can count letters correctly.


Thanks for this effortpost overall. It is very insightful.

You don't earn their respect, let alone their fear, if you aren't scary good. Reminds me of Kissinger in his ability to navigate himself into power. I've heard similar things about David Sacks. Like Kissinger, many in YC will talk fondly about their interactions with him.

I understand what you mean. And this is psychopathy.

Without a tethering to some sort of concrete moral framework (could be religious or not, just consistent over time), these types of people must become "power for power's sake" elite performers. That's bad. That's really, really bad.

No laws are being broken, but how does society call out this kind of behavior when it's channeled in this fashion and not in the "normal" psychopathic way of robbery/murder/rape etc?

I'm not sure we can without any coherent framework around to distinguish between success and virtue.

From where I'm sitting, I think "Oh that's a satanist" and everything makes sense, and I can tell other people that and they get it too.

Saying that he's possessed is a bit more legible to the general public but still sounds anachronistic to most.