
Culture War Roundup for the week of September 23, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Altman gives me similar vibes to SBF, with a little less bad-hygiene-autism. He probably smells nice, but he's still weird as fuck. We know he was fired and rehired at OpenAI. A bunch (all?) of the cofounders have jumped ship recently. I don't necessarily see Enron/FTX/Theranos levels of plain lying, but how much of this is a venture funding house of cards that ends with a 99% loss and a partial IP sale to Google or something?

This is just spreading gossip (so mods lmk if I'm out of line here) but I know someone who knows Sam. This person tells me that Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence. Just for what it's worth.

EDIT: I'd also like to add that I consider this person highly credible but for obvious reasons can't say more.

Paul Graham is the most honest billionaire (low bar) in silicon valley. Paul groomed Sam, gave him a career and eventually fired him. Paul is the most articulate man I know. Read what Paul has to say about Sam, and you'll see a carefully worded pattern. Paul admires Sam, but Sam scares him.

Before I write a few lines shitting on Sam, I must acknowledge that he is scary good. Dude is a beast. The men at the top of Silicon Valley are sharp and ruthless. You don't earn their respect, let alone their fear, if you aren't scary good. He reminds me of Kissinger in his ability to navigate himself into power. I've heard similar things about David Sacks. Like Kissinger, many in YC will talk fondly about their interactions with him. Charming, direct, patient, and a networking workhorse. He could connect you with an investor, a contact, or a customer faster than anyone in the valley.

But Sam's excellence appears untethered to any one domain. Lots of young billionaires have a clear "vision / experience hypothesis -> skill acquisition -> solve hard problems -> make a ton of money" journey. But unlike other young billionaires, Sam didn't have a baby of his own. He has climbed his way to it, one strategic decision at a time. And given the age by which he achieved it, it's fair to call him the best ladder climber of his generation.
Sam's first startup was a failure. He inherited YC, like Sundar inherited Google, and Sam eventually got fired. He built OpenAI, but the core product was a thin layer on top of an LLM. Sam played no part in building the LLM. I had acquaintances joining DeepMind/OpenAI/FAIR from 2017-2020; no one cared about Sam. Greg and Ilya were the main pull. Sam's ability to fundraise is second to none, but GPT-3 would have happened with or without him.

I personally struggle to trust people I consider untethered: MBA types, lawyers turned CEOs, politicians. Top 0.1 percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating power. Power for power's sake. Sam is a perfect archetype.

Moreover, Sam being a gay, childless tech-bro means he isn't naturally incentivized to see the world improve. None of those things are bad on their own. But they don't play well with top 0.1 percentile autists. Straight men soften up over time, learning empathy from their wives through osmosis. Gay communities don't get that. Then you have Silicon Valley tech culture, which is famously insular and lacks a certain worldliness (even when it is racially diverse). I'll take Sam being married to a 'gay white software engineer' as evidence in favor of my hypothesis. Lastly, he is childless. This means no inherent incentive to make the world a better place. IMO, top 0.1 percentile autists will devolve into megalomania without a grounding soft touch to keep them sane. Sam is no exception, and he is the least grounded of them all. Say what you want about Mark Zuckerberg, but a wife and kids have definitely played a role in humanizing him. Not sure I can say the same for Sam.

I personally struggle to trust people I consider untethered: MBA types, lawyers turned CEOs, politicians. Top 0.1 percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating power. Power for power's sake. Sam is a perfect archetype.

You know, I feel almost exactly the same way. I just have a seemingly inborn 'disgust' reaction to those persons who have fought up to the top of some social hierarchy while NOT having some grounded, external reason for doing so! Childless, godless, rootless, uncanny-valley avatars of pure egoism. "Struggle to trust" makes it sound like a bad thing, though. I think it's probably, on some level, a survival instinct, because trusting these types will get you used up and discarded as part of their machinations, and not trusting them is the correct default position. Don't fight it!

I bought a house in a neighborhood without an HOA because I don't want to have to fight off the little petty tyrants/sociopaths who will inevitably devote absurd amounts of their time and resources to occupying a seat of power that lets them harangue people over having grass 1/2 inch too tall or the wrong color trim on their house.

That's just an example of how much I want to avoid these types.

Only recently have I noticed that either my ability to spot these people is keen enough that I can consistently clock them inside of one <30-minute interaction, or I'm somehow surrounded by them because I've deluded myself into thinking I can detect them.

One of the 'tells' I think I pick up on is that these types of people don't "have fun." I don't mean they don't have hobbies or do things that are 'fun.' I mean they don't have fun. The hobbies are merely there to expand and enable their social group; they don't slavishly follow any sports teams, they don't watch any schlocky T.V. series, and they probably also don't do recreational drugs (not counting, e.g., Adderall or other 'performance enhancers'), although they can probably hold a conversation on such topics if the situation required it.

(Side note: this is why I was vaguely suspicious of SBF back when he was getting puff pieces written prior to the FTX crash. A dude who has that much money and yet lives an ascetic lifestyle? Well, he's gotta be motivated by something!)

In social settings they're always present, schmoozing, facilitating, and bolstering their status... but you notice they never suggest activities for the group to engage in or expend effort bolstering other group members' status.

Because, I assume, they are there solely to leverage the social network to get something else that they want. And if it's not 'fun,' if it's not 'money,' and it isn't even 'sex' or 'admiration and praise'... then yeah, power for its own sake is probably their objective.

SO. What does Sam Altman do for fun?

I don't know the guy, but I did notice that he achieved his position at OpenAI not because of any particular expertise in the field or his clear devotion to advancing AI tech itself... but mostly by maneuvering his funds around so that he could hop into the CEO spot without much resistance. Yes, he was a founder, but why would he take a specific interest in THAT company of all of them, to turn it into his own little fiefdom?

I think he correctly spotted the position at OpenAI as the best bet for being at the center of a rising power base as the AI race kicked off. Had things developed differently he might have hopped to one of the various other companies he has investments in instead.

Finagling his way back into the position of power after the Nonprofit board tried to pull the plug was a sign of something.

I admit, then, that I'm confused why he would push to convert to a for-profit structure and to collect $10 billion if he's not inherently motivated by money.

My theory of him might be wrong or under-informed... or he just plans to use that money to leverage his next moves. That would fit with the accusation that OpenAI is running out of impressive tricks and LLMs are going to fail to live up to the hype, so he needs to prepare to skedaddle. It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed; if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.

Anyhow, to bring this to a head: yeah. Him not having children, him being utterly rootless, him having no obvious investment in humanity's continued survival (unlike Elon): I don't think he has much skin in the game that would allow 'us' to hold him accountable if he did something truly disastrous or utterly anti-civilizational. Who is in any position to rein him in? What consequences dangle over his head if he misbehaves? How much power SHOULD we trust him with when his apparent impulses are to remove impediments to his authority? The corporate structure of OpenAI was supposed to be the check... and that is going away. One would think it should be replaced with something that has a decent chance at ensuring good behavior.

It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed; if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.

Nobody with a clue thinks that is imminent. Everything that exists is trained on data, and there's not enough high-quality data. Maybe synthesizing it will work, maybe not.

Even the most optimistic people in the know say things like "maybe we'll be able to replace senior engineers and good-but-not-great scientists in 5 years' time." 'Godhead' and superintelligence are just conjecture at this point, though of course an aligned set of cooperating AIs with ~130 IQ individually could give a good impression of superintelligence. Or be wholly dysfunctional, given the internal dynamics.

I dunno, I've read the case for hitting AGI on a short timeline just based on foreseeable advances and I find it... credible.

And if we go back 10 years, most people would NOT have expected machine learning to have made as many swift jumps as it has. It's hard to overstate how 'surprising' it was that we got LLMs that work as well as they do.

And so I'm not ruling out future 'surprises.'

That said, Sam Altman would be one of the people most in the know, and if he himself isn't acting like we're about to hit the singularity, well, I notice I am confused.

Human-level AGI that can perform any task that humans can will resolve almost any issues posed by demographic decline in terms of economic productivity and maintaining a globalized, civilized world.

Aschenbrenner is a smart charlatan, he's probably going to do very well in the politics of AI.

My opinion is that the way he has everyone fooled, and the way he has zeroed in on the superpower competition aspect, makes it clear what he is after: power. Has he gotten US citizenship yet? He'll need that.

There's going to be enormous growth in computing power, plus possible hardware improvements (e.g. the Beff Jezos guy has some miniaturised parallel analog computer that's supposedly going to be great for AI stuff...). But IIRC, the models can't really improve easily because there isn't better data to pretrain them on, so now everyone is trying to figure out how to automatically generate good synthetic data and use it to train better models, and how to combine different modalities (text/images etc.). All stuff that's hardly comprehensible to outsiders, so people like Leopold can go around and say stuff with confidence.

Likely, yes, but how computationally and energy expensive it's going to be matters a whole lot. Like e.g. aren't they basically near hitting physical limits pretty soon? That'd cap lowering power costs, right?

And scaling up chip production 1000x isn't as easy as it sounds either. Especially if the Chinese get scared and start engaging in sabotage.

It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.

There's an existence proof in the sense that human intelligence exists and if they can figure out how to combine hardware improvements, algorithm improvements, and possibly better data to get to human level, even if the power demands are absurd, that's a real turning point.

A lot of smart people and smart orgs are throwing mountains of money at the tech. In what ways are they wrong?

It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.

To sum it up: to train superhuman performance, you need superhumanly good data. Now, I'd be all for the patient, obvious approach there - eugenics, creating better future generations.

I'll quote Twitter again:

The Synthetic Data Solves ASI pill is a bit awkward:

  • Our AI plateaus at ≈ the intelligence level of expert humans, because it's trained on human data
  • to train a superhuman AI, we need a superhuman data-generating process based on real-world dynamics, not Go board states - ...fuck

In what ways are they wrong?

I'd not say they're wrong. Even present day polished applications with a lot of new compute could do a lot of stuff. They're betting they'll be able to make use of that compute even if AGI is late.

And remember, the money is essentially free for them. Those power stations will be profitable even if the datacenters aren't, and the datacenters will generate money even if taking over the world isn't a ready option. And there are no punishing interest rates for the big boys. That's for chumps with credit cards.

To sum it up, to train superhuman performance you need superhumanly good data.

It isn't clear we need superhumanly good data. Humans can make novel discoveries if they have a sufficiently good understanding of existing data and sufficiently good mental horsepower to use that data, i.e. extrapolate from their set of 'training data' and accurately test those extrapolations to discover new, useful data.

It seems like we just need to get an AI to approximately Von Neumann level and if it starts making good contributions to various fields at that point we can have it solve problems that hold up AI development. We're seeing hints of this now with Alphafold 3 and AlphaProteo.

Right now, the one thing that appears to be a hard hurdle for AIs is navigating real-world environments, where there is far more chaos and there are more variables that don't interact with each other linearly.

It can be difficult to see a true new innovation coming when every single company starts slapping "AI Powered!" on its products as a feature, but I think the case that AI will make surprising leaps in the next few years is stronger than the case that it will inexplicably stagnate.


Thanks for this effortpost overall. It is very insightful.

You don't earn their respect let alone fear, if you aren't scary good. Reminds me of Kissinger in his ability to navigate himself into power. I've heard similar things about David Sacks. Like Kissinger, many in YC will talk fondly about their interactions with him.

I understand what you mean. And this is psychopathy.

Without a tethering to some sort of concrete moral framework (could be religious or not, just consistent over time), these types of people must become "power for power's sake" elite performers. That's bad. That's really, really bad.

No laws are being broken, but how does society call out this kind of behavior when it's channeled in this fashion and not in the "normal" psychopathic way of robbery/murder/rape etc?

I'm not sure we can without any coherent framework around to distinguish between success and virtue.

From where I'm sitting, I think "Oh that's a satanist" and everything makes sense, and I can tell other people that and they get it too.

Saying that he's possessed is a bit more legible to the general public but still sounds anachronistic to most.

Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence.

...Fine, I'll bite. How much of this impression of Sam is uncharitable doomer dressing around something more mundane like "does not believe AI = extinction and thus has no reason to care", or even just same old "disregard ethics, acquire profit"?

I have no love for Altman (something I have to state awfully often as of late), but the chosen framing strikes me as highly overdramatic, besides giving him more competence/credit than he deserves. As a sanity check, how -pilled would you say that friend of yours is in general on the AI question? How many years before inevitable extinction are we talking here?

You are making an "argument from incredulity", i.e. that the beliefs of Sam Altman are so crazy that they can't be real. I don't think this is the case. Many powerful people in Silicon Valley have beliefs that are far outside the Overton Window.

Say what you will about Elon Musk, he is at least pro-human. This is not at all the case for many of his peers. For example, Larry Page and Elon Musk broke up as friends over Musk's "speciesist" belief that humanity should remain dominant over god-like AIs.

The idea that Sam Altman would literally want to destroy humanity to birth a superior AI life form might sound ridiculous to you. But you don't know these people.

There's a good chance (not 100%, but not 0% either) that we're going to build superintelligence while the "adults in the room" argue about GDP numbers or whatever. If this happens it could make some people (perhaps a single person) more powerful than anyone in history. Do you want Sam Altman to be that person? Because I sure as hell don't.

You are making an "argument from incredulity", i.e. that the beliefs of Sam Altman are so crazy that they can't be real. I don't think this is the case.

The idea that Sam Altman would literally want to destroy humanity to birth a superior AI life form might sound ridiculous to you. But you don't know these people.

Besides this being a gossip thread, your argument likewise seems to boil down to "but the beliefs might be real, you don't know". I don't know what to answer other than reiterate that they also might not, and you don't know either. No point in back-and-forth I suppose.

There's a good chance (not 100%, but not 0% either) that we're going to build superintelligence while the "adults in the room" argue about GDP numbers or whatever. If this happens it could make some people (perhaps a single person) more powerful than anyone in history. Do you want Sam Altman to be that person? Because I sure as hell don't.

At least the real load-bearing assumption came out. I've given up on reassuring doomers or harping on the wisdom of Pascal's mugging, so I'll simply grunt my isolated agreement that Altman is not the guy I'd like to see in charge of... anything, really. If it's any consolation, I doubt OpenAI is going to get that far ahead in the foreseeable future. I already mentioned my doubts about the latest o1, and coupled with the vacuous strawberry hype and Sam's antics apparently scaring a lot of actually competent people out of the company, I don't believe Sam is gonna be our shiny metal AI overlord, even if I grant the eventual existence of one.

Since this is a gossip thread...

I have a couple of friends who genuinely want the extinction of the human race. Not in a mass murder sense as they conceptualize it, but in a 'create a successor species, give a good life to the remaining humans, maybe offer them the chance for brain uploads' sense. Details and red lines vary between them, but they'd broadly agree that this is a fair characterization of their goals and desires.

Where do they work? OAI, Anthropic, GDM.

I have a fair amount of sympathy for their viewpoints, but it's still genuinely shocking. It's as if you suddenly found out that every government official was secretly a Hare Krishna or part of the People's Temple, and then when you point it out, everyone thinks the accusation is too absurd to be real.

In their defense: why do we care so much about the survival of homo sapiens qua sapiens? We're different from how we were 50,000 years ago, and we'll be more different still in 5,000, and maybe even 500. So what? So long as we have continuity of culture and memory, does it matter if we engineer ourselves into immortal cyborgs or whatever is coming? What's so special about the biped mammal vessel for a mind?

What's so special about the biped mammal vessel for a mind?

The biped mammal vessel. An immortal cyborg is a qualitatively different existence and so it will have a correspondingly different mind.

A 6'7 NBA player has a qualitatively different experience from a 5'1 ballerina, but they're both humans with minds.

if we engineer ourselves into immortal cyborgs

Hubris of the highest order.

We don't let humans so much as stitch up some skin unless they've gone through a decade of training. We don't let new engineers commit new code unless they've spent time understanding the base architecture. What makes you think we know enough about what it means to be homo sapiens that we can go replacing entire parts wholesale?

Just look at the last few decades. We put a whole generation of women on pills that accidentally change the characteristics of which men they're attracted to. The last generation of painkillers caused the biggest drug epidemic in the country. The primary stimulant of the century (cigarettes) was causing early death en masse. We don't know why there is a detectable difference in immunity between C-section and natural deliveries, and this is a difference of a few seconds. That's how little we know about these flesh-suits of ours. We have no clue what we're doing.

What's so special about the biped mammal vessel for a mind?

Don't take this the wrong way. What I'm about to say is definitely stereotyping a certain type of person.

But I only ever see internet neuro-divergents ask these sorts of questions. To normies, your question sounds like the equivalent of "What's so great about fries?". You'd only ever ask the question if you've never enjoyed a good pack of fries or an equivalent food that makes you feel that special thing. It reveals the absence of a fundamental human experience. To a degree, it reveals that you're less human, or at least 'dis-abled'.

I'm entitled. I don't think I need to explain what makes some things special. The first day of the monsoon, petting a puppy, making faces at a toddler, a warm hug, the top of a mountain, soul food, soul music, the first time you hold your child, the last time you hold your parent, the first time a sibling defeats you at a game.

In a way, these unspoken common traits are what make all of us human. I care about the survival of these consistent 300k-year-old traits, because I cherish these things. And I believe that a non-human would not be able to. Because we aren't taught to cherish these things. We just do. I don't expect everyone to have experienced all of these in the same way. Civilizational differences mean that specifics differ. But the patterns are undeniable.

Why do I care about the authentic experiences of my imperfect body and imperfect mind? Because that is what it means to be human.

P.S.: And I am every bit an atheist. Do I have to believe in divinity to believe in beauty?

Not gonna make an argument here because I don't think there would be a point, but I'll mention that you're doing a great job demonstrating my concerns about atheists.

Well, leaving it at that would be a cheap shot, so,

I don't think I'm my mind any more than I'm my body. Which is to say, yes to both, but there's more going on than that. Also, human beings are uniquely divine, and God is a man in heaven. Human existence and experience are uniquely important, and uniquely destined.

Believe it or not I'm open to the idea that at some point 'we' make the transition to non-organic substrates. I just don't know enough about what actually matters to rule that out. But when people are eager to make the jump to artificial bodies and minds (not that you actually advocated for this), they strike me as dangerously naive in terms of their assumptions.

How sure are you that what we are can be digitized? What, specifically, is valuable to you, and worthy of cultivation? In symbolic terms, which gods do you actually serve?

So you're arguing for qualia and souls, yes? I believe I am my mind, that the mind is computation, and that its computational substrate is irrelevant. I'm honestly baffled by people who hold otherwise; I want to be charitable, but I'm having a hard time seeing the opposition as anything but a product of personal incredulity at the idea that our conscious experience is a worldly, temporal information-processing phenomenon.

Our minds are worldly, temporal information processing phenomena, yes. At least mostly, as we experience them. No disagreement there. The question is whether, if and when our minds die, there is anything of us left. I think so.

We have no idea what consciousness is, how it happens, or even why it should ever arise in the first place. Until that's sorted there's a ton of room for other perspectives. Soul of the gaps, sure. That accusation wouldn't trouble me.

Perhaps I could say that I think our minds are so loud in our conscious experience that we fall into the mistaken assumption that everything occurring in our consciousness is our minds. The only way to find out is to die. In the meantime I'm not in a rush to create perfect, immortal copies of my mind which have no internal conscious experience, let the last bio-humans die off, and call it a day.

But I want to repeat the question:

How sure are you that what we are can be digitized? What, specifically, is valuable to you, and worthy of cultivation? In symbolic terms, which gods do you actually serve?

Your position is fundamentally religious, isn't it? We feel that existing, thinking, being are so profound that they must continue after death. But what if they aren't? I've never seen evidence that they are. If you'd like to adopt a religiously flavored epistemology, that's fine, but having done so, you've departed from the realm of logical argumentation.


I'm sure the Neanderthals' last thoughts included "so what, those skinny folks with the funny heads will survive even after they've wiped us out. We shall go gently into that good night."

We're homo sapiens. If we take AI true believers seriously, this isn't hundreds of years in someone else's lifetime; it could be less than ten years before an amoral sociopath unleashes something beyond our control. I plan on being alive in ten years.

I do not happen to think AI (of the LLM variety) is likely to be an extinction-level threat (that's a specific phrasing). I do think Sam Altman is a skilled amoral sociopath who shouldn't be trusted with so much as kiddy scissors, and it should haunt Paul Graham that he didn't smother Altman's career when he had a chance.

We're also part Neanderthal. (Most people reading this message in 2024 are, anyway.) Their legacy got folded into ours. Why does their story have a sad ending?

Agreed on jitters about Altman. I'm just pointing out that the AI successor species people kind of have a point.

The companies being a cult is a big part of their strategy.

Information secrecy is top-notch, everyone willingly works insane hours, and you can get engineers to do borderline-illegal things (around data privacy and ownership) without being questioned.

I know a few people at Facebook AI Research, MSR, and (old) Google Brain. They seem normal. But folks at OpenAI, Anthropic, and DeepMind are well known to be... peculiar (and admittedly smarter than I am).

There are peculiar people at every part of every company. IME people at DeepMind are not more peculiar than those working at other parts of Goog, and I certainly wouldn't describe them as cultists. Can't speak for the other labs.

On further thought, I have met more cultists at some of these companies, but a majority (50%+) have been normal. Also, can't exactly scale those anecdotes up.

With that reflection, I'll take back my earlier comment.

I imagine many people of the more materialist bent are both more likely to be excited by AI and more likely to not believe uploading is extinction (in a way that matters).

Totally in line though with stories about other Silicon Valley leaders.

https://www.astralcodexten.com/p/should-the-future-be-human

Business Insider: Larry Page Once Called Elon Musk A “Specieist”:

Tesla CEO Elon Musk and Google cofounder Larry Page disagree so severely about the dangers of AI it apparently ended their friendship.

At Musk's 44th birthday celebration in 2015, Page accused Musk of being a "specieist" who preferred humans over future digital life forms [...] Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."

Imagine the mindset where this is not pot-fueled friendly banter, but actually a more and more heated argument. Maybe it was blown out of proportion? When Google acquired DeepMind, Musk approached DeepMind's founder Demis Hassabis to convince him not to take the offer. "The future of AI should not be controlled by Larry," Musk told Hassabis.

(I don’t quote this to praise Musk; him being humanity's champion frightens me a bit. I quote it for the misanthropic outlook the Effective Accelerationists have.)

Any insight from your friend on why Altman feels this way?

Does it require a special explanation? It’s not actually that uncommon of a view. Well, I suppose it’s uncommon among normies, but it’s not uncommon in online AI circles. A lot of AI hype is driven by a fundamental misanthropy and a desire to “flip the table” as it were, since these people find the current world unsatisfactory.

since these people find the current world unsatisfactory.

There's a lot of that going around.

But it's not really a CURRENT YEAR thing. It's more a strain of religiosity that is inherently anti-human and has been around forever.

The same type of people might have been monks in a different environment.

And that's fine, but also let's not give them any power please.

I mean, I'm one of them. I find the current world unsatisfactory, for a fairly broad definition of "current world". Lots of people do, on all sides of the political spectrum and from a wide variety of worldviews. Table-flipping is evidently growing more and more attractive to a larger and larger portion of the population. Policy Starvation is everywhere.

I get that you have in mind a narrower selection of misanthropic transhumanist techno-fetishists, but I would argue that the problem generalizes to a much wider set.

I have a higher-than-average strain of consistent misanthropy, but I also subscribe to a weird blend of Catholic moralism and Aristotelian/Platonic virtue ethics, courage being high among them.

I know that sounds pretentious (and it is!) but what this boils down to is I think the world is very fucked up, I am unsure if it can be fixed, but I think we ought to try and the ends do not justify the means because the means become the ends. The only way out of this is through it, and through it with hard work and - by the day - more and more pain and suffering.

What worries me about Altman types is they seem to be operating in both a deceitful and covert way. Covert in that their final objectives are cloaked and obfuscated, deceitful in that they are manipulating current systems to go to those objectives, instead of pointing out that the current systems are fucked up and we should change them or build alternatives.

To be more specific, Altman's lobbying is 100% designed to (a) get regulatory capture for OpenAI and (b) redirect hundreds of billions of dollars of public money to fund it. And, until today, this was all done while emitting a ton of peace-and-love-and-altruism vibes and "we're a non-profit research company, maaaaaan." It seems like comic-book levels of cold, calculating hyper-capitalist mixed with techno-anarchist mixed with millennium cult leader.

Did you read Tolkien as a kid? I’ve long taken inspiration from the books' message: “do your duty and that which is right even if it seems unlikely to win over evil.”

No. I never got into it.

What you've just described is the core message of the Bible.

I wish the literary character story of Christ in the New Testament was more appreciated. The entire point of The Agony in the Garden is that a literal God-Man, who is already assured of his salvation and the promise of paradise, not only doubts himself and his "role" in the story, but actively asks to avoid it;

Luke 22:42

Saying: Father, if thou wilt, remove this chalice from me: but yet not my will, but thine be done.

Although his faith returns, Christ experiences this agony/passion all the way through his earthly death.


To me, that is a much more visceral and compelling exposition of "do your duty even in the face of fear/danger/death" because it is coming from a character who already has assurance of the outcome. It's a deeply metaphysical complexity.

Anyways, yeah, I guess LotR is neat.


Yeah, while no one was looking we reached post scarcity and it turns out to not be so great after all.

We've become a society of Lotus-eaters, enabled by ubiquitous drugs and technology.

I'll assert that more technology will not solve this problem. It's absurd to watch people claim that the solution to a society in moral decay is even more wealth, as if all we need to solve our problems is just more material goods.

Yeah, it's fair for people to know that. Friend sees Altman as basically possessed. Gay, atheist, no kids, extremely little attachment to almost anyone, no skin in the human game. Loves machine intelligence and serves it as a deity.

Reminds me of a pivotal scene from the Rifters books.

"Checkers or Chess?"

Echopraxia has a better quote about the posthuman/machine vs human relationship:

“I’ll fight you,” Brüks said aloud. Of course you will. That’s what you’re for, that’s all you’re for. You gibber on about blind watchmakers and the wonder of evolution but you’re too damn stupid to see how much faster it would all happen if you just went away. You’re a Darwinian fossil in a Lamarckian age. Do you see how sick to death we are of dragging you behind us, kicking and screaming because you’re too stupid to tell the difference between success and suicide?

Yes, I see what you mean.

He was a dead end anyway. No children. No living relatives. No vested interest in the future of any life beyond his own, however short that might be. It didn't matter. For the first time in his life, Yves Scanlon was a powerful man. He had more power than anyone dreamed. A word from him could save the world. ... He kept his silence. And smiled.

Altman does have a husband (recently) but who knows what that means to him.

Gossip or not, this is frankly one of the spookiest things I've internet'ed for some time.

"No skin in the human game" is like an all-time H.P. Lovecraft banger and I'm pretty sure you just rattled it off the top of your head. Well done.

Yeah this conversation happened a couple of months ago and it's been... weird, continuing to follow Altman in the news and not sharing the sentiment with anyone. So I guess I was just waiting for a chance to do that. I see a lot of conversations about him and wonder, "Do any of these people know what he's like?" Speculation usually seems to run to his financial endgame, but I don't at all get the sense that he's in it for the money.