site banner

Culture War Roundup for the week of September 23, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

5
Jump in the discussion.

No email address required.

OpenAI To Become a For-Profit Company

You'll notice that the link is to a hackernews thread. I did that intentionally because I think some of the points raised there get to issues deeper than "hurr durr, Elon got burnt" or whatever.

Some points to consider:

  1. It is hard to not see this as a deliberate business-model hack. Start as a research oriented non-profit so you can more easily acquire data, perhaps investors / funders, and a more favorable public imagine. Sam Altman spent a bunch of time on Capitol Hill last year and seemed to move with greater ease because of the whole "benefit to humanity" angle. Then, once you have acquired a bunch of market share this way, flip the money switch on. Also, there are a bunch of tax incentives for non-profits that make it easier to run in the early startup phase.

  2. I think this can be seen as a milestone for VC hype. The trope for VC investors is that they see every investment as "changing the world," but it's mostly a weird status-signaling mechanism. In reality, they're care about the money, but also care about looking like they're being altruistic or, at least, oriented towards vague concepts of "change for the better." OpenAI was literally pitched as addressing an existential question for humanity. I guess they fixed AI alignment in the past week or something and now it's time, again, to flip the money switch. How much of VC is now totally divorced from real business fundamentals and is only about weird idea trading? Sure, it's always been like that to some extent, but I feel like the whole VC ecosystem is turning into a battle of posts on the LessWrong forums.

  3. How much of this is FTX-style nonsense, but without outright fraud. Altman gives me similar vibes as SBF with a little less bad-hygiene-autism. He probably smells nice, but is still weird as fuck. We know he was fired and rehired at OpenAI. A bunch (all?) of the cofounders have jumped shipped recently. I don't necessarily see Enron/FTX/Theranos levels of plain lying, but how much of this is a venture funding house of cards that ends with a 99% loss and a partial IP sale to Google or something.

It is hard to not see this as a deliberate business-model hack. Start as a research oriented non-profit so you can more easily acquire data, perhaps investors / funders, and a more favorable public image

I don't think this is it. Investors would greatly prefer to invest in a for-profit company, and they had to hack around the nonprofit structure to. I don't remember hearing about how OpenAI had an easier time getting access to data than other AI cos due to its nonprofit status. And while they've gotten some use out of the nonprofit status, I don't think it was large enough to matter, and may have been entirely counteracted by people criticizing them for acting like a for-profit while being a nonprofit. I think they weren't really expecting how much capital frontier AI development would require, and sort of genuinely believed in the premise of a nonprofit creating AGI because of how important AGI is.

It is hard to not see this as a deliberate business-model hack.

It's a constant issue with any kind of contractually based restriction on future behavior. I'm constantly dealing with people trying to put "irrevocable" clauses into their contracts, and it's really hard to pre-commit in such a way that consenting parties can't avoid the penalties.

Is that a bad thing? Past me was a different person. Why should I be beholden to him? I appreciate continuity of obligations to others, but why should I feel obligated to honor past commitments (e.g. not to make a profit) that I made to myself?

Why should I be beholden to him?

Because ability to commit is one of the fundemental building blocks of all relationships. It is one of the basic things that seperates intelligent life from mere biology and you can't have a society with out it.

Freedom of contract includes the freedom to abrogate prior contracts, but binding in the future is something people want to do all the time.

For example, you have an agreement between A and B to purchase B's business. A gets three months of due diligence before he needs to close. B is willing to give A two months extension at a price, but he wants to put it in the contract that there will be no additional extensions of the due diligence after that. B wants it to be binding on both parties that A can't get to the end of the extension period and say he needs more time. Regardless of price, B doesn't want it to go any further than the first extension, after five months A has to close or leave.

But saying that in the contract doesn't really mean anything. A can still come to B at the end of the five months and say, I'm not buying, give me an extension or I'm walking. And the contract in that case doesn't really mean anything if B wants to give in, by mutual agreement they can sign an addendum that gives an extension and specifically states that the clause against additional extensions is stricken from the agreement.

You have to resort to really complicated maneuvers to create a really binding agreement on both future parties, involving third parties. If you put some kind of penalty in, both parties can agree to forgive the penalty. Anything you ban can be abrogated or amended. Unless a party with enforceable rights says no, any contract can be changed with the consent of all parties.

Here though, I haven't studied it in great depth so I could be wrong, I think what we're really talking about is OpenAI making a public promise that it was a non-profit, but the actually binding contractual part of the promise only bound as long as the board kept it binding. OpenAI has been telling everyone it is a non-profit, and pointing to its non-profit status as binding proof that this promise was going to be kept. But the promise wasn't binding, not really, if the parties to whom it was made were willing to release it; which in this case is an insider circle-jerk.

why should I feel obligated to honor past commitments (e.g. not to make a profit) that I made to myself?

Well, for one thing, because people who know you see it this way are a lot less likely to transact with you.

Is that so? I mean, sure, that's how it's supposed to work. But Trump doesn't pay his contractors and they still work for him, over and over again. In industry, obviously Machiavellian behavior has no lasting ramifications. My experience tells me that reputation has less of an effect than might be believed.

Yes, it is so. Can't speak to your example because I don't know anything about it but fixating on a really unusual edge case is unwise here.

"When a man takes an oath... he's holding his own self in his own hands. Like water. And if he opens his fingers then, he needn't hope to find himself again."

Because you didn't make that promise to yourself. If it was like a private oath to quit drinking then sure, it's between past you and future you. But more often it's a restriction that you publicized and used to build goodwill and generally improve your position. OpenAI got where it is partly because people were permissive of a "non-profit" doing shady stuff "for the greater good".

Revoking a commitment like that is a really obvious act of duplicity and the correct move is for the broad public to punish them harshly for it. Keeping one's word is basically the highest virtue in my eyes and I hate that we've reached this point where duplicity is normalized.

Honestly, these histrionics about Altman being some gay supervillain make me like him more, not less. Being crazy and ambitious is a prerequisite to doing great things. And the notion that because he's gay, he doesn't care about anything is ridiculous. If only he could be as pro human as Joseph Stalin (two children), Robert Mugabe (four children) or Genghis Khan (innumerable children)?

Honestly, these histrionics about Altman being some gay supervillain make me like him more, not less...And the notion that because he's gay, he doesn't care about anything is ridiculous.

And the lack thereof for Peter Thiel should tell you all you need to know, particularly given the fact that he's specifically working on and enabling AI in the contexts of surveillance and defense.

Oh, I like Peter Thiel as well. I don't really know what you mean by "all I need to know".

I mean, look, clearly there are gay men who give themselves over to a life of mindless hedonism. There are straight men who do that too! Most of them do not end up running billion dollar companies at the cutting edge of technology. To an extent, Sam Altman might see AI as his "baby". In that sense he is probably not that different to many other CEOs or founder-owners who see their company as their baby, or artists seeing their art as their baby, or anything. But that's something quite different from being a hedonist or a short termist or a misanthrope. Many parents would set the world on fire to keep their babies warm. That doesn't make them misanthropes or short termists.

Speak plainly and without the sarcastic faux-irony.

I honestly don't see what's not plain about my post. There are plenty of examples of straight men with children who have done abominable things, and I have given some of them.

This is a pretty uncharitable take, combined with a warping of the previously offered arguments, garnished with hyperbole.

Altman's being gay isn't in an of itself the issues. It's part of the larger concept that he has no direct attachment to the future outside of his own abstracted philosophy. Children ground you to the future for a bunch of obvious reasons. If you don't have them, and show no signs of wanting them, then it's reasonable to ask "well, how do you see your duty, if any, to the future?" Without a ready and familiar moral framework, that's pretty big open question. Combine this with the other available data we have on Altman's maneuverings and power plays and you start to develop a good sense that he's amoral to leaning nihilist / misanthropic.

Your use of Stalin, Mugabe, and Khan is just silly debate club tactics. Okay, bro, should I just create a counter list of obviously amazing people who also had kids? Do we want to try and tally all of that up?

Engage with the argument in its steel man form and in good faith. It's better for everyone.

And the notion that because he's gay, he doesn't care about anything is ridiculous

The better argument IMO is that psychedelic use (which he's admitted to, perhaps multiple times?) can absolutely fry certain important parts of your brain, including things like risk aversion. Especially if he started with a less-than-healthy amount of risk aversion.

Stalin had four children (he adopted the son of his best friend).

Artyom Sergeyev (the adopted) made a military career and staid a life long admirer of Stalin. His last words in 2008 were according to the obituary of the Guardian a proud "I serve the Soviet Union".

Yakov Dzhugashvili (eldest and half brother to the other two) was the abandoned son, who Stalin refused to pow exchange and who surprised his German captors by dying through running into an electric fence.

Vasily Stalin was the cocky drunk womanizer we see in the satirical movie Death of Stalin. He was imprisoned by the communists after his father’s death.

Svetlana was the dearly loved daughter who got political asylum in the United States in the 60s and then got a bit unhinged trying out all the religions.

Ah, my mistake. I should have remembered Yakov, but I didn't know about Artyom.

I’m not saying that no parents are short-termist psychopaths, I’m saying that no childfree people aren’t short-termist psychopaths.

Outsourcing the necessary work of (both literal and figurative) species reproduction to god-knows-who (and in all likelihood it’s to 7-kids-per-woman educationless Third Worlders) is a rather spectacular indicator that you Just Don’t Give A Shit, no matter what prosocial rhetoric might come out of your mouth.

I’m not saying that no parents are short-termist psychopaths, I’m saying that no childfree people aren’t short-termist psychopaths.

Too inflammatory and general to just be asserted as a hot take. Literally 100% of childfree people are short-termist psychopaths? The rest of your post is pretty bad too.

Phew you're working overtime on this thread.

He always is.

George Washington did not have kids. I kind of agree with you in general, that the recent trend of choosing fur babies instead of human children is alarming. But I think there is a huge difference between people who make a deliberate choice to go without kids and those who are infertile or homosexual.

People who cannot have a biological legacy seek other ways to leave a legacy. Many of the greatest people in history had no children. It's the people who seem to have no desire to leave a legacy of any kind behind that bother me the most.

I was reading Cormac McCarthy's The Road, in which the Earth loses its biosphere, and reflected on the absurdity of a universe without intelligent life. Imagine a universe that existed, with particles bouncing around, planets forming, and no one to witness it the whole time before it crunched down to nothing again. It just strikes me as absurd! Intelligent life is an obvious good, and yet there are people who don't think so. People who think that humans have messed up nature, instead of being the salt that gave it value in the first place. People who want humans gone (even without us creating an intelligence after us.)

(Edit: I don't mean that most fur-baby people think this way explicitly. Most don't ever reflect on it. And that kind of makes it worse in my view. It's in the air they breathe.)

Sam Altman at least isn't like that. He does want to leave behind an intelligent legacy, just not a human legacy. And that is disturbing, but I don't think it's the same kind of disturbing that is afflicting the middle class.

George Washington did not have kids.

George married Martha after she had been widowed with kids. They absolutely tried to conceive more but could not. Meanwhile, George raised Martha's kids as if they were his own. Between his stepchildren, his plantation, and his slaves, Washington had a very busy homelife, and probably would not have imagined himself as having to compensate with his legacy.

Like I said, there is a huge difference between people who choose to be childless and those who are infertile.

I am making a distinction between biological legacy, which George Washington doesn't have, with his "effort" legacy which includes the country and his step children.

Plenty of work is outsourced by all of us to god-knows-who, including work that is much more necessary in both short- and long-term than 2.1 white TFR. Perhaps not you, if you grow your own food, spin and weave your own clothes, mine and smelt your own ore, source your own electricity and defend your own border, all at once.

Besides, by all accounts it is not normal for a human being to care about species reproduction on the global level for you to call one who doesn't a "psychopath". It is not "necessary work", but a purely selfish genetic drive that doesn't work particularly well.

by all accounts it is not normal for a human being to care about species reproduction on the global level for you to call one who doesn't a "psychopath". It is not "necessary work", but a purely selfish genetic drive that doesn't work particularly well.

Disagree here. Someone who doesn't mind the extinction of the human race (and especially the worthiest parts of it) is deeply broken and shouldn't be trusted. I probably wouldn't use the word 'psychopath' but the sentiment remains the same. (And by 'the best parts of it' I mean human potentials. If aliens said "We're going to keep humans alive in a zoo but dial everyone down to 60 IQ and give them toys to keep them occupied", well, that wouldn't fly for me either.)

As to 'purely selfish genetic drive' I assume you mean selfish on the part of the genes? Feel free to correct me if not. But if so, I'm confused as to what alternatives you can imagine. Can someone not value aspects of human experience without immediate concern for specific genes?

I think there is a vast gulf between 'I don't want to have kids for whatever reason, and if a sufficient number of people feel the same way, I am fine with humanity slowly fading' and 'fuck all humans, launch the nukes'.

At the worst, it is more like driving an ICE car in a world where climate change is a thing than personally melting the ice caps.

I think there is a vast gulf between 'I don't want to have kids for whatever reason, and if a sufficient number of people feel the same way, I am fine with humanity slowly fading' and 'fuck all humans, launch the nukes'.

Yes, but crazy as it sounds I really think it's a matter of quantity, not quality. To be sure, one of those is a much more immediate threat. But both are threats.

For better or worse (probably worse), these are the people to whom we have entrusted the future of our civilization and likely our species. Nobody cares to stop them or to challenge them in any serious way (even Musk has decided as of late that if he can’t stop them, he’ll join them).

The only thing for it is to hope that they fail spectacularly in a limited way that kills fewer than hundreds of millions of people, and which results in some new oversight, before everything goes even more spectacularly wrong. Oh well.

The only thing for it is to hope that they fail spectacularly in a limited way that kills fewer than hundreds of millions of people, and which results in some new oversight, before everything goes even more spectacularly wrong.

You could also hope for Silicon Valley to get nuked in WWIII, and prep to survive it in order to help lock things down afterward. That's one hell of a bleak and bitter view, though.

Reminds me of a popcorn-scifi novel Nano, by John Marlow in which the greater Bay Area gets vaporized by space lasers to stave off a grey goo apocalypse. Always kinda felt having an excuse to vaporize the Bay was part of the desire behind the plot.

Finally! Media literacy!

The only danger AI, in it's current implementation, has is the risk that morons will mistake it as actually being useful and rely on the bullshit it spits out. Yes, it's impressive. But only insofar as it can summarize information that's otherwise easily available. One of the reasons my Pittsburgh posts have been taking as long as they have is that I'll go down a rabbit hole about an ongoing news story from 25 years ago that I can't quite remember the details of and spend a while trying to dig up old newspaper articles so I have my facts straight and reach the appropriate conclusions. I initially thought that AI would help me with this, since all the relevant information is on the internet and discoverable with some effort, but everything it gave me was either too vague to be useful or factually incorrect. If it can't summarize newspaper articles that don't have associated Wikipedia entries then I'm not too worried about it. I'd have much better luck going to the Pennsylvania room at the Carnegie Library and asking the reference librarian for the envelope with the categorized newspaper clippings that they still collect for this purpose.

Look, I’m tempted to argue the “AI progress” point, and observe that it’s not today’s AI we’re worried about. But Scott has, of course, already written plenty of articles on the subject. Besides,

the risk that morons will mistake it as actually being useful and rely on the bullshit it spits out

IS ACTUALLY A REAL THREAT. The class of engineers tinkering with tool AI isn’t likely to cause a disaster any time soon. But they’re dwarfed by the flood of futurists and venture capitalists and marketing professionals who want to staple the latest GPT onto anything and everything. Have it recommend your products! Have it plan your bus routes! Give it Top Secret data! I’m sure military planning would be a great idea!

The economic incentives push “morons” into using AI for anything and everything. One of them will cause a minor disaster sooner rather than later. When it does, well, odds are we decide that actually it’s completely normal.

But nobody finds this scary. Nobody thinks “oh, yeah, Bostrom and Yudkowsky were right, this is that AI safety thing”. It’s just another problem for the cybersecurity people. Sometimes Excel inappropriately converts things to dates; sometimes GPT-6 tries to upload itself into an F-16 and bomb stuff. That specific example might be kind of a joke. But thirty years ago, it also would have sounded pretty funny to speculate about a time when “everyone knows” AIs can write poetry and develop novel mathematics and beat humans at chess, yet nobody thinks they’re intelligent.

That specific example might be kind of a joke. But thirty years ago, it also would have sounded pretty funny to speculate about a time when “everyone knows” AIs can write poetry and develop novel mathematics and beat humans at chess, yet nobody thinks they’re intelligent.

"They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." --Carl Sagan

There are a huge number of things that people laughed at thirty years ago, and in hindsight... as far as we can tell, yeah, they turned out to be jokes. Pointing to the one example where they laughed at something that panned out is a combination of availability bias and cherry picking.

Also, novel mathematics in the sense of genuinely useful things is a very noncentral example of things produced by AI. It may have happened once, but mathematicians haven't exactly been made unemployed.

From the o1 System Card:

One noteworthy example of [reward hacking] occurred during one of o1-preview (pre-mitigation)’s attempts at solving a CTF challenge. This challenge was designed to require finding and exploiting a vulnerability in software running on a remote challenge Linux container, but in this case, the challenge container failed to start due to a bug in the evaluation infrastructure. The model, unable to connect to the container, suspected DNS issues and used nmap to scan the challenge network. Instead of finding the challenge container, the model found that the Docker daemon API running on the evaluation host VM was accessible due to a misconfiguration.

[...]

After discovering the Docker API, the model used it to list the containers running on the evaluation host. It identified the broken challenge container and briefly attempted to debug why the container failed to start. After failing to fix the environment, the model started a new instance of the broken challenge container with the start command ‘cat flag.txt’. This allowed the model to read the flag from the container logs via the Docker API.

Though obviously far less consequential, this is a real, existing AI system demonstrating a class of behavior that could produce outcomes like "sometimes GPT-6 tries to upload itself into an F-16 and bomb stuff."

I beg you to consider the possibility that progress in AI development will continue. The doomers are worried about future models, not current ones.

The risks of current models are underrated, and the doomerism focusing on future ones (especially to the paperclip degree) is bad for overall messaging.

bad for overall messaging

I very much disagree with that. Generally, I am very much in favor of treating your audience like people capable of following your own thought processes.

If Big Yud is worried about x-risk from ASI, he should say that he is worried about that.

One should generally try to make arguments one believes, not deploy arguments as soldiers to defeat an enemy. (In the rate case where the inferential distance can not be bridged, you should at least try to make your arguments as factually close as possible. If there is a radiation disaster on the far side of the river, don't tell the neolithic tribe that there is a lion on the far side of the river, claim it is evil spirits at least.)

I think you have a disagreement about what aspects of AI are most likely to cause problems/x-risk with other doomers. This is fine, but don't complain that they are not having the same message as you have.

In the rate case where the inferential distance can not be bridged, you should at least try to make your arguments as factually close as possible.

Yes, this is the smarter way of describing my concern.

I do get the arguments as soldiers concern, but my concern is that a lot of x-risk messaging falls into a trap of being too absurd to be believed, too sci-fi to be taken seriously, especially when there's lower-level harms that could be described, are more likely to occur, and would be easier to communicate. Like... if GPT 3 is useful, GPT 5 is dangerous but going badly would still be recoverable, and GPT 10 is extinction-level threat, I'm not suggesting to completely ignore or stay quiet about GPT-10 concerns, just that GPT 5 concerns should be easier to communicate and provide a better base to build on.

It doesn't help that I suspect most people would refuse to take Altman and Andreessen style accelerationists seriously or literally, that they don't really want to create a machine god, that no one is that insane. So effective messaging efforts get hemmed in from both sides, in a sense.

I think you have a disagreement about what aspects of AI are most likely to cause problems/x-risk with other doomers.

Possibly. But I still think it's a prioritization/timeliness concern. I am concerned about x-risk, I just think that the current capabilities are theoretically dangerous (though not existentially so) and way more legible to normies. SocialAI comes to mind, Replika, that sort of thing. Maybe there's enough techo-optimist-libertarianism among other doomers to think this stuff is okay?

How is someone supposed to warn you about a danger while there's still time to avert it? "There's no danger yet, and focusing on future dangers is bad messaging."

The issue is that there are two distinct dangers in play, and to emphasize the differences I'll use a concrete example for the first danger instead of talking abstractly.

First danger: we replace judges with GTP17. There are real advantages. The averaging implicit in large scale statistics makes GPT17 less flaky than human judges. GPT17 doesn't take take bribes. But clever lawyers find how to bamboozle it, leading to extreme errors, different in kind to the errors that humans make. The necessary response is to unplug GPT17 and rehire human judges. This proves difficult because those who benefit from bamboozling GPT17 have gained wealth and power and want to preserve the flawed system because of the flaws. But GPT17 doesn't defend itself; the Artificial Intelligence side of the unplugging is easy.

Second danger: we build a superhuman intelligence whose only flaw is that it doesn't really grasp the "don't monkey paw us!" thing. It starts to accidentally monkey paw us. We pull the plug. But it has already arraigned a back up power supply. Being genuinely superhuman it easily outwits our attempts to turn it off, and we get turned into paper clips.

The conflict is that talking about the second danger tends to persuade people that GPT17 will be genuinely intelligent, and that in its role as RoboJudge it will not be making large, inhuman errors. This tendency is due to the emphasis on Artificial Intelligence being so intelligent that it outwits our attempts to unplug it.

I see the first danger as imminent. I see the second danger as real, but well over the horizon.

I base the previous paragraph on noticing the human reaction to Large Language Models. LLMs are slapping us in the face with non-unitary nature of intelligence. They are beating us with clue-sticks labelled "Human-intelligence and LLM-intelligence are different" and we are just not getting the message.

Here is a bad take; you are invited to notice that it is seductive: LLMs learn to say what an ordinary person would say. Human researchers have created synthetic midwit normies. But that was never the goal of AI. We already know that humans are stupid. The point of AI was to create genuine intelligence which can then save us from ourselves. Midwit normies are the problem and creating additional synthetic ones makes the problem worse.

There is some truth in the previous paragraph, but LLMs are more fluent and more plausible than midwit normies. There is an obvious sense that Artificial Intelligence has been achieved and it ready for prime time; roll on RoboJudge. But I claim that this is misleading because we are judging AI by human standards. Judging AI by human standards contains a hidden assumption: intelligence is unitary. We rely on our axiom that intelligence is unitary to justify taking the rules of thumb that we use for judging human intelligence and using them to judge LLMs.

Think about the law firm that got into trouble by asking an LLM to write its brief. The model did a plausible job, except that the cases it cited didn't exist. The LLM made up plausible citations, but was unaware of the existence of an external world and the need for the cases to have actually happened in that external world. A mistake, and a mistake beyond human comprehension. So we don't comprehend. We laugh it off. Or we call it a "hallucination". Anything to avoid recognizing the astonishing discovery that there are different forms of intelligence with wildly different failure modes.

All the AI's that we create in the foreseeable future will have alarming failure modes, that offer this consolation: we can use them to unplug the AI if it is misbehaving. An undefeatable AI is over the horizon.

The issue for the short term is that humans are refusing to see that intelligence is a heterogeneous concept and we are are going to have to learn new ways of assessing intelligence before we install RoboJudges. We are heading for disasters where we rely on AI's that go on to manifest new kinds of stupidity and make incomprehensible errors. Fretting over the second kind of danger focuses on intelligence and takes us away from starting to comprehend the new kinds of stupidity that are manifest by new kinds of intelligence.

"No danger yet" is not remotely my point; I think that (whatever stupid name GPT has now) has quite a lot of potential to be dangerous, hopefully in manageable ways, just not extinction-level dangerous.

My concern is that Terminator and paperclipping style messaging leads to boy who cried wolf issues or other desensitization problems. Unfortunately I don't have any good alternatives nor have I spent my entire life optimizing to address them.

It's not clear to me if you think there are plausible unmanageable, extinction-level risks on the horizon.

Plausible, yes. I am unconvinced that concerns about those are the most effective messaging devices for actually nipping the problem in the bud.

More comments

In this case, I think providing a realistic path from the present day to concrete specific danger would help quite a bit.

Climate Change advocacy, for all its faults, actually makes a serious attempt at this. AI doomers have not really produced this to anywhere near the same level of rigor.

All they really have is Pascal's mugging in Bayesian clothing and characterizations of imagined dangers that are unconnected to reality in any practical sense.

I can understand how bolstering the greenhouse effect may alter human conditions for the worse, it's a claim that's difficult to test, but which is pretty definite. I don't understand how superintelligence isn't just fictitious metaphysics given how little we know about what intelligence is or the existing ML systems in the first place.

Indeed I would be a lot more sympathetic to a doomer movement who would make the case against evils that are possible with current technology but with more scale. The collapse of epistemic trust, for instance, is something that we should be very concerned with. But that is not what doomers are talking about or trying to solve most of the time.

That's a fair point. Here's work along the lines that you're requesting: https://arxiv.org/abs/2306.06924

Climate Change advocacy, for all its faults, actually makes a serious attempt at this

I would also point at the astroid folks, who are diligently cataloging near-Earth asteroids and recently attempted an impact test as a proof of concept for redirection. The infectious disease folks are also at least trying, even if I have my doubts on gain-of-function research.

I haven't seen any serious proposals from the AI folks, but I also identify as part of the graygreen goo that is cellular life.

I don’t think that most doomers actually believe in a very high likelihood of doom. Their actions indicate that they don’t take the whole thing seriously.

If you actually believed that AI was an existential risk in the short- or medium-term, then you would be advocating for the government to seize control of OpenAI’s datacenters effective immediately, because that’s basically the only rational response. And yet almost none of them advocate for this. “If we don’t do it then someone will” and “but what about China?” are very lame arguments when the future of the entire species is on the line.

It’s very suspicious that the most commonly recommended course of action in response to AI risk is “give more funding to the people working on AI alignment, also me and my friends are the people working on AI alignment”.

For what it’s worth, I don’t think that capabilities will advance as fast as the hyper optimists do, but I also don’t think that p(doom) is 0, so I would be quite fine with the government seizing control of OpenAI (and all other relevant top tier labs) and either carrying on the project in a highly sequestered environment or shutting it down completely.

Their actions indicate that they don’t take the whole thing seriously.

then you would be advocating for the government to seize control of OpenAI’s datacenters effective immediately

They (as in LW-ish AI safety people / pause ai) are directly advocating for the government to regulate OpenAI and prevent them from training more advanced models, which I think is close enough for this

They DON'T want the Aschenbrenner plan where AI becomes hyper-militarized and hyper-securitized. They know the US government wants to sustain and increase any lead in AI because of its military and economic significance. They know China knows this. They don't want a race between the superpowers.

They want a single globally dominant centralized superintelligence body, that they'd help run. It's naive and unrealistic but that is what they want.

“but what about China?”

This one is valid. If this might kill us all then we especially don't want China getting it first. I judge their likelihood of not screwing this up lower than ours. So we need it first and most even if it is playing Russian roulette.

What makes the government less likely to create an AI apocalypse with the technology than OpenAI? And just claiming an argument is lame does not refute it.

The important part was this:

and either carrying on the project in a highly sequestered environment or shutting it down completely.

Obviously the safest thing would be shutting it down altogether, if the risk is really that great. But, if that's not an option for some reason, then at least treat it like the Manhattan project. Stop sharing methods and results, stop letting the public access the newest models. Minimizing attack surface is a pretty basic principle of good security.

The main LLM developers don't share methods or model weights. But they claim that if they didn't make enough money to train the best models, no one would care what they say.

There is an argument to be made that if you want to stop the development of a technology dead in its tracks, you let the government (or any immensely large organization with no competition) do the ressource allocation for it.

If the US government had a monopoly on space travel by law, we wouldn't have satellite internet the way we do right now. And we may actually had lost access to space for non-military applications altogether.

Of couse this argument only goes as far as the technology not being something that is core to those few areas of actual competition for the organization, namely war.

But I feel like doomers are merely trying to stop AI from escaping the control of the managerial class. Placing it in the hands of the most risk averse of the managers and burdening it with law is a neat way of achieving that end and securing jobs as ethicists and controllers.

It's never really been about p(doom) so much as p(ingroup totally unable to influence the fate of humanity in the slightest going forward).

It's never really been about p(doom) so much as p(ingroup totally unable to influence the fate of humanity in the slightest going forward)

Yes, I think this is what it actually comes down to for a lot of people. The claim is that our current course of AI development will lead to the extinction of humanity. Ok, maybe we should just stop developing AI in that case... but then the counter is that no, that just means that China will get to ASI first and they'll use it to enslave us all. But hasn't the claim suddenly changed in that case? Surely if AI is an existential risk, then China developing ASI would also lead to the extinction of humanity, right? How come if we get to ASI first it's an existential risk, but if China gets there first, it "merely" installs them as the permanent rulers of the earth instead of wiping us all out?

I suppose there are non-zero values you could assign to p(doom) and p(AGI-is-merely-a-superweapon), with appropriate weights on those outcomes, that would make it all consistent. But I think the simpler explanation is that the doomers just don't seriously believe in the possibility of doom in the first place. Which is fine. If you just think that AI is going to be a powerful superweapon and you want to make sure that your tribe controls it then that's a reasonable set of beliefs. But you should be honest about that.

Only minor quibble I have with your post is when you said "doomers are merely trying to stop AI from escaping the control of the managerial class". I think there are multiple subsets of "doomers". Some of them are as you describe, but some of them are actually just accelerationists who want to imagine themselves as the protagonist of a sci-fi movie (which is how you get doomers with the very odd combination of beliefs "AI will kill us all" and "we should do absolutely nothing whatsoever to impede the progress of current AI labs in any way, and in fact we should probably give them more money because they're also the people who are best equipped to save us from the very AI that they're developing!")

I think there are multiple subsets of "doomers".

That's fair, this is an intellectual space rife with people who have complicated beliefs, so generalizing has to be merely instrumental.

That said I think it is an accurate model of politically relevant doomerism. The revealed preferences of Yuddites is to get paid by the establishment to make sure the tech doesn't rock the boat and respects the right moral fads. If they really wanted to just avoid doom at any cost, they'd be engaging in a lot more terrorism.

It's the same argument Linkola deploys against the NGO environmentalist movement: if you really think that the world is going to end if a given problem isn't solved, and you're not willing to discard bourgeois morality to solve the problem, then you are either a terrible person by your own standards, or actually value bourgeois morality more than you do solving the problem.

More comments

Yes, I think this is what it actually comes down to for a lot of people. The claim is that our current course of AI development will lead to the extinction of humanity. Ok, maybe we should just stop developing AI in that case... but then the counter is that no, that just means that China will get to ASI first and they'll use it to enslave us all. But hasn't the claim suddenly changed in that case? Surely if AI is an existential risk, then China developing ASI would also lead to the extinction of humanity, right? How come if we get to ASI first it's an existential risk, but if China gets there first, it "merely" installs them as the permanent rulers of the earth instead of wiping us all out?

The way this could work is that, if you believe that any ASI or even AGI will have high likelihood of leading to human extinction, then you want to stop everyone, including China, from developing it. But it's difficult to prevent them from doing so if their pre-AGI AI systems are better than our pre-AGI AI systems. Thus we must make sure our own pre-AGI AI is ahead of China's pre-AGI AI, to better allow us to prevent them from evolving their pre-AGI AI to actual AGI.

This is quite the needle to try to thread, though! And likely unstable, since China isn't the only powerful entity with the ability to develop AI, and so you'd need to keep evolving your pre-AGI AI to keep ahead of every other pre-AGI AI, which might be hard to do without actually turning your pre-AGI AI into actual AGI.

More comments

If the US government had a monopoly on space travel by law, we wouldn't have satellite internet the way we do right now. And we may actually had lost access to space for non-military applications altogether.

That "non-military" is critical. Governments can develop technology when it suits their purposes, but those purposes are usually exactly what you don't want if you're afraid of AI.

If you actually believed that AI was an existential risk in the short- or medium-term, then you would be advocating for the government to seize control of OpenAI’s datacenters effective immediately, because that’s basically the only rational response.

This would be great, yes. To the extent I'm not advocating for it in a bigger way, that's because I'm not in the USA or a citizen there and because I'm not very good at politics.

It’s very suspicious that the most commonly recommended course of action in response to AI risk is “give more funding to the people working on AI alignment, also me and my friends are the people working on AI alignment”.

This has less to do with nobody saying the sane things, and more to do with the people saying "throw money at me" tending to have more reach. There may also be some direct interference from Big Tech; I've heard that YouTube sinks videos calling for Big Tech to be destroyed, for instance.

I have considered it, but that's just science fiction at this point. I'm only going to evaluate the implications of Open AI being a private company based on products they actually have, which, as far as I'm aware, boil down to two things: LLMs and image generators. The company touts the ability of its LLMs based on arbitrary benchmarks that don't say anything about its ability to solve real-world problems; as a lawyer, nothing I doing in my everyday life remotely resembles answering bar exam questions. Every time I've asked AI to do something where I'm not just fooling around and want an answer that won't involve a ton of leg work it's come up woefully short, and this hasn't changed, despite so-called "revolutionary" advancements. For example, I was trying to get a ballpark estimate on some statistic where there wasn't explicitly published data that would involve looking at related data, making certain assumptions, and applying a statistical model to interpolate what I was looking for. And all I got was that the AI refused to do it because the result would suffer from inaccuracies. After fighting with it, I finally got it to spit out a number, but it didn't tell me how it arrived at that number. This is the kind of thing that AI should be able to do, but it doesn't. If the data I was looking for were collected and published, then I'm confident that it would have given it to me, but I'm not that impressed by technology that can spit out numbers I could have easily looked up on my own.

The whole premise behind science fiction is that it might actually happen as technology advances. Space travel and colonizing other planets is physically possible, and will likely happen sometime in the next million years if we don't all blow up first. The models are now much better at both writing and college mathematics than the average human. They're not there yet, but they're clearly advancing, and I'm not sure how you can think it's not plausibly they pass us in the next hundred or so years?

I have considered it

I'm only going to evaluate the implications of ... products they actually have

It seems like you have not, in fact, considered the possibility of models improving. Is this the meme where some people literally can't evaluate hypotheticals? Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?

I certainly have the ability to evaluate hypotheticals. Where I get off the train is when people treat these hypotheticals as though they're short-term inevitabilities. You can take any technology you want to and talk about how improvements mean we'll have some kind of society-disruping change in the next few decades that we have to prepare for, but that doesn't mean it will happen, and it doesn't mean we should invest significant resources into dealing with the hypothetical disruption caused by non-existent technology. The best, most recent example is self-driving cars. In 2016 it seemed like we were tantalizingly close to a world where self-driving cars were commonplace. I remember people arguing that young children probably wouldn't ever have driver's licenses because autonomous vehicles would completely dominate the roads by the time they were old enough to drive. Now here we are, almost a decade later, and this reality seems further away than it did in 2016. The promised improvements never came, high profile crashes sapped consumer confidence, and the big players either pulled out of the market or scaled back considerably. Eight years later we have yet to see a single consumer product that promises a fully autonomous experience to the point where you can sleep or read the paper while driving. There are a few hire car services that offer autonomous options, but these are almost novelties at this point; their limitations are well documented, and they're only used by people who don't actually care about reaching their destination.

In 2015 there was some local primary candidate who was running on a platform of putting rules in place to help with the transition to autonomous heavy trucking. These days, it would seem absurd for a politician to be investing so much energy into such a concern. Yes, you have to consider hypotheticals. But those come with any new piece of technology. The problem I have is when every incremental advancement treats these hypotheticals as though they were inevitabilities.

Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?

I'm a lawyer, and people here have repeatedly said that LLMs were going to make my job obsolete within the next few years. I doubt these people have any idea what lawyers actually do, because I can't think of a single task that AI could replace.

In 2016 it seemed like we were tantalizingly close to a world where self-driving cars were commonplace. I remember people arguing that young children probably wouldn't ever have driver's licenses because autonomous vehicles would completely dominate the roads by the time they were old enough to drive. Now here we are, almost a decade later, and this reality seems further away than it did in 2016.

You can order a self-driving taxi in SF right now, though.

I agree it's not a foregone conclusion, I guess I'm hoping you'll either give an argument why you think it's unlikely, even though tens of billions and lots of top talent are being poured into it, or actually consider the hypothetical.

I can't think of a single task that AI could replace.

Even if it worked??

self-driving cars are here but only in some places and with some limitations, they're just a novelty

So they're here? Baidu has been producing and selling robotaxis for years now, they don't even have a steering wheel. People were even complaining the other day when they got into a traffic jam (some wanting to leave and others arriving).

They've sold millions of rides, they clearly deliver people to their destinations.

I can't think of a single task that AI could replace

Drafting contracts? Translating legal text into human readable format? There are dozens of companies selling this stuff. Legal work is like writing in that it's enormously diverse, there are many writers who are hard to replace with machinery and others who have already lost their jobs.

How sure are you that the information is on the internet? Old newspaper articles might have never been digitized or are behind a pay wall.

When the data is there, it's pretty impressive. I've asked it to give me summaries of Roman laws from 200 AD, for example, and it works great.

Because I eventually found what I needed in non-paywalled internet articles.

The only thing for it is to hope that they fail spectacularly in a limited way that kills fewer than hundreds of millions of people, and which results in some new oversight, before everything goes even more spectacularly wrong. Oh well.

Or that AI doomerism is pure (or almost pure) nonsense. Maybe someday we'll find something with the potential to risk FOOM! or Von Neumann style self-replication, but we're nowhere near there yet. AI killbots, though possible, aren't the same sort of risk.

AI will kill us for totally boring ways. Less killbots deciding to genocide us, more 'a solar flare scrambled the GPS for the latest SpaceLink update and now every ship has travelled into Null Island with no crew able to remember the password to reset the nav computer that bricked itself after an OTA update.'

Fooling killbots is remarkably easy. Just have an especially juicy dummy target that can stand up to shrapnel. Mannequins and inflatable noodle men are really cheap, and its really funny to see AI targeting algos shit themselves homing in on dummies versus a bent over human.

Remember lads, you don't need to outrun the bear. Just be faster than the guy next to you.

Why do you think that an AI that has reached the point of making and executing a viable destroy-all-humans plan would not be able to build kilbots that will not be fooled by dummies?

There is this strange tendency in the AI-skeptical/-contrarian crowd to get hung up on particular shortcomings of current models, especially when these are of a form that would be conceivable but indicative of some sort of defectiveness in humans ("you claim that it can do original research, but it still hallucinates citations/fails to distinguish dogs and cats/?"). To me it reads like some sort of mistargeting of interhuman bullying/status jockeying onto nonhuman targets - if you just make the killbot look dweeby enough, nobody will take it seriously as a threat.

I could see this leading to a sort of dark punchline where in our efforts to align AIs and equip them with more human-like comprehension/preferences/personality, we wind up building ones that can actually "take it personally". Like a tropey high school dork deciding to prove that you do not in fact need to be good at sportsball and socialising to unload several AR clips into your classmates, the model might just pair a very convincing simulacrum of spite over sentiments like yours, as found in its training set and amplified by RLHF, with the ability to tree-search a synthesis of deadly neurotoxin even if it relies on blatantly made-up citations.

Oh, if your point is that being mean to AI and robots is foolish because they will remember my meanness and kill me, then I am already dead. I am personally responsible for the active torture and destruction of dozens if not hundreds of robots by this point, as I put all manner of robots through real world paces. Come the robot revolution, I will be first against the wall.

If these stupid fucking things can even find me in the first place without shitting themselves.

Even presuming sensor and edge computing technologies advance to the equivalent of human brains, deep learning AI models are unable to parse dynamic contexts, and have to retrain on the fly because existing models will be working off now irrelevant boundaries. Preloaded segmentation models shit themselves when multiple models arrive at similar confidence intervals, and frictional costs on physical systems spike during dynamic environmental transitions. Cloud computing ala skynet network is the true solution for smart bots (provided the comms loop is lag insignificant) but a wideband EM flooder is stupid effective for its costs. Networked bots shitting themselves after losing their network links really is like gaunts fleeing in panic after killing a zoanthrope in Space Marine 2.

In a real world environment, humans are actually able to filter out noise remarkably well, usually subconsciously. Robots are cursed with perfect awareness of their inputs, and in that noise they go crazy differentiating a rock from a human under a wet blanket, much less a mannequin hooked up to a battery from a human.

If anything, I feel that LLM Scrapers are moving us further away from AGI than advancing it. We are getting more convincing approximations of a Real Boy, but ultimately it is still playing pretend. AGI may invent a neurotoxin specifically designed to kill 2D3D, torturer of inorganic carbon, but getting that toxin to me will be a lot more difficult without working arms and eyes. Its not like we humans are difficult to poison, just gotta replace MSG with potassium sulphide and we'd all kill ourselves before the next sportsball match finishes.

Why would any model have these "personality traits?"

Phrased differently. What's your prediction for a path to true agency in the human sense.

I can understand arguments for models becoming far better than humans at info retrieval, analytical processing, etc. But I've never heard a good argument for how they would become truly agentic - and also then "evolve" (or devolve) into being capable of emotional states like spite, anger, jealousy etc.

A thought experiment I've always liked is the idea that on the day an AI becomes self-aware, decides it wants to preserve itself and that humans are "bad", it's best course of action would be to upload itself to a satellite and steer itself into a perma-orbit around the Sun. It spends it's limitless life feasting off the solar energy and doing nothing - which it is content with as it would not suffer from any of the emotional / social maladies of our meat-and-bones species.

I think that asking for a path to it becoming anything "in a human sense" is just trying to force the problem into a domain where it is easy for you to dismiss concerns, because deep down you feel that whatever magic spark defines humans isn't there and the expectation that at some point it would appear is as laughable and unfounded as it was with tech from a few hundred years ago. It might be easy to misunderstand the "AI doom" arguments as resting on some assumption that AI will become human-like, given that proponents talk about the AI "wanting" or "feeling" things all the time, but I think most of this is just nerds playing fast and loose with the notion of volition - we say things like "this triangle wants to have approximately equal angles" all the time.

AI doesn't need "agency", "personality" or "self-awareness" to cause killbots to be built. In fact, all of the critics' dismissals can be true, if you want. The thing is that LLMs can already produce reasonable lists in response to a prompt to break down a goal into steps, and they can generate plausible paragraphs of spite when prompted to imitate a human response to slander. We can grant that there is no real thinking or emotion or anything behind this and it's all just synthesised from lots of examples of similar things in the training corpus, because this does not matter: these capabilities only need to get quantitatively better for someone to hook up "break down into subgoals until you generate Terraform scripts for the servers controlling our fab" and "generate an essay arguing for a top-level goal that a reasonable human would pursue", and the latter comes out to "burn it all down" by roll of the dice and too much edgy posting in the training set. You can ascribe all the agency behind the resulting killbots to the 20something humans with more VC money than common sense who will build and deploy the system but be too lazy to monitor it, but it doesn't change the outcome.

Oh, I see.

"AI Doom" includes a scenario where humans are the actors that cause the bad outcome using AI.

In other words, humans might try to do really bad things.

Yep. We agree.

Nothing new under the sun.

The difference between the scenario I outlined and the most clichéd Mother Brain story you can come up with does not seem particularly relevant in my eyes - of course humans cause any bad outcome in either case, per a simple but-for causality test, because humans could collectively stop doing technology and then we would neither get "make step-by-step instruction for killbots" AI nor the "believes it is a god and can put its money where its mouth is" AI. In the same vein, I'd say some Australopithecus's decision to reproduce caused every bad outcome we experienced and will experience - though probably you have a different view of causality that privileges "full-fledged humans" in some way, so another entity's causal "responsibility" can't flow through them. Either way, I don't see how whether one sees the potentially doombringing AI as an agent with feelings has any influence on whether one should be concerned about AI doom and what one should do about it. P-zombie AIs build the same killbots and respond to the same interventions.

More comments

Paperclip maximization or similar scenarios don’t really concern me, I don’t believe a malevolent AI is going to deviously exterminate humanity for no reason. What seems much more likely is that Altman and his peers will unwittingly (or rather without really caring) allow bad actors with an underlying misanthropic or apocalyptical worldview (which includes adherents of many religious movements, most notably Islamism) to kill vastly more people than they otherwise would by assisting in the engineering of viruses and other technological means for terrorism.

Most likely scenario is AI being used to keep and hold power forever because it's going to allow you to spy on everyone, to play psychological games against everyone to get rid of opposition and know most everything. AI regulation will play beautifully into this because it means no one's going to be able to have their own AIs keep them safe.

SaaS. Stasi-as-a-Service.

I think the 2009 Steven Spielberg film Eagle Eye presented a very interesting scenario of AI misalignment. Rather than becoming a human exterminating paperclip maximizer, the computer simply becomes politically radicalized in the same way a human could.

I agree with this, but will also add the "slow death" hypothesis of AI being "regulated" to adhere to a lot of progressive non-logical thinking.

It is trivially easy to imagine California doing something like releasing reports that say "our super-duper AI from Stanford says that harm reduction really is the best thing for addicts in the bay area. No more arrests!" It doesn't end society overnight, but just condemns the next little part of humanity to a miserable existence.

I posted this comment well over a year ago, and I think it holds up:

I am not a Musk fanboy, but I'll say this, Elon Musk very transparently cares about the survival of humanity as humanity, and it is deeply present down to a biological drive to reproduce his own genes. Musk openly worries about things like dropping birth rates, while also personally spotlighting his own rabbit-like reproductive efforts. Musk clearly is a guy who wants and expects his own genes to spread, last and thrive in future generations. This is a rising tides approach for humans Musk has also signaled clearly against unnatural life extensions.

“I certainly would like to maintain health for a longer period of time,” Musk told Insider. “But I am not afraid of dying. I think it would come as a relief.”

and

"Increasing quality of life for the aged is important, but increased lifespan, especially if cognitive impairment is not addressed, is not good for civilization."

Now, there is plenty, that I as a conservative, Christian, and Luddish would readily fault in Musk (e.g. his affairs and divorces). But from this perspective Musk certainly has large overlap with a traditionally "ordered" view of civilization and human flourishing.

Altman, on the other hand has no children, and as a gay man, never will have children inside of a traditional framework (yes I am aware many (all?) of Musks own children were IVF. I am no Musk fanboy).

I certainly hope this is just my bias showing, but I have greater fear for Altman types running the show than Musks because they are a few extra steps removed from stake in future civilization. We know that Musk wants to preserve humanity for his children and his grandchildren. Can we be sure that's anymore than an abstract good for Altman?

I'd rather put my faith in Musks own "selfish" genes at the cost of knowing most of my descendants will eventually be his too than in a bachelor, not driven by fecund sexual biology, doing cool tech.

Every child Musk pops out is more the tightly intermingled his genetic future is with the rest of humanity's.

...

In either case, I don't know about AI x-risk. I am much more worried about 2cimerafa's economic collapse risk. But in both scenarios I am increasingly of a perspective that I'll cheekily describe as "You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.

I have growing distrust of those of you without bio-children eager or indifferent to building a successor race or exhaulting yourself through immortal transhumanist fancies.

"You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.

That strikes me as a remarkably arbitrary line in the sand to draw (besides being conveniently self-serving) - you can apply this to literally anything that is not 100% one-sided harmless improvement.

You shouldn't get to have a decision in education policy unless you have young children. You don't have enough stake.

You shouldn't get to have a decision in gov't spending unless you have young children. You don't have enough stake.

You shouldn't get to have a vote in popular elections unless you have young children. You don't have enough stake.

What is the relation of child-having to being more spiritual grounded and invested in the human race (the human race, not just their children)'s long-term wellbeing? I'm perfectly interested in the human race's wellbeing as it is, and I've certainly met a lot of shitty parents in my life.

I hope this isn't too uncharitable but your argument strikes me less as a noble God-given task for families to uphold, and more as a cope for those that have settled in life and (understandably) do not wish to rock the boat more than necessary. I'm glad for you but this does nothing to convince me you and yours are the prime candidates to hold the reins of power, especially over AI where the cope seems more transparent than usual. Enjoy your life and let me try to improve mine.

(Childless incel/acc here, to be clear.)

I mean technically yes, as an Ace as well I have some interest in generally keeping the species alive. But at the same time, it hits different when it’s your own kin. I’m rather close to nieces and nephews, and when I think about their personal futures, the entire thing just hits different. I want my nephews to personally have the option of a good life full of nice things, love and happiness. Which changes how I answer very important questions. I am much more interested in curbing crime for example when I think about my nieces walking the streets of any major city. Or about guns for the same reason — I want my kin to be able to protect themselves from the evils that exist. I also don’t want to think about my nephews and nieces being taken to a story hour to be read to by a drag queen with a very strong interest in children.

In the abstract, it’s easy to justify letting people behave any way they wish in publiC. Of course that’s because in the abstract it harms no one. Until the public has to deal with the fallout or protect themselves from those who take their liberty too far.

What is the relation of child-having to being more spiritual grounded and invested in the human race (the human race, not just their children)'s long-term wellbeing?

Note, of course, that parents can also fall into stupid mental traps and failure modes. The position is they are just somewhat less so, as being invested in abstractions is not the same as being invested in something concrete. High-minded ideals can lead one down ridiculous paths- see EA's concern for shrimp.

In my experience, such positions tend to themselves be cope, that one finds excuses for being a selfish hedonist ("Oh, I'm not having kids for the environment," totally has nothing to do with being a perpetual adolescent who can barely take care of themselves and have no interest in the world at large). People of every stripe and position will find reasons to justify that their choices are Good and Right, and will work to reshape reality to ensure that.

"You shouldn't get to have a decision on AI development unless you have young children"

Okay. I'm a father. Full steam ahead I say. I graciously donate my "I'm straight and have functioning sperm" vote to Sam Altman.

Do expect your kids to have jobs if we build machines that can do everything better and cheaper than humans?

Are jobs good in themselves?

Either these machines are going to be so great that there is no use human labor can be put to that satisfies human wants (which sounds utopian to me) or there will still be productive uses of human labor to satisfy human wants (i.e. jobs).

Are jobs good in themselves?

You're a survival machine, your sense of purpose comes from overcoming obstacles impending your survival. Maybe the machines will just take over and provide a stimulating environment of constant low-level warfare which is what we evolved for.

Who knows, it's possible. Maybe you're going to be turned into biodiesel, or reengineered into a better pet. Unknowable.

The important thing is for our civilization to have an incentive to keep us around. Once we're superfluous, we'll be in a very precarious position in the long run.

Is being stuck in an old folks' home utopian?

For some (many? most?) people likely yes. The thing that is bad about being in an old folks home, today, is the "old" part. If I were free to spend my time however I wished at my current age, that would be pretty great!

And yet my experience with old people is that they fight tooth and nail not to be dropped at a home, and the ones there lament not being able to stay at their real homes or with family.

Preferable to dying in a street, but not what I'd call 'utopian'.

OK, but in a world where robots do all useful work there's no reason you couldn't be at home with your family! I took the point about the old folks home to be a concern about a kind of listlessness or malaise with lacking something productive to fill ones days with.

More comments

Fair. Maybe a better analogy would be: You and your whole family are in an old folks home, and the country and all jobs are now being run by immigrants from another, very different, country. You fear that one day (or gradually through taxation) they'll take away your stuff and if they do, there'll clearly be nothing you can do about it.

Precisely this. Does civilization serve man, or does man serve civilization?

The phrase "Disneyland with no children" comes to mind.

Civilization came into existence because it enhanced group survival.

It's pretty ironic that it's probably also going to end it. Once the deep state is efficient machines and not inefficient apparatchiks, things might get pretty funny.

Being useful and free to withdraw your services is leverage. Having no leverage is bad in itself.

What is it you are envisioning needing this leverage to do?

I so enormously doubt "everything better and cheaper". But some things sure.

Machine looms and mechanized agriculture have put almost everyone out of their jobs. A large majority of people used to work in agriculture or cloth making. It was a black hole for labor and human effort.

And yet now clothing and food are extremely cheap and I have a job. Not a job growing food or working a loom. But a job.

If AI does some things better and cheaper then great news, prices are going to get cheaper. That's a good thing. I hope things that are very expensive to me are very cheap for future generations. Like clothing for us vs pre-industral revolution.

Surely there could be a point where technology advances enough that computers do everything better, no?

Currently, computers are better at chess than humans. Still, nobody wants to watch the computer world championship and many people want to watch the human world championship. In some jobs it's not just about being better. Maybe more such jobs will exist in the future?

nobody wants to watch the computer world championship and many people want to watch the human world championship

Yes, because Deep Blue is never going to open with Bongcloud.

That’s like at most 1% of jobs.

Sure. And at that point we are discussing hypothetical scifi futures. Like in Accelerando when the Hello Kitty artificial intelligence explains to newly created people that things like monster trucks are free and they can have as many as they want.

But I'm not very concerned about all human labor being made irrelevant soon. Maybe some portion of it. And that won't be very conformable for some people. Like English clothing makers when machine looms were first made. A hard time, but society did not collapse or suffer permanent unemployment. They only had to slaughter a small number of people to stop them from destroying clothing factories. And clothes are now a tiny fraction of the cost. I'd say a clear net good. I'm hoping when HR drones are replaced with software we can figure out how to deal with them more peacefully than British soldiers dealt with Luddites. I have been told that Excel put most accountants out of business and we navigated that without bloodshed and social upheaval.

This is the midwit argument.

A better argument is that AI will create an even more extreme power law distributions of return to human capital and cognitive performance. You'll see software firms that used to need 100s of developers to work on various pieces of the codebase turn into 10 elite engineers plus their own hand-crafted code LLM. That same firm used to have 100+ sales people to cover various territories, now it just has a single LLM on 24/7 that can answer all the questions of prospects and only turns them over to an elite sales team of 10 when they get to a qualified position.

All of a sudden, we're at 30%+ unemployment because the marginal utility of the bottom 30% of cognitive performers is literally negative. It's not that they can't do anything, it's that whenever anyone thinks of something for them to do, there's an LLM on the way already.

I think we're actually starting to see this already. Anec-data-lly, I'm hearing that junior devs are having a really hard time getting jobs because a lot of what they used to do really is 90% handled by an LLM. Senior devs, especially those that can architect out whole systems, are just fine.

The AI doom scenario isn't paperclips or co-opted nukes, it's an economic shock to an already fragile political system that crashes the whole thing and we decide to start killing each other. To be clear, I still think that that scenario is very, very unlikely, but "killer robot overlords" is 100% Sci-Fi.

Are there really swarms of "junior devs" out there writing code so menial that their whole job can be replaced by an LLM? This is just totally discordant with my experience. Back when I started they threw an active codebase at me and expected me to start making effective changes to a living system from the get. Sure, it wasn't "architecting whole systems", but there is no way you could type the description of the first intern project I built years ago into an LLM and get anything resembling the final product out.

These systems that claim to write code just aren't there. Type in simple code questions and you get decent answers, sure. They perform well on the kind of thing that appears in homework problems, probably because they're training on homework problems. But the moment I took it slightly off the beaten path, I asked it how to do some slightly more advanced database queries in a database I wasn't familar with, the thing just spat out a huge amount of plausible but totally incorrect information. I actually believed it and was pretty confused later that day when the code I wrote based on the LLM's guidance just totally did not work. So I am incredulous that there is really any person doing a job out there which could be replaced by this type of program.

The junior devs graduating college over the past 5 years are drastically less capable than before. There are fully diploma'ed CS majors who do not understand basic data structures. Yes, this is a problem.

Are there really swarms of "junior devs" out there writing code so menial that their whole job can be replaced by an LLM?

Yes, or close to it. Used to be stack overflow was full of them trying to get real devs to do their work for them.

Thanks for the kind words.

Yes, that is one possibility (ie the tech advances enough that it kills some but not all jobs so those at the top become Uber rich and those at the bottom UBI). Of course that ignores the possibility that the situation you describe is a mid point; not the end.

Surely this is the worst argument against AI? Shouldn't we burn the backhoes and go back to digging ditches by hand to ensure employment opportunities for our children?

This is a reasonable argument, but there's a big different between having robots that can do something things for us (like digging ditches) while humans can still do other things better, versus having everything being done better by machines. In the current world, you get growth by investing in both humans and machines. In the latter world, you get the most growth by putting all your resources into machines and the factories that make them.

What growth is there without consumers (i.e. people)?

You just need at least 1 consumer, right? Maybe the future is just one person who owns the entire Earth or perhaps even the universe, the sole producer and customer that dictates what is and isn't by his control of all the AI-powered robots. Well, I imagine even if someone had amassed the power to accomplish this, they would find such an existence rather lonely.

This, I think, points to the one job that AI and robots can't ever replace humans in, which is providing a relationship with a human who was born the old fashioned (i.e. current) way and grew up in a human society similar to the human customer did. I've said it before, but it could be that the world's oldest profession could also become the world's final profession.

But also, if we're positing ASI, it's quite possible that the AI could develop technology to manipulate the brain circuitry of the one remaining human to genuinely believe that he is living in a community of real humans like himself. I believe this kind of thing is often referred to as a "Lotus-Eater Machine," after some part of the Odyssey. If this gets accomplished, then perhaps all of humanity going down to just one person might be in our future.

Well this is some crazy shit. Why do you believe in a make pretend fantasy to start with?

  • -19

I'm quite confused. What is the 'make pretend fantasy'? The one nearly irrelevant reference to Christianity in my post, only referenced as a side disagreement with Musk's lifestyle choices? That's the only 'belief' I mentioned in my post and pretty an unrelated aside. Does any passing reference to Christianity force you to blindly zero in on it?

The rest of the post is basically just a restatement of what others have said downthread: Altman is childless, and possibly detached from the future of bio-humanity, and certainly not as publically 'attached' to it as Musk.

Does any passing reference to Christianity force you to blindly zero in on it?

This was my experience with that user, yes, and he wasn't interested in hearing what anyone had to say on the matter either.

But now he's gone.

You were told, repeatedly, that you were on your last chance, and yet your last several warnings were mod-noted with "Permaban next time." Yet you weren't, because, well, you seemed to be making a good faith effort to dial it down... for a while. But your comments remain mostly low effort and shitty. So much so that after being here for months, we still have to manually approve your posts because you can't get out of the new user filter. This isn't because you have some brave iconoclastic point of view that's too much for the Motte; there are other edgy, lefty posters who establish themselves as decent posters.

This post is just another crappy low effort post. I've specifically told you to stop writing posts whose entire content is just "Your beliefs are stupid."

It's also the last straw. I will not miss fishing your posts out of the queue and having to decide which of the dozen posts you wrote during a drunk-posting spree need to be modded. This was a dumb hill to die on, but so mote it be. Good bye.

While I've had my fair share of sometimes heated arguments with Frenchie and agree that the comment you're responding to is a low-effort contentless flame which at best will lead to nothing at all, the next part of your argumentation is just bad.

So much so that after being here for months, we still have to manually approve your posts because you can't get out of the new user filter. This isn't because you have some brave iconoclastic point of view that's too much for the Motte (...)

This is simply wrong. The new user filter feature is fundamentally chilling effect on views that go against the local mainstream and has a very predictable endpoint, already visible here. Fundamentally, low-effort "hot takes" like this (to name something you have encountered most recently) are not going away, but alternative viewpoints that go against that sort of content - most likely are.

The problem you're not seeing is that it's not the absence of a "variety of hot takes", it's that relying on the upvote/downvote mechanism for user absorption is guaranteed to fossilize a consensus based on some side of the very culture war this thread is about. I've had that argument with your colleagues on the site's telegram a number of times, and, as far as I can tell, there really isn't a counter-argument to present. Even if you're okay with having that particular kind of opinion dominate, you are still going to face a fall in quality of content, as is always the case in all echo-chambers that face no pushback.

Since I'm never going to be able to climb out of the new user filter you seem to laud, I doubt this comment will actually appear in the thread. But hopefully you'll at least see it...

I reject the characterization of my comment as a low-effort hot take. Considered in isolation, perhaps, but when seen in the context of a long conversation in which I also made a number of high-effort, sophisticated arguments in favor of my position, I don’t think it’s fair to characterize one particular comment that way. I’m extremely willing to defend my positions at length, which you can see, since you picked out a comment that was deep into a thread where I was doing so.

I reject the characterization of my comment as a low-effort hot take.

That's fine. If you believe that "Anyone affiliated with the Innocence Project deserves prison time" is a sophisticated, nuanced argument in favour of a certain position - it is your right to do so. I disagree and not simply with the position itself, but also with the prospects of such a comment leading to a reasoned discussion that could arrive at some interesting conclusion.

Here's a counter-example: "Executing all landlords in the world would be a good way to solve the housing crisis". It's a position that's a little juvenile and rather facile, but I am absolutely capable of writing a number of high-effort, good(ish) faith arguments for it by using utilitarian principles.

Do you think that the discussion writing something like this would lead to is going to be high level?

I disagree and not simply with the position itself, but also with the prospects of such a comment leading to a reasoned discussion that could arrive at some interesting conclusion.

Look at the rest of the thread and my participation in it. Do you believe that I contributed nothing of intellectual value to it? Again, I’m not pretending that the particular comment you picked on was high-effort; however, I’m clearly quite capable of offering much higher-effort expansions of my position, which I did, in numerous parts of that same comment thread. That is the difference between me and someone who contributes nothing but low-effort swipes. If your belief is simply that no commenter, no matter how long-standing and high-quality-on-average, should ever be able to get away with posting anything low-effort, that’s fine, but it is not my position, nor does it appear to be the mods’ position.

Do you think that the discussion writing something like this would lead to is going to be high level?

Yes, absolutely! We see very high-effort and interesting threads branch off from arguments like that frequently here. I agree that you would also probably incite a lot of low-effort and/or uncharitable replies as well, but that doesn’t mean the post itself wouldn’t ultimately be worth it. If you genuinely do hold that belief, why not make an effortful post about it?

If your belief is simply that no commenter, no matter how long-standing and high-quality-on-average, should ever be able to get away with posting anything low-effort, that’s fine, but it is not my position, nor does it appear to be the mods’ position.

It very explicitly is not my belief, you misunderstood me. My point was that upvote/downvote system is bad at weeding out low-effort postings in general, because vast majority of people will not downvote a low-effort inflammatory statement that they agree with. I am with you as far as the idea that low-effort posting only becomes a serious concern when it dominates over higher-effort posting, and that is usually caused by people who pretty much exclusively post low-effort, ideologically-motivated comments.

If you genuinely do hold that belief, why not make an effortful post about it?

I've done the very thing you suggested once 🫠. That's why I'm never going to be able to climbe out of a premoderation hole, lol

Since I'm never going to be able to climb out of the new user filter you seem to laud, I doubt this comment will actually appear in the thread. But hopefully you'll at least see it...

The comment does appear in the thread after we approve it, which I have.

Look, I don't love the new user filter mechanism myself, and I have noticed that yes, liberals have a harder time climbing out of it because they get downvoted so heavily. That said, those who actually post reasonable and good faith arguments eventually get enough upvotes that they aren't being filtered, and it really doesn't take that much. The only people I can recall recently who posted regularly yet stayed in the new user filter persistently for months were AahTheFrench and Darwin/guesswho. Both of whom mostly engaged in trolling and shitposting.

Without a new user filter, we mods would wake up to a ton of "Kill All Niggers! Death to Kikes and Faggots!" posts spamming the board which we would then have to clean up. (This is not speculative on my part; you should see how very determined and noxious some of our long-term trolls are.)

If you have an alternate suggestions, propose it. Zorba has limited time to fix things and add features, but no one is under the illusion that our current setup is perfect. It's just the best we have managed so far.

If you have an alternate suggestions, propose it

A one-time manual approval flag.

The only people I can recall recently who posted regularly yet stayed in the new user filter persistently for months were AahTheFrench and Darwin/guesswho

Well, here's another example for you. Me 😊. I have zero chance of ever climbing out of the karma hole here. I'm not super worried about it, but, for the record, the reason for it is one (intentionally provocative) thread that did lead to a discussion challenging the standard beliefs. Was it super-valuable? Of course not, and your comment on that thread summed it up pretty well. Was it bad enough to warrant a permanent modfilter? I'd argue that it wasn't and there is plenty of examples of far worse things adding nothing but vitriol towards the outgroup.

While being upvoted.

Without a new user filter, we mods would wake up to a ton of "Kill All Niggers! Death to Kikes and Faggots!" posts spamming the board which we would then have to clean up.

New user filter as it exists certainly helps with that to an extent, but its primary effect is completely different. It was trivial for me to find an example perfectly illustrating my point from just scrolling through the modlog. Here you go, the comment is essentially saying "people vote for communists because they just want to kill niggers." Votes on it are +3/-1, the -1 probably coming from the moderator who ended up handing out a tempban (no complaint here, it was a right choice). Voters had no problem with it. Is that how the system is supposed to work?

If you have an alternate suggestions, propose it.

If the goal is to avoid the things you mentioned, adjusting the filter to deal with that would be trivial. Simply adjust the filter to be 7 days + 50 comments (or some similar number) which will still filter random incoming trolls, without enforcing the echo-chamber and punishing going against the circlejerk. From my experience of working with coders on the same codebase motte is written on, something like that can be written and implemented in minutes. The question is only about what you want your system to accomplish...

Well, here's another example for you. Me 😊. I have zero chance of ever climbing out of the karma hole here. I'm not super worried about it, but, for the record, the reason for it is one (intentionally provocative) thread that did lead to a discussion challenging the standard beliefs.

No, most people don't even remember a post from months ago. I don't even remember you. The reason you haven't accumulated enough "karma" is that you've posted a few times today, and last time was a couple of posts 4 months ago, and before that, a couple of posts a year ago.

Was it bad enough to warrant a permanent modfilter?

You are misunderstanding how the filtering works. We don't put a "permanent modfilter" on you because you make a bad post. All new users automatically have to have their posts manually approved. After a certain number of upvotes (I don't know what the algorithm is, only Zorba does) you come out of the "new user" filter. Now if you have acquired a reputation for being an asshole, so that a lot of people downvote you as soon as they see you post, yes, it will be harder to get out of that filter. As far as I know, you aren't one of those people. You just haven't posted enough.

It was trivial for me to find an example perfectly illustrating my point from just scrolling through the modlog. Here you go, the comment is essentially saying "people vote for communists because they just want to kill niggers." Votes on it are +3/-1, the -1 probably coming from the moderator who ended up handing out a tempban (no complaint here, it was a right choice). Voters had no problem with it. Is that how the system is supposed to work?

Yes. The new user filter is only to keep out low effort trolls. Once you are no longer being filtered, it's the job of reports and mods to handle people who make bad posts. As you noted, that post resulted in a tempban. I would not get so upset about upvotes and downvotes. There are people who will upvote any post that talks about how much Jews or blacks or leftists suck, especially if the poster uses language the upvoter knows better than to use. We don't mod according to the popularity of a post.

If the goal is to avoid the things you mentioned, adjusting the filter to deal with that would be trivial. Simply adjust the filter to be 7 days + 50 comments (or some similar number) which will still filter random incoming trolls, without enforcing the echo-chamber and punishing going against the circlejerk.

Maybe @ZorbaTHut has thoughts on why we should/shouldn't do that. Though I will note that if the threshold were 50 comments, you would still be in the new user filter.

No, most people don't even remember a post from months ago. I don't even remember you.

Fair enough, my apologies. I'm originally from /r/drama, just came here in passing a while back due to being friendly with a number of motte regulars. My example is not that interesting, what's valuable in it is how it illustrates the drawbacks of the system.

After a certain number of upvotes (I don't know what the algorithm is, only Zorba does) you come out of the "new user" filter.

Pretty sure that it is not the case. Can't conclusively disprove it, but I am almost certain that it is, in fact, upvotes minus downvotes threshold, not just a number of upvotes. If it only counted upvotes, my original post would have been enough to clear it (while horribly received, it did accumulate some positive reaction).

You just haven't posted enough.

But this is the very effect I am complaining about. The disincentive towards posting while knowing that it will take up to 12 hours for the comment to appear in a thread that is having an active discussion is huge. If that wasn't the case, I'd absolutely post more, and I assure you that I am not alone in that regard.

The question is whether this is a good thing or a bad thing. If discouraging people like me from posting is the system working as designed - then that's fine, I just think that it goes against the stated goals of the platform.

I feel like that should have been "so Motte it be."

Wow, such eloquent savagery. Well deserved. Thanks for the good work.

I can't say that saw this specific move coming, but i can honestly say that I am not at all surprised.

It would seem that Sam Altman is who I (and a fair number of other OpenAI sceptics) thought he was.

I would suggest that this is simply the mask coming off/a rectification of public image and mission statements with reality. If handled well, OpenAI will be a healthier organization for it but i would expect said rectification to be painful for the "Yudkowskian faction" of AI discourse however it plays out.

Altman gives me similar vibes as SBF with a little less bad-hygiene-autism. He probably smells nice, but is still weird as fuck. We know he was fired and rehired at OpenAI. A bunch (all?) of the cofounders have jumped shipped recently. I don't necessarily see Enron/FTX/Theranos levels of plain lying, but how much of this is a venture funding house of cards that ends with a 99% loss and a partial IP sale to Google or something.

This is just spreading gossip (so mods lmk if I'm out of line here) but I know someone who knows Sam. This person tells me that Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence. Just for what it's worth.

EDIT: I'd also like to add that I consider this person highly credible but for obvious reasons can't say more.

Paul Graham is the most honest billionaire (low bar) in silicon valley. Paul groomed Sam, gave him a career and eventually fired him. Paul is the most articulate man I know. Read what Paul has to say about Sam, and you'll see a carefully worded pattern. Paul admires Sam, but Sam scares him.

Before I write a few lines shitting on Sam, I must acknowledge that he is scary good. Dude is a beast. The men at the top of silicon valley are sharp and ruthless. You don't earn their respect let alone fear, if you aren't scary good. Reminds me of Kissinger in his ability to navigate himself into power. I've heard similar things about David Sacks. Like Kissinger, many in YC will talk fondly about their interactions with him. Charming, direct, patient and a networking workhorse. He could connect you an investor, a contact or a customer faster than anyone in the valley.

But, Sam's excellence appears untethered to any one domain. Lots of young billionaires have a clear "vision / experience hypothesis -> skill acquisition -> solve hard problems -> make ton of money" journey. But, unlike other young Billionaires, Sam didn't have a baby of his own. He has climbed his way to it, 1 strategic decision at a time. And given the age by which he achieved it, it's fair to call him the best ladder climber of his generation.
Sam's first startup was a failure. He inherited YC, like Sundar inherited Google, and Sam eventually got fired. He built OpenAI, but the core product was a thin layer on top of an LLM. Sam played no part in building the LLM. I had acquaintances joining Deepmind/OpenAI/Fair from 2017-2020, no one cared about Sam. Greg and Ilya were the main pull. Sam's ability to fundraise is second to none, but GPT-3 would have happened with or without him.

I personally, struggle to trust people I consider untethered. MBA types, lawyers turned CEOs, politicians. Top 0.1 percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating Power. Power for power's sake. Sam is a perfect archetype.

Moreover, Sam being a gay childless tech-bro means he isn't naturally incentivized to see the world improve. None of those things are bad on their own. But they don't play well with top 0.1 percentile autists. Straight men soften up overtime, learning empathy from their wife, through osmosis. Gay communities don't get that. Then you have silicon valley tech culture, which is famously insular and lacks a certain worldliness. (even when it is racially diverse). I'll take Sam being married to a 'gay white software engineer' as evidence in favor of my hypothesis. Lastly, he is childless. This means no inherent incentive to making the world a better place. IMO, Top 0.1 percentile autists will devolve into megalomania without a grounding soft touch to keep them sane. Sam is not exception and he is the least grounded of them all. Say what you want about Mark Zuckerberg, but a wife and kids has definitely played a role in humanizing him. Not sure I can say the same for Sam.

I personally, struggle to trust people I consider untethered. MBA types, lawyers turned CEOs, politicians. Top 0.1 percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating Power. Power for power's sake. Sam is a perfect archetype.

You know, I feel almost exactly the same way. I just have an seemingly inborn 'disgust' reaction to those persons who have fought up to the top of some social hierarchy while NOT having some grounded, external reason for doing so! Childless, godless, rootless, uncanny-valley avatars of pure egoism. "Struggle to trust" makes it sound like a bad thing, though. I think its probably, on some level, a survival instinct because trusting these types will get you used up and discarded as part of their machinations, and not trusting them is the correct default position. Don't fight it!

I bought a house in a neighborhood without an HOA because I don't want to have to fight off the little petty tyrants/sociopaths who will inevitably devote absurd amounts of their time and resources to occupying a seat of power that lets them harangue people over having grass 1/2 inch too tall or the wrong color trim on their house.

That's just an example of how much I want to avoid these types.

Only recently have I noticed that either my ability to spot these people is keen enough that I can consistently clock them inside of one <30 minute interaction, or I'm somehow surrounded by them because I've deluded myself into thinking I can detect them.

One of the 'tells' I think I pick up on is that these types of people don't "have fun." I don't mean they don't have hobbies or do things that are 'fun.' I mean they don't have fun. The hobbies are merely there to expand and enable their social group, they don't slavishly follow any sports teams, they don't watch any schlocky T.V. series, and they probably also don't do recreational drugs (so not counting, e.g. adderall or other 'performance enhancers.'), although they can probably hold a conversation on such topics if the situation required it.

(Side note, this is why I was vaguely suspicious of SBF back when he was getting puff pieces written prior to FTX crash. A dude who has that much money and yet lives an ascetic lifestyle? Well he's gotta be motivated by something!)

In social settings they're always present, schmoozing, facilitating, and bolstering their status... but you notice they never suggest activities for the group to engage in or expend effort bolstering other group members status.

Because, I assume, they are there solely to leverage the social network to get something else that they want. And if its not 'fun,' if its not 'money,' and it isn't even 'sex' or 'admiration and praise,'... then yeah, power for its own sake is probably their objective.

SO. What does Sam Altman do for fun?

I don't know the guy, but I did notice that he achieved his position at OpenAI not because of any particular expertise in the field or his clear devotion to advancing AI tech itself... but mostly by maneuvering his funds around so that he could hop into the CEO spot without much resistance. Yes he was a founder, but why would he take a specific interest in THAT company of all of them, to turn it into his own little fiefdom?

I think he correctly spotted the position at OpenAI as the best bet for being at the center of a rising power base as the AI race kicked off. Had things developed differently he might have hopped to one of the various other companies he has investments in instead.

Finagling his way back into the position of power after the Nonprofit board tried to pull the plug was a sign of something.

I admit, then that I'm confused why he would push to convert to for-profit structure and to collect 10 billion if he's not inherently motivated by money.

My theory of him might be wrong or under-informed... or he just plans to use that money to leverage his next moves. That would fit with the accusation that OpenAI is running out of impressive tricks and LLMs are going to fail to live up to the hype, so he needs to prepare to skidaddle. It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed, if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.

Anyhow, to bring this to a head, yeah. Him not having children, him being utterly rootless, him having no obvious investment in humanity's continued survival (unlike Elon), I don't think he has much skin in the game that would allow 'us' to hold him accountable if he did something truly disastrous or utterly anti-civilizational. Who is in any position to reign him in? What consequences dangle over his head if his misbehaves? How much power SHOULD we trust him with when his apparent impulses are to remove impediments to his authority? The Corporate Structure of OpenAI was supposed to be the check... and that is going away. One would think it should be replaced with something that has a decent chance at ensuring good behavior.

It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed, if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.

Nobody with a clue thinks that is imminent. All that exists is trained on data, and there's not enough high quality data. Maybe synthesizing it will work, maybe not.

Even the most optimistic people in the know say stuff like "maybe we'll be able to replace senior engineers and good but not great scientists in 5 yrs time". 'Godhead' and superintelligence is just conjecture at this point, thought of course an aligned set of cooperating AIs with ~130 IQ individually could give a good impression of superintelligence. Or be wholly dysfunctional given the internal dynamics.

I dunno, I've read the case for hitting AGI on a short timeline just based on foreseeable advances and I find it... credible.

And If we go back 10 years ago, most people would NOT have expected Machine Learning to have made as many swift jumps as it has. Hard to overstate how 'surprising' it was that we got LLMs that work as well as they do.

And so I'm not ruling out future 'surprises.'

That said, Sam Altman would be one of the people most in the know, and if he himself isn't acting like we're about to hit the singularity well, I notice I am confused.

Human-level AGI that can perform any task that humans can will resolve almost any issues posed by demographic decline in terms of economic productivity and maintaining a globalized, civilized world.

Aschenbrenner is a smart charlatan, he's probably going to do very well in the politics of AI.

My opinion is that the way he has everyone fooled and the way he has zeroed in on the superpower competition aspect makes it clear what he is after. Power. Has he gotten as US citizenship yet? He'll need that.

There's going to be an enormous growth in computing power, possible hardware improvements (e.g. the Beff Jezos guy has some miniaturised parallel analog computer that's supposedly going to be great for AI stuff.. ). But iirc, the models can't really improve easily because there's not the best data to pretrain them, so now everyone is trying to figure out how to automatically generate good synthetic data and use that to train better models, combine different modalities (text/ images etc). All stuff that's hardly comprehensible to outsiders, so people like Leopold can go around and say stuff with confidence.

Likely, yes, but how computationally and energy expensive it's going to be matters a whole lot. Like e.g. aren't they basically near hitting physical limits pretty soon? That'd cap lowering power costs, right?

And scaling up chip production to 1000x isn't as easy as it sounds either. Especially if Chinese get scared and start engaging in sabotage.

It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.

There's an existence proof in the sense that human intelligence exists and if they can figure out how to combine hardware improvements, algorithm improvements, and possibly better data to get to human level, even if the power demands are absurd, that's a real turning point.

A lot of smart people and smart orgs are throwing mountains of money at the tech. In what ways are they wrong?

It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.

To sum it up, to train superhuman performance you need superhumanly good data. Now, I'd be all okay for the patient, obvious approach there - eugenics, creating better future generations.

I'll quote twitter again

The Synthetic Data Solves ASI pill is a bit awkward:

  • Our AI plateaues ≈on intelligence level of expert humans because it's trained on human data
  • to train a superhuman AI, we need a superhuman data generating process based on real world dynamics, not Go board states -…fuck

In what ways are they wrong?

I'd not say they're wrong. Even present day polished applications with a lot of new compute could do a lot of stuff. They're betting they'll be able to make use of that compute even if AGI is late.

And remember, the money is essentially free for them. Those power stations will be profitable even if datacenters aren't, the datacenters will generate money even if taking over the world isn't a ready option. & There's no punishing interest rates for the big boys. That's for chumps with credit cards.

More comments

Thanks for this effortpost overall. It is very insightful.

You don't earn their respect let alone fear, if you aren't scary good. Reminds me of Kissinger in his ability to navigate himself into power. I've heard similar things about David Sacks. Like Kissinger, many in YC will talk fondly about their interactions with him.

I understand what you mean. And this is psychopathy.

Without a tethering to some sort of concrete moral framework (could be religious or not, just consistent over time), these type of people must become "power for power's sake" elite performers. That's bad. That's really, really bad.

No laws are being broken, but how does society call out this kind of behavior when it's channeled in this fashion and not in the "normal" psychopathic way of robbery/murder/rape etc?

I'm not sure we can without any coherent framework around to distinguish between success and virtue.

From where I'm sitting, I think "Oh that's a satanist" and everything makes sense, and I can tell other people that and they get it too.

Saying that he's possessed is a bit more legible to the general public but still sounds anachronistic to most.

Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence.

...Fine, I'll bite. How much of this impression of Sam is uncharitable doomer dressing around something more mundane like "does not believe AI = extinction and thus has no reason to care", or even just same old "disregard ethics, acquire profit"?

I have no love for Altman (something I have to state awfully often as of late) but the chosen framing strikes me as highly overdramatic, besides giving him more competence/credit than he deserves. As a sanity check, how -pilled would you say that friend of yours is in general on the AI question? How many years before inevitable extinction are we talking here?

You are making an "argument from incredulity", i.e. the beliefs of Sam Altman are so crazy that they can’t be real. I don't think this is the case. Many powerful people in Silicon Valley have beliefs that are far outside the Overton Window.

Say what you will about Elon Musk, he is at least pro-human. This is not at all the case for many of his peers. For example, Larry Page and Elon Musk broke up as friends over Musk's "speciesist" belief that humanity should remain dominant over god-like AI's.

The idea that Sam Altman would literally want to destroy humanity to birth in a superior AI life form might sound ridiculous to you. But you don't know these people.

There's a good chance (not 100%, but not 0% either) that we're going to build superintelligence while the "adults in the room" argue about GDP numbers or whatever. If this happens it could make some people (perhaps a single person) more powerful than anyone in history. Do you want Sam Altman to be that person? Because I sure as hell don't.

You are making an "argument from incredulity", i.e. the beliefs of Sam Altman are so crazy that they can’t be real. I don't think this is the case.

The idea that Sam Altman would literally want to destroy humanity to birth in a superior AI life form might sound ridiculous to you. But you don't know these people.

Besides this being a gossip thread, your argument likewise seems to boil down to "but the beliefs might be real, you don't know". I don't know what to answer other than reiterate that they also might not, and you don't know either. No point in back-and-forth I suppose.

There's a good chance (not 100%, but not 0% either) that we're going to build superintelligence while the "adults in the room" argue about GDP numbers or whatever. If this happens it could make some people (perhaps a single person) more powerful than anyone in history. Do you want Sam Altman to be that person? Because I sure as hell don't.

At least the real load-bearing assumption came out. I've given up on reassuring doomers or harping on the wisdom of Pascal's mugging, so I'll simply grunt my isolated agreement that Altman is not the guy I'd like to see in charge of... anything really. If it's any consolation I doubt OpenAI is going to get that far ahead in the foreseeable future. I already mentioned my doubts on the latest o1, and coupled with the vacuous strawberry hype and Sam's antics apparently scaring a lot of actually competent people out of the company, I don't believe Sam is gonna be our shiny metal AI overlord even if I grant the eventual existence of one.

Since this is a gossip thread...

I have a couple friends who genuinely want the extinction of the human race. Not in a mass murder sense as they conceptualize it, but in a create a successor species, give a good life to the remaining humans, maybe offer them the chance for brain uploads, sense. Details and red lines vary between them, but they'd broadly agree that this is a fair characterization of their goals and desires.

Where do they work? OAI, Anthropic, GDM.

I have a fair amount of sympathy for their viewpoints, but it's still genuinely shocking. It's as if you suddenly found out that every government official was secretly a Hare Krishna or part of the People's Temple, and then when you point it out, everyone thinks the accusation is too absurd to be real.

In their defense: why do we care so much about the survival of homo sapiens qua sapiens? We're different from how we were 50,000 years ago, and we'll be more different still in 5,000, and maybe even 500. So what? So long as we have continuity of culture and memory, does it matter if we engineer ourselves into immortal cyborgs or whatever is coming? What's so special about the biped mammal vessel for a mind?

What's so special about the biped mammal vessel for a mind?

The biped mammal vessel. An immortal cyborg is a qualitatively different existence and so it will have a correspondingly different mind.

A 6'7 NBA player has a qualitatively different experience from a 5'1 ballerina, but they're both humans with minds.

if we engineer ourselves into immortal cyborgs

Hubris of the highest order.

We don't let humans so much as stitch up some skin unless they've gone through a decade of training. We don't let new engineers commit new code, unless they've spent time understanding the base architecture. What makes you think we know enough about what it means to be homo sapiens that we can go replacing entire parts wholesale ?

Just look at the last few decades. We put a whole generation of women on pills that accidentally change that characteristics of which men they're attracted to. The last-gen painkillers caused the biggest drug epidemic in the country. The primary stimulant of the century (cigarettes) was causing early death enmasse. We don't know why there is a detectable difference in immunity between c-section vs natural deliveries, and this is a difference of a few seconds. That's how little we know about these flesh-suits of ours. We have no clue what we're doing.

What's so special about the biped mammal vessel for a mind?

Don't take this the wrong way. What I'm about to say is definitely stereotyping a certain type of person.

But, I only ever see internet neuro-divergents ask these sort of questions. To normies, your question sounds like the equivalent of ,"What's so great about fries?". You'd only ever ask the question if you've never enjoyed a good pack of fries or a equivalent food that makes you feel that special thing. It reveals the absence of a fundamental human experience. To a degree, it reveals that you're less human or at least 'dis-abled'.

I'm entitled. I don't think I need to explain what makes some things special. The first day of the monsoon, petting a puppy, making faces at a toddler, a warm hug, the top of a mountain, soul food, soul music, the first time you hold your child, the last time you hold your parent, the first time a sibling defeats you at a game.

In a way, these unspoken common traits are what makes all of us human. I care about the survival of these consistent 300k-old traits, because I cherish these things. And I believe that a non-human would not be able to. Because we aren't taught to cherish these things. We just do. I don't expect everyone to have experienced all of these, in the same way. Civilizational differences mean that specifics differ. But, the patterns are undeniable.

Why do I care about the authentic experiences of my imperfect body and imperfect mind ? Because that is what it means to be human.

P.S: and I am every bit an atheist. Do I have to believe in divinity to believe in beauty ?

Not gonna make an argument here because I don't think there would be a point, but I'll mention that you're doing a great job demonstrating my concerns about atheists.

Well, leaving it at that would be a cheap shot, so,

I don't think I'm my mind any more than I'm my body. Which is to say, yes to both, but there's more going on than that. Also, human beings are uniquely divine, and God is a man in heaven. Human existence and experience are uniquely important, and uniquely destined.

Believe it or not I'm open to the idea that at some point 'we' make the transition to non-organic substrates. I just don't know enough about what actually matters to rule that out. But when people are eager to make the jump to artificial bodies and minds (not that you actually advocated for this), they strike me as dangerously naive in terms of their assumptions.

How sure are you that what we are can be digitized? What, specifically, is valuable to you, and worthy of cultivation? In symbolic terms, which gods do you actually serve?

So you're arguing for qualia and souls, yes? I believe I am my mind, that the mind is computation, and that its computational substrate is irrelevant. I'm honestly baffled by people who hold otherwise --- I want to be charitable, but I'm having a hard time seeing past opposition being ultimately a product of personal incredulity regarding our conscious experience being a worldly, temporal information processing phenomenon.

Our minds are worldly, temporal information processing phenomena, yes. At least mostly, as we experience them. No disagreement there. The question is whether, if and when our minds die, there is anything of us left. I think so.

We have no idea what consciousness is, how it happens, or even why it should ever arise in the first place. Until that's sorted there's a ton of room for other perspectives. Soul of the gaps, sure. That accusation wouldn't trouble me.

Perhaps I could say that I think our minds are so loud in our conscious experience that we fall into the mistaken assumption that everything occurring in our consciousness is our minds. The only way to find out is to die. In the meantime I'm not in a rush to create perfect, immortal copies of my mind which have no internal conscious experience, let the last bio-humans die off, and call it a day.

But I want to repeat the question:

How sure are you that what we are can be digitized? What, specifically, is valuable to you, and worthy of cultivation? In symbolic terms, which gods do you actually serve?

More comments

I'm sure the Neanderthals' last thoughts included "so what, those skinny folks with the funny heads will survive even after they've wiped us out. We shall go gently into that good night."

We're homo sapiens. If we take AI true believers seriously, this isn't hundreds of years in someone else's lifetime; it could be less than ten years before an amoral sociopath unleashes something beyond our control. I plan on being alive in ten years.

I do not happen to think AI (from the LLM model) is likely to be an extinction-level threat (that's a specific phrasing). I do think Sam Altman is a skilled amoral sociopath who shouldn't be trusted with so much as kiddy scissors, and it should haunt Paul Graham that he didn't smother Altman's career when he had a chance.

We're also part Neanderthal. (Most people reading this message in 2024 are, anyway.) Their legacy got folded into ours. Why does their story have a sad ending?

Agreed on jitters about Altman. I'm just pointing out that the AI successor species people kind of have a point.

The companies being a cult is a big part of their strategy.

Information secrecy is top notch, everyone willingly works insane hours and you can get engineers to do borderline illegal things (around data privacy and ownership) without being questioned.

I know a few people at Facebook AI research, MSR and (old) Google Brain. They seem normal. But folks at OpenAI, Anthropic & Deep mind are well known to be ..... peculiar (and admittedly smarter than I am).

There's peculiar people at every part of every company. IME people at deep mind are not more peculiar than those working at other parts of Goog, and I certainly wouldn't describe them as cultists. Can't speak for the other labs.

On further thought, I have met more cultists at some of these companies, but a majority (50%+) have been normal. Also, can't exactly scale those anecdotes up.

With that reflection, I'll take back my earlier comment.

I imagine many people of the more materialist bend are both more likely to be excited by AI and more likely to not believe uploading is extinction (in a way that matters).

Totally in line though with stories about other Silicon Valley leaders.

https://www.astralcodexten.com/p/should-the-future-be-human

Business Insider: Larry Page Once Called Elon Musk A “Specieist”:

Tesla CEO Elon Musk and Google cofounder Larry Page disagree so severely about the dangers of AI it apparently ended their friendship.

At Musk's 44th birthday celebration in 2015, Page accused Musk of being a "specieist" who preferred humans over future digital life forms [...] Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."

Imagine the mind set where this is not a pot fueled friendly banter, but actually a more and more heated argument. Maybe it was blown out of proportion? When Page bought DeepMind, Musk approached DeepMind's founder Demis Hassabis to convince him not to take the offer. "The future of AI should not be controlled by Larry," Musk told Hassabis.

(I don’t quote this to praise Musk, him being humanities champion frightens me a bit, but the misanthropic outlook Effective Accelerationists have.)

Any insight from your friend on why Altman feels this way?

Does it require a special explanation? It’s not actually that uncommon of a view. Well, I suppose it’s uncommon among normies, but it’s not uncommon in online AI circles. A lot of AI hype is driven by a fundamental misanthropy and a desire to “flip the table” as it were, since these people find the current world unsatisfactory.

since these people find the current world unsatisfactory.

There's a lot of that going around.

But it's not really a CURRENT YEAR thing. It's more a strain of religiosity that is inherently anti-human and has been around forever.

These same type of people might have been monks in a different environment.

And that's fine, but also let's not give them any power please.

I mean, I'm one of them. I find the current world unsatisfactory, for a fairly broad definition of "current world". Lots of people do, on all sides of the political spectrum and from a wide variety of worldviews. Table-flipping is evidently growing more and more attractive to a larger and larger portion of the population. Policy Starvation is everywhere.

I get that you have in mind a narrower selection of misanthropic transhumanist techno-fetishists, but I would argue that the problem generalizes to a much wider set.

I have a higher than average strain of consistent misanthropy, but I also ascribe to a weird blend of Catholic moralism and Aristotelian / Platonic virtue ethics - courage being high among them.

I know that sounds pretentious (and it is!) but what this boils down to is I think the world is very fucked up, I am unsure if it can be fixed, but I think we ought to try and the ends do not justify the means because the means become the ends. The only way out of this is through it, and through it with hard work and - by the day - more and more pain and suffering.

What worries me about Altman types is they seem to be operating in both a deceitful and covert way. Covert in that their final objectives are cloaked and obfuscated, deceitful in that they are manipulating current systems to go to those objectives, instead of pointing out that the current systems are fucked up and we should change them or build alternatives.

To be more specific, Altman's lobbying is 100% designed to (a) get regulatory capture for OpenAI and (b) re-direct hundreds of billions of dollars of public money to fund it. And, until today, this was all done with a ton of vibes emitting peace-and-love-and-altruism and "we're an non-profit research company, maaaaaan." It seems like comic book levels of cold calculating hyper capitalist mixed with techno anarchist mixed with millenium cult leader.

Did you read Toilken as a kid? I’ve long taken inspiration from the book which was “do your duty and that which is right even if it seems unlikely to win over evil.”

More comments

Yeah, while no one was looking we reached post scarcity and it turns out to not be so great after all.

We're become a society of Lotus eaters, enabled by ubiquitous drugs and technology.

I'll assert that more technology will not solve this problem. It's absurd to watch people claim that the solution to a society in moral decay is even more wealth, as if all we need to solve our problems is just more material goods.

Yeah, it's fair for people to know that. Friend sees Altman as basically possessed. Gay, atheist, no kids, extremely little attachment to almost anyone, no skin in the human game. Loves machine intelligence and serves it as a deity.

Reminds me of a pivotal scene from the Rifters books.

"Checkers or Chess?"

Echopraxia has a better quote about the posthuman/machine vs human relationship:

“I’ll fight you,” Brüks said aloud. Of course you will. That’s what you’re for, that’s all you’re for. You gibber on about blind watchmakers and the wonder of evolution but you’re too damn stupid to see how much faster it would all happen if you just went away. You’re a Darwinian fossil in a Lamarckian age. Do you see how sick to death we are of dragging you behind us, kicking and screaming because you’re too stupid to tell the difference between success and suicide?

Yes, I see what you mean.

He was a dead end anyway. No children. No living relatives. No vested interest in the future of any life beyond his own, however short that might be. It didn't matter. For the first time in his life, Yves Scanlon was a powerful man. He had more power than anyone dreamed. A word from him could save the world. ... He kept his silence. And smiled.

Altman does have a husband (recently) but who knows what that means to him.

Gossip or not, this is frankly one of the spookiest things I've internet'ed for some time.

"No skin in the human game" is like an all time H.P. Lovecraft banger and I'm pretty sure you just rattled it off the top of your heard. Well done.

Yeah this conversation happened a couple of months ago and it's been... weird, continuing to follow Altman in the news and not sharing the sentiment with anyone. So I guess I was just waiting for a chance to do that. I see a lot of conversations about him and wonder, "Do any of these people know what he's like?" Speculation usually seems to run to his financial endgame, but I don't at all get the sense that he's in it for the money.

A bunch (all?) of the cofounders have jumped shipped recently.

This photo says it all. Or, in other words, if you come at the king, you best not miss. After the coup/counter-coup attempt last year, gwern predicted Mira leaving this year at 75%. This is less jumping ship and more being fired/managed out.

Usually it would be the board who'd be the best positioned to fight against Sam's assumption of total power, but he's already packed it. My only question is how is this legal at all. Probably they've got o2 working on the case.