This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I posted this in response to a previous AI thread but deleted it; I think it has actually aged well given Elon's signature on the letter yesterday and Yud's op-ed:
I am not a Musk fanboy, but I'll say this: Elon Musk very transparently cares about the survival of humanity as humanity, and that care runs all the way down to a biological drive to reproduce his own genes. Musk openly worries about things like dropping birth rates, while also personally spotlighting his own rabbit-like reproductive efforts. Musk is clearly a guy who wants and expects his own genes to spread, last, and thrive in future generations. This is a rising-tides approach for humans. Musk has also signaled clearly against unnatural life extension.
“I certainly would like to maintain health for a longer period of time,” Musk told Insider. “But I am not afraid of dying. I think it would come as a relief.”
and
"Increasing quality of life for the aged is important, but increased lifespan, especially if cognitive impairment is not addressed, is not good for civilization."
Now, there is plenty that I, as a conservative, Christian, and Luddite, would readily fault in Musk (e.g. his affairs and divorces). But from this perspective Musk certainly has large overlap with a traditionally "ordered" view of civilization and human flourishing.
Altman, on the other hand, has no children, and as a gay man never will have children inside a traditional framework (yes, I am aware that many (all?) of Musk's own children were conceived via IVF; I am no Musk fanboy).
I certainly hope this is just my bias showing, but I have greater fear of Altman types running the show than Musks, because they are a few extra steps removed from a stake in future civilization. We know that Musk wants to preserve humanity for his children and his grandchildren. Can we be sure that's any more than an abstract good for Altman?
I'd rather put my faith in Musk's own "selfish" genes, at the cost of knowing most of my descendants will eventually be his too, than in a bachelor, not driven by fecund sexual biology, doing cool tech.
Every child Musk pops out, the more tightly intermingled his genetic future becomes with the rest of humanity's.
In Yud's op-ed, which I frankly think contains a lot of hysteria mixed among a few decent points, he says this:
I'm unclear whether this is Yud's bio-kid or a step-kid, but the point resonates with my perspective on Elon Musk. A few days ago SA indicated a similar thing about a hypothetical kid(?)
In either case, I don't know about AI x-risk. I am much more worried about 2cimerafa's economic collapse risk. But in both scenarios I am increasingly of a perspective that I'll cheekily describe as "You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.
I have a growing distrust of those of you without bio-children who are eager for, or indifferent to, building a successor race or exalting yourselves through immortal transhumanist fancies.
Then he is stupid about it. On average humans have around 20,000-25,000 genes. Within around 15 generations, your family-tree descendants have a low chance of carrying even a single gene of yours. Now, what works is creating bottlenecks: if you are a man, then killing all other men and having you and all the men of your family rape all the women is a good strategy to really spread the genes of your Y chromosome. Your genes will not be diluted if they are the only game in town. Now that I am thinking about it some more, I am really scared of Musk :D
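For anyone who wants the back-of-the-envelope arithmetic behind that dilution claim, here is a minimal sketch. It assumes roughly 22,500 genes and naive halving of your expected genomic share each generation; real inheritance comes in large chromosome chunks, which only makes inheriting nothing at all from a distant ancestor even more likely than this suggests.

```python
# Rough sketch: expected share of an ancestor's genome in a descendant
# k generations later, under simple halving per generation.
# Assumption: ~22,500 genes; ignores chunky chromosome-level inheritance,
# which pushes the "zero genes inherited" odds even higher.
GENES = 22_500

for k in (5, 10, 15):
    expected_fraction = 0.5 ** k          # expected share of your genome in a descendant k generations out
    expected_genes = GENES * expected_fraction
    print(f"{k} generations: ~{expected_fraction:.5%} of the genome, "
          f"~{expected_genes:.1f} genes expected")
```

By 15 generations the expected count drops below one gene, which is the sense in which a distant descendant likely carries nothing of yours.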
How much more of a stake in a future can anyone have than literally being an immortal transhumanist?
These are stakes in different futures for different people. Elon Musk has a perspective on human longevity that I am sympathetic to. When multiple groups of people have different visions of the future, each person is going to align with the leaders who most share their own.
Suppose three tech billionaires all find a genie (it can be an AI genie if you want) who will grant only one of their visions of the future of AI and humanity.
The first wants the fruits of humanity to reach the stars and survive trillions of years. The genie says the way for this to happen is for AI to succeed humanity, which may be destroyed in this process. The first finds this acceptable, echoing "I believe it should be regarded as a privilege to be a stepping stone to higher things". He believes these AI beings are our descendants and the future belongs to them.
The second wants a transhumanist future of long-livedness and maybe techno-immortality. The genie says that for this to happen, human reproduction will have to be bottlenecked to prevent Malthusian destruction. Un-exalted humanity will be culled and may die out, as they will be of little use to the exalted and represent a threat to their resources. The second finds this acceptable, since he has no need for descendants: he will occupy their place.
The third isn't opposed to AI space explorers or transhumanist improvements but mostly wants his children and their children and theirs after to have the option to live their life in traditionally biological ways in peace and prosperity. He wants them to be able to form human families and create new generations. The genie says that this is doable but may altogether prevent or delay the opportunity for AI and transhumanists.
So all three futures are not necessarily incompatible, but only one gets to be prioritized. You can call all three of these a "stake" in the future (though the first much less so), but you can see that each primary purpose comes at the expense of certainty for the other two.
DaeschIndustries and ChrisPratt seem stupefied and angry at the idea that I might endorse the third guy at the expense of the other two, because this isn't dEmOcRaTic. I have my values and want to see them survive. Democracy is not a terminal value. Usually democracy is a great compromise, but on an existential scale it can break down if your real values have an existential bottleneck.
Children exist. Immortal transhumanists do not, and may never.
I find immortal, or at least vastly longer-lived, transhumanism within our lifetime to be more likely than FOOM-AGI societal collapse.
To be honest, I fail to see the step(s) between "database on steroids aces tests made largely to test people remembering various shit" and "my daughter would be murdered before reaching adulthood".
I am not arguing about the likelihood of AI outcomes. Yud seems hysterical to me. But what do I know either way.
I am making the (apparently controversial) claim that I like the idea of someone having the future of their children as a weight in their decision-making algorithm when pursuing transformative technologies.
Well, it's not about being a prophet. Sure, maybe tomorrow a new pandemic sweeps the globe, or the Sweet Meteor of Death finally arrives, or those angels finally get to their musical exercises on their trumpets. I don't see how one could avoid such things, or predict them - unless one is a prophet, and those are rare nowadays. And I am not against "having the future of the children" as a weight. In fact, I am no longer a spring chicken, but I plan to have (barring unpredictable accidents) several good decades ahead of me, so I also have some stake in the future. I just don't see how we got from this to "my children are going to be murdered before adulthood". And while we're at it, if they are serious about it, why are they having children, and why is all they are doing about it bloviating? I am not advocating anything, Heavens forbid, but if they sincerely believe their children will not survive to adulthood, the intensity of their belief does not seem to match the intensity of their action. I guess it's a good thing for us, but still confusing for me.
- Tens of billions of dollars of capital and lots of top talent spend the next 5-10 years making these systems more and more capable.
- They get deployed everywhere because they are way easier to work with than humans.
- Humans have little economic power.
- The world becomes more complex, and full of agents smarter than humans, working full-time to manipulate them.
- Humans are eventually stripped of power, just like we gradually came to dominate every species less smart than us.
Let me guess - you are thinking you have tons of economic power right now? And nothing in current events makes you question this assumption? Or maybe you think you don't depend right now on a myriad of complex machines (though you probably don't think of them as machines, yet they are), and that if any of them goes haywire - e.g. for some reason some CIA analyst decides you are a Bin Laden confidante, or your credit rating file gets deleted by a freak accident, or you criticize your government one time too many at the wrong time - your life wouldn't suddenly become very hard and complicated? So yes, the list of these machines becomes a bit different. That's it?
The world will become more complex - try to explain modern law to an ancient Sumerian, and he'd probably laugh at you and then declare you and all your brethren a society of insane masochists. The complexity of human affairs has been rising for millennia. We are learning to deal with it, though it's not always easy (which is why existential angst is so popular). I still don't see how this means that all children are going to be murdered in less than 20 years.
I'm not claiming all children are going to be murdered in less than 20 years. I also don't think I have tons of economic power right now, and I agree I already depend on complex machines that I already can't understand or control.
I'm saying we're probably giving up what little control we had over the future of human civilization. Maybe a good analogy is: we're inviting unlimited immigration from a country with an unlimited population, whose people are willing to work 24 hours a day for cents per hour and are far more capable, loyal, and dependable than almost any human. Once we start, we'll never be able to stop.
Well, you personally aren't, but Yudkowsky is. Or at least this is implied: since there's no special reason to single out his child from others to be murdered, one can reasonably conclude that whatever applies to his child also applies to all other children.
I don't think so. At least not anything we have. I am also not sure who is in control "over the future of human civilization" right now, because whoever they are, so far they're not doing a spectacular job. I mean, an 18th-century-style war in the middle of Europe? Dudes, you were supposed to be so past that. Our only response to a threat like a pandemic seems to be "let's try fascism; whatever happens, it'd be better with fascism, right?" Our solution to rising energy needs seems to be "let's try to shut down our most effective ways of getting energy and then invest heavily in ways that we know for a fact won't satisfy our needs, and then let's shame each other for having needs. Oh, and destroying classical cultural artifacts along the way doesn't hurt either, just for fun." I don't feel there is any entity or entities doing anything that could reasonably be called being "in control", but if there are - I certainly have no input into that, and no reasonable way to ever get anywhere close to having any input into that. So tell me again what I should be afraid of losing?
We're not able to stop writing, or using electricity, or modern medicine. But that doesn't mean any of those leads us to catastrophic consequences.
That's a good point. I'd like to spend more time thinking about in which senses this is true. However, I do still think we have a lot to lose. I.e. I'd still much rather live in the West than in North Korea, even if neither place has "humanity" in the driver's seat.
Okay, but I'm claiming that AGI will have disastrous consequences, and that the next 6 months or so are probably our only chance to stop using it (just like, as you point out, almost any other technology).
To me, it sounds a bit like those people who have been telling us, since the early 1980s, that AGW will kill us in the next 10 years. Oh, interesting thought: maybe if in 6 months we'll all be doomed due to AGI, we can stop worrying about being doomed due to AGW?
I mean, I do think we can stop being worried about being doomed due to AGW. I realize there have been lots of false alarms by people that are hard to distinguish from each other in terms of credibility. From my POV, all I can do is check my own sanity, and then continue to cry wolf (legitimately, in my mind). I might be wrong and you might be right to dismiss me.
In the ragged parlance of the youth: "human government has been tested and found wanting time and again" is not the own you think it is.
To be less mod-aggravating: just because you feel you have little control over the direction of human society, that doesn't mean you shouldn't be worried about something even more distant and ineffable making your existence look like a burden on the universe. The AI has a more plausible shot at Total World Domination than the slapdash great power competition that we've been under for a few centuries.
I don't think it's the "own". I think if you say "it will kill us all because X would happen" and I say "X is already happening and we are not dead yet", then that's a pretty good argument that the direct causal link between X and killing us all has not been established.
I am worried about a lot of things. Death, disease, government turning fascist, that kind of thing. If AI ever reaches the level of being worry-worthy, sure, I'd worry about that part too. I am just not seeing the "my child won't survive to adulthood" part.
That needs to be established. High-IQ nerds think being able to do some tasks fast means total power, but somehow I don't see too many high-IQ nerds in power. Even among people with Tons of Money, not everyone is a high-IQ nerd, and the percentage of billionaires among high-IQ nerds is not as large as they'd likely wish. So even if they built an electronic super-high-IQ nerd - which so far isn't proven at all, though it seems more plausible than before - I am not sure it is established that this means Total World Domination.
I have a permanent and maximal distrust of people with poor arguments that boil down to them being more entitled to make decisions about political matters because of whatever they think is important about their life and beliefs.
If you, as a parent, can be short-circuited with some Machiavellian «think of the children» pandering to support illiberal policies (as always happens with e.g. encryption – «Secret chats they say, now what if your child were sexually exploited, eh? wouldn't like that, huh!?»), this is reason to dismiss your opinions, not the other way around. If Yud (childless afaik) thinks he gets to decide for humanity because he's a «decision theorist» and very high IQ, that's retarded as well – he has to actually argue his case. Effective altruists claim to have a stake in quintillions of future humans, far beyond what you say. And they, too, have to do their homework and bring persuasive arguments to the table.
This is just the basic premise of a democracy. Loudly proclaiming your values is at most a rallying cry, it doesn't automatically convince anyone not yet convinced. A Chechen clan elder, a fast-breeding African billionaire with dynastic ambitions, an Orthodox Jew who dreams of the Messianic age for all of Israel, a gay atheist Jew tech bro who doesn't want to die ever, a Germanic trans activist running a deranged intentional community with the goal to liberate the trans-proletariat worldwide, a Russian immortalist who thinks death is the crime of gods and must be undone in general – any and all of them can claim to have a uniquely legitimate stake in the future. You disqualify some kinds of stakes using ad hominems stemming from your values and instincts – «unnatural», «fancies» and so on… mere rhetoric. You can be dunked on just as well with similar ad hominems. By the way, for a Christian, you are a tad too clannish and evolutionarily minded in your outlook; is this a dissident right thing or what? Do you only care about fertile members of your immediate family, God's man?
What you say is just so much special pleading. It's not clear why you have a stake in the health of the whole polity, sans contingent factors; clearly your value system, as described, allows society to turn to shit so long as your own descendants – mixed with Musk's powerful seed, I guess – prosper through it. When having to choose, you'd go the way of Lot rather than try fixing Sodom, would you not? And while in Sodom, you'd rather build a tall fence and exploit the degenerates around you, funneling wealth into your children's futures.
At least, why should anyone expect otherwise after this post?
I don't trust Altman, I don't trust Yud, and I don't trust you for the exact same reason. You cannot be bothered to obscure the self-serving, gratuitously unprincipled nature of your words.
P.S. I'm pretty sure there's no evidence that people with children act like they have more of a general stake in the future of the group/nation/humanity, beyond the trivial and narrowly nepotistic sense; if there is a difference in some society, this might be explained by self-selection, but then the dysgenic trend suggests we could see a negative correlation, if anything. I can't find the studies, though, and they're probably trash anyway. metaphor.systems should help if you're interested.
If you don't have children and want to become a transhumanist immortal being, you shouldn't trust me (hypothetically. In reality, I have no power or agency and wouldn't make enemies over something I can't control).
Self-serving? Of course! So are all of your positions. Look, I like liberal democracy, but I like it because it serves the world well, myself and my family included. At the point where it doesn't, I don't have to religiously hold libertarian values.
Unprincipled? Absolutely not. This is a bullshit attack. My principles are based on values you disagree with. My positions, which extend from my principles, may be extrapolating from faulty data or faulty predictive ability, but they exist. My principles are primarily toward the flourishing of my children and of the existing human race. I think people with kids also have some extra buy-in there. People without kids who want to appeal to democratic ideals, and then use that to gamble with the future of those with kids, are less allied to my worldview.
Now, I also have some WEIRD lifestyle-preservation impulse. Because I do not come from Russia like you or India like selfmade, I am less inclined to rock the boat of my 'good life'. However, it is my Christian belief that lets me know that this particular self-interest is not morally acceptable past a very limited point. If you told me I could push a button that would preserve my lifestyle but keep the third world in poverty, part of me might like to, but I would not. Is that self-interest somewhat laundered through the 'altruistic' interest of my children? Yeah, and admittedly it becomes dicier there. But your interest in democratic ideals is likewise a laundering of your own self-interest.
You and ChrisPratt both took the "cheeky" line too literally. I do not actually advocate a policy where only people with children get a stake.
Much more seriously, I am noting that Elon Musk's perceptions and goals for humanity are more readily parsable and agreeable to my POV than a childless technologist's. Elon Musk has expressed a lot of views about human concerns that I (perhaps wrongly!) recognize as informed by the worldview of a parent, and that is a comfort against the rhetoric I find coming from a lot of other people. I said in my post that it could even be a product of my own bias; extrapolating too far gets you what I called a "cheeky" heuristic, not an actual governance suggestion.
That folks without kids are so immediately hostile to the idea that folks with kids want to put the interests of their kids forward is one of the biggest redpills against the techno-liberal worldview. I used to buy the common argument in such circles that "think of the children" is an emotive backdoor to authoritarianism, until I had children to think of. That doesn't mean I am an infinite safetyist. But it means I can recognize and reciprocate when other leaders are clearly thinking of the children.
Which I won't, but more due to your rabid tribalism and clear bad faith than these differences. I'll note that I've always wanted to and still hope to have a large traditional family besides living forever as an uplifted human (the question of whether this, combined with my values and probable tolerance for further self-alteration, would initiate a slide into profound non-humanity and disconnect has concerned me since, like, age 6), but that's neither here nor there.
No. If you admit this, you concede that your arguments about «stake» are disingenuous. I do not have to concede anything of this sort.
I also don't worship democracy. The point of my comment about democracy is that there is no agreeable external standard of a «good vision». Everything resolves either with a negotiated consensus or with a power competition that ends in more or less disagreeable and unequal compromises. We don't have power struggles here, so you've got to argue why your offer is better even by the standards of others. Perhaps you can address their higher-order values, showing why your standards allow for those to be satisfied better. Maybe you can offer some concession. Doubling down on asserting that your stuff is gooder and you are gooder is not productive.
Most irritatingly, there's a clever bait-and-switch in the definitions of stake that you use.
Here, you claim that your vision advances the common good simply because it is… good. Also aligned with people you agree with and whose satisfaction is more important by your account. So it's a «stake» not in a future where humanity thrives, but in the particular future with a version of thriving you prefer for your internal reasons, in a word – a preference. Okay. Naturally everyone thinks his preferred values are the best, else he'd have abandoned them. But this is just circular. This isn't a serious attempt to persuade: you ask that your collective values be respected (and in practice, you clearly hope to preclude the realization of other values), and if your numbers are sufficient, you demand that they be given supremacy. (You also clearly desire incompatibility – with the presumption your party will come out on top and snuff out others – because you find other visions morally abhorrent, a negative irrespective of contingent factors; you have a stake not simply in the future where baseline humans can peacefully exist, but where others cannot. But that's okay too. Most people this serious about religion are genocidal in their heart of hearts, I think, and for the most part they can behave themselves).
However, in your original comment, you did try to persuade. You argued that your political preferences, and those of other parents, are inherently more deserving of trust because your values and traits, chiefly having children (and wanting yourself and them to die, for whatever reason), give you «a stake» in the common long-term flourishing of humanity: according to this logic, you have skin in the game and it gives you an incentive to make more responsible choices than others, in this context, apparently wrt AI progress. This is how I understand e.g. the following.
I counter that this is bad psychology. Why would Altman (or me, or selfmadehuman, or even fruitier types in my list above) have less of a subjective stake? If he personally intends to be present indefinitely, he totally has a massive stake; we aren't debating whether his plan will work out but simply whether his idea of his stake in the future motivates him to act responsibly to effect less risky outcomes for the common good, in this case lesser odds of a rogue AI wiping out humanity like Eliezer fears (it sounds improbable that a misaligned AI would wipe out everyone but Altman; I'll leave the topic of Altman-aligned omnicidal singleton aside, though it is important in its own right).
Perhaps your brain is overloaded with oxytocin and so you feel that, since Altman doesn't have children like you do, he cannot act seriously: children are obviously (to you) the most valuable existence in the world, more important to you than you are, and Altman is not tethered to anything as important. I can easily believe that Altman cares more about his livelihood than you do about your entire family combined, and thus has a greater «stake». In any case, this is just psychological speculation about the magnitude of perceived value from humanity not getting whacked. I cannot look into your head any more than I can look into Altman's. I could also argue that Christians cannot be serious consequentialists, nor give much of a shit about preventing Apocalypse ≈indefinitely, and their stake is phony since the whole premise of their faith is eternal blissful immortality conditional on faithfulness to some deontological rules; so even Altman with his assumed materialistic egoism is more reliable. I won't, because this is an entirely worthless line of debate.
Can you appreciate the difference and why equivocation between those senses of the stake would irritate?
More mundanely, the society simply respects parents because through their procreation it perpetuates itself (also because this signals some baseline competence, under non-dysgenic conditions at least); and parents are hardwired to egoistically demand a greater share of the common pie – a greater stake, one could say – on behalf of their progeny, cowardly submit to any intimidation when that seems to protect their children, psychotically denigrate, belittle and rip off childless people (who end up feeling irrational shame) and do other silly things. This might be necessary for the system to work and, in fact, I've recommended doubling down on such uncouth behaviors.
Personally I am constitutionally incapable of feeling shame for being correct, though.
What do you think about encryption backdoors or bans based on "protecting children"?
Don't you want to become an immortal god or something?
Based on what I've been able to gather about your worldview and your arguments, the guy whose sole argument is "I want my children to be happy in a world that looks like the one we have now" is already starting off with a better and more convincing argument than yours - and I say that as someone who doesn't have children. You might be underestimating how naturally appealing that perspective is; you'd need pretty strong arguments on your own side to overcome it.
I think the X risk is real (eventually), and also that the automation problems are real.
Having said that, I cannot think of a worse way to handle it than a very public moratorium on development. All this means is that the people who actually abide by it are the ones we should be pushing to develop the AI — because they actually care about things like X-risks. Which means that the first AI capable of doing the Skynet thing is being developed by people who don't care. China probably doesn't care about X-risk. The guy making a killer app doesn't. The CEO of megacorp doesn't.
I’m not really impressed by the skin in the game arguments. It’s reductive of the human experience. People are perfectly capable of caring about things that don’t affect them personally. I don’t need to personally have kids to care about kids, just like I don’t need to have an elephant to care about elephants. In fact, I think in some cases it makes for worse decisions because you cannot be quite as objective about things that affect you. If I knew that some law would cost me money, I would oppose it even if it would be objectively better. I’d cheat and Moloch would be pleased with me if I thought I could gain by it.
Then why are you proposing to leave it up to Sam Altman?
I would really like to hear a better way to handle the risks if you have any ideas.
I think honestly I’d boost the production of aligned AI by paying a bonus to the people who develop such a thing. AI cannot be stopped, as the first successful company to develop AI to commercial viability will dominate its industry, and the first country to do so will be the hyper power of the next century and maybe beyond. By paying a premium on producing an aligned AI that wouldn’t harm us, I think it increases the chances of getting past the Great Filter in one piece. Having the first AI come from people or places that don’t care about that increase X-risk. So getting the good guys to win (those concerned about the potential risks of AI) lowers X-risk, while moratoriums increase the risk.
Whoever produces real AGI has already won before you pay this bonus. You should really be thinking of it more in terms of risk portfolios. If you are producing an unaligned AI, there is a rising risk that a predator drone is closing in on your location. This is the kind of risk that can properly be modeled by venture capital firms. Balance the regulatory strength to the point where the mandatory insurance policy and its auditors keep the whole thing in check.
Thanks, that's a reasonable proposal and rationale. The thing is, it's not clear to me in which sense OpenAI, as an entity, effectively cares about X-risk. I say this knowing many OpenAIers personally, and that they certainly care about X risk. But what realistic options do they have for not always taking the next, most profit-maximizing step? I realize they did lots of safety-checking around their own release of GPT-4, but they also licensed it to Microsoft! I know they have a special charter that in principle allows them to slow down, but do you think their lead is going to grow with time?
I mean, the first person or company to develop an AI capable of being used in production basically wins. And given that Microsoft owns a copy of GPT-4, I think that gives them an advantage. Keep in mind that most things in computing tend to scale exponentially, not linearly, so the lead will only grow unless someone creates something more powerful. Since this is the situation, moratoriums simply don't work for the same reason other types of ban don't work — the only people who abide by the ban are the lawful-good types who would be most likely to pause if they saw something they thought was dangerous. Others won't bother checking for safety, or if they did, they wouldn't stop a project over a safety concern.
I'll call your 'don't get a say on AI development unless you have young children' and raise you 'you don't get to have a say on abortion unless you have a uterus' or 'you don't get a say in gun control unless you own an AR-15' or 'you don't get a say in our adventures overseas unless you serve(d) in the military.'
What's the general principle you want to employ here, and if you want to restrict it to certain use-cases, what's your rationale? In theory we should all have a say in all aspects of how our society is run. Maybe in practice we don't want the specifics of highly technical questions like the storage of nuclear waste to be decided by referenda, but self-determination and broad involvement of the populace in moral questions seems to be a fundamental value of the western political tradition.
I think two of your analogies would be better formulated here to be more, well, analogous to the OP:
-"If you live in a gated community with quick police response, you shouldn't push so hard for gun control."
-"If you had a son who was eligible to be drafted into the military, would you support military intervention as eagerly as you do?"
You're taking what was explicitly called out as a cheeky framing of a heuristic - why I trust Musk more than Altman, and people with kids more than single people, when talking about the future of civilization - and asking me to generalize it into a principle. But sure, let's play with it.
All three of your examples are Mad-Libs fallacies: they are written the same way, but actually point at the opposite of my argument (if taken as a 'principle').
'you don't get to have a say on abortion unless you have a uterus'
'you don't get a say in gun control unless you own an AR-15'
'you don't get a say in our adventures overseas unless you serve(d) in the military.'
The more accurate analogy that fits with your examples is something like "You don't get a say on AI unless you are working on AI" or own an LLM or something.
But again, that is very far away from what I said. None of those examples are formulated to capture what I was talking about. They all angle at direct experience with the subject, with the partial exception of the abortion one, but that will quickly develop into an abortion debate.
Your examples are about agency in the policy based on exposure to the tools, while mine was agency based on the effects of the outcomes. Again, the abortion one only follows if you argue that the baby isn't a party with exposure.
So this is the part that I disagree with, and that my first round on the Motte helped disabuse me of. AI risk is a good example of where this kind of libertarian ethic breaks down.
My "general principle" looks something like this, though it's really a heuristic, not a principle:
- If you are farming the commons, appeals to axiomatic autonomy and unlimited self-determination are weak.
- EVERYTHING you do is farming the commons, though unequally weighted.
- The more something farms the commons, the more it should be determined by those whose commons are affected than by the farmer's desires.
- Something about how, if you extend this to longtermism, you've gone too far.
And you wrote a meandering post that went through Yudkowsky, Musk and Altman to conclude with being more concerned about economics than x-risk and why you and yours with children should have a say with regards to the future while those of us with 'transhumanist fancies' instead of children should not. Can you blame me for focusing on the only sentence in your post that was bolded when trying to distill a thesis?
I mean, I'm assuming there's some kind of framework behind your beliefs. You don't need a generalizable principle, but there needs to be more substance to your argument than "I have children and you don't therefore I decide" if you want to change my mind.
Fine, I'll lay some cards on the table instead of being a pain in the ass.
Reasonable arguments seem to be that people should have a say in the decision-making process if (1) they will be affected by the outcomes or (2) they have significant expertise in the area such that we think they'll make better choices than average Joe. I can imagine arguing for a flat system ('one person one vote') or a technocracy (decisions made by committees of experts) and our society falling in between.
Example (i), abortion: Women will obviously be affected by the outcome of the abortion discussion to a greater extent than men. Certain people would argue that they also know more about it than men (I can vouch that my female friends with children are certainly more intimate with the details of pregnancy, childbirth and nursing than their husbands) but that's a rabbit hole I'd probably rather avoid so you can strike it from the record if you like.
Example (ii), guns: AR-15 owners are obviously affected by the outcome of gun control regulation (confiscation of their arms) and arguably more knowledgeable about at least the mechanics of shooting and gun ownership.
Example (iii), military: Active military obviously have more of a stake in foreign policy decisions given that they'll be the boots on the ground, and seem highly likely to know more about the military and foreign engagements than your average civilian.
So no, I disagree with this statement:
Each of those examples has a stakeholder that will be deeply personally affected by the policies in addition to having more (as you put it) direct experience with the subject than the average person.
Perhaps I misspoke by saying 'self-determination.' A say in the direction of the community and nation-state in which they live may be more accurate.
Can you explain what you mean by farms the commons, and concretely what that refers to in this case? It carries connotations of private enterprise benefiting from subsidies or avoiding dealing with the externalities of their actions, but I assume that's not what you're going for here.
I think this splits too quickly into a discussion about principled views that I would be happy to have under separate cover. I'd rather revert to my only real point: that, as a parent, the concern of other parents for their children is a force of commonality and a potential for alignment. I recognize that in Elon Musk to an extent, and I was surprised to see both Eliezer and SA express the sentiment that at least the child of a loved one is top of mind for them.
I am, of course, aware of the ways appeals to children can be an emotional camel's nose into the tent of control. But my perspective is to ask why it works, and whether that reason is not always wrong.
People with kids, and people with traditional families (neither Elon nor Eliezer has the latter), are going to weigh future planning differently than those without. Am I, someone invested in the survival of the traditional human family, wrong to prefer that the leaders and those with power over transformative technology share my experiences and values?
Generally speaking, I want leaders and decision makers and people with power to share my values (as does everyone, everywhere, all the time; just because the liberal's values are liberalism doesn't mean their desire for leaders to prioritize liberalism isn't the exact same impulse). AI is not an exception to that, and might rather be the most extreme case in my lifetime.
I see your "skin in the game" argument, but as a counterpoint, I have always distrusted parents when I hear them say things like "I'd happily vaporize half of humanity to save my own children." I understand the sentiment, but it doesn't make me think having children means you are more invested in the future thriving of humanity. It just means you are invested in the thriving of your own progeny, regardless of what it might cost others.
I understand, and am generally inclined to accept this point. Mostly, it is my own tribe with children whom I trust the most. However, tribes are concentric circles, and here I don't think your objection applies if we are talking about existential crises.
I am sure that Elon Musk would do a lot of (indirect) harm to me to see his children survive and thrive. But if AI has the possibility to conquer or destroy the whole world, or flip the whole Western economy into unpredictable pieces, then my entire point is that a parent saving their own child will necessarily care about saving at least some of humanity. Certainly more so than a singleton who abstractly reasons that a successor race of AI reaching the stars actually extends our legacy best.
If you have a button that might blow up the whole world or give you riches, I believe the risk reward calculus changes if you also have children and moreover the way you approach the question is less alien to me.
On the other hand, in the case of mere economic board-flipping, I think it's no coincidence that the two most hyped transhumanists on the board are non-Western expats from their own countries, at least one of whom has no children. I trust the future most to those with families invested in a Western, Catholic, American life, and move outward from there.