Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
Who exactly says "anything" is justified? That's a strawman if I've ever seen one.
Even Yudkowsky claims that dropping bombs on data centers is justified, not that we should blow up the entire planet in advance or return to the stone age.
Serious problems justify serious solutions, that's the whole point.
Maybe I misunderstood, or extrapolated it too far.
Fair enough, but I'd like to reframe your concerns with a hypothetical example-
Imagine we spot an asteroid on some deep-space scan that has a significant non-zero chance of hitting Earth within a decade and causing a mass extinction event. For anything but sub-1% odds, any necessary intervention should take precedence over everything else.
As for AI, plenty of people think the odds are much, much worse, and the timescales shorter.
My position is that some basic and minimal rules should be upheld, for several reasons.
1. Many ethical positions are actually coordination rules: a society with random murder, rape, and looting is simply less efficient than one that manages to avoid such destructive tendencies. (And while you can claim that some external looting may be efficient, it has grown less efficient over history, and for an asteroid impact we would want global coordination anyway.)
2. If scenario X gives unlimited power to the powerful, they will happily invent a fake scenario X or exaggerate a real one; we should limit incentives for that.
3. There are many ethical positions I would not want to abandon even if someone credibly claimed that abandoning them would have good consequences. No matter how much convincing sophistry is applied to the claim that slavery and rape should be legal, I am going to oppose it anyway, even if superintelligent aliens arrived and announced that it should be done.
4. Scenario X may be based on a serious mistake and not actually apply.
For asteroid impact: I would accept 50% asteroid tax, I would not accept slavery and outlawing criticism of government.
In general I would not accept "any intervention necessary", as it often results in counterproductive interventions or utterly unneeded evil. Though I have no big illusions about my potential influence, and I would likely be convinced to support stupid policies anyway: lockdowns initially seemed like a good idea to me (I have not yet examined whether it made sense to start them, or whether it was stupid/evil/based on pure panic).
Note that history offers several cases of scenario (2) or (4) actually happening.
But that is because those measures are just generally bad and probably won't stop the asteroid, not because unilateralism is bad, so I don't see what's wrong with the original premise of 'AI seems to eat other ethical concerns on a large scale'.
The problem is that some people would sincerely believe (or lie) that stuff like that is necessary.
Some policy being murderously stupid does not mean that it will not be enacted. Rejecting blatantly evil and unethical policies is far from foolproof but provides some coordination against really terrible ones.
(do I need to provide examples of tragically idiotic and evil programs enacted by governments?)
Okay. Some people will lie and make up foreign threats of aggression to justify wars and military buildups. Yet we still have a military. And that does mean that we often have thousands to millions of unnecessary deaths in unnecessary wars, but it still beats not having a military and then getting conquered by whoever feels like it. During wars you do have to suspend freedom of expression and freedom of movement to win, along with all sorts of underhanded things that are bad during peacetime. That too is often done in unnecessary ways, but it's still done.
This is still analogous to the asteroid situation. It's worth making sure the asteroid isn't something someone made up, or a distributed mistake. But asteroids exist sometimes, and if they do exist it's worth putting your all into not having the asteroid hit.
And AI is worse than the asteroid in this case, because good outcomes aren't 'everything continues as normal', but 'AI everywhere and everything but good somehow', and nobody's really worked that last part out yet.
To be clear, I didn't actually advocate anywhere for 'the government forcing everyone to work on AI'. I just said that it seems to eclipse most other ethical concerns. I simply don't see why that stops being true even if it makes coordinating harder. Asteroids also make coordinating harder, but as before, they still exist.
I am not a pacifist; I do not consider having a military unethical. But if someone starts saying "we totally must murder all X, it is important, and ethics should be ignored" (I know, it is almost never so blatant), then I hope I would not support that.
"Eclipses most other ethical concerns" is already much better than "does seem to eat every other ethical concern".
But I would still be highly suspicious of such claims. "The goal justifies all methods" has repeatedly caused severe problems. And I am not convinced that ignoring most or all ethical concerns would actually help solve the AI problem.
Not always, and not fully.
It only seems to if you accept some pretty specific premises -- all of which seem fantastical to the population at large.
It's like saying 'the prospect of burning in Hell seems to eat other consequential concerns' -- it sure does! But only if you believe in Hell.
A century before they arrived, the population at large thought the telegraph, cars, oil, artillery, fighter jets, electricity, nuclear bombs, computers, and neural nets fantastical. They still came, and clever people predicted them.
Along with the flying cars, interplanetary (manned) spaceships, and other things that clever people predicted -- I honestly think the popularity of science fiction and of AI Doom scenarios in the rationalist community is not a coincidence. But 'would make a great science fiction story' is not a good predictor of 'is likely to happen IRL'.
Flying cars exist! There are multiple brands! They're just not very practical relative to cars/trains/planes. Link.
Interplanetary manned spaceships are clearly technically possible.