Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where a desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
If someone thinks that some goal is so important that all tools are justified and ethical concerns are unimportant, then it will not go well.
I despise such hand-wringing over whether or not something as basic as cause prioritization is warranted. The question is whether it's true, and everything follows downstream of that.
Do you deny the general principle that some things can be considered to be more important than others? If not, then your issue is with the object level arguments for why AI is the most pressing issue of our time. Anyone who doesn't see the blistering speed of progress and the obvious issues arising from us creating something smarter than us that we are not ~100% sure we can control is, to put it bluntly, not making full use of even their own human intelligence. I don't trust their judgment of what a superhuman one would do.
Otherwise it's like going "Oh no, won't someone think of the clogged toilets!" when your ship is about to hit an iceberg. Humans have been trading things off against each other for as long as we've existed, and I don't want to waste both of our time by giving a billion examples of it being true.
Cause prioritization is entirely fine; deciding that anything is justified to reach goal X is not.
Who exactly says "anything" is justified? That's a strawman if I've ever seen one.
Even Yudkowsky claims that dropping bombs on data centers is justified, not that we should blow up the entire planet in advance or return to the stone age.
Serious problems justify serious solutions; that's the whole point.
Maybe I misunderstood, or extrapolated it too far.
Fair enough, but I'd like to reframe your concerns with a hypothetical example:
Imagine we spot an asteroid on some deep-space scan that has a significant non-zero chance of hitting Earth within a decade and causing a mass extinction event. For anything but sub-1% odds, any necessary intervention should take precedence over everything else.
As for AI, plenty of people think the odds are much, much worse, and the timescales shorter.
My position is that some basic and minimal rules should be upheld, for several reasons:
1. Many ethical positions are actually coordination rules: a society with random murder, rape, and looting is simply less efficient than one that manages to avoid such destructive tendencies (and while you can claim that some external looting may be efficient, it has gotten less efficient over history, and for an asteroid impact we would want global coordination anyway).
2. If scenario X gives unlimited power to the powerful, they will happily invent a fake scenario X or exaggerate a real one; we should limit incentives for that.
3. There are many ethical positions that I would not want to abandon, even if someone credibly claimed that abandoning them would have good consequences (I do not care how much convincing sophistry is applied to argue that slavery and rape should be legal; I am going to oppose it anyway, even if superintelligent aliens arrived and announced that it should be done).
4. Scenario X may be based on a serious mistake and not actually apply.
For an asteroid impact: I would accept a 50% asteroid tax; I would not accept slavery or outlawing criticism of the government.
In general I would not accept "any intervention necessary", as it often results in counterproductive interventions or utterly unneeded evil. Though I have no great illusions about my potential influence, and I would probably be convinced to support stupid policies anyway; lockdowns initially seemed like a good idea to me (I have not yet examined whether it made sense to start them or whether it was stupid/evil/based on pure panicking).
Note that we have had several cases in history of scenarios (2) and (4) happening.
But that is because those are just generally bad and probably won't stop the asteroid, not because unilateralism is bad, so I don't see what's wrong with the original premise that 'AI seems to eat other ethical concerns on a large scale'.
The problem is that some people would sincerely believe (or lie) that stuff like that is necessary.
Some policy being murderously stupid does not mean that it will not be enacted. Rejecting blatantly evil and unethical policies is far from foolproof but provides some coordination against really terrible ones.
(do I need to provide examples of tragically idiotic and evil programs enacted by governments?)
It only seems to if you accept some pretty specific premises -- all of which seem fantastical to the population at large.
It's like saying 'the prospect of burning in Hell seems to eat other consequential concerns' -- it sure does! But only if you believe in Hell.