Transnational Thursday is a thread for people to discuss international news, foreign policy or international relations history. Feel free as well to drop in with coverage of countries you’re interested in, talk about ongoing dynamics like the wars in Israel or Ukraine, or even just whatever you’re reading.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where a desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
I'm not sure what is actually being reported here. So far I see two facts being alleged: that Israel is using an AI system to figure out who might be a Hamas operative, which is supposedly 90% accurate (a spectacular number if true, to the point that I suspect it's over-optimistic), and that Israel is using phone tracking to locate specific suspects. The latter has nothing to do with AI. As for the former, I'm not sure what is supposed to happen after a person has been identified as "90% likely to be a Hamas operative". The article uses phrases like "automated kill chain", but there's no evidence, or even an allegation, that such a thing exists in any meaningful sense of the word "automated". Do they mean that once the system identifies a person, he is automatically targeted by some killing machine without human supervision? If so, why don't they say so explicitly and describe what this system is and how they know about it? If not, then what does "automated" mean?
And on the other hand, I'm not sure what kind of oversight you could put on such a system. Let's assume you really did have a system that can tell you, with 90% probability, whether a given man in Gaza is in Hamas. Now, how would you verify it? Obviously, if you had some better system, you'd use that one from the start. You could review the data yourself, but do you have better than 90% accuracy? Sure, if you spot some obvious bug in the system, on the level of "black vikings" and other hilarious failures in public LLMs, you can block that. If the system marked every man named "Muhamad" as a Hamas member, you could notice that and overrule it. But suppose nothing like that turns up. Moreover, suppose you tested it 100 times, went out and captured those men, and 90 of them admitted that yes, they are in Hamas, or you found Hamas membership cards on them, and so on; in other words, assume the 90% figure holds up. How do you oversee the system then? Verifying each person manually is impossible: there are something like 50,000 of them, most of them are in hiding, and it's impossible to verify anything about them until they are either captured or dead. So what do you base your supervision on?
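As a side note on why "90% accurate" is such a slippery figure: what it implies depends heavily on which quantity the 90% refers to. Here is a minimal back-of-the-envelope sketch, with every number assumed purely for illustration (none of them come from the article), showing how a classifier that is 90% accurate on both operatives and non-operatives can still flag mostly non-operatives:

```python
# Back-of-the-envelope sketch: all numbers below are assumptions for illustration,
# not figures from the article or from any official source.
population = 2_300_000     # assumed population of Gaza
operatives = 40_000        # assumed number of Hamas operatives
sensitivity = 0.90         # assumed: 90% of actual operatives get flagged
specificity = 0.90         # assumed: 90% of non-operatives get correctly cleared

true_positives = operatives * sensitivity                        # flagged and actually operatives
false_positives = (population - operatives) * (1 - specificity)  # flagged but not operatives
flagged = true_positives + false_positives
precision = true_positives / flagged                             # fraction of flagged who are operatives

print(f"people flagged:              {flagged:,.0f}")
print(f"of those, actual operatives: {true_positives:,.0f} ({precision:.0%})")
```

With these made-up inputs, only about 14% of the people the system flags would actually be operatives, even though the system is "90% accurate" in both directions. Whether the reported 90% means per-class accuracy or the fraction of flagged people who turn out to be operatives (which is what the capture-100-and-check experiment described above would measure) changes the picture entirely.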