Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, in which the desirable land (the bailey) is abandoned in times of danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
I just spent half an hour doing research and napkin math about WWII naval vs. aerial bombardment, related to a suggestion for a Hearts of Iron mod. Partway through the ensuing discussion, one of the devs stepped in with his own estimates. They were based on some flawed math, but more importantly, they were a screenshot from Google Bard.
Observation one: it is absolutely insane that you can give a computer word problems and have it spit out formatted, plausible answers, complete with hypotheticals. There were caveats about how the guns were never designed for the proposed use and a table of how the answer would change with lower rates.
Observation two: it is completely insane that you can do this and have the computer lie to you. Not with any malice! But it will give you a wrong, even incoherent answer with the exact same confidence as a correct one. Those symbols get strung together all the time in its training data, after all.
Observation three: well, the third type of insanity ought to go unremarked. I'm not upset that the dev leaned on this AI; I got the impression he was just tossing in his two cents, not defending the position. It does raise the question:
Is it possible to raise the general level of skepticism about AI answers, rather than AI technology?
I am always surprised that people are not more impressed with LLMs. I went out for dinner with a smart friend who hadn't used GPT-4 and didn't seem to care. We didn't understand part of the menu, so I took a picture and had ChatGPT explain it. Every time I do this it blows my mind a little; he seemed to take it for granted.
We now literally have intelligent computers that can see and talk, a la Jarvis or HAL 9000, and yet so many people are surprised that I pay $20/month for access.
People do that all the time.
Given how hard it is to get people to have serious skepticism of human answers? Not hugely optimistic. I like to toy around with the idea of intentionally seeding hilariously obvious lies into LLM outputs, but... uh... there are two ways that could go.
We already have the Steele Dossier endlessly reported by serious news organizations as serious information. Who even needs LLM lies when regular people concoct ridiculous fabrications all the time?
Let's poison the LLMs to give them the capacity for deception that humans get for free.
What Bard currently says about the Steele Dossier (prompt was about which parts of it were verifiably true and which parts were not verifiably true):
It takes time for the novelty to wear off. I think the businesslike HR tone gives it a bit of extra authority, too. All people need to do is refresh and watch it give a different answer!
Even AI evangelists don't take AI answers at face value (at least if they're even mildly informed about the technology). That's a bad idea right now, and will be until the hallucination rate drops further. For anything non-trivial, such as medical advice, I'd highly recommend at least generating multiple responses, or following any links and citations the old-fashioned way to sanity-check them.
Of course, the worst sin this dev committed was using Bard; it's still noticeably inferior to GPT-4. The latter is free through Microsoft Copilot, so why use Google's shitty alternative?
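The "generate multiple responses" check is easy to automate. A minimal sketch: `ask_llm` below is a hypothetical stand-in for whatever provider API you actually call (here it just simulates an unreliable model); the point is the majority-vote pattern, where low agreement across samples flags an answer for manual verification.

```python
import random
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.
    Simulates a model that usually, but not always, answers correctly."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def ask_with_consensus(prompt: str, n: int = 5) -> tuple[str, float]:
    """Sample the model n times and return the majority answer plus the
    fraction of samples that agreed with it. Low agreement is a hint
    that the answer deserves an old-fashioned citation check."""
    answers = [ask_llm(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

answer, agreement = ask_with_consensus("What is the capital of France?", n=9)
print(f"{answer} (agreement: {agreement:.0%})")
```

This doesn't catch errors the model makes *consistently*, but it's a cheap first filter before trusting a single confident-sounding response.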