Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases. Our goal is to optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most controversial topics and are the most visible aspect of The Motte. However, many other topics are appropriate here. We encourage people to post anything related to science, politics, or philosophy; if in doubt, post!
Check out The Vault for an archive of old quality posts. You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork, common in early medieval fortifications. More pertinently, it's an element in a rhetorical move called a "motte-and-bailey", originally identified by philosopher Nicholas Shackel. It describes the tendency in discourse for people to retreat from a controversial but high-value claim to a defensible but less exciting one at any sign of resistance. He likens this to the medieval fortification, where desirable land (the bailey) is abandoned when in danger for the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it. A submission statement is highly appreciated but isn't necessary for text posts or links to largely-text posts such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
Is there some esoteric force sabotaging Google's AI projects?
First there were the black Vikings, now there are random silly screenshots from their search AI. I suspect much of it is inspect-element fakery, but the meme has been bouncing around regardless. There's a thread here: https://x.com/JeremiahDJohns/status/1794543007129387208
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
I've been using Brave, which has had a similar feature for some time. Brave's AI is generally useful and right maybe 75% of the time, though you can never quite be sure. When it is wrong, it's never 'yes, doctors recommend you smoke cigarettes while pregnant' tier wrong, but then I don't ask many questions that could go disturbingly wrong. For those who use Google: are the silly results plausible, cherry-picked, invented? Is Microsoft using GPT-5 bots to sabotage the reputation of their competitors?
This is completely unfounded, but I suspect internal passive sabotage by Google engineers who don't like the dominant internal politics but don't feel safe saying anything about it. Not precisely "quiet quitting", but more like a subtle "Amelia Bedelia rebellion", where they do exactly what is required while their actual goal is to make the people running the place look like fools.
I was part of the 2023 Google layoff, and still have a lot of friends at the company. Everyone is nervous and stressed as the layoffs continue. Remember (or discover) that 2023 was the first time Google laid anybody off; prior to that, if your job disappeared you'd get 6-12 months to find a new one within the company. The Google engineers I know are all trying to keep their heads down and just do their job right now.
So I don't think it's likely that any coordinated group of Googlers is purposefully allowing these fuckups. Instead, what I think happened is that Google grew up with teams of rockstar nerds who cared about the company, and a culture that allowed them to call out shit when they saw it. This was the culture that made Damore feel like he could and should write that memo, and the culture you can read about in Schmidt's book How Google Works. That culture stopped, and Google shifted from being mostly rockstar nerds to being mostly rockstar PMC nerd-managers. All the safeguards, procedures, and culture that would catch these fuckups before release are immature or absent, because five years ago the nerds would have fixed these sorts of things without having to be told.
Quiet quitting, or just not giving a shit, is imo much more common and relevant. If you make your boss look like a fool, it's trivial for him to make it fall back on you. But if everyone tries to do slightly less than everyone else, because doing more is plainly not rewarded, then you enter a race to the bottom that deteriorates everything.
For what it's worth, I've been unable to reproduce the cockroaches-in-penis answer, though I'm sure at least some of the viral screenshots are legitimate, and there's definitely a team spending its entire Memorial Day weekend quashing these as they come up.
Technically, one issue is that search would use a different LLM than everything else, one that prioritizes cost and speed above all. A couple of minutes' worth of Google search inferences is probably a greater volume than a day's worth of inferences across ChatGPT, Gemini, and Claude combined. Naturally, quality is going to suffer. And even if Google were inclined to, it simply doesn't have the hardware to run its top-of-the-line model on every search query. (No one does.)
For comparison, Brave handles maybe 10 QPS? Google is closer to 100k QPS.
Google needs to improve quality, but that's probably not even its main priority right now: it needs to decrease costs.
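To make the scale claim concrete, here's the back-of-envelope arithmetic as a minimal sketch. The 100k QPS figure is the guess above; the combined chatbot volume is my own illustrative assumption, not a measured number.

```python
# Rough sanity check of the volume comparison above. Both figures are
# guesses for illustration, not measured numbers.

google_search_qps = 100_000        # assumed: Google search queries per second
chatbot_msgs_per_day = 10_000_000  # assumed: combined daily messages across
                                   # ChatGPT, Gemini, and Claude

# Inferences search would need in just two minutes at that rate.
search_two_minutes = google_search_qps * 2 * 60

print(f"search, 2 minutes:  {search_two_minutes:,}")    # 12,000,000
print(f"chatbots, full day: {chatbot_msgs_per_day:,}")  # 10,000,000
print("claim holds" if search_two_minutes > chatbot_msgs_per_day else "claim fails")
```

Under these assumptions, two minutes of search traffic does exceed a full day of chatbot traffic, which is why per-query inference cost dominates every other concern.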
The James Damore incident was evidence of a culture problem. Google is no longer a place where an autist can openly name a problem.
People good at internal politics muscled their way into the Google AI projects. Everyone else is afraid to criticize them.
After spending megabucks, there's internal pressure to launch, so the project goes live with glaring flaws.
I think this is a general failing of LLMs. They're just regurgitating remixed training data, and when you ask weird questions like this, the likelihood that the relevant training data are dominated by trolling/joke answers is high.
Brave gives me:
And all of that is right; that's the origin of the meme, though ironically it cites Google in support. I don't know if Brave is foolproof, maybe it has its own problems, but in my experience it's usually pretty astute, and its errors aren't hugely embarrassing. Brave is a company running on a shoestring budget; Google is supposed to be an AI titan. Their TPUs are supposed to be amazing, they're supposed to be in their own little sovereign corner, with non-NVIDIA tech scaling out on a different supply chain that makes them an AI juggernaut. Or so I read. But in reality, ChadGPT smashes them time and time again, and even Brave's rinky-dink open-source tech seems to work fine while Google makes a fool of itself.
Surely a supervisory LLM could cut this shit down by an order of magnitude, ffs. Ask it "Is this likely to be true? y/n" and just don't display the overview if the answer is no.
Yes, it's the kind of hack the bitter lesson warns against, but I think they're actually losing brand value here; crazy for Google of all people not to be conservative about this.
Especially when their current pipeline already involves multiple additional LLMs interpreting and rewriting prompts for DEI anti-bias reasons!
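A minimal sketch of that supervisory check, assuming a hypothetical `ask_llm` helper standing in for whatever cheap, fast model you have on hand; nothing here reflects Google's actual pipeline.

```python
# Minimal sketch of the supervisory-check idea above. `ask_llm` is a
# hypothetical stand-in for a small verifier model's client call;
# nothing here reflects Google's actual pipeline.

def ask_llm(prompt: str) -> str:
    """Hypothetical call to a small, fast verifier model."""
    raise NotImplementedError("wire this up to your own model endpoint")

def filtered_overview(query: str, draft_answer: str) -> str | None:
    """Show the AI overview only if a second model vouches for it."""
    verdict = ask_llm(
        "Answer strictly 'y' or 'n'. Is the following answer likely "
        f"to be factually true?\n\nQuestion: {query}\nAnswer: {draft_answer}"
    )
    # Fail closed: anything other than an explicit "y" suppresses the overview.
    return draft_answer if verdict.strip().lower().startswith("y") else None
```

Since the existing pipeline already runs extra LLM passes over each prompt, one more fail-closed gate per query wouldn't be architecturally novel; the cost is one additional small-model call, which at search-scale QPS is presumably the objection.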