Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
What's your definition of AGI? The label feels more like a vibe than anything else.
For me, the multi-modal capabilities of GPT-4 and others [1][2] start to push it over the edge.
One possible threshold is Bongard problems[3]. A year ago I thought that, while GPT-3 was very impressive, we were still a long way from an AI solving a puzzle like this (what rule defines the two groups?)[4]. But now it seems GPT-4 has a good shot, and if not 4, then perhaps 4.5. As far as I know, no one has actually tried this yet.
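If someone did want to try that vibe check, here's a rough sketch of how it might look. Everything here is an assumption on my part, not anything the comment claims: it uses the OpenAI Python client, a placeholder vision-capable model name, and image-URL input, and just points the model at the puzzle image from [4].

```python
# Hypothetical sketch of the Bongard "vibe check": show a multimodal model the
# puzzle image from [4] and ask it to name the rule separating the two groups.
# Assumes the OpenAI Python client (>= 1.0) and a vision-capable model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BONGARD_URL = "https://metarationality.com/images/metarationality/bp199.gif"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "This is a Bongard problem. The six panels on the left "
                     "share a rule that the six panels on the right violate. "
                     "What is the rule?"},
            {"type": "image_url", "image_url": {"url": BONGARD_URL}},
        ],
    }],
)

print(response.choices[0].message.content)
```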
So what other vibe checks are there? Wikipedia offers some ideas[5]:
- Turing test: GPT-3 passes this, IMO.
- Coffee test: can it enter an unknown house and make a coffee? PaLM-E[1] is getting there.
- Student test: can it pass classes and get a degree? Yes, if the GPT-4 paper is to be believed.
Yes, current models can't really 'learn' after training, they can't see outside their context window, and they have no memory... but these issues don't seem to be holding them back.
Maybe you want your AGIs to have 'agency' or 'consciousness'? I'd prefer mine didn't, for safety reasons, but I'd guess you could simulate it by continuously/recursively prompting GPT to generate a train of thought.
[1] https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html
[2] https://arxiv.org/pdf/2302.14045.pdf
[3] https://metarationality.com/bongard-meta-rationality
[4] https://metarationality.com/images/metarationality/bp199.gif
[5] https://en.wikipedia.org/wiki/Artificial_general_intelligence#Tests_for_testing_human-level_AGI
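On the "continuously/recursively prompting GPT to generate a train of thought" point: a minimal sketch of what I mean is below. The loop, the system prompt, and the crude context trimming are all illustrative assumptions, not how any shipped system works.

```python
# Minimal sketch of "recursively prompt the model to continue its own train of
# thought": feed the model its previous thoughts and ask it to keep going.
from openai import OpenAI

client = OpenAI()

SYSTEM = ("You are thinking out loud. Continue your previous train of "
          "thought, noting new observations, questions, and plans.")

def train_of_thought(seed: str, steps: int = 5, max_kept: int = 10) -> list[str]:
    thoughts = [seed]
    for _ in range(steps):
        # Keep only the most recent thoughts so the prompt fits in the context
        # window -- a crude stand-in for the missing long-term memory.
        recent = thoughts[-max_kept:]
        messages = [{"role": "system", "content": SYSTEM}]
        messages += [{"role": "assistant", "content": t} for t in recent]
        messages.append({"role": "user", "content": "Continue."})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        thoughts.append(reply.choices[0].message.content)
    return thoughts

for i, thought in enumerate(train_of_thought("I wonder what counts as AGI.")):
    print(f"--- step {i} ---\n{thought}\n")
```

Whether a loop like this amounts to 'agency' is exactly the vibe question, of course; the point is only that the continuous-prompting scaffold is cheap to build.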
I think AGI is not a hard thing you can define precisely, the way you can an atomic element or the number 5. Like most things, the definition is blurry around the edges. To me, it'd be AGI when it can start behaving like a human. So I suppose it's AGI when it's able to continuously interact with the world in a sensible way without repeated prompting.
Defining AGI would mean defining intelligence, which I can't do.
For my purposes, AGI is when you can put multiple humans and a chatbot in an IRC channel, offer a cash reward to identify the chatbot, and the humans do not accuse the actual chatbot at a disproportionate rate.
GPT-4 passes the Turing test only if the human isn't examining it all that closely.
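One way to make "not accused at a disproportionate rate" concrete is a one-sided binomial test against the rate you'd expect from random guessing. The sketch below is purely illustrative: all the counts are made up, and treating each accusation as an independent guess is a simplification of the IRC setup.

```python
# If there are `humans` human participants plus 1 chatbot, a human guessing at
# random picks the bot with probability 1/humans (they can accuse any of the
# other humans or the bot, but not themselves). Check whether the observed
# accusation count is suspiciously high.
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

humans = 9                # human participants in the channel (made-up number)
chance = 1 / humans       # per-human probability of accusing the bot by luck
accusations = 4           # humans who accused the actual chatbot (made-up)

p_value = binom_tail(accusations, humans, chance)
print(f"P(>= {accusations} accusations by chance) = {p_value:.3f}")
# A small p-value means the bot is being singled out more often than chance,
# i.e. it fails this version of the test.
```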
The vast majority of humans couldn't replace a single OpenAI employee, let alone all of them. I think your standard for intelligence is too high.
OP's question is about what you consider AGI. I consider it general intelligence, meaning it can do a very wide variety of basic tasks and easily learn how to do new things. A human child, once they're 3-5 years old, is a general intelligence in my opinion. But yeah, the exact definition is in the eye of the beholder.
I see your point, but I think @non_radical_centrist has one, too. Let's say we develop an AI that perfectly emulates a 70 IQ human, named LLM-BIFF. That's general intelligence. Set all the supercomputers on earth to run LLM-BIFF. Does LLM-BIFF recursively self-improve into LLM-SHODAN?

There must be a narrow window of AI sophistication in which we have a generally intelligent program, but nevertheless one not intelligent enough to bootstrap itself and trigger a singularity. Whether this window lasts one iteration of AI development or much longer is the question.