Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where a desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
AI may invert the common wisdom that studying English is worthless and studying computer science is the wise decision. If AI takes off as anticipated, employers will look for word people who are trained in analyzing AI replies to prompts, using precise and nuanced language, and consuming hundreds of pages of written text per day. English and history students would be great at this.
Look I'm one of the most strident defenders of the value of the humanities on TheMotte, but I have no idea what this is supposed to mean.
If AI is actually generally intelligent then you won't need to "analyze" its replies. You won't need to do prompt engineering, you won't need to do any of that. You'll just tell it to do something and it'll do it. Like any ordinary human. STEM professionals aren't all walking around in an autistic haze where they're unable to have basic interactions with other people. They're quite capable of telling subordinates what to do and verifying that the task was completed, using natural language. Hell, if it really came down to it, you could get the AI to analyze its own replies and do prompt engineering for you! That's what general intelligence entails.
And to the extent that AI falls short of general intelligence, it will likely continue as it does today as essentially a tool for domain experts. In which case, the most important factor for a human will still be their domain expertise and their ability to actually do the job at hand, rather than their AI whispering skills.
Of course there is something to be said for the skill of people management in general, being able to motivate people and keep them on task, playing office politics, things like that - those are real skills that not everyone possesses. But if we're at the point where we have to wrangle our AIs because they're too moody/lazy/rebellious to fulfill our requests, that's a bigger problem - one that is more appropriate for the engineers (or the military) to solve, not English majors.
Have you ever actually worked with humans? :-)
??? What is this supposed to mean?
You convey to the AI what you want to see using precision in language. There is no way for the AI to know what you want without you supplying information to it. Like an architect telling the builder what to build, or a sketch artist asking probing questions of a witness, or any other basic way in which humans use language to obtain what they want from other humans.
Sure. And the limiting factor there is the person's technical domain expertise. Generic "communication" skills are of no help. A programmer can explain much better to another programmer how a piece of software should be constructed than an English major could.
This seems to presuppose that fluency with a language confers understanding of the concepts to which that language and its words can refer. Since you brought up computer science, I have doubts that an English major, without sufficient study of the prerequisites (formal logic, some calculus, basic algorithms, data structures, graph theory - i.e. "studying computer science") would be able to understand, let alone judge the veracity of, a proof that an algorithm is correct and has certain performance characteristics. Complaints of "you never need that on the job" notwithstanding, an understanding of the actual problem domain under discussion in (for example) the English language is necessary to walk the AI through certain tradeoffs and designs, and eventually make decisions - or even know that they exist at all. Further, being able to supply a coherent rationale for why a particular decision was made beyond "the AI told me so" would also entail an understanding of those domain-specific complexities.

I suppose I can conceive of an AI so powerful that it will understand, weigh, measure, and decide upon all of these factors on your behalf, while simultaneously being able to discern your intent despite you not having a sufficient understanding of the vocabulary to express it, but in that case the English student would himself also become obsolete.
Phrasing your desired outcome and precisely specifying it requires not just fluency in a language, nor merely an acquaintance with a domain's vocabulary, but an understanding of the concepts to which that vocabulary refers and the relationships between those concepts. Moreover, in order to intelligibly and productively have a conversation with the AI - that is, to respond to its replies with follow-up questions or redirections in case it gets off track - one must understand the semantics, the meaning of those letters on your screen: the things to which they point, and the sense of the statement. Otherwise, you may as well be Searle's Chinese Room, "unintentionally" shunting around meaningless symbols without any understanding of what they mean, all the while maintaining the pretense that you are actually having a sensible dialogue and are capable of moving around and making decisions in the space of solutions and tradeoffs.
Language is merely the technology we use, the medium through which information is serialized and conveyed across minds (including past and future instances of my own brain). As a tool or "bicycle" of the mind it is a good multiplier of one's cognitive capacity, in the same way a pickaxe is a good multiplier of your ability to mine rocks. However, knowing how to swing one will not give you any expertise in prospecting or insight into where to mine for gold. Expertise with a language does not imply expertise with all the possible landscapes, concepts, and ideas that can be expressed within it.
This is how computers already work. English majors turned out to be pretty much the worst type of people at actually communicating clearly. Even now that AI knows some human language it's mostly CS types communicating successfully with it.
What makes you think this trend will reverse so drastically?
Will they? I can see some of those skills transferring, but the patterns of behavior and the quirks and nuances that AI have are going to differ from those that a traditional English student is going to be used to. I would think that a healthy amount of computer science would help understand the underlying mechanisms of AI and thus have a better idea of how and why it misbehaves when it does and how to adjust prompts to fix that.
I expect that the value of an English degree will go up, but not enough to surpass that of a CS degree, since the value of those will also go up. Probably the best case would be people who double majored in English and CS, but I believe those are rare.
This has always been useful, and it's less about studying English than about precision of thought. Linguistics, maybe. At the end of the day, it's about writing crisp documents.
The best software architects write the best requirements. The best managers drive concise meetings. The best lawyers are linguistic magicians. Don't get me started on politicians, tv experts and the entertainment business. Even doctors have to be insanely precise in how they communicate the specifics of a disease.
It's no surprise that Stanford's symbolic systems has produced a significant number of tech billionaires. It is a field studying exactly what you talk about.
Honestly I do yearn for a job where this is highly valued. Too often I've found myself in places where it's just bull-in-china-shop stuff, doing things mindlessly and then gathering up the debris mindlessly. I recently wrote a 6-page memo to propose a project and outline its week-to-week operating mechanisms, medium- and long-term anticipated benefits, and foreseen challenges. May as well have scratched my scrotum for 3 hours for all the good it did.