This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
From the pen of Scott: Come On, Obviously The Purpose Of A System Is Not What It Does
Scott offers several examples of why TPOASIWID results in absurd analysis. His examples are selected for maximal absurdity, so it's amusing that three out of four directly undermine his case, and the fourth is still a pretty good argument against his position.
This is a significantly more accurate statement than "the purpose of a cancer hospital is to cure cancer", because numerous factors militate against curing cancer: economic considerations, bureaucratic constraints, and the work/life balance of the staff. And even when all these align such that curing this specific cancer is the system's goal, "curing cancer" might not mean what you think. I was especially amused by this exchange in the comments:
...written in the comment section of the author of Who By Very Slow Decay. Yes, very much like Chemo. This example, by itself, is probably the one I'd like Scott to address specifically.
It seems to me that this is a significantly more accurate statement than "the purpose of the Ukrainian military is to defend Ukraine from hostile military action." America and NATO are very specifically and very openly throttling aid to keep Ukraine from being defeated outright, but also from being able to hit back too hard. Stalemate appears to be the deliberate objective, and certainly has been the openly-stated objective of many Ukraine supporters in this very forum.
One could make a similar statement about the Russian military as well. Any description of the Russian military that doesn't account for the realities of coup-proofing and endemic corruption is not going to make accurate predictions about the real world.
His intention here is to achieve absurdity by narrowing the scope to one specific result, rather than the sum of results, and in fairness, he provides examples of X randos arguing in this fashion. "The purpose of the British Government is to keep a lid on the British People while pursuing goals orthogonal to their interests" seems a more parsimonious description, but even Scott's version seems more accurate than something like "the purpose of the British Government is to execute the will of the British people as expressed through democratic elections".
Again with the absurdity through inappropriate narrowing of scope. But even with a framing as uncharitable as this, it's worth noting that all systems have costs, and that a description of a system that ignores those costs and how they are managed is worse than one that centers them. This is true even for descriptions that consist of only one significant cost, because the benefits of systems are generally far more obvious than the costs, and thus the missing information is easier to find.
This is a bad article, and Scott should feel bad.
I'm the kind of person who is slightly obsessed with tracing the genealogy of memetic slogans like "The purpose of a system is what it does." As it turns out, it was coined by Stafford Beer, one of the architects of Cybersyn. Let me give you the short version of what Cybersyn is, in my view: societal engineering through computational power. It is one of those horrible ideas that can't be flushed, like a stubborn turd floating in the toilet of its proponents' minds. It devolves into the Social Credit Score to subjugate the plebs. We are already halfway there: consent is manufactured on reddit and other "social media" platforms with content moderation policies and the panopticon social approval of likes/dislikes, and there are already reports of people being debanked for their political opinions. Now the meme is surfacing again: now that we have LLMs, supposedly the earlier failed experiments were only missing this, and they will fix it this time. It is extremely worrying that an avid reader of Trotsky is being quoted again...
"The purpose of a system is what it does" is obviously dumb in a lot of cases - the purpose of NIMBY zoning boards (may they all find themselves dying unpleasant deaths) is to keep property values high. That this results in nothing getting built is an unfortunate side effect, and it does sometimes happen that NIMBY zoning boards allow things to get built. In others it's actively absurd. In other cases it's a valuable reminder that mission statements are just bullshit, and in still others it's a description of institutional capture.
The whole article and the phrase which inspired it seem like desperate groping in the intellectual dark for the concept of The Principle Of Double Effect, and an illuminating example of the problems which arise when it is lacking.
The inability to distinguish between intended and unintended effects, and foreseen and unforeseen consequences, is lethal to a moral evaluation of human action.
Yep, Scott's at his worst when he's complaining about his outgroup. Not that most of the twitterati who employ POSIWID are particularly shrewd analysts, but the concept has plenty of explanatory value.
For another recent example of Scott getting sloppy, see his article on how the BAPist "based post-Christian vitalists" were hypocritical for caring about the victims of the Rotherham grooming gangs when they normally sneer at caring about poor people an ocean away as cucked slave morality. Of course, the obvious counterargument is that the Rotherham victims were white Westerners like themselves, aggressed upon by a far more alien outgroup.
I think he was actually closer to the mark there. You can see the hypocrisy when someone like KulakRevolt, for example, is calling for all of England to be burned down over the Rotherham gangs, as if he doesn't hold promiscuous fatherless girls from the lower classes in utter contempt himself. When all your grievances are formulated around tribal affiliations, you can argue that it's okay when we do it and bad when they do it, but you can't argue that you genuinely care about young girls being mistreated. That sort of gives the game away when you're trying to convince people they should be outraged at rape and grooming while your actual objective is to stir up hatred against your alien outgroup.
If Kulak hated European maidens he wouldn't have constructed his entire identity on the worship thereof. I don't think this is a good example at all.
The idea that you can't really care about your ingroup if you wouldn't care about them were they not part of it is dangerous nonsense.
I ask you: would you love your mother if she weren't your mother? And if you wouldn't, how dare you say you love her? It's absurd. Who we are and what relationships we have is important and meaningful. It is not and never has been morally neutral.
I hate this gnostic reduction of our essence to some abstract individual will with every fiber of my body.
The Rotherham girls are not his ingroup just because they're white. He constantly talks about what he thinks should happen to white people who are also not in his ingroup.
His feigned outrage over "European maidens" being besmirched by Muslims is because Muslims are doing the besmirching, not because he actually cares about victimized white girls. If it were Irish grooming gangs responsible, he might contrive some anti-Irish reason to wash the streets in blood (he's certainly flexible like that), but more likely he'd just find something despicable brown people are doing elsewhere.
I'll note that, by reputation at least, (often drug-enabled) abuse and grooming of lower-on-the-totem-pole teenaged girls is a 'the purpose of the system is what it does' for Kulak's claimed ingroup of reconstructionist pagans.
I don't really intend to go experience reconstructionist paganism. It may well be a false stereotype- and frankly doesn't much affect my (extremely negative) opinion of either reconstructionist paganism or idiot teenagers who experiment with it. But Kulak doesn't seem very upset about it either way. Nor does he seem to care very much about war rapes by the Russian Army, for another example of white people doing this.
Yeah, that's my point. If anyone else was drugging and raping teenage girls (including teenage white girls), Kulak wouldn't care. He just wants to see bloodshed. Also, his recent Braveheart Viking Hells Angel Paganism schtick and telling all his right-wing r3tvrn Christian followers that their religion is fake, gay and Jewish, is almost as hilarious to me as the people who still think he's an OF girl.
Isn't he/they an MTF?
No lol, he just picked an anime avatar and now some of his twitter audience unbelievably think he’s a woman. I don’t think he’s even claimed to be, so it’s not even a grift, it’s just weird or very stupid people.
No.
This is like complaining that a Muslim cares about the Umma even though he cares about his sect or tribe more.
How dare people have Ordo Amoris? Their care must reduce to one bit!
Obviously people have circles of concern. And obviously just because someone doesn't extend their total moral community to all of humanity or all of creation doesn't make them abnormal. On the contrary.
Multi level tribalism is a perfectly acceptable and eugenic human behavior, albeit with some much talked about drawbacks. It is not however reducible to nihilism or egoism.
I think you give too much credit. I don't believe people like that feel ordo amoris for anyone at all. It's not about concentric circles of affinity, it's about identifying an enemy and manufacturing a grievance. I might believe some people feel some faint amount of "ordo amoris" for distant white girls because they happen to be white, even if they otherwise hold them in contempt, but not when every other message is about how they're dirt. Oh, now you care because a Muslim touched them? No heat graph meme argument is going to make that convincing.
Well, I believe that you don't give people enough credit because they're part of your outgroup, and that your standards for what people are allowed to care about without being hypocritical are bad models of people's behavior, and therefore functionally useless except as the very sort of grievance they denounce.
The idea that people feeling empathy for the plight of people who look like and feel like them is bad, empty or without meaning in some way is, I believe, one of the great sins of Western civilization. And I have no difficulty defending anybody who feels such feelings, wicked as they may be, far from me as they may be.
Indeed, insofar as humanism has any degree of visceral grounding, it springs from this feeling and cannot denounce it without sapping itself.
Fair. People who hype genocidal warfare are indeed part of my outgroup.
I do not think you understand what my standards of what people are "allowed" to care about are.
This is not what I believe.
It's perfectly possible to not-care-if-individual-Xs-come-to-harm without hating Xs in general, or indeed even if you like Xs. Plenty of people like bunny rabbits, and might even sincerely love their pet rabbits, without turning into animal rights activists.
Would you say people who love pet rabbits in general but still love humans more don't truly love pet rabbits?
All you say is possible; I just don't believe it's an accurate or charitable description of almost anybody's concerns.
Suppose a man loves his pet rabbit, and finds pictures of rabbits abstractly cute, but happily eats rabbit meat without a twinge of guilt, and has never lifted a finger to campaign to ban the hunting or industrial farming of rabbits. Suppose that he has a personal enemy. Now suppose that he learns that this enemy sometimes goes rabbit-hunting; and suppose that, having found this out, he makes a stink, ranting to all who'll listen about how it's outrageous, how the guy must be brought to account, and now won't everyone see how much of a monster he is, like I've been saying all along: he's been blowing cute defenseless bunnies' brains out for fun, you can't deny it now.
In such a case I think it's fair to accuse this man of using the rabbit thing as a convenient weapon against someone he hated anyway; and to say his anger has very little to do with a sincere concern for rabbit welfare. Even if he really does love his pet rabbit.
While his criticism kinda missed the mark, I do think there's something inconsistent about it. You can have a consistent ideology of supremacy for your own ethnic group, but in a globalized world it's not really compatible with being a Nietzschean individualist who sneers at caring about the weak in general. The archetypal ubermensch is a pre-Christian warlord - an aristocrat who strides above the petty concerns of his own nation's peasants and paupers. The 'master' isn't interested in whether the daughters of the slaves two counties over are getting raped and tortured, white or otherwise. Unless he considers those counties part of his holdings and, therefore, his alone to rape and pillage.
A guy who's concerned about tortured little girls an ocean away because they're white girls and he considers the fate of the white race his business, whether or not he stands to gain anything from it, has more in common with a guy who's concerned because he considers the fate of all Homo sapiens his business, than with a guy who actually only cares about himself, his kin, and maybe his nation.
People can't seem to get it through their heads that Nietzscheans aren't master moralists; they are would-be designers of their own moral codes, and specifically reject the impositions of acting as a master, or as a slave.
You are allowed to care for the weak, or for anything or anyone, insofar as you deduced on your own, and not through social mimetism or scolding, that this is right and true. But it has to come from you: not from whims, but from your own self-legislated catechism.
I feel like this is the same brand of lazy criticism levied at objectivists for acting collectively despite being individualists. It's like people just imagine what the ideology is and what it precludes instead of actually asking or reading about it.
I'm aware actual-Nietzsche is more nuanced. But the guys Scott was debating aren't serious Nietzschean scholars, nor do they claim to be. Perhaps I should have just stuck with the tongue-in-cheek Based Post-Christian Vitalist coinage. The point is that these are people who sneer at the entire concept of Effective Altruism and indeed charity. You can't do that and care about Rotherham. It's untenable. If you're an American and you care what happens to the Rotherham girls, albeit only because they're white, then you're not coming from a completely different paradigm than the EAs. You just have an unpopular opinion on who the most relevant moral patients are.
As Objectivists would say, altruists, let alone utilitarian ones, do not have a monopoly on caring about people. And their claims that they do are an intellectually dishonest trick to avoid admitting that good-natured feelings can be arrived at through means other than their pathology.
I would, actually, say that "altruist" objectively, etymologically describes anyone who cares about other people. It's what the "altr" means. Altruism is a broad church. Some altruists care about shrimps and others only care about humans. I see no reason why altruists who only care about white humans should act like they're something completely different.
It's simple. Non-altruists don't find alterity inherently valuable and it enters differently or not at all into their ethical calculus.
Arguing that they are still altruists because their calculus still leads them to conclusions similar to that of altruists in some cases is intellectual dishonesty.
You can redefine the word to be broad enough as to become useless. But that's not worth engaging with.
I claim that the calculus is the same. When it comes to caring whether perfect strangers live or die, suffer or thrive, in ways that will never affect you - either you do, or you don't. Those of us who do are, I'm confident, in an overwhelming majority applying the same drives in the same ways. Sure, some of us care about the suffering of our countrymen, others about the suffering of our whole race, others still of the whole human race, and others still about the suffering of all animal life. But the only thing that changes between all those cases is how you draw the border between the people you care about, and the people you don't. It's still altruism even if it's race-specific, much as someone who cares about other humans but doesn't give a fuck about animals is still an altruist.
This isn't to say you can't have genuine non-altruists who, by coincidence, have similar practical aims to altruists. For example, you might object to rape gangs not because you care whether the victims suffer, but due to a deontological objection to rape. Or you might value the survival of your ethnic group, without caring about the suffering of any specific members within it per se, and treat the Rotherham gangs as one facet of a genocidal attack against your race as a whole. I wouldn't call those people altruists. But once you start talking about the suffering of random girls an ocean away as something which in and of itself should make your blood boil, something which you have a moral impetus to stop if you can, even though it's in no practical sense your problem - then, sorry, you're an altruist. Albeit a narrow altruist. And a lot of people screaming about the British rape gangs were using that kind of rhetoric.
(Of course, they may have been lying — perhaps Scott was too optimistic in taking those fragments of altruism as glimmers of an underlying better nature, rather than disingenuous, cynical attempts to play on actual altruists' emotions and win them over.)
Wouldn't deducing any moral code after reading Nietzsche by definition not be "on your own, not through mimetism" etc?
He enjoins you to have your own consideration of the moral problem. This, in my view, does not recurse, because you can look at it and disagree that making yourself a moral legislator is a good idea.
There are people in the specific group this thread is talking about who believe in the possibility of a Christo-Nietzschean synthesis, for instance.
We quickly arrive at topics where logical contradiction is not disqualifying, however, so such logical descriptions are instrumental at best.
"The purpose of a system is what it does" is a stupid opinion if it is taken as a general mathematical truth. The concept of purpose assumes intentionality (the purpose of something is the intent of the people who built/used/participated in it) and therefore the opinion assumes the effects of a system are always those intended by the actors, which is obviously false.
Most of the time "the purpose of a system is what it does" instead means that what the actors want is less important than what the system actually does (it provides more prediction power, as you said).
There are some cases however where intentionality is very important, for example if you kill someone the police and the court will be interested.
Unless that includes God or Nature as intent sources, I disagree that this is true.
The conceit of the phrase is precisely that things can have purpose unintended by their creators.
"Intended purpose" is not a tautology.
Yes it does if you believe nature or God have intents (it works better with God than nature, as most people who think that nature has intent also think that nature is a kind of god). People who don't think God exists or nature has intents also don't think there are purposes in nature.
Intended purpose means that the way you use the tool now (the purpose it's used for now) is what it was built for (the purpose of its creator). For example if you use your shoes to protect your feet it's their intended purpose, but if you use them to kill a fly it's not (presumably).
All too often 'systems' in practice get excused by idealism. POSIWID works as a shorthand to cut through that idealism.
Scott seems to be coming at this from some critical angle and I'm not entirely sure what the point of it is. You can wordplay anything into absurdity and uselessness.
Sibling non-CWR post: https://www.themotte.org/post/1836/scott-come-on-obviously-the-purpose
Wrote a comment there, but another thought:
I think Scott is attempting a kind of meta-joke. TPOASIWID is a very useful lens to interpret systems through, but in widespread DR Twitter use, it's mostly used as a way to ascribe bad intent to systems. And because TPOASIWID, you can only judge TPOASIWID by the use of TPOASIWID on Twitter, and so TPOTPOASIWIDIWID and that's creating bad Twitter takes, which isn't valuable or useful. QED.
Cute, but it misses the mark. It's about finding useful ways to interact with a system, not a universal acid allowing you to weak man any argument or analysis.
Are we going to henceforth lose every intellectual to some genre of twitter brainrot? Place your bets here.
OK, to be pedantic and trash Scott's argument: "POSIWID" is essentially shorthand for "The purpose of a system is to do some or all of the things that it does, while taking all of the other things it does as acceptable consequences."
And the contrapositive: "The purpose of a system is never something that it doesn't do."
Scott is being deliberately obtuse, purposefully ignoring the obvious meaning of the phrase.
"The purpose of a system is never something that it doesn't do."
If my car fails to start one morning, that does not mean that my car ceases to be a car (if we define a car as a vehicle whose purpose it is to transport persons). Saying "POSIWID, hence this is not a car, perhaps it is a tiny house or outhouse" is not a good way to handle a broken car. "This thing was designed with a function in mind, but it does no longer serve its original purpose, so what purpose does it serve now, and is it worthwhile to fix it or get rid of it" seems a much more promising approach.
If a life-saving operation has a mortality of 1%, and it ends up killing little Timmy, saying that clearly the purpose of the operation was to murder him, not to save his life, as "[t]he purpose of a system is never something that it doesn't do", would seem disingenuous.
But if it does break down, then it's something a car does, not something a car doesn't do.
A car doesn't fly. The purpose of a car is never to fly. A car does break down, and so the "doesn't do" doesn't apply.
Purpose of a system
A car is a system.
I always interpreted POSIWID as meaning that sustained normalized deviance is no deviance at all. If, say, a big tech OS project fails to ship year after year and company leadership fails to replace the project's management, then we have to conclude that either 1) the company-system is not under the control of agents with the ability to modify the world to achieve their goals or 2) the purpose of the OS project is not to produce an OS.
Otherwise, why wouldn't the OS project's management have been nuked from orbit after the fourth or fifth annual failure?
POSIWID doesn't mean, as Scott strawmans, that any side effect of a system is desirable, or that a failure of a system to fully achieve its goal reveals that goal as a lie. Total nonsense. If a cancer ward were curing only half its patients and, despite having funding and expertise, refused to install a new radiation machine that would increase the cure rate to 2/3, and if hospital administration tolerated this state of affairs, then we would be forced to conclude that THAT SPECIFIC cancer ward's purpose was not to cure cancer.
POSIWID only works in negation
Are you saying that he cherry-picked the tweets he screenshotted, and the median usage of POSIWID is much more nuanced?
Yes. Or more specifically, he demolished the retard version of POSIWID, then claimed victory over the nuanced version. That's wrong, and it's called strawmanning.
Scott is a utilitarian. My mental model of him says that if you have a charity to rescue cats from trees, but it only rescues one cat from a tree per year despite having an annual $10M budget, then it is fair to conclude that their actual main purpose might be something other than rescuing cats. This is a standard critique of inefficient charities from an EA perspective.
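To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python of that EA-style cost-effectiveness check. The $10M budget and one cat per year come from the example above; the benchmark cost per rescue and the 100x cutoff are made-up figures for illustration only.

```python
# Hypothetical cost-effectiveness check for the cat-rescue charity example.
annual_budget = 10_000_000         # dollars, from the example above
cats_rescued_per_year = 1          # also from the example
benchmark_cost_per_rescue = 1_000  # invented "reasonable" cost of one rescue

cost_per_rescue = annual_budget / cats_rescued_per_year
print(f"Cost per cat rescued: ${cost_per_rescue:,.0f}")  # $10,000,000

# If the charity spends orders of magnitude more per rescue than any plausible
# benchmark, the EA-style inference is that rescuing cats is probably not its
# actual main purpose.
if cost_per_rescue > 100 * benchmark_cost_per_rescue:
    print("Rescuing cats is probably not this organization's main purpose.")
```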
Or take research towards fusion power. It has been going on for sixty years, and while we are making progress, we do not have fusion power plants yet. Now, you can take three stances.
Of course, if you take the last stance, then the next problem is alchemists who searched for the philosopher's stone. In hindsight, we know that this was a fool's errand, and only their lack of epistemic purity led them to believe such a thing could exist at all. Their whole paradigm was -- not to put too fine a point on it -- dogshit, and if they had read the Sequences, they should have known. (Yes, I know about the woo aspect of alchemy -- but reaching enlightenment seems very much like a consolation prize if you fail to gain immortality et cetera. I am sure they did not emphasize the allegoric aspect to their funding agencies.)
On the other hand, hindsight is 20/20, and the ideas that form the basis of the scientific method would not be developed for centuries, so they were working with the mental tools which they got, and sometimes walking in a random direction is better than standing still until you exactly know which way to go.
Per POSIWID, the purpose of alchemy was to accidentally discover chemical reactions while denying that purpose.
We already have a perfectly good word for the relationship between alchemy and their accidental discoveries. That word is outcome.
Even more bluntly, consider a dog licking a TV screen which shows bacon being fried. The outcome is the dog licking an LCD. The purpose of the action is -- presumably -- that the dog wants to taste the bacon. Describing the system "dog" as a system which tries to taste bacon, but sometimes fails and tastes plastic instead gives us a much better model of reality than just saying "TPOSIWID, thus this dog likes to lick plastic".
And it's hard to imagine anyone sincerely believing the purpose of the dog-TV system is plastic licking. Maybe I'm sanewashing it, but ISTM there's a logical and useful way to understand POSIWID:
- Let there be a system S, an agent A with control authority over the system, and some outcome X that A claims S is to produce.
- Observe that S falls short of ostensible goal X.
- Let B be an action that A can take to make S produce more of outcome X at positive ROI.
- Observe that A does not execute action B.
Given the above, we must conclude, based on A's failure to do B, that A's purpose for S is not solely X. Maybe B is not actually positive ROI because we lack an understanding of its true costs. Maybe A is retarded and doesn't understand that B is available to him. But, if we assume B is positive ROI and that A is a competent actor, what alternative do we have to concluding that A is optimizing S for some unstated goal Y, not only X?
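The inference above is mechanical enough to write down as code. Here's a minimal Python sketch under the same assumptions (B really is positive-ROI and A is a competent actor); the class and function names are purely illustrative, not anything from Beer's cybernetics or Scott's article.

```python
# Sketch of the negative POSIWID inference: if the controller skips a known
# positive-ROI action that would close the gap toward stated goal X, then X
# is not the controller's sole purpose for the system.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    expected_gain: float   # additional outcome X this action would produce
    cost: float

    def positive_roi(self) -> bool:
        return self.expected_gain > self.cost

@dataclass
class System:
    stated_goal: str
    achieved: float        # how much X the system actually produces
    claimed_target: float  # how much X the controller A says it wants
    actions_taken: set = field(default_factory=set)

def purpose_is_solely_x(system: System, available_actions: list) -> bool:
    """Return False when a positive-ROI corrective action is left on the
    table despite a shortfall toward the stated goal X."""
    if system.achieved >= system.claimed_target:
        return True  # no shortfall, nothing to infer
    for action in available_actions:
        if action.positive_roi() and action.name not in system.actions_taken:
            return False  # A skipped B, so A's purpose for S is not solely X
    return True

# The cancer-ward example from above, with invented numbers.
ward = System(stated_goal="cure cancer", achieved=0.50, claimed_target=0.66)
new_machine = Action("install new radiation machine", expected_gain=5.0, cost=2.0)
print(purpose_is_solely_x(ward, [new_machine]))  # False: something else is going on
```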
One might argue he cherry-picked for stupid usages the moment he chose to get his examples from tweets.
Hard disagree.
I think that a grossly simplified way to look at a system might be that it maximizes a particular utility function. Naturally, different people have different utility functions, and so might feel differently about a system and the trade-offs it makes. Even so, it is rare for different people to assign opposite signs to a terminal goal; more often they simply differ in relative weight. If someone claims that the terminal goal of the NRA is to enable school shootings, or that the terminal goal of gun control legislation is to render Americans defenseless against tyranny, they are missing the point. The truth is simply that tyranny resistance and avoiding school shootings are both worthy goals, and different people will have different ideas about both their relative importance and how gun control might affect them.
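As a toy illustration of the weights-versus-signs point, here is a short Python sketch; the effect sizes and weights are invented purely for the sake of the example, not any real model of gun policy.

```python
# Two observers share the same terminal goals but weight them differently,
# so they reach opposite verdicts on the same policy.
def evaluate(policy_effects: dict, weights: dict) -> float:
    """Weighted sum of a policy's effects on each terminal goal."""
    return sum(weights[goal] * effect for goal, effect in policy_effects.items())

# Hypothetical effects of a gun-control measure on two terminal goals.
gun_control = {"school_shootings_avoided": +1.0, "tyranny_resistance": -0.5}

# Both observers value both goals positively; only the weights differ.
observer_a = {"school_shootings_avoided": 3.0, "tyranny_resistance": 1.0}
observer_b = {"school_shootings_avoided": 1.0, "tyranny_resistance": 5.0}

print(evaluate(gun_control, observer_a))  # +2.5 -> supports the measure
print(evaluate(gun_control, observer_b))  # -1.5 -> opposes it
```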
Of course, in reality, systems are made out of individual actors who have their individual utility functions (as far as they are rational), and a key instrumental (if we are being charitable) goal of almost any system is to perpetuate its own existence.
I think that if I have a page of a book, and either describe it as "a mostly white page" or "a page darkened by ink", both of these descriptions are very inadequate, and it is not very worthwhile to quibble over which one is worse.
This being said, if you have to communicate to a space alien what a dentist does, what do you think is the better description?
Both of these statements are true and describe things a dentist does, but I would argue that the latter statement is slightly less terrible a description. An actually adequate description would acknowledge that people generally go to the dentist to prevent or fix tooth decay (the latter of which often hurts somewhat), but also that dentistry is a high-income profession (thus attracting people interested in making money) and that most dentists operate as a business, so there exists a principal-agent problem, e.g. for judging the cost-benefit ratio of secondary services like professional tooth cleaning.
How can you tell when your scope is appropriately widened? Okay, the purpose of the bus system isn't to emit CO2. Is the purpose to do that and drive vehicles on NYC streets? Is the purpose to do that and pay out bennies to bus drivers? Is it to do that and move paying customers around? Is it to do that and also house a few homeless people? Is it to do that and reduce traffic overall?
And we can't look at the bus system in isolation, right? It's part of the city government, which itself is embedded in layers of government and society. Why is it not inappropriate to even attempt to analyze the purpose of the NYC bus system in isolation from the entire world?
At least I agree that we can limit our scope to planet earth, since there doesn't seem to be any agency being exercised by anyone outside of it. The question is where to set the scope, somewhere between buses emitting CO2 and everything that goes on on earth.