This made me reflect that I hadn't actually thought critically about the phrase (at least, not to a degree commensurate with how often it's used). For fun, if you think the purpose of a system is what it does, write what you think that means before reading Scott's critique, then write whether you've updated your opinion. For example:
(Spoilers go between two sets of "||")
Notes -
This seemed like a particularly bad and uncharitable post by Scott. The examples he chooses at the top are worded in what seems like an intentionally ridiculous manner, e.g. characterizing it as "the purpose of a cancer hospital is to cure two-thirds of cancer patients," rather than "the purpose ... is to cure as many cancer patients as possible, constrained by the available resources and laws, which happens to be two-thirds of them." Or "the purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia," rather than "the purpose ... is to defend Ukraine, constrained by the available resources and international politics, with a years-long stalemate against Russia being an acceptable result."
To the point at hand, I always saw the phrase as riffing on the same sort of concept as "sufficiently advanced incompetence is indistinguishable from malice." All systems have both intended and unintended consequences; this is obvious. But what's troublesome is that all systems also have unforeseeable unintended consequences, which is likewise obvious; as such, it's incumbent on the people who design systems to include subsystems to detect and react to unforeseen unintended consequences. If they didn't include such a subsystem, or didn't make it robust, then we can conclude that the purpose of the system included being entirely tolerant of whatever unintended consequence is at hand. And in practice, by my observation, that purpose often includes, as one of its primary components, the designers of the system feeling really good about themselves and their conscious intentions, rather than actually accomplishing whatever they consciously intended to accomplish.
I think what's troublesome is that a lot of systems have foreseeable "unintended" consequences, and the debate over POSIWID is whether failure to prevent a foreseeable consequence means that it must have been an intended consequence.
I think you aren't wrong, but I also see it a little differently in the context of failing to prevent a foreseeable consequence: is such a failure an indication that part of the purpose of the system is to cause those foreseeable consequences unintentionally? As they say, you can't make an omelet without breaking a few eggs, and if "breaking eggs" refers to causing meaningful harm, it can feel really bad to intend to do it. But omelets are delicious, so why not create a system where eggs get broken without you having to intend it?
Then that gets into the question of what "intent" even means, and whether someone's "conscious" intent is their "true" intent.
I agree with you here. I was kinda expecting Scott's article to get into the question of just what it means for a broad, society-wide institution to have a "purpose," which would likely have gotten into issues like your last sentence, but he never went there.