This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Tyler Cowen has a Conversation with Jennifer Pahlka on Reforming Government
I will pull one little segment.
I want to pull on some threads in the vein of my previous comments on military research, development, and procurement. They talked about this some, but were also talking more broadly. I think the problem to be solved is perhaps most clearly cognizable in this domain. Reordering the discussion a bit, we'll start from the outcomes, the things that we're trying to achieve:
As I put it:
Look at the lead time for something like a modern fighter jet. What's the chance that the guy who originally greenlit the program is still around to be 'accountable' if/when it's actually used in a hot conflict, such that its performance can be assessed against the competition? Do you handicap that assessment at all? He made his decision a decade ago, seeing a certain set of problems that they were trying to solve. A decade or two later, your adversaries have also been developing their own systems. Should he be punished in some way for failing to completely predict how the operating environment would change over decades? Suppose he made the decision in Year X, and it came into service in Year X+10. It hypothetically would have performed perfectly well for a decade or two, but you just never had a hot war and never saw it. By the time Year X+25 rolls around and you do get into a hot war, it's now hot garbage in comparison to what else is out there? Is he blameworthy in some way? Should he be held 'accountable' in some way? There's a good chance he's now retired or even dead, so, uh, how are you going to do that?
Obviously, there is a spectrum here, but I would argue that a modern fighter jet is more toward the middle of the spectrum than at the far end. Yes, there are plenty of faster-turnaround things, but there are also lots of long lead time things. Even just think about the components/subsystems of the fighter jet. By the time a decision is made to greenlight the larger project, most of these have to be relatively mature. The gov't and company involved can probably take some risk on some of these, but they can't do too many. They want a fair amount of subsystems that they are confident can be integrated into the design and refined within their overall project schedule. That means that all of that investment had to be done even earlier.
Back to that guy who makes the decision. Who is that? Probably a general or a political appointee. Possibly a group of gov't stakeholders. How does he decide what to buy? Remember, he's trying to predict the future, and he doesn't actually know what his adversaries are going to do in the meantime. He has no direct outcomes by which to do this. He doesn't yet have some futarchy market to somehow predict the future. He basically just has to educate himself on what's out there, what's possible, what's at various stages of maturity, and where various people think stuff might be going. As I put it in the doubly-linked comment:
And so, I think Tyler would claim, this fundamentally drives these decisions to be focused on process rather than outcome. The outcome isn't accessible and likely isn't going to be. Instead, people basically just implement a process to ensure that the decisionmaker(s) are talking to the right stakeholders, getting a wide variety of input, not just shoveling contracts to their buddies, etc. Sure, these decisionmakers still have some leeway to put their mark on the whole thing, but what's the plan for adding more 'accountability' to them that isn't just, "Whelp, let's make sure they go through enough process that they don't have the obvious failure modes, and then sort of hope that their personal mark on the process is generally good, because we've built up some trust in the guy(s) over some years"?
Now, think like a company or research org that is considering investing in lower maturity subsystems. It's a hell of a risk to do that with such an incredibly long lead time and, frankly, a pretty low chance of having your product selected. You're going to care a lot about what that process looks like, who the relevant stakeholders/decisionmakers are, and what their proclivities are. If you're pretty confident that the guy(s) in charge mostly don't give a shit about airplanes, you're even more unlikely to invest a bunch of money in developing them or their components. Will some crazy company spend thirty years to develop a fully-formed system, getting no contracts anywhere along the way, just hoping that once the generals see it complete and in action (ish, because again, there's not a hot war and you can't really demonstrate the meaningfulness of having a thousand airplanes with your one prototype), they'll finally be 'forced' to acknowledge how effective it's going to be, finally unable to brush it off, and finally actually buy it for bazillions of dollars? I guess, maybe, sometimes. But probably not very often. Thus, I think it's pretty unlikely that the gov't can just completely wash its hands of any involvement in the research/development pipeline and just say, "Companies will always bring us fully-formed products, and we'll decide to buy the best ones." Pahlka touches on a need for the gov't to "insource" at least some parts of things:
Again, I think she's talking more broadly, but that bit about software and operations being very melded is quite poignant when thinking about military applications.
Getting back to the problem of not knowing what's going to be effective in the future, the traditional solution is to just fund pretty broadly, through multiple mechanisms. Not sure about airplanes? Have one guy/group who seem to like airplanes go ahead and fund a variety of airplane-related stuff. Have some other guy who doesn't like airplanes fund some other stuff. There's obviously a bunch of risky capital allocation questions here, and decisions ultimately have to be made. Those are tough decisions, but how do you add 'accountability' to them? I don't know. I think the easy/lazy way is basically a form of just looking at your 'guys' (your airplane guy, your submarine guy, etc.) and asking, "What have you done for me lately?" The obvious concern is that that makes all your guys focus their portfolios much more heavily toward shorter timelines. But part of the point of the government being 'eternal' is that it should be able to be thinking on longer time horizons, because that may end up being more valuable than just short time horizon things that can be more tightly linked to 'outcomes' or 'accountability'.
I started off being a bit taken aback by the idea Tyler proposed that we should almost just abandon accountability. I've generally been somewhat pro-accountability, and I know folks here have talked about it a lot. But upon reflection, with my pre-existing but not previously-connected thoughts on military procurement, it makes a bit more sense to me that there is a real tension here with no real easy solutions.
I recently read Barry Lam's excellent pamphlet Fewer Laws, Better People, which has a similar theme around increasing the discretion granted rather than focusing on formal, discretion-free rules, so that is influencing my thoughts on this. I highly recommend the book.
Two thoughts about what accountability means.
"Nobody ever got fired for buying IBM." When you make people accountable for their decisions, you encourage conservatism. You can't target their results, because nobody totally controls results; you can only punish them for bad process. And that leads to conservative process: you stick with the big-reputation contractor, you never take a risk or do anything bold. Defensive medicine. Follow the procedure, check the boxes, and whatever happens, happens. We don't like this result.
On the other hand, people point to internal loci of control: we need people who want to act with excellence, with skin in the game. But think of War and Peace, of the Grande Armée marching into Russia. If Kutuzov had been held accountable for the loss of Moscow, we'd all be speaking French. Every other general was worried about being held accountable socially, about being judged a coward. Kutuzov alone was willing to take on the social opprobrium of losing Moscow and of running away from Napoleon. Napoleon expected Kutuzov to act like every other brave general he'd faced, afraid of being held accountable, when the right decision was to behave like a coward.
The important thing is to pick the right people, and trust them. Give them discretion to achieve their goals. And then hope for the best.
I always think it's worth noting that out of all the kings of Israel in the bible, there are maybe three and a half good ones. Out of 70-some Roman emperors, only perhaps a dozen were any good. Out of 43 presidents, the majority were pretty mid. Ditto kings of England, or France, or Ottoman Sultans, or Chinese Emperors. History consists mostly of mediocrities, a single great leader sets up the system and everyone coasts off that for dozens or hundreds or thousands of years.
This doesn't really affect your argument, but your example with Kutuzov is really funny: https://en.wikipedia.org/wiki/Michael_Andreas_Barclay_de_Tolly#Napoleon's_invasion. In short, Kutuzov continued Barclay de Tolly's strategy, but also had the fortune of being an ethnic Russian.
That's why I cited Tolstoy specifically, rather than the always contested and complicated historical record.
It’s not “accountability” in some nebulous sense. It’s accountability for having done the right process, regardless of what happens. And this skews things away from actually getting things done, because there’s always a chance that doing something will result in a bad outcome that could have been prevented by doing the processes. So in order to avoid the consequences of being wrong and held to account for a potential failure, you do processes to cover your own ass, and who cares if the project gets done at all. The incentives are set up such that you avoid actual accountability by gaming the accountability system: you protect yourself by doing and creating lots of process while not actually getting things done.
The solution, to my mind is to shift accountability to the results of the project. If you can’t get the job done, you’re accountable for that, and if you can’t do the project right you’re accountable for that. If the project is building a road, the accountability should not be in filling out forms to authorize the road, or quadruple checking that the processes are followed to the letter. Instead shift accountability to the correct, safe, and timely building of the road.
The issue, as they point out, is that outcomes are heterogeneous. If the outcome is a combination of your decision and random noise and circumstance outside of your control, then outcome will be weakly correlated with the actual value you provide. Half of punishments and rewards will be deserved, and half will be simply responding to the whim of fate.
If your punishment/reward mechanism is long-term enough, like say the profits of a company that can accumulate over time and wash out the negatives with positives, then risky but positive expectation behaviors will work. If your mechanism is "fire any CEO who has a year with negative profit, no matter why it turned out negative" then you're likewise going to incentivize conservative behavior that guarantees the bare minimum at the cost of unlucky but smart people who take risks with positive expected value.
Such things can be adjusted on the basis of what the project actually is. If the project is highway construction, and the road is not functional or doesn’t get built within a reasonable timeframe, then obviously that’s something to be accountable for. There might be more long-term projects— I imagine getting drugs approved is more of a safety problem, and I think you could expand the scope of accountability to include long-term health effects ten years on.
The trouble with procedure-based accountability is that it basically incentivizes foot-dragging by punishing people for not following thousands of procedures, while effectively not caring at all whether the results ever happen. I’ll admit that random bad luck can happen, but over a long enough timeframe, say ten projects a year, at least half would be successful by chance, and perhaps another quarter could be salvaged by careful work. That would give a person in that position a 7/10 success rate, which is pretty good.
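The "ten projects a year" intuition above can be checked with a quick binomial calculation (the per-project success rates are the comment's own rough figures, not real data): over ten projects, a roughly 7/10-rate operator separates visibly from someone coasting on coin-flip luck.

```python
from math import comb

def prob_at_least(k, n, p):
    """P(at least k successes in n independent projects, each with success prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The comment's competent operator: ~0.7 per-project success rate.
competent = prob_at_least(6, 10, 0.7)   # chance of a 6-or-better year
# Someone whose projects succeed only half the time by luck.
lucky_only = prob_at_least(6, 10, 0.5)
```

So while any single project outcome is noisy, a year's worth of outcomes already carries real information — which is the commenter's case for judging results over a window rather than judging paperwork.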
Can you provide some sense of what you have in mind for a project like I was talking about, say, a new fighter jet?
I mean, I’m not a military expert, so that’s mostly why I’m not thinking specifically about the military process. However, there are things you can do in the case of planes, mostly stress-testing them in ways that simulate combat and picking those that perform well. You don’t want a jet fighter that shakes apart at combat speeds or on quick turns, and so you simulate those things. And you can have those tests; I’m not completely opposed to procedures and tests, but they must be in service to the end goal, which in this case is a fighter jet that can handle combat conditions and has guns/missiles that fire accurately and explode as needed on impact.
As far as generals predicting the future of combat, this is a stickier problem, simply because it involves building when you don’t know exactly what you need. If we go to war with Iran, we need something different than if we go to war with China. There’s no real workaround for not knowing what to plan for, though I think the generals have better ideas about how to approach the problem than I do. Gun to head, I might go with an internal version of a warfare prediction market and listen more to the guys capable of predicting shorter-term scenarios correctly. This would be a rough proxy for the ability to predict long-term trends.
My point is to get the general systems aligned with accomplishing the things they’re tasked with doing. I want my highway department to build roads, not file endless paperwork on environmental impact, on obscure safety issues, or on the precise details of the demographics of the companies hired to build the road. At the end of the day what I and most of the public want are roads built and maintained that are reasonably safe to drive on.
Accountability based on outcomes can also encourage behavior that increases tail risks. In the wake of the 2008 financial crisis, the popular metaphor for this was “picking up nickels in front of a steamroller.” It involves taking risks with a negative expected value, but where the downside is a costly but improbable occurrence. This can appear to work very well for a number of years, until the improbable happens.
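The steamroller payoff profile is easy to make concrete. A minimal sketch, with the gain/loss sizes and probability as pure assumptions (not calibrated to any real strategy): steady small wins plus a rare large loss gives a negative expected value, yet most individual years look great — exactly what short-window outcome accountability rewards.

```python
import random

def nickel_year(rng, p_disaster=0.02, nickel=1.0, steamroller=-60.0):
    """One year of the strategy: a small gain almost always, a rare large loss."""
    return steamroller if rng.random() < p_disaster else nickel

# Per-year expected value is negative: 0.98 * 1 + 0.02 * (-60) = -0.22
ev = 0.98 * 1.0 + 0.02 * (-60.0)

# Yet a typical multi-year evaluation window contains mostly winning years,
# so a manager judged annually on outcomes looks consistently good
# right up until the steamroller arrives.
rng = random.Random(1)
years = [nickel_year(rng) for _ in range(20)]
winning_years = sum(1 for y in years if y > 0)
```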
This seems like a solved and understood problem. And Cowen himself is aware of the solution and has had interviews with the people that proposed it.
The solution is skin in the game. The person making the decisions needs to be personally impacted by outcomes.
That impact doesn't have to always be punishment, as @faul_sname points out below.
There is probably some low hanging fruit for accountability. Military projects should be tied to specific generals that care about a good legacy. And possibly a politician as well. Let those names become a curse or a word that means reliability to the grunts.
School board members should be required to have kids at the school. And possibly they should only be elected by those who have kids in school. It's possible that mixing in traditional politician accountability systems has made these positions worse. They should maybe be anonymous, or at least part of the board should be.
We require that politicians live in the areas or districts they represent. That is a decent start. Economic tie ins or closer representational tie ins should also exist. Lords of an area used to share their name with the area.
It mostly just feels that accountability is an afterthought. Something added in as a shitty ineffective process, because no one really cares about the hard work of real accountability systems. This feels backwards. The power shouldn't be allowed to exist in the first place without accountability. The Constitution was written partly as a way to say "this is how we won't make the same screwups as the last government".
Let the people in power figure out their own accountability systems, or just don't let them have power.
...
I don't see how that's really "skin in the game", by your own definition. It doesn't seem similar in kind to your other examples. Take the recent F-47 award. There are specific generals and politicians "tied to" it, at least at the moment of the major decision to award. I guess accountability has been achieved? What about all of my other discussion about the difficulties of judging the outcomes ten to twenty-five years from now? Donald Trump is certainly a politician who is trying to put his name on it. Whether you agree with his name being tied to it, looking at a life expectancy table, he's almost certainly going to be dead by the time some of those outcomes come 'round. Does he have "skin in the game" by your definition?
It's not an either or thing. It's a gradient.
Some things increase skin in the game.
I think tying names and reputation to weapons systems is one way to have skin in the game, but it's obviously not very much skin if it's only a small part of their reputation.
Seems like it's time for @faceh to tap his sign again. It must be getting worn down by now, maybe we can buy him a new one.
No idea what his sign is.
This sign
I have less idea than I did before. Shouldn't a metaphorical sign be, idk, twelve words max?
In short: the elites in our society no longer have skin in the game because they are protected from facing consequences for their failures. And he expects that this will go poorly, so we need to go back to making the elite face consequences for failure.
I feel like "the sign" one taps should be a single sentence. Like the classic tweet:
I meant fuck YOUR feelings, my feelings must be handled gently, like a baby bird.
I wonder if there's not an alternative way of framing all of this, not as "should we have accountability" but rather, "must accountability be externally legible, and what are the costs and consequences if it must?"
As an example, one of the interesting things about the modern university system is that it bolts two incompatible accountability systems on top of each other.
When my wife got her PhD, it was a long, grueling, intensive process. In particular, though, it was expensive in the sense that she had a world class expert in her field who paid quite a lot of attention to her during that multiyear process (she fortunately had a good and ethical advisor). And you can see (if this is working correctly) the outlines of an older system of accountability; in theory, my wife went through an intensive acculturation process by an existing cohort of experts who could, by the end of the process, vouch that my wife had internalized values and norms that meant she could be trusted by the broader cohort of researchers in her field, and thus ought to be able to independently drive a research program. That doesn't mean there's not also lots of peer review and criticism and whatever else, of course, just that she went through a process that, if it worked correctly, meant she should have an internal mechanism of accountability that meant she could be trusted, in general. All of this is much, much clearer in action if you look at universities operating many decades ago, when they had much less money, much less bureaucracy, and generally much more independence.
But clearly the current version of the University is flooded with extra deans, and administrators, and IRB reports, and massive amounts of paperwork, and giant endowments that are lawfare targets, and many layers of bureaucracy, and a bunch of arguably screwed up personal values from cultural evolution the last few decades. And many of those changes are intended to keep everyone in line and make sure everything is legible to the broader system. And so, in those spaces, the older model of producing virtuous professionals who can work cheaply by their own guidance is frequently superseded by this other "trustless society" model. And everything is slow, and expensive, and the values of the bureaucracy are often at odds with getting good work done, for all the reasons discussed in the linked conversation.
Or, to use another example, I've seen this claim made, by certain irritated black activists connected to screwed up urban neighborhoods, that there's just as much crime going on out in the white suburbs, but the cops are racist and just don't enforce laws out there. Which honestly, the first time I read that, was generally just kind of shocking and equal parts hilarious and depressing. Because of course, the entire point of going to a good suburb is that a critical mass of people have internalized an illegible, internal sense of accountability that means they mostly don't actually need cops around all that often. And everyone around them knows that about them, and about themselves. That's literally why certain people find them kind of stifling. (Obviously there are things that happen in suburbs like weed smoking or domestic abuse or whatever. But obviously we're talking about questions of degree here) Meanwhile, in distressed neighborhoods, you simply have to have cops and a legible system because a critical mass of people do not internalize that sense of accountability, and so you need the external accountability of the legible state.
Anyone who has worked in an effective small startup versus a giant profitable corporation has almost certainly run into these same divides, I suspect.
Getting back to the question of government in this context, a few years ago, I read through Michael Knox Beran's "WASPS: The Splendors and Miseries of an American Aristocracy", which was a great book, as well as C. S. Lewis's "Abolition of Man". And they were a really nice pairing to capture some of these big questions, about whether a society needs to produce leaders who have an internal sense of morality and virtue, who try to do the right thing at any given moment based on an internally cultivated sense of accountability, versus the transition to a world where accountability is an external, entirely legible thing where independent judgement and virtue can't be relied on and instead bureaucracy and technocracy solve all problems (like, say, the way that Uber driver reviews might, as just one simple example). And I think you can find upsides and downsides to each approach.
Thanks for the rec! I've been thirsty for something exactly like this but didn't know where to begin looking. Serendipitous.
You might also be interested in George Marsden's "The Twilight of the American Enlightenment: The 1950s and the Crisis of Liberal Belief", Thomas Leonard's "Illiberal Reformers", and Helena Rosenblatt's "The Lost History of Liberalism: From Ancient Rome to the Twenty-First Century", all of which also cover this same era and dig into some overlapping topics and themes.
I've been trying to understand the shift from the worldview of the progressive era (where a lot of our inherited institutions were built and cemented) to... well, whatever emerged in the 60s and 70s, and all of these books were really useful for me in that regard. Leonard's book was a bit dry, but lots of great information. The other two read pretty easily, IIRC.
Excellent post. One thing jumped out at me:
Punishment for failure seems like exactly the wrong way to handle accountability for a project that has a low probability of success. The motivation to reduce a 99% chance of being punished in 20 years to a 95% chance of being punished in 20 years just isn't going to be that large. This is especially true if the people involved are self-selecting into the position - nobody is going to self-select into a position with a near-certainty of punishment in 20 years unless the benefits now outweigh even a certainty of punishment in 20 years, so the punishment just can't be that severe.
Talk about rewarding the guy who made a prescient prediction 20 years ago, on the other hand, and I think the dynamics flip. Going from a 1% chance of collecting a $10M prize in 20 years to a 5% chance of collecting that same prize is substantial and motivational. Think of how hard scientists chasing Nobel prizes work.
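In raw expected-value terms, the two regimes can be put side by side; a quick sketch, where the probabilities come from the comment above but the stake sizes are invented for illustration:

```python
# Probabilities from the comment; stake sizes are illustrative assumptions.
PUNISHMENT_COST = 1_000_000   # hypothetical disutility of being punished
PRIZE = 10_000_000            # the comment's $10M prize

# Punishment regime: competence cuts P(punished) from 0.99 to 0.95.
punishment_delta = (0.99 - 0.95) * PUNISHMENT_COST

# Reward regime: competence lifts P(prize) from 0.01 to 0.05.
reward_delta = (0.05 - 0.01) * PRIZE

# Both regimes move the probability by the same four percentage points, so
# the expected-value difference is entirely in the stakes. The felt
# difference isn't symmetric, though: near-certain punishment deters
# self-selection into the job regardless of the margin, while a 5x better
# shot at a prize reads as a real improvement.
prize_odds_ratio = 0.05 / 0.01
```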
Flip the probabilities (i.e. a competent person would have a 99% success rate on a project and an incompetent one would have a 95% success rate) and I think the argument for accountability in the form of punishment makes more sense than accountability in the form of reward. That's sort of how it goes with professional licensing, and it's a pretty solid strategy in that context.
But yeah, "we should abandon accountability" sounds bad and counterintuitive but I think Tyler is right to call out "accountability" in the specific form of punishment for failing to achieve highly uncertain outcomes.