Capital_Room

rather dementor-like

0 followers   follows 0 users  
joined 2023 September 18 03:13:26 UTC

Disabled Alaskan Monarchist doomer

User ID: 2666

The problem with "just literally walk out of the hood" is something that applies not just to African-Americans, but poor people of various stripes around the world: family. One of the major reasons given for why poorer people have poor spending habits is that if you are known to have any money available, your kin will come to prevail upon you as to why they need some. If you "make it big," every auntie and half-sibling and unemployed cousin is going to come begging for a little help. Unless you're willing to simply cut ties with your entire family — not an easy ask for anyone (save maybe the most atomistic of WEIRDs) — your success is mostly going to be eaten up by your extended clan, often making it not worth the effort.

Plus, there's violence affecting family as well. You can work your way up, get into a top university, get married, become a respected judge or an English professor, live in a nice LA neighborhood, send your kids to private school… and then, one day, your nephew back in "the hood" in Philly has pissed off the wrong bunch and now has to come live with you for a while.

On disparate impact, prejudice, American civil rights law, and academic vs. lay definitions of words.

(Or, why HBD won’t save you.)

This is an adaptation of a couple of long replies I made to a mutual on Tumblr, relevant to some recent arguments made here. Specifically, how sophisticated academic and legal arguments can differ from the version that trickles out through journalism and politics into the general population, and how people misunderstand the post-Griggs “disparate impact” regime (further cemented by the 1991 Civil Rights Act), which is at once less ridiculous and yet more extreme in its implications than many of its critics think.


I saw someone here recently characterize said doctrine as the idea that “if a process produces disparate impact, then someone somewhere must have done something discriminatory.” But this itself can mean very different things depending on how one defines “something discriminatory.” Are we referring to treating individuals differently, or to treating groups differently? At one end, you get a kind of unfalsifiable “blood libel” reminiscent of classic antisemitic tropes about Jewish “elite overrepresentation,” and at the other, you get a tautology.

A major part of the problem is essentially a conflict over definitions: too many people mean too many different things when they use terms like “racism.” So I’m going to go ahead and do the thing around these parts of “tabooing” the term to start with. Instead, I’m going to talk about two distinct things. First, there’s “invidious discrimination.” That is, discrimination motivated by racial prejudice and stereotypes — treating individuals differently due to their race (judging by “color of their skin” rather than “content of their character,” as it were); what most ordinary people, especially those on the center right or the older left, are thinking of when they think of “racism.” Then there’s “disparate impact” — the existence of statistical differences between racial and ethnic group outcomes.

The standard criticism of Griggs v Duke Power is that it came to the ridiculous conclusion that disparate impact is itself presumptively evidence of invidious discrimination by someone, somewhere, in the hiring process until proven otherwise. Not too long ago, I saw someone (I think it was on Tumblr) who argued that yes, this would be a stupid thing for a court to conclude… but that this is not, in fact, what the court found in that case. Instead, they essentially deferred (as the courts usually do) to the EEOC’s own understanding of their mission. And what was that? Remember, they are the Equal Employment Opportunity Commission.

Well, what does equal opportunity mean in the context of employment? The answer the EEOC came to is, essentially, that “equal opportunity” means you are equally likely to be hired, which means that the rate at which a racial or ethnic group gets hired should be roughly in proportion to their prevalence in society. That is, anything which makes blacks less likely to be hired constitutes a lack of equal opportunity. That using an IQ test results in disparate impact is itself the problem. It doesn’t matter why. It doesn’t matter that the employer has no discriminatory intent. It doesn’t matter whether or not the makers of the IQ test had any racist stereotypes or ideas about blacks, conscious or unconscious. It could well be because blacks just have lower IQs. That last is not an excuse for hiring blacks less; it is simply an explanation as to how and why IQ tests deny blacks equal employment opportunity. All that matters, for the purpose of civil rights law, is that if it causes a minority group to be less likely to be hired on average, it is presumptively forbidden (unless proven absolutely essential).
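
To make the mechanics concrete: the EEOC’s enforcement guidelines operationalize roughly this idea with the “four-fifths rule” (29 CFR 1607.4(D)), under which a selection procedure is flagged for adverse impact whenever any group’s selection rate falls below four-fifths of the highest group’s rate. Here’s a minimal sketch of that screen — the applicant and hire counts are made-up numbers, purely for illustration:

```python
# Sketch of the EEOC "four-fifths" adverse-impact screen
# (29 CFR 1607.4(D)). The applicant/hire counts below are made up.

def selection_rates(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps name -> (applicants, hired); returns hire rates."""
    return {name: hired / apps for name, (apps, hired) in groups.items()}

def adverse_impact(groups: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag any group whose rate is under 4/5 of the best group's rate."""
    rates = selection_rates(groups)
    best = max(rates.values())
    return {name: rate < 0.8 * best for name, rate in rates.items()}

pool = {"group A": (200, 60), "group B": (100, 20)}  # hypothetical
print(selection_rates(pool))  # {'group A': 0.3, 'group B': 0.2}
print(adverse_impact(pool))   # group B flagged: 0.2 < 0.8 * 0.3 = 0.24
```

Note what the test conditions on: outcomes by group, not anyone’s intent — which is exactly the point above.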

To go back to “something discriminatory,” the argument is that a thing is racially discriminatory if it produces statistically distinct outcomes for different racial groups — that is, it gives rise to disparate impact. As I said before, with this definition, “if a process produces disparate impact, then someone somewhere must have done something discriminatory” becomes a tautological statement.

This is very much in line with the academic consensus. Ibram X. Kendi makes it quite clear in his glossaries, when he defines racism and anti-racism in terms of racial equity and racial inequity, which he in turn essentially defines in terms of disparate impact. That is, “anti-racism” is anything which narrows or eliminates disparate impact. And anything that doesn’t — that is, not only things which increase disparate impact, but anything that maintains it — is racist. Invidious discrimination is, ultimately, irrelevant.


Back when I used to do some math and physics blogging on Wordpress over a decade ago, I’d occasionally get crank comments about some piece of jargon which also has a different meaning in colloquial usage, and how consequently, physicists or mathematicians are “using the word wrong” and need to stop. I don’t quite recall the math examples (except that one was in group theory), but for physics, the most memorable example was the angry comments holding forth the position that “you can’t taste quarks,” and that therefore speaking of “flavors” of quarks is wrong and physicists must stop. I’m pretty sure there are also words whose usage as legal terms differs in important ways from the layman’s ordinary usage.

No, the experts are not going to change their established terminology to assuage the linguistic prescriptivism of random cranks asserting the absolute supremacy of lay definitions.

To an ordinary person for whom the term “racism” refers primarily to invidious discrimination and prejudice, statements like “colorblind racism” and “you can be prejudiced against white people, but can’t be racist against them” (I remember someone giving a quote to this effect from the show “Dear White People”) make no sense. Which implies that those making such statements are using a different definition (particularly the latter statement). If instead, you define “racism” as meaning disparate impact, then those become quite straightforward, even obvious. Does “colorblindness” reduce or maintain differences in racial outcomes? Note that ignoring a thing seldom makes it go away. Does issuing covid vaccines on racial lines increase or decrease the white-black gap in health outcomes? (This last example remains perhaps the biggest “red flag” issue for my mutual.) Does openly discriminating against whites, Asians, and Jews (my mutual’s go-to acronym here is JAW, in contrast to BIPOC) make the aggregate outcome differences between them and BIPOC bigger or smaller?

As one can see from the likes of Kendi, this latter is the academic definition, the term as used by the technical “experts” in the field. Further, if you buy the argument about Griggs, it’s also the legal definition — the definition used by the people who enforce civil rights law and policies “against racism.” And just like with quark flavors, you, the random layman, are not going to get them to change. Yes, one can argue that the term “racism” carries serious moral and legal weight in the way “flavor” does not, that definitional mismatch about such an emotionally-loaded term allows way too much “strategic equivocation” and other such games to let pass, and that a rectification of names is needed, but even then one shouldn’t expect the lay definition of the masses to win out over the elite definition.


One can also assert that the original intent of the civil rights law that created this system was to eliminate invidious discrimination, not to eliminate disparate impact, but this is disputed. (For example, Tim Wise does so here: “No, Precious, No One “Changed” the Meaning of Racism.”) In particular, there was a narrative in the early days that held that disparate impact was fully downstream from invidious discrimination, thus allowing the conflation of the two definitions of “racism.” If one did care more about ending disparate impact itself, well, then banning invidious discrimination was still the way to go about solving it. Except, of course, it was becoming clear by the time of Griggs that this didn’t hold. That eliminating Jim Crow and making things “colorblind” wouldn’t fully close the gaps. Hence the definition split.

(IIRC, it was @Hoffmeister25, either here or at the old place, who said that, in his experience as a white left-winger talking to blacks about these sorts of issues, American blacks were indeed mostly of this latter set. That, to the extent they signed on to “colorblindness,” it was because they thought it would fully solve the outcome gap — which was always what they really cared about — and once it became clear that it wouldn’t do that, they increasingly moved on in search of something that would.)

I believe it is in Stamped from the Beginning that Kendi specifically addresses and rejects this narrative. Racist ideas do not produce racist institutions, he has argued, but instead it’s the other way around. Our institutions result in disparate outcomes between races — have done so since the first blacks arrived in any notable numbers in Western societies (hence “from the beginning”) — and people come up with ideas to explain it, and when those ideas propose that the “problem” to be “fixed” lies somewhere with the underperforming minorities themselves, rather than the system, those are racist ideas.

Earlier in the original Tumblr thread, the other party said the following:

Arguments that, for instance, fit people and fat people should have the same lifespan, so we should redirect healthcare spending from fit people to fat people until they both live the same average lifespan, would mean reducing the total lifespan lived for a net loss in life-years.

For instance, to take the weight example, supporters might be open to “make everyone take a class about how fit people and fat people should have the same outcomes,” or, “redirect healthcare funding from fit people to fat people until lifespans equalize,” but wouldn’t be open to “invent ozempic”.

That’s pretty strange, isn’t it? Trying to equalize lifespan on the back end, resulting in a net loss of life-years, is way more oppressive than inventing a new diet pill.

This is where that example, and the Ozempic analogy, comes in. Because that method of addressing different life outcomes between “thin” and “fat” treats obesity as the thing to be fixed, not the fact that the obese have different outcomes. Sure, this might be okay to hold in the case of something like obesity — but even then, note my past comments, here and here, on Carleton University's Fady Shanouda attacking said medication as “fatphobia,” even “the elimination of fat bodies,” and saying that treatments for “the so-called obesity epidemic” were “steeped in fat-hatred.” But for people like Kendi, it’s never okay in the case of racial groups.

It’s like the stupid “positive action/self-esteem” shit we got in elementary school, about how “you’re fine just the way you are” (even back then, I knew that I was in some way broken and defective). It doesn’t matter if you think the “problem” is inborn, or cultural (the classic black conservative ‘stop listening to the rap music, get married and adopt bourgeois norms’ position), it’s still a racist idea if it holds that underperforming groups aren’t “fine just the way they are.”

Years ago, left-wing mixed-race HBD blogger Jayman was making pretty much the same argument, even as he asserted that the outcome gaps were almost entirely genetic. It’s the duty of society, he argued, to perpetually redistribute from the genetic “haves” to the genetic “have-nots” along racial lines, until racial equity is achieved. Genetic engineering to fix those genetic have-nots — even of the IVF with genetic screening kind — is “Nazi stuff.” (My mutual is very bullish on these technologies.) Crime rate differences between races are because blacks are genetically predisposed to crime… and therefore it’s not their fault, and the solution is to punish blacks less often and less harshly for the same criminal acts as whites, until their fraction of the prison population matches their fraction of the general population. Yes, it means white people accepting continuing victimization by black criminals — at one point in HBDChick’s comments section, Jayman described the contemporary situation as a “one-sided race war” by blacks against JAWs… and then asserted that “a two-sided war is always worse” than a one-sided war.

Other writings of his in a similar vein point to a couple of analogies — mine, not his. First, the classic injunction that a man must never hit a woman… even if she’s hitting him, first. He can try to gently restrain her, but otherwise, he’s obligated to stand there and take it… because he’s stronger and she’s weaker. Even more extreme, but also more broadly accepted: if you’re an adult, and a small child throwing a tantrum is pounding on your leg with their tiny fists, you definitely aren’t allowed to “hit them back,” no matter what. You stand there and take it because you can take it, and hitting back would do far, far more damage. Cue classic “when white people riot” meme with pictures of the Third Reich. BIPOC, due to their ‘genetic disprivilege,’ can’t do as much damage as JAWs can, and JAWs can also collectively absorb more attacks thanks to their ‘genetic privilege.’ Thus, they have a duty to “stand there and take it” with regards to racialized wealth redistribution, racialized vaccine distribution, or random subway shovings, and just as any adult man who “hits back” against a woman or a child is a brute, any white person who won’t simply accept this sort of thing as the price of their superior genes is a racist Klansman Nazi who will be dealt with accordingly. The goal is, as with Kendi, to ensure statistically equal outcomes for racial groups as they currently exist, and anyone who opposes that is racist.


Now, plenty of people have called for changes to current civil rights law to address this definitional issue and change things “back” to fighting invidious discrimination rather than fighting disparate impact. But the proposals won’t work, because they tend to miss how we got here. I think it was Chris Rufo who, while holding up Nixon of all people as the example to follow, called for the creation of a new Federal task force to track down and punish “anti-white discrimination” in the institutions. Given the nature of how people are hired for Federal bureaucracies, the nature of our credential-issuing institutions, and such, just who will end up running said task force in the long run?

I also recall reading recently about a British think-tank created in the wake of the Rotherham scandal to specifically address Islamic radicalization and lack of assimilation. Why were they being brought up? Because their most recent action was to release a book list with a warning of ‘if someone you know is reading these books, they may be on the path of radicalization to becoming a white supremacist.’ The list included works by Orwell, CS Lewis, and a book on the Rotherham scandal. So this institution, despite its founding mission, has decided that the real problem they need to fight is ‘Islamophobic white supremacy’ amongst the native British population.

Personnel is policy. The same thing applies with attempts to “repeal and replace” civil rights law to “get back” (again, see Tim Wise) to the lay “racism = discrimination” definition and away from the academic “racism = disparate impact” definition. Laws are but words on a page unless they’re enforced. And no matter how much we might say ‘this time when we say “fighting discrimination” we really mean fighting discrimination, including against white people, not “disparate impact,”’ so long as the people who interpret and enforce it are the same bunch as we have now — who all belong to the same academic consensus understanding as to what “racism” is and what their mission to fight it means — you’re going to keep getting the same results as we do now. And there is no (peaceful, legal) mechanism to replace that personnel.


Now, why does this matter? The answer to that question seems to be ‘because we (for certain elite values of “we”) have come to recognize (i.e. have decided) that it is our biggest issue and highest moral priority as a society, in keeping with the fundamental value of Equality, and, perhaps more importantly, have enshrined this into our law.’ And why have our elites chosen to define “racism” this way? Well, first there are all the cynical, power-seeking and power-maintaining reasons for doing so. But even that tends to give way under “generational loss of hypocrisy.” To quote @WhiningCoil:

I’m reminded of some joke about the difference between a cult and a religion. A cult is all made up by people. In a religion, all those people are dead. We’re coming up on generations that have only known demoralization propaganda, and who’s parents have only known demoralization propaganda. Whatever kayfabe social signaling hating cis white males, normal women, or wholesome white families used to mean, the people uncritically consuming it and signal boosting it now don’t understand it’s only supposed to be insincere virtue signaling. They’re ready to start pogroms now.

Thus, many of them probably actually believe it. Why? Well, because, as noted above, it’s what they and all their peers were taught (without “getting the joke” as it were), and it’s what their peer groups enforce as the moral consensus. But also because it fits with Haidt’s “moral foundations.” For people whose moral foundations are based primarily around the “fairness” axis, with the “care/harm” axis as the only other one in their worldview, appeals to “equity” will always have the strongest effect. (See also Moldbug’s “Puritan hypothesis.”)

One may or may not be familiar with the ultimatum game? (If not, I’d recommend taking a moment to read about it.) Even though, in terms of one’s personal outcomes, it’s always rationally preferable to accept a non-zero split no matter how unfair, most human beings are indeed willing to pay a price in lost opportunity to “punish” a (positive-sum) outcome they find too “unfair.” And, per Haidt, some people are far more sensitive to “unfairness” than others. Some people would reject a $51/$49 split. Some might reject a $501/$499 split.
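
For illustration, here’s a minimal sketch of that dynamic — the fairness thresholds are made-up numbers chosen to mirror the splits above, not empirical estimates:

```python
# Ultimatum game sketch: the proposer offers a split of a pot; if the
# responder rejects it as too unfair, BOTH players get nothing.
# Thresholds here are illustrative, not empirical.

def play(pot: float, offer: float, fairness_bar: float) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer / pot >= fairness_bar:       # responder accepts
        return pot - offer, offer
    return 0.0, 0.0                       # rejection punishes both sides

# A purely outcome-rational responder accepts any non-zero offer:
print(play(100, 1, 0.0))       # (99.0, 1.0)

# A highly fairness-sensitive responder rejects even a $51/$49 split,
# giving up $49 to punish $2 of "unfairness":
print(play(100, 49, 0.495))    # (0.0, 0.0)
```

The point of the second call is precisely the one above: the rejection is a net loss for the responder, yet people demonstrably make it.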

It’s why appeals to aggregate well-being — like in the fatness example, about how redistributing healthcare to equalize lifespans for fit-vs-fat (as opposed to treating the latter with Ozempic) will lead to a net loss of aggregate life-years — tend not to work. Because plenty of people care more about the relative distribution than the absolute aggregate. They see equitable destitution as morally preferable to fabulous prosperity even slightly unequally distributed. In their view, making people worse off in absolute terms is good if it also makes them more equal. This is a matter of terminal goals and moral axioms.

This also appeals to one of humanity’s worst tendencies: envy. Not just wanting what other people have (and you don’t have), but resenting those who have more than you. If your primary drive is that nobody ever have more than you do, then views centering “equity” like this allow you to portray your envy and resentment as moral virtue, which makes those views more attractive than alternatives that don’t.


I hope this helps clarify why the whole “HBD as counter to disparate impact” argument won’t really work. Even if you convince people “blacks have genetically lower average IQs,” or whatever, then you’ve only just explained why IQ tests are racist — because blacks deserve to be hired at proportional rates despite the lower average IQ. And so on. Under the currently-dominant framework of our society, it doesn’t matter how much biology contributes, if any, to current inequality, it is still our legal and moral duty to change “the system” in whatever ways necessary to produce equitable outcomes despite it. The established institutions of our society — government, academia, media, NGOs, etc. — are filled top-to-bottom with true believers who hold this as a terminal value, and it’s not going away until they all do (which is a problem, because there’s no voting them out).

But the blue tribe's motivation is harder for me to explain to myself. Why do they hate the red tribe so much?

My own theory is best summarized by a tag I often use on Tumblr: "Puritans gonna Puritan." See Albion's Seed and Yarvin's days as Moldbug.

In his posthumously-published The Collapse of American Criminal Justice, William J. Stuntz devotes an entire chapter (chapter 6, "A Culture War and Its Aftermath") to an earlier culture war waged "[b]etween the late 1870s and 1933," essentially by Puritan-descended New England elites, against various "vices." The most famous being alcohol — the one area where they failed — but also Mormon polygamy; lotteries and gambling; prostitution and "white slave trafficking" (see the Mann Act, and the original name thereof); and "obscene materials" (including pamphlets on birth control techniques; see the Comstock laws).

And as a different author (I don't remember which) noted, these moral crusades began pretty much as soon as the spread of the telegraph made it possible for teetotal New England Puritans to read in their newspapers about how Borderers and Cavaliers down South or out West lived. Because those people were Doing Wrong, and thus had to be made to behave right.

Mencken defined "Puritanism" as "the haunting fear that someone, somewhere, may be happy," but a better definition might be "the haunting fear that someone, somewhere, may be doing wrong." You are your brother's keeper (after all, remember the origin and context of that phrase). "Let not any one pacify his conscience by the delusion that he can do no harm if he takes no part, and forms no opinion. Bad men need nothing more to compass their ends, than that good men should look on and do nothing." "An injustice anywhere is an injustice everywhere." And so on.

A friend of mine once told me, years ago, about how a coworker of his came in one Monday morning teary-eyed and demanding a meeting so that the business could decide what they were going to do, collectively, to help address the plight of the Rohingya. A week before, this woman had never heard of them, and probably wouldn't have been able to locate Myanmar on a map. But she saw a news report about them, and that was enough for her to feel the burning need not only to "do something" herself, but to recruit everyone else she knew to do the same. It's something I see all the time online: "you don't want to intervene in [bad thing X]? Then you obviously approve of [X]!" Don't want to send more into Ukraine? Then you must think the Russian invasion was 100% justified, you Putin boot-licker!

There is a certain kind of person for whom moral disapproval and the drive to intervene are one and the same thing, inseparable. To them, a lack of a burning need to stop a thing is proof that you don't actually disapprove of it. It's the classic stereotype of the D&D Paladin played badly: "see evil, smite evil." They are constitutionally incapable of shrugging and saying "none of my business." And the Blue Tribe is full of them.

Consider every missionary of an evangelizing, expansionist faith who has set out to convert the heathen — by fire and sword if necessary — because it's their duty, it's the right thing to do, and it's for the heathen's own good. If you have the One True Faith, the true set of Universal Human Rights, the Objectively Correct Morality, then you have a duty to spread and enforce it everywhere you can.

Why fight the Red Tribe? Because if you don't, you are complicit in every wrong they do. If you let the Red Tribe keep being transphobic rather than try to stop them, then the blood of every trans kid in a Red Tribe area who commits suicide is on your hands. Like Kendi says, you are either actively anti-racist, or you are racist. It's one or the other. You are either fighting evil, or you are evil.

Why does the Blue Tribe hate the Red Tribe? Because it's in their nature to hate anyone who fails to share their values. Because this need to be a moral busybody, a crusader, a Social Justice Warrior, is a core characteristic of the Tribe, woven into their culture (and probably also a non-trivial amount of genetic predisposition).

Why does the Blue Tribe continually attack the Red Tribe, trying to force them to convert, or otherwise eliminate the "Red culture"? Because they're fundamentally incapable of not doing so. They can't stop themselves, and thus they will never stop.

That's my view, at least, for whatever it's worth.

I find myself increasingly perplexed by the people who think a second Trump term would be any kind of a big deal; that there’s anything he’d be able to do in a second term he wasn’t able to do in the first. It’s primarily in fellow right-wingers that I find this attitude most vexing, but it also holds to a lesser degree for the people on the left who hyperbolically opine in outlets like Newsweek and The Economist about how a second Trump term would “end democracy” and “poses the biggest danger to the world.”

Really, it’s not even about Trump for me, either. I don’t really see how a DeSantis or a Ramaswamy presidency would amount to anything either. What can they possibly accomplish, except four years of utterly futile attempts at action that are completely #Resisted by the permanent bureaucracy? Giving “orders” to “subordinates” that prove as efficacious as Knut the Great’s famous command to the tides?

I hear about how the president can do this or that, according to some words on paper, and I ask “but can he, really?” Mere words on paper have no power themselves, and near as I can tell, the people in DC haven’t really cared about them for most of a century now, nor is there any real mechanism for enforcing them.

If I, a random nobody, come into your workplace and announce that you’re fired, of course you still have your job. Security will still let you in when you show up each day, you can still log in and out of whatever, your coworkers will treat you the same, and you’ll still keep getting paid. Now, suppose your boss announces that you’re fired… but everyone else there treats that the same as the first case? You still show up, you still do the work, you still get paid. Are you really fired, then?

TSMC's ever-delayed plants in America need capable staff.

A relevant piece I read recently in The Hill: "DEI killed the CHIPS Act"

The Biden administration recently promised it will finally loosen the purse strings on $39 billion of CHIPS Act grants to encourage semiconductor fabrication in the U.S. But less than a week later, Intel announced that it’s putting the brakes on its Columbus factory. The Taiwan Semiconductor Manufacturing Company (TSMC) has pushed back production at its second Arizona foundry. The remaining major chipmaker, Samsung, just delayed its first Texas fab.

This is not the way companies typically respond to multi-billion-dollar subsidies. So what explains chipmakers’ apparent ingratitude? In large part, frustration with DEI requirements embedded in the CHIPS Act.

Commentators have noted that CHIPS and Science Act money has been sluggish. What they haven’t noticed is that it’s because the CHIPS Act is so loaded with DEI pork that it can’t move.

There’s even plenty for the planet: Arizona Democrats just bragged they’ve won $15 million in CHIPS funding for an ASU project fighting climate change.

That project is going better for Arizona than the actual chips part of the CHIPS Act. Because equity is so critical, the makers of humanity’s most complex technology must rely on local labor and apprentices from all those underrepresented groups, as TSMC discovered to its dismay.

In short, the world’s best chipmakers are tired of being pawns in the CHIPS Act’s political games. They’ve quietly given up on America. Intel must know the coming grants are election-year stunts — mere statements of intent that will not be followed up. Even after due diligence and final agreements, the funds will only be released in dribs and drabs as recipients prove they’re jumping through the appropriate hoops.

For instance, chipmakers have to make sure they hire plenty of female construction workers, even though less than 10 percent of U.S. construction workers are women. They also have to ensure childcare for the female construction workers and engineers who don’t exist yet. They have to remove degree requirements and set “diverse hiring slate policies,” which sounds like code for quotas. They must create plans to do all this with “close and ongoing coordination with on-the-ground stakeholders.”

No wonder Intel politely postponed its Columbus fab and started planning one in Ireland. Meanwhile, Commerce Secretary Gina Raimondo was launching a CHIPS-funded training program for historically black colleges.

So, no, the people in charge are indeed willing to prioritize "equity" over having microchips.

the grinding stone of competition

There's a lot of ruin in a nation. The "fall of Rome" was a centuries-long decline only visible in hindsight; from within, it just seemed like a series of individual, unrelated crises. The Global American Empire is still the sole hegemon of our "unipolar" world order, and can remain on top for quite some time despite ongoing encrudification. If it can take down any major potential competitors while that still holds — ensure China "grows old before it grows rich" and collapses from its terrible demographics, grind down Russia until it breaks apart, et cetera — and use its remaining power to spread the ideology to as much of humanity as possible, then there won't really be anyone left outside the GAE to "bomb it into oblivion" even as it decays.

People seemed to put up leaders who legitimately wanted to solve whatever the problem was, and the writers tended to play that straight up. The person not only wanted to do good, but he was allowed to defeat evil and fix the problems and we actually had a happy ending.

I'm reminded here of a Tanner Greer piece at City Journal I read recently, on the popularity of dystopian YA novels (one of the many pieces drawn upon in an effortpost I'm currently mentally composing, involving Weberian rationalization, software “eating the world,” “computer says ‘no’,” Jonathan Nolan TV series, “Karens” wanting to talk to a manager, the TSA, Benjamin Boyce interviewing Aydin Paladin, and the Butlerian Jihad):

This is the defining feature of the YA fictional society: powerful, inscrutable authorities with a mysterious and obsessive interest in the protagonist. Sometimes the hidden hands of this hidden world are benign. More often, they do evil. But the intentions behind these spying eyes do not much matter. Be they vile or kind, they inevitably create the kind of protagonist about whom twenty-first century America loves to read: a young hero defined by her frustration with, or outright hostility toward, every system of authority that she encounters.

The resonance these stories have with the life of the twenty-first-century American teenager is obvious. The stories are, as perceptive film critic Jonathan McAlmont observes, “very much about living in a world where parents discuss things out of earshot.” The protagonists all struggle “to perform the role that grownups have assigned [them], despite the fact that [they] are still coming to terms” with their own identity and purpose. Teenage frustration with a lack of agency is the fuel that propels Anglophone pop culture. The prewar imagescape of these novels supplies extra emotional resonance, styling the problem of out-of-date authority as a holdover from a stuffier, more restrictive past. For the hero of a YA tale, this general problem would be resolved in the final, climactic battle with the powers that be. In his or her quest for victory, the protagonist would journey from pawn to player. There are few transformations for which the modern teenager yearns more.

And yet, these stories increasingly resonate with modern adults as well:

This obsession is grounded in experience. It is not just twenty-first-century teenagers who feel buffeted by forces beyond their control. Bearing the brunt of a recession we did not cause, facing disastrous wars the stakes of which were unclear at best, the citizens of the liberal West spent the last two decades nursing the wounds of lost agency. This loss extends past grand politics. A series of studies have traced this process in the United States. Increasingly, Americans “bowl alone”: the social clubs, civic societies, and congregations that once gave normal people meaningful social responsibilities have declined significantly. Most issue-oriented action groups that remain are staffed by professionals who seek only money from their members. As a growing number of Americans live in crowded cities, government becomes more remote and less responsive to any individual’s control—a problem exacerbated by the increasingly national cast of American politics. More important still, one-third of Americans now find themselves employed by corporations made impersonal by their scale. The decisions that determine the daily rounds of the office drone are made in faraway boardrooms—rooms, one might say, “where adults discuss things out of earshot.” What decides the destiny of Western man? Credit scores he has only intermittent access to. Regulations he has not read. HR codes he had no part in writing.

For the most part, the citizens of the West have accepted this. They have learned to comply with expert directives. They have learned to endure by filing complaints. They have learned to ask first when faced with any problem: “Can I speak to the manager?” They have accustomed themselves to life as a data point.

Yet if these novels speak to the sum of our anxieties, they are a poor guide to escaping them. In the world of YA speculative fiction, those who possess such power cannot be trusted. Even worse than possessing power is to seek it: our fables teach that to desire responsibility is to be corrupted by it. They depict greatness as a thing to be selected, not striven, for. This fantasy is well fit for an elite class whose standing is decided by admissions boards, but a poor guide for an elite class tasked with actually leading our communities.

The key part that stood out to me was the final two paragraphs:

Yet outside of the modern fairy realm, power is not given, but created. The morality of the twenty-first-century fairy tale is in fact a road map to paralysis. Its heroes begin as the playthings of manipulative and illegitimate authorities, their goodness made clear by their victimhood. But faced with this illicit order, nothing can be done: even rebellion can be trusted only to unwilling rebels. Our fairy tales imagine a world where only those who do not want power are deemed fit to use it. Translate that back to reality, and we are left with a world where all power is, and will always be, deemed illegitimate. No magic curses justify the power of our managerial class; ultimately, their legitimacy rests on how well they wield it.

In the stories of the modern fairy realm we see the seeds of stagnation. Protesters who occupy Zuccotti Park without the faintest notion of what their occupation should accomplish, political parties that seize all branches of the government without a plan for governing, Ivy League students pretending that they are not, in fact, elite—all of this flows from a culture that can articulate the anxieties of the overmanaged but cannot conceive of a healthy model of management. We cannot suffer ourselves to imagine righteous ambition even in our fantasies. Responsible leadership is not possible even in our fairy world. Little wonder so few strive to realize it in the real one.

We seem to have become allergic to the idea of human leadership, of having a person — and not a faceless bureaucracy — actually make decisions, use common sense, exercise personal agency, with "the buck stops here" responsibility for them. And it's the latter that really stands out. It's not just that we seem to fear the idea of having someone else in charge of us — though we submit readily to Hannah Arendt's rule of Nobody, "a tyranny without a tyrant" — but that we're perhaps even more afraid of stepping up and taking charge ourselves, of bearing responsibility for that power and its consequences. We find it better to be a human cog in the machine, able to say "I don't make the rules, I just follow them," than to take ownership of the exercise of power.

(Can you imagine someone in the West writing a story of an orphaned child soldier achieving his lifelong ambition of becoming military dictator, and not having it be played as a tragedy?)

Seems kind of ill-advised that they're calling attention to the system being, "you can vote for whoever you want, as long as they're one of the state-approved choices".

Why? It's entirely in line with things I've been seeing for years now.

I recently saw someone on Tumblr trying to ground some of the more extreme left-wing fears by arguing that the worst case for a "Trump dictatorship" is that we become… Hungary. And I was reminded of something else I had seen recently — a screenshot of a Keith Olbermann tweet that presented exactly that as the horror scenario to be avoided at all costs: that we're still under threat of the end of Our Democracy and becoming an 'authoritarian' state "like Hungary or Poland." Yes, this was before the recent Polish election. Some undemocratic "authoritarianism" that was, huh?

What's wrong with Hungary, anyway? The answer I get, when I push back and get people to dig down, is that guys like Orbán aren't supposed to win no matter how popular with the voters.

Roger Kimball, in discussing Colorado, made a point similar to yours:

In fact, what they have just voted to preserve is not democracy but “Our Democracy™.” Here’s the difference. In a democracy, people get to vote for the candidate they prefer. In “Our Democracy™,” only approved candidates get to compete.

Well, for years I've been seeing people, from Curtis Yarvin to random YouTube comments, all making the same point about how you can't just let people "vote for the candidate they prefer," and all giving the same example why. So I looked to see if anyone had explicitly made that same connection in this case. The closest is Joe Matthews at Zócalo: "The Case for Taking Trump Off the Ballot." Just like banning the AfD, removing Trump is what "defensive democracy" demands.

To paraphrase Yarvin in a Triggernometry interview, we saw what happens when you let the people vote for the candidate they prefer without limiting it to approved candidates… 'in early-1930s Germany.' We can't ever risk "repeating the mistake Weimar Germany made when they let Nazis take office just because a plurality voted for them" (as one YouTube comment put it). If you don't limit the options to "state-approved choices" and let people vote for whoever they want… they'll vote for Hitler. Never Again. Never again can the masses be allowed to choose their own leaders unguided. If we are to be a democracy, then "democracy" must be defined as something other than that. (Like 'democracy is when elites enact the Rousseauan "common will" — as determined by a technocratic intellectual vanguard — whether the masses like it or not; and therefore the greatest threat to Our Democracy is a "populist" who will do the unthinkable and give the voters what they want.')

Matthews:

Blocking candidates or parties from elections doesn’t come naturally to democratically minded people. Nor should it—it’s a despot move. Autocracies and dictatorships routinely maintain and extend their power by blocking opposition figures from standing for office, such as when the Chinese government banned pro-democracy candidates in Hong Kong’s 2020 vote.

But then…

It is also why it makes sense for people around the world to examine how Germany, where the Nazi party took power through elections, reckons with those who threaten its democracy.

Or, from Tumblr user Eightyonekilograms:

I mean, I didn’t say there was an actionable strategy. Actually I’m pretty sure there isn’t one: for a societal system based both on laws and implicit norms (which they all are), you have to stop someone like Trump— someone who has no shame and no regard whatsoever for the law or the norms— before he gets any power. By the time you get to the point we’re at now, it’s way too late: all the options are bad. Either you disqualify him, which is flagrantly undemocratic and will be seen and reacted to as such, or you don’t, and now you’ve set up a ghastly incentive gradient. If there’s no punishment (whether legal or electoral) for attempting a coup, then there’s no reason not to try over and over again until you succeed. Which is not theoretical, it’s exactly what we’re observing now: Trump knows that punishment is unlikely, so he feels free to say he’ll be a dictator on day one, the Heritage Foundation isn’t even bothering to be secret about assembling the “Project 25” team that will put an end to that pesky democracy, etc.

(Emphasis in original)

So, yes, you do have to "save democracy from itself," even if that requires "undemocratic" measures like Colorado has taken.

Or so goes the argument.

So again we see that whatever the supposed rules and procedures about how these things are “supposed to” work, in reality what matters is what you’re able to get men with guns to enforce. (As I’ve said before, a lesson I learned in 7th grade.)

And this gets to one of my common political arguments and frustrations — the perennial criticism of my support for restoring human authority and decision-making. In (the portion I watched of) Benjamin Boyce’s interview with Aydin Paladin, he makes this standard argument against her monarchism: but if you have a king, then won’t he become a tyrant, and take away people’s freedom by enacting a parade of horribles… all of which, Aydin pointed out in reply, are things which democratically-elected governments have done. People ask ‘what if the local aristocrat makes an unfair/unjust/tyrannical decision?’ as if modern bureaucracies can’t do the same (and throw in all the sorts of mistakes and irrationalities — like the classic ‘you must fill out and submit Form A before we can give you Form B, you must fill out and submit Form B before we can give you Form A’ class of problems — of which only bureaucracies are capable).

What if Baron Such-and-such throws you in the dungeon without trial? Well, what if the Pennsylvania Ag Department does it? The difference seems to be that the bureaucracy adds diffusion of responsibility. If the Baron locks you up, everyone knows who to blame. But when it's a faceless bureaucracy, full of jobsworth human cogs who 'don't make the rules, just follow them,' nobody is to blame; and, as @pigeonburger notes below, nobody in government really suffers serious consequences.

Some people talk about “Brazilification,” viewing us as moving in the direction of that South American nation. I say we should be worried less about becoming like Brazil the country, and more about becoming like Brazil the Terry Gilliam film.

But infertile opposite-sex couples could always get married...?

I have a pet analogy I've been using for over a decade on this point, and I recently encountered a term that helps better encapsulate what said analogy is gesturing toward: "ordered toward."

Consider hand grenades. Then consider a movie prop "grenade" that looks like the real thing, but isn't. The law treats those two things very differently, and for a clear and obvious reason: real grenades explode, fake movie props don't.

But, one might argue, some subset of "real" grenades are "duds": due to manufacturing defects, the effects of time, or whatever, they don't explode when you release the spoon. But the law makes no effort to carefully identify and separate out the duds, to be classed with the movie props as "non-explosive" — instead, it classifies them with the fully-functional grenades. Therefore, the law can't actually be about "explosive vs. non-explosive," and the line drawn between real and movie-prop grenades is illegitimate and should be removed.

Of course, most people would likely reject this argument. The key is precisely the phrase I spoke of before: ordered toward. A real grenade is ordered toward exploding — even if, thanks to our living in an imperfect, entropic universe, some subset fall short of that purpose — while a movie prop is not ordered toward exploding. For a "dud" grenade, the "non-explosiveness" is incidental, accidental. For the look-alike movie prop, the non-explosiveness is inherent.

In short, this is an argument that teleology can constitute a valid "joint" upon which reality may be "cleaved," particularly when it comes to law.

(It continues to dismay me how many secular people firmly accept the creationist philosophical principle that "purpose" requires a conscious purpose-giver, when an important element of the theory of evolution by natural selection is that it provides an explanation of how an undirected, atelic process can produce directed, telic entities. The usual rejoinder people make, when I argue this, is to conflate the process of natural selection with the products of natural selection; which, as I like to say, is like confusing tennis shoes with a tennis shoe factory.)

So unless you're prepared to accept perpetual animosity between you and your political enemies,

My political enemies have demonstrated, to my satisfaction, that they will hold perpetual animosity against me and mine, and so I'm pretty much ready to return the favor. What then?

But the overall framework of fat oppression presupposes that the core of the problem is the way society treats obese people, and the movement’s primary goal is to reduce messages that inflict shame. This shame, activists argue, is the main source of suffering for fat people.

A particularly extreme version of this may be Carleton University's Fady Shanouda:

A Canadian professor who specializes in "fat studies" claimed that aiming for an obesity-free future was "fatphobic" and blasted the "biopolitics" agenda as an attack against fat people.

Fady Shanouda is an associate professor at the Feminist Institute of Social Transformation at Carleton University in Canada. Shanouda "draws on feminist new materialism" to examine the intersections between "fat studies," colonialism, racism…, and "queer- and transphobia."

The Critical Disability Studies scholar wrote that it was "fatphobic" to have a public health conversation and to tamp down on obesity, according to a Monday article in The Conversation.

In particular, Shanouda believes the marketing of the drug Ozempic – as a method to combat obesity – was the latest example of fatphobia in the culture.

"The latest wonder drug… [was] invented to help diabetics regulate blood glucose levels, but has the notable side-effect of severe weight loss. It has been heralded by many to culminate in the elimination of fat bodies. The fatphobia that undergirds such a proclamation isn’t new," Shanouda said.

The professor lamented how the effectiveness of obesity treatments could eliminate "fat activism" and "the fat liberation movement."

He added that treatments for "the so-called obesity epidemic" were "steeped in fat-hatred."

"Elimination of fat bodies." Shanouda talks about a drug that helps people lose weight — one they voluntarily take — with the sort of language I usually see used to talk about things like ethnic cleansing. I'm not sure how much Grandma's rules of politeness address that.

from brewing to mycoprotein cultivation

AIUI, most of these involve single-celled organisms, with their own abilities to fight off rival microbes that animal muscle cells, adapted to the presence of a broader immune system, lack. And for the rest, look at how much the products cost — and that's usually chemicals produced by the organisms rather than the cultured cells themselves. Or how much a financial hit is taken if a vat or batch "goes bad." You'll be required to maintain a food production plant more sterile than a medical lab, at industrial scale.

Again, I read a lot of stuff without remembering where I read it, so I don't have cites on hand, but a quick google search gave this link: "Lab-grown meat is supposed to be inevitable. The science tells a different story."

It’s a digital-era narrative we’ve come to accept, even expect: Powerful new tools will allow companies to rethink everything, untethering us from systems we’d previously taken for granted. Countless news articles have suggested that a paradigm shift driven by cultured meat is inevitable, even imminent. But Wood wasn’t convinced. For him, the idea of growing animal protein was old news, no matter how science-fictional it sounded. Drug companies have used a similar process for decades, a fact Wood knew because he’d overseen that work himself.

Wood couldn’t believe what he was hearing. In his view, GFI’s TEA report did little to justify increased public investment. He found it to be an outlandish document, one that trafficked more in wishful thinking than in science. He was so incensed that he hired a former Pfizer colleague, Huw Hughes, to analyze GFI’s analysis. Today, Hughes is a private consultant who helps biomanufacturers design and project costs for their production facilities; he’s worked on six sites devoted to cell culture at scale. Hughes concluded that GFI’s report projected unrealistic cost decreases, and left key aspects of the production process undefined, while significantly underestimating the expense and complexity of constructing a suitable facility.

“After a while, you just think: Am I going crazy? Or do these people have some secret sauce that I’ve never heard of?” Wood said. “And the reality is, no—they’re just doing fermentation. But what they’re saying is, ‘Oh, we’ll do it better than anyone else has ever, ever done.”

GFI’s imagined facility would be both unthinkably vast and, well, tiny. According to the TEA, it would produce 10,000 metric tons—22 million pounds—of cultured meat per year, which sounds like a lot. For context, that volume would represent more than 10 percent of the entire domestic market for plant-based meat alternatives (currently about 200 million pounds per year in the U.S., according to industry advocates). And yet 22 million pounds of cultured protein, held up against the output of the conventional meat industry, barely registers. It’s only about .0002, or one-fiftieth of one percent, of the 100 billion pounds of meat produced in the U.S. each year. JBS’s Greeley, Colorado beefpacking plant, which can process more than 5,000 head of cattle a day, can produce that amount of market-ready meat in a single week.

And yet, at a projected cost of $450 million, GFI’s facility might not come any cheaper than a large conventional slaughterhouse. With hundreds of production bioreactors installed, the scope of high-grade equipment would be staggering. According to one estimate, the entire biopharmaceutical industry today boasts roughly 6,300 cubic meters in bioreactor volume. (1 cubic meter is equal to 1,000 liters.) The single, hypothetical facility described by GFI would require nearly a third of that, just to make a sliver of the nation’s meat.

It’s a complex, precise, energy-intensive process, but the output of this single bioreactor train would be comparatively tiny. The hypothetical factory would need to have 130 production lines like the one I’ve just described, with more than 600 bioreactors all running simultaneously. Nothing on this scale has ever existed—though if we wanted to switch to cultivated meat by 2030, we’d better start now. If cultured protein is going to be even 10 percent of the world’s meat supply by 2030, we will need 4,000 factories like the one GFI envisions, according to an analysis by the trade publication Food Navigator. To meet that deadline, building at a rate of one mega-facility a day would be too slow.

All of those facilities would also come with a heart-stopping price tag: a minimum of $1.8 trillion, according to Food Navigator. That’s where things get complicated. It’s where critics say—and even GFI’s own numbers suggest—that cell-cultured meat may never be economically viable, even if it’s technically feasible.

“A key difference in the CE Delft study is that everything was assumed to be food-grade,” Swartz said. That distinction, of whether facilities will be able to operate at food- or pharma-grade specs, will perhaps more than anything determine the future viability of cultivated meat.

The Open Philanthropy report assumes the opposite: that cultivated meat production will need to take place in aseptic “clean rooms” where virtually no contamination exists. For his cost accounting, Humbird projected the need for a Class 8 clean room—an enclosed space where piped-in, purified oxygen blows away threatening particles as masked, hooded workers come in and out, likely through an airlock or sterile gowning room. To meet international standards for airborne particulate matter, the air inside would be replaced at a rate of 10 to 25 times an hour, compared to 2 to 4 times in a conventional building. The area where the cell lines are maintained and seeded would need a Class 6 clean room, an even more intensive specification that runs with an air replacement rate of 90 to 180 times per hour.

The simple reason: In cell culture, sterility is paramount. Animal cells “grow so slowly that if we get any bacteria in a culture—well, then we’ve just got a bacteria culture,” Humbird said. “Bacteria grow every 20 minutes, and the animal cells are stuck at 24 hours. You’re going to crush the culture in hours with a contamination event.”

Viruses also present a unique problem. Because cultured animal cells are alive, they can get infected just the way living animals can.

“There are documented cases of, basically, operators getting the culture sick,” Humbird said. “Not even because the operator themselves had a cold. But there was a virus particle on a glove. Or not cleaned out of a line. The culture has no immune system. If there’s virus particles in there that can infect the cells, they will. And generally, the cells just die, and then there’s no product anymore. You just dump it.”

If even a single speck of bacteria can spoil batches and halt production, clean rooms may turn out to be a basic, necessary precondition. It may not matter if governments end up allowing cultured meat facilities to produce at food-grade specs, critics say—cells are so intensely vulnerable that they’ll likely need protection to survive.

Of course, companies could try. But that might be a risky strategy, said Neil Renninger, a chemical engineer who has spent a lot of time around the kind of equipment required for cell culture. Today, he is on the board of Ripple Foods, a dairy alternatives company that he co-founded. Before that, for years, he ran Amyris, a biotechnology company that uses fermentation to produce rare molecules like squalene—an ingredient used in a range of products from cosmetics to cancer therapeutics, but is traditionally sourced unsustainably from shark liver oil.

“Contamination was an issue” at Amyris, he said. “You’re getting down to the level of making sure that individual welds are perfect. Poor welds create little pits in the piping, and bacteria can hide out in those pits, and absolutely ruin fermentation runs.”

The risks are even more dire when it comes to slow-growing animal cells in large reactors, because bacteria will overwhelm the cells more quickly. At the scale envisioned by proponents of cultured meat, there is little room for error. But if aseptic production turns out to be necessary, it isn’t going to come cheap. Humbird found that a Class 8 clean room big enough to produce roughly 15 million pounds of cultured meat a year would cost about $40 million to $50 million. That figure doesn’t reflect the cost of equipment, construction, engineering, or installation. It simply reflects the materials needed to run a sterile work environment, a clean room sitting empty.
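For a sense of scale, dividing that clean-room shell cost by the facility's annual output gives a rough per-pound figure (my derivation, not the report's):

```python
capacity_lb_per_year = 15e6           # ~15 million lb of cultured meat per year
for shell_cost in (40e6, 50e6):       # Humbird's clean-room range, shell only
    print(f"${shell_cost / capacity_lb_per_year:.2f} per annual lb of capacity")
# ~$2.67 to ~$3.33 per pound of yearly capacity, before equipment,
# construction, engineering, or installation
```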

According to Humbird’s report, those economics will likely one day limit the practical size of cultured meat facilities: They can only be big enough to house a sweet spot of two dozen 20,000-liter bioreactors, or 96 smaller perfusion reactors. Any larger, and the clean room expenses start to offset any benefits from adding more reactors. The construction costs grow faster than the production costs drop.
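The shape of that tradeoff is easy to illustrate with a toy model. Every coefficient below is invented; only the structure (a fixed site cost, a linear per-reactor cost, and a clean-room cost that grows superlinearly with reactor count) mirrors the report's argument, and that structure alone is enough to produce a finite "sweet spot":

```python
# Toy model, not from Humbird's report: unit cost = capex / output, where
# output scales linearly with reactor count but the clean-room term grows
# superlinearly. All coefficients are invented for illustration.
def unit_cost(n, fixed=50.0, per_reactor=1.0, cleanroom=0.01, exponent=2.0):
    capex = fixed + per_reactor * n + cleanroom * n ** exponent
    return capex / n

best = min(range(1, 500), key=unit_cost)
print(best, round(unit_cost(best), 2))  # interior minimum (n = 71 with these numbers)
```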

Also "Is Lab-Grown Meat Commercially Feasible?":

The first of Humbird's grievances is the need for a cheap and plentiful supply of nutrients for the cells. [15] Currently, such cell food is produced for pharmaceutical purposes, so it is expensive and not produced in the vast quantities required to have cultured meat supplant animal meat on the global market. [15] In fact, nutrients are currently the most expensive part of cultured meat production. [15] On top of that, the most popular source for key biochemicals needed for proper cell growth is fetal bovine serum (FBS). [16] FBS is harvested (lethally) from unborn cattle after the mother is slaughtered. [16] A replacement for FBS will have to be found to keep the ethics people on cultured meat's side. Additionally, the cells' food would need to be extremely clean. In the case of animal meat, any trace toxins in the animal feed are (mostly) filtered out by the animal's liver, and do not end up in the muscle. For cultured meat, however, the cellular slurry inside the bioreactor has no liver, meaning any toxin left in the feed ends up directly on your plate.

An effective scale-up of cultured meat production would also require an incredibly clean work environment. The warm, nutrient-rich bioreactor, ideal for animal cell growth, is also the perfect environment for pathogens (bacteria and viruses). If a single pathogen managed to get a foothold in the bioreactor, it would quickly overwhelm the animal cells, killing the entire batch. This restriction requires labs to be at least Class 6 cleanrooms. [15] Importantly, since that level of sanitation requires all pipes, windows, etc. to be perfectly sealed, as well as ventilation replacing the air 25 times an hour, they get much more expensive with size. Essentially, you can have a large factory or a clean factory. Cultured meat requires both. In animals, pathogens are mostly dealt with by the immune system. Since the cell slurry has no immune system, great care and expense must be invested to ensure the cells' safety.

The final problem I'll discuss is the limits on the size of the bioreactors. Larger bioreactors are more space-efficient, allowing you to have smaller cleanrooms, reducing those sanitation costs. However, larger bioreactors are also riskier, since a single contamination event ruins a correspondingly larger batch. Beyond that cost balance lies another problem with larger bioreactors: waste management. When left to their own devices, cells build up waste products which slow down future cell growth. Cycling out this waste effectively is only possible in small bioreactors, requiring more reactors and therefore larger, much more expensive cleanrooms. [15] Another possible solution is to use slow-growing cell cultures, since they are more waste-efficient; however, less frequent batches mean more reactors are required, again ratcheting up the price. [15] In animals, waste is extracted via blood vessels. Since cell cultures have no blood vessels, cell waste becomes a problem.

Are the rich really so attracted to the idea of cheap servants that they would see their communities destroyed?

As TIRM notes, it's not their communities. I've seen plenty of libertarian types point out that one can find enclaves of "1st world" living conditions in most any destitute "3rd world" country — you just have to be rich enough to afford it. And if you're the right kind of rich, then importing a lot of "cheap servants" will make you even richer — or at least make this sort of gated community cheaper — even as the country as a whole declines.

(Again, I like to point to open borders advocate Nathan Smith's "How Would a Billion Immigrants Change the American Polity?", because I much appreciate his forthrightness about the outcomes he prefers — though I think a better comparison than the Roman or British Empires would be the UAE.)

I don't understand how people who are in favor of mass-immigration can just so completely throw caution to the wind. Even with high confidence that mass immigration won't be a problem, if you're wrong, it's game over.

Well, there's the Nathan Smith "How Would a Billion Immigrants Change the American Polity?" position — from people often given to repeating Milton Friedman's comment about mass immigration and the welfare state and, like him, arguing for picking the former over the latter — which holds that the pressures of mass immigration will force the system to adapt (in ways these sorts of people find personally favorable) to keep functioning. While Smith makes analogy to Rome, the better point of comparison is the UAE.

And I remember Bryan Caplan making similar "the system will have to adapt" arguments (back on Twitter a few years ago, I think), and someone pushing back, pointing to our politicians and asking what happens if they don't make the changes open borders libertarian types ask for. He gave the same response he's given some other times: mass immigration is ultimately a self-limiting problem. Immigrants tolerate the language barrier and cultural difference issues because they're outweighed by the economic benefits of living in a country like America or Canada. Thus, if the effects of this immigration become increasingly detrimental, the economy and quality of life will decline, reducing that incentive to keep coming. Once America is reduced to a level near Mexico, Latin American immigration will stop, and perhaps even reverse (and similarly with Canada versus the sources of its immigrants).

Sure, someone argued back, but then you've still wrecked the country, even if the process eventually stalls out. That, Caplan replied, is just another reason to support immigration — because if that does happen, well, English is enough of a lingua franca in academia that a famous economist like him can get a job teaching at pretty much any university anywhere on Earth. (And as for those who aren't famed econ professors like him? That's their problem.)

And I think it was Tyler Cowen who made the point that "3rd world countries" aren't uniformly terrible; that in the cities you can find pockets where the elites live in "1st world" conditions with the added bonus of cheap personal servants — you just have to be able to afford it. But, much in line with Smith's position, if you're one of those who isn't in job competition with immigrant labor, but instead positioned to benefit from it, then your economic gains will allow you to pay for the gated community, the private security, etc. to let you maintain your 1st world lifestyle even if most of the rest of the country ends up immiserated, with the added benefit of affordable personal servants and cheap chalupas.

So, IME, a lot of "f— you, got mine" attitude, and confidence that no matter how bad it gets, the consequences will only fall on the little people beneath them.

Well, that and a lot of "bleeding heart" types who simply don't think about long-term or large scale consequences, and who, at their worst, deny that unintended and second-order consequences are even a thing.

So, why are federal gun laws enforced in gun-friendly states?

I can think of several factors that contribute to this.

First, what does it mean for a state to be "gun-friendly"? I mean, most people on the pro-gun side support "reasonable" restrictions — where "reasonable" is often heavily influenced by status-quo bias (the conservative side of the leftward ratchet) — and the "2nd Amendment right to personal nukes" position is mostly just a few fringe (if vocal) libertarian types. And states are not politically homogeneous; even your most "gun-friendly" state is going to have plenty of people — particularly in the cities — who support increasing gun restrictions.

In particular, the people in state government — above all the lawyers and paper-pushing bureaucrats — you'd be counting on to push and coordinate this resistance to enforcement skew both urban and, especially, college-educated, which means they skew left and anti-gun. (Personnel is policy, and modern forms of government ensure urban leftist personnel.)

Second, way too many on the right are believers in "the rule of law." Like the sportsman who will not respond to a cheating opponent by cheating back because he has too much "respect for the game," they believe in the importance of procedure over outcome — following the rules and doing the right thing over getting better results. They are deontologists and virtue ethicists, not utilitarians. Fiat justitia ruat caelum. For what does it profit a man to gain the whole world and forfeit his soul? Better to suffer defeat, torture, and death while upholding your values than to attain a political victory by compromising them. (Because God will reward you for the former and damn you for the latter.)

Indeed, for any "the left is doing [x], why isn't the right doing [x] back?" question you can pose, you're sure to find someone on the right insisting that our steadfast, virtuous refusal to do [x] is the thing that separates us from the left, that to do [x] back would not just be sinking to the level of our enemies, it would be to become our enemy, and that anyone who would consider doing [x] is a leftist, no matter their other positions.

Third, quod licet Jovi, non licet bovi. The master's tools will never dismantle the master's house. What works for the left against the right will not necessarily work for the right against the left. Leftists can get away with doing things for left-wing causes that would see rightists punished severely if they tried to use them for right-wing ones. It's not hypocrisy, it's hierarchy.

I expect that pillarization eventually fixes this.

How? Pillarization requires a relatively neutral central power that allows multiple parallel institutions to keep existing.

but among a big chunk of society Christendom College will have the same effect

How so — assuming said school even remains open and accredited? One gives you a whole bunch of employment options and elite connections, the other is a useless piece of paper no employer will respect (for fear of getting sued, if nothing else).

a swing towards right wing curmudgeons

There was a lengthy comment by a guest in one of Neema Parvini's recent videos relevant to this point (at about 1:23:00); let me see if I can transcribe it:

The time scale isn’t to do with presidencies, and people think in time horizons that are far too narrow. Policy is not decided every four years. Policy is not made up every four years. There is permanent continuation policy of the American government and the American Empire that has taken place since 1945; and that much is very, very evident.

The people saying leaders matter — Trump was never the leader of America. Trump was never in power. That is the lesson you should take away from this. He didn’t know how many troops were in Syria; they simply lied to him and waited until he left office. Donald Trump was not in power because the levers of power lead nowhere; and emotionally attaching to who is in the presidency is part of the problem that we have, in that most Western policy in most places is on rails. I do not believe that the last ten years of the Tory government in Britain, to answer AA’s question, would have been particularly different under a Labour government. The Equality Act is something that was dreamt up by Labour and brought in by the Tories.

People talk about the Uniparty. It’s really a basic bitch Libertarian talking point, but they’re right. There really is no difference in terms of macro-policy, in terms of a decades-long time scale, whether you have a blue government or red government, because nations are not ruled by elected officials. Democracy does not function. Nations are ruled by the permanent bureaucracy, which does not change.

And to bring my final point into this, a political victory, and a real transfer of power is what took place in Ukraine in 2014, with a process known as “lustration,” where they banned every government official — including judges — from being part of the government for five to ten years. That is what it looks like when an American-directed vassal actually wants to change who is in office.

Who is in office does not change in Western democracies, and that is a fundamental misunderstanding of this point. We shouldn’t celebrate this, and we shouldn’t have an emotional stake in it, because who is in power is not changing. It is not changing in the Netherlands, it is not changing in Argentina, and it has not changed in Italy. Who is in power has not changed, and I think that is really why talking about the posters put over the permanent regimes and bureaucracies — which largely are vassals of global American power, let’s be fair — is relatively meaningless. It’s impossible to say whether you were better off or worse off under certain regimes, because the regime does. Not. Change.

Representation does matter, but those making the decisions are so ideologically committed that they’re willing to hurt their own bottom line in order to “do the right thing.” They’re so committed to their ideals that they’re willing to depress their own effectiveness by more than 30%.

Except it's not this straightforward, for two reasons. First, try proving that these decisions are actually hurting the bottom line. As the old quote attributed to various famous businessmen goes: "Half the money I spend on advertising is wasted; the trouble is I don't know which half." Advertising is anything but an exact science, and business outcomes are subject to many hard-to-disentangle factors. So how would one convince bosses or coworkers that this isn't the way to get more business?

Secondly, the interests and incentives of an institution are not the interests and incentives of the people within it. As I've seen it put elsewhere (particularly in discussions of the police, but also other fields), the first and highest job duty of any employee is not what it says on their job description, it's to make the boss happy. Of course, the usual way one does so is by performing the specific tasks for which one was hired, but those are ultimately just means to that end. If your boss insists on something being done a particular way, a particular way that's stupid and costs the business money, and instead you do it a different way that saves the business money, how do you suppose it will impact your continued employment if the boss finds out?

I've seen multiple people point out, with respect to the whole Bud Light thing, that while going with Mulvaney may not have been a good choice for the business as a whole, it was probably the best choice for the advertising people who originally recommended that course, in terms of their future employment opportunities elsewhere within the advertising industry, particularly as compared to the opposite strategy. "Nobody gets fired for buying IBM" and all that.

So nobody need actually go "I'm doing this no matter how much money it costs me!" They need only have uncertainty as to what will or won't cost the business more customers, combined with a solid understanding of what best suits their own personal, long-term job interests independent of a particular company's interests.

This relates to something I've seen referred to as "generational loss of hypocrisy." The first "generation" who put out some bit of hyperbolic, extreme rhetoric may not really believe it, nor live by it. They might quietly carve out unprincipled exceptions for themselves in practice, or acknowledge the performativity of it all in private among themselves.

But if, when "in public," they keep preaching the message consistently, for long enough, then at least some of the next "generation" who absorb it will end up taking it seriously.

There's someone I've interacted with a bit online who, since at least a few years ago, repeatedly raised the issue of the extreme nature and implications of much of academic "decolonization" discourse, especially the bits about being "unconcerned with settler futurity." The common rejoinder to these was always that nobody actually takes any of that stuff literally, or would ever actually follow through to the terrible-yet-logical conclusions implied…

…and yet, now we are seeing that, no, quite a few people do indeed take all that seriously.

A cult leader may have been a conman who made it all up as a grift, but if the group manages to persist long enough after his death, it will probably end up made of true believers.

Accountability literally anywhere in government would be great…

Agreed. But the problem is that, as identified by Max Weber, a hallmark of modernity is rationalization, and the resulting bureaucratization, which lends itself incredibly well to diffusion of responsibility and a "rule of no one" in which nobody in government is ever accountable.

So some sort of privately certified pre-nup?

The problem here — as with every "you can get a traditional marriage if you want one" argument — is enforcement. Indeed, I've seen people try to argue "a properly-written pre-nup is all you need to make your marriage as it would be before no-fault divorce" online — at which point everyone else points out Diosdado v. Diosdado, and they either fall silent or resort to 'well, the Diosdados must not have done it properly; it must be possible somehow' sputtering.

Generally, it looks to me like the sort of "parallel society" thing that only really works for Hasidim and Mennonites.

But, is it actually discriminatory?

What would you say to someone who answered “yes” to this question? Because it comes down to how you define “racial discrimination.” To discriminate is to make distinctions, to distinguish, “to separate from another by discerning differences.” The examples you give all discriminate by race at the individual level; they treat individuals differently according to their race. But what about discrimination at the group level? If a system produces statistically distinct outcomes between two racial groups, even if no individual is treated differently on account of their race, does it not still distinguish, separate, and discern differences between the two groups as groups? Does it not then discriminate between the racial groups?

If you read academic race theory works, you end up finding working definitions, usually implicit, in line with exactly this — “racial discrimination” being defined as the presence of “disparate impact” — regardless of whatever causes it. If blacks do have lower average m scores, then anything that sorts on m will discern that difference between the racial groups (unless you deliberately compensate for that racial difference), and thus discriminate between the racial groups. Thus, lower m scores are not an excuse for hiring fewer blacks; they’re the reason using m scores at all is racist.

the algebra test might have real predictive power

Are “having predictive power” and “racially discriminatory” mutually exclusive? They certainly aren’t so if you use the above definition. In fact, given your example, the more predictive power your test has with respect to m, the more racist it is.

Whether the test is actually discriminatory now comes down to whether “general intelligence” is real, and also that if it is real, can it be measured by an algebra test?

Again, no. Even if “general intelligence” is real, if it statistically differs between races as groups, then an algebra test that measures it — or any other test that does so — will be racist for exactly that reason.

will prevent the sort of racism Americans hate.

It doesn’t matter what sort of racism ordinary Americans hate, it only matters what sort of racism American elites hate. And looking at the people who are the credentialed “experts” in these matters, and how the people who interpret and enforce the relevant laws have been drawing upon their views for decades, they’re very much in line with the “disparate impact = racism” view, in which any system that produces statistical differences in outcomes is definitionally racist regardless of the reason why, no matter what real statistical differences it may be detecting. It doesn’t matter if “general intelligence is also closely correlated with job performance in pretty much every job”; that just means that a “colorblind” selection for job performance is racist.

but I do not know who is the puppet master.

I figure this is because there aren't any — there's no one person, or even small group, in charge, just a "prospiracy" of numerous left-wing bureaucratic "cogs" each following personal and social incentives to produce what looks like coordinated action. No "strings," just emergent behavior.

I've read/watched a couple of debates on this topic, and my thoughts on it are still rather inchoate (I often thought both sides made good points), but I'll try to lay a couple of them out here:

First, I tend to agree with a couple of people out there (I don't remember which) who argued that, while they agree with the substance, "Christian nationalism" makes for a poor label.

Second, as I've said on Tumblr, I think we need more people saying what Heidi Przybyla has said, as to the defining characteristic of "Christian nationalism":

The thing that unites them as Christian nationalists — not Christians, because Christian nationalists are very different — is that they believe that our rights as Americans and as all human beings do not come from any Earthly authority. They don't come from Congress, from the Supreme Court, they come from God.

Yes, we need more people saying that Americans' rights come from Congress and the Supreme Court (except when they overturn Roe or keep Trump on the ballot), and anyone who says otherwise (such as Thomas Jefferson, or the rest of the Founding Fathers, or pretty much any American statesman up until maybe half a century ago) is a dangerous, fanatical “Christian Nationalist” theocrat.

Third, this is in many ways an extension of some debates that have been going on longer than I've been alive, particularly the "you can't legislate morality" vs "all law is 'legislating morality'" debate; and also the question of what extent the voters in our democracy are allowed, by way of their elected representatives, to enact laws that reflect their collective moral values, specifically when those values are informed by their religion and you have the 1st Amendment. I'm reminded of times in the gay marriage debate, when proponents would argue that secular arguments against gay marriage are really just religious arguments if the person making them is Christian, to a level that almost approached 'separation of church and state means only atheists get to make laws.'

I'm reminded of Sullivan's The Impossibility of Religious Freedom: that true moral neutrality of the public square is impossible, and that there will always be some sort of dominant moral framework to the law which "overrides" any religious views to the contrary; that in the early days of America, this was a generally Protestant Christian framework, with freedom of religion primarily being a truce between denominations not to use the state to settle matters of doctrinal differences; and that, since that's no longer the case, we should instead "solve" the current tensions by moving toward the French model, rethinking "freedom of religion" to mean something more like "freedom of conscience," such that once you step outside your church/synagogue/mosque/temple, your moral views must become totally subordinate to a "secular" moral system.

In other words, that with the Constitution forbidding the establishment of theistic religion, the void may — and must — be filled by the unofficial establishment of a nontheistic religion-substitute.

Fourth, I don't remember where it was, but I recall one commenter on the issue of "Christian nationalism" arguing that the reason it's rising as a new boogeyman for the left is because many of them had thought, post-Obergefell, that they had indeed pretty much achieved Sullivan's solution and banished the last traces of "Christian morality" from the public square. But then Trump and Dobbs happened, and older people on the Christian right — who'd previously taken the existence of a sort of "Moral Majority" Judeo-Christian consensus in America for granted — realized just how much they'd lost in the public square and began shedding some of their passivity. That Christian-informed moral views might begin inching back into our politics, particularly with the influx of Hispanic immigrants, is thus a trend to be squashed.

In short, that this is simply a fight as to which set of values is going to have cultural dominance.

Does anybody else find The Atlantic's "If Trump Wins" issue hilarious? Just reading the titles and blurbs for some of those 24 pieces actually had me chuckling.