quiet_NaN

0 followers · follows 0 users
joined 2022 September 05 22:19:43 UTC
User ID: 731

No bio...


Is it worth saving a child's life if the consequence is that it makes a murderer harder to convict? I don't think most people would have to think too hard about that one.

Yes. Worst case, the murderer walks free, but that baby is saved.

The expected number of murders a murderer will commit after being acquitted due to a procedural flaw is much lower than one plus the expected number he will commit if he is not caught at this opportunity.

Depends a lot on the aims for the date, imho. If the main goal is to have sex with her, then being the perfect and tough guy who is of course not whining about the cold and will in fact lend miss little princess his jacket if she should feel cold could be the winning strategy.

On the other hand, if you are looking for a long term relationship and don't want to keep the princess/servant dynamic in the long term, you might want to show a bit of imperfection and vulnerability just to see how she will react. Will she ignore your plight? Will she be willing to go into a clothing store with you and let you pay for a scarf? Will she gift you a scarf? Will she do something completely unexpected, like conning a receptionist? Will she dump you on the spot?

I think that agency is somewhat orthogonal to morality.

Low agency people (like myself, TBH) who do what all the others around them are doing are unlikely to stand out much in either a good or a bad way. (They will still have a large overall impact, which could be good or bad, though.)

Some high-agency people found EA organizations and try very hard to make the world a better place. Others try to rip people off to fund their underage sex islands. Some build successful companies producing dental drills. There is a much larger variance of ethical impact per person, but one can hardly say that they are all bad.

On one hand, it's worrisome if a chick is so blasé about lying—if she so casually lies to a hotel receptionist in a low stakes situation, what if she's similarly down to lie to me in a higher stakes situation?

For a lost item, the lying part weighs much more heavily than the stealing part.

Now don't get me wrong, I lie. Not for profit, and generally not to people close to me, but certainly to authorities to make interactions go smoothly. If I get into a traffic stop and I am asked if I take any medication, then I could be truthful and give them a list of drugs, and hope that they will eventually figure out that these drugs do not impair the ability to drive a car. Instead, I will simply lie to their face that I do not take any medication. But I generally do not seek out situations where I will lie.

Happily lying to a receptionist for shit and giggles and because you want a scarf is a whole other ballpark. Dark triad territory.

I think that a lot depends on the goals of the guy on the date. If he is looking to get laid, or for an accomplice in a bank robbery, her showing risk-taking behavior and a disregard for conventional morality certainly increases her value.

If he is looking for a long term relationship as a well-regarded member of society, that would indeed be a red flag. On the second date, people (I think) still bother to try to hide their flaws. If it does not occur to her that "I have a very loose morality around the subjects of property and (more importantly) truth" might be worth hiding, one might start to wonder what kind of flaws she might actually be hiding on top of that. Perhaps she is in a marriage she has not told you about, or works as a con artist or pickpocket.

Children benefit from stay-at-home moms; I did, anyway.

I believe you, but I would still argue that there are opportunity costs. A one-year-old requires a caretaker 24x7, and presumably benefits from that caretaker being their mother. A ten-year-old requires much less adult supervision. Someone to cook dinner and make sure that they either show up for it or have called by then is certainly helpful, but 24x7 supervision would be actively harmful.

Now, if your model stay-at-home mom starts having kids at age 18 and then has a child every other year for as long as nature will allow, I will grant you that she will have her hands full taking care of her kids for a significant fraction of her working life. But most Western marriages are not like that. Instead, she will have two or three children, which will keep her occupied for a decade, but once her youngest child goes to school, she will have a lot of time on her hands for the better part of her working life.

I am not arguing that working 40h a week is the only valid model of how to spend your life, and if someone is happy playing video games, joining some club, having an OnlyFans career, or dedicating their life to gardening, who am I to tell them that they are wrong? Still, having opted not to earn a degree seems somewhat likely to limit your options for self-actualization, and earning a degree remotely at age 40 is likely going to be harder.

And if your values differ from those of the broader culture, daycare is likely to drag your kids at least part way to that culture.

I think that this is unavoidable in general. I would advise raising kids in a culture you are at least halfway comfortable with. Even with homeschooling and everything, you cannot completely shield your child from the local culture. Sure, there are some who try, like some Muslim families trying to raise their daughters according to Sharia law in the middle of Western cities, but I think that their success is mixed at best.

Personally, I would not fret overly much about it. I was raised (mildly) Roman Catholic, and it did not stop me from seeing the light of Igtheism at 15 or so. While I am sure that there are some horror stories about some overachieving kindergarten teacher telling white kids to hate themselves, I think the median version of the SJ creed taught to kids is much less harmful. Like Santa Claus, blank-slatism is the sort of lie which is unlikely to harm the development of a kid much. They can still learn about the Ashkenazi intelligence hypothesis and HBD later.

There are (roughly) two kinds of religiously motivated murders.

One is the sacrifice, where you want to send your god a juicy piece of meat or some virgin pussy or kid as a bribe or tribute. Generally, the sacrifice is a means to an end; the process is really a transaction between the one sponsoring the sacrifice and the god. Sure, you might get extra virtue points for sacrificing your favorite daughter, but if she happens to have her period on the set date, you can just sacrifice another daughter. Generally, you want your sacrifices to be pure and hale. Sacrificing a lame goat or a disobedient child might be seen as an insult, after all.

The other type of murder is a punishment for a religious transgression, real or imagined, such as witchcraft, blasphemy, heresy. This is primarily a matter between the accused and the community, just like a secular crime.

This is well illustrated by the concept of the scapegoat. You start out with two goats. One stays pure and is sacrificed to god, the other gets the sins transferred to it and is then abandoned in the desert, for god to punish it as he wants. Full of sins, it would not make a good sacrifice for god, after all.

While punishments are widespread, pure sacrifices of humans are very much optional for religions. In the religions of the book it only appears (to my knowledge) in YHWH's fucked up little mind games he plays with Abraham, with the sacrifice being stopped. The Romans -- themselves not shy about infanticide -- likewise stamped it out where they could.

Of course, there are also mixed forms. For example, the Christian tradition of burning someone at the stake for religious transgressions is very much reminiscent of burnt sacrifices by earlier religions. I think that sometimes, it is explicitly stated that the purpose of this form of death penalty is to purify the victim so that they can get into heaven despite their crime. This is more seen as a 'favor' to the victim than as a favor to god, but parsing it as "souls for the soul lord!" does not seem entirely wrong.

The idea that it was Pilate's job to follow "due process" and that he was "derelict in his duty" is delightfully ahistorical. The laws which Pilate followed were the laws of Rome. Roman law was not very concerned with the rights of non-citizens; Rome's brothels and salt mines were full of slaves. And Jesus was very much not a Roman citizen. As a military governor, the job of Pontius Pilate, as far as the Senate was concerned, was to keep the peace and facilitate the extraction of wealth. How he did this was totally up to him. If one day he woke up and decided to drown a tenth of the infants in Jerusalem in boiling pig fat, Rome would only object insofar as it led to instability.

The fact that he even personally bothered to preside over the case is more a concession to the political touchiness of the subject than any due process. Quite frankly, the local elites were really pissed at Jesus because he had interfered with their religion by causing a ruckus with the money-changers (which ultimately threatened their business model), and Pilate decided that it would be in Rome's best interest to placate them by putting Jesus to death. Given that the followers of Jesus did not rise up in rebellion, it is hard to argue that he was wrong in his decision. (A Gibbonite would blame the fall of Rome on Christianity, but Pilate could not possibly have foreseen that.)

Quite frankly, by messing with religious institutions, Jesus was kind of asking for it, either intentionally or in a FAFO way. Most places and times did not have strong freedom of speech norms: if he had tried his little stunt in front of the temple of Athena or Saturn or Odin, a medieval cathedral, early Boston, a mosque in contemporary Tehran or Riyadh, or some Buddhist temple in Myanmar, he would have fared little better. Sure, in today's Western world, he might have gotten away with just a night in a prison cell and a fine (or no penalty at all, had he opted to practice his free speech by just demonstrating with a sign reading "God hates money-changers"), but of all the atrocities committed in the name of Rome, the killing of Jesus likely does not even make the top million.

Implying that this vastly destructive war that killed 60 million people could or should have been handled differently or, God forbid, avoided is basically heresy.

I do not think that saying "Hitler should not have attacked Poland" is very controversial, so you are likely not talking about what the Nazis could have done differently. In fact, the Western Allies tried to avoid the war by appeasing Hitler, because nobody was keen on repeating WW1. Now, you can argue that the UK and France should just have sat this one out, watching from the sidelines as Hitler took Western Poland and then invaded the USSR. Sure, that would have avoided the Blitz and the invasion of France -- or, more accurately, postponed them until Hitler was done with the East -- but the immensely destructive war on the Eastern Front would still have happened. What is your recipe for avoiding that one? The USSR retreats to Siberia and lets Hitler take Moscow?

Nor is it very controversial that Stalin was not a nice person and it would have been better if he had behaved differently.

In the particulars, the behavior of Western allies is also substantially criticized. For example, ACOUP on strategic air power

I must admit I do not generally extend this charity to fellows like Arthur Harris or Curtis LeMay who were fairly explicit that their goal was to simply kill as many civilians as possible in order to end the war.

Or take the Internment of Japanese Americans

In 1983, the commission's report, Personal Justice Denied, found little evidence of Japanese disloyalty and concluded that internment had been the product of racism. It recommended that the government pay reparations to the detainees. In 1988, President Ronald Reagan signed the Civil Liberties Act of 1988, which officially apologized and authorized a payment of $20,000 (equivalent to $53,000 in 2024) to each former detainee who was still alive when the act was passed.

I think that the job of housewife is on its way out, and has been on its way out for the last century.

Back in 1800, with no washing machines or fridges, it was a full-time job to take care of the needs of a family (especially as family size was large due to lack of contraceptives). A man (or anyone) who worked full time simply did not have the time to take care of washing his clothes and cooking his meals.

Luckily, we made these chores much less time-consuming and freed women to do more useful work. And they do. There are mothers who are teachers, physicians, clerks and a myriad of other professions.

Naturally, the markets (especially housing) have reacted to this reality (plus a ton of other factors), and the age where you could raise a family with a single income from not-highly-specialized labor is over.

As you point out, social changes have made the strategy of just marrying a man and relying on him to provide for you high-risk, because if he is rich enough to pay for you to stay at home and watch the kids, he is likely also rich enough to replace you with a younger, more attractive woman in a decade or two.

I think that a big point of both men and women going to college is the signaling value both towards employers and towards potential mates. Roughly, the same qualities which are valued in an employee (somewhat smart, willing to submit to an institutional system, ability to achieve long-term goals, etc) are also good qualities in a partner. A degree, especially in a strongly regulated field like law or medicine, will significantly update your estimate on the earning potential of a person. Then there is education as a mark of social class. A man from a family of academics will probably not marry someone who dropped out of high school. (Sure, there will always be some men who prefer to marry 18yo village girls, but "I will just wait for some Trump-like man to marry me" will not work for the vast majority of them.)

I agree that there are probably bullshit degrees pursued by women who really want to graduate college with an MRS degree, but I think that the answer is not to cut down on women in college, but to push degrees which can actually earn money.

The pattern "Earn a degree, get pregnant at 30 and then become a stay-at-home mum" is obviously not very efficient. But I don't think we will go back to "get pregnant at 20 and then become a housewife". What society should aim for is "Earn a degree, get pregnant at 30 (if you want), re-enter the workforce a few years later (e.g. part-time)".

I think that sourcing the basics, e.g. a breadboard, wired resistors, capacitors, LEDs, jumper wires, some opamps, is not that hard.

Amazon or (in Germany) Conrad have you covered there (if you don't mind overpaying compared to what the parts would cost in bulk).

If you increase your budget to $200, then different people will want very different things: matrix LCDs, TTL logic chips, myriads of sensors, servos. Some will want passive SMD components (with different preferences for size).

And at that stage, they probably also want components which are not sold by Conrad, which is when things get painful.

There are, of course, companies which carry zillions of electronic components, e.g. Farnell, Mouser, RS, Digikey. Their stock is well curated, and you can filter based on dozens of criteria until you end up with what fits your needs. In fact, having used these websites, I have come to despise the shopping experience on Amazon, where little in the way of curation happens and accessories for X regularly appear in the category X.

Alas, these electronics vendors do not typically sell to hobbyists. Presumably, cutting five chips from a reel and packing them for sale is not in itself very profitable, but simply a prerequisite to sell a reel of your chips to companies, eventually. Unlike corporations, private persons rarely scale up their projects to a scale where serious money gets spent, and complying with the consumer protection regulations is just not worth it.

So you sometimes find yourself in the situation where you know that four different companies carry the chip you want, but none of them want to sell to you. (These days, it might be possible that you can get it from China, if you don't mind the wait, though.)

I think that "write an effortpost on substack/LW/reddit/tumblr/..." might actually be a fun essay assignment (even if it would be hard to grade if the teacher lacks subject knowledge).

I think that one problem with essay assignments is that the student is typically aware that it is extremely well-trodden ground. Generations of students before them have written about theme X in book Y. The chance that they will make a point which will cause the teacher -- the one person who will (optimistically) read their essay (unless they also leave the grading to an LLM) -- to actually wake up and go "wait a minute, this is new" is very slim.

"Everything has been said before, but not yet by everyone" and all that.

It is like tasking someone to simulate having sexual intercourse with a sex doll and then being surprised if the person is not showing a lot of effort.

To be blunt, college hasn’t been about education for a very long time, and it strikes me as hilarious that anyone who attended one writes these sorts of handwringing articles bemoaning the decline of education in college. 99% of students who were ever in university (perhaps with the exception of the leisure class) never went to college seeking education for the sake of education. For most of us, it’s about getting job skills, getting a diploma, padding a resume, etc. If learning happens on the side, fine, but most people are looking at college as a diploma that will hopefully unlock the gates to a good paying job.

While I can only speak for myself, I studied a STEM subject because I was genuinely interested in it. Sure, the fact that STEM people usually find well-compensated work was a consideration, but not the major one. I certainly did not research which subject would have the highest expected salary. I also embarked on a lengthy PhD for rather meager pay, but I was fine with that.

Some of the stuff I learned as a student I get to use in my job, while some other stuff I sadly/luckily do not have reason to use. And as usual, a lot of the relevant skills I picked up outside class.

I am also somewhat privileged in that my parents paid for my education (i.e. the cost of living in a small room for 5+ years -- universities themselves are almost free in Germany). But I never felt I was attending just for the signaling value of the diploma.

Have private businesses operate the dorms and cafeterias (plural, they need to compete) and let students live off-campus the moment they want.

This is how we generally do things in Germany, to a large degree.

Okay, almost. The "Studentenwerk" (a government-sponsored citywide institution) typically runs a canteen on campus and also provides low-end housing significantly below market value (typically off-campus, though), but they are legally distinct from the university, and students are not required to interact with them in any way (besides paying a minimal fee, perhaps). Plenty of students rent private rooms or flats and prefer private food vendors.

No electronics other than sometimes a scientific calculator. No graphing calculators, since they can be programmed with the relevant formulae, including fake screens that claim all memory has just been wiped.

Hot take: calculators are for experimental physics exams. In mathematics, they should not be required. If the exam is about multiplying five digit integers, then a calculator would defeat the purpose of the task. If the exam is about integration, then you can easily make sure that there will not be a lot of five digit integers to multiply.

Granted, some math classes are mostly to enable students to use calculators for their science classes. So sure, if the point is to learn to calculate logarithms with a calculator, you require a calculator -- no point in having students learn to use a slide rule. Likewise, for basic probability theory, a calculator will make a lot more practical applications accessible.

For my last two years of high school, Texas Instruments had somehow convinced my school board that their graphing calculators were great and educational. Our final tests featured tasks such as "determine the approximate root of this function with the graphing calculator". We did not cover a lot of math in these two years. I like to hope that graphing calculators are not a thing any more (a smartphone can do anything such a calculator can do, but much better), but if they still are, I would implore any school board deluded enough to think they help teach math to at least make it a priority that the devices they mandate come with a decent programming language (LISP, Python, Haskell, Perl, whatever) so that kids do not have to waste two years programming in TI BASIC instead of paying attention to class.

wrt strict liability, there is a whole 60-page lawcomic arc about it.

I am mostly on board with Nathan there. Strict liability for regulatory offenses seems bad, and relying on luck / selective enforcement / prosecutorial discretion to keep people who collect a few feathers out of jail seems bad.

My main disagreement with that arc is DUI. For one thing, the offense is not hidden in some law about fishery regulations that nobody has read; you get told about it when you train for your driving license. For another, when driving a car, we actually expect people to pay close attention to stay within the regulations they were trained on. "Yes, I should have stopped at that left-yields-to-right intersection, but you see, I just assumed that there was no car coming from the right and did not look, so I clearly lack mens rea" or "Officer, my speedometer is broken. I thought I was within the speed limit" will not fly, so why should "I know that DUI is a crime, and I know that I had a few drinks, but I was under the impression that I was slightly under the BAC limit"?

A similar objection could be made to the argument against statutory rape. Everyone knows that people presenting as young adults come in two flavors, "jailbait" and "legal". Anyone who has sex with such a person without verifying their category is taking a calculated risk. There might be other arguments against that law, but the fact that the person committing the offense could not possibly have known rings hollow to me.

If (1), (2), and (3) are true, then something like UBI can be seriously considered and we can all live in Fully Automated Luxury Gay Space Communism.

This is similar to a point made on LW a few weeks ago as a critique of the national security framing of ASI.

Almost none of the people who are likely to build ASI are evil on a level where it would matter in the face of a technological singularity. At the end of the day, I don't care much how many stars are on the flags drawn on the space ships which will spread humanity through the galaxy. Let Altman become the God-Emperor of Mankind, for all I care. Even if we end up with some sick fuck in charge who insists on exclusively dining on the flesh of tortured humans, that will not really matter (unless he institutes a general policy of torturing humans).

Who is the first to build AI matters only if

(1) AI alignment is possible but difficult, or

(2) AIs will fizzle out before we get to post-scarcity.

Of course, both of these are plausible, so practically we should be concerned with who builds AI.

I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.

While this is technically correct (the best kind of correct!) -- and @TheAntipopulist's post did imply exponential growth in compute (i.e. linear on a log plot) forever, while filling your light cone with classical computers only scales with t^3 (and building a galaxy-spanning quantum computer with t^3 qubits will have other drawbacks and probably also not offer exponentially increasing computing power) -- I do not think this is very practically relevant.

Imagine Europe ca. 1700. A big meteor has hit the Earth and temperatures are dropping. Suddenly a Frenchman called Guillaume Amontons publishes an article "Good news everyone! Temperatures will not continue to decrease at the current rate forever!" -- sure, he is technically correct, but as far as the question of the Earth sustaining human life is concerned, it is utterly irrelevant.

A typical human has a 2lb brain and it uses about 1/4 of TDEE for the whole human, which can be estimated at 500 kcal or 2092 kilojoules or about 0.6 KWh. If we’re scaling linearly, if you have a billion human intelligences the energy requirement is about 600 million KWh.

I am not sure that anchoring on humans for what can be achieved regarding energy efficiency is wise. As another analogy, a human can move way faster under his own power than his evolutionary design specs would suggest if you give him a bike and a good road.

Evolution worked with what it had, and neither bikes nor chip fabs were a thing in the ancestral environment.

Given that Landauer's principle was recently featured on SMBC, we can use it to estimate how much useful computation we could do in the solar system.

The Sun has a radius of about 7e8 m and a surface temperature of 5700K. We will build a slightly larger sphere around it, with a radius of 1AU (1.5e11 m). Per Stefan–Boltzmann, the radiation power emitted from a black body is proportional to its area times its temperature to the fourth power, so if we increase the radius by a factor of 214, we should reduce the temperature by a factor of sqrt(214), which is about 15, to dissipate the same energy. (This gets us 390K, which is notably warmer than the 300K we have on Earth, but plausible enough.)
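The radius-to-temperature scaling above can be sanity-checked in a few lines; this sketch just plugs in the rounded solar figures from the text.

```python
# Equilibrium temperature of a 1 AU shell re-radiating the Sun's output.
# Uses the rounded figures from the text; a sanity check, not a precise model.
R_SUN = 7e8        # m, solar radius
T_SUN = 5700.0     # K, solar surface temperature
R_SHELL = 1.5e11   # m, 1 AU

# Stefan-Boltzmann: radiated power ~ area * T^4 ~ r^2 * T^4.
# Equal power through both surfaces gives T ~ r^(-1/2).
ratio = R_SHELL / R_SUN          # ~214
T_shell = T_SUN / ratio ** 0.5   # ~390 K

print(round(ratio), round(T_shell))  # → 214 389
```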

At that temperature, erasing a bit will cost us kT·ln(2), about 4e-21 joule. The luminosity of the Sun is 3.8e26 W. Let us assume that we can only use 1e26 W of that, a bit more than a quarter; the rest is not in our favorite color or is required to power blinkenlights or whatever.

This leaves us with some 2.7e46 bit-erasing operations per second. If a floating point operation erases 200 bits, that is about 1e44 flop/s.

Let us put this in perspective. If Facebook used 4e25 flop to train Llama-3.1-405B and required 100 days to do so, their datacenter sustained about 5e18 flop/s. So there is a rough factor of 1e25, not far from Avogadro's number, between what Facebook is using and what the inner solar system offers.

Building a sphere of 1AU radius seems like a lot of work, so we can also consider what happens when we stay within our gravity well. From the perspective of the Sun, Earth covers perhaps 4.4e-10 of the sky. Let us generously say we can only harvest 1e-10 of the Sun's light output on Earth. This still means that Zuck and Altman can increase their computation power by some 15 orders of magnitude before they need space travel, as far as fundamental physical limitations are concerned.
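For anyone who wants to redo the arithmetic, here is the whole back-of-envelope in one place. All inputs are the assumptions from the text (kT·ln 2 per erased bit at 390 K, 200 bit erasures per flop, 1e26 W usable at the sphere, 1e-10 of the solar output harvestable on Earth, 4e25 flop over 100 days for Llama-3.1-405B), not measured values.

```python
import math

# Landauer budget for the inner solar system, back-of-envelope.
K_B = 1.380649e-23                 # J/K, Boltzmann constant
T = 390.0                          # K, shell temperature from above
E_BIT = K_B * T * math.log(2)      # ~3.7e-21 J, Landauer cost per erased bit

P_USED = 1e26                      # W, usable fraction of solar luminosity
flops_sphere = P_USED / E_BIT / 200    # ~1e44 flop/s at 200 erasures/flop

# Llama-3.1-405B training: 4e25 flop over ~100 days.
flops_meta = 4e25 / (100 * 86400)      # ~5e18 flop/s

# Earth-bound version: harvest 1e-10 of the Sun's output.
flops_earth = flops_sphere * 1e-10     # ~1e34 flop/s

print(f"sphere {flops_sphere:.0e}, datacenter {flops_meta:.0e}, "
      f"earth {flops_earth:.0e}")
```

The ratios are what matter: the full sphere sits roughly 25 orders of magnitude above the datacenter figure, and even the Earth-bound version sits about 15 orders above it.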

TL;DR: just because hard fundamental limitations exist for something, it does not mean that they are relevant.

And yes, I know that "AI" is still a misnomer, I understand that LLMs are just token predictors, and I think people who believe that any neural net is close to actually "thinking" or becoming self-aware -- or that, really, what are we but pattern-matching echolalic organisms? -- are drinking kool-aid

I am kind of in the middle ground between "they are just stupid stochastic parrots, they don't think!" and "obviously they will develop super-intelligent subagents if we just throw more neurons at the problem!", while I suspect that you are a bit more likely to agree with the former.

The latter case is easy to make. If you train a sufficiently large LLM on chess games written in some notation, the most efficient way to predict the next token will be for it to develop pathways which learn how to play chess -- and at least for chess, this seems to mostly have happened. Sure, a specialized NN whose design takes the game into account will likely crush an LLM with a similar amount of neurons, but nevertheless this shows that if your data contains a lot of chess games, the humble task of next-token-prediction will lead to you learning to play chess (if you can spare the neurons).

By analogy, if you are trained on a lot of written material which took intelligence to produce, it could be that the humble next-token-predictor will also acquire intelligence to better fulfill its task.

I will be the first to admit that LLMs are horribly inefficient compared to humans. I mean, an LLM trained on humanity's text output can kinda imitate Shakespeare, and that is impressive in itself. But if we compare that to good old Bill, the latter seems much more impressive. The amount of verbal input he was trained on is the tiniest fraction of what an LLM is trained on, and Shakespeare was very much not in his training set at all! Sure, he also got to experience human emotions first-hand, but having thousands of human life-years' worth of descriptions of human emotions should be adequate compensation for the LLM. (Also, Bill's output was much more original than what an LLM will deliver if prompted to imitate him.)

Of course, just because we have seen an LLM train itself to grok chess, that does not mean that the same mechanism will also work in principle and in practice to make it solve arbitrary tasks which require intelligence, just like we can not conclude from the fact that a helium balloon can lift a post card that it is either in principle or in practice possible with enough balloons to lift a ship of the line and land it on the Moon. (As we have the theory, we can firmly state that lifting is possible, but going to the Moon is not. Alas, for neural networks, we lack a similar theory.)

More on topic, I think that before we see LLMs writing novels on their own, LLMs might become co-authors. Present-day LLMs can already do some copy-editing work. Bouncing world-building ideas off an LLM, asking "what could be the consequences of some technology $X for a society", might actually work. Or someone who is skilled with their world-building and plotlines but not particularly great at finding the right words might ask an LLM to come up with five alternatives for an adjective (with connotations and implications) and then pick one. This will still not create great prose, but not everyone reads books for their mastery of words.

I would not call the Jews "universally hated". For example, I don't hate Jews. More generally, while antisemitism has a long history in Christian Europe, and pogroms happened in many places over many centuries, I think "universal hatred" is a bit of an over-simplification. For one thing, Judaism was (sometimes) tolerated in a way which other religions (besides Christianity) were not tolerated. Most Christian rulers would not have suffered a temple to the Norse gods within their realm, for example. I also think that Muslims generally displayed even less of a deadly hatred against Jews pre-1900; there was the 1066 Granada pogrom, but Wikipedia lists few other pogroms.

(Also, there is an argument to be made that Hitler's biggest victim group was gentile Slavs, but I concede that the one group he was really fanatical about genociding was certainly the Jews.)

And while you can describe the European theater of WW2 as an "intra-white" thing, I would argue that this is simply because Germany did not have any borders with non-white countries. Nazi ideology had a ranking of "races", with the "Nordic race" being the most noble and the Slavs being the least noble white people (apart from certain minorities), but it certainly considered Blacks inferior even to Slavs.

Neither the Western Allies nor the USSR had racism as a major part of their doctrine, so framing WW2 as the proud racists vs. the people who rejected racism is not exactly wrong.

Genghis Khan is some 800 years ago, the Nazis are 80 years ago. Perhaps in a few hundred years, there will be Nazi-themed restaurant chains called "Adolf Hitler Wirtshaus" which will serve vaguely German dishes which will be invented near the end of this century.

This matches somewhat with the outgroup/fargroup distinction. Genghis Khan is very much in the fargroup outside Asia. You can safely dress up as him for Halloween and nobody will bat an eye, he is playing in the same league as Darth Vader or Sauron.

I think that a lot of factors play a role in determining when a tragedy or atrocity loses its gravity. Raw numbers are one thing: a single death is more easily shrugged off than a million (but I would argue that this scales only logarithmically, because humans are scope insensitive). Accidents are forgotten more quickly than atrocities.

Sometimes, a traumatic event becomes almost permanently imprinted in a culture. As far as Roman occupiers go, Pontius Pilate is hardly one of the worst. Using the death penalty against some guy who has offended local religious sentiments as a favor to the local elites is just how the sausage gets made, hardly a reign of terror. But because the killing of that guy spawned one of the most successful memes of all time, dressing up as him for Halloween is probably a bad idea.

For the Federal Republic of Germany, Nazism plays a central role in the founding mythology. Where before Germany had been a Great Power run on Prussian militarism and patriotic fervor, with a tenuous relationship to democracy, it basically reinvented itself after WW2, rejecting its ambitions to rule the world and fully embracing democracy. (While keeping all the Nazis around, but that problem solved itself over time.)

Of course, there is another state in whose founding Hitler inadvertently played a major role, which is modern Israel. As long as these two states are around, they will remember the Nazis as the Big Bad.

That twitter post mostly links to a longer article at the National Catholic Reporter.

It starts by quoting JD Vance:

"There is a Christian concept that you love your family and then you love your neighbor, and then you love your community, and then you love your fellow citizens, and then after that, prioritize the rest of the world. A lot of the far left has completely inverted that."

Now, JD Vance is a Catholic, and he is making a claim about a "Christian concept" which is vaguely reminiscent of Subsidiarity.

Now, I am not a fan of non-political organizations meddling in day-to-day political affairs, be it the American Mathematical Society or the RCC. But that does not mean that these organizations should keep quiet when they feel that their teachings are misrepresented. If Trump claims that 15 is a prime number, then I will not consider it undue meddling if the AMS releases a press statement saying that he is wrong. If JD Vance had called it a common-sense, Protestant, Jewish, or Hindu concept, then I would consider the NCR reaction undue, like most cases of "my religion says what you do is bad". But a bishop disagreeing with a Catholic who explicitly invoked Christianity does not seem undue to me, never mind "dunking".

The desired outcome for the donors is that leftists see that trying to cancel people as racists no longer destroys them when the victim instead gets lots of money, stop doing so, and therefore no one gets into those situations anymore (i.e. no viral shitstorm happens when people say "nigger"). Similar to how, although it strains the comparison, the West is hoping that Putin realizes that invading another country is not worth it because of the support it will be getting.

I do not think that this will work. The left can cause shitstorms a lot easier than the right can cough up money.

And even if that was not true, the non-exploitable equilibrium would be if the left stopped trying to cancel people because they realized that the minute they focused their anger on someone, they would be showered in money by their opponents. I am not holding my breath for that. It would require playing politics on level two, and most people play level one. I mean, the single most important asset Trump had for winning the primaries was the left-leaning press, which loved to hate him. "You won't believe what the horrible racist has done now" etc. They never stopped to consider that the median R primary voter would be rather unsympathetic to them, and might consider "Trump really riles up the liberals" a point in his favor.

I find the lack of info a bit strange. Presumably, he was in the top ten candidates, so I would have expected newspapers to have a full dossier on him. Just because the Catholics might not care much about his politics at this moment does not mean that the rest of the world should adopt the same standard.

The Guardian mostly has his biographic data. The BBC has a bit of commentary:

As 80% of the cardinals who took part in the conclave were appointed by Francis, it is not all that surprising that someone like Prevost was elected, even if he was only recently appointed.

He will be seen as a figure who favoured the continuity of Francis' reforms in the Catholic Church.

Prevost is believed to have shared Francis' views on migrants, the poor and the environment.

Although he is an American, and will be fully aware of the divisions within the Catholic Church, his Latin American background also represents continuity after a Pope who came from Argentina.

During his time as archbishop in Peru he has not escaped the sexual abuse scandals that have clouded the Church, however his diocese fervently denied he had been involved in any attempted cover-up.

So he might be American, but probably is not MAGA-adjacent.

Seldom have I heard a story where I had so little sympathy for any side. It makes the characters in the alligator river story seem like paragons of morality by comparison.

Like, if a kid tries to steal from your son's bag, perhaps don't call him a racial slur? Unless he is like 14, even calling him a "little shit" would probably be in bad taste.

And if you observe some Karen calling a kid a racial slur after he has just tried to steal from her kid's bag, perhaps leave it at a "shut the fuck up, you racist bitch", and don't escalate to social media?

And if you repeat racial slurs while someone is pointing a camera at you, and you are not already openly a KKK member, nor Donald Trump, don't be surprised if the shitstorm hits you.