
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


If you do figure it out, I expect at least a LW post or two about it 🙏

> the likely result of that is that you go "haha nice" and adjust your strategy such that you don't end up in that trade cycle again

I agree that this is how it'll likely work out (and it does in smart humans), but isn't that tantamount to enforcing internal consistency, just under adversarial stimulus? My crux is that becoming more consistent is a good thing, not that I have a strong opinion on how hard you should optimize for it. And after enough cycles of this process, shouldn't you be closer to having the VNM theorem apply to you in a more meaningful fashion?

(BTW, my phrasing was unclear: I was asking whether the VNM theorem constrains the future behavior of an agent via the same utility function you derived from its past behavior, so your response has me a little confused as to whether you agree or disagree with that!)
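The trade cycle being discussed can be made concrete with a toy money pump. Everything here (the item names, the 0.01 fee) is illustrative rather than anything from the conversation: an agent with cyclic preferences A > B > C > A happily pays a small fee at each "upgrade" and ends up holding what it started with, only poorer.

```python
# Toy money pump for an agent with cyclic preferences A > B > C > A.
# The 0.01 fee and the three-item cycle are made-up illustrative numbers.

def run_money_pump(prefers, holding, fee=0.01, steps=3):
    """Trade the agent around its preference cycle, charging `fee` per swap."""
    paid = 0.0
    for _ in range(steps):
        holding = prefers[holding]  # the agent gladly "upgrades"
        paid += fee
    return holding, paid

cycle = {"A": "B", "B": "C", "C": "A"}
final, total = run_money_pump(cycle, "A")
assert final == "A"              # back where it started...
assert abs(total - 0.03) < 1e-9  # ...but three fees poorer
```

Each lap around the cycle costs the agent another fee, which is exactly the adversarial pressure the exchange above suggests nudges an agent towards internal consistency.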

Speaking of which, I have just bugged him about how to throw money at NAO, because I believe they should receive lots of funding and I have just realized that I have done exactly nothing about that belief.

Ah, would that I had enough money to throw at a housefly and hope to stun it, but at least you're putting yours to noble ends haha.

You'll be happy to know that I did in fact throw some fairly substantial amounts of money at jefftk and friends for their wastewater surveillance / sequencing / anomaly detection project. Significantly prompted by us having this conversation.

I'm genuinely heartened to hear that! I hope you don't mind if I use it as an excuse to claim a second-hand Effective Altruist card haha. (j/k, but presumably the ones doing admin or fundraising work get away with it.)

To discuss my current stance on bioterrorism x-risk in slightly more detail: such monitoring will hamper lone crazies and small groups the most. Of course, they already have a hard time of it, at least in the West. Just about the hardest part of making a truly extinction-level pathogen is not dying to Alpha 0.3 halfway through the process; look at all the issues even BSL-3+ labs have with leaks. Some degree of surveillance, even if hardly the most rigorous, has been implemented: I believe most reputable purveyors of on-demand DNA/RNA/protein synthesis filter orders for known pathogens/GOF/bioweapons, though I suspect that filtering is neither robust nor insurmountable for a determined adversary willing to get their hands dirty.

This doesn't solve the issue of more well-resourced actors, especially in jurisdictions with more laissez-faire policies, but nation-states are generally not omnicidal (citation hopefully not needed). Wastewater surveillance will, at the very least, tell us something is up, and might even be good enough to save civilization if a pathogen isn't ridiculously good.

I'm glad you made the donation, and I sincerely hope they manage to scale cheaply and ubiquitously enough that we get some hint of what's coming that's more robust than doctors near wet-markets wondering about a really weird flu.

> If you do figure it out, I expect at least a LW post or two about it 🙏

If I do, I will definitely make an LW post or two about it. That may or may not happen; I have quite a lot going on in the next two months (and then more after that, because a lot of the stuff going on is "baby in 2 months").

> I agree that this is how it'll likely work out (and it does in smart humans), but isn't that tantamount to enforcing internal consistency, just under adversarial stimulus?

I think the disagreement is more about how often the adversarial stimulus comes about. In most cases, I expect it's not worth it to generate such a stimulus (i.e. it costs the adversary more than 0.01 A to find that trade cycle, so if they can only expect to run the cycle once, it's not worth it). So such an agent would trend towards an internally consistent equilibrium given a bunch of stimuli like that, but probably not very quickly, and the returns on becoming more coherent likely diminish very steeply: the cost of incoherence decreases as its magnitude decreases, and the frequency of exploitation should decrease as the payoff for exploitation decreases, so the rate of convergence should slow more than linearly over time.
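The dynamic being claimed here, that each successful exploit patches the exploited cycle while adversaries only bother while the payoff beats their search cost, can be sketched numerically. The specific numbers (starting incoherence of 1.0, a search cost of 0.01 A, halving per exploit) are made up for illustration, not taken from the thread:

```python
# Sketch of exploitation-driven convergence: adversaries exploit only while
# the agent's incoherence exceeds their search cost, and each exploit
# shrinks the incoherence. All numbers are illustrative assumptions.

def rounds_until_unexploitable(incoherence, search_cost, shrink=0.5):
    """Count exploitation rounds until exploiting stops being worth it."""
    rounds = 0
    while incoherence > search_cost:  # exploiting only pays above this threshold
        incoherence *= shrink         # the agent patches the exploited cycle
        rounds += 1
    return rounds, incoherence

rounds, residual = rounds_until_unexploitable(1.0, 0.01)
assert residual <= 0.01  # convergence stalls below the search cost, not at zero
```

The process halts with a residual incoherence just under the adversary's search cost rather than at zero, which matches the point that the returns on becoming more coherent diminish steeply and convergence slows over time.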

> Ah, would that I had enough money to throw at a housefly and hope to stun it, but at least you're putting yours to noble ends haha.

That'll change with the officially becoming a doctor thing, I expect. And also becoming a doctor helps rather more directly with the whole pandemic preparedness thing.

Congrats on becoming a dad soon! May this all be a thing of the past by the time they're old enough to understand, and may they know nothing but happiness and unbounded opportunity.

> That'll change with the officially becoming a doctor thing, I expect. And also becoming a doctor helps rather more directly with the whole pandemic preparedness thing.

"But Pagliacci, I am a doctor!" I was joking about how awful the salary is here in India (it's marginally less awful in the UK; neither holds a candle to the US). Ah, I'm hardly the worst off in the world; I'll manage!

I posted an initial call for hypotheses to LW and got a couple of good ones, including "the SL policy network is acting as a crude estimator of the relative expected utility of exploring this region of the game tree", which strikes me as both plausible and falsifiable.

I'll keep you posted.

Oh, I see you attracted Gwern, a sign of a good question if nothing else; he gets nerd-sniped into making QCs on the regular.

My contribution to the cause extends to super-upvoting you; somehow my stupidity has earned me enough karma to count for something.

Do let me know if there's a consensus or you come to a firm conclusion!