Culture War Roundup for the week of August 21, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I am not arguing that you can't get a single standard deviation of gain using gene editing, and I am especially not arguing that you can't get there eventually using an iterative approach. I am arguing that you will get less than +1SD of gain (and, in fact, probably a reduction) in intelligence if you follow the specific procedure of

  1. Catalogue all of the different genes where different alleles are correlated with differences in the observed phenotypic trait of interest (in this case intelligence)
  2. Determine the "best" allele for every single gene, and edit the genome accordingly at all of those places.
  3. Hopefully have a 300-IQ super baby.

The specific thing I want to call out is that each of the alleles you've measured to be the "better" variant is the better variant in the context of the environment the measurements occurred in. If you change a bunch of them at once, though, you're going to end up in a completely different region of genome space, where the genotype/phenotype correlations you found in the normal human distribution probably won't hold.
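
This can be made concrete with a toy model. Everything below is my own construction for illustration -- the fitness function, effect sizes, and the threshold of 25 are invented, not drawn from real genetics. Each of 40 sites carries a 0/1 allele; within the observed population, allele 1 at any site correlates with a higher phenotype, yet the genotype that sets every site to its marginally-best allele lands far outside that population and scores below average:

```python
import random

random.seed(0)
n = 40

def phenotype(g):
    s = sum(g)
    # Returns diminish, then turn negative, past s = 25: a stand-in for
    # epistatic interactions that only bite far outside the observed range.
    return s - 0.1 * max(0, s - 25) ** 2

# A population sitting in the "normal" region of genome space.
population = [[random.randint(0, 1) for _ in range(n)] for _ in range(1000)]
avg = sum(phenotype(g) for g in population) / len(population)

# In-distribution, carrying allele 1 at any single site predicts a higher
# phenotype -- yet the "all best alleles" genotype lands outside that
# region and scores below the population average.
super_baby = [1] * n
print(avg, phenotype(super_baby))
```

The marginal correlations are real and were measured honestly; they just stop being informative once you leave the region they were measured in.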

I don't know if you have any experience with training ML models. I imagine not, since most people don't. Still, if you do have such experience, you can read my point as "if you take some policy that has been somewhat optimized by gradient descent for a loss function which is different from, but correlated with, the one you care about, and calculate the local gradient according to the loss function you care about, and then you take a giant step in the direction of the gradient you calculated, you are going to end up with higher loss even according to the loss function you care about, because the loss landscape is not flat". Basically my point is "going far out of distribution probably won't help you, even if you choose the direction that is optimal in-distribution -- you need to iterate".
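
For readers without that background, here is a minimal numerical sketch of the analogy. It drops the proxy-loss wrinkle and keeps only the core point; the landscape, step sizes, and iteration count are arbitrary choices of mine. One giant step along the locally correct gradient overshoots on a curved landscape, while many small iterated steps make steady progress:

```python
import math

def loss(x):
    # An arbitrary curved landscape: a broad valley with ripples.
    return (x - 3.0) ** 2 + math.cos(5 * x)

def grad(x, h=1e-6):
    # Central-difference numerical gradient.
    return (loss(x + h) - loss(x - h)) / (2 * h)

x0 = 0.0
giant = x0 - 10.0 * grad(x0)   # one huge step along the local gradient

small = x0
for _ in range(200):           # many small iterated steps
    small -= 0.01 * grad(small)

print(loss(x0), loss(giant), loss(small))
```

The giant step ends up with a far higher loss than the starting point, even though it moved in the locally optimal direction; the iterated small steps end below it.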

Actually waiting for a gene-edited baby to grow up is slow, and illegal

Yep. And yet, I claim, necessary if you don't want to be limited to fairly small gains.

Arguing that it would break well before 1 SD is... just wishful thinking. There's still a lot of low hanging fruit.

Note that this is "below 1SD of gains beyond what you would expect from the parents, and in a single iteration". If you were to take e.g. Terry Tao's genome, and then identify 30 places where he has "low intelligence" variants of whatever gene, and then make a clone with only those genes edited, and a second clone with no gene edits, I would expect the CRISPR clone to be a bit smarter than the unaltered clone, and many SD smarter than the average person. And, of course, at the extreme, if you take a zygote from two average-IQ parents, and replace its genome with Tao's genome then the resulting child would probably be more than 1SD smarter than you'd expect based on the IQs of the parents, because in that case you're choosing a known place in genome space to jump to, instead of choosing a direction and jumping really far in that direction from a mediocre place.

Maybe technical arguments don't belong in the CW thread, but people assuming that the loss landscape is basically a single smooth basin is a pet peeve of mine.

I am sorry for using "just wishful thinking", this was bad.

Hopefully have a 300-IQ super baby.

I don't know the current state of the art, but I think setting every gene to its "high IQ allele" would project linearly to an IQ well past 300. So getting a 300 IQ would actually require not setting some of those alleles.

if you take some policy that has been somewhat optimized by gradient descent for a loss function which

I have some experience with gradient descent methods, though not with ML. I challenge the premise "somewhat optimized": we are currently living in a dysgenic age. If we were talking about making Borzoi dogs run faster, I'd have agreed.

If you were to take e.g. Terry Tao's genome, and then identify 30 places where he has "low intelligence" variants of whatever gene, and then make a clone with only those genes edited, and a second clone with no gene edits, I would expect the CRISPR clone to be a bit smarter than the unaltered clone,

Alternatively, we could skip detecting which alleles lower IQ and simply eliminate very rare alleles, which are much more likely to be deleterious (e.g. replace any allele with frequency below a given threshold with its most similar allele above that threshold), without studying IQ at all.
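
A sketch of that purging rule, with every site name, allele, and the 1% cutoff invented for illustration. Real "most similar allele" logic would need sequence data, so this stand-in simply picks the commonest allele at the site:

```python
FREQ_CUTOFF = 0.01  # assumed threshold, not from the discussion above

def purge_rare_alleles(genome, allele_freqs, cutoff=FREQ_CUTOFF):
    """genome: {site: allele}; allele_freqs: {site: {allele: frequency}}."""
    edited = {}
    for site, allele in genome.items():
        freqs = allele_freqs[site]
        if freqs.get(allele, 0.0) < cutoff:
            # Rare allele: swap in the commonest allele at this site as a
            # stand-in for "most similar allele above threshold".
            edited[site] = max(freqs, key=freqs.get)
        else:
            edited[site] = allele
    return edited

# Hypothetical example: the rare allele at siteA is replaced,
# the common allele at siteB is kept.
genome = {"siteA": "A", "siteB": "T"}
freqs = {"siteA": {"A": 0.004, "G": 0.996},
         "siteB": {"T": 0.60, "C": 0.40}}
edited = purge_rare_alleles(genome, freqs)
print(edited)
```

Note that no phenotype measurement appears anywhere: the rule relies only on population allele frequencies.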

Maybe technical arguments don't belong in the CW thread,

Well, people on this forum don't discuss genetics in detail at all.

but people assuming that the loss landscape is basically a single smooth basin is a pet peeve of mine.

It's a basin in some places, until we travel to a mountain ridge. Since we are decades away from even trying "set all genes to a specific allele" - even for model organisms - very few people discuss it.

In your hypothetical bet, how would the result "IQ as intended, but the baby's head too large to be delivered naturally" count?

I challenge the premise "somewhat optimized": we are currently living in a dysgenic age.

The optimization happened in the ancestral environment, not over the last couple hundred years. The current environment is probably mildly dysgenic, but the effect is going to be tiny because it just hasn't been around for very long.

Alternatively, we could skip detecting which alleles lower IQ and simply eliminate very rare alleles, which are much more likely to be deleterious (e.g. replace any allele with frequency below a given threshold with its most similar allele above that threshold), without studying IQ at all.

I expect this would help a bit, just would be surprised if the effect size was actually anywhere near +1SD.

In your hypothetical bet, how would the result "IQ as intended, but the baby's head too large to be delivered naturally" count?

If the baby is healthy otherwise, that counts just fine.