
Friday Fun Thread for February 3, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


> But given the big results were all generalizable, it's likely that some (at least half of) smaller results will generalize too

Agreed, but even if 75% of the smaller results generalize, there's a high enough failure rate that it's very risky to update at all based on any particular survey result.

And even results that don't generalize will still be interesting, because they're about a group similar to us.

Eh... how similar though? I agree it's interesting but I don't particularly want my intuitions informed by evidence which may be quite faulty.

Also see decent-accuracy political polls with Xbox users - nonrepresentative data can be useful, although I don't think it's as useful as that paper would suggest.

Xbox users are much closer to the typical person than Aella followers are. I agree that nonrepresentative data can be useful, but at the same time, this is a very sexuality-focused person asking her sex-focused followers about sex questions, so this seems uniquely likely to not generalize.

I think we probably agree that there is some threshold of study quality below which it's not really worth paying attention to the results at all; we just may disagree on where that threshold is and where this survey lies. My threshold for survey quality, above which I actually pay attention to what it says, is very high because I think most studies generally get things wrong. I also think this survey is quite low-quality. Based on your wordings such as:

> apparently women have more vivid dreams than men

it sounds like you think this survey passes that threshold, despite that question showing only a 1.13-point average difference between men and women. I think it is very reasonable to simply ascribe that difference to confounders. Even something simple like a difference in average age between men and women in the sample (which seems quite plausible) could easily be enough to explain that difference on its own, and there are probably ~10 other equally likely confounders that could explain it.

There's a chance that the survey result to that question is genuine, but given all the extremely powerful confounders that could push it one way or another, I think the most prudent course of action is to simply ignore it entirely and not update at all based on it.
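To make the confounder point concrete, here's a toy simulation (every number here is invented for illustration and has nothing to do with the actual survey) in which sex has no direct effect on dream vividness at all, but an age skew between the sampled men and women still produces a gap of roughly the same size as the one quoted:

```python
import random

random.seed(0)

def vividness(age):
    # Toy model: vividness on a 1-10 scale declines slightly with age,
    # with noise; sex has NO direct effect in this model.
    return max(1, min(10, 8 - 0.08 * age + random.gauss(0, 2)))

# Invented assumption: women in the sample skew younger than men.
women = [vividness(random.gauss(25, 5)) for _ in range(10_000)]
men = [vividness(random.gauss(40, 8)) for _ in range(10_000)]

gap = sum(women) / len(women) - sum(men) / len(men)
print(f"apparent sex gap in vividness: {gap:.2f}")
```

A ~15-year age skew times a small per-year slope is enough to manufacture a point-sized "sex difference" out of nothing, which is why an unadjusted 1.13 gap from a convenience sample is weak evidence on its own.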

> there's a high enough failure rate that it's very risky to update at all based on any particular survey result.

Oh, to clarify, I don't think it's a good idea to take any survey result as 'true, because it's in the survey'. That's a very high standard - I wouldn't even say that about a lot of large RCTs or meta-analyses in medicine: you're not uniformly sampling them, and the characteristics that make an RCT "interesting" to a random person also make it more likely to be wrong somehow (e.g. fluvoxamine, which a lot of rats made a massive deal over because of a few trials, ended up not showing benefit in later trials, I think). And most survey-readers are much too credulous about the results, whether it's a serious poll or a fun one like Aella's. But this survey is interesting to look through and see potential associations, and then investigate them more.

> it sounds like you think this survey passes that threshold

I think there's a decent (50%? idk) chance that it'd generalize to the general population. Aella claims it replicates in other studies, although I didn't quickly find any on Google Scholar. My choice of the dreams + pedo examples was to argue that, even though such associations probably are present, I don't think they're that interesting.

Sure, I don't think you're taking the survey as absolute fact either. What I mean is that it's low-quality enough that (as an imperfect human) I don't consider it evidence at all. If I were a perfect bayesian updater then I could consider all the relevant factors, weigh hypotheses, etc. and update my beliefs by 0.01% towards believing that women dream more than men, but I'm not perfect, so it's safer to just not update them at all based on such terrible evidence.
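The "update by 0.01%" point can be made explicit with the odds form of Bayes' rule. The numbers below are invented purely to illustrate the scale of an update from near-worthless evidence; they are not derived from the survey:

```python
# Toy Bayes update with a barely-informative likelihood ratio.
# All numbers invented to illustrate the "0.01%" point above.
prior = 0.50               # P(women dream more vividly than men)
likelihood_ratio = 1.0004  # survey evidence, treated as almost uninformative

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"posterior: {posterior:.5f}")  # ~0.50010, a shift of ~0.01 points
```

An idealized updater happily tracks shifts this small; the argument above is that a human approximating this process is better off rounding such evidence to zero than risking a much larger, bias-driven shift.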

> But this survey is interesting to look through and see potential associations, and then investigate them more.

That's true, there's some value to it there, but again I'd be a little worried about it coloring my beliefs about things if I thought about it too much. This sort of data is very hard to find elsewhere, but can really color your day-to-day interpretations of how people act in real life. Since it's so hard to prove or disprove, those biases can just stick around for a long time if you let them.

> If I were a perfect bayesian updater then I could consider all the relevant factors, weigh hypotheses, etc. and update my beliefs by 0.01% towards believing that women dream more than men

I don't like 'bayesian thinking' as an idea (and think the bayesianness of thinking is overstated in rationalism). It's entirely possible to see something, say 'huh, that could be true, and it'd be interesting if it was', and then spend time evaluating how plausible it is / looking for more evidence for it without that corresponding to a probability. You can be smart enough to consider unlikely hypotheses without them contaminating your probabilities. And this still adds up to 'the results haven't told me anything new of meaning' for me anyway.

Even if these surveys really were 'coloring your beliefs', I think the best move would be to read so many of them that you viscerally notice the contradictions and absurdity, and then stop having them color your beliefs. Otherwise, all sorts of random things people say will 'color your beliefs', even if you don't seek them out.

> It's entirely possible to see something, and then say 'huh, that could be true, and it'd be interesting if it was', and then spend time evaluating how plausible it is / looking for more evidence for it without that corresponding to a probability.

Yeah, and this is pretty much what I'm referring to when I say it's not worth the time. The responses in the survey are of so little value to me, and so unlikely to be related to the truth, that I don't want to spend any cognitive energy investigating them. I'd spend more time/energy on them if I had more to spare, if the data seemed more valuable, or if the conclusions lined up in interesting ways with things I already believed.

> I don't like 'bayesian thinking' as an idea (and think 'thinking's bayesianness is overstated in rationalism).

Mostly agreed here - it's one of many useful cognitive tools, nothing more. I like it more as a means of informing my normal thinking process than as an actual way to think.

> Even if these surveys really were 'coloring your beliefs', I think the best move would be to read so many of them that you viscerally notice the contradictions and absurdity, and then stop having them color your beliefs.

I already do, and I did scan through this latest survey, I just don't think its results should rise to the level of "this could be true and it would be interesting if it was." Investigating these hypotheses takes cognitive energy which could be spent on more worthwhile hypotheses. That's what I mean when I say we should ignore the results. Surely most of them are true but that alone doesn't give the study any value; it has to actually be insightful somehow.

I mostly agree - I just think it's because of the surveyness of it, as opposed to the selection-biasedness of it. If this chaos survey had a representative sample, that wouldn't really change my estimate of it. I think reading its results is significantly less useful than reading reddit.com/r/all/new if you want to learn random facts or patterns about people (although mostly because I think the latter is somewhat useful).