This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
New from me - Effective Aspersions: How the Nonlinear Investigation Went Wrong, a deep dive into the sequence of events I summarized here last week. It's much longer than my typical article and difficult to properly condense. Normally I would summarize things, but since I summarized events last time, I'll simply excerpt the beginning:
Roko was banned for revealing Alice and Chloe's real names. It's not hard to figure out their names, but I'll refrain from revealing them, to prevent the search engines from linking them to this.
I want to highlight this comment, contrasting the nonlinear environment with normal professional employment. Erica had the insight that Alice and Chloe might be "exploited immigrants," and indeed they are from Germany and Denmark.
Chloe is still active in EA, with a similar job title, but hopefully her current job is lower stress and more aligned with her interests. Her boyfriend from Puerto Rico has also continued in the EA space and has several posts on EA forums.
Alice has been deleting some of her online activity, and possibly changing her name. She frequents vegan restaurants and continues to be poly (amazingly, with prediction markets).
When the real names are that easy to find, the ethics of enforcing a prohibition on 'doxxing' get a bit weird. What, exactly, are you protecting?
Probably, most people are just lazy and won't look anyway, so it still has a significant effect on the number of peripheral people who know. But I think people feel like they're protecting Alice/Chloe's names more than they really are.
It's also somehow funny that he only got a one-week ban from the forum. It feels very short.
(note: I only quickly crosschecked with your descriptions, not with the nonlinear post content)
When Scott wrote that the NYT article would make his job more difficult, I was sympathetic but curious. It's easy to find his previous handle yvain, his old blog, and then his personal site where he has his name. Despite his poor op-sec of not making a hard break between identities, patients couldn't easily google his name and find his somewhat controversial postings. All was well until Metz added Scott's full name to his article, which did more harm than good.
The current case isn't quite so clear. Starting with Alice and Chloe's real names, you quickly (without archive.org) find references to Nonlinear. But you have to put a few things together to connect with the current controversy.
Even without doxxing, it may be awkward when Alice/Chloe apply for their next job in the EA sphere. Upon seeing their résumés, the interviewer might ask, "was your experience at the infamous Nonlinear as bad as Alice's?"
Of course, the reputations of Ben / Kat / Emerson are much more directly impacted. I think the common theme is that they didn't know (or care) about the normal standards for investigative journalism / employment. Not that I would've done much better, but spirited rejection of Chesterton's fences to escape local optima probably makes things worse.
Norms, generally. Deanonymizing people is something I'd rather not become allowable, and the incompetence of others shouldn't affect me.
I'm getting incredibly sick of the "rationalist" affectation/verbal tic of "statistically" "quantifying" your predictions in contexts where this is completely meaningless.
"But I think it would still have over a 40% chance of irreparably harming your relationship with Drew"
"Nonlinear's threatening to sue Lightcone for Ben's post is completely unacceptable, decreases my sympathy for them by about 98%"
What does it mean to have a 40% chance of irreparably harming her relationship with Drew? Does that mean that there's a 60%, 70% etc. chance of it harming her relationship with Drew, but in a way that could be fixed, given enough time and effort? What information could she be presented with that would cause her to update her 40% prediction up or down?
The numbers are made up and the expressions of confidence don't matter. It's just cargo cult bullshit, applying a thin veneer of "logic" and "precision" to a completely intuitive gut feeling of the kind everyone has all the time.
Here's the rationalist theory:
Let's say you do this ten thousand times over the course of a few years. Make a list of every prediction, and count how many predictions in the '40%' category came true. If the fraction ends up near 40%, you're making good predictions. Or, take all your predictions and score them according to a Brier score or another scoring rule, and if you have a low score you're making good predictions.
The theory is that you can take statements and make predictions, and often the best you can do is '60%' or '20%' while maintaining a good score, and that this says something about the structure of decisionmaking.
I don't like it personally. I think the complexities you need to explore are mostly unrelated to the exact numbers. But you probably can, after the fact, in most scenarios say 'yeah, her relationship was harmed' or 'no, it wasn't', and then score your prediction, and if your calibration is reasonable and you're not manipulating them then it might mean something!
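The scoring scheme described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling, and the track record at the bottom is invented purely for the example:

```python
# Sketch of calibration checking and Brier scoring for a list of
# (stated_probability, outcome) pairs. All numbers below are made up.
from collections import defaultdict

def calibration_by_bucket(predictions):
    """Group predictions into 10%-wide buckets by stated probability
    and return each bucket's observed hit rate."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)
    return {b: sum(o) / len(o) for b, o in sorted(buckets.items())}

def brier_score(predictions):
    """Mean squared error between stated probability and the 0/1 outcome.
    Lower is better; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Invented track record: four '40%' predictions (one came true), etc.
record = [(0.4, True), (0.4, False), (0.4, False), (0.4, False),
          (0.8, True), (0.8, True), (0.2, False), (0.2, False)]
print(calibration_by_bucket(record))  # observed hit rate per bucket
print(brier_score(record))
```

If the hit rate in the 0.4 bucket hovers near 0.4 over many predictions, the forecaster is well calibrated at that level, which is the property the theory says gives the numbers meaning.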
I disagree. Even if the numbers are somewhat made up, a ballpark figure still tells you the relative probability of the outcomes that would result from a decision you're planning to make.
Going to the Drew example: if I think that doing something (say, going to school in another city and trying to have a LDR is going to result in a 40% chance that I'll lose the relationship entirely, and a 60% chance that I'll damage it in a way that would be difficult but not impossible to fix, then I can use that to decide whether that matters more to me than the job opportunities, the scholarships, or whatever else I gain from going to school away from him. "Might" doesn't give you enough information for a true reality check imo, because it treats low-probability events the same as high-probability events. Even verbal categories like low, medium, and high probability, especially when making a group decision, aren't precise enough to communicate what I'm actually thinking. Low is how low? For you it might be 5%, for me it's 20%. We can't communicate well if we don't know what the terms are.
I see an opening parenthesis without a closing one. Is your comment unfinished?
I think part of the point is the numerical values convey an unwarranted degree of precision based on the process that generated them. Say your estimate is 20% probability for X. Why not 21%? 19%? 25%? 15%? What's the size of the error term on your estimate? Is your forecasting of this outcome so good as to warrant a <1% margin? Of course, estimation of that error term itself has the same problems as generating the initial estimate.
I don't think this is a good objection. Numbers are often approximate. 20% means 'somewhere between 10% and 30%' as much as 'around a hundred pounds' might mean '75-125 pounds'. On the other hand, I usually think it's better to actually say what ideas and conditionals inform your judgement rather than just saying a number, and I'm not sure what the number adds to the former.
The numbers, at least for me, give a ballpark estimate of what I think will actually happen given a certain set of conditions. If I say 25% (which in my mind is generally within 10% of the number I give), that communicates in a way that "low probability" doesn't, because "low" doesn't mean anything. My low might be 25%, your low might be 5%. And making decisions, in a group setting especially, requires precision, so that when weighing options you can know with some degree of certainty what people think is likely and unlikely and to what degree. This allows you to discuss whether an X% (+/-10%) risk of something happening is serious enough to make that decision a bad idea. If low can mean anything between 5% and 35%, it's going to cause people to either overestimate the risk and be too cautious, or underestimate it and take risks that they might not take otherwise.
My point is that it's important to make that uncertainty explicit because not everyone talking to you is going to understand that. Maybe you think 20% is shorthand for 10-30% but someone else thinks it's precisely 20% or is actually 15-25% or some other range. I think the "around a hundred pounds" is a good example because "around" conveys a degree of uncertainty on the "hundred pounds." If I was quoted a price of some good at "a hundred pounds" (no "around") and later found out it was actually 125 I would feel like I was deceived.
Probability already inherently indicates uncertainty though! You can just say you're combining the different 'levels' of uncertainty (what that means is debatable), and the average of [10..30] is 20.
But the averages of [5..35] and [0..40] are also 20. Do you think all three of these ranges are conveying the same information because their average is 20? I don't.
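The arithmetic here is easy to make concrete. A toy check (the ranges are the ones from this exchange) shows that a shared midpoint says nothing about how much uncertainty each range actually carries:

```python
# Three ranges with the same midpoint (20) but very different widths,
# i.e. very different amounts of implied uncertainty.
ranges = [(10, 30), (5, 35), (0, 40)]
for lo, hi in ranges:
    mid = (lo + hi) / 2
    width = hi - lo
    print(f"{lo}%-{hi}%: midpoint {mid}%, width {width} points")
```

The midpoint is identical in all three cases, while the width, the part that encodes how unsure the speaker is, varies by a factor of two.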
Interesting article.
In countries where prior restraint is common, keeping the target of an investigation in the dark might be the only way to publish at all. Of course, in the US prior restraint is virtually non-existent, so this certainly does not apply to the present case.
Likewise, there might be tactical reasons not to confront the target of an investigation early on. "Dear Macbeth, just as a heads-up, we are currently investigating your alleged involvement in the demise of the previous king" is certainly not required. But if you have spent months gathering the facts, then you should give the other party some time to respond.
In general, there is a cooperative mindset and an adversarial mindset. From my perspective, both can be wrong in some cases. If your target has clearly defected from humanity so completely that no further cooperation with them is ever possible, trying to destroy them with your investigation may be imperative. So if you discover proof that the Nazis are running death camps, you will probably not want to give them two weeks of time to do damage control, preempt your story by releasing damning information with their own framing and generally put their spin on it.
In the real world, few people and organisations are so beyond redemption that destroying them by turning arguments into soldiers is worth the price on your soul and the damage to the epistemic commons. The Sequences are very clear that humans are not naturally good at doing Bayesian updates from partisan information.
Civil courts work (mostly) with two adversarial sides presenting self-interested arguments because there are certain standards of evidence, and the judges know the takes to be partisan and spend some effort to find the truth between the stories of both sides.
The court of public opinion may occasionally stumble on the truth if a matter appears very one-sided and also is very one-sided, but in most cases, almost nobody will find the shy flower of the truth on the Verdun-esque battlefield left by unrestricted argument warfare.
Excellent piece, great work all round, with an important point I hope gets through to the EA community. To be fair to that Oliver guy re the hypothetical, I can't blame him for thinking that "ignore complaints from the target and publish on time, they're probably lying anyway" is the standard for investigative journalism, because regardless of what professional journalists are taught (and taught to say), that is often how the majority of them behave. Which is not to say Herzog or Singal behave that way, or even Lewis (I don't recall ever reading investigative journalism by her, but I know she is a very careful and serious journalist); just that if Oliver hadn't been an amateur looking on from outside the profession, he would have known there is only one answer any journalist who wants to maintain their reputation can give.
To be charitable to Oliver, that could also be why he thinks deadlines matter at all outside scheduled publications, although from other points you made in the piece I think it's clear he just wanted to publish as soon as possible (which should have been a red flag, for anyone close to him prior to publication, that his motives were skewed).
Great post. This whole controversy is pretty fascinating but also seems like something you could sink dozens of hours into learning about without coming to any clear conclusions about what actually happened, who's telling the truth, etc. Nevertheless, here are a few things that come to mind after reading a bit about it.