TracingWoodgrains

the leaves that are green turn to brown

18 followers   follows 0 users   joined 2022 September 04 19:22:43 UTC

No longer active in this community. Catch me on Twitter or Substack.

User ID: 103

I feel like you’re eliding the point. In arguing against my case that his behavior follows from his ethics, you refer to the drowning child argument rather than the argument I linked, in which he states explicitly that sexual ethics is unimportant and that sex raises no unique moral issues at all.

I’m not the one who tied them together—he is! “Why are you focusing on petty things like sex when there are kids starving in Africa?” is only the slightest rephrasing of his argument. I absolutely would expect someone who takes Singer’s explicitly stated attitude towards sexual ethics to have looser sexual ethics than someone who takes the mainstream societal view, and while it would be unfair to pre-judge him based on that, it is eminently reasonable to take it into account after the fact.

Ah—I have no idea whether he explicitly said such a thing and would be quite startled if he did. From my angle, the fact of an affair and concurrent/subsequent collaborative work are already sufficient to establish a degree of fairly serious misconduct, where the spectres of professional reward and punishment inevitably loom given the power dynamics in play.

Oh, I see a lot of open questions and a lot of room for my judgment to shift in a number of directions—but few scenarios, short of complete falsehood, that are highly exculpatory for Singer. Your hypothetical is not impossible, but even in a scenario like that, the mixing of career benefits and an affair is morally fraught.

Yes—the email I screenshotted in my Twitter thread on the matter. Unless it’s fake, which I consider unlikely given that she submitted it as evidence in a court filing, it strongly indicates that they had an affair and that she was not his only affair partner at the time.

EDIT: The court filings also include an email from him rebuking her about an interaction they had at a fundraiser meeting for her charity, which he was on the board of. The contents of that interaction and email, as described in the court filing, are not nearly as clear as evidence, but they are still worth mentioning.

In what sense am I not being skeptical enough? My strongest conclusion by far is based on the email from Singer she entered into evidence and the evidence of their collaboration during the time frame of the alleged affair. Did you read the email? Unless it is inauthentic, it makes it hard for me to see a world in which they were not having an affair, in which he did not initially lie to her about it in at least one way, or in which he was not having at least one other affair at approximately the same time.

It’s worth being skeptical of her claims, and I am, visibly so, as I state every time I post about it. I agree that the “made advances on every female coauthor” claim in particular strains credulity. But there is enough that does not rely solely on her word to make it noteworthy and tough for me to dismiss in full.

It’s not priced in, though, except perhaps to the extremely aware. Not a single article has been written about it, it gets not a single mention in his biographies, and virtually nobody in the public knows any details of it. If it was an open secret, it certainly never escaped the circles closest to him, and while it’s possible and natural to assume he’d be the sort of person not to take serious issue with it, that doesn’t reveal much, if anything, about whether he actually did it.

It makes sense, yes. But many things make sense without actually being part of people’s stories. He has been meticulous at keeping it out of the public eye.

I have no idea who Walter Block is without looking him up. Singer is one of a small handful of living philosophers to make it into standard intro to philosophy courses. He is the only living person in the lede of Wikipedia’s article on utilitarianism and is, I would guess, virtually universally considered the greatest living utilitarian. He’s made Time top 100 lists and received a long list of public honors.

By any measure, he is one of the most influential ethicists of all time, certainly one of the most influential living ones. Few people’s ideas have shaped and shifted the public idea of morality as his have. He is almost singularly influential in his field.

Than adherents of virtue ethics, deontology, or contractualism? Yes. I am not claiming utilitarians are more likely to cheat than people who do not actively aim towards upholding high, clearly articulated ethical standards, but yes, I assert that moral systems have measurable impacts on people’s behavior in important ways, and that the safeguards against cheating within utilitarianism—and particularly, by Singer’s own explicit admission, in his brand of it—are straightforwardly weaker than those in other ethical systems.

There’s no evidence either way about an arrangement except the accuser’s claim that he lied about having one.

If you do not consider breaking monogamous relationships up and giving career benefits to affair partners in a domain where he holds immense power to be evidence of wrongdoing, I will not be able to convince you otherwise, but my impression is that most people (correctly, in my estimation) disapprove of both.

When:

  1. someone is in a monogamous relationship,

  2. Singer propositions her,

  3. they have an affair, and

  4. he publishes alongside her through the course and in the immediate aftermath of the affair…

I see very little left to demonstrate.

I think his comment on sexual ethics provides a hint as to what his rationalization of having affairs would be: people get so caught up on sexual ethics when what really makes a difference in the world are things like donating to overseas charities and advocating for animal rights. Yes, his affairs were selfish, but they were a small selfishness as he was pushing large groups towards immense utilitarian good, so to focus on it is a mere distraction. Particularly if nobody finds out—as you say, what’s the harm?

Even in utilitarian terms, this is a rationalization. He knows the second-order effects of affairs and knows what society’s actual feelings on sexual ethics are. He knows, surely, that it is the stuff of scandals and cratered reputations, that it could bring immense harm not just to him but to the ideas he champions, to his philosophy as a whole.

And you can argue that in a utilitarian frame, but we are all at war with our own minds to one extent or another, and the possibility of rationalization depends on the strength of one’s safeguards. Singer’s brand of utilitarianism is unusually bad, I would argue (and I think his quote on sexual ethics supports my argument), at providing defenses against rationalizing sexual misconduct to oneself.

In news that went mostly unnoticed at the time but has since picked up some steam, Peter Singer was sued pro se by a woman who alleged they had an affair twenty years ago and that he's had affairs with many other women, including many co-authors, over his career. Her lawsuit was pretty transparently weak due to statute-of-limitations issues and the affair being consensual--the "damages" she claimed were the loss of the house her ex-fiance bought as he was breaking up with her due to the affair--but the claims in it are nothing short of a terrible look for Singer: propositioning and sleeping with married and unmarried women in his field over a long period of time, giving career benefits (e.g. coauthorship) to affair partners, misrepresenting himself as having a "Don't ask, don't tell" arrangement with his wife, lying to affair partners about having multiple simultaneous affairs, and more. It was dismissed after a demurrer was granted on the grounds that the suit stated no actionable claims: that is, no facts were actually discovered or litigated.

In terms of hard evidence, she included several emails between Singer and her in the filing, in one of which he confesses to having multiple other apparent affair partners. They collaborated on at least four op-eds during the affair or its immediate aftermath, and she contributed a chapter to a book he wrote, so it does appear that her portrayal of career benefits for affair partners has some substance.

I read the court filings and have contacted the parties involved; I'm working on a more detailed article about the whole thing. If you'd like to see the court files yourself, the relevant court is here. Search for case number 22CV01792. The accuser also wrote a shorter essay about it on her website.

While she should not be viewed as a fully reliable narrator, the evidence suggests the truth of her claims that they had an affair, that he admitted to her he was having other affairs, and that she got career benefits from the affair. It's a bit mysterious to me that nobody has touched the story, but at least until a somewhat obscure December YouTube video, about the only place I can find the allegations having been discussed is a quiet EA forum thread.

It caught my attention because of that lack of attention despite its clear newsworthiness. It's the sort of thing I think is easy, but incorrect, to dismiss as mere gossip: Peter Singer is one of the leading ethicists of our time, and I believe his behavior follows from his ethics in visible, important ways. More specifically, I think classical utilitarianism as a whole suffers from a lack of respect for duty to the near in ways that this sort of misconduct highlights.

I don't think it's the sort of thing that should, or will, define Singer. I do, however, think that it's the sort of thing that should be part of his life story and so far has conspicuously not been.

New from me - Effective Aspersions: How the Nonlinear Investigation Went Wrong, a deep dive into the sequence of events I summarized here last week. It's much longer than my typical article and difficult to properly condense. Normally I would summarize things, but since I summarized events last time, I'll simply excerpt the beginning:

Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday.

A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of those claims, though their principles compel them to avoid threatening any form of legal action. The Times unconditionally refuses, claiming it must meet a hard deadline. The day before publication, Scott Alexander gets his hands on a copy of the article and informs the Times that it's full of provable falsehoods. They correct one of his claims, but tell him it's too late to fix another.

The final article comes out. It states openly that it's not aiming to be a balanced view, but to provide a deep dive into the worst of EA so people can judge for themselves. It contains lurid and alarming claims about Effective Altruists, paired with a section of responses, based on its conversations with CEA, that it says CEA agreed was a good summary of the EA perspective. In the end, it warns people that EA is a destructive movement likely to chew up and spit out young people hoping to do good.

In the comments, the overwhelming majority of readers thank it for providing such thorough journalism. Readers broadly agree that waiting to review CEA's further claims was clearly unnecessary. David Gerard pops in to provide more harrowing stories. Scott gets a polite but skeptical hearing as he shares his story of what happened, and one enterprising EA shares hard evidence of one error in the article to a mixed and mostly hostile audience. A few weeks later, the article writer pens a triumphant follow-up about how well the whole process went and offers to do similar work for a high price in the future.

This is not an essay about the New York Times.

The rationalist and EA communities tend to feel a certain way about the New York Times. Adamantly a certain way. Emphatically a certain way, even. I can't say my sentiment is terribly different—in fact, even when I have positive things to say about the New York Times, Scott has a way of saying them more elegantly, as in The Media Very Rarely Lies.

That essay segues neatly into my next statement, one I never imagined I would make:

You are very very lucky the New York Times does not cover you the way you cover you.

[...]

I follow drama and blow-ups in a lot of different subcultures. It's my job. The response I saw from the EA and LessWrong communities to [the] article was thoroughly ordinary as far as subculture pile-ons go, even commendable in ways. Here's the trouble: the ways it was ordinary are the ways it aspires to be extraordinary, and as the community walked headlong into every pitfall of rumormongering and dogpiles, it did so while explaining at every step how reasonable, charitable, and prudent it was in doing so.

Yeah. And honestly, there are worse things than being paid in exposure. I'd describe that as the primary compensation for my podcast job (my bosses pay me a perfectly fair hourly wage, but I'm certainly not doing it for the money). It's just worth being clear-eyed about precisely what that entails and when it's appropriate.

At least the callout post came with testimony from people who had actually worked at Nonlinear. It had quotes and screenshots and other forms of evidence of the kind that convince us of many things every day. It turns out these statements did not reflect reality and the screenshots were carefully curated to present a particular narrative. This is a risk we run any time we trust someone's testimony about a situation we don't have first hand experience with. This is an ordinary, and probably unavoidable, epistemic failure mode.

Not good enough.

Yes, the callout post came with all of those things. Here's what else it came with:

  • An emphatic warning from a trusted community member that he had reviewed the draft the day before publication and warned of major inaccuracies, only one of which got corrected.

  • The subjects of the post claiming hard evidence that many of the claims in the post were outright false and begging for a week to compile and send that evidence while emphasizing that they'd had only three hours to respond to claims that took hundreds of hours to compile.

  • A notice at the top, treated as exculpatory rather than damning, that it would be a one-sided post brought about by a search for negative information.

Any one of those things, by itself, was a glaring red flag. All three of them put together leave absolutely no excuse for the post to have been released in the state it was in, or for an entire community that prides itself on healthy epistemics to treat it as damning evidence of wrongdoing. If it had been published in the New York Times rather than the effective altruism community, every single rationalist would—rightly—be cursing the name of the news outlet that decided to post such a piece.

This is ordinary in Tumblr fandoms. It's ordinary in tabloids. It's jarring and inexcusable to see the same behavior dressed up in Reasonable, Rational, Sensible language and cheered by a community that prides itself on having better discourse and a more truth-seeking standard than others.

While I don't endorse "come on, you should totally draw art for my product"–type behavior, I do think the position would have been appealing and appropriate for a certain type of person I am not far from. My monthly salary on top of room and board was significantly larger as a military enlistee, but I also wasn't traveling the world. I think they were realistically underpaying for what they wanted but also think "don't take the job" is an adequate remedy to that.

I take your point about the writing style, but for me it's secondary to the core impression that the investigation was very badly mishandled in a way that makes examining things now feel unfair. The initial report should not have been released as-is and it reflects poorly on the whole EA/LW-rationalist community that it was. Given the poor choices around its release, I don't feel inclined to focus too much on what really looks like mundane and predictable workplace/roommate drama.

They never promised $75k/year in compensation, $10k of which would be cash-based. This was the compensation package listed in their written, mutually agreed upon employment contract:

As compensation for the services provided, the Employee shall be paid $1,000 per month as well as accommodation, food, and travel expenses, subject to Employer's discretion.

They included another text in evidence where they restated part of it:

stipend and salary mean the same thing. in this instance, it's just $1000 a month in addition to covering travel, food, housing etc

The only apparent mention of $70,000 as a number happened during a recorded interview (edited for clarity, meaning retained):

We're trying to think about what makes sense for compensation, because you're gonna be living with us, you're gonna be eating with us. How do you take into account the room and the board and stuff and the travel that's already covered? What we're thinking is a package where it's about the equivalent of being paid $70k a year in terms of the housing and the food, and you'll eat out every day and travel and do random fun stuff. And then on top of that, for the stuff that's not covered by room and board and travel is $1000 a month for basically anything else.

I would not personally take a job offering this compensation structure, but they were fully upfront about what the comp package was and it came pre-agreed as part of the deal. I see no grounds for complaints about dishonesty around it.

I won't claim it's entirely discontinuous from the past, but I think it's notable that, e.g., Ben expressed fury at the lack of changes since FTX and that the EA community as a whole has recent memories of being dragged through scandal after not being suspicious enough.

EDIT: Oliver, too, mentions being intimidated by FTX and not sharing his concerns as one of the worst mistakes of his career.

Geeks, MOPs, and Sociopaths remains the classic diagnosis of this.

Three months ago, LessWrong admin Ben Pace wrote a long thread on the EA forums: Sharing Info About Nonlinear, in which he shared the stories of two former employees of an EA startup who had bad experiences and left determined to warn others about the company. The startup is an "AI x-risk incubator," which in practice seems to look like a few people traveling around exotic locations, connecting with other effective altruists, and brainstorming new ways to save the world from AI. Very EA. The post contains wide-ranging allegations of misconduct, mostly centering on the startup's treatment of two employees who were hired and began traveling with the team, ultimately concluding that "if Nonlinear does more hiring in the EA ecosystem it is more-likely-than-not to chew up and spit out other bright-eyed young EAs who want to do good in the world."

He, and it seems to some extent fellow admin Oliver Habryka, mentioned they spent hundreds of hours interviewing dozens of people over the course of six months to pull the article together, ultimately paying the two main sources $5000 each for their trouble. It made huge waves in the EA community, torching Nonlinear's reputation.

A few days ago, Nonlinear responded with a wide-ranging tome of a post: 15,000 words in the main body, with a 134-page appendix. I had never heard of either Lightcone (the organization behind the callout post) or Nonlinear before a few days ago, since I don't pay incredibly close attention to the EA sphere, but the response bubbled up into my sphere of awareness.

The response provides concrete evidence, in the form of contemporaneous screenshots, against some of the most damning-sounding claims in the original article:

  • accusations that nobody would get vegan food for one employee, "Alice", while she was sick with COVID in a foreign country, so that she barely ate for two days, turned into "There was vegan food in the house and they picked food up for her, but on one of the days they wanted to go to a Mexican place instead of getting a vegan burger from Burger King."

  • accusations that they promised another, "Chloe", compensation around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1000/monthly with all expenses covered, which we estimated would add up to around $70,000."

  • accusations that they asked Alice to "bring a variety of illegal drugs across the border" turned into "They asked Alice, who regularly traveled with LSD and marijuana of her own accord, to pick up ADHD medicine and antibiotics at a pharmacy. When she told them the meds still required a prescription in Mexico, they said not to worry about it."

The narrative the Nonlinear team presents is that one employee, with mental health issues and a long history of making accusations against the people around her, came on board, lost trust in them due to a series of broadly imagined slights, and ultimately left and spread provable lies against them, while another, hired to be an assistant, was never quite satisfied with being an assistant and left frustrated as a result.

As amusing a collective picture as these events paint of what daily life at the startup actually looked like, they also make it pretty clear that the original article had multiple demonstrable falsehoods in it, in and around unrebutted claims. More, Nonlinear emphasized that they'd been given only a few days to respond to the claims before publication, and that when they asked for a week to compile hard evidence against the falsehoods, the writers told them the post would come out on schedule no matter what. The day before publication, Spencer Greenberg warned the writers of a number of misrepresentations in the article and later sent screenshots correcting the vegan-food claim; they corrected some misrepresentations, but by the time he sent the screenshots they said it was too late to change anything.

That's the part that caught my interest: how did the rationalist community, with its obsession with establishing better epistemics than those around it, wind up writing, embracing, and spreading a callout article with shoddy fact-checking?

From a long conversation with Habryka, my impression is that a lot of EA community members were left scarred and paranoid after the FTX implosion, correcting towards "We must identify and share any early warning signs possible to prevent another FTX." More directly, he told me that he wasn't too concerned with whether they shared falsehoods originally so long as they were airing out the claims of their sources and making their level of epistemic confidence clear. In particular, the organization threatened a libel suit shortly before publication, which they took as a threat of retaliation that meant they should and must hold to their original release schedule.

My own impression is that this is a case of rationalist first-principles thinking gone awry, applied to a domain where it can do real damage. Journalism doesn't have the greatest reputation these days, and for good reason, but his approach contrasts starkly with journalism's aspiration to heavily prioritize accuracy and to verify information before releasing it. I mention that aspiration not to claim that journalists live up to it, but because his approach is a conscious deviation from it: an assertion that if something is important enough, it's worth airing allegations without closely examining the contrary information other sources are asking you to pause and examine.

I'd like to write more about the situation at some point, because I have a lot to say about it even beyond the flood of comments I left on the LessWrong and EA mirrors of the article and think it presses at some important tension points. It's a bit discouraging to watch communities who try so hard to be good from first principles speedrun so many of the pitfalls broader society built guardrails around.

Did you? I barely noticed that happened on my timeline. It didn't really pop up except from, like, a BARPod listener tagging me.

I very frankly do not care who someone is "adjacent to"; I care who they are and what they say. Your favored public thinkers, whoever they are, are extremely unlikely to talk about things like those in the link above. Know why I can say that? Because I follow just about everyone who talks visibly about that stuff. People should talk about it more. That he does so is a credit to him and a strike against those who complain that I would so much as mention him.

Your line about "Jews" betrays your own ignorance about him, incidentally. He's anything but antisemitic, and inasmuch as I have disputes with him on that broad topic, it is that he sees Israel's hands as rather cleaner than I personally do. Your opponents do not all fit into a single bucket that you can label "fascist" and have done with it, and I have little patience for dark insinuations of this sort.

He posts astute and data-driven threads on a wide variety of topics worth talking about, including a number that are of obvious direct personal interest to me. Why on earth wouldn’t I recommend him?

The best advice I can give for Twitter is basically “follow eigenrobot and work outward from there.” tszzl (roon) and growing_daniel are also good “hub accounts” for the tech side of things. If you prefer to start with motte-adjacency, go with ymeskhout, AnechoicMedia_, sonyasupposedly, CremieuxRecueil, and Kulak.

There are a lot more accounts I could mention (I follow around 900 people) but the clear entry point to Twitter for people from here is the ACX-adjacent section (TPOT), which maintains a high standard of amiability/sanity norms while talking about much the same stuff as here.

I’ve been spending a lot more time on Twitter lately, particularly since I can mottepost there now. What I formerly read as fundamental constraints in the directions you point turn out to be mediated pretty heavily by the part of it a person spends time in and who they choose to interact with. There’s a self-selecting group in and around the ACX-adjacent parts of Twitter that is pleasant and full of smart, well-mannered, somewhat ideologically sympathetic people, with two clear advantages in my view:

  1. The decentralized nature means that incompatible personalities can self-select into slightly different subcommunities where people who get on with both can still interact with both in what feels like the same space, meaning in particular that the ideological range is much broader than here.

  2. The public nature means that when you chat with people in your quiet corner, your posts will occasionally leave the bubble and contact a much wider audience, sometimes including the public figures you talk about. In the recent OpenAI drama, for example, the interim CEO was a well-known regular in Twitter’s ACX-adjacent sphere.

IIRC Tracing has "I am a furry, here is my favorite gay furry porn I made with the gay furry porn AI generator" in all his bios.

Hm?

It's pretty much limited to profile pictures, to be honest, and I don't really talk about porn at all. I do AI generate pleasant SFW furry pictures on a quiet Twitter alt from time to time, but that's about it. It's only really a major part of my persona in BARPod Lore, and that's mostly just because that's what's appropriate for the show's vibe.