faul_sname
Fuck around once, find out once. Do it again, now it's science.
The Against Malaria Foundation is a pretty solid choice, and it's where most of my charitable contributions go. If you care more about quality than quantity of life, you might also consider Deworm the World. Their pitch is also refreshingly concrete and not "woke" at all:
More than 913 million children are at risk for parasitic worm infections like soil-transmitted helminths and schistosomiasis.
These infections mainly occur in areas with inadequate sanitation, disproportionately affecting poor communities. Children infected with worms are often too sick or weak to attend school because their body can’t properly absorb nutrients. If left untreated, worm infections lead to anemia, malnourishment, impaired mental and physical development, and severe chronic illnesses.
A safe, effective, and low-cost solution does exist — in the form of a simple pill taken once or twice a year. Regular treatment reduces the spread of the disease and helps children stay in school and live healthier and more productive lives.
Since 2014, Deworm the World has helped deliver over 1.8 billion deworming treatments to children across several geographies – for less than 50 cents per treatment. We work closely with governments to implement high-quality and cost-effective mass deworming programs which are resulting in dramatic reductions in worm prevalence.
Every year, GiveWell publishes a detailed analysis of the cost effectiveness of each charity in a spreadsheet that documents their assumptions and their model. If you care to do so, you can also make a copy of the spreadsheet and plug in your own numbers, though I basically never do that.
But yeah, no reason to give money to a global health charity that has politics you hate. The impact per dollar between the listed global health charities just doesn't vary by all that much.
I am not arguing that you can't get a single standard deviation of gain using gene editing, and I am especially not arguing that you can't get there eventually using an iterative approach. I am arguing that you will get less than +1SD of gain (and, in fact, probably a reduction) in intelligence if you follow the specific procedure of
- Catalogue all of the different genes where different alleles are correlated with differences in the observed phenotypic trait of interest (in this case intelligence)
- Determine the "best" allele for every single gene, and edit the genome accordingly at all of those places.
- Hopefully have a 300-IQ super baby.
The specific thing I want to call out is that each of the alleles you've measured to be the "better" variant is the better variant in the context of the environment the measurements occurred in. If you change a bunch of them at once, though, you're going to end up in a completely different region of genome space, where the genotype/phenotype correlations you found in the normal human distribution probably won't hold.
I don't know if you have any experience with training ML models. I imagine not, since most people don't. Still, if you do have such experience, you can read my point as "if you take some policy that has been somewhat optimized by gradient descent for a loss function which is different from, but correlated with, the one you care about, and calculate the local gradient according to the loss function you care about, and then you take a giant step in the direction of the gradient you calculated, you are going to end up with higher loss even according to the loss function you care about, because the loss landscape is not flat". Basically my point is "going far out of distribution probably won't help you, even if you choose the direction that is optimal in-distribution -- you need to iterate".
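To make that concrete, here's a minimal sketch (entirely my own construction -- the toy loss function, starting point, and step sizes are made up for illustration, not anything from genetics or a real training run):

```python
import math

# A toy, non-flat 1-D "loss landscape": locally smooth, globally wiggly.
def loss(x):
    return math.sin(3 * x) + 0.1 * x ** 2

# Numerical gradient at a point.
def grad(x, eps=1e-6):
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

x0 = 0.4          # where the "measurements" were taken
g = grad(x0)      # the locally-optimal direction of improvement

small_step = x0 - 0.05 * g   # cautious, iterative move
giant_step = x0 - 5.0 * g    # one huge jump in the same direction

print(f"loss at start:         {loss(x0):.3f}")          # ~0.95
print(f"loss after small step: {loss(small_step):.3f}")  # ~0.87 (better)
print(f"loss after giant step: {loss(giant_step):.3f}")  # ~3.51 (much worse)
```

The small step helps because the gradient is locally accurate; the giant step lands far outside the region where that gradient was measured, and the loss goes back up. Same direction, very different outcome.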
Actually waiting for gene edited baby to grow is slow, and illegal
Yep. And yet, I claim, necessary if you don't want to be limited to fairly small gains.
Arguing that it would break well before 1 SD is... just wishful thinking. There's still a lot of low hanging fruit.
Note that this is "below 1SD of gains beyond what you would expect from the parents, and in a single iteration". If you were to take e.g. Terry Tao's genome, and then identify 30 places where he has "low intelligence" variants of whatever gene, and then make a clone with only those genes edited, and a second clone with no gene edits, I would expect the CRISPR clone to be a bit smarter than the unaltered clone, and many SD smarter than the average person. And, of course, at the extreme, if you take a zygote from two average-IQ parents and replace its genome with Tao's genome, then the resulting child would probably be more than 1SD smarter than you'd expect based on the IQs of the parents, because in that case you're choosing a known place in genome space to jump to, instead of choosing a direction and jumping really far in that direction from a mediocre place.
Maybe technical arguments don't belong in the CW thread, but people assuming that the loss landscape is basically a single smooth basin is a pet peeve of mine.
I thought it was claimed by the birds this year?
You were off by a year.
Why would you assume "aliens" not "previous Earth civilization" in that case?
So literally some takes from 5 years ago and a different account, which, if I'm correct about which name you're implying guesswho used to post as, are more saying "in practice sexual assault accusations aren't being used in every political fight, so let's maybe hold off on trying drastic solutions to that problem until it's demonstrated that your proposed cure isn't worse than the disease".
Let he who has never posted a take that some people find objectionable cast the first stone.
I think this is referring to this sequence
ymeskhout Trump got hit by two gag orders from two different judges [...] So with that out of the way, how does it apply to Trump? Judge Chutkan's order restricts him from making statements that "target" the prosecutor, court staff, and "reasonably foreseeable witnesses or the substance of their testimony". [...] Discrediting witnesses is harder to draw a clean line on, because again there's a gradient between discrediting and intimidating. I think Trump should have the absolute and unrestricted right to discuss any of his charges and discredit any evidence and witnesses against him.
guesswho I'm not sure why it's important to discredit a witness in the public eye, instead of at trial where you're allowed to say all those things directly to the judge and jury. Especially in light of the negative externalities to the system itself, ie if we allow defendants to make witnesses and judges and prosecutors and jurors lives a living nightmare right up until the line of 'definitely undeniably direct tampering', then that sets a precedent where no sane person wants to fill any of those roles, and the process of justice is impeded. [...]
sliders1234 [...] Girl who you had a drunken hook up texted you the next day saying how much fun she had with you last night. You ignore her text. 2 weeks later she claims rape. It’s in the newspaper. Suddenly your name is tarnished. Everyone in town now views your condo building as feeding money into your pocket. Sales slump. Now do you see why this hypothetical real estate developer would have a reason to hit back in the media? He’s being significantly punished (maybe leading to bankruptcy) without ever being found guilty in the court of law. Of course Trump has motivations to hit hard against the judge and prosecuting attorney. The more partisan they appear the more it makes him look better and get the marginal voter.
guesswho [...] I guess what I would say is that 1. that seems like a really narrow case [...] 2. I would hope a judge in that case wouldn't issue a blanket gag order [...] 3. yeah, there may have to be some trade-offs between corner-cases like this and making the system work in the median case. [...] I'm open to the idea that we should reform the system to make it less damaging to defendants who have not been convicted yet, but if we are deciding to care about that then these super-rich and powerful guys worrying about their reputations are way down on my list under a lot of other defendants who need the help more urgently.
That technically counts as "considering it fair that a defendant can be bound not to disparage a witness against them in a sexual assault case, even if the defendant is a politician and the rape accusation is false". But if that's the exchange @FCfromSSC is talking about, it seems like a massive stretch to describe it that way.
As long as sulfuric acid and nitrate salts are still available, the acid mix shouldn't be too hard
Words spoken by someone who is about to have fewer hands and/or eyes.
How the heck does anyone accumulate a bankroll of $20M if they can only make at best $50/hour grinding at the lower stakes?
They don't. The people playing those games are not professional poker players choosing that particular game because they've done the math and established that playing that game is Kelly optimal. They're compulsive gamblers who are good at poker and like high-stakes bets. Making things more complicated, you have people like Phil Ivey who are both very good poker players with a massive skill edge and also compulsive gamblers.
As a side note, if you look at the most successful poker players you're going to see cases where luck played a substantial part in their success (i.e. they made Kelly overbets, and got lucky and won those bets). Asking how to be successful at that level is like asking how to be successful at playing the lottery.
a backup plan to "go back to grinding at poker." ... Apparently it works
It "works" but:
- The pay is bad. You will be making something on the order of 10-20% of what an actual professional with a similar skill level makes, and on top of that you will experience massive swings in your net worth even if you do everything right. The rule of thumb is that you can calculate your maximum expected hourly earnings by considering the largest sustained loss where you would continue playing, and dividing that by 1000. So if you would keep playing through a $20,000 loss, that means you can expect to earn $20 / hour if your play is impeccable. (A toy version of this arithmetic appears after this list.)
- The competition is brutal. Poker serves as sort of a "job of last resort" to people who, for whatever reason, cannot function in a "real job". This may be because they lack executive function, or because they don't do well in situations where the rules are ambiguous, or because they can't stand the idea of working for someone else but also can't or won't start their own business. The things that all these groups have in common, though, are that they're generally frighteningly intelligent, that they're functional enough to do deliberate practice (those who aren't lose their bankroll and stop playing), and that they've generally been at this for years. At 1/2 you can expect to make about $10 / hour, and it goes up from there in a way that is slower than linear as the stakes increase, because the players get better. At 50/100, an amazing player with a $500k bankroll might make about $50 / hour. I do hear that this stops being true at extremely high stakes, like $4000/$8000, where compulsive gamblers become more frequent again (relative to 50/100; the players are still far better than you'd see at a 1/2 or even a 10/20 table). But if you want to play 4000/8000 games you need a bankroll in the ballpark of $10-20M, and also there aren't that many such games. For reference, I capped out playing 2/5 NL, where I made an average of about $12 / hour. Every time I tried to move up to 5/10 I got eaten alive.
- The hours are weird. Say goodbye to leisure time on your evenings, weekends, and holidays. Expect pretty regular all-nighters, because most of your profit will come from those times when you manage to find a good table and just extract money from it for 16 hours straight.
- It's bad for your mental health. When I was getting started, I imagined that it would be a lifestyle of pitting my mind against others, of earning money by being objectively better at poker than the other professional players. It is in fact nothing like that at all. Your money does not come from other professional players, and in fact if there are more than about 3 professional players at a table of 10, you should leave and find another table, because even if you are quite good, the professional players just don't make frequent enough or large enough mistakes that exploiting their mistakes will make you much money. No, you make your money by identifying which tables contain (in the best case) drunk tourists or (in a more typical case) compulsive gamblers pissing away money that they managed to beg, borrow, or steal in a desperate attempt to "make back their losses". It is absolutely soul sucking to realize that your lifestyle is funded by exploiting gambling addicts, and that if you find yourself at a table without any people destroying their lives it means you're at the wrong table.
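As referenced in the first bullet, here's that rule of thumb as a trivial calculator (my own formalization of the heuristic; the function name is made up and the /1000 divisor is just the quoted rule, not anything rigorously derived):

```python
def max_expected_hourly(tolerated_downswing_usd: float) -> float:
    """Rule of thumb: your best-case expected hourly rate is roughly
    the largest sustained loss you'd keep playing through, divided by 1000."""
    return tolerated_downswing_usd / 1000

print(max_expected_hourly(20_000))  # -> 20.0, i.e. ~$20/hour
```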
In summary, -2/10 do not recommend.
• The poker player. This is the hardest to explain, they seem to be able to read people, manipulate people and navigate around smart people in a manner that no one can. They aren't immediately obvious as the smartest in any room, but they somehow always get their way. Often end up CEOs or millionaires somehow.
As someone who has actually played poker at a reasonably competitive level, I think this type of intelligence should be broken into two almost orthogonal components.
- The edge-seeker: Is always tracking many possibilities, always tracking prices, and always looking for small exploitable ways that others are doing things wrong such that this person can eke out some small benefit from taking advantage of that weakness. Think "theory-heavy poker player" or "Jane Street employee" - not necessarily great at textbook math (though probably at least "pretty good"), but excellent at quickly building up very detailed models and ruthless at discarding models that don't provide an edge.
- The politician: Always tracking the expected mental states of others, viewing things from their perspective in order to figure out what signals to send to maximize the chance that that person acts in a way beneficial to the politician. Think "used car salesman", "politician", or "con artist" (but I repeat myself)
As a note, in actual poker games we call the second type "fish", and the key to making money at poker is to ensure you're sitting at a table with a lot of people like that.
Anyway, in terms of the question at hand I'd add a couple more feminine-coded types of this kind of thing where excellence really does make a notable difference.
- The teacher: Like the politician, tracks the probable internal mental models of many people at once. However, instead of using this knowledge to exploit weaknesses, the teacher seeks to refine those mental models to be more useful to the people who hold them.
- The diplomat/organizer: Tracks the motivations of multiple possibly conflicting parties, and tries to mediate communication between them to come to a mutually agreeable solution.
- The gossip: Tracks the goings and doings of a significant number of people, and also the interests and biases of those people, in order to share the juiciest news and secrets with the people who will react the most strongly to them (hey, I didn't say all of the female-coded types were going to be prosocial)
Of course instead of calling them "male-oriented" and "female-oriented" it might be more accurate to call them "systems-oriented" and "people-oriented". Systems-oriented thinking does scale much better than people-oriented thinking in the best case, although I think if you look at the median case instead of the outliers that's probably flipped.
Sorry for the slow reply, there's a bit to address.
Exactly. My goal is to investigate how exactly that happens. How we reason, how evidence works on us, how we draw conclusions and form beliefs.
Yeah, I like to think about this too. My impression is that there are two main ways that people come to form beliefs, in the sense of models of the world that produce predictions. Some people may lean more towards one way or the other, but most people are capable of changing their mind in either way in certain circumstances.
The first is through direct experience. For example, most people are not born knowing that if you take a cup of liquid in a short, fat glass and pour it into a tall, skinny glass, the amount of liquid remains the same despite the tall, skinny glass looking like it has more liquid. The way people become convinced of this kind of conservation is just by playing with liquids until they develop an intuitive understanding of the dynamics involved.
The second is by developing a model of other people's models, and querying that model to generate predictions as needed. This is how you end up with people who think things like "investing in real estate is the path to a prosperous life" despite not being particularly financially literate, nor having any personal experience with investing in real estate -- the successful people invest in real estate and talk about their successes, and so the financially illiterate person will predict good outcomes of pursuing that strategy despite not being able to give any specifics in terms of by what concrete mechanism that strategy should be expected to be successful. As a side note, expect it to be super frustrating to argue with someone about a belief they have picked up in this way -- you can argue till the cows come home about how some specific mechanism doesn't apply, but they weren't convinced by that mechanism, they were convinced by that one smart person they know believing something like this.
For the first type of belief, I definitely don't consider there to be any element of choice in what you expect your future observations to be based on your intuited understanding of the dynamics of the system. I cannot consciously decide not to believe in object permanence. For the second type of belief, I could see a case being made that you can decide which people's models to download into your brain, and which ones to trust. To an extent I think this is an accurate model, but I think if you trust the predictions generated by (your model of) someone else's model and are burned by that decision enough times, you will stop trusting the predictions of that model, same as you would if it was your own model.
There are intermediate cases, and perhaps it's better to treat this as a spectrum rather than a binary classification, and perhaps there are additional axes that would capture even more of the variation. But that's basically how I think about the acquisition of beliefs.
Incidentally I think "logical deduction generally works as a strategy for predicting stuff in the real world" tends to be a belief of the first type, generated by trying that strategy a bunch and having it work. It will only work in specific situations, and people who hold that kind of belief will have some pretty complex and nuanced ideas of when exactly that strategy will and won't work, in much the same way that embodied humans actually have some pretty complex and nuanced ideas about what exactly it means for objects to be permanent. I notice "trust logical deduction and math" tends to be a more widespread belief among mathematicians and physicists, and a much less widespread belief among biologists and doctors, so I think the usefulness of that heuristic varies a lot based on your context.
We reason based on data.
When we take data in, we can accept it uncritically, and promptly form a belief. This is a choice.
Interesting. This is not really how I would describe my internal experience. I would describe my experience as something more like "when I take data in, I note the data that I am seeing. I maybe form some weak rudimentary model of what might have caused me to observe the thing I saw; if I'm in peak form I might produce more than one (i.e. two, it's never more than two in practice) competing model that might explain that observation. If my model does badly, I don't trust it very much, whereas if it does well over time I adopt the idea that the model is true as a belief".
But anyway, this might all be esoteric bullshit. I'm a programmer, not a philosopher. Let's move back to the object level.
One of the bedrock parts of Materialism is that effects have causes.
Ehhh. Mostly true, at least. True in cases where there's an arrow of time that points from low-entropy systems to high-entropy systems, at least, which describes the world we live in and as such is probably good enough for the conversation at hand (see this excellent Wolfram article for nuance, though, if you're interested in such things -- look particularly at the section titled "Reversibility, Irreversibility and Equilibrium" for a demonstration that "the direction of causality" is "the direction pointing from low entropy to high entropy, even in systems that are reversible").
Therefore, under Materialist assumptions, the Big Bang has a cause.
Seems likely to me, at least in the sense of "the entropy at the moment of the Big Bang was not literally zero, nor was it maximal, so there was likely some other comprehensible thing going on".
We have no way of observing that cause, nor of testing theories about it. If we did, we'd need a cause for that cause, and so on, in a potentially-infinite regress
I think if we managed to get back to either zero entropy or infinite entropy we wouldn't need to keep regressing. But as far as I know we haven't actually gotten there with anything resembling a solid theory.
So, one might nominate three competing models:
• The cause is a seamless physics loop, part of which is hidden behind the back wall.
• The universe is actually a simulation, and the real universe it's being simulated in is behind the back wall.
• One or more of the deists are right, and it's some creator divinity behind the back wall.
I'd nominate a fourth hypothesis: "the big bang is the point where, if you trace the chains of causality back past it, entropy starts going back up instead of down; time is defined as the direction away from the big bang" (see the Wolfram article above). In any case, the question "but can we chase the chain of causality back further somehow -- what imbues some mathematical object with the fire of existence?" still feels salient, at least (though maybe it's just a nonsense question?)
In any case, I am with you that none of these hypotheses make particularly useful or testable predictions.
But yeah, anyone claiming that materialism is complete in the way you are looking for is, I think, wrong. For that matter, I think anyone claiming the same of deism is wrong.
It is common here to encounter people who claim the human mind is something akin to deterministic clockwork, and therefore free will can't exist
I think those people are wrong. I think free will is what making a decision feels like from the inside -- just because some theoretical omniscient entity could in theory predict what your decision will be before you know what your decision is doesn't mean you know what that decision would be ahead of time. If predictive ML models get really good, and also EEGs get really good, and we set up an experiment wherein you choose when to press a button, and a computer can reliably predict 500ms faster than you that you will press the button, I don't think that experiment would disprove free will. If you were to close the loop and light up a light whenever the machine predicts the button would be pressed, a person could just be contrary and not press the button when the light turns on, and press the button when the light is off (because the human reaction time of 200ms is less than the 500ms standard we're holding the machine to). I think that's a pretty reasonable operationalization of the "I could choose otherwise" observation that underlies our conviction that we have free will. IIRC this is a fairly standard position called "compatibilism" though I don't think I've ever read any of the officially endorsed literature.
That said, in my personal experience "internally predict that this outcome will be the one I observe" does not feel like a "choice" in the way that "press the button" vs "don't press the button" feels like a choice. And it's that observation that I keep coming back to.
Finally, we can adopt an axiom. Axioms are not evidence, and they are not supported by evidence; rather, evidence either fits into them or it doesn't. We use axioms to group and collate evidence. Axioms are beliefs, and they cannot be forced, only chosen, though evidence we've accepted as valid that doesn't fit into them must be discarded or otherwise handled in some other way. This, again, is a choice.
This might just be a difference in vocabulary -- what you're calling "axioms" I'm calling "models" or "hypotheses", because "axiom" implies to me that it's the sort of thing where if you get conflicting evidence you have to throw away the evidence, rather than having the option of throwing away the "axiom". Maybe you mean something different by "choice" than I do as well.
Primarily, the belief that one's other beliefs are not chosen but forced seems to make them more susceptible to accepting other beliefs uncritically, resulting in our history of "scientific" movements and ideologies that were not in any meaningful sense scientific, but which were very good at assembling huge piles of human skulls. Other implications branch out into politics, the nature of liberty and democracy, the proper understanding of values, how we should approach conflict, and so on, but these are beyond the scope of this margin. I've just hit 10k characters and have already had to rewrite half this post once, so I'll leave it here.
If we're going by "stated beliefs" rather than "anticipatory beliefs" I just flatly agree with this.
In conclusion, I'm pretty sure this is all the Enlightenment's fault.
That pattern of misbehavior happened before the Enlightenment too, though. And, on balance, I think the Enlightenment in general, and the scientific way of thinking in particular, left us with a world I'd much rather live in than the pre-Enlightenment world. I will end with this graph of life expectancy at birth over time.
Bro, that's just an illusion due to you smuggling in your non empirical (read: religious) belief that other people's feelings matter.
My belief that other people have conscious experience and my belief that that conscious experience matters are not the same belief. The belief that other people's experiences matter to me is something that comes from my moral framework -- and yes, many people use religious teachings as their moral framework, so in that sense you could view it as similar to religion. But again, it's helpful to distinguish between that-which-is and that-which-should-be. I do expect that my sense of that which should be is downstream of some empirically verifiable properties of multi-agent systems, and also a shit-ton of random chance, but I don't have super strong intuitions for what those properties are, nor do I think that I'm morally obligated to change my own behavior away from what my moral intuitions say I should do just because I learn something new about game theory.
Do you feel bad about ripping off video game characters?
I don't think video game characters have conscious experiences. That seems like a pretty non-extreme viewpoint to me: "video game characters are conscious", as a world model, generates quite bad predictions about future observations. In a pure consequentialist sense, I do expect it's fairly likely that the game designers will punish the player's decision to rip off a character, but also it's not like winning the game is a moral obligation, so I might rip off a video game character because I expect that to lead to more entertaining dialogue.
Honestly though, what position are you even trying to argue for here? I am very skeptical that you endorse the solipsist position yourself (though if you do I expect your reasoning there, and particularly any observations you could make that would convince you that it wasn't true, would be an interesting conversation).
When there's a conflict and your belief systems disagree, who wins?
I think it's one of those "the hardest decisions are the ones that ultimately matter the least" sorts of things -- if there was some strong reason to choose one side over the other, the decision would be an easy one (unless it's hard because you're missing obtainable information, in which case you should maybe go obtain that information). In my case I'd say that generally, all else being equal, I'm going to go with whatever would sound intuitively right to someone unsophisticated (though all else is not equal very often). I'm not that attached to that approach though -- I have mostly settled on it as a matter of pragmatism, and it seems to be working pretty well so far.
You can't very well faithfully serve two masters but you can totally faithfully serve zero masters.
"other people are actually just p zombies behaving as if they are conscious like me" generates predictions that are just as good
I genuinely don't think it does. Unless you mean "believing" that in the classic "invisible dragon in my garage" sense, which I don't count as actual belief. Rule of thumb - if you're preemptively coming up with excuses for why your future observations will not support your theory over competing theories, or why your theory actually predicts exactly the same thing that the classic theory predicts and the only differences are in something unfalsifiable, that should be a giant red flag for your theory.
For example: I think that my experience of consciousness is caused by specific physical things my nervous system does sometimes. If I slap some electrodes on my scalp to take an electroencephalogram, and then do some task that involves introspecting, making conscious decisions, and describing those experiences, I expect that I will see particular patterns of electrical activity in my brain any time I make a conscious decision. I expect that the EEG readouts from other people would have similar patterns.
For the p-zombie explanation to make sense, we would either have to say that my experience of consciousness and the things I said about it were not caused by things happening in my nervous system, or we would have to say that those patterns in my nervous system and the way I described my experience were related to my consciousness, but in other people there was something else going on which just happened to have indistinguishable results. And also we would predict in advance that any time we try to use the "p-zombie" hypothesis what we actually end up doing is going "what do we predict in the world where other people's consciousness works the same way as mine" and then saying "the p-zombie hypothesis says the same thing" -- the p-zombie hypothesis does not actually predict anything on its own.
That's a way better life than actually being pro social all the time.
As an empirical matter, I think that if you try rating your internal subjective experience after ripping off a stranger who gets angry at you but who you'll never see again vs your internal subjective experience after helping a stranger who expresses gratitude but you'll never see again, you may be surprised at which one results in higher subjective well-being. That doesn't really have any bearing on the factual questions of other peoples' internal experiences, just a prediction I have about what your own internal experience will be like.
What exactly is the problem with using the world model imparted by some religion, in contexts where the world model of that religion has a track record of making accurate predictions and reason does not?
I don’t think there are a huge number of such contexts, but there are definitely some (e.g. "if you strive to be honest and fair in your actions by the standard religious definitions, that genuinely will turn out better for you in the long run" makes good predictions in a tight-knit community even if the "reasonable" position is that you could probably get away with cheating in situations where you don’t see any way that you would get caught). You can of course try to galaxy-brain some reason that what the religion says is actually the same conclusion you would come to using pure logic, but I think "look around and see which approaches work well and which ones don't, and try out the ones that work well for others, and keep doing them if they work even if you don't fully understand why" is a perfectly legitimate approach.
In my experience it's very nice to have a strong-theoretical-model-backed lens you can use to interpret your empirical observations. But you can operate without such a lens, or with a lens based on a model that is known to be flawed (all models are wrong, some are useful).
Assuming that other consciousnesses exist does not produce better advance predictions of experiences
Sure it does! I talk about consciousness, and what I say about it is caused by how I myself experience consciousness. If consciousness exists in others, I expect them to describe experiences of consciousness similar to the ones I have, and if it doesn't exist in others, well then it's pretty weird that they'd talk about having conscious experiences that sound really similar to my conscious experiences for some reason that is not "they are experiencing the same thing I am". If others were p-zombies, then sure, all of their prior utterances may have sounded like they were generated by them being conscious, but absent a deeper understanding of how exactly their p-zombification worked, I could not use that to generate useful predictions of what their future utterances about consciousness would be (because, as we've established, the p-zombies are not just reporting on their internal state, but instead doing something else which is not that).
Modeling others as experiencing the same consciousness as I do does in fact lead to better advance predictions of my observations. It doesn't do so in a very philosophically satisfying way if you want to talk about axioms and proofs, but pragmatically speaking "other people are also conscious like me" sure does seem like a useful mental model for generating predictions.
I can't prove it but assuming that other minds exist sure does seem to produce better advance predictions of my experiences. Which is the core of empiricism.
I agree that it was badly mishandled. I think it's valuable to tell EAs "people will try to get you to take a job where they say you'll be paid in experience/exposure, be mindful of that dynamic", but singling out a single organization to that degree makes it sound like it's a problem specific to that organization (which it is not; even within the EA space I personally know of another org with similar dynamics, and I'm not even very involved with the space).
I personally still wouldn't work for Nonlinear, but then I also would have noped out in the initial hash-out-the-contract phase.
I read the same doc you did, and like. I get that "Chloe" did in fact sign that contract, and that the written contract is what matters in the end. My point is not that Nonlinear did something illegal, but... did we both read the same transcript? Because that transcript reads to me like "come on, you should totally draw art for my product, I can only pay 20% of market rates but I can get you lots of exposure, and you can come to my house parties and meet all the cool people, this will be great for your career".
I don't know how much of it is that Kat's writing style pattern matches really strongly to a particular shitty and manipulative boss I very briefly worked for right after college. E.g. stuff like
As best as I can tell, she got into this cognitive loop of thinking we didn’t value her. Her mind kept looking for evidence that we thought she was “low value”, which you can always find if you’re looking for it. Her depressed mind did classic filtering of all positive information and focused on all of the negative things. She ignored all of my gratitude for her work. In fact, she interpreted it as me only appreciating her for her assistant work, confirming that I thought she was a “low value assistant”. (I did also thank her all the time for her ops work too, by the way. I’m just an extremely appreciative boss/person.)
just does not fill me with warm fuzzy feelings about someone's ability to entertain the hypothesis that their own behavior could possibly be a problem. Again, I am probably not terribly impartial here - I have no horse in this particular race, but I once had one in a similar race.
Concrete note on this:
accusations that they promised another, "Chloe", compensation around $75,000 and stiffed her on it in various ways turned into "She had a written contract to be paid $1000/monthly with all expenses covered, which we estimated would add up to around $70,000."
The "all expenses" they're talking about are work-related travel expenses. I, too, would be extremely mad if an employer promised me $75k / year in compensation, $10k of which would be cash-based, and then tried to say that costs incurred by me doing my job were considered to be my "compensation".
Honestly most of what I take away from this is that nobody involved seems to have much of an idea of how things are done in professional settings, and also there seems to be an attitude of "the precautions that normal businesses take are costly and unnecessary since we are all smart people who want to help the world". Which, if that's the way they want to swing, then fine, but I think it is worth setting those expectations upfront. And I'd strongly recommend that anyone fresh out of college who has never had a normal job avoid working for an EA organization like Nonlinear until they've seen how things work in purely transactional jobs.
Also, it seems to me, based on how much interest there was in that infighting, that effective altruists are starved for drama.
I think that's a very pragmatic and reasonable position, at least in the abstract. You're in great intellectual company, holding that set of beliefs. Just look at all of the sayings that agree!
- You can't reason someone out of something they didn't reason themselves into
- It is difficult to get a man to understand something, when his salary depends on his not understanding it
- We don't see things as they are, we see them as we are
- It's easier to fool people than to convince them that they have been fooled
And yet! Some people do change their mind in response to evidence. It's not everyone, it might not even be most people, but it is a thing that happens. Clearly something is going on there.
We are in the culture war thread, so let's wage some culture war. Very early in this thread, you made the argument
What does replacing the Big Bang with God lose out on? Both of them share the attribute of serving as a termination point for materialistic explanations. Anything posited past that point is unfalsifiable by definition, unless something pretty significant changes in terms of our understanding of physics.
What does replacing the Big Bang with God lose out on? I think the answer is "the entire idea that you can have a comprehensible, gears-level model of how the universe works". A "gears-level" model should at least look like
- If the model were falsified, there should be specific changes to what future experiences you anticipate (or at the very least, you should lose confidence in some specific predictions you had before)
- Take the components of your model. If you make some large, arbitrary change to one of those parts, the model should now make completely different (and probably wrong, and maybe inconsistent) predictions.
- If you forgot a piece of your model, could you rederive it based on the other pieces of the model?
So I think the standard model of physics mostly satisfies the above. Working through:
- If general relativity were falsified, we'd expect that e.g. the predictions it makes about the precession of Mercury would be inaccurate enough that we would notice. Let's take the cosmological constant Λ in the Einstein Field Equation (reproduced after this list), which represents the energy density of vacuum, and means that on large enough scales, there is a repulsive force that overpowers the attractive force of gravity.
- If we were to, for example, flip the sign, we would expect the universe to be expanding at a decreasing rate rather than an increasing rate (affecting e.g. how redshifted/blueshifted distant standard candles were).
- If you forget one physics equation, but remember all the others, it's pretty easy to rederive the missing one. Source: I have done that on exams when I forgot an equation.
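For concreteness, the equation being discussed in the first bullet is the Einstein field equation with the cosmological constant term (standard textbook form, reproduced here for reference; it wasn't written out in the original exchange):

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$

A positive Λ acts as a repulsive contribution at large scales (accelerating expansion); flipping its sign, as in the second bullet, would make that contribution attractive instead, changing the predicted redshift-distance relation for standard candles.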
Side note: the Big Bang does not really occupy a God-shaped space in the materialist ontology. I can see where there would be a temptation to view it that way - the Big Bang was the earliest observable event in our universe, and therefore can be viewed as the cause of everything else, just like God - but the Big Bang is a prediction (retrodiction?) that is generated by using the standard model to make sense of our observations (e.g. the redshifting of standard candles, the cosmic microwave background). The question isn't "what if we replace the Big Bang with God", but rather "what if we replace the entire materialist edifice with God".
In any case, let's apply the above tests to the "God" hypothesis.
- What would it even mean for the hypothesis "we exist because an omnipotent, omniscient, omnibenevolent God willed it" to be falsified? What differences would you expect to observe (even in principle)?
- Let's say we flip around the "omniscient" part of the above - God is now omnipotent and omnibenevolent. What changes?
- Oops, you forgot something about God. Can you rederive it based on what you already know?
My point here isn't really "religion bad" so much as "you genuinely do lose something valuable if you try to use God as an explanation".
I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.
The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term. I'm not aware of that term having any particular meaning in any philosophical tradition I know about, but I also don't know much about philosophy.
He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.
That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you? Is this another one of those less-than-universal human experiences similar to how some people just don't have mental imagery?
The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction
I don't think I would classify probabilistic approaches like that as "deterministic models of reason".
But yeah I'm starting to lean towards "there's literally some bit of mental machinery for intentionally believing something that some people have".
I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?
Edit: also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology. Is this a valid translation, or is there some situation where someone might believe a true theory that would nevertheless lead them to make less accurate predictions about their future observations?
You'll be happy to know that I did in fact throw some fairly substantial amounts of money at jefftk and friends for their wastewater surveillance / sequencing / anomaly detection project. Significantly prompted by us having this conversation.