NunoSempere
User ID: 1101
Why are we not better, harder, faster, stronger
Now here: https://nunosempere.com/blog/2023/07/19/better-harder-faster-stronger/ (on the motte here: https://www.themotte.org/post/593/why-are-we-not-harder-better). I'm curious to get your perspective.
Breezewiki is good. And in general, OP might want to look into https://github.com/libredirect/browser_extension
Updating in the face of anthropic effects is possible
Now here: https://nunosempere.com/blog/2023/05/11/updating-under-anthropic-effects/. Pasting the content to save you a link:
Status: Simple point worth writing up clearly.
Motivating example
You are a dinosaur astronomer about to encounter a sequence of big and small meteorites. If you see a big meteorite, you and your whole kin die. So far you have seen n small meteorites. What is your best guess as to the probability that you will next see a big meteorite?
In this example, there is an anthropic effect going on. Your attempt to estimate the frequency of big meteorites is made difficult by the fact that when you see a big meteorite, you immediately die. Or, in other words, no matter what the frequency of big meteorites is, conditional on you still being alive, you'd expect to only have seen small meteorites so far. For instance, if you had reason to believe that around 90% of meteorites are big, you'd still expect to only have seen small meteorites so far.
This makes it difficult to update in the face of historical observations.
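The survivorship point above can be checked with a small simulation (a toy sketch, not from the original post; the function name and parameters are mine): observers die on the first big meteorite, so every surviving observer has the same history of all-small meteorites, whatever the true frequency of big ones is.

```python
import random

def surviving_histories(p_big, n_observations, trials=100_000, seed=0):
    """Simulate observers who die on their first big meteorite.

    Each meteorite is big with probability p_big. Returns the fraction
    of simulated observers who survive n_observations meteorites,
    i.e. who have seen only small ones. Conditional on survival,
    every observer's data looks identical (all small), so the naive
    frequency estimate (0 big out of n) is the same whether p_big
    is 0.01 or 0.9.
    """
    rng = random.Random(seed)
    survivors = 0
    for _ in range(trials):
        # the observer survives only if every meteorite is small
        if all(rng.random() > p_big for _ in range(n_observations)):
            survivors += 1
    return survivors / trials
```

The fraction of survivors shrinks as p_big grows, but the survivors' observed history never varies, which is exactly why that history alone is so uninformative.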
Updating after observing latent variables
Now you go to outer space, and you observe the mechanism that is causing these meteorites. You see that they are produced by Dinosaur Extinction Simulation Society Inc., that the manual mentions that it will next produce a big asteroid and hurl it at you, and that there is a big crowd gathered to see a meteorite hit your Earth. Then your probability of getting hit rises, regardless of the historical frequency of small meteorites and the lack of any big ones.
Or conversely, you observe that most meteorites come from some cloud of debris in space that is made of small asteroids, and through observation of other solar systems you conclude that large meteorites almost never happen. And for good measure you build a giant space laser to intercept anything that comes your way. Then your probability of getting hit with a large meteorite lowers, regardless of the anthropic effects.
The core point is that in the presence of anthropic effects, you can still reason and receive evidence about the latent variables and mechanistic factors which affect those anthropic effects.
What latent variables might look like in practice
Here are some examples of "latent variables" in the real world:
- Institutional competence
- The degree of epistemic competence and virtue which people who warn of existential risk display
- The degree of plausibility of the various steps towards existential risk
- The robustness of the preventative measures in place
- etc.
In conclusion
In conclusion, you can still update in the face of anthropic effects by observing latent variables and mechanistic effects. As a result, it's not the case that you can't have forecasting questions or bets that are informative about existential risk: you can make those questions and bets about the latent variables and the early steps in the mechanistic chain. I think that this point is both in-hindsight-obvious and pretty key to thinking clearly about anthropic effects.
would pay for some of them if not for my desire to be anonymous
Happy to be paid in monero. You can reach out to me at nuno.semperelh@protonmail.com with a burner account.
My consulting rates are now here: https://nunosempere.com/consulting/. I'll put up a list of bounties in a while; if you are particularly interested, I have an RSS endpoint here: https://nunosempere.com/blog/index.rss (or you could sign up by email, if you are a wimp: https://nunosempere.com/.subscribe/)
I'm curious which ones you (or other motte people) think would be most interesting for you in particular, rather than "useful in general".
in defense of Marx.
I was not expecting this.
On pruning science, or, the razor of Bayes: one of many «what if Lesswrong weren't a LARP» thoughts — the need for a software framework, now probably LLM-powered, to excise known untruths and misstatements of fact (and, in a well-weighed manner, all contributions of their authors) from the graph of priors for next-iteration null hypotheses and other assumptions.
Also interesting
Rationalist reversals: the notion of «Infohazard» as the most salient known example of an infohazard; anthropic shadow as an anti-Bayesian cognitive bias; and reasoning yourself into a cult.
Curious about this.
Also interested.
It does sound interesting to me.
Here are some drafts I have, though not particularly CW.
- Acetylcysteine as the first treatment for a cold/mucus in Spain but not in Britain
- Ze Dreadful German In Ze Writings of Curtis Yarvin. I would respect the guy if he was a gentleman and a scholar, actually knew German and used it to better capture the Zeitgeist and express his Weltanschauung, but instead we get a Blitzkrieg of stilted phrases which annoy me.
- Comparison of The Driver (1978), Driver (2011), Baby Driver (2017). Same plot, different decades.
- Base rates of success of dating docs.
- Tetlock forecasting approach vs subjective Bayesianism
- My ideal prediction market playbook
- Optimize hard or GTFO
- A retelling of El Mio Cid, a Spanish epic poem where a recurring theme is that the hero would be a good and loyal knight if only he had a good king as lord.
- A lot of shit on OpenPhilanthropy, FTX and EA.
- Utilitarianism for Democrats
- Utilitarianism for Republicans
- Why are we not better, harder, faster, stronger
- Updating in the face of anthropic effects is possible
- Betting and consent
- How to host an autarkic/uncensorable site.
- Tetlock vs subjective Bayesianism
- Something on the limits of Bayesianism
- I want to nerd out a bit on infrabayesianism / what one should do if one expects that one's hypotheses may not be able to represent future events.
- Bounties, things I would pay for
- My consulting rates
- Criticism as a demand-side problem
- My preferred deviations from common English
- Some observations on the speed of qualia
- People's choices determine a pseudo-ordering over people's desirability
This is more than I would have thought, typing this out.
I've been posting a stream of similar ideas on my blog (https://nunosempere.com/blog/), with an eye to those that I think could be more valuable. But if this community is particularly interested in any of these, I'll probably be happy to re-prioritize.
Kudos.
Thanks for your long and thoughtful comment, /u/magic9mushroom. I appreciate it, and you bring up some good points.
That said, I'm kind of miffed that you don't quite mention why you believe the things that you believe. The obvious answer as to why is that you cover a lot of points across a wide range of topics, so going into the whys would take too much time. But at the same time, I've also observed that pattern in other discussions (e.g. here and here), and it sort of makes me think that we could do better.
I don't necessarily disagree. In particular, I think that from the considerations I mention, we can conclude that the specifics of how the x-risk would develop are still up in the air, and that this is somewhat valuable info.
Thanks, I appreciate this list!
Yeah. To reply to the first part, my answer to that is to realize that knowledge is valuable insofar as it changes decisions, and to try to generate knowledge that changes decisions that are important. YMMV.
Log in
Heads up that I couldn't log in with my normal username and password.
virtually nobody has ever done this before
A similar proposal I've heard of is recursive prediction markets: e.g., you hold a prediction market on the probability that another prediction market would assign, when that second market is asked what a researcher spending a lot of time on the topic would conclude. I did some early work on this here: https://www.lesswrong.com/posts/cLtdcxu9E4noRSons/part-1-amplifying-generalist-research-via-forecasting-models and here: https://www.lesswrong.com/posts/FeE9nR7RPZrLtsYzD/part-2-amplifying-generalist-research-via-forecasting, and in general there is some work on this under the name "amplification".
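The recursive setup can be sketched as a toy model (all function names, the noise model, and the parameters here are hypothetical illustrations, not from the linked posts): the expensive inner process is stood in for by a noisy function of the available evidence, and the outer market aggregates cheap, noisy predictions of that inner output.

```python
import random
import statistics

def deep_evaluation(evidence, rng, noise=0.05):
    """Hypothetical stand-in for the expensive inner process, e.g. the
    probability a researcher (or a deeper market) would assign after
    spending a lot of time on the question. Modeled here as the
    evidence plus Gaussian noise, clipped to [0, 1]."""
    return min(1.0, max(0.0, evidence + rng.gauss(0, noise)))

def outer_market(evidence, rng, n_forecasters=50):
    """Outer market: each forecaster submits a prediction of what the
    deep evaluation would output, modeled as an independent noisy
    sample of it. The aggregate approximates the inner process
    without actually running it for every question."""
    predictions = [deep_evaluation(evidence, rng)
                   for _ in range(n_forecasters)]
    return statistics.mean(predictions)
```

The point of the sketch is only that the outer aggregate concentrates around the inner process's answer, so the cheap market can substitute for the expensive one on most questions, resolving the inner process only occasionally to keep the outer one honest.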
This could be solved by offering bets. In particular, Insight Prediction has a bunch of liquid markets: https://insightprediction.com/c/5/russia-ukraine
Neat piece, thanks for writing it.
which means few have any idea
...which means that questions were selected for being uncertain
I have yet to see anyone who can do it well
maybe you're not hanging out in the right places
How would one go about using this?
A. There is a heap of inertia. B. Enthusiastic people with a grand plan are working in fields which already have inertia. C. Therefore, enthusiastic people who have a grand plan will be bogged down in that previously existing inertia.
I mean, sure. But then the answer would seem to be to not work inside fields which already have huge amounts of negative inertia: to try to explore new fields, or in fact to try to create a greenfield site. To give a small example, the Motte does happen to be its own effort, and thus seems less bogged down. Or: many open source projects were started pretty much from scratch.
Any thoughts on why people don't avoid fields with huge amounts of inertia? Otherwise the inertia hypothesis doesn't sound that explanatory to me.