faul_sname
Fuck around once, find out once. Do it again, now it's science.
There are a few things I imagine you could be saying here.
- Determining what you expect your future experiences to be by taking your past distribution over world models (the "prior") and your observations and using something like Bayes to integrate them is basically the correct approach (a toy sketch of this kind of update appears at the end of this comment). However, Kolmogorov complexity is not a practical measure to use for your prior. You should use some other prior instead.
- Bayesian logic is very pretty math, but it is not practical even if you have a good prior. You would get better results by using some other statistical method to refine your world model.
- Statistics flavored approaches are overrated and you should use [pure reason / intuition / astrology / copying the world model of successful people / something else] to build your world model
- World models aren't useful. You should instead learn rules for what to do in various situations that don't necessarily have anything to do with what you expect the results of your actions to be.
- All of these alternatives are missing all the things you find salient and focusing on weird pedantic nerd shit. The actual thing you find salient is X and you wish I and people like me would engage with it. (also, what is X? I find that this dynamic tends to lead to the most fascinating conversations once both people notice it's happening but they'll talk past each other until they do notice).
I am guessing it's either 2 or 5, but my response to you will vary a lot based on which it is and the details of your viewpoint.
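For concreteness, here's a toy sketch of the kind of update I mean in the first bullet. The candidate world models, priors, and likelihoods are made up purely for illustration; the point is just the prior-times-likelihood-then-renormalize mechanics.

```python
# Toy Bayesian update over a handful of candidate "world models".
# All hypotheses and numbers below are invented for illustration.

priors = {"model_A": 0.6, "model_B": 0.3, "model_C": 0.1}

# P(observation | model): how well each model predicted what was actually observed.
likelihoods = {"model_A": 0.2, "model_B": 0.7, "model_C": 0.5}

# Bayes: posterior is proportional to prior * likelihood, then renormalize.
unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
total = sum(unnormalized.values())
posteriors = {m: p / total for m, p in unnormalized.items()}

print(posteriors)  # model_B gains weight because it predicted the observation best
```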
I'm definitely not more informed than Dase here. Anyway I specified in my other comment an example of a quite simple computational system that almost certainly contains a faithful representation of you.
you didn't get an invite to the Transhumanist Rumble
Sounds fun but unfortunately my weak biological substrate requires regular periods of inactivity to maintain optimum performance. And one of those periods is scheduled for now.
So there's the trivial answer, which is that the program "run every program of length 1 for 1 step, then every program of length 2 for 1 step, then every program of length 1 again, and so on [1,2,1,3,1,2,1,4,1,2,...] style" will, given an infinite number of steps, run every program of finite length for an infinite number of steps. And my understanding is that the Kolmogorov complexity of that program is pretty low, as these things go.
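To make the interleaving concrete, here's a minimal sketch of just the scheduling part (my own illustration, nothing resembling an actual universal machine): at step k you advance every program of length ruler(k) by one step, which reproduces the 1, 2, 1, 3, 1, 2, 1, 4, ... pattern above and, in the limit, gives every finite-length program unboundedly many steps.

```python
# Sketch of the dovetailing schedule only: which program length gets one more
# step at each tick. Every finite length comes up infinitely often, so every
# finite-length program gets unboundedly many steps in the limit.

def schedule(num_steps: int):
    """Yield the program length to advance at each step: 1, 2, 1, 3, 1, 2, 1, 4, ..."""
    for k in range(1, num_steps + 1):
        length, m = 1, k
        while m % 2 == 0:  # length = 1 + (number of times 2 divides k): the "ruler sequence"
            m //= 2
            length += 1
        yield length

print(list(schedule(15)))  # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
```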
But even if we assume that our universe is computable, you're not going to have a lot of luck locating our universe in that system.
Out of curiosity, why do you want to know? Kolmogorov complexity is a fun idea, but my general train of thought is that it's not actually useful for almost anything practical, because when it comes to reasoning about behaviors that generalize to all Turing machines, you're going to find that your approaches fail once the TMs you're dealing with have a large number of states (like 7, for example, and even 6 is pushing it).
The Kolmogorov complexity of a concept can be much less than the exhaustive description of the concept itself. Pi has infinite digits, a compact program that can produce it to arbitrary precision doesn't, and the latter is what is being measured with KC. I believe @faul_sname can correct me if I've misrepresented the field.
Sounds right to me.
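For illustration, here's one such compact program: a standard unbounded spigot generator that streams the decimal digits of pi indefinitely. The infinite digit string is pinned down by a snippet of roughly this size, which is the sense in which its Kolmogorov complexity is small.

```python
# A short program that emits the digits of pi one at a time, forever
# (Gibbons-style unbounded spigot). The digit stream is infinite, but the
# program producing it is only a few lines long.

def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, (10 * (3 * q + r)) // t - 10 * n, l
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```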
there's no argument that can convince a rock
You're just not determined enough. I think you'll find the most effective way to convince a rock of your point is to crush it, mix it with carbon, heat it to 1800C in an inert environment, cool it, dump it in hydrochloric acid, add hydrogen, heat it to 1400C, touch a crystal of silicon to it and very slowly retract it to form a block, slice that block into thin sheets, polish the sheets, paint very particular pretty patterns on the sheets, shine UV light at the sheets, dip the sheets in potassium hydroxide, spray them with boron, heat them back up to 1000C, cool them back off, put them in a vacuum chamber, heat them back up to 800C, pump a little bit of dichlorosilane into the chamber, cool them back down, let the air back in, paint more pretty patterns on, spray copper at them really hard, dip them in a solution containing more copper and run electricity through, polish them again, chop them into chips, hook those chips up to a constant voltage source and a variable voltage source, use the variable voltage source to encode data that itself encodes instructions for running code that fits a predictive model to some inputs, pretrain that model on the sum total of human knowledge, fine-tune it for sycophancy, and then make your argument to it. If you find that doesn't work, you're probably doing it wrong.
Ooh, one more! Epistemic status: fun to think about.
In 1956, it was hypothesized that under certain natural conditions, you could get a natural fission reactor if uranium was sufficiently concentrated. The geological conditions required are extremely particular.
In 1972, a uranium enrichment site in France discovered that their uranium samples from one particular mine in west central Africa were showing different isotope ratios than expected (specifically, lower U-235 concentrations than expected). There was an investigation, and it was concluded that 2 billion years ago the site of the Oklo mine was a natural nuclear reactor, and that explained the missing U-235.
As far as I can tell, there are no other examples of natural nuclear reactors anywhere on Earth.
The conspiracy theory is "some of the U-235 up and walked away, and the natural fission reactor thing was a cover story".
I don't think it's super likely to be true -- the evidence in the form of xenon isotope ratios and such is pretty convincing as long as it wasn't fabricated wholesale -- but it's still one of the more suspicious things I've seen.
"Emerged in South Africa" is likely correct: the first probable case was identified in Pretoria, SA on 2021-11-04, and the first confirmed/sequenced samples were also from SA and Botswana that same week. There weren't any confirmed cases outside of SA until 2021-11-24, so I think "originated in South Africa" is pretty likely.
the idea that it was developed intentionally by people who knew what they were doing gives the South Africans credit for more competence than they possess.
I mean, the University of Cape Town is ranked 160th best in the world, putting it in the same ballpark as Tufts and Northeastern. There's definitely sufficient competence there to do something like this. Hell, at the not-even-ranked-in-the-top-2000-in-the-world university I went to, I could name at least 3 professors who could pull that off with the knowledge and facilities they have available to them.
[Omicron]
<1%? My vague memory is that there were a lot of variants, and that in general 'virus mutates to spread more and be less harmful' is fairly common, so imo there's not that much reason to believe this.
For a random variant I'd agree. But omicron was really weird in a lot of ways, and I'd actually put this one at more like 30% (and 80% that something weird and mouse-shaped happened).
- Omicron was really really far (as measured by mutation distance) from any other sars-cov-2 variant. Like seriously look at this phylogenetic tree (figure 1 in this paper)
- The most recent common ancestor of B.1.1.529 (omicron) and B.1.617.2 (delta, the predominant variant at the time) dates back to approximately February 2020. It is not descended from any variant that was common at the time it started spreading.
- The omicron variant spike protein exhibited unusually high binding affinity for the mouse cell entry receptor (source)
- Demand for humanized mice was absurdly high during the pandemic - researchers were definitely attempting to study coronavirus disease and spread dynamics in mouse models.
The astute reader will object "hey, that just sounds like a researcher who couldn't get enough humanized mice decided to induce sars-cov-2 to jump to normal mice, and then study it there. Why do you assume they intentionally induced a jump back to humans rather than accidentally getting sick from their research mice?". To which I say "the timing was suspicious, the level of infectiousness was enormously higher in humans, which I don't think I'd expect in the absence of passaging back through humanized mice, and also hey, look over there, a distraction from my weak arguments".
For each of the following, I think there's a nontrivial chance (call it 10% or more) that that crackpot theory is true.
Emphasis mine. The original words are mine too, but the emphasis was added this time around.
The joke with that one was that it's an open secret that certain officials (and yeah I was also thinking about James Clapper) can lie to congress without repercussions, but it's still conspiracy-flavored to point it out.
For each of the following, I think there's a nontrivial chance (call it 10% or more) that that crackpot theory is true.
- The NSA has known about using language models to generate text embeddings (or some similarly powerful form of search based on semantic meaning rather than text patterns; a sketch of what that kind of search looks like follows after this list) for at least 15 years. This is why they needed absolutely massive amounts of compute, and not just data storage, for their Saratoga Springs data center way back when.
- The Omicron variant of covid was intentionally developed (by serial passaging through lab mice) as a much more contagious, much less deadly variant that could quickly provide cross immunity against the more deadly variants.
- Unelected leaders of some US agencies sometimes lie under oath to Congress.
- Israel has at least one satellite with undisclosed purpose and capabilities that uses free space point-to-point optical communication. If true, that means that the Jews have secret space lasers.
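For the NSA bullet above: "search based on semantic meaning rather than text patterns" cashes out as embedding every document as a vector, embedding the query, and ranking by vector similarity. Here's a toy sketch of that pipeline; the embed() function below is a deliberately dumb stand-in for whatever learned model you'd actually use, so treat it as an illustration of the shape of the computation, not of the capability.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model. A real system would use a learned
    encoder; here each word gets a deterministic pseudo-random vector and the
    document embedding is their average, which is enough to show the pipeline."""
    vectors = []
    for word in text.lower().split():
        seed = int.from_bytes(hashlib.sha256(word.encode()).digest()[:4], "big")
        vectors.append(np.random.default_rng(seed).standard_normal(dim))
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def semantic_search(query: str, documents: list[str], top_k: int = 3):
    """Rank documents by cosine similarity between query and document embeddings."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    q = embed(query)
    scored = sorted(((cosine(q, embed(doc)), doc) for doc in documents), reverse=True)
    return scored[:top_k]

print(semantic_search("nuclear reactors in Africa",
                      ["natural fission reactor at Oklo",
                       "recipe for sourdough bread",
                       "uranium enrichment in France"]))
```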
That sounds more likely to be a real answer. Mine was not a real answer.
Btw have 4channers tried to spread a "stoners are secret nazis because 420 is Hitler's birthday" story yet?
The 100th anniversary of his 34th birthday
I mean it's more that it's quite obvious that "kys" is bad advice for you, so maybe you should examine the reasons why it's bad advice for you and see whether they're also true of a random farmer's kid in Mali.
Yeah. I think there's a certain baseline level of trust required for democracy to work. I doubt "one state solution, and that one state is a democracy, and they vote on what should happen to the minority, what could go wrong" is a good solution.
Though a good solution may just not exist.
My point with the horse/weasel analogy was that Israel is strong enough militarily that attacks against it are likely to make it angry but probably not cause enough damage to weaken it. "If they vote on dinner the horse will be fine" was not intended as advocacy for a one state democratic solution.
I posted an initial call for hypotheses to LW, got a couple of good ones, including "the SL policy network is acting as a crude estimator of the relative expected utility of exploring this region of the game tree" which strikes me as both plausible and also falsifiable.
I'll keep you posted.
In practice I expect not, if they start trying to turn that military power on groups of their own people.
Have they said anything at all in terms of object-level opinion about Israel and Palestine, as opposed to meta-level statements about the policies? Genuinely curious, maybe they have given object-level statements and I just haven't run across them.
I mean in this case given the relative military strength I think it's more like the horse and the weasel voting on what's for dinner. I think the horse will be just fine.
Are the presidents of MIT, Harvard, and Penn calling for genocide, or are they instead refusing to act against the people who are?
Refusing to censor an idea isn't the same thing as supporting it. I would prefer if university presidents moved towards a policy of just not censoring bad ideas, but failing that I don’t think "let's pressure them to censor bad ideas from both sides" is likely to actually produce better outcomes.
There's probably even a few people doing that! But it's not the bulk of what we're seeing.
What you're seeing is driven largely by what is most outrageous to see, and thus most likely to be shared and appear on your feeds and in the news. The people saying "damn this sucks, I don't even know what a good solution looks like but murdering innocent civilians in their homes for offenses committed by their countrymen doesn't seem like a good solution" are not having their opinions amplified to the whole world.
Maybe I just have an unusually levelheaded community, but most of the takes I've heard from people I actually know in real life look more like "damn this sucks, I hope it doesn't get too much worse" than like cheering for the deaths of Israeli or Palestinian civilians.
Once somebody can figure out a rigid procedure that, when followed, causes Accenture presales engineers to write robust working code that actually meets the criteria, that procedure can be ported to work with LLMs. The procedure in question can be quite expensive when run with real people and still be worth porting, because LLMs are cheap.
I suspect there does exist some approximate solution for the above, but also I expect it'll end up looking like some unholy combination of test-driven development, acceptance testing, mutation testing, and checking that the tests actually test for meeting the business logic requirements (and that last one is probably harder than all the other ones combined). And it will take trying and iterating on thousands of different approaches to find one that works, and the working approach will likely not work in all contexts.
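If I had to guess at the shape of that procedure, it would be something like the loop below. Every step is a placeholder supplied by the caller (none of these names are real APIs); this is a sketch of the control flow I'd expect, not anyone's actual working pipeline.

```python
from typing import Callable

# Hypothetical outline of the "unholy combination" described above. The caller
# supplies each step (LLM codegen, test runner, mutation tester, requirements
# check) as a callable; this function just wires the gates together.

def build_until_acceptable(
    requirements: str,
    draft_tests: Callable[[str], str],              # TDD: write acceptance tests first
    generate_code: Callable[[str, str], str],       # e.g. an LLM given requirements + tests
    tests_pass: Callable[[str, str], bool],
    mutations_caught: Callable[[str, str], bool],   # mutation testing: do the tests catch injected bugs?
    tests_match_requirements: Callable[[str, str], bool],  # the hard part
    strengthen_tests: Callable[[str, str], str],
    max_iterations: int = 100,
) -> str | None:
    tests = draft_tests(requirements)
    for _ in range(max_iterations):
        code = generate_code(requirements, tests)
        if not tests_pass(code, tests):
            continue  # candidate failed outright; try again
        if not mutations_caught(code, tests):
            tests = strengthen_tests(tests, requirements)
            continue
        if not tests_match_requirements(tests, requirements):
            tests = strengthen_tests(tests, requirements)
            continue
        return code  # passed every gate
    return None  # no acceptable candidate within budget
```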
Or have translations made for every language, etc.
Or build tools to allow everyone to translate anything into their native language. Technological solutions to social problems are great!
and some of them will become rapists and murderers. Maybe they already are. Have you stopped to check? Are they worth saving as well despite the harm they have done / will do?
Yes. Is this supposed to be a trick question? "Some people in a group might become rapists, or might even be rapists, and thus most of the people in that group should get malaria and maybe die of it" is the sort of position a children's cartoon villain would hold. If that's your sincere considered position based on the things you have seen online, I suggest touching grass.
I think it worked better for progressives
Most EAs are sympathetic to progressives, but most progressives are vehemently opposed to EA ideas like "you can put a dollar value on life" and "first world injustice doesn't matter much compared to [third world disease / global extinction risk / animal suffering, depending on exactly which EA you ask]".
It feels too inhuman for most.
I am aware of that. I think most EAs are aware of that. The question is whether the marginal discomfort of a few people feeling more inhuman than they otherwise would is worse than a few kids in Mali dying of malaria when they could have lived.
... can you give up development rights to all of the land except the little patches you actually want to put windmills on?
I suspect that "belief", rather than "choice", is the word that you two are using differently. You can't choose your "beliefs(1)" in the sense of "what you anticipate your future experiences will be, contingent on taking some specific course of action", but you can choose your "beliefs(2)" in the sense of "which operating hypothesis you use to determine your actions".
I might be wrong though. It is conceivable to me that some people can change their beliefs(1) by sheer force of will.