astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353
the longhouse ought to cover the entirety of the light cone

Close, but I think the argument is "if your longhouse doesn't cover the lightcone, you can expect your colonies to spawn their own universe-eating longhouses and come knocking again once they're much bigger than you." Then the options become: our shitty longhouse forever, or a more competitive, alien longhouse / colonizers coming back to take all our stuff.

As far as I can tell, our only hope is that at some scales the universe is defense-favored. In which case, yes, fine, let a thousand flowers bloom.

My p(doom) went up again when I realized how hard it is for governments to remain aligned with their citizens. As a simple example, they can't seem to raise a finger against mass immigration no matter how unpopular it is, because it has an economic justification. See also: WW1. Replacing humans throughout the economy and military is going to be irresistible. There will probably be another, equally retarded, culture war about how this second great replacement is obviously never going to happen, then not happening, then good that it happened.

TL;DR: Even if we control AIs well, humans are going to be gradually stripped of effective power once we can no longer contribute economically or militarily. Then it's a matter of time before we can't afford or effectively advocate for our continued use of resources that could simulate millions of minds.

Yes, consequentialism and rule-following are special cases of each other. You got me. The usual meanings of the words refer to situations in which they differ, i.e. any rule other than "maximize utility".

Sounds like you still agree with us doomers? We don't expect human greed / competitive pressures to go away any time soon, which is why we're worried about exactly the kinds of money-winning scenarios you propose.

I agree it's kind of a matter of degree. But I also think we already have so much power-seeking around that any non-power-seeking AI will quickly be turned to that end.

I agree, but I also still see most people steadfastly refuse to extrapolate from things that are already happening. For a while, fanciful doom scenarios were all we had as an alternative to "end of history, everything will be fine" from even otherwise serious people.

I'm really not trying to play gotcha games. I guess we are playing definition games, but I'd say you have to choose which you prioritize: the well-being of everyone, or following rules. If you follow rules only for the sake of the well-being of everyone, then I'd call you a consequentialist. I'm not trying to be clever or counter-intuitive.

I agree that Yud leans heavily on some unrealistic premises, but overall I think he gets big points for being one of the few people at the time who were really excited / worried about the eventual power of AI, and for laying out explicit cases or scenarios rather than just handwaving.

I agree that Bay Area rationalists can be a little messianic and culty, though I think it's about par for the course for young people away from home. At least you can talk about it with them.

I also think that most x-risks come simply from being outcompeted. A big thing that Yud got right is that it doesn't matter if the AI is universalist or selfish or whatever, it will still eventually try to gather power, since power-gathering is one of the only stable equilibria. You might be right that we won't have to worry about deontological AI, but people will be incentivized to build AIs that can effectively power-seek (ostensibly) on their behalf.

I agree that even adaptation can be successfully adapted to by an adversary. My claim is merely that adaptive agents (e.g. consequentialists) will eventually outcompete agents that operate according to fixed rules (deontologists). In your example, the adversaries are adaptive. If they followed fixed rules, they would be poor adversaries.

we are in it for the well-being of everyone, too

If you justify your deontology in terms of its consequences, doesn't that make you a consequentialist who thinks that certain rules happen to be the optimal policy?

Okay, well I include some degree of adaptation in my definition of "very intelligent". In fact, adaptation is the main advantage that consequentialists have over deontologists.

Hmmm. I think you're on to something. I think we need to distinguish between utilitarianism done well, and done poorly. I agree it's easy to do poorly - I think that's part of why we love rules so much - they're easier to follow than trying to come up with a good strategy from scratch for every situation. I guess my claim is that, in the presence of enough adversarial intelligence or optimization, following even pretty good rules won't protect you, because the adversary will find the edge cases they can exploit. At that point you have to adjust your rules, and I claim the only effective way to do that in a way that avoids exploitation is very intelligent consequentialism.
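
To make that concrete, here's a toy sketch (entirely my own construction; the threshold numbers and the "harmful" function are made up, nothing rigorous): an optimizing adversary always probes the exact edge of a fixed rule and wins every round, while a defender who patches the rule after each exploit closes the gap.

```python
# Toy model: the rule allows inputs x <= threshold, but anything with
# x > 0.7 is actually harmful. The adversary always submits the most
# harmful input the rule still permits, i.e. x == threshold.

def harmful(x):
    return x > 0.7  # ground truth the rule is meant to track

def run(defender_adapts, rounds=100):
    threshold = 0.9  # the rule starts out a bit too permissive
    exploits = 0
    for _ in range(rounds):
        x = threshold  # adversary probes the exact edge of the rule
        if x <= threshold and harmful(x):
            exploits += 1
            if defender_adapts:
                threshold -= 0.01  # patch the rule after each exploit
    return exploits

print("fixed rule:   ", run(False))  # exploited every round
print("adaptive rule:", run(True))   # exploits stop once the rule catches up
```

The real version of this is the adversary hunting for weird edge cases rather than sliding along a scalar threshold, but the asymmetry is the same: the rule is static, the exploit search isn't.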

Okay. I agree it seems hard, but I think there's something like a 15% chance that we can coordinate to save some value.

I personally find it hard to care viscerally, at least compared to caring about whether I could be blamed for something. The only way I can reliably make myself care emotionally is to worry about something happening to my kids or grandkids, which fortunately is more than enough caring to spur me to action.

I don't think you'd normally go from "We might not be able to coordinate to stop disaster" to "Therefore we should give up and party". Maybe there's something else going on? I personally think this means we should try to coordinate to stop disaster.

Can you give some examples of these crazy views and goals?

I think we can make a more concrete claim, which is that deontologists are doomed in the long run due to competition and natural selection. Their rules will consistently be used against them. Today it's asylum seekers, tomorrow it will be ultra-charming machines that will claim moral primacy over whoever has resources.

I agree with all this. I guess I don't expect it to be in anyone else's interest to run even an uploaded and upgraded version of myself. Perhaps there will be some sort of affirmative action or mind-scraping such that some small vestiges of me end up having any influence. So I would consider your upgrading plan to be an instance of "crushed by competition", though it may be slightly better than nothing.

I have a pretty much identical outlook to you, including the kids thing. The biggest question on my mind is which kinds of futures we could plausibly aim for long term in which humans are not crushed by either competition or the state.

This is why I'm afraid of AI. Once most humans are economically and militarily obsolete, and can't go on strike, we will leak power one way or another, and it will eventually end up in the hands of whoever controls the value-producing robots and chip factories.

Like Mewis said above though, you might encourage extortion if you pay off anyone who takes up arms against you.

I think that's why they required the defectors to bring a very expensive MiG with them.

To be fair, when I started learning math + stats, I found the use of Greek letters intimidating and confusing, especially rarely-used ones like ξ. Of course there's nothing wrong with them besides their unfamiliarity, but I try to start with English letters in my own math writing and only reach for Greek letters when I'm running out of those.
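
For example (a toy of my own, nothing deep), the same expectation reads very differently to a newcomer depending on the letters:

```latex
% Identical content, two notations:
\mathbb{E}[\xi] = \int \xi \, p(\xi) \, d\xi   % rarely-used Greek letter
\mathbb{E}[x]   = \int x  \, p(x)  \, dx       % plain English letter
```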

Hmmm, are you just saying that you need to "choose your pain", and also accept that trying to do things will sometimes turn out badly? Or something more specific to romance?

I read the article, and was surprised to find I agreed with most of what she said. Every one of her opinions is about as manosphere/redpilled/motte-ish as you could imagine being printed in the NYT in 2023.

The new book being discussed is about how modern feminism has not just failed men, but effectively forbidden productive discussion of their problems. Bravo!