astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353

Man, lately Tyler just seems so off the mark. He keeps talking about AGI in the most head-in-the-sand, dunky and dismissive way without even making clear claims or any concrete arguments. I think AGI doom is one of the most mind-killing topics out there amongst otherwise high decouplers.

Fair enough, but I give him partial credit for asking for a withdrawal, though I don't know any details.

Yes, I agree. I am just saying that looking dangerous is also usually necessary to get good deals.

I didn't follow this closely, but didn't he order withdrawal from Afghanistan and Syria, only for the generals to slow-play it?

Maybe we're talking about different things. I'm thinking of Obama talking about red lines in Syria, then not doing anything about it. Or Putin hinting about using nukes over foreign involvement in Ukraine and then not. I agree one can also go too far and be easily baited.

Trump doesn't like war in and of itself, but he hates being seen as "weak" far, far, FAR more. Avoiding situations that "make us look weak" is the amorphous basis of his entire foreign policy.

Aren't these almost the same thing? The way you avoid wars is by being seen as strong and, crucially, as willing to fight if necessary. Countries that appear weak, or appear strong but unwilling to fight, are the ones that end up being attacked.

Whoops, thanks for correcting me and for providing a link.

Oh, I stand corrected.

In the early 1900s, we were at 40% of escape velocity.

I think this is misleading. The number we care about is more like "life expectancy conditioned on being a healthy adult", which I don't think was changing much back then, nor has changed much recently. But it probably is still going up a little if you control for demographics, which, in my limited understanding, have been shifting in the West in ways that mask (small) improvements in longevity.

I agree with most of this, but I feel like some male shit-talking and joking, at least in a group setting, also has an element of faux-combat. Constantly challenging each other is a form of play-fighting, but it's also a test - someone who regularly can't come up with a comeback or simply shuts down will eventually lose status and become more likely to be simply dominated by the others.

If someone wanted Trump to win, wouldn't they want to manipulate the market in the opposite direction, to make it look like they're in danger of losing? I'd be less likely to vote for my preferred candidate if I thought they had it in the bag.

I don't think any human society comes close to crossing that line when it comes to alcohol

Not even remote native communities with super high alcoholism rates, and where lots of kids get fetal alcohol syndrome?

alcohol has proven its ability to coexist alongside the development of advanced human societies over the course of several millennia.

Yes, but not Native American societies. And there's plenty of evidence that Eurasians have been under serious selection pressure to help them deal with alcohol (and alcoholism is still a huge problem in Europe + North America).

Who is to say that someone or something won't decide 'oh these little people who bought shares pre-singularity didn't really contribute'

I agree, but I think it's scarier than that. I don't think anyone will have to decide or coordinate to get rid of the non-contributors, they'll just be out-competed. It won't be one capitalist owning all the resources, it'll be the entirety of productive civilization, just like it is today.

This is why I like Canada's constitutional monarchy. It lets people personify the country and worship its head of state without necessarily endorsing the current policy leader.

I still don't understand what you think the biggest problem is - the current manageable ones, or future, potentially unmanageable ones?

I agree it's not a foregone conclusion, I guess I'm hoping you'll either give an argument why you think it's unlikely, even though tens of billions and lots of top talent are being poured into it, or actually consider the hypothetical.

I can't think of a single task that AI could replace.

Even if it worked??

Fair. Maybe a better analogy would be: You and your whole family are in an old folks home, and the country and all jobs are now being run by immigrants from another, very different, country. You fear that one day (or gradually through taxation) they'll take away your stuff and if they do, there'll clearly be nothing you can do about it.

It's not clear to me if you think there are plausible unmanageable, extinction-level risks on the horizon.

The revealed preferences of Yuddites is to get paid by the establishment to make sure the tech doesn't rock the boat and respects the right moral fads.

I feel like this is unfair. The hardcore Yuddites are not on the Trust & Safety teams at big LLM companies. However, I agree that there are tons of "AI safety" people who've chosen lucrative corporate jobs whose output feeds into the political-correctness machine. But at least they get to know what's going on that way and have at least potentially minor influence. The alternative is... to be a full-time protester with few resources, little clout, or up-to-date knowledge?

The main LLM developers don't share methods or model weights. But they claim that if they didn't make enough money to train the best models, no one would care what they say.

That's a fair point. Here's work along the lines that you're requesting: https://arxiv.org/abs/2306.06924

The important thing is for our civilization to have an incentive to keep us around. Once we're superfluous, we'll be in a very precarious position in the long run.

Is being stuck in an old folks' home utopian?

I have considered it

I'm only going to evaluate the implications of ... products they actually have

It seems like you have not, in fact, considered the possibility of models improving. Is this the meme where some people literally can't evaluate hypotheticals? Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?

This is a reasonable argument, but there's a big difference between having robots that can do some things for us (like digging ditches) while humans can still do other things better, versus having everything done better by machines. In the current world, you get growth by investing in both humans and machines. In the latter world, you get the most growth by putting all your resources into machines and the factories that make them.

How is someone supposed to warn you about a danger while there's still time to avert it? "There's no danger yet, and focusing on future dangers is bad messaging."