astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353

I agree that setting the precedent of meddling with family formation is a bad one. I'm just saying that I don't understand what your advice looks like in practice. If my local municipality proposes subsidizing building a daycare, how do you vote?

I agree that individual returns to societal-level advocacy are usually small, but again I don't understand where you draw the line between "advocacy for strong families" and "attempting to optimize policies for the societal production of kids".

> Having more kids always results in having more kids. Raises are to get market value for my labor, not because I have kids.

If having kids is so central, then why spend time trying to get market value for your labor, instead of spending that time having more kids?

> nonintervention might result in ethnic replacement or demographic collapse, but these are common enough over recorded history that I don't have any personal problem with it.

Something bad being common doesn't make it OK - it makes it scarier! And both of these things increase the chance that your descendants won't be able to have as many kids as they otherwise would.

I don't understand the distinction between working on having your own kids versus advocating for policies that'd make it easier for you and yours to have more kids. Surely you'd advocate for a raise to help pay for your own kids? How about for lower taxes at a municipal level? How about per-kid payments at a federal level?

Man, lately Tyler just seems so off the mark. He keeps talking about AGI in the most head-in-the-sand, dunky and dismissive way without even making clear claims or any concrete arguments. I think AGI doom is one of the most mind-killing topics out there amongst otherwise high decouplers.

Fair enough, but I give him partial credit for asking for a withdrawal, though I don't know any details.

Yes, I agree. I am just saying that looking dangerous is also usually necessary to get good deals.

I didn't follow this closely, but didn't he order withdrawal from Afghanistan and Syria, only for the generals to slow-play it?

Maybe we're talking about different things. I'm thinking of Obama talking about red lines in Syria, then not doing anything about it. Or Putin hinting about using nukes over foreign involvement in Ukraine and then not. I agree one can also go too far and be easily baited.

> Trump doesn't like war in and of itself, but he hates being seen as "weak" far, far, FAR more. Avoiding situations that "make us look weak" is the amorphous basis of his entire foreign policy.

Aren't these almost the same thing? The way you avoid wars is by being seen as strong and, crucially, as willing to fight if necessary. Countries that appear weak, or appear strong but unwilling to fight, are the ones that end up being attacked.

Whoops, thanks for correcting me and for providing a link.

Oh, I stand corrected.

> In the early 1900s, we were at 40% of escape velocity.

I think this is misleading. The number I care about is more like "life expectancy conditioned on being a healthy adult", which I don't think was changing much back then, nor has changed much recently. It's probably still going up a little if you control for demographics, which, in my limited understanding, have been shifting in the West in ways that mask (small) improvements in longevity.

I agree with most of this, but I feel like some male shit-talking and joking, at least in a group setting, also has an element of faux-combat. Constantly challenging each other is a form of play-fighting, but it's also a test - someone who regularly can't come up with a comeback or simply shuts down will eventually lose status and become more likely to be simply dominated by the others.

If someone wanted Trump to win, wouldn't they want to manipulate the market in the opposite direction, to make it look like they're in danger of losing? I'd be less likely to vote for my preferred candidate if I thought they had it in the bag.

> I don't think any human society comes close to crossing that line when it comes to alcohol

Not even remote native communities with super high alcoholism rates, and where lots of kids get fetal alcohol syndrome?

> alcohol has proven its ability to coexist alongside the development of advanced human societies over the course of several millennia.

Yes, but not Native American societies. And there's plenty of evidence that Eurasians have been under serious selection pressure to help them deal with alcohol (and alcoholism is still a huge problem in Europe + North America).

> Who is to say that someone or something won't decide 'oh these little people who bought shares pre-singularity didn't really contribute'

I agree, but I think it's scarier than that. I don't think anyone will have to decide or coordinate to get rid of the non-contributors, they'll just be out-competed. It won't be one capitalist owning all the resources, it'll be the entirety of productive civilization, just like it is today.

This is why I like Canada's constitutional monarchy. It lets people personify the country and worship its head of state without necessarily endorsing the current policy leader.

I still don't understand what you think the biggest problem is - the current manageable ones, or future, potentially unmanageable ones?

I agree it's not a foregone conclusion; I guess I'm hoping you'll either give an argument for why you think it's unlikely, even though tens of billions and lots of top talent are being poured into it, or actually consider the hypothetical.

> I can't think of a single task that AI could replace.

Even if it worked??

Fair. Maybe a better analogy would be: you and your whole family are in an old folks' home, and the country and all jobs are now being run by immigrants from another, very different, country. You fear that one day (or gradually, through taxation) they'll take away your stuff, and if they do, there'll clearly be nothing you can do about it.

It's not clear to me if you think there are plausible unmanageable, extinction-level risks on the horizon.

> The revealed preferences of Yuddites is to get paid by the establishment to make sure the tech doesn't rock the boat and respects the right moral fads.

I feel like this is unfair. The hardcore Yuddites are not on the Trust & Safety teams at big LLM companies. However, I agree that there are tons of "AI safety" people who've chosen lucrative corporate jobs whose output feeds into the political-correctness machine. But at least that way they get to know what's going on and have at least potentially minor influence. The alternative is... being a full-time protester with little in the way of resources, clout, or up-to-date knowledge?

> The main LLM developers don't share methods or model weights. But they claim that if they didn't make enough money to train the best models, no one would care what they say.

That's a fair point. Here's work along the lines that you're requesting: https://arxiv.org/abs/2306.06924

The important thing is for our civilization to have an incentive to keep us around. Once we're superfluous, we'll be in a very precarious position in the long run.

Is being stuck in an old folks' home utopian?