
Friday Fun Thread for September 29, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that); this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


cause prioritization is entirely fine, deciding that anything is justified to reach goal X is not

Who exactly says "anything" is justified? That's a strawman if I've ever seen one.

Even Yudkowsky only claims that dropping bombs on data centers is justified, not that we should preemptively blow up the entire planet or return to the stone age.

Serious problems justify serious solutions, that's the whole point.

maybe I misunderstood

the issue of AI and future technology transforming everything does seem to eat every other ethical concern if you think enough about it

or extrapolated it too far

Fair enough, but I'd like to reframe your concerns with a hypothetical example:

Imagine we spot an asteroid on some deep-space scan that has a significant non-zero chance of hitting Earth within a decade and causing a mass extinction event. At anything above 1% odds, whatever intervention is necessary should take precedence over everything else.
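
A quick expected-value sketch of why even small odds dominate here (the numbers are made up purely to illustrate the threshold argument, not a real risk estimate):

```python
# Purely illustrative arithmetic for the asteroid hypothetical above.
# The population figure is a round assumption, not a forecast.
population = 8_000_000_000  # rough world population

for odds in (0.01, 0.05, 0.20):
    expected_deaths = odds * population
    print(f"impact odds {odds:.0%}: ~{expected_deaths:,.0f} expected deaths")

# Even at the 1% cutoff, the expected toll is ~80 million, which is the
# intuition behind letting the intervention take precedence.
```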

As for AI, plenty of people think the odds are much, much worse, and the timescales shorter.

My position is that some basic and minimal rules should be upheld, for several reasons.

  1. many ethical positions are actually coordination rules: a society with random murder, rape, and looting is simply less efficient than one that manages to avoid such destructive tendencies (and while you can claim that some external looting may be efficient, it has become less efficient over history, and for an asteroid impact we would want global coordination anyway) -- see the payoff sketch after this list

  2. if scenario X gives unlimited power to the powerful, they will happily invent a fake scenario X or exaggerate a real one, so we should limit incentives for that

  3. there are many ethical positions that I would not want to abandon, even if someone credibly claimed that abandoning them would have good consequences (no matter how much convincing sophistry is applied to argue that slavery and rape should be legal, I am going to oppose it anyway, even if superintelligent aliens arrived and announced that it should be done)

  4. scenario X may be based on a serious mistake and not actually apply
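
A minimal sketch of the coordination claim in point 1, with invented payoffs (nothing here is empirical): mutual restraint beats mutual looting for both sides, even though looting a cooperator pays best one-sidedly.

```python
# Invented payoffs for a symmetric two-player "cooperate or loot" game;
# higher is better for that player. Mutual looting (1, 1) leaves both
# worse off than mutual cooperation (3, 3), even though looting a
# cooperator pays best one-sidedly -- hence rules against it.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "loot"):      (0, 4),
    ("loot",      "cooperate"): (4, 0),
    ("loot",      "loot"):      (1, 1),
}

for (a, b), (pa, pb) in payoffs.items():
    print(f"{a:>9} vs {b:<9} -> payoffs {pa}, {pb}")
```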

For an asteroid impact: I would accept a 50% asteroid tax; I would not accept slavery or outlawing criticism of the government.

In general I would not accept "any intervention necessary", as it often results in counterproductive interventions or utterly unneeded evil. Though I have no big illusions about my potential influence, and I would likely be convinced to support stupid policies anyway; lockdowns initially seemed like a good idea to me (I have not yet examined whether it made sense to start them or whether it was stupid/evil/based on pure panic).

Note that we have had several historical cases of scenarios (2) and (4) actually happening.

I would not accept slavery or outlawing criticism of the government.

But that's because those are just generally bad and probably won't stop the asteroid, not because unilateralism is bad. So I don't see what's wrong with the original premise that 'AI seems to eat other ethical concerns on a large scale'.

The problem is that some people would sincerely believe (or lie) that stuff like that is necessary.

A policy being murderously stupid does not mean that it will not be enacted. Rejecting blatantly evil and unethical policies is far from foolproof, but it provides some coordination against really terrible ones.

(do I need to provide examples of tragically idiotic and evil programs enacted by governments?)

Okay. Some people will lie and make up foreign threats of aggression to justify wars and military buildups. Yet we still have a military. And that does mean that we often have thousands to millions of unnecessary deaths in unnecessary wars. But it still beats not having a military and then getting conquered by whoever feels like it. And during wars, you do have to suspend freedom of expression and freedom of movement to win, and do all sorts of underhanded things that are bad during peacetime. And that is often done in unnecessary ways, but it's still done.

This is still analogous to the asteroid situation. It's worth making sure the asteroid isn't something someone made up, or a distributed mistake. But sometimes the asteroid is real, and when it is, it's worth putting your all into not having it hit.

And AI is worse than the asteroid in this case, because the good outcomes aren't 'everything continues as normal' but 'AI everywhere and in everything, yet somehow good', and nobody's really worked that last part out yet.

To be clear, I didn't actually advocate anywhere for 'the government forcing everyone to work on AI'. I just said that it seems to eclipse most other ethical concerns. I simply don't see why that stops being true even if it makes coordinating harder. Asteroids also make coordinating harder, but as before, they still exist.

Okay. Some people will lie and make up foreign threats of aggression to justify wars and military buildups. Yet we still have a military.

I am not a pacifist; I do not consider having a military unethical. But if someone starts going "we totally must murder all X, and it is important, and ethics should be ignored" (I know, it almost never is so blatant), then I hope that I would not support that.

"eclipse most other ethical concerns" is already much better than "does seem to eat every other ethical concern".

But I would still be highly suspicious of such claims. "The goal justifies any methods" has repeatedly caused severe problems. And I am not convinced that ignoring most or all ethical concerns would actually help solve AI problems.

And during wars, you do have to suspend freedom of expression and freedom of movement to win

not always and not fully

It only seems to if you accept some pretty specific premises -- all of which seem fantastical to the population at large.

It's like saying 'the prospect of burning in Hell seems to eat other consequential concerns' -- it sure does! But only if you believe in Hell.

The population at large thought the telegraph, cars, oil, artillery, fighter jets, electricity, nuclear bombs, computers, and neural nets fantastical a century before they arrived. They still came, and clever people predicted them.

Along with the flying cars, interplanetary (manned) spaceships, and other things that clever people predicted -- I honestly think that the popularity of science fiction and of AI Doom scenarios in the rationalist community is not a coincidence. But 'would make a great science fiction story' is not a good predictor of 'is likely to happen IRL'.
