The linked post seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like
- selection effects at the level of which arguments are discovered and distributed,
- community epistemic problems, and
- increased uncertainty due to chains of reasoning with imperfect concepts (a toy numerical illustration follows below)

as real and important.
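To make that last point concrete, here is a minimal sketch (Python, with made-up step probabilities rather than anyone's actual estimates) of how a conclusion that rests on several individually-plausible steps ends up much less certain than any single step:

```python
# Illustrative only: the step probabilities are placeholders, not real estimates.
steps = {
    "AGI gets built this century": 0.8,
    "it is agentic in the relevant sense": 0.8,
    "alignment is not solved in time": 0.7,
    "misalignment leads to existential catastrophe": 0.7,
}

p_conclusion = 1.0
for claim, p in steps.items():
    p_conclusion *= p
    print(f"{claim}: {p:.2f}  (running product: {p_conclusion:.2f})")

# Four steps at 0.7-0.8 each already land near 0.31, well below any single step,
# and that is before asking whether concepts like "agentic" or "aligned"
# carve reality the way the argument assumes.
```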
I'd be curious to get perspectives from the people of the Motte, e.g., telling me that I'm the crazy one & so on.
Regards,
Nuño.
Thanks for your long and thoughtful comment, /u/magic9mushroom. I appreciate it, and you bring up some good points.
That said, I'm kind of miffed that you don't quite mention why you believe the things that you believe. The obvious answer is that you're already covering a wide range of points, so going into the whys would take too much time. But I've also observed that pattern in other discussions (e.g. here and here), and it sort of makes me think that we could do better.
I mean, in the most direct sense I literally was at 90% of the maximum comment length, although I could have split it. I'm also just kind of bad at monologuing (not restricted to this sort of thing; I nearly failed year 12 due to essay requirements).
I'll explain a few things that jump out to me as non-obvious assertions. If you want me to go deeper on something else, please point to it.
This kind of depends on who's involved, but:
It's a lot easier to catch up than to forge ahead
There aren't all that many nukes at the moment (though a return to Cold-War levels isn't impossible)
A lot of nukes - particularly from whoever starts lobbing nukes first - will be spent on blowing up enemy nukes in their silos, or will get shot down. In the specific case of a near-future US-China nuclear war, if the US went alpha-strike I imagine it'd get somewhere in the low double digits of nukes exploding over its cities in retaliation, possibly even single digits (a back-of-the-envelope sketch of that arithmetic follows this list). That's manageable, if serious; Japan came back from worse (sure, there were only two nukes and they were small, but the conventional bombing was apocalyptic).
In a lot of AI scenarios you care more about the most-advanced country than about the advancement level of the world as a whole, and there are a few Western nations unlikely to get nuked (New Zealand, for instance).
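For what it's worth, here is the shape of the arithmetic behind the "low double digits" guess above. Every number in it is an assumption chosen for illustration, not an estimate of any real arsenal or real defence performance:

```python
# Back-of-the-envelope sketch; all inputs are illustrative assumptions.
arsenal = 400               # assumed deliverable warheads before the exchange
destroyed_on_ground = 0.80  # assumed fraction lost to a counterforce first strike
fail_or_intercepted = 0.20  # assumed fraction that fail in flight or get shot down
aimed_at_cities = 0.50      # assumed fraction of the remainder aimed at cities
                            # rather than military and strategic targets

surviving = arsenal * (1 - destroyed_on_ground)
arriving = surviving * (1 - fail_or_intercepted)
on_cities = arriving * aimed_at_cities
print(f"survive first strike: {surviving:.0f}, arrive: {arriving:.0f}, on cities: {on_cities:.0f}")
# With these made-up inputs, roughly 30 warheads reach cities; push the
# counterforce or interception assumptions harder and it drops toward single digits.
```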
Still, the soft-error issue is kind of a wild card and could put AI on hold for longer than that; I don't really know how big a problem it would be. So I was overstepping a bit here.
The big dangers of a nuke, in rough order of occurrence, are:
You are inside the building-collapse radius. Generally speaking, ur ded. Hence my note that compounds inside this radius are not useful.
You're caught unprepared outside said radius, but within the much-larger "light damage" radius, and get wounded by broken glass/thermal burns. Having other people around who are watching out for you reduces the likelihood that you'll be caught unprepared, and - if they have medical supplies - massively increases the likelihood that you'll be treated (since they'll prioritise you, whereas general relief won't).
Potential of fallout poisoning the water supply. This doesn't last very long (a couple of weeks at most; see the decay sketch after this list), but it doesn't need to last long for you to face a Morton's Fork: drink possibly-contaminated water or go thirsty. Stored water helps here (I keep 20L as insurance).
Potential of supply-chain failure in cities - especially in case of a lot of cities getting nuked at once - in which case all hell breaks loose as people start looting and fighting each other for food. Being in a cult compound with food is about as safe as you can possibly be here; you're a hard target. I haven't bothered stockpiling food, because in the scenarios where I'd need it I wouldn't be able to keep it.
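The fallout-decay sketch promised above: the standard t^-1.2 approximation (the "7-10 rule") for the external dose rate from fresh fallout. It's a rough rule of thumb, not a claim about any particular water supply:

```python
# Rule-of-thumb only: dose rate from fresh fallout decays roughly as t^-1.2.
def relative_dose_rate(hours_after_burst):
    """Dose rate relative to the rate measured 1 hour after the burst."""
    return hours_after_burst ** -1.2

for hours in (1, 7, 49, 24 * 14):  # 1 hour, 7 hours, ~2 days, 2 weeks
    print(f"{hours:>4} h: {relative_dose_rate(hours) * 100:6.2f}% of the 1-hour rate")
# Roughly 10% after 7 hours, ~1% after 2 days, ~0.1% after 2 weeks, which is why
# the acute fallout window is short even though you still want stored water for it.
```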
Neural nets are useful because they work without you needing to know how they work. The problem is, that means you don't know how they work. You get something that does what you want in the situations it's trained for - but you don't know why; it's a black box. It might want to do that thing for a lot of different reasons, and for alignment purposes you care a lot about what those reasons are. If you were capable of understanding what something that both solves the problem and is aligned looks like, you'd write it directly rather than summoning it up via deep learning.
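A toy illustration of that underdetermination point, making no assumption about any particular system: two small random-feature "networks" fit the same training data essentially perfectly, yet typically disagree once you leave the training range, and nothing in the training evaluation tells you which one you actually got.

```python
# Two models, identical training performance, different behaviour off-distribution.
import numpy as np

def train_model(seed, x_train, y_train, width=64):
    """Freeze random ReLU features; fit only the linear readout by least squares."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=width)
    b = rng.normal(size=width)

    def features(x):
        return np.maximum(0.0, np.outer(x, w) + b)

    readout, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)
    return lambda x: features(x) @ readout

x_train = np.linspace(-1, 1, 20)
y_train = np.sin(3 * x_train)          # the behaviour we wanted, on-distribution

model_a = train_model(0, x_train, y_train)
model_b = train_model(1, x_train, y_train)

x_test = np.array([3.0, 6.0])          # inputs far outside the training range
print("max train error A:", np.max(np.abs(model_a(x_train) - y_train)))
print("max train error B:", np.max(np.abs(model_b(x_train) - y_train)))
print("off-distribution A:", model_a(x_test))
print("off-distribution B:", model_b(x_test))
# Both pass the "training test"; the training data alone cannot tell you
# which of the two extrapolations you have built.
```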
Humans are (semi-)alignable neural nets, but we're not blank-slate random neural nets; we're heavily pre-wired at the genetic level and then learn on top of that. Morality is partially hardwired into humans (see for instance The Righteous Mind by Jonathan Haidt). And evolution could select for that without us gaming the process, because you can't trick or destroy evolution; it cares about results, not words, and that's a basic consequence of the way the universe works. You can get aligned neural nets, in theory, if you have some way of seeing what they do when released and then judging them on it. But when dealing with near-human or superhuman AI (the dangerous sort) you can't do that: putting them in sim will likely result in them spotting the simulation and faking it, while giving them a real chance to kill all humans has the slight issue that if they take it, you're dead and don't get to continue the process.
The others I'm aware of are natural events, new physics, and geoengineering gone wrong.

Natural events: the resilience of humanity and the fossil record suggest this is something like 1/500,000,000 years, and I think humanity would survive a Chicxulub via preppers and artificially-lit hydroponics (although obviously most humans wouldn't); a Siberian Traps would be dicier.

New physics: we've a ways to go before reaching cosmic-ray energies, particle colliders are really expensive so a substantial number of people would have to agree it's a good idea, and there's the possibility of just doing the experiments in space as a precaution, which becomes more feasible as time goes on.

Geoengineering gone wrong: GCR from this is easy enough, but X is not. A solar shade, for instance, won't cut it, because we would notice that someone had blocked out the Sun and blow up the shade with missiles - leaving potentially billions of people starving/freezing/(if termination shock) boiling to death, but no X. The two things I can think of that would do it and are theoretically possible are literally redirecting a >Chicxulub body into the Earth (I don't know how big it'd need to be, but Ceres would definitely do it) and deliberately triggering a runaway greenhouse with fluorinated gases; both would require a very large amount of investment, and I don't think it's likely anyone could do them without getting noticed and stopped.
Pandemics are a huge GCR but not much of an X-risk; if the human population decreases enough, the chain of transmission will break sooner or later. I guess with some sort of engineered mind-controlling plague where infected people actively and intelligently try to infect others it'd be easy enough, but I kind of doubt that's possible. The big X-risk from biotech IMO is from fully-synthetic life that can outcompete the biosphere entire (e.g. an alga that uses PNA + non-RNA ribosomes, isn't profitably digestible due to incompatible biochemistry, and can survive at lower CO2 concentration than normal plants - this would bloom across huge chunks of the ocean due to lack of phosphate requirement and pull down the biosphere's carbon into useless gunk on the seafloor, starving everyone and everything).
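A minimal branching-process sketch of the "chain of transmission breaks" mechanism (the R values are placeholders, not estimates for any real pathogen): once each case infects fewer than one new case on average, e.g. because the surviving population is sparse and dispersed, every outbreak eventually goes extinct.

```python
# Crude branching-process model of an outbreak; R_eff values are placeholders.
import numpy as np

def outbreak_persists(r_eff, rng, initial_cases=20, max_generations=300, cap=100_000):
    cases = initial_cases
    for _ in range(max_generations):
        if cases == 0:
            return False                              # chain of transmission broken
        if cases > cap:
            return True                               # treat as an established pandemic
        cases = rng.poisson(r_eff, size=cases).sum()  # new cases this generation
    return cases > 0

rng = np.random.default_rng(0)
for r in (1.5, 0.9, 0.7):
    persisted = np.mean([outbreak_persists(r, rng) for _ in range(100)])
    print(f"R_eff = {r}: outbreak persists in {persisted:.0%} of runs")
# Above 1, essentially every run takes off; at 0.9 or 0.7 every chain fizzles
# within a few dozen generations, which is why mass die-off itself tends to
# end the pandemic rather than finish everyone off.
```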
By "management" I mean literally everyone whose job is to oversee others, from line managers up to the CEO. You hook up surveillance in every room and have the AGI order everyone around. The increase in productivity is because you don't have to pay any of these - currently-highly-paid - people, just maintain the surveillance and computers (also no loss from internal office squabbling/miscommunication, as you replaced it all with one "person"). This is a stupid idea in the long-term for obvious reasons, but it's short-term selfishly advantageous for the shareholders.