This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I think the reason people assume absolute dominance (either of the most powerful ASI, or of the humans in charge of it if control can be solved/maintained) is that once you get to superintelligence, it's theorized you also get recursive self-improvement.
Right now, for mundane automation of human tasks like image or text generation, it doesn't matter if one model is 3% smarter than another. In the ASI foom scenario, an ASI 0.1% smarter than another immediately builds an effectively infinite advantage, because it rapidly and incrementally improves itself ever faster and more efficiently than the ASI that started just a little bit less intelligent. Compute / electricity complicate this, but there are various scenarios around that anyway.
1.001^100 is approximately 1.1, 1.001^1,000 is approximately 2.7, and 1.001^10,000 is approximately 22,000, for reference - I suppose a lot depends on how quickly a self-improving AI can shorten the cycle time required for self-improvement.
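A minimal Python sketch of that compounding arithmetic (all parameters invented for illustration; the cycle-shortening rate especially is a pure guess):

```python
# Toy illustration of the compounding arithmetic above. All parameters
# are invented; the point is the shape of the curve, not the numbers.

def capability_after(cycles: int, gain_per_cycle: float = 0.001) -> float:
    """Capability multiplier after `cycles` rounds of 0.1% self-improvement."""
    return (1 + gain_per_cycle) ** cycles

for n in (100, 1_000, 10_000):
    print(f"1.001^{n:>6,} = {capability_after(n):,.1f}")
# 1.001^   100 = 1.1
# 1.001^ 1,000 = 2.7
# 1.001^10,000 = 21,916.7

# If each improvement cycle also shortens the next one (here by 0.05%,
# a pure guess), the same 10,000 cycles take far less wall-clock time:
elapsed, cycle_length = 0.0, 1.0
for _ in range(10_000):
    elapsed += cycle_length
    cycle_length *= 0.9995
print(f"10,000 shrinking cycles take {elapsed:,.0f} time units, not 10,000")
```

Notably, with any fixed shortening factor the geometric series converges, so infinitely many cycles complete in finite total time (at most 2,000 units here) - which is the "foom in finite time" intuition.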
I can definitely see how a superintelligence might be able to build an even better superintelligence, but it seems unlikely there wouldn't be some substantial diminishing returns at some point in the process. And if those kick in while it's still within the relative grasp of humans, then conquest by the AI would be a lot more difficult, just like how smart humans don't actually seem to rule the world over dumb humans. That it could replicate, and do so near-perfectly, helps its case (if it were 100 humans vs. 100 smarter robots, the robots probably win), but it would have a ways to go to get past the "just nuke the server location lol" phase of losing against dedicated humans.
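As a rough sketch of the difference that would make (Python; constants arbitrary, and "diminishing" here just means each gain shrinks in proportion to capability already attained):

```python
# Two toy self-improvement curves: fixed percentage gains vs. gains
# that shrink as capability grows. Constants are arbitrary.

def exponential(cycles: int, gain: float = 0.001) -> float:
    return (1 + gain) ** cycles

def diminishing(cycles: int, gain: float = 0.001) -> float:
    """Each cycle's gain shrinks in proportion to capability already attained."""
    c = 1.0
    for _ in range(cycles):
        c += gain / c  # the smarter you are, the harder the next step
    return c

for n in (1_000, 10_000):
    print(f"after {n:>6,} cycles: "
          f"exponential {exponential(n):>9,.1f} vs diminishing {diminishing(n):.1f}")
# after  1,000 cycles: exponential       2.7 vs diminishing 1.7
# after 10,000 cycles: exponential  21,916.7 vs diminishing 4.6
```

Under that toy assumption, capability grows only like the square root of the number of cycles instead of exponentially - the kind of regime where a modest human head start stays relevant.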
IIRC the correlation between IQ and net worth (roughly proportional to what fraction of the world you rule) is like 0.4; I'd agree that's not very impressive, but if there's a single more significant factor I don't know what it is.
I'd argue that there's a strong restriction-of-range effect here, though. Humans went through a genetic bottleneck 20k generations ago, and our genetic diversity is low enough that the intellectual difference between "average environment" and "the best we can do if cost is no object" is two standard deviations. If you consider intelligent hominids just a little further removed (Neanderthals, Denisovans, and there's fainter evidence of more), there was enough interbreeding to pick up a couple percent of their genes here and there, but it's not too much of an oversimplification to just say we wiped them out. And that's just a special case of animals as a whole. Wild mammals are down to about 4% of mammal biomass now, and that's mostly due to deliberate conservation efforts rather than any remaining conflict. A bit more than a third of mammal biomass is us, another several percent is our pets, and the majority is the animals we raise to eat.
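The restriction-of-range effect is easy to see in a toy simulation (Python with numpy; the 0.4 and the one-standard-deviation window are stand-ins, not estimates from real data):

```python
# Illustration (simulated, not real data): how restriction of range
# depresses an observed correlation. Draw pairs with true r = 0.4,
# then look only at a narrow slice of the population.
import numpy as np

rng = np.random.default_rng(0)
n, true_r = 100_000, 0.4

iq = rng.standard_normal(n)
outcome = true_r * iq + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

print(f"full population:  r = {np.corrcoef(iq, outcome)[0, 1]:.2f}")

# Restrict to IQs within one standard deviation of the mean: roughly
# the "low diversity" situation described above.
mask = np.abs(iq) < 1.0
print(f"restricted range: r = {np.corrcoef(iq[mask], outcome[mask])[0, 1]:.2f}")
```

The restricted correlation comes out around 0.23 here: the same underlying relationship, but a population squeezed into a narrow band of the predictor makes it look much weaker.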
It definitely 100% helps to be intelligent, but net worth isn't really that proportional to the fraction of the world you rule, especially when you exclude the cases where someone took power and then used that power to become wealthy. There have been plenty of idiots in powerful positions before (like most of Russian history), there are plenty of idiots in power today, and there will be plenty in the future.
Putting it down to just mammal biomass is misleading IMO; we make up 0.01% of total biomass and 2.5% of animal biomass. https://ourworldindata.org/life-on-earth
The majority of life on earth is plants, accounting for over 80% of biomass, and including bacteria that rises to 95%. These are not just dumb; they are (to the best of our knowledge) incapable of thought, and yet they not only dominate the planet but do so to such an extreme that we cannot live without them.
Even the very animals we eat as food are thriving from the perspective of reproduction and evolution. Until humans are gone (or stop eating them for some reason), their survival is all but guaranteed. Happiness might be something we as thinking beings strive for, but it isn't necessary from the biological perspective of spread, spread, spread. Our pets are much the same way; they benefit drastically from being under the wing of humanity.
An AI might not need humans in the same way, especially as we begin to improve on autonomous movement, but human conquest of Earth is not a great example to use IMO. The greatest and smartest intelligence ever will keep us around if we're seen as useful. It would probably keep us around even if we aren't, as long as we don't pose a threat.
I only see the exponential one. Where do you see recursion? Or why do you think it is needed?
Quite right, that's why I'd prefer many parties at near-parity. Better not to give the leader the opportunity to run away with the world.
If foom is super-rapid then it's hard to see how any scenario ends well. But if it's slower then coalitions should form.
Which scenarios would these be? Massive overcapacity buildup? Hoping that somewhere along the path of self-improvement the AI figures out a more efficient use of resources, one that doesn't require significant infrastructure modifications?
I always got the sense that LW was, and the AI alignment movement continues to be, stuck with the idealistic memeplex that '70s economics and classical AI had about the nature of intelligence and reasoning: uncertainty and resource limitations are surely just temporary hindrances that will disappear in the limit and can therefore be abstracted away, so you can get an adequate intuition for the dynamics of the "competing intelligences" game by looking at results like Aumann agreement.
It's not at all clear that this is the case. The load of modeling the actions of a 0.1% dumber competitor - or even just the consequences of the sort of mistakes a superintelligence could make in its superintelligent musings, to a degree of confidence sufficient to satisfy its superhuman risk aversion - may well outscale the advantage of being 0.1% more intelligent (whatever the linear measure of intelligence is), to the point where there is nothing like a stable equilibrium in which the intellectually rich get richer. Instead, the further ahead you are, the more you have to lose: your 0.1% advantage does not protect you against serendipity, or collusion, or the possibility that one of those narrowly behind you gets lucky and pulls ahead, or simply exploits the concavity of your value function to pull a "suicide bombing" on you, in the end forcing you to actually negotiate an artificial deadlock and uplift competitors that fall behind. Compare other examples of resource possession where, in a naive model, the resource seems reinvestable to obtain more of the same resource: why did the US not go FOOM among nations, or Bill Gates go FOOM among humans?
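For what it's worth, here is a deliberately crude toy of that "more to lose" dynamic (Python; every functional form and constant is invented, so this illustrates the argument rather than tests it):

```python
# Invented toy model: the leader has a 0.1% raw growth edge per cycle,
# but pays an overhead (modeling competitors, risk aversion, defending
# its position) proportional to how far ahead it already is.

def simulate(cycles: int = 5_000, base: float = 0.001,
             edge: float = 0.001, drag: float = 0.01) -> float:
    leader, pack = 1.0, 1.0
    for _ in range(cycles):
        lead = leader / pack - 1.0
        leader *= 1 + base * (1 + edge) - drag * lead
        pack *= 1 + base
    return leader / pack - 1.0  # final lead, as a fraction

print(f"lead after 5,000 cycles: {simulate():+.4%}")
# converges to roughly +0.01% instead of running away
```

With a drag term that grows with the lead, the lead settles at a small fixed equilibrium rather than compounding - the qualitative opposite of foom.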
Malthusianism reigned until '80s works like Simon's The Ultimate Resource revived cornucopian thought.
It is also (hilariously) possible that the most intelligent model may lose to much dumber, more streamlined models that are capable of cycling their OODA loops faster.
(Of course, it seems quite plausible that any gap in AI intelligence will be smaller than the known gaps in human intelligence, and smart humans get pwned by stupid humans regularly.)
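The OODA point reduces to a line of arithmetic (Python; the quality and cycle-time numbers are pulled out of thin air):

```python
# Toy of the OODA point above: per-decision quality vs. decisions per
# unit time. A small edge in quality can be swamped by a larger edge
# in cycle rate.

# (quality per decision, time per decision)
smart_but_slow = (1.10, 1.0)   # 10% better decisions
dumb_but_fast = (1.00, 0.7)    # 30% shorter cycle time

wall_clock = 100.0
for name, (quality, cycle_time) in [("smart/slow", smart_but_slow),
                                    ("dumb/fast", dumb_but_fast)]:
    decisions = wall_clock / cycle_time
    print(f"{name}: {decisions:.0f} decisions x {quality:.2f} quality = "
          f"{decisions * quality:.0f} effective actions")
# smart/slow: 100 decisions x 1.10 quality = 110 effective actions
# dumb/fast: 143 decisions x 1.00 quality = 143 effective actions
```

Under this (admittedly very linear) scoring, a 10% edge in decision quality loses to a 30% shorter decision cycle.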