This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
What makes you think there are huge unrealized wins in unknown algorithmic improvements? In other domains, e.g. compression, we've gotten close to the information-theoretic limits we know about (e.g. Shannon limits for signal processing), so I'd guess that the sustained high effort applied to AI has gotten us close to limits we haven't quite modeled yet, leaving not much room for even a superintelligence to foom. IOW, we humans aren't half bad at algorithmic cleverness, and maybe AIs don't end up beating us by enough to matter even if they're arbitrarily smart.
I don't think that this is the case, just that it is possible. I called it the nightmare version because it would enable a very steep take-off, while designing new hardware would likely introduce some delay: just as even the world's most brilliant engineer in 2025 could not quickly build a car if he had to work with stone-age tech, an ASI might require some human-scale time (e.g. weeks) to develop new computational hardware.
You mention compression, which is kind of a funny case. The fundamental compressibility of a finite sequence is its Kolmogorov complexity, which is uncomputable: in general, it is impossible to tell whether a sequence was generated by a pseudo-random number generator (and thus could be encoded by just specifying that generator) or whether it is truly random (in which case your compression is whatever Shannon gives you). Still, at least for compression, we have a good understanding of what is and is not possible.
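To make that concrete, here is a toy sketch of my own (not from the thread; it assumes Python 3.9+ for `random.randbytes`): a megabyte of PRNG output looks incompressible to a general-purpose compressor, yet the handful of lines that regenerate it are a complete description, so its Kolmogorov complexity is tiny.

```python
# Pseudo-random bytes defeat a general-purpose compressor, even though
# their true description (generator + seed + length) is a few lines long.
import random
import zlib

random.seed(42)                      # the entire "description": seed 42, 1 MB of randbytes
data = random.randbytes(1_000_000)   # 1 MB that passes for random noise

compressed = zlib.compress(data, 9)
print(len(data), len(compressed))    # zlib barely shrinks it (it may even grow)
# Yet these few lines regenerate `data` exactly, so its Kolmogorov
# complexity is on the order of this snippet's length.
```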
Also, intuition only gets us so far with algorithmic complexity. Take matrix multiplication. Done naively, it is O(n^3), and few people would suspect that one can do better than that. However, the best algorithm known today is roughly O(n^2.37), and practical algorithms like Strassen's easily achieve a scaling of O(n^2.81). "I cannot find an algorithm faster than O(f(n)), hence O(f(n)) is the complexity class of the problem" is not sound reasoning. In fact, the best known lower bound for matrix multiplication is Omega(n^2).
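As an aside, that O(n^2.81) figure is Strassen's algorithm, which is simple enough to sketch. The following is my own illustration (plain Python with NumPy, sizes restricted to powers of two for brevity), not anything from the thread:

```python
# Strassen's algorithm: 7 recursive sub-multiplications instead of the
# naive 8, giving O(n^log2(7)) ~ O(n^2.81) for n x n matrices.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                  # naive product on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
assert np.allclose(strassen(A, B), A @ B)   # same result as the naive product
```

Nothing about the naive algorithm hints that the 7-multiplication trick exists, which is exactly why "I can't do better" is weak evidence about a problem's true complexity class.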
For AI, things are much worse. Sure, parts of it are giant inscrutable matrices, where we have some lower bounds for linear-algebra algorithms, but what we would want is a theorem giving an upper bound on the intelligence achievable at a given net size. While I only read Zvi occasionally, my understanding is that we do not have a formal definition of intelligence, never mind one which is practically computable. What we have are crude benchmarks like IQ tests or their AI variants (which are obviously ill-suited to appearing in formal theorems), and they at most give us lower bounds on what is possible.
Kolmogorov complexity is, IMO, a "cute" definition, but it's not constructive the way the Shannon limit is, and it's a bit fuzzy on the subject of existing domain knowledge. For lossy compression, there's a tradeoff over how much loss is acceptable, and you can expect numerically great performance compressing, say, a Hallmark movie, because all Hallmark movies are pretty similar: with enough domain knowledge you can cobble together a "passable" reconstruction from a two-sentence plot summary. And you can highly compress a given Shakespeare play if your decompression algorithm has the entire text of the Bard to pull from: "Hamlet" is enough!
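The "entire text of the Bard" trick has a small-scale analogue in zlib's preset dictionaries. Here is a toy sketch of my own (the corpus and message are stand-ins, and exact byte counts will vary): when the decompressor already holds related text, the transmitted message shrinks dramatically.

```python
# Shared knowledge as compression: a preset dictionary lets the
# compressor emit back-references into text the receiver already has.
import zlib

corpus = b"To be, or not to be, that is the question: " * 50   # stand-in for "the complete Bard"
message = b"To be, or not to be, that is the question: whether 'tis nobler in the mind"

plain = zlib.compress(message, 9)                # blank-slate compression

co = zlib.compressobj(level=9, zdict=corpus)     # both sides share `corpus`
with_dict = co.compress(message) + co.flush()

do = zlib.decompressobj(zdict=corpus)
assert do.decompress(with_dict) == message       # round-trips exactly

print(len(message), len(plain), len(with_dict))  # the dictionary version is far shorter
```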