
Culture War Roundup for the week of November 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Hmm this sounds alarming, I wonder what the new capability was, it must be something very powerful and dange...

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Oh.

Truth is, 90% of all work is stupid. The difference between a committee of competent Harvard grads from every major (smart and competent, but no geniuses) and the kind of people who create true innovation is a couple of orders of magnitude.

AI might be around the corner, but super-human intelligence that can innovate (von Neumann, Terence Tao) is much, much farther away than we think.

Truth is, 90% of all work is stupid.

This strikes me as a very "I work in academia and so does everyone I know" type of take.

I work in academia and so does everyone I know

I have been fortunate to be surrounded by people much smarter than me, but academia-style snark was central to my not doing a PhD. Thanks for calling me out. Admittedly, my comment came off as snarky. I should rephrase it.

Some examples: Most middle manager jobs don't help in any realistic way. Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it. Most musicians/artists do not produce anything other than shallow imitations of their heroes. Most STEM-trained practitioners act more as highly-skilled monkeys who imitate what they are taught with perfect precision. Hell, even most R1 researchers spend most of their time doing 'derivative' research that is more about asking the most obvious question than creating something truly novel.

There is nothing wrong with that. I respect true expertise. It needs incredible attention to detail, encyclopedic knowledge of all the edge cases in your field, and a craftsman's precision. However, if a problem that needs just those 3 traits could be done badly by an AI model in 2010, then it was only a matter of time before AIs became good enough to take that job. These were already recognized to be solvable problems; the hardware and compute just hadn't caught up yet. These jobs are stupid in the same way that herding sheep is hard for a Collie, or climbing a mountain is stupid for a goat. They are some combination of the 3 traits I mentioned above, performed masterfully. But the skills needed can all be acquired and imitated.

That is the sense in which I say 90% of jobs are stupid. I.e., given enough time, most average humans can be trained to do 90% of average jobs. It takes a couple of orders of magnitude more time for some, but the average human is surprisingly capable given infinite time. In hindsight, stupid is the wrong word. It's just that, expressed like that, these jobs don't sound like intelligence, do they? Just a machine made of flesh and blood.

Here is where the 'infinite time' becomes relevant. AIs actually do have infinite time. So even if the model is stupid in 'human time', it can just run far more parallel processes, fail more, read more, and iterate more until it is as good as any top-10% expert in whatever it spends these cycles on.

Now, coming to what AIs struggle to do, let's call that novelty. I believe there are 3 kinds of true novelty: interpolative, extrapolative, and orthogonal. To avoid nerd-speak, here is how I see it:

  • Interpolative - Take knowledge from 2 different fields and apply them together to create something new.
  • Extrapolative - Push the boundaries within your field using tools that already exist in that field, but by asking exploratory what-if questions that no one has tried yet.
  • Orthogonal - True geniuses are here. I don't even know how this operates. How do you think of complex numbers? How do you even ask the 'what if light and matter are the same' kind of questions? By orthogonal, I mean that this line of inquiry is entirely beyond the plane of what any of today's tools might allow for.

The distinction is important.

To me, interpolative innovation is quite common and, honestly, AIs are already starting to do this sort of well. Mixing 2 different things together is something they do decently well. I would not be surprised if AIs create novel 'interpolative' work in the near future. It is literally pattern-matching 2 distinct things that look suspiciously similar. AIs becoming good at interpolative innovation will accelerate what humans were already doing. It will extend our rapid rise since the industrial revolution, but it won't be a civilizational change.

Models have yet to show any extrapolative innovation. But I suspect that the first promising signs are around the corner. Remember, once you can do it once, badly, the floodgates are open. If an AI can do it even 1 in a million times, all you need is for the hardware, compute, and money to catch up. It will get solved. The moment this happens is when I think AI-security people will hit the panic button. This moment will be the trigger to superhumanhood. It will likely eliminate all interesting jobs, which sucks. But, to me, it will still be recognizable as human.

I really hope AIs can't perform orthogonal innovation. To me, it is the clearest sign of sentience. Hell, I'd say it proves super-human sentience. Orthogonal innovation often means that life before and after it is fundamentally different for those affected by it. If we see so much as an inkling of this, it is over for humans. I don't mean it's over for 99% of us. I mean it is over. We will be a space-faring people within decades, and likely extinct a few decades after.

Thankfully, I think AI models will be stuck in interpolative land for quite a while.

(P.S.: I am very sleep-deprived and my ramblings accurately reflect my tiredness, sorry.)

Necroing this due to AAQC, but have you had any luck getting GPT-style AI to do good interpolation? I've tried, but it doesn't like bridging fields very much - you really have to push it and say 'how might this narrow sub-field be relevant to my question', otherwise you just get a standard google summary.

Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it.

"Stupid" is not the same thing as "useless". Sure, a plumber crawling around in the attic looking for a tiny leak in a pipe may be something 'stupid' that could be better off automated, but when you have water running down your walls, you'll be glad of the 'stupid' human doing the 'stupid' job fixing the problem.

Most middle manager jobs don't help in any realistic way

I think this is frequently overstated. A good manager really does coordinate and organise and make decisions about who is working on what, what the requirements are, and the technical workers and product suffer if that work is not done.

Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it.

No, getting robots to do manual labour is super difficult. Sensing and accurately moving in the physical world is still well out of reach for many applications.

Most STEM-trained practitioners act more as highly-skilled monkeys who imitate what they are taught with perfect precision

Well, not quite; we actually solve problems, usually in the form of "how can I meet the requirements in the most efficient way possible". Sure, we're not usually breaking new innovative ground, but it's actual work, and it's not stupid. I write embedded software for controlling motors. These motor controllers are used in industrial applications all over the world, from robots to dentist drills.

That is the sense in which I say 90% of jobs are stupid. I.e., given enough time, most average humans can be trained to do 90% of average jobs.

That's a stupid definition of stupid jobs.

Stupid because given enough time most average humans can be trained to recognise it, or stupid like this question?

I don’t think that’s a big gap. And many geniuses have also had very weird beliefs. So the typical Harvard grad can regurgitate a bunch of things smart people say and then make connections between different thoughts. Seems like OpenAI has accomplished that.

What's a genius? There seems to be something a bit autistic about them: they can ignore what they've learned and try new things. Some are legit insights and some completely stupid.

That sounds like an AI hallucination.

So then true innovation would just be a bunch of processing power testing whether the hallucination had some missed insight.

I've seen too many geniuses also do stupid stuff. Take Bill Gates: I believe many here have said he was the top of the top. But he's also done some dumb stuff, and many things where I think I had better ideas.

I don't think Musk is smarter than me at all. But he benefited from being in the right place at the right time to gain some skills, and maybe some different personality traits.

It is funny how many people believe themselves smarter than Musk yet he is probably the most accomplished person in human history in pursuits that clearly require a lot of intelligence.

We have Musk’s academic data and 60+ years of psychometric data on the centrality of IQ to human cognitive performance / ‘g’.

So yes, given we know his record it’s completely fine to say that you’re more intelligent than him. In the same way, an unknown singer could reasonably say she was a better singer than Taylor Swift even though, generally speaking, singing ability almost certainly does correlate with success as a musician.

Whenever I criticize Musk, people tell me I'm too anal about him overhyping his companies and what they're about to do. I suppose hype is par for the course and shouldn't be taken too seriously, but the only way a statement like this is even remotely close to true is if he delivered on all the hype: Starship well on its way to a crewed mission to Mars by next year, self-driving robo-taxis already on the streets, a functioning, profitable hyperloop somewhere, etc., etc. ALL of these predictions and promises would have to come true for him to be "the most accomplished person in human history in pursuits that clearly require a lot of intelligence". As it stands, I'd rate him below Trump.

You wildly understate how hard it is to start and build massive companies. Doing it three times in different fields (with two of them being crazy) is insane.

Building them from zero is hard, yes. Buying them and acting as the frontman might require some talent, but nothing that would put him anywhere near "the most accomplished person in human history in pursuits that clearly require a lot of intelligence". I'd also have to check the accomplishments of various titans of industry, but I honestly doubt he stands out.

The only one that really fits the bill is Tesla. And even then, he helped Tesla go from very small to very large.

My question for you is can you find one guy who was an early founder of two 100b+ companies?

Let’s figure out the list (and we can normalize for today’s dollars). Do you think it large or small?

The only one that really fits the bill is Tesla

Neuralink as well, I think. Wasn't he also putting his face on OpenAI for a while?

My question for you is can you find one guy who was an early founder of two 100b+ companies?

Like I said, I'd have to go through the list of accomplished industrialists. That said, I feel like we're already shifting goalposts; when I think of the most accomplished people in history, stock market points are not the measure I go by.

Do you think it large or small?

Small, but I don't expect him to be at its top.


My SAT scores check out for that. He's supposedly a 1400, though the test was potentially a little harder when he took it. His IQ testing points to average-Ivy-grad level, a profile a lot of people here would fit.

All of human history is pushing it, but let’s say top 0.000001%.

If he’d achieved less, people would feel less bad about themselves, so he’d be more intelligent. I wonder what reddit would think of leonardo: Right, his father was upper middle class, that says it all. I doodle too. Anyone can see the stuff doesn’t work, one stupid idea after another. Sure he can paint, but so could I with the right training. I could never desecrate corpses though, that’s beneath me.

Might be a slight exaggeration, but the number of people instrumental in creating two hundred-billion-dollar companies is pretty small. Three is unheard of. Three in different areas? Totally unprecedented.

You could put Musk up with Alexander, Khan, Ford, John Galt…

One of those is not like the others...

Yeah, Khan was a genetically engineered terrorist, both of those things should disqualify him.

Ah, I was thinking of Genghis, not Noonien Singh.

Elon?

I think people may interpret his missteps as evidence of being a normie, but everything about the man screams autist savant.

Yeah, I firmly believe myself to be smarter than 99% of the population (and have seen enough independent confirmations of this that I have very high confidence in it), but I would never think that I am smarter than Elon Musk. I would easily take a bet that Musk is smarter than me. It's amazing how plenty of people chastise those who think they are smarter than a large portion of the population, but then these very same people think they are at the same level as Musk, etc.

Someone being more intelligent doesn't mean you can't see when they are habitually making errors in some area.

I feel like Musk has "doctor's syndrome" where his success and competence in one area (or a couple) leads him to believe he has superior insight into all areas. Only for Musk and other famous people this gets supercharged both by their success and by their fans.

@orthoxerox quoted a story yesterday about a top journalist being outfoxed by a regular police officer during an interrogation, because they played someone else's game on their home turf. Well, Musk is constantly doing this, and occasionally making a fool of himself is inevitable.

He is very competent, driven and successful but he isn't god.

Conversely, people are doing the same thing as Musk: they notice how he fumbles about in their area of expertise (or it gets pointed out by others), making overconfident claims and predictions, and therefore assume that he is a fool, or at least not as smart as his success would imply.

I feel like Musk has "doctor's syndrome"

Nobel disease is a sufficiently established term to have a Wikipedia page, and I feel it is more accurate, as Musk's accomplishments are at the level of a Nobel prize winner rather than of a mere doctor.

It was not intended as a dig at Musk; I wasn't aware of the term Nobel disease.

Oh no, I didn't think you were belittling him. It's just that I remembered there was a Nobel prize winner who had unorthodox ideas about HIV, and thought this was similar to Musk, which led me to discover that the term already existed.
