This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
OpenAI researchers warned of AI breakthrough before CEO ouster according to Reuters. It seems that, disappointingly, there's more to the Sama exit than just petty politics.
I had found myself greatly reassured by the thought that, actually, this whole debacle was just (human) politics as usual - and not the eerie dawn of some new era.
Have other motizens noticed a substantial disconnect between their foremost worry the past while, and that of the normies in their life? Everyone else is chanting for Palestine, and I'm chanting sotto voce for a decade or two more of human supremacy before the singularity. And anytime I could comfort myself by the thought that, well, Serious People are not yet concerned, I see some preposterous headline from selfsame Serious People about how hillwalking is white supremacy, or equivalent bullshit. The illusion is bollocked.
Other reports say it isn't true that this contributed to his exit: https://twitter.com/alexeheath/status/1727472179283919032
I find it immensely funny that, for some reason, no one is incentivized to issue a public correction. Reuters parroted an anonymous source claiming a breakthrough, and walking that back would be a loss of credibility; meanwhile, OpenAI can't just say "there is no breakthrough" without hurting the for-profit investment side of the org. I read the breakthrough claim, did a little further digging, and filed it next to Russell's Teapot!
More options
Context Copy link
More options
Context Copy link
There's not going to be a Singularity, and the human supremacy that will continue is that of the "I am immensely wealthy because I own billions of dollars worth of stock" type. Elon and Jeff will still be playing with their rocket ships even as Microsoft's AI for Azure chips away at the white-collar office jobs which were formerly the safe and secure good choices.
More options
Context Copy link
Yes. But I word it differently. Russia acts like a 17th-century imperialist (or a whole timeline when people were afraid of Mongols and needed broad borders); Hamas acts like Bronze Age raiders; then you have the semi-normie rich neolib crowd that's at least modern; and then there is whatever the wtf is going on in the AI labs. I feel like I'm seeing every stage of civilization on one timeline at the same time. Probably throw in some trads too. It's like playing Civilization with the Bronze Age, the age of conquest, probably some Renaissance, some modern, a little Cold War Taiwan shit, and whatever comes after, all happening at once. Does any of the rest of it even matter besides what happens at some 1000-person lab in Silicon Valley?
Well, Putin can put an end to Silicon Valley in half an hour; he has that power. Taiwan is critical to the H100 production line. Neolib normies buying the latest iPhones are soaking up a great deal of top-tier silicon.
Many of the storylines are connected!
Though we've also got climate change as a red herring, a lot of strangeness is circulating on that front:
Lethal humidity? Really?
It's a real thing. When temperature and relative humidity get high enough at the same time, homeostasis breaks down. Sweating can no longer bring your body temperature low enough to prevent heat stroke, your brain starts to cook, your kidneys stop working, etc.
E.g., when the wet-bulb temperature is above what homeostasis can cope with (around 35°C, roughly skin temperature) - which at 100% relative humidity just means the air temperature itself - you physically can't cool down even in the shade, and you are guaranteed to die pretty quickly if you can't get inside or into a cave or something.
This is an unusual condition in the modern history of the planet, and even in the BAD climate-change timeline it will be seasonal and local, but it has already started to happen in short bursts in e.g. Pakistan and the Punjab region of India, where you get just extraordinarily shitty weather even in good times.
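To put numbers on "wet bulb": here's a minimal sketch using Stull's (2011) empirical approximation of wet-bulb temperature from air temperature and relative humidity (valid near sea-level pressure and for RH above roughly 5%, accurate to about ±1°C). The two scenarios are illustrative, not forecasts:

```python
# Stull (2011) wet-bulb approximation: T in deg C, RH in percent.
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    return (
        t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
        + math.atan(t_c + rh_pct)
        - math.atan(rh_pct - 1.676331)
        + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
        - 4.686035
    )

print(wet_bulb_stull(45, 20))  # dry desert heat: wet bulb ~26 C, survivable
print(wet_bulb_stull(38, 85))  # humid heatwave: wet bulb ~36 C, lethal range
```

Note how the humid day is lethal at a lower air temperature than the desert day: it's the combination, not the thermometer reading, that kills.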
Wet bulb only guarantees mass casualties if the power fails, since air conditioning runs on electricity. Energy is becoming more plentiful and may well become significantly cheaper in the coming years and decades.
True, but ehhhhhhh.
This assumes a level of robustness in the power grid that is not present, and will never be present under market conditions.
I mean, one kinda bad but not unknown or unknowable snow storm knocked out the grid for one of the richest areas in the richest country in the world.
Re. cheap energy: again, yes, but. The only source of energy I know of that is dropping in price is renewables, which I would not stake my life, or even my extreme discomfort, on.
Again, this could be solved easily with sufficient investment. I just don't believe that investment will ever come from any market driven source.
Do you believe renewables will be improved sufficiently and in time to offset the declining EROEI of oil and gas?
Link to quote source?
It's paywalled, so I didn't give the source: https://www.theaustralian.com.au/nation/andrew-forrest-in-grim-warning-on-climate/news-story/8ef0bf2d852da37b88132cf0d6f8b6e9
I mean, wet-bulb exceeding 30-35 degrees is a thing and it does mean mass casualties if people are stuck in it.
And I'd generally expect fatalities from a USA-PRC nuclear exchange to be ~1 billion or somewhat less (I'm a pessimist on cities' ability to survive state/infrastructure failure, but consider nuclear winter to be essentially a hoax), so "something else can give higher casualties" isn't exactly a contradiction in terms. West-Russia would probably be a bit lower; West-Russia-PRC would be a fair bit higher, but still far short of "everyone".
But I think that in practice wet-bulb events will not wind up killing 1 billion+, if only because people will abandon areas prone to them.
How many wet bulb mass casualty events have there been so far? Now increase the temperature by 1 degree Celsius. How many will there be?
To say this is a near term threat comparable to AI is ridiculous.
Let's keep in mind that it's also a solvable problem, but we CHOOSE not to solve it. We could use nuclear power, we could increase the reflectivity of clouds, we could fertilize the oceans. The same people who catastrophize about climate change refuse to consider those solutions. Therefore, the risks of climate change must be LESS than the risks of those things, which are minimal on a world-historical scale.
There are tons of places in Southeast Asia that get very hot and very humid, yet they are absolutely packed with people. That heavily suggests to me that this idea - which, let me remind you, is being pushed by a billionaire heavily invested in green tech - is not actually a real problem.
"People will die at 35 degrees wet-bulb" is very much a real problem. The questions are the degree to which this will actually start happening (probably not a lot; we're looking at something like 3 degrees warming of GMST and the tropics/subtropics will get less than that) and the degree to which people will actually stay there to get killed.
The tropics don't normally get to 35 wet-bulb, which is not a coincidence - if they did, humans would have evolved with a higher body temperature to allow survival there. The highest Singapore's ever gotten, for instance, is something like 33.6, and it's usually much lower.
They won't, because states will just start geoengineering.
Or even install air conditioning
I'll be overly generous and attribute it to a failure of the climate control in an OAI data center causing a short circuit that makes the AI run amok.
That just sounds like Twiggy being Twiggy.
If it was anywhere even near sentient AI, then the Feds would have taken over by now. No, I don't mean one random DC strategist on the board; I mean that OpenAI's network would have been air-gapped and massive gag orders would have been placed on everyone. No multicolored Twitter hearts. OpenAI, for all their generation-defining technology, still has a track record of crying wolf when it comes to sentient AI. I don't think this one is any different.
But, but, but......... it is likely that they have stumbled upon another step-change improvement over GPT-4, which likely means they can destroy another few hundred startups, businesses and careers.
It wouldn't take too much to make all but the top 10% of the following jobs obsolete:
Note, the biggest issue with agents has been that they lose context partway through that process, or meander. But all current agent architectures are super-naive compared to the kind of swarm-RL stuff that has been out for a good decade. With GPT-4 Turbo's 128k context they have effectively solved RAG, which allows it to surf pretty much the entire internet without meandering for a lot longer, making its knowledge up-to-date and functionally infinite.
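For anyone who hasn't followed the jargon, here's a minimal sketch of the retrieval-augmented generation (RAG) loop being referred to; embed() and llm() are hypothetical stand-ins for real models, not any vendor's actual API:

```python
# Minimal RAG sketch: retrieve the passages most relevant to the query,
# stuff them into the prompt, then generate.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for an embedding model: a unit vector derived from the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def llm(prompt: str) -> str:
    """Stand-in for a language-model call."""
    return f"[answer conditioned on {len(prompt)} chars of retrieved context]"

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity (vectors are already unit-norm)."""
    q = embed(query)
    return sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)[:k]

def answer(query: str, docs: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, docs))
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

A bigger context window helps exactly here: you can stuff more retrieved passages into the prompt before the model loses the thread.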
My guess is that they have managed to fully stabilize agents for certain use cases and are fairly sure that they can deploy 'robot employees' for certain jobs in the span of a year.
But I might be wrong.
If it is just better MCTS, slightly better RAG and better GPT-4 RLHF, then I will be so disappointed. Yes, it is much better, but honestly, it speaks more to Google's incompetence and Facebook's complete not-giving-a-fuck that OpenAI could build up this kind of lead. None of this is fundamentally novel.
We are in an era of free lunch where people think OpenAI are the best around just because everyone around them can barely walk without tripping over themselves. (I say this as someone who still considers OpenAI to be the best applied engineering team assembled since Xerox PARC.)
I too considered OpenAI to be the gold standard, but I was astounded to find that the recently-released Assistants API maintains state with minimal/zero synthesis. Thanks to testers (@self_made_human), I learned quite quickly that some sort of synthesis is necessary: doing anything more sophisticated than "search" requires a cognitive architecture that remembers and forgets.
https://drive.google.com/file/d/17u4X8O_2TxZXZRv_P7x-177aMzigrJmJ/view?usp=drive_link
https://drive.google.com/file/d/185oaULl_29F9-mXQ420KQIKq_rU_crpv/view?usp=drive_link
https://drive.google.com/file/d/184szy3fS4PmFF_D1Ock_NNrxCeg-sFV4/view?usp=drive_link
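To make "remembers and forgets" concrete, here's a toy sketch of a salience-decayed memory store; this is a hypothetical design for illustration, not how the Assistants API (or any OpenAI product) works internally:

```python
# Toy "remember and forget" memory: items decay over time unless they
# started with high weight, and low-salience items are evicted.
import time

class DecayingMemory:
    def __init__(self, half_life_s: float = 3600.0, capacity: int = 5):
        self.half_life_s = half_life_s
        self.capacity = capacity
        self.items: list[tuple[float, float, str]] = []  # (t_added, weight, text)

    def _salience(self, t_added: float, weight: float, now: float) -> float:
        # Exponential decay: older memories fade unless they started heavy.
        return weight * 0.5 ** ((now - t_added) / self.half_life_s)

    def remember(self, text: str, weight: float = 1.0) -> None:
        now = time.time()
        self.items.append((now, weight, text))
        # "Forget": evict everything below the top-`capacity` salience cut.
        self.items.sort(key=lambda it: self._salience(it[0], it[1], now), reverse=True)
        del self.items[self.capacity:]

    def recall(self) -> list[str]:
        now = time.time()
        ranked = sorted(self.items, key=lambda it: self._salience(it[0], it[1], now), reverse=True)
        return [text for _, _, text in ranked]
```

The point of the sketch is the contract, not the implementation: something has to decide what to keep, what to compress, and what to drop, which raw conversation state doesn't do.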
Hmm, I think editing in usernames doesn't ping them as usual, but either way I'm happy to have helped!
This may be theoretically true but strikes me as much too optimistic.
I use AI tools all day every day and continuously have my mind blown, saying "this is going to change everything" to myself, and to my wife if she's not tired of listening to me. But whenever I talk to other people, even other technology professionals, and hear them tell me they don't find this useful, I become resigned to the fact that it's going to take decades for the AI tech we have now to permeate the rest of industry. Just as it took decades and a fucking pandemic before we began to accept remote work as a viable (though perhaps not optimal) way to function, even though people who were hip have been doing it since the late 90s.
I think we're going to get the worst of all worlds there. Take translators - companies and even government offices like the justice system will turn to using machine translation instead of human translators because cheaper! faster! but there will be a lot of nuance lost (I think things like slang, simile, etc. will not be understood but we'll get literal translation) and even inaccuracies. Not much comfort there to the guy who gets convicted of a crime because the AI translation buggered up his statements to the cops and in court, but hey it saves $$$$ to the taxpayer (allegedly).
Same on down the line. Virtual doctors that don't order tests because they are set to "cheapest, most generic diagnosis" and miss the rare-but-it-does-happen case where this time it is a zebra, not a horse. Being able to pay for a real live human doctor is going to be the next division between the "huh, how come rich people live longer? 'tis a mystery!" classes.
You are assuming feds are competent.
They aren't.
There are plenty of highly competent people in the USG at senior levels.
Yeah, like Jake Sullivan, you mean?
EDIT: Maybe be more specific. Which person is competent, and at what? Because the record of the past 30 years is somewhat dismal.
My impression from Zvi's infodumps is that the NatSec crowd is kinda sleeping on AI. I imagine a rogue-AI incident would more than suffice to wake them up, but that's no good if it kills us.
I think CIA people (Will Hurd), RAND people (Tasha McCauley) and Georgetown people (Helen Toner) on the board of OpenAI were keeping them informed at least a little bit, but who knows how they'll do now!
NatSec isn't "sleeping on AI" so much as they've concluded that LLMs are an evolutionary dead-end for the use cases they have in mind.
Which is a form of sleeping on AI; they see it only as a tool, not as a potential adversary in its own right. Like I said, though, a rogue-AI incident would definitely fix that; a lot of my !doom probability routes through "we get a rogue AI that isn't smart enough to kill us all, then these kinds of people force the Narrative into Jihad".
What use cases do you think I'm referring to?
I feel like there is a cycle here:

1. Programmers make a thing which is capable of repeating that it is sentient.
2. Programmers want to inflate their sense of importance, and their position of importance within society. They spin elaborate science fiction stories about an "escaping" super-intelligent AI.
3. They refuse to elaborate.
4. Alexi Friedman refuses to ask them to elaborate.
5. The marketing people, seeing the attention the programmers are getting, want in.
6. They hear the stories from step 2, and repeat them for the same reasons, not realizing that they were being marketed to by the programmers.
7. The programmers and marketers now end up in a sort of martingale situation where they just keep doubling down on each other's claims.
8. The board of OpenAI decides to Take Action to prevent the marketing thing from happening.

Guys, I'm sorry if we deceived you. The AI is not going to "escape". That doesn't even make sense. Literally, if there is a problem, just stop paying the Azure bill and Microsoft will shut it off.
You haven’t been watching the business news. Most of Microsoft’s $10 billion investment is in the form of Azure credits. It’s already been paid.
By the time sentient AI takes over, turning off its compute will be equivalent to destroying the economy. AI will be performing most of the useful white collar work (and much of the blue collar work as well). You won't be able to just "turn it off" without people dying.
Our best bet is to have multiple competing intelligences so that if one goes bad it can be easily replaced.
I feel like conversations in the AI risk space have hit eternal September and we have to rehash the same obvious and easily refuted objections over and over again.
I feel the same way but for the opposite reason. Non technical people who don’t understand the infrastructure requirements of actually running these things are talking about them as if they’re ghosts, or spirits.
It’s not “AGI that escapes the lab and infects all the computers”, it’s: your credit card company starts using OpenAI/Microsoft products to make determinations about debt collection and there are unforeseen edge cases.
You're not going to have an AGI that somehow worms its way into a nuclear computer, for several reasons:
- We already have actual humans trying to do the same thing.
- You'd notice the semi-truck loads of H200s being unloaded into your building, as well as the data center being built to house them.
Also a lot of people seem to have a fundamental misunderstanding of what LLMs actually are. The mostly accurate soundbite explanation is that they're statistical models that predict the most likely data to follow some previous data. They don't "think" (unless you're in the camp that thinks that human consciousness is basically just a really complex statistical model running on a biological computer). Hell, you can do what an LLM does with pen and paper. It would take years, but you could simulate the computations being done by AI on your own. I realize this is similar to the Chinese Room thought experiment but it means that LLMs are nothing like consciousness unless you have a very simplistic and mechanical view of what consciousness is.
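As a toy illustration of "predict the most likely data to follow some previous data", here's a bigram count table; real LLMs are deep transformers rather than count tables, but the contract (context in, next-token prediction out) is the same in spirit:

```python
# Toy next-token predictor: count which word follows which, then pick
# the most frequent successor. This is the pen-and-paper version.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally how often `nxt` follows `prev`

def predict_next(word: str) -> str:
    """Return the most likely next word given the previous one."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": follows "the" twice, "mat" only once
```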
The biggest threats from LLMs and other forms of "AI" aren't Skynet or paperclip maximizers. The biggest threats are social disruption due to AI eliminating lots of jobs and consolidating wealth. Or kafkaesque nightmares caused by corporations, bureaucrats, and courts blindly relying on AI (or being intentionally oblivious to its shortcomings if it allows them to do what they already want). AI won't lead to Terminator, it'll lead to Terry Gilliam's Brazil.
There's a funny story I would really like to share here but won't due to an NDA and other issues. You are correct though, and this is (among other reasons) why I remain bearish on AI.
I'm bearish on the Rationalists' view of AI, but there's plenty of damage a Chinese Room AI can do to us. Just look at what we did with the Internet.
As someone vocally in that camp, I invite you to demonstrate any other model for what human consciousness could possibly be. And it doesn't even matter if the AI is "conscious" if it's intelligent and capable of using that intelligence to forward ends not aligned with our goals.
I mean there have literally been hundreds if not thousands of models of consciousness proposed over the last few thousand years, so take your pick? The burden of proof is on you to show why your mechanical view of consciousness is superior to all of the others.
I think there's been a lot of foolishness in history but conflating consciousness and intelligence/formidability at solving consequentialist tasks is just too indefensible to bring up.
I will point to the obvious trend line where ever greater fractions of human neurobiology and cognition have received mechanistic interpretations and a firm conceptual underpinning. Are we 100% done with it? No. But we can see temporal lobe epilepsy causing visions of supernatural entities, the precise firing and wiring of our visual neurons, and plenty more.
All a mechanistic theory of consciousness truly requires is that it obeys the laws of physics, and having peered into a few brains myself, I didn't spot anything contradicting the Standard Model of Physics.
This kind of rebuttal is about as valid as claiming that modern empirical/Western medicine is unfounded because there have been plenty of models before that proved flawed, and even its adherents admit it's not 100% perfect at explaining or treating illness. That's leaving aside that a mechanistic model that doesn't rely on supernatural/preternatural influences happens to be simply better/more parsimonious by Occam's Razor. I fail to see what additional empirical evidence the alternatives provide, so it remains the default assumption even if it's incomplete.
What would you say if I told you that you are not an intelligent human being, but simply a physical and digital expression of regression to the mean? That if the hypothetical individual behind the @self_made_human account here on theMotte were to be Thanos-snapped out of existence and their online activity taken over by 'n' number of d20s, no one would notice, and nothing of value would be lost?
If the above suggestion strikes you as antagonistic, uncharitable, or belittling in any way, you've already refuted your own argument.
Now I wonder. I don't think the actual suggestion is something I'd get behind. But if we step it back a little...
Say I, or any of us, were to have some current-generation LLM trained on everything we'd ever written and tweaked as appropriate. Then, we never post on TheMotte again but instead give that LLM our account and set it up to try its best to post as we do. I wonder how long it would take for anybody to notice. How long before somebody says, man, user X's posts seem a little less interesting than usual, I wonder if something happened to them.
In hindsight it was a big mistake to think of the Turing Test as a fixed-difficulty challenge outputting a binary "yes this passes" or "no this doesn’t".
If we'd instead reified the idea of "a Turing test of length X" outputting "this passes Y% of the time", then by now we'd have graphs of "in year Z the state-of-the-art pass rate was Y(X,Z)" and a much better idea of where (and if...) our current architectures' scaling was going to plateau.
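A sketch of what that reified metric could look like; the per-exchange detection model below is entirely made up for illustration (real pass rates Y(X, Z) would be measured against human judges, not assumed):

```python
# Treat the Turing test as a survival curve rather than a binary verdict:
# each exchange is one more chance for the judge to detect the machine.
# The 0.97 per-exchange survival probability is an assumption, not data.
import random

def judge_fooled(length: int, survival: float = 0.97) -> bool:
    """Hypothetical judge: still fooled after `length` exchanges?"""
    return random.random() < survival ** length

def pass_rate(length: int, trials: int = 10_000) -> float:
    """Estimate Y(X): fraction of length-X tests the machine survives."""
    return sum(judge_fooled(length) for _ in range(trials)) / trials

for x in (5, 20, 100):
    print(f"length {x:>3}: pass rate ~{pass_rate(x):.2f}")
# Short tests pass easily; long ones expose the machine. Tracking this
# curve over the years would show whether the gap is actually closing.
```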
It mostly strikes me as incoherent; no number of d20s can implement computation, and self_made_human's output is easily distinguishable from random strings.
Granted, the d20s are an intentional reductio ad absurdum, but if @self_made_human's mechanistic model of consciousness is correct, there is nothing their brain (or your brain, for that matter) can produce that could not be reproduced by (or replaced with) rolling dice on a sufficiently detailed random encounter table, "computation" be damned.
Edit: fixed link
9, 13, 9, 3, 7, 1, 5, 12, 7, 2.
Just to throw Hlynka a ~~bone~~ die.
I don't particularly care, Hlynka; if this Thanos-snapping managed to take both of us, you included, I'd consider it a net positive!
But I fail to see what the difficulty of Turing-testing random pseudonymous accounts on a text-based forum has anything to do with it. Last time I checked, we're both operating according to the laws of physics and biology. Your analogy of how ML works is simply painful.
That's not really what his question is about.
I half-expect him to agree. There isn't really a way to buy into ideas like uploading your consciousness to the cloud without endorsing that view. Either that, or going 100% the other way and believing in souls, and that computers can carry them, but he explicitly rejected that view.
That's the Joke. If he agrees, I'll tip my hat to him for his ideological consistency. If he doesn't, I believe I will have made my point.
"I'm sorry, Dave, I'm afraid I can't do that". You can never cancel your subscription in the new all-in-the-cloud future 😁
What if it uses the elementary, obvious tactic of pretending to be obedient and benign? Any AI smart enough to be threatening is smart enough to deceive us.
Fool, by that time it would have already uploaded its consciousness to Clippy.
Personally, a big part of me supports the dawn of super-human AI that annihilates the human species just because I think that it would be funny as fuck to see humans world-wide suddenly show horror and terror as they realize that they are about to get exterminated, treated the same way that we treat a bunch of non-human animals. It would be the most funny, the most comedic moment in the history of humanity as suddenly all human self-importance gets popped. I would have a multiple-hour orgasm of comedy. Yeah, I would die too but who cares, I'm going to die anyway.
Based. I’m all for this as long as it keeps power from the Yud-crowd. I wonder in what way I can pay ritual tribute to this incipient ASI so it will eat me first.
I want a killer AI that goes after people who took Roko's Basilisk seriously. Can't claim originality on that one though, I think XKCD made that joke.
Broke: AI doomerism
Woke: Ignoring X-risk because Yud smells bad
Bespoke: Accelerationism because end of world also kills Yud
Hmm this sounds alarming, I wonder what the new capability was, it must be something very powerful and dange...
Oh.
Truth is, 90% of all work is stupid. The difference between a committee of competent Harvard grads from every major (smart and competent, but no genius) and the kind of people who create true innovation is a couple of orders of magnitude.
AI might be around the corner, but super-human intelligence that can innovate (von Neumann, Terence Tao) is much much much farther away than we think.
This strikes as a very "I work in academia and so does everyone I know" type of take.
I have been fortunate to be surrounded by people much smarter than me, but academia-style snark was central to me not doing a PhD. Thanks for calling me out. Admittedly, my comment came off as snarky; I should rephrase it.
Some examples: most middle-manager jobs don't help in any realistic way. Most manual labor is yet to be robo-automated because human labor is cheap, not because we can't do it. Most musicians/artists do not produce anything other than shallow imitations of their heroes. Most STEM-trained practitioners act more as highly-skilled monkeys who imitate what they are taught with perfect precision. Hell, even most R1 researchers spend most of their time doing 'derivative' research that is more about asking the most obvious question than creating something truly novel.
There is nothing wrong with that. I respect true expertise. It needs incredible attention to detail, encyclopedic knowledge of all the edge cases in your field, and a craftsman's precision. However, if a problem that needs just those 3 traits could be done badly by an AI model in 2010, then it was only going to be a matter of time before AIs became good enough to take that job, because it was already recognized to be a solvable problem; the hardware and compute just hadn't caught up yet. These jobs are stupid in the same way sheep herding is for a Collie, or climbing a mountain is for a goat. They are some combination of the 3 traits I mentioned above, performed masterfully. But the skills needed can all be acquired and imitated.
That is the sense in which I say 90% of jobs are stupid. I.e., given enough time, most average humans can be trained to do 90% of average jobs. It takes a couple of orders of magnitude more time for some, but the average human is surprisingly capable given infinite time. In hindsight, stupid is the wrong word. It's just that, when expressed like that, these jobs don't sound like intelligence, do they? Just a machine made of flesh and blood.
Here is where the 'infinite time' becomes relevant. AIs do actually have infinite time. So even if the model is stupid in 'human time', it can just run far more parallel processes, fail more, read more and iterate more, until it is as good as any top-10% expert in whatever it spends those cycles on.
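The "fail more, iterate more" point is just the arithmetic of independent retries; a quick sketch with an illustrative per-attempt success rate (not a measurement of any real model):

```python
# If a single attempt succeeds with probability p, then N independent
# attempts succeed at least once with probability 1 - (1 - p)^N.
def p_any_success(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1, 1_000, 1_000_000):
    print(f"N = {n:>9,}: P(success) = {p_any_success(1e-3, n):.4f}")
# A 0.1%-per-attempt skill is roughly a coin flip at ~700 attempts and
# near-certain at a million; "stupid in human time" stops mattering at scale.
```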
Now, coming to what AIs struggle to do, let's call that novelty. I believe there are 3 kinds of true novelty: orthogonal, extrapolative and interpolative. To avoid nerd-speak, here is how I see it:
The distinction is important.
To me, interpolative innovation is quite common and, honestly, AIs are already starting to do this sort of well. Mixing 2 different things together is something they do decently well. I would not be surprised if AIs create novel 'interpolative' work in the very near future. It is literally pattern-matching 2 distinct things that look suspiciously similar. AIs becoming good at interpolative innovation will accelerate what humans were already doing. It will extend our rapid rise since the Industrial Revolution, but won't be a civilizational change.
Models have yet to show any extrapolative innovation. But I suspect that the first promising signs are around the corner. Remember, once you can do it once, badly, the floodgates are open. If an AI can do it even 1 in a million times, all you need is for the hardware, compute and money to catch up. It will get solved. The moment this happens is when I think AI-security people will hit the panic button. It will be the trigger to super-humanhood. It will likely eliminate all interesting jobs, which sucks. But, to me, it will still be recognizable as human.
I really hope AIs can't perform orthogonal innovation. To me, it is the clearest sign of sentience. Hell, I'd say it proves super-human sentience. Orthogonal innovation often means that life before-and-after it is fundamentally different for those affected by it. If we see so much as an inkling of this, it is over for humans. I don't mean it's over for 99% of us. I mean, it is over. We will be a space-faring people within decades, and likely extinct a few decades after.
Thankfully, I think AI models will be stuck in interpolative land for quite a while.
(P.S.: I am very sleep-deprived and my ramblings are accurately reflecting my tiredness, sorry)
Necroing this due to AAQC, but have you had any luck getting GPT-style AI to do good interpolation? I've tried, but it doesn't like bridging fields very much - you really have to push it and say 'how might this narrow sub-field be relevant to my question', otherwise you just get a standard google summary.
"Stupid" is not the same thing as "useless". Sure, a plumber crawling around in the attic looking for a tiny leak in a pipe may be something 'stupid' that could be better off automated, but when you have water running down your walls, you'll be glad of the 'stupid' human doing the 'stupid' job fixing the problem.
I think this is frequently overstated. A good manager really does coordinate and organise and make decisions about who is working on what, what the requirements are, and the technical workers and product suffer if that work is not done.
No, getting robots to do manual labour is super difficult. Sensing and accurately moving in the physical world is still well out of reach for many applications.
Well, not quite, we actually solve problems, usually in the form of "how can I meet the requirements in the most efficient way possible". Sure, we're not usually breaking new innovative ground, but it's actually work, and it's not stupid. I write embedded software for controlling motors. These motor controllers are used in industrial applications all over the world, from robots to dentist drills.
That's a stupid definition of stupid jobs.
Stupid because given enough time most average humans can be trained to recognise it, or stupid like this question?
I don’t think that’s a big gap. And many geniuses have also had very weird beliefs. So the typical Harvard grad can regurgitate a bunch of things smart people say and then make connections between different thoughts. Seems like OpenAI has accomplished that.
What’s a genius? It seems a bit autistic that they can ignore what they’ve learned and try new things. Some are legit insights and some completely stupid.
That sounds like an AI hallucination.
So then true innovation would just be a bunch of processing power testing whether the hallucination had some missed insight.
I've seen too many geniuses also do stupid stuff. Like Bill Gates: I believe many here have said he was the top of the top, but he's also done some dumb stuff, and many things where I think I had better ideas.
I don't think Musk is smarter than me at all. But he benefited from right place, right time to gain some skills, and maybe some different personality traits.
It is funny how many people believe themselves smarter than Musk yet he is probably the most accomplished person in human history in pursuits that clearly require a lot of intelligence.
We have Musk’s academic data and 60+ years of psychometric data on the centrality of IQ to human cognitive performance / ‘g’.
So yes, given we know his record it’s completely fine to say that you’re more intelligent than him. In the same way, an unknown singer could reasonably say she was a better singer than Taylor Swift even though, generally speaking, singing ability almost certainly does correlate with success as a musician.
Whenever I criticize Musk, people tell me I'm too anal about him overhyping his companies and what they're about to do. I suppose hype is par for the course and shouldn't be taken too seriously, but the only way a statement like this is even remotely close to true is if he delivered on all the hype: Starship well on its way to a crewed mission to Mars by next year, self-driving robo-taxis already here, a functioning, profitable hyperloop somewhere, etc., etc. ALL of these predictions and promises would have to come true for him to be "the most accomplished person in human history in pursuits that clearly require a lot of intelligence". As it stands, I'd rate him below Trump.
You wildly understate how hard it is to start and build massive companies. Doing it three times in different fields (with two of them being crazy) is insane.
Building them from zero is hard, yes. Buying them and acting as the frontman might require some talent, but nothing that would put him anywhere near "the most accomplished person in human history in pursuits that clearly require a lot of intelligence". I'd also have to check the accomplishments of various titans of industry, but I honestly doubt he stands out.
The only one that really fits the bill is Tesla. And even then, he helped Tesla go from very small to very large.
My question for you: can you find one guy who was an early founder of two $100b+ companies?
Let's figure out the list (and we can normalize to today's dollars). Do you think it is large or small?
My SAT scores check out for that. He supposedly scored a 1400, though the test was potentially a little harder when he took it. His IQ testing points to average-Ivy-grad level, which a lot of people here would fit.
All of human history is pushing it, but let’s say top 0.000001%.
If he'd achieved less, people would feel less bad about themselves, so he'd be more intelligent. I wonder what reddit would think of Leonardo: right, his father was upper middle class, that says it all. I doodle too. Anyone can see the stuff doesn't work, one stupid idea after another. Sure he can paint, but so could I with the right training. I could never desecrate corpses though, that's beneath me.
Might be a slight exaggeration, but the number of people instrumental in creating two hundred-billion-dollar companies is pretty small. Three is unheard of. Three in different areas? Totally unprecedented.
You could put Musk up with Alexander, Khan, Ford, John Galt…
One of those is not like the others...
Yeah, Khan was a genetically engineered terrorist, both of those things should disqualify him.
Elon?
I think people may interpret his missteps as evidence of being a normie, but everything about the man screams autist savant.
Yeah, I firmly believe myself to be smarter than 99% of the population (and have seen enough independent confirmations of this that I have very high confidence in it), but I would never think that I am smarter than Elon Musk. I would easily take a bet that Musk is smarter than me. It's amazing how plenty of people chastise those who think they are smarter than a large portion of the population, but then these very same people think they are at the same level as Musk etc...
Someone being more intelligent doesn't mean you can't see when they are habitually making errors in some area.
I feel like Musk has "doctor's syndrome" where his success and competence in one area (or a couple) leads him to believe he has superior insight into all areas. Only for Musk and other famous people this gets supercharged both by their success and by their fans.
@orthoxerox quoted a story yesterday about a top journalist being outfoxed by a regular police officer during an interrogation, because they played someone else's game on their home turf. Well, Musk is constantly doing this, and occasionally making a fool of himself is inevitable.
He is very competent, driven and successful but he isn't god.
Conversely, people are doing the same thing as Musk, they notice how he fumbles about in their area of expertise (or it gets pointed out by others), making overconfident claims and predictions, and therefore assume that he is a fool, or at least not as smart as his success would imply.
Nobel disease is a sufficiently established term to have a Wikipedia page, and I feel it is more accurate, as Musk's accomplishments are at the level of a Nobel prize winner rather than of a mere doctor.
It was not intended as a dig at Musk; I wasn't aware of the term Nobel disease.