BTW, if you want to read a good example of pre-Yudkowsky rationality, I recommend The Demon-Haunted World. Carl Sagan did a lot to help me learn how to think clearly, in my formative years.
I want to be clear that this is coming from somebody who once liked his writings. I didn't worship him. I didn't learn much from him. But he has always had a fun and unique writing style.
But believe me, there's no confusion here. Capital-R Rationality may be something that crystallized around LessWrong and the Sequences, but the concepts of rationality are hardly new; we're building on a legacy of humans struggling to explain the Universe that has been built over thousands of years. Yudkowsky wrote some entertaining essays, some of which are insightful (and some of which are silly, particularly when he veers into fields of science he doesn't know well). You could credit him with collecting and indexing a few good ideas. But he's very bad at practicing what he preaches - Scott, for instance, is far better at actually making and testing predictions than Yudkowsky. I suppose cult leaders don't usually lower themselves to the level of scrubbing the temple floor.
As for AI Safety, no. No, no, no. There's absolutely no defense for his egotistical claim in the April Fool's post. Futurists have been discussing AI safety since at least Asimov's Three Laws. What do you think AI researchers did before him, shrug and go "hmm, I wonder if making this neural net behave is something I should study sometime"? Maybe I can trace one particular flavour of the "edifice" to his writings - superintelligence-goes-FOOM-breaks-out-of-black-box-and-builds-nanotech-in-a-bio-lab - but AI safety as a whole would still exist and look pretty much the same without him. Arguably, it would be healthier, with the many people with different intelligent perspectives not being drowned out by his singular view and stubborn insistence that he knows the unknowable future.
I've lost pretty much all respect for Yudkowsky over the years as he's progressed from writing some fun power-fantasy-for-rationalists fiction to being basically a cult leader. People seem to credit him for inventing rationality and AI safety, and to both of those I can only say "huh?". He has arguably named a few known fallacies better than people who came before him, which isn't nothing, but it's sure not "inventing rationality". And in his execrable April Fool's post he actually, truly, seriously claimed to have come up with the idea for AI safety all on his own with no inputs, as if it wasn't a well-trodden sci-fi trope dating from before he was born! Good lord.
I'm embarrassed to admit, at this point, that I donated a reasonable amount of money to MIRI in the past. Why do we spend so much of our time giving resources and attention to a "rationalist" who doesn't even practice rationalism's most basic virtues - intellectual humility and making testable predictions? And now he's threatening to be a spokesman for the AI safety crowd in the mainstream press! If that happens, there's pretty much no upside. Normies may not understand instrumental goals, orthogonality, or mesaoptimizers, but they sure do know how to ignore the frothy-mouthed madman yelling about the world ending from the street corner.
I'm perfectly willing to listen to an argument that AI safety is an important field that we are not treating seriously enough. I'm willing to listen to the argument of the people who signed the recent AI-pause letter, though I don't agree with them. But EY is at best just wasting our time with delusionally over-confident claims. I really hope rationality can outgrow (and start ignoring) him. (...am I being part of the problem by spending three paragraphs talking about him? Sigh.)
Sounds like some sort of insanely well read but very dim intern that you can always ask to do anything through a computer or something. Very weird but probably very useful in a Jarvis-from-Iron-Man sort of way.
Yeah, that's a pretty good description of it! I'm definitely still the brains of the outfit. But it's getting closer to the "Hollywood UI" ideal where you use your computer by talking to it rather than by remembering the correct syntax of a Unix command.
I'm concerned that this tech is still very much locked in by giant corporations. Microsoft's Office integrations all seem to rely on spying on everything you do, and training costs are still too prohibitive for FOSS to be competitive. I sure hope that changes.
No argument here. I personally trust Microsoft a little more than Google, but still, I'm really hoping this tech gets democratized sooner rather than later. (I've heard Alpaca, which is small enough to run on a PC, is pretty good, but "pretty good" might not cut it.)
Here, since you asked for specifics, let me recount one of the most impressive conversations I had with Bing AI. (Unfortunately it doesn't seem to save chat history, so this is just paraphrasing from memory. I know that's a little less impressive, sorry.)
Me: In C++ I want to write a memoized function in a concise way; I want to check and declare a reference to the value in a map in one single call so I can return it. Is this possible?
Bing: Yes, you can do this. (Writes out some template code for a memoized function with several map calls, i.e. an imperfect solution).
Me: I'd like to avoid the multiple map calls, maybe using map::insert somehow. Can I do this?
Bing: Sure! (Fixes the code so it uses map::insert, then binds a reference to it->second, so there's only one call).
Me: Hmm, that matches what I've been trying, but it hasn't been compiling. It's complaining about binding the reference to an RValue.
Bing: (explanation of what binding the reference to an RValue means, which I already knew.)
Me: Yes, but shouldn't it->second be an LValue here? (I give my snippet of code.)
Bing: Hmm, yes, it should be. Can you tell me your compile error?
Me: (Posts compile error.)
Bing: You are right that this is an RValue compile error, which is strange because as you said it->second should be an LValue. Can you show me the declaration of your map?
(Now, checking, I realize that I declared the map with an incorrect value type and this was just C++ giving a typically unhelpful compile error.)
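For the curious, here's roughly the pattern we converged on, reconstructed from memory (not Bing's verbatim output) with Fibonacci as a stand-in problem: a single map::insert call does both the lookup and the placeholder insertion, and then a reference is bound to it->second.

```cpp
#include <cstdint>
#include <map>

// Memoized Fibonacci (assumes n >= 0). One map::insert call both looks
// up the key and, if absent, default-inserts a placeholder; we then bind
// a reference to it->second and fill it in. std::map never invalidates
// iterators or references on insert, so the recursive calls are safe.
std::uint64_t fib(int n) {
    static std::map<int, std::uint64_t> cache;
    auto [it, inserted] = cache.insert({n, 0});
    std::uint64_t& value = it->second;  // only binds cleanly if mapped_type matches
    if (inserted) {
        value = (n < 2) ? static_cast<std::uint64_t>(n)
                        : fib(n - 1) + fib(n - 2);
    }
    return value;
}
```

And indeed, if the map is declared with the wrong value type, the reference binding on that middle line is exactly where the compiler starts complaining.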
I want to emphasize that it wasn't an all-knowing oracle, and back-and-forth was required. But this conversation is very close to what I'd get if I'd asked a coworker for help. (Well, except that Bing is happy to constantly write out full code snippets and we humans are too lazy!)
I don't ask it to write code and then plunk it into my projects - I agree that it sometimes gets things wrong there (although you can point out errors and it'll acknowledge and often fix them). What I use it for is to talk through my problems (it's not a rubber duck, because it's replying with knowledge I didn't have before). It uses its vast breadth of knowledge to help me with things like syntax, library functions, simplifying code, debugging a compile error, etc. ChatGPT is a bit rougher, but Bing AI has even been smart enough to challenge me when I'm giving it mistaken information, asking follow-up questions that get me to the root of my problem (like a coworker would).
So, I don't really want to argue the Chinese Room philosophy of when language understanding starts to "count". All I know is what my lying eyes are telling me: I'm now conversing with my computer in completely natural language, and it hasn't once failed to understand me. (Its reply hasn't always been helpful or right, but it's always made sense.) It's important to resist the cynicism of finding ways to break the LLM and going "oh, it's lame after all". Even if LLMs somehow never get any smarter, even if they're not on the critical path to AGI, just the capabilities we've already seen are enough for them to change the world.
VERY strong disagree. You're so badly wrong on this that I half suspect that when the robots start knocking on your door to take you to the CPU mines, you'll still be arguing "but but but you haven't solved the Riemann Hypothesis yet!" Back in the distant past of, oh, the 2010s, we used to wonder if the insanely hard task of making an AI as smart as "your average Redditor" would be attainable by 2050. So that's definitely not the own you think it is.
We've spent decades talking to trained parrots and thinking that was the best we could hope for, and now we suddenly have programs with genuine, unfakeable human-level understanding of language. I've been using ChatGPT to help me with work, discussing bugs and code with it in plain English just like a fellow programmer. If that's not a "fundamental change", what in the world would qualify? The fact that there are still a few kinds of intellectual task left that it can't do doesn't make it less shocking that we're now in a post-Turing Test world.
I'm assuming you didn't watch the GPT-4 announcement video, where one of the demos featured it doing exactly that: reading the tax code, answering a technical question about it, then actually computing how much tax a couple owed. I imagine you'll still want to check its work, but (unless you want to argue the demo was faked) GPT-4 is significantly better than ChatGPT at math. Your intuition about the limits of AI is 4 months old, which in 2023-AI-timescale terms is basically forever. :)
Remember Scott's post about how 2100 "isn't a real year"? You're making that mistake, times a thousand. The question of "based on physics, how many consciousnesses can our civilization support" has almost nothing to do with our current existence; any answer, and any pressing need to answer, is way beyond the future event horizon where the world will be unrecognizable to us.
What you're doing now is the equivalent of ancient tribes sitting by their campfire, taking a break from their stories about how the Moon Goddess hides from the Sun God, to talk about how the Fed should optimally set interest rates to avoid a recession. It's beyond pointless.
Yes, I'm really glad to see someone else point this out! One thing that's interesting about LLMs is that there's literally no way for them to pause and consider anything - they do the same calculations and output words at exactly the same rate no matter how easy or hard a question you ask them. If a human is shown a math puzzle on a flashcard and is forced to respond immediately, the human generally wouldn't do well either. I do like the idea of training these models to have some "private" thoughts (which the devs would still be able to see, but which wouldn't count as output) so they can mull over a tough problem, just like how my inner monologue works.