Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Yeah, it's definitely been a crazy couple of months.
Sort of. I've been playing around with the publicly available LLMs, including the largest one, the 65-billion-parameter Llama model from Meta, and I find it's somewhere around the quality of GPT-3, nowhere near the quality of GPT-4. I'm also running it quantized to 4 bits on my CPU rather than my GPU, so it's dogshit slow -- about a word every 2 seconds. Just enough to slake my curiosity. To run it at conversational speed, you need a GPU with 40GB of VRAM, so you're probably looking at dropping $4,500 minimum on the GPU alone, and maybe closer to $15,000 -- not exactly available to the masses. Maybe in 4 years. Moore's law is still kicking on the GPU side.
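The 40GB figure checks out on the back of an envelope. Here's a rough sketch (ignoring activation memory and KV cache, which add several more GB on top of the weights):

```python
# Rough VRAM needed just to hold LLaMA-65B's weights at various precisions.
# Real inference needs extra memory for activations and the KV cache.
def weight_gb(n_params: float, bits_per_param: float) -> float:
    """Gigabytes to store n_params weights at the given precision."""
    return n_params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"65B params at {bits}-bit: ~{weight_gb(65e9, bits):.1f} GB")
# 16-bit: ~130 GB, 8-bit: ~65 GB, 4-bit: ~32.5 GB
```

So even at 4-bit, the weights alone are ~32.5GB, which is why a 40GB card is about the floor for running it entirely on GPU.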
I'm not that impressed with any of the LLMs. I know it's a controversial take around here, but I don't think they're doing any reasoning at all. The reasoning is in the humans who wrote their training data, and the LLMs are doing a great job of predicting the text. You'll see it for yourself if you play around with one long enough to watch it screw up in ways that demonstrate it has no idea what it's talking about. Adding parameters and training data helps push back the boundaries of what it can do, but the fundamental issue remains. The lights are on, but nobody's home. We'll need more algorithmic insights before we get AGI.
I do, however, think Big Yud is right that eventually we will get to AGI, and unless we're extraordinarily conscientious, it'll kill us all. His arguments aren't rigorous to the point of mathematical certainty, but that seems like the default path unless some big, as-yet-unpredicted development intervenes. The future is full of such things, of course. But I don't share anybody's concern about LLMs or transformers specifically. If anything, all the recent hype about LLMs' shockingly good performance improves humanity's odds. But we're actually going to need worldwide agreements, and to risk shedding blood by bombing defectors' data centers, if we want humanity to survive our first contact with an alien species.
I'm curious what the availability of standalone AI processors might do. You can get, today, a Jetson AGX Orin with 64GB of VRAM on a development board for around $2K. It's not as fast as an nVidia A100 (I think ~30% of the speed for Int8 perf? supposedly comparable to a V100), and data transfer rates can be obnoxious since everything has to get shoved through an ARM processor, but for non-video applications, it's really hard to beat the price point or the thermal envelope while still having enough VRAM for the big models.
((At least unless nVidia's consumer cards end up surprising everyone, but I expect they're more likely to boost to 24GB/32GB for the mid- and high-end cards this generation, rather than 32GB/48GB or higher.))
That doesn't make them a good investment today, since $2K worth of standard GPU will also let you do everything else GPUs are used for, but if a killer app comes along, there are a lot of ways for people to run these at smaller scale in home environments.
I don't know.
This guy (https://news.ycombinator.com/item?id=35029766) claims to get about 4 words per second out of an A100 running the 65B model. That's a reasonable reading pace. But I'm sure there will be all sorts of applications for slower output of these things that no one has yet dreamt of. One thing that makes Llama interesting (in addition to being locally runnable) is that Meta appears to have teased out more usefulness per parameter -- it's comparable with competing models that have 3-4 times as many parameters. And now there's supposedly a non-public Llama that's 546 billion parameters. (I think all of these parameter counts are driven by what can fit in a single A100 or a pod of 8x A100s.) Sadly, I think there's already starting to be some significant overlap between the cognitive capabilities of the smartest language models and the least capable deciles of humans. The next ten years are going to be a wild ride for the employment landscape. For reference, vast.ai will rent you an A100 for $1.50/hr.
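Those last two numbers imply a cost per word that's easy to sketch (taking the linked HN comment's ~4 words/sec and the $1.50/hr rental rate at face value):

```python
# Back-of-the-envelope cost of LLaMA-65B inference on a rented A100,
# assuming ~4 words/sec throughput (per the linked HN comment) at $1.50/hr.
words_per_hour = 4 * 3600                              # 14,400 words/hr
cost_per_hour = 1.50
cost_per_1k_words = cost_per_hour / (words_per_hour / 1000)
print(f"${cost_per_1k_words:.3f} per 1,000 words")     # about $0.104
```

Call it a dime per thousand words, before you count idle time or the effort of getting the model loaded.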
I'm sorry, I wasn't clear. The aliens we're about to meet are the AGIs.