This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
DNA doesn't actually assemble itself into a person, though. It's more like a config file: the uterus of a living human assembles the proto-human using instructions from the DNA. Thinking otherwise is like thinking the actual complexity of a car is contained in an order form for a blue standard Ford F-150, because that's all the plant needs to produce the car you want. There is a kind of 'institutional knowledge' in self-reproducing organisms. It is obviously more complicated than this metaphor, since the instructions also tell you how to produce much finer-grained bits of a person, but there is more to a human's design than DNA.
But any specific training and inference scripts, along with the definition of the neural network architecture, are likewise a negligibly small part of the complexity of an implementable AGI, which extends from the hardware level, with its optimizations for specific instructions, up to the structure contained in the training data. What you and @meh commit is a fallacy: judging human complexity by the full stack of human production while limiting our consideration of AI to the high-level software slice.
Human-specific DNA is what makes us humans, it's the chief differentiator in the space of nontrivial possible outcomes; it is, in principle, possible to grow a human embryo (maybe a shitty one) in a pig's uterus, in an artificial womb or even using a nonhuman oocyte, but no combination of genuine non-genomic human factors would suffice without human DNA.
The most interesting part is that we know that beings very similar to us in all genomic and non-genomic ways and even in the architecture of their brains lack general intelligence and can't do anything much more impressive than current gen models. So general intelligence also can't be all that complex. We haven't had the population to evolve a significant breakthrough – our brain is a scaled-up primate brain which in turn is a generic mammalian brain with some quantitative polish, and its coolest features reemerge in drastically different lineages at similar neural scales.
Carmack's analogy is not perfectly phrased, but it is on point.
This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.
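A quick back-of-envelope check on "several hundred gigabytes": GPT-3's largest variant has 175 billion parameters, and at 16 bits (2 bytes) per parameter that already comes to roughly 350 GB of weights, before any optimizer state.

```python
# Rough sanity check on the model-size claim: 175B parameters
# stored as 16-bit floats. This is arithmetic, not real GPT code.
params = 175e9          # GPT-3's published parameter count
bytes_per_param = 2     # fp16 storage
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB of weights")
```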
Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.
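The "predict the next token from learned relationships" loop can be caricatured in a few lines. This is a toy sketch, not anything resembling a real transformer: the "weights" here are just a bigram count table built from a tiny corpus, and decoding is greedy, but the shape of the loop (condition on what's been emitted so far, pick a next token, append, repeat) is the same.

```python
# Toy stand-in for next-token prediction. The "model" is a table of
# which token follows which in the training text; real LLMs replace
# this table with billions of learned transformer weights.
from collections import Counter, defaultdict

def train(corpus):
    """Count next-token frequencies -- the simplest possible 'training'."""
    model = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, prompt, steps):
    """Greedy decoding: repeatedly append the most frequent next token."""
    out = prompt.split()
    for _ in range(steps):
        candidates = model.get(out[-1])
        if not candidates:
            break  # never saw this token during training
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

weights = train("the cat sat on the mat and the cat slept")
print(generate(weights, "the", 3))
```

Greedy decoding always picks the single most likely continuation; real systems usually sample from the predicted distribution instead, which is why ChatGPT's answers vary between runs.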
So by way of analogy the few thousand lines of code are brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is "experience" fed into the brain.
Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.
The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.
Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.
The uterus doesn't really do the assembly, the cells of the growing organism do. It's true that in principle you could sneak a bunch of information about how to build an intelligence in the back door this way, such that it doesn't have to be specified in DNA. But the basic cellular machinery that does this assembly predates intelligence by billions of years, so this seems unlikely.