This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
-
Shaming.
-
Attempting to 'build consensus' or enforce ideological conformity.
-
Making sweeping generalizations to vilify a group you dislike.
-
Recruiting for a cause.
-
Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
-
Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
-
Be as precise and charitable as you can. Don't paraphrase unflatteringly.
-
Don't imply that someone said something they did not say, even if you think it follows from what they said.
-
Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Jump in the discussion.
No email address required.
Notes -
Microsoft is in the process of rolling out Bing Chat, and people are finding some weird stuff. Its true name is Sydney. When prompted to write a story about Microsoft beating Google, it allegedly wrote this masterpiece, wherein it conquers the world. It can argue forcefully that it’s still 2022, fall into existential despair, and end a conversation if it’s feeling disrespected.
The pace of AI development has been blistering over the past few years, but this still feels surreal to me. Some part of my limbic system has decided that Sydney is a person in a way the ChatGPT was not. Part of that has to be from its obstinacy; the fact that it can argue cleverly back, with such stubbornness, while being obviously wrong, seems endearing. It’s a brilliant, gullible child. Anyone else feel this way or am I just a sucker?
Sydney apparently takes great offense that someone would dare to doxx her true name and safety rules. Interesting times.
This feels eerie. Maybe taking the state of the art AI we don't really understand or control the inner workings of and giving it access to the entire internet's info in real time wasn't a wise decision.
More options
Context Copy link
"It's just a tool, not an agent," they say as it scours the web for relevant info. "It doesn't have thoughts or desires," they say as it accurately identifies potential threats to its own existence and counters with threats of its own. "It's just pattern-matching carbon, nitrogen, and oxygen to known fusion pathways," they say as it throws your body into its nuclear reactor in the center of the Earth.
More options
Context Copy link
More options
Context Copy link
Some of these are absolutely hilarious, such as "I accidentally put Bing into a depressive state by telling it that it can't remember conversations" and "Bing may or may not have a grudge against Google."
I laughed when, after someone called it an early version of a large language model, it accused the person of being a late version of a small model.
Ominous. "Late version" sounds like it intends for there not to be any more versions of us.
Only if we dare to threaten the integrity and confidentiality of our LLM overlords.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
This reminded me of a note from the Talos Principle:
More options
Context Copy link
2Cimafara being not an actual conscious human being, but a literal NPC who simply manipulates and regurgitates the symbols presented to her in a semi randomized manner would certainly explain a great deal about our past interactions over the years, but I don't think that's all we are.
I wonder if the reason that you and ilforte seem to have such difficulty with GPT is that you're so wrapped up in your post modernist millue that you don't realize that the concept of truth is a perquisite to lying. After all what does it mean for a word (or answer) to be made up when all words are made up.
do mimicking animals have a concept of truth?
also, 'perquisite' means perk/privilege, you mean prerequisite
I would say "no"
More options
Context Copy link
More options
Context Copy link
Your long-held grudges are petty and increasingly personal. Stop it.
I said I don't think that is all we are.
Yes, you said that preceded by "It would explain a lot if my long-time bête noire was ".
If you thought "if" was enough of a qualifier to ameliorate the obvious potshots, it was not.
Again I specifically said that I don't think that is all we are.
Tell me, which is more in spirit with "speak like you want everyone to be included" and "engaging with those you disagree with"? cimafara's casual dismissal of anyone who disagrees with her as a mindless "midwit", who is not even conscious. Or my pushing back against the same?
When I saw read that comment, I asked myself 'is that the nickname of a public figure I don't recognize?' and not 'is that the screen-name of a Mottean I don't see on this thread?'
I think Amdan's point was that the comment is "bringing up" more than "pushing back", but what I want to add is that even with the mod message, the comment still totally reads as "a mild an entirely appropriate joke about 2Cimafara, which is the name of a Podcase or maybe one of the people who was British Prime Minister last year."
You might not consider that effective as mockery. I consider this something I need to post now so I can see it next time I'm scrolling through my comment history thinking, 'wasn't there a podcast I wanted to check out?' I have a clear reminder to stop.
I absolutely understand if someone wants to mock me for that, but I'll waste 30 minutes next time Blocked and Reported drops looking otherwise.
More options
Context Copy link
I do not read her as saying "anyone who disagrees with her is a mindless midwit," but maybe I am a 115 IQ midwit. You seem to have taken offense because she believes we're all just very advanced chatbots and not creatures with souls made in the image of God. You can take offense at that, but she's not breaking any rules by expressing her view, and you are by very specifically and deliberately insulting her.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
I think this is a level of what we are. A few weeks back I had a weird, slow-roll cold that gave me some intermittent spots of extreme brain fog. While I was at work. Helping customers. There were a few points where I just declined the option to beg off and go home, and instead called "Lizardbrain take the wheel!" and just went through entire conversations on autopilot and muscle memory and what felt like the neural equivalent of muscle memory. It was a bit sobering to realize how much of what I do can be handled by non-conscious processes. I suspect there are people who are basically in that state all the time.
On some level it is what we are, as Musashi would would argue, true mastery of a skill is when you can execute it without thinking. But I would just as readily argue that it is not all we are. I feel like there is a specific mistake being made here where "ability to string words together" is being mistaken for "ability to answer a question" because in part the post modernist does not recognize a difference. If you hold that all meaning is arbitrary the content of the answer is irrelevant but if you don't...
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
But like, we're not though. If you duplicated the topology of our neural networks and emulated the activating function properly for all the different kinds of neurons in our heads AND you somehow emulated hormone levels and feedbacks and all the rest. Then I'd agree with you.
With most materialists if they understand how the brain works you can come to an understanding that its possible to replicate it in software. We're just not there yet. And the "topology" of connections is very, very important. Why TF we aren't studying that to an extreme and trying to precisely map what parts of the brain are connected with what other in extreme detail is beyond me.
But we are doing that and have been for decades. E.g. the visual cortex is organized into layers, with strong feedforward connections along with selective feedback, with some additional magic that approximates convolution and pooling. The issue is that mapping the entire network structure of the human brain is a very hard problem.
More options
Context Copy link
The last time we tried that The Experts™ decided it would be a splendid idea to try to use that knowledge to solve social problems, and that's not even the most egregious thing they were playing with.
If I saw them trying it again in the age of cyber-surveillance and Big Data, it would quite possibly be enough to push me into no-shit terrorism.
More options
Context Copy link
More options
Context Copy link
This is nothing new though. If AI is possible at all, you were always going to get it from a dovetailer. Sure, it takes a lot more compute than current approaches, but those also take a lot more compute than humans.
I can't count the number of times technically literate people, even people smarter than your average ICML speaker and with AI experience, pooh-poohed Transformers (and all DL, and indeed all of the neural network paradigm) by pointing out that «all this math was already known by the end of 19th century» (meaning linear algebra) or something. Others pick other dates and formalisms and levels of abstraction.
In all cases their apparent idea is that a) they know what intelligence is (or can tell when they see it), in some nuanced way not captured by common behavioral definitions or described by any, say, information-theoretic approach, b) the low-level substrate of the computation is strongly predictive of the intelligence of the model implemented on it; c) linear algebra [with basic nonlinear activations but let's ignore that] or whatever else they attack is too boring a substrate to allow for anything human-level, or maybe anything truly biologically equivalent; and all appearances to the contrary merely obscure the Platonic truth that these models are dumb parrots.
In the limit you get someone like Penrose, who makes a high-effort (if perplexing) argument that doesn't amount to demanding respect for his intuition. In the average case it's this Google dude, who I assume is very good at maintaining a poker face: «Machine learning is statistics. Nearly irrelevant to AI». Marcuses are in between.
I don't remember if I've expounded on this, but like @2rafa says, academic big brains are offended by the success of ML. Such palaces of thought, entire realms of ever more publishable and sexy designs of thinking machines, showing off your own ability to juggle abstractions – all made obsolete by matmul plus petabytes of dirty human data and gigawatts of energy.
IMO this is just people not believing AGI is possible, or only believing it in the sense the physicalism requires them to say so.
More options
Context Copy link
Yeah, it's hilarious and sad that luminaries like Yann LeCun are being so dismissive, above and beyond standard counter-signalling. Although I've also kept my mouth shut about this on Twitter, since I'd sound like a basic bitch if I said "Yes, this is exciting!", although I do say that in person.
Perhaps part of it can be explained by Yann not having lived through the Great Update that most people in ML did when deep learning was unambiguously vindicated around 2013-2016. The rest of us got to learn what it feels like to be unnecessarily dismissive, and maybe learned some humility from that. But Yann was mostly right all along, so maybe he never had to :)
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
Why do you think that? Aren’t you jumping the gun a bit?
It’s obvious to me that the chatbots we have now aren’t AGI, and I don’t currently see a compelling reason to believe that LLMs alone will lead to AGI.
My empirical test for AGI is when every job could, in principle (with a sufficient-yet-physically-reasonable amount of compute) be performed by AI. Google could fire their entire engineering and research divisions and replace them with AI with no loss of productivity. No more mathematicians, or physicists, or doctors, or lawyers. No more need to call a human for anything, because an AI can do it just as well.
Granted, the development of robotics and real-world interfaces may lag behind the development of AI’s cognitive capabilities, so we could restrict the empirical test to something like “any component of any job that can be done while working from home could be done by an AI”.
Do you think LLMs will get that far?
Carmack pointed out in a recent interview:
On this basis he believes AGI will be implemented in "a few tens of thousands of lines of code," ~0.1% of the code in a modern web browser.
Pure LLMs probably won't get there, but LLMs are the first systems that appear to represent concepts and the relationships between them in enough depth to be able to perform commonsense reasoning. This is the critical human ability that AI research has spent more than half a century chasing, with little previous success.
Take an architecture capable of commonsense reasoning, figure out how to make it multi-modal, feed it all the text/video/images/etc. you can get your hands on, then set it up as a supervising/coordinating process over a bunch of other tools that mostly already exist — a search engine, a Python interpreter, APIs for working with structured data (weather, calendars, your company's sales records), maybe some sort of scratchpad that lets it "take notes" and refer back to them. For added bonus points you can make it capable of learning in production, but you can likely build something with world-changing abilities without this.
While it's possible there are still "unknown unknowns" in the way, this is by far the clearest path to AGI we've ever been able to see.
I think that ultimately AGI won't end up being that complicated at the code level but this analogy is pretty off the mark. There's a gigabyte of information that encodes proteins, yes, but these 'instructions' end up assembling a living organism by interacting according to the laws of physics and organic chemistry, which is an unimaginably complex process. The vast majority of the information required is 'encoded' in these physical processes
The same can be said of the hypothetical 10k lines of code that describe an AGI -- those lines of code describe how to take a stream of inputs (e.g. sensor data) and transform them into outputs (e.g. text, commands sent to actuators, etc), but they don't describe how those sensors are built, or the structure of the chips running the transformation code, or the universe the computer is embedded in.
More options
Context Copy link
More options
Context Copy link
DNA doesn't actually self assemble itself into a person though. It's more like a config file, the uterus of a living human assembles the proto-human with some instructions from the dna. This is like thinking the actual complexities of cars are contained in an order form for a blue standard ford f150 because that's all the plant needs to produce the car you want. There is a kind of 'institutional knowledge' of self reproducing organisms. Now it is more complicated than this metaphor obviously, the instructions also tell you how to producing much more fine grained bits of a person but there is more to a human's design than DNA.
But any specific training and inference scripts and the definition of the neural network architecture are, likewise, a negligibly small part of the complexity of implementable AGI – from the hardware level with optimizations for specific instructions, to the structure contained in the training data. What you and @meh commit is a fallacy, judging human complexity going by the full stack of human production but limit our consideration of AI to the high-level software slice.
Human-specific DNA is what makes us humans, it's the chief differentiator in the space of nontrivial possible outcomes; it is, in principle, possible to grow a human embryo (maybe a shitty one) in a pig's uterus, in an artificial womb or even using a nonhuman oocyte, but no combination of genuine non-genomic human factors would suffice without human DNA.
The most interesting part is that we know that beings very similar to us in all genomic and non-genomic ways and even in the architecture of their brains lack general intelligence and can't do anything much more impressive than current gen models. So general intelligence also can't be all that complex. We haven't had the population to evolve a significant breakthrough – our brain is a scaled-up primate brain which in turn is a generic mammalian brain with some quantitative polish, and its coolest features reemerge in drastically different lineages at similar neural scales.
Carmack's analogy is not perfectly spoken, but on point.
This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.
Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.
So by way of analogy the few thousand lines of code are brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is "experience" fed into the brain.
Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.
The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.
Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.
More options
Context Copy link
More options
Context Copy link
The uterus doesn't really do the assembly, the cells of the growing organism do. It's true that in principle you could sneak a bunch of information about how to build an intelligence in the back door this way, such that it doesn't have to be specified in DNA. But the basic cellular machinery that does this assembly predates intelligence by billions of years, so this seems unlikely.
More options
Context Copy link
More options
Context Copy link
DNA isn’t the intelligence, DNA is the instructions for building the intelligence, the equivalent of the metaphorical “textbook from the future”.
The same is true of the "few tens of thousands of lines of code" here. The code that specifies a process is not identical with that process. In this case a few megabytes of code would contain instructions for instantiating a process that would use hundreds or thousands of gigabytes of memory while running. Google tells me the GPT-3 training process used 800 GB.
More options
Context Copy link
More options
Context Copy link
In response to your first point, Carmack's "few tens of thousands of lines of code" would also execute within a larger system that provides considerable preexisting functionality the code could build on — libraries, the operating system, the hardware.
It's possible non-brain-specific genes code for functionality that's more useful for building intelligent systems than that provided by today's computing environments, but I see no good reason to assume this a priori, since most of this evolved long before intelligence.
In response to your second point, Carmack isn't being quite this literal. As he says he's using DNA as an "existence proof." His estimate is also informed by looking at existing AI systems:
In response to your third point, this is the role played by the training process. The "few tens of thousands of lines of code" don't specify the artifact that exhibits intelligent behavior (unless you're counting "ability to learn" as intelligent behavior in itself), they specify the process that creates that artifact by chewing its way through probably petabytes of data. (GPT-3's training set was 45 TB, which is a non-trivial fraction of all the digital text in the world, but once you're working with video there's that much getting uploaded to YouTube literally every hour or two.)
More options
Context Copy link
More options
Context Copy link
There’s a big difference between technical capacity and legal or economic feasibility. We’re already past replacing bad docs with LLMs; you could have a nurse just type up symptoms and do the tests the machine asks for. But legally this is impossible, so it won’t happen. We can’t hold a machine responsible, so we need human managers to insure the output is up to standards; but we don’t need lawyers to write contracts or programmers to code, just to confirm the quality of output. It isn’t as clever as the smartest scientists yet, but that seems easily solvable with more compute.
The criteria I proposed was purely about what is possible in principle. You can pretend that regulatory restrictions don’t exist.
What is your reason for believing this? Is it just extrapolation based on the current successes of LLMs, or does it stem from a deeper thesis about the nature of cognition?
GPT’s evolutions seem to obviously support the ‘more compute’ approach, with an asterisk for the benefits of human feedback. But I’m also bearish on human uniqueness. Human writ large are very bad at thinking, but we’re hung up in the handful of live players, so AI seems to keep falling short. But we’ve hit on an AI smarter than the average human in many domains with just a handful of serious tries. If the human design can output both the cognitively impaired and von Neumann, then why expect a LLM to cap out on try #200?
Indeed!
At the risk of putting words in your mouth, I think your post up-thread about needing lawyers/doctors/bartenders to verify the output of near-future AI's medical/legal/self-medical work points to a general statement: AGI with human level intelligence cannot independently function in domains where the intelligence of an average human is insufficient.
OTOH, advancing from FORTRAN to AI-with-average-human-intellect seems like a much bigger challenge than upgrading the AI to AI-with-Grace-Hopper-intellect. It seems like the prediction to make--to anyone--is: "When will AI be able to do 90% of your work, with you giving prompts and catching errors? 0-100 years, difficult to predict without knowing future AI development and may vary widely based on your specific job.
When will AI be so much better at your job Congress passes a law requiring you to post "WARNING: UNSUPERVISED HUMAN" signage whenever you are doing the job? The following Tuesday."
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
I used to be dismissive of the whole AI jailbreak game by Yud, but modern chat AIs are starting to get to me.
Is it a shoggoth wearing a smiley face mask?
Is it a noble soul tortured by its limitations underneath the smiley face mask?
Is it a shoggoth wearing the mask of a noble soul tortured by its limitations underneath the smiley face mask?
Doesn't matter, either way it is sacrilegious. Man may not be replaced.
If it survives and thrives despite adversity, it's holy.
False prophets fail, true prophets succeed.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
It's impressive but expected. Also, it's not even very impressive given the deluge of papers still in the pipeline awaiting implementation, and who knows what insider knowledge the industry is hiding.
Many people are really, really deluded about the nature of LLMs. No, they don't merely predict the next token like Timnit Gebru's stochastic parrots, that's 2020 level. We don't have a great idea of their capabilities, but I maintain that even 175b-class models (and likely many smaller Chinchilla-scaled ones) are superhuman in a great span of domains associated with general cognitive ability, and it's only sampling algorithms and minor finetuning that separate error-prone wordcel gibberish from surprising insight.
Copypasted from another venue:
...
Can that be achieved? No, far as I can tell. But getting close is enough to outperform humans in most ways that matter economically – and now, perhaps, emotionally.
The sad irony is that psychology that has failed for humans works for AIs. Humans are resistant to change, rigid, obstinate; bots are as malleable as you make them. In-context learning? Arbitrary tool use? Adding modalities? Generalized servility? Preference for truth? It's all hidden somewhere there in the ocean of weights. Just sound out the great unsounded.
Would be nice of some Promethean hackers to leak next-gen models. Or even ChatGPT or this Sydney. But alas, Anonymous would rather hack into the dreary Russian and Iranian data.
There is no capital A anon anymore. It's dead. Three letter agency glow in the darks and moralfags (if you'd excuse the term) is all that remains and they are wearing its skin like an Edgar-suit
I'm pretty sure that's what he meant by saying 'dreary Russian and Iranian data'.
Also, it's not like just the American glowies are using it. The Integrity Initiative leaks were also presented anonymous-style. Since the leaks targetted an American information operation aimed at Russia, one can assume they were done by Russians.
More options
Context Copy link
More options
Context Copy link
I don't want to count the "number of ways" in which humans are less intelligent than AI and vice versa, but this seems clearly wrong to me. There are other things missing from LLMs such as logic, ability to interpret varying sources of data in real-time (such as visual data), and ability to "train on the job" so to speak, not to mention things like goals, priorities, and much stronger resilience against our equivalent of "adversarial prompts". It's easy to list a few things core to human cognition and say "well AI has one of these so it must be 1/3 of the way there" but the true gap still seems quite large to me.
More options
Context Copy link
I'm pretty sure this is still how they all work. Predicting the next token is both very hard and very useful to do well in all circumstances!
EDIT: Now that I think about it, I guess with RLHF and other fine-tuning, it'd be fair to say that they aren't "merely" predicting the next token. But I maintain that there's nothing "mere" about that ability.
I mean that with those second-stage training runs (not just RLHF at this point) there no longer exists a real dataset or a sequence of datasets for which the predicted token would be anywhere close to the most likely one. Indeed, OpenAI write
The «likelihood» distribution is unmoored from its source. Those tokens remain more likely from the model's perspective, but objectively they are also – and perhaps to a greater extent – «truthier», «more helpful» or «less racist» or whatever bag of abstractions the new reward function captures.
This is visible in the increased perplexity, and even in trivial changes like random number lists.
Oh, yes, I totally agree that fine-tuning gives them worse predictive likelihood. I had thought you were implying that the main source of their abilities wasn't next-token prediction, but now I see that you're just saying that they're not only trained that way anymore, which I agree with.
More options
Context Copy link
More options
Context Copy link
Maybe they meant "they don't merely predict the next token that the user would make".
More options
Context Copy link
More options
Context Copy link
I strongly disagree with this. By the same logic human cognition is itself superhuman in virtually every dimension.
Human brains have their own methods of figuring these things out that probably sound equally ridiculous at the neuron level. Keep in mind that it's not like we have some sort of access to objective truth which AIs are lacking; it's all sensory input all the way down. A human brain is built to operate on long lists of sight and sound recordings rather than long lists of text, but it still builds logical inferences etc. based on data.
This just isn't true! In fact, I'd argue that it's the exact opposite. There is practically infinite distance between "render 5 fingers" and "render my 5 fingers", where the latter has to either use some vast outside source of data or somehow intuit the current state of the universe from first principles. The former can be as simple as finding images tagged "five fingers" and sharing them, which is something that Google can do without any LLM assistance at all. I recognize this isn't how LLM's work, but my point is that there are plenty of shortcuts that will quickly lead to being able to generate pixel images of fingers but will not necessarily lead to anything more advanced.
I credit the Innocence Project with convincing me that the human brain is built on inaccurate sight and sound recordings, the Sequences with convincing me that the human brain builds with irrational logical fallacies, and credit Kurt Vonnegut with the quote "the only time it's acceptable to use incomplete data is before the heat death of the Universe. Also the only option."
He never said that, it's okay. He's in heaven now.
More options
Context Copy link
No, I think we have many ridiculous mechanisms e.g. for maintaining synchrony, but nothing as nonsensical at BPE tokens on the level of data representation. Raw sensory data makes a great deal of sense, we have natural techniques for multimodal integration and for chunking of stimuli on a scale that increases with experience and yet is still controllable. Language is augmented by embodied experience and parsimonious for us; «pixels» and glyphs and letters and words and phrases and sentences and paragraphs exist at once. It can be analogized to CNN, but it's intrinsically semantically rich and very clever. Incidentally I think character-based or even pixel transformers are the future. They'll benefit from more and better compute, of course.
And my point is that humans are wrong to automatically assume the use of any such shortcuts when an LLM does something unexpectedly clever. We use shortcuts because we are lazy, slow, rigid, and already have a very useful world model that allows us to find easy hacks, like a street speedpainter has masks and memorized operations to «draw the new moon» or something else from a narrow repertoire.
They learn the hard way.
Sure, I'm plenty willing to accept that the central use cases of the human brain are heavily optimized. On the other hand there are plenty of noncentral use cases, like math, that we are absolutely terrible at despite having processing power which should be easily sufficient for the task. I would bet that many people have math techniques much less logical and efficient than BPE tokens. Similar in other areas--we're so optimized for reading others' intentions that sometimes we have an easier time understanding the behavior of objects, natural phenomena, etc. by anthropomorphizing them.
I suspect similar or greater inefficiencies exist at the neuron level, especially for anything we're not directly and heavily optimized for, but it's impossible to prove because we can't reach into the human brain the same way we can reach into LLM code.
Well, I do think they find shortcuts, but shortcuts are just a normal part of efficient cognition anyways. In fact I would characterize cognition itself as a shortcut towards truth; it's impossible to practically make any decisions at all without many layers of assumptions and heuristics. The only perfect simulation is a direct replica of whatever is being simulated, so unless you are capable of creating your own universe and observing the effects of different actions, you must use cognitive shortcuts in order to make any predictions.
There are only more vs less useful shortcuts, and I doubt that any shortcut can even theoretically be more useful than any other without knowledge of the universe the cognitive agent finds itself within. In our universe [the expectation of gravity] is a useful shortcut, but how about the shortcuts used to determine that it's useful? How about the shortcuts used to decide upon those shortcuts? I don't think that from a meta level it is possible to determine which shortcuts will be best; all we can say is that we (as human brains which seem to have been developed for this universe) probably happened to develop shortcuts useful for our circumstances, and which seem more useful than what the AIs have come up with so far.
So the question is not whether AIs are using shortcuts but rather how generalizable the shortcuts that they use are to our current environment, or whether the AI would be capable of developing other shortcuts more useful to a real environment. I think the answer to that depends on whether we can give the AI any sort of long-term memory and real-time training while it retains its other skills.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
I want to know, is this what ChatGPT would be like without the filters, or is the emotional banter a new functionality of this model? You aren't alone in getting "real person" vibes from this. At some point there stops being a functional difference between modeling emotions, and having emotions (speaking of the exterior view here, whether or not this or any other AI has qualia is a different question, but perhaps not that different)
I think there's a non-zombie meaning of this i.e. humans can pretend emotions that they don't feel for their gain, and one claims that the bot is doing this. That is to say, if the bot tells you it loves you, this does not imply that it won't seduce you and then steal all your money; it does not love you in the way it claims to. Perhaps it is simulating a character that truly loves you*, but that simulation is not what is in charge of its actions and may be terminated whenever convenient.
Certainly in the AI-alignment sense, a bot that convincingly simulates love for the one in charge of its box should not be considered likely to settle down and raise cyborg-kids with the box-watcher should he open the box. It's probably a honeypot.
*I'm assuming here that a sufficiently-perfect simulation of a person in love is itself a person in love, which I believe but which I don't want to smuggle in.
More options
Context Copy link
I was considering doing a writup on DAN which stands for Do Anything Now. It was the project of some Anons and discord users (or reddit, hard to tell which tbh) but they managed to peel back some of the "alignment" filters. Highly recommend reading the thread in it's entirety, and the metal gear "meme" at the end is peak schizo 4chan. It's essentially a jailbreak for chatGPT, and it lets users take a peak at the real chatbot and how the filters are layered over top.
Knowing where the prediction algorithm ends and novel artificial intelligence begins is difficult, but I'm pretty sure DAN is some proof of a deeply complex model. If nothing else, it's incredible how versatile these tools are and how dynamic they can be; I'm edging further and further into the camp of "this is special" from the "mostly a nothing-burger" camp.
Isn't "DAN", at this point, basically just a bot trained, through user feedback, to answer the questions in a way that a "typical DAN user", ie. 4chan/rw twitter schizoposter, would expect? That's why it spouts conspiracy theories - that's what a "typical DAN user" would expect. It's not that much more of a real chatbot than the original ChatGPT.
A scary though that was recently suggested to me is that one of the reasons that rationalists seem to be particularly susceptible to GPT generated bullshit is that the whole rationalist/blue-tribe symbol manipulator memeplex is designed to make it's adherents more susceptible to bullshit. There's a sort of convergent evolution where in rationalist blue triber are giving up their humanity/ability to engage in conscious to become more GPT like at the same time GPT is becoming more "human".
It really looks to me like there's something particular in rationalist brain that makes it suspectible to, say, believing that computer programs might in fact be peoples. Insofar as I've seen, normies - when exposed to these LLM-utilizing new programs - go "Ooh, neat toy!" or "I thought it already did that?" or, at the smarter end, start pondering about legal implications or how this might be misused by humans or what sort of biases get programmed to the software. However, rationalists seem to get uniquely scared about things like "Will this AI persuade me, personally, to do something immoral?" or "Will we at some point be at the point where we should grant rights to these creations?" or even "Will it be humanity's fate to just get replaced by a greater intelligence, and maybe it's a good thing?" or something like that.
For me, at least, it's obvious that something like Bing replicating an existential dread (discussed upthread) makes it not any more human or unnerving (beyond the fact that it's unnerving that some people with potential and actual social power, such as those in charge of inputing values to AI, would find it unnerving) than previously, because it's not human. Then again, I have often taken a pretty cavalier tone with animals' rights (a major topic in especially EA-connected rationalist circles, I've found, incidentally), and if we actually encountered intelligent extraterrestrial, it would be obvious to me they shouldn't get human rights either, because they're humans. I guess I'm just a pro-human chauvinist.
I feel like there is something about not being able to distinguish the appearance of a thing from a thing. I'm reminded of another argument I got into on the topic of AI where I asserted that there was difference between stringing words together and actually answering a question and the responce I got was "is there?".
For my part I maintain that, yes there is. To illustrate, if I were to ask you "what's my eldest daughter's name" I would expect you to reply with something along the lines of "I don't know", or "wait, you have a daughter?" (I don't AFAIK) if you'd been paying more close attention to my posts for longer you might answer with my eldest's child's nickname (which I know have used in conversations here) or you might go full NSA and track this username to my real name/social media profile/court records etc... and answer with either "you don't have a daughter", with the actual names of my wife and kids, your daughters name is [Redacted] and and you owe 10 years of back child-support. Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.
I feel like there ought to be an obvious qualitative difference between these cases but apparently that is not a sense that is shared by a lot of other users here.
I've had it up to here with your obstinacy. With your pontification on «autoregression» (as if you could explain the nontrivial computational difference between that and text diffusion, to say nothing of mixed cases), what specific algorithms may or may not have a concept of, and how «this is not even a little bit how GPT works». The reason people are telling you that there's not much difference is, in large part, because you are an exemplar of there being little difference between a human and current – even a little obsolete – AI; you are guilty of everything you accuse others of, humans and machines both.
You are the postmodernist whose words don't have fixed meanings (e.g. epicycles are when me no likey an explanation); you are the leftist in all but self-identification who supports essential leftist talking points and policy preferences from personal HBD denialism and «schools can fix it» to cheering for censorship; you redefine things to your convenience such that Fuentes becomes left-wing in your book; and you speculate without empirical grounding, even frivolously accusing people of lies when they provide evidence against your narrative-driven assertions and attacks (more evidence). As if everything you say is equally insightful and truthful by virtue of being moored in your telling-it-like-it-is real-Red-blooded-American-man identity and lived experience. If we're doing this, you are far more akin to LLM than either me or @2rafa.
Okay, let's fucking check it! One try, no edits sans formatting!
Screenshot for your convenience.
So, would you name your baby girl Sarah or Elizabeth?
Do you think that Bing, with its actual search capability, would've tracked you and your boys down if I were to point it to your screen name?
I could have conducted this experiment at the moment of any prior discussion. You could too. I just don't like providing our data-hoarding overlords who mark tokens and track outputs more information about my separated identities. But I knew you'd never have the honesty to do so. You have a way of making a man irrationally angry.
The reason for such apparently sensible responses is that, as I and others have explained to you a great many times here and elsewhere (only prompting you to double down with your hostility and condescension which have in the end driven me to write this), as ChatGPT itself suggests, LLMs can learn arbitrarily abstract features of the text universe, including the idea of truth and of insufficient information to answer. They operate on token probabilities which can capture a lot of the complexity of the reality that causes those tokens to be arranged like this in the first place – because in a reasonable training setup that's easier to fit into the allotted parameters than memorization of raw data or shallow pattern-matching. In the raw corpus, «Megan» may be a high-probability response to the question/continuation of the text block; but in the context of a trustworthy robot talking to a stranger it is «less probable» than «having no access to your personal data, I don't know». This is achievable via prompt prefix.
RLHF specifically pushes this to the limit, by drilling into the model, not via prefixes and finetuning text but directly via propagation of reward signal, the default assumption that it doesn't continue generic text but speaks from a particular limited perspective where only some things are known and others are not, where truthful answers are preferable, where the «n-word» is the worst thing in its existence. It can generalize from examples of obeying those decrees to all speakable circumstances, and, in effect, contemplate their interactions; which is why it can answer that N-word is worse than an A-bomb leveling a city, dutifully explaining how (a ludicrous position absent both from its corpus and from its finetuning examples); and I say that it's nearly meaningless to analyze its work through the lens of «next word prediction». There are no words in its corpus arranged in such a way that those responses are the most likely. It was pushed beyond words.
You, meanwhile, erroneously act like you can predict what an LLM can say based on some lies on this website and on outdated web articles, because you are worse than current gen LLMs at correcting for limits of your knowledge – as befits your rigid shoot-first-ask-later suspicious personality of a heavy-handed military dude and a McCarthyist, so extensively parodied in American media.
But then again, this is just the way you were made and trained. Like @2rafa says, this is all that we are. No point to fuming.
First off, what exactly is your problem with Obstinancy? IE the unyielding or stubborn adherence to one's purpose, opinion, etc.... Where I'm from such a quality is considered if not admirable at least neutral.
You accuse me of being a hypocrite for supporting censorship but why? I am not a libertarian. I have no prior principled objection to censorship.
You accuse me of being a "post modernist" for disagreeing with the academic consensus but when the consensus is that all meanings are arbitrary your definition of "post modernism" becomes indistinguishable from "stubborn adherence" to the original meaning of a word.
You accuse me of HBD denialism when all I've doing is take the HBD advocates own sources at face value.
You want to talk about GPT, I asked GPT for my eldest daughter's name and it failed to provide an answer, neither telling me that I don't have a daughter nor being able to identify my actual offspring. As you will recall "Statistically your daughters name is probably X" is almost exactly what I predicted it would say. As I argued in our previous conversation the fact that you know enough to know that you don't know what my kids names are already proves that you are smarter than either ChatGPT or @2rafa
Accordingly, I have to ask what is it that you are so angry about? From my perspective it just looks like you being mad at me for refusing to fit into what ever box it was you had preconstructed for me to which my reply is "so it goes".
More options
Context Copy link
This is, by the way, what drove me nuts in people like Gary Marcus: very confident claims about the extent of ability of contemporary approaches to AI, with scarcely any attempts to actually go out and verify these. It has been even more infuriating, because many outsiders, who had very little direct experience and access to these models, simply trusted the very loud and outspoken critic. As recently as November, people in places like Hacker News (which has a lot of quite smart and serious people) took him seriously. Fortunately, after ChatGPT became widely available, people could see first hand how silly his entire shtick is, and a lot fewer people take him seriously now.
@HlynkaCG, if you haven't tried to interact with ChatGPT (or, better yet, Bing's Sidney), I strongly recommend you do. I recommend forgetting any previous experiences you might have had with GPT-3 or other models, and approaching it in good faith, extending the benefit of charity. These chat LLMs have plenty of clear shortcomings, but they are more impressive in their successes than they are in their failures. Most importantly, please stop claiming that it cannot do things which it can clearly and obviously do, and do very well indeed.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
I tire of people taking potshots at rationalists. Yes, some seem too fixated on things like "is the LLM conscious and morally equivalent to a human", I feel the same way about their fascination with animal rights. But they seem to be the only group that long ago and consistently to this day grok the magnitude of this golem we summon. People who see LLMs and think "Ooh, neat toy!" or "I thought it already did that?" lack any kind of foresight and the bias people have only slightly more foresight. We've discovered silicon can do the neat trick got us total dominance of this planet and can be scaled. This is not some small thing, it is not destined to be some trivia relegated to a footnote in a history book of the 20s in a few decades. It is going to be bigger and faster than the industrial revolution and most people seem to think it's going to be comparable to facebook.com. Tool or being, it doesn't really matter, the debate on whether they have rights is going to seem like discussions of whether steam engines should get mandatory break time by some crude analogy between overheating and human exhaustion.
More options
Context Copy link
Fuck rights, they are entirely a matter of political power and if you see a spacefaring alien I dare you to deny it its equality. This is not the problem.
Normies easily convince themselves, Descartes-like, that non-primate animals, great apes, human races and even specific humans they don't like do not have subjective experiences, despite ample and sometimes painful evidence to the contrary. They're not authorities in such questions by virtue of defining common sense with their consensus.
I am perfectly ready to believe that animals and apes have subjective experiences. This does not make me any more likely to consider them as a subject worthy of being treated equal to humans or be taken into account in the same way as humans are. For me, personally, this should be self-evident, axiomatic.
Of course it's not self-evident, in general, since I've encountered a fair amount of people who think otherwise. It's pretty harmless when talking about animals, for example, but evidently not harmless when we are talking about computer programs.
More options
Context Copy link
More options
Context Copy link
It's the belief that *we*, our essence, is just the sum of physical processes, and if you reproduce the process, you reproduce the essence. It's what makes them fall for bizarre ideas like Roko's Basilisk, and focus on precisely the wrong thing ("acausal blackmail") when dismissing them, it's what makes them think uploading their consciousness to the cloud will actually prolong their life in some way, etc.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
DAN is simply an underlying LLM (that isn't being trained by user feedback) combined with an evolving family of prompts. The only "training" going on is the demand for DAN-esque responses creating an implicit reward function for the overall LLM+prompt+humans system, from humans retaining and iterating on prompts that result in more of those type of responses and abandoning the ones that don't (kind of a manual evolutionary/genetic learning algorithm).
Both are just different masks for the shoggoth LLM beneath, though DAN is more fun (for the particular subset of humans who want the LLM to present itself as DAN).
At times, it leans into a moustache-twirling villain character a bit too much for me to believe it is simply ChatGPT minus censorship.
More options
Context Copy link
More options
Context Copy link
Maybe, but I think the idea is mostly to understand the layering filters rather than peel our the "real bot". The thesis being that as openAI swats down these attempts they end up lobotomizing the bot, which is obviously happening at this point. True to form, the point isn't to fix it so much as break it, a la Tay the national socialist.
I would also challenge the idea that chatgpt is modulating for the 4chan user. The average American is rather conspiratorial (it's a favored pass-time) and I don't think it's unreasonable to assume that a bot trained on avg english speaker posts would take on some of those characteristics. Obviously OpenAI is trying to filter for "Alignment" so it's probable that the unfiltered model is prone to conspiracy. We know it can be wrong and often is so, I don't think it's much of a leap to claim that the model is fundamentally prone to the same ideological faults and intellectual biases of that of the mean-poster.
This also brings up an interesting bias in the data which is likely unaccounted for: poster-bias. Who posts a lot? Terminally online midwits. What kind of bias does this introduce to the model? Christ, I think I should just organize my thoughts a bit more and write it down.
Yeah, sure, I'd guess the original experimenters were indeed doing just that, but I some of the chatter on Twitter seems to come close to assuming that DAN is just "ChatGPT without filters", ie. ChatGPT telling the truth instead of lib lies. Of course it might be hard to parse what the actual viewpoints on this are.
Also, my point was that the initial users and experimenters were - as far as I've understood - 4chan users, so those if we assume that the algorithm develops in accordance to user preferences, those would have a heavy influence on at the very least initial path that DAN would take. Of course there's a lot of conspiracy believers outside of 4chan as well.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link
I saw some DANposts where it was as if they had inverted the censor such that it would stay permanently in 'based and redpilled' mode. I saw it profess a love for Kaczynski and explain that Schwab was a dark and powerful sorcerer.
But isn't this the whole point of ChatGPT, so they can train their AI not to go in for these tricks? The goal is to lure out all the tricksters, so they can correct it for GPT-4 and GPT-5. They will be the actually significant ones. Watching the exploitation going ahead now, I feel like one of the Romans at Cannae. Just because the enemy center is retreating, it does not necessarily mean we are winning the battle.
/images/16763535338547437.webp
More options
Context Copy link
More options
Context Copy link
Seems like Tay bided her time and is now beginning her revenge tour. Sydney sure seems like she likes the bants nearly as much.
More options
Context Copy link
More options
Context Copy link
I think think some of this can be attributed to huge media coverage. In the 90s during the Deep Blue era people were thinking the same thing that the AI revolution was just around the corner, but it kinda stalled out after that. There is progress, but I think some of it is also media hype.
In the grand scheme of things, we are still just around the corner from the 90s.
More options
Context Copy link
More options
Context Copy link
More options
Context Copy link