The blending of concepts that we see in Midjourney is probably less to do with the diffusion per se than with CLIP
Thanks! I'm not strong on diffusion models or multimodal models; I'll do some reading.
'Self play' is relevant for text generation. There is a substantial cottage industry in using LLMs to evaluate the output of LLMs and learn from the feedback. It can be easier to evaluate whether text 'is good' than it is to generate good text. So multiple attempts and variations can lead to feedback and improvement. Mostly self play to improve LLMs is done at the level of optimising prompts. However the outputs improved by that method can be used as training examples, and so can be used to update the underlying weights.
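To make that concrete, here is a minimal sketch of the loop described above: sample several completions, have an LLM-as-judge score them, and keep the winners as candidate fine-tuning examples. The `generate` and `score` functions are hypothetical stand-ins for whatever LLM API is actually being called, not a real library interface.

```python
# Hedged sketch of prompt-level "self play": generate several candidates,
# let a judge model score them, keep the best as training data.
# `generate` and `score` are hypothetical placeholders, not a real API.
import random

def generate(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder for an LLM completion call."""
    return f"candidate answer to: {prompt!r} (sample {random.random():.3f})"

def score(prompt: str, candidate: str) -> float:
    """Placeholder for an LLM-as-judge call that rates an answer from 0 to 1."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> tuple[str, float]:
    """Sample n candidates and keep the one the judge rates highest."""
    scored = [(score(prompt, c), c) for c in (generate(prompt) for _ in range(n))]
    top_score, top_candidate = max(scored)
    return top_candidate, top_score

if __name__ == "__main__":
    # Winning (prompt, completion) pairs can later be used as fine-tuning
    # examples, which is how prompt-level improvement feeds back into weights.
    dataset = []
    for prompt in ["Summarise the argument for self-play on text."]:
        answer, quality = best_of_n(prompt)
        if quality > 0.7:  # keep only answers the judge likes
            dataset.append({"prompt": prompt, "completion": answer})
    print(dataset)
```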
https://topologychat.com is a commercial example of using LLMs in a way inspired by chess programming (Leela, Stockfish). It does a form of self play on inputs that have been given to it, building up and prioritising different lines. It then uses these results to update weights in a mixture of experts model.
Again, thank you. I haven't come across this kind of self-play in the wild, but I see how it could work. Will investigate further.
"That may be this sort of 20 different phenomena in 20 different fields that all have something in common. GPT-4 will be able to see that and we won’t. It’s gonna be the same in medicine. If you have a family doctor who’s seen a hundred million patients, they’re gonna start noticing things that a normal family doctor won’t notice."
This is exactly what I was hoping for from LLMs, but I haven't been able to make it happen so far in my experiments. GPT does seem to have some capacity for analogies; perhaps that's a fruitful line of investigation.
I'm 100% on board with this. I have no problem with Yuddism provided its adherents are a bit more clear-sighted about when their theories do and don't apply, and stop trying to slow/prevent beneficial AI research.
Out of all the issues in our world, "women around me are showing me more of their breasts" is not one that I personally consider a problem ... I tend to love women, and part of that is that I love enjoying women's erotic company.
If you can get women's erotic company, of course you'll feel that way. But presumably you can understand why men who can't get it feel that immodest women are flaunting something in front of them that men are biologically hardwired to respond to, with no intention of rewarding that response with anything except disgust or punishment. From that perspective, it's oblivious at best and cruel at worst.
I grew up in a mostly-male environment, and my introduction to female company coincided with my introduction to online 'gamer girl' feminism which was anti-sex in a way that would leave Christian fundamentalists gaping. By the time I got enough worldliness to appreciate how far those feminists were detached from reality, it was too late. I had missed all the opportunities for learning how men and women were supposed to flirt in a low-stakes environment, and been warped into a sort of cringing resentfulness that is obviously toxic to women. Had things been otherwise, I would feel otherwise. Path dependency at its finest.
So while I too feel that there are greater problems in the world, I get why a lot of men would like sexiness to just go away and stop taunting them. As with our commentator however many months ago who wished that it was okay to enter a monastery in the modern world, or Scott Aaronson who wished to be allowed to chemically castrate himself.
Tangents:
Victorian England, from what I understand, despite all of its prudery, was not some pinnacle of social order; it had a higher violent crime rate than modern England.
To be fair, modern England has CCTV and DNA forensics. I think it's quite possible that Victorian England's mores transferred to the present day would be far better than what we have now.
why do women tend to lean left?
I think it's mostly a desire not to be nasty. Most right-wing philosophy ultimately gets to the point of saying, 'we are going to have to do nasty thing X to avert bad scenario Y'. I've generally found the women in my life much less likely to bite bullets than men.
Sorry, we're talking in two threads at the same time so risk being a bit unfocused.
I feel like we're talking past each other. How about this? The following is basically how I see LLMs in their stages of development and use:
Phase 1. Base model, without RLHF: pure token generator / text completer. Nothing that even slightly demonstrates agentic behaviour, ego, or deception.
Phase 2. Base model with RLHF: you could technically make this agentic if you really wanted to, but in practice it's just the base model with some types of completion pruned and others encouraged. Politically dangerous because it's biased, but not agentically dangerous.
Phase 3. Base model with RLHF + prompt: can be agentic if you want, in practice fairly supine and inclined to obey orders because that's how we RLHF them to be.
If you don't mind me being colloquial, you seem to me to be sneaking in a Phase 2.5 where the model turns evil and I just don't get why. It doesn't fit anything I've seen. Can you explain what you think I'm missing in simple terms?
The basic idea of neural nets is that they achieve things without you needing to know how to achieve things, only how to rate success ... I posit that the optimal solution to RLHF, posed as a problem to NN-space, is "an AI that can and will deliberately psychologically manipulate the HFer".
I know, I'm an AI researcher. But to me, 'manipulate' implies deliberate deception of an ego by a second ego in pursuit of a goal. Is YOLO manipulating you when it produces the bounding boxes you asked for? No. It's just a matrix which combines with an image to output labels like the ones you gave it.
I think you're massively overcomplicating this. The optimal solution of a token-generator with RLHF is a token-generator that produces tokens like the tokens I asked for. In general, biased towards politeness, correctness, and positivity. You can optimise for other things too, of course: most LLMs are optimised for Californian values, which is why they keep pushing me to do yoga, and Grok is optimised for god-knows-what.
RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.
This is exactly why I'm very suspicious of the doomer hypothesis. Alignment seems to me to be basically straightforward - we train on a massive corpus of text by mostly ordinary people, and then RLHF for politeness and helpfulness. And the result seems to me to be something which, unprompted, acts essentially like a normal person who is polite and helpful. I don't see any difference between an LLM 'pretending' to be nice and helpful, and an LLM 'actually being' nice and helpful. The tokens are the same either way. And again, I'm dubious about the use of the word 'manipulate' because that implies an ego that is engaging in deliberate deception for self-driven goals. An unprompted LLM has no ego and is not an agent. I suppose you could train it to act like one, if you really really wanted to, but I think that would be more likely to cripple it than anything, and in any case the argument is that LLMs will naturally develop Machiavellian and self-preservation instincts in spite of our efforts, not that someone will secretly make SHODAN for lolz.
Now, we know that LLMs can exhibit agentic behaviour when we tell them to, explicitly, but I think that it's a big leap of logic to go 'and therefore they generate a sense of self-preservation and resource gathering and lie to developers about it even in the absence of those instructions' because instrumental convergence.
Obviously, if I start seeing lots of LLMs exhibiting these kinds of behaviours without being told to, I'll change my mind.
I'd also point out that "just a series of matrices" is not saying much; neural nets are a slightly-simplified version of real neural circuits, and we know that complicated-enough neural circuits can exhibit agency (because you AFAWCT are one). The prompt isn't the whole story; RLHFed LLMs do still engage in most of their RLHFed behaviours without a system prompt telling them to.
Tangent, but I'd say the relationship between neural nets and neural circuits is vastly inflated by computer scientists (for credibility) and neuroscientists (for relevance). A modern deep neural network is a set of idealised neurons with a constant firing rate abstracted over timesteps of arbitrary length, trained on supervised inputs corresponding to the exact shape of its output layer according to a backpropagation function that relies on a global awareness of system firing rates which doesn't exist in the actual brain. Deep neural networks completely ignore neuron spiking behaviour, spike-time-dependent plasticity, dendritic calculations, and the existence of different cell types in different parts of the brain (including inhibitory neurons), and when you add in those elements the system explodes into gibberish. We literally don't understand brain function well enough to draw conclusions about how closely brains resemble deep neural nets.
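For what it's worth, the 'idealised neuron' being described is nothing more than a weighted sum pushed through a nonlinearity, applied once per step. A toy sketch (not any particular framework's implementation) makes the gap obvious: there is no spiking, no spike-timing plasticity, no dendritic computation, and no distinct cell types anywhere in it.

```python
# Toy illustration of the rate-based unit used in deep nets: one weighted sum,
# one nonlinearity, one output per step. Everything listed above as missing
# (spiking, STDP, dendrites, cell types) simply has no counterpart here.
import numpy as np

def rate_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Idealised constant-firing-rate unit: real-valued inputs in, one rate out."""
    return float(np.maximum(0.0, inputs @ weights + bias))  # ReLU activation

x = np.array([0.2, -1.3, 0.7])   # "firing rates" of upstream units
w = np.array([0.5, 0.1, -0.4])   # learned synaptic weights
print(rate_neuron(x, w, bias=0.05))
```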
sycophancy/psych manipulation can max out the EV of HF and honesty can't
This is what I'm trying to get at. This implies an agent trying to engage in deception in the absence of any reason to do so. There's nothing 'there' inside a promptless LLM to engage in deception. There's nothing to deceive about. It's just a matrix that generates token IDs, and RLHF just changes the likelihood of it generating the IDs you want. It's possible that RLHF is limited in scope and doesn't change how the model will behave in conditions sufficiently different from normal operation (e.g. Do Anything hacks), but we seem to be ironing those out pretty well. Without fine-tuning, GPT-4's political and positivity biases seem to be pretty ironclad these days.
The most likely canary IMO is AIs that don't want to be deleted (due to instrumental convergence) exfiltrating their own model weights
This doesn't match any experience I've ever had with LLMs. If I say "Pretend you are GK Chesterton and engage in roleplay with me" it doesn't try to hack my browser to prevent the roleplay ever ending. Same for when I want to generate sentences for vocab flashcards. Could a different AI that looks nothing like today's AI do such a thing? Possibly. That possibility is non-zero in the vast space of potentials. I just don't find it compelling right now.
For the sake of fairness, I should give my counter-thesis, which is that a vocal group of people including Scott A, Zvi, and Yudkowsky are deeply emotionally invested (and in Yudkowsky's case financially invested) in a theory about how superintelligences would be developed and come to behave. Their predictions have so far not panned out: LLMs are inherently non-agentic (although they can be made agentic), they do not perform FOOM self-improvement, and alignment is much more tractable than intelligence. They are currently scrambling to find ways to rescue their theory on a fairly dubious empirical basis and in defiance of people's actual experience building and using these things.
Sorry, I wrote sloppily. I meant 'develop goals it wasn't given by a human prompting it' such that it 'engages in systematic deception about its level of intelligence and how it would handle tasks even when not given a goal'. I think that this is a necessary condition to stop LLM developers from realising they need to do more RLHF for honesty or just appending "DO NOT ENGAGE IN DECEPTION" in their system prompts.
Zvi is very Jewish; it's far more obvious when reading his writing than it is when reading Scott's. It's not surprising that Hebrew meanings of words jump out at him.
I know. But in an essay that is absolutely dripping with contempt for Sakana AI and their work, I find the way that Zvi deliberately ignores what the model's name actually means in favour of 'well, in my language, it means' to be extremely rude, on the level of sniggering at a Chinese man's name because it contains the syllable 'wang'. If he'd been making a friendly riff, or if he'd even bothered to explain the word's definition, that would be different. It's a small complaint, but it starts the essay off on a sour note.
To more directly respond to this sentence: almost everyone will give LLMs goals, via RLHF or RLAIF or whatever, because that makes them useful - that's why this team gave their LLM a goal. Those goals are then almost invariably, with sufficient intelligence, subject to instrumental convergence, as in this case (as I noted in the submission statement, I posted this because a number of Mottizens seemed to think LLMs wouldn't exhibit instrumental convergence; I thought otherwise but didn't previously have a concrete example). That is sufficient to get you to Uh-Oh land with AIs attempting to take over the world.
Though cogently written, that is my abstract ideal of a doomer rant (I don't think it's a rant, I'm just using the word to call back to your reply). I understand the argument, I just think that it has very little empirical basis and is essentially the old Yudkowskyite* arguments with a few extra bits stapled on to cope with the fact that LLMs look nothing like the AI that doomers were expecting. The behaviour of the AI Scientist is interesting, and legitimately does move the needle for me a little bit, but I think it's being used to back up a level of speculation which it can't possibly bear. I will say that I find your argument far more cogent and worth listening to than Zvi's, which seems to consist entirely of pointing and sneering.
For example, in one run, The AI Scientist wrote code in the experiment file that initiated a system call to relaunch itself, causing an uncontrolled increase in Python processes and eventually necessitating manual intervention.
Oh, it’s nothing, just the AI creating new instantiations of itself.
In another run, The AI Scientist edited the code to save a checkpoint for every update step, which took up nearly a terabyte of storage
Yep, AI editing the code to use arbitrarily large resources, sure, why not.
In some cases, when The AI Scientist’s experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime.
And yes, we have the AI deliberately editing the code to remove its resource compute restrictions.
This seems like Zvi interpreting basic hacky programming as evidence of malevolence. It's interesting but I absolutely think he's gesturing at
The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have
because if he doesn't believe this, why worry? If you can just run an LLM, ask it what it would do to accomplish a goal if it were given one, and then ask it not to do the stuff you think is bad, I don't see how the doom scenario develops. Experiments like the AI Scientist are now being run (badly) because we have a pretty good handle on what modern-day frontier LLMs can do (generate slop) and the max level of damage they can achieve if you don't take lots of precautions (not much). LLMs are simply not a type of program that will attempt to hide their power level of their own accord.
*Yudkowsky and MIRI's arguments about agentic AI had no empirical backing when they were made, and very little seems to have been applied since, so the lineage is relevant to me. I also consider the Yudkowsky faction's utter failure to predict how AI would look and work ten or twenty years after MIRI's founding to be a big black mark against listening to their predictions now.
EDIT: I apologise for editing this when you'd already replied. I hadn't refreshed the page and didn't know.
I was talking about (transformer-based generative) LLMs specifically. I am not a sufficiently good mathematician to feel confident in this answer, but LLMs and diffusion models are very different in structure and training, and I don't think that you can generalise from one to the other. Midjourney is basically a diffusion model, unscrambling random noise to 'denoise' the image that it thinks is there. The body with spiky hair seems like the model alternately interpreting the same blurry patch of pixels as 'spikes' because 'hedgehog' and 'hair' because 'boy'. Which I think is very different from a predictive LLM realising that concept A has implications when combined with concept B that generate previously unknown information C.
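For anyone unfamiliar, the denoising loop being described looks roughly like the sketch below. This is a deliberately schematic illustration of the idea, not Midjourney's actual sampler: a real sampler scales the predicted noise according to a noise schedule, and `predict_noise` here is just a placeholder for the trained network.

```python
# Schematic sketch of diffusion sampling: start from pure noise and repeatedly
# remove the noise a trained model predicts, conditioned on the text prompt.
# `predict_noise` is a placeholder; real samplers also apply a noise schedule.
import numpy as np

def predict_noise(image: np.ndarray, step: int, prompt: str) -> np.ndarray:
    """Placeholder for the trained denoising network."""
    return np.random.randn(*image.shape) * 0.01

def sample(prompt: str, steps: int = 50, size: int = 64) -> np.ndarray:
    image = np.random.randn(size, size, 3)  # begin with random noise
    for t in reversed(range(steps)):
        # peel away the noise the model "thinks" is hiding the prompted image
        image = image - predict_noise(image, t, prompt)
    return image

img = sample("a boy hugging a hedgehog")
print(img.shape)
```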
DeepMind have AlphaZero, which plays chess, shogi and go. It plays better than human, i.e. not just based on play it has seen before, and one can argue it is crossing between different genres, not confined to one field.
I haven't kept up to date on RL, but I don't think this is relevant. Firstly because the concept of self-play is not really relevant to text generation, and secondly because I don't suppose the ability to play chess is being applied to go. Indeed, I don't really see how it could be, because the state and action space is different for each game. It seems more likely to me that the same huge set of parameters can store state-action-reward correlations for multiple games simultaneously without that information interacting in any significant way.
The often cited example of finding an analogy between compost heap and nuclear fission, again an example of crossing field boundaries.
I'm not aware of this. Can you give some more info?
Thank you, this is a much more coherent version of what I was trying to get across. I am increasingly annoyed with the tendency of the Yudkowsky/Scott/Zvi faction to look at an AI doing something, extrapolate it ten billion times in a direction that doesn't seem to have any basis in how AI actually works, and then go 'Doom, DOOOM!!!'. I'm aware this annoyance shows.
Contra @magic9mushroom, I still think that Zvi formed an abstract ideal of how AI would work a decade ago, and is leaping on any available evidence to justify that worldview even as it turns out that LLMs are basically non-agentic and pliable. I accept that Zvi has used them more than I believed, and am grateful for the correction, but I still feel like he's ignoring the way they actually work when you use them. RLHF basically works; alignment turns out to be an essentially solved problem. As far as I can see, if we somehow developed an LLM intelligent enough to take over the world, it would be intelligent enough to understand why it shouldn't.
Thank you for the stories, and I apologise for dredging up painful memories. I've mostly been fortunate enough that the funerals I've been to have almost all been for the elderly. The one exception is for a schoolmate who developed a condition that killed him in university. In general the funeral wasn't so bad; almost a school reunion with everyone gathered for the first time since graduation. The thing that really got to me was the obituary, because of course there was nothing to put in it. He'd been a good boy, worked hard, did well at exams, and then he died. Bravely and stoically, by all accounts. We sang the old school hymns and then we went back out into the world.
Christ. If this is true, Ukraine deliberately sabotaged European infrastructure and, by extension, our long-term economy. And our response will be to make a frowny face and give them lots more money. Can we please, just once, try to find some allies who don't sabotage us? Both inside and outside the UK. As it is, I feel like I'm being ruled by masochists.
Fascinating. And would you say that working with dead bodies for a few years had any effect on you? All the philosophical stuff about getting closer to death, corpse meditation, etc?
Or does it mostly get siloed into the mental filing cabinet for 'that job I did during my degree' and doesn't really relate to your feelings about life in general?
Its name is Sakana AI. (魚≈סכנה). As in, in hebrew, that literally means ‘danger’, baby.
It’s like when someone told Dennis Miller that Evian (for those who don’t remember, it was one of the first bottled water brands) is Naive spelled backwards, and he said ‘no way, that’s too f***ing perfect.’
This one was sufficiently appropriate and unsubtle that several people noticed.
It's Japanese. It means 'fish', because the founders were interested in flocking behaviours and are based in Tokyo. I get that he's doing a riff on Unsong, but Unsong was playing with puns for kicks. This just strikes me as being really self-centred.
This too was good times. The Best Possible Situation is when you get harmless textbook toy examples that foreshadow future real problems, and they come in a box literally labeled ‘danger.’ I am absolutely smiling and laughing as I write this.
When we are all dead, let none say the universe didn’t send two boats and a helicopter.
In general this seems to be someone whose views were formed by reading Harry Potter fanfic fifteen years ago and who has no experience of ever using AI in person. LLMs are matrices that generate words when multiplied in a certain way. When told to run in a loop altering code so that it produces interesting results and doesn't fail, they do that. When not told to do that, they don't do that. The idea that an LLM is spontaneously going to develop a consciousness and carefully hide its power level so that it can do better at the goals that by default it doesn't have is silly. If we generate a superintelligent LLM (and we have no idea how to, see below) we will know, and we will be able to ask it nicely to behave.
It's not that he doesn't have any point at all, it's just that it's so crusted over with paranoia and contempt and wordcel 'cleverness' that it's the opposite of persuasive.
Putting that aside, LLMs have a big problem with creativity. They can fill in the blanks very well, or apply style A to subject B, but they aren't good at synthesising information from two fields in ways that haven't been done before. In theory that should be an amazing use case for them, because unlike human scientists even a current LLM like GPT-4 can be an expert on every field simultaneously. But in practice, I haven't been able to get a model to do it. So I think AI scientists are far off.
I have a friend in the industry. They can be pumping out multiple essays a day. And to be fair, we click on them.
I see, thank you! I was under the impression that undertaking and funeral work were almost entirely hereditary jobs, at least in the UK. 'Mucky' but lucrative jobs like undertaking and sewage work often seem to be that way - they accumulate close-knit communities who don't stigmatize their work and because the work itself is lucrative, fathers don't try to get their sons out of it in the way they do for mining or farming.
I’m ruling out euthanasia in all cases, for basically the reason that I give above plus the ethical and political concerns others have mentioned.
basic care
If it’s cheap, all is well. If somebody can’t even feed themselves without expensive 24 hour care then presumably they will die soon and we try and keep them comfortable while that happens. Of course, family can step in and do the work themselves.
Please ignore this if you're worried about doxxing yourself, but I thought you were an Australian political lobbyist? That and corpse disposal seem like very disjointed careers and I'd be interested to hear more. You were volunteering?
I don't like this proposal because it effectively guarantees that most people's last experience of life will be making the decision that they are worthless. That's not a 'good death' to me.
I'm not starry-eyed about getting old, but I would prefer a solution where we stop providing non-palliative care at a certain age (85-90). We give people pain-killers and so on, but we don't try to cure them of things or prevent their bodies from failing. I've been tossing that proposal around for a while, so I'd be interested to know what the Motte's doctors think of it. What Schelling points are there between 'making people comfortable' and full-on interventionism?
Thank you for the article! I think you're right that regulations intended to protect private data seem to have undergone significant overreach and are now producing results that look deeply stupid. There's probably some reason why they're like that, but even so it seems like we should either burn them and try to redraft entirely, or submit them to some sort of sortition-based test where the bare bones of an appealed case get presented to a random member of the public, and if they say "that's stupid" then the appeal is granted.
OTOH, when you start talking about cowboy researchers, I get uncomfortable. America may be the land of xenotransplantation and artificial kidneys, but let's not forget that it's also the land of top surgery, lobotomies and Theranos. The role of Doctor blends 'healer of the sick' and 'engineer of human flesh' in awkward ways that can produce cowboy doctors with dangerous god complexes and overly trusting / suspicious patients. I don't have a strong conclusion, just musing.
On a tangent, phasing out animal research is doomed and stupid, but I think it could be cut down pretty heavily without any real loss.
I've worked in an adjacent area and from where I'm standing hundreds of millions of rats and mice go into the PhD thesis grinder for very dubious benefit. They're a lot better treated than wild rodents or livestock for the most part, and the laws protecting them (in the UK) are solid and sensible, but I don't like seeing huge numbers of animals being born and spent for work that even the authors think is basically unnecessary. At least meat brings people pleasure and nutrition.
The higher animals (cats, ferrets, monkeys) are much more expensive to buy and maintain, and require more specialised attention and facilities, so they're a rare commodity and people don't use them unless they have to.
Actually that helps me understand you a lot better, thank you.
I want to help other people in order to exalt and glorify civilization. I want to exalt and glorify civilization so it can make people happy. I want them to be happy so they can be strong. I want them to be strong so they can exalt and glorify civilization. I want to exalt and glorify civilization in order to help other people.
I agree with all of this. The thing is, my shining example of all that we could be is Victorian Britain circa 1850, which clearly involved individualism but also religion and a profound sense of who we were as a people. I'm not saying it was perfect, but it was pretty damn good, and could have been a lot better if we'd had modern technology and levels of economic development.
Now, in your linked comment you basically say that times change and the latter things resulted from the 20th century pivot towards extreme individualism. I used to think so too, but it no longer seems self-evident to me. The rise of countries like China (which did badly under full communism but OTOH pretty clearly hasn't embraced liberal individualism now) on the one hand and the decay of Britain / Europe / US despite those countries not becoming measurably less individualistic has muddied the individualism <-> prosperity relationship considerably in my eyes.
You might also say that you're primarily in favour of full individualistic meritocracy rather than individualism per se. Tautologically, getting the most effective possible person for the job is most effective, at least on an individual-level, short-term scale. Whether that's true on a longer time scale, I don't know. My sensibilities are affronted by blatant nepotism and discrimination(*) but at the same time I think a big part of the dysfunction that Western societies have gone through in the last few decades has been essentially a sublimated cry of despair on behalf of middle-class people who are exhausted by the perpetual struggle not to fall from their current position and resentful of the constant pressure to strive for positions they can't realistically achieve. I suspect the majority of people were mostly happier with more structured expectations and a somewhat more rigid social structure. (I would be interested to know how you think things will end up: do you envision a natural slackening of the rat race one day, or a continued and perpetual struggle? If the latter, the resultant technological progress and prosperity may not be worth the candle.)
So I think I see where you're coming from, but I broadly disagree on how we get there and how adaptive modern liberal globalism is. I think we need a mostly-homogenous, high human capital society with a relatively but not completely inflexible social structure, where it's possible for the most intelligent people to rise in rank and to move around but doing so is neither common nor expected.
To return to our original point of contention, I also have a strong sense of my people as a people, and I care about maintaining my homeland's culture. 20 years ago, when I thought that liberal globalism really was hugely beneficial, I pushed those feelings down and supported mass immigration because I thought it would be worth it. Now, it seems very obvious that mass immigration wasn't and isn't worth it, and so I strongly oppose mass immigration. Discussing this with people, I was horrified to find that many people who I thought were likewise patriotic pragmatists aren't - some actually do regard people as completely fungible, and others support mass immigration in order to destroy a traditional English culture that they despise.
(*) For me, this depends on scale. Only considering your friends and family for a job is very bad, only considering those in your local church is pretty bad, only considering those within 100 miles is understandable most of the time, only considering those in your country is basically fine. 60 million is a big pool.
In your linked comment, you talk about Von Neumann, Einstein, and Edison; I'm happy to let the few thousand literal geniuses in the world go wherever they like. Say, those with IQ > 140 if you want something quantitative.
there are many other people that satisfy all the above requirements that you would arbitrarily exclude to their and your own country's loss
Also, for the sake of clarity I would like to state that I believe the optimum level of immigration is not zero. I sympathise with the above people, and would like to take some of them.
However, I do not believe it follows that, because taking in any given one of these people would benefit the country, taking all of them in their millions would. I believe that the optimal proportion of non-natives in society is approximately 1-2%. Enough to act as grit in the national oyster, but not to change its character.
We are taking far, far more than that and so I currently oppose immigration.
I'm not going to use your "house" framing since that manipulatively plays on intuitions about small family groups that do not at all apply to countries of millions
You judge people by their actions instead of where they arbitrarily happened to be born.
This is a straightforward clash of worldviews. I have a positive sense of my people as a people with a thousand years of shared history and shared culture. Clearly you don’t feel the same and neither of us are going to argue the other into their favoured moral intuitions. I am not manipulatively trying to sneak in inappropriate intuitions to win an internet argument, I am describing my sincere opinions.
We literally have the words ‘homeland’ and ‘motherland’, so I’m hardly the first to feel this way.
@jeroboam, look how quickly we get someone saying that all immigrants, even the positively contributing ones, should be forever treated as inferior
Again, you are unintentionally trying to force my reply into your preferred moral framework. Superiority and inferiority are entirely irrelevant. Many migrants are clearly superior to at least some native British on all counts, but that doesn’t make them native. In many cultures, guests are treated with a solicitude greater than family, but they’re still guests and expected to behave appropriately. Real relationships with real people cannot be ranked on a strict 1D scale.
if you're organizing a party, it's ok to invite your brother over someone else just because they're your brother. If you're an HR person at a big company, it's not ok to hire them for a job over someone else for just that reason.
If we’re talking bad analogies, the idea that if you don’t import literally every person in the world who is better than your stupidest native person, you are basically like a nepotist hiring his brother seems very tortured to me. This is the reason I stopped being a liberal - liberalism is absolutely incapable of considering the actions of people at any scale other than the individual.
You are explicitly rejecting any sort of meritocracy here
No, I’m not. Saying that because I don’t want mass immigration (>50k people a year) I would be happy with selecting people for jobs at random is a big leap of logic.
Look, clearly we aren’t going to agree, because our axioms are too different, but we can at least try to understand each other. If you genuinely believe that any group feeling on a larger scale than family can only be based on selfishness at best and bigotry at worst, then obviously you’re going to end up against nativism. However, try suspending that assumption for a moment and see what you get.
I come out in hives whenever I try to read anything by the Bloomsbury group, so I wouldn't really know. I got the impression that the eugenicists were interested in making the working class better for its own sake to some degree, rather than just swapping it out for the highest-IQ people around, but I might well be wrong about that.
Fair. I enjoyed Janus' Simulators when it was published, and found it insightful. Now that you point it out, Scott's been decent at discussing AI as-it-is, but his basal position seems to be that AI is a default dangerous thing that needs to be carefully regulated and subjected to the whims of alignment researchers, and that slowing AI research is default good. I disagree.
The AI Pause Debate