DaseindustriesLtd
late version of a small language model
Tell me about it.
...Do you think Geohot invented e/acc in a podcast 5 months ago?
No, it's all Beff and Bayeslord, 2022. I've been there, I know it: https://beff.substack.com/p/notes-on-eacc-principles-and-tenets
I had actually seen a lot more e/acc talk around space, energy, and manufacturing startups than around AI. I would expect an e/accist to be more comfortable in front of SolidWorks than in front of PyTorch, although now that I think about it GeoHot was talking about AI, and is running several AI companies right now.
I guess put me in for: this is a good term actually! BBJ was a parody account don’t let him poison the term, please!
Man this is funny, you might as well say "Eliezer Shlomo Yudkowsky" is an obvious parody account and AI risk scholarship is primarily associated with Kamala Harris who tries to reduce job displacement and kiddie porn production.
I'd like to push this place from "where things get talked about" to "where things get done".
Ahem:
…We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War: Shaming. Attempting to 'build consensus' or enforce ideological conformity. Making sweeping generalizations to vilify a group you dislike. Recruiting for a cause.
I have argued against this in the past and, to be honest, still think the whole idea futile. Of course I will still shame, build consensus, enforce conformity, make generalizations, vilify and recruit and worse, so long as my words are compelling, cast ideas and actors in any coherent value-laden light at all, and stochastically increase the likelihood of people coming to predictable conclusions. Such is my nature… but it's the same with anyone. So to get anywhere with this doctrine, you'd need to replace mottizens with lobotomized mealy-mouthed GPTs who will go on and on about the subjectivity of everything under the Sun, or to proscribe talk of the culture war altogether.
But in light of this place outlasting other attempts, I concede the legitimacy of incumbent mods and therefore the word of their law. This place will never be more than a place for talking (and casual socialization), and if they remain successful in enforcing their vision, neither will it become something less.
uncredentialed
why do you assume so?
Since @Hawaii98 complains about insufficient quantity of quality commentary, I've taken it upon myself to cover one of the topics proposed by @greyenlightenment, namely the doxxing of Based Beff Jesos, the founder of effective accelerationism. My additional commentary, shallow though it may be, got out of hand, so it's a standalone post now: E/acc and the political compass of AI war.
As I've been arguing for some time, the culture war's most important front will be about AI; that's more pleasant to me than the tacky trans vs trads content, as it returns us to the level of philosophy and positive actionable visions rather than peculiarly American signaling ick-changes, but the stakes are correspondingly higher… Anyway, Forbes has doxxed the founder of «e/acc», an irreverent Twitter meme movement opposing attempts at regulation of AI development, which are spearheaded by EA. Turns out he's a pretty cool guy eh.
Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? [archive.ph link]
Quoting Forbes:
…At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”
Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.
My main objective is to provide the reader with convenient links to do their own research and contribute to the debate, so I rapidly switch from Beff to a brief review of new figures in AI safety discourse, and conclude that the more important «culture war» of the future will be largely fought by the following factions:
- AI Luddites, reactionaries, job protectionists and woke ethics grifters who demand pause/stop/red tape/sinecures (bottom left)
- plus messianic Utopian EAs who wish for a moral singleton God, and state/intelligence actors making use of them (top left)
- vs. libertarian social-darwinist and posthumanist e/accs often aligned with American corporations and the MIC (top right?)
- and minarchist/communalist transhumanist d/accs who try to walk the tightrope of human empowerment (bottom right?)
In the spirit of making peace with the inevitability of most discussion taking place in the main thread, I repost this here.
edit: not to toot my own horn, but
Is anyone else checking here less and less often because equal quality commentary seems increasingly available elsewhere?
I am checking here less and less often because A) with my current concerns and the way the wind blows, Western culture war is largely irrelevant B) there's little for me to contribute in addition to all that has been said and C) I've concluded that my ability to make commentary is better used for making an impact.
edit 2: I also mildly dislike the fact that standalone posts need approval, though I can see how that follows from the problem/design choice of easy anon registration.
This is just Galeev's narrative, he's quite irrelevant. But those conspiracy theories are indeed his invention.
I think you might be a uniquely ineffective 151 IQ human if it doesn't seem plausible to you that a group of very smart humans could do extreme and perhaps existential harm. To me, the main thing preventing that seems to be not the inherent hardness or weakness of, say, COVID-Omicron-Ebola, but the resistance of an overwhelming majority of other humans (including both very smart ones and mediocre but well-organized ones).
As for what a superintelligent AI changes? Well for one thing, it eliminates the need to find a bunch of peers. And, with robots, the need for lab assistants.
And I have like 3% P(AI Doom).
Since @FD4280 has already quoted Pushkin, I'll do the usual with Galkovsky (he has a somewhat humorous quote for every occasion):
The event that took place in Plato's Athens was far from innocuous
And in general, the emergence of philosophy is simply and fully against nature. What prompted people to such a useless, fruitless and destructive endeavour? What manner of cause? – Sexual perversion. Ancient Greek society is a homosexual society, homosexual in a deep, committed sense. One need only read the works of Aristophanes. They were written by a convinced pederast, and a pederast living in an appropriately arranged, congruous pederastic world at that. Socrates, too, was a pederast; Plato was a pederast as well. The Greek youth gathered around the learned men and formed philosophical unions of homosexual kind. Herein lies the precondition for the birth of philosophy. A young man, unlike a girl, has intellect, which creates the opportunity of intellectual courtship. After all, what can be more attractive and tempting for a man than a wonderful conversation on lofty topics? The most mediocre, pathetic male's eyes light up should you start talking to him about eternal issues. And Socrates, so absorbed in Eros, but, alas, unshapely, threw all his energies into the creation of the palace of reason; and as a result – what a catch! – the handsome Alcibiades fell in love with him. It was Eros that begat the revolution, otherwise it is utterly incomprehensible why the shell of naive everyday consciousness would have burst.
But the nascent fire of the Logos was so bright, so dazzling, that not only some dirty homosexuality (which even among the Greeks was seen as somewhat sinful), but the whole world, and life itself, began to seem a dirty and dark cave. Socrates refused the love of Alcibiades and then drank hemlock. Aphrodite the Vulgar turned into Aphrodite Urania. Love for woman, the inferior being, into love for the perfect being, man; carnal love – into perfect, Platonic love, into male intellectual friendship – into the brotherhood of philosophers. Then into love for the idea of Love, and love for the world of ideas, identified by Plato with the realm of Hades, the god of the dead and the netherworld.
The fire of philosophy since then was passed among people as something given, cleansed of its original filth. After asexualisation, there even came the time for heterosexualisation of philosophy. Having become a social phenomenon, it was mirrored in the world of women. A woman cannot comprehend the worth of a philosopher, but she can see that in the male intellectual world her chosen one is deeply respected for some reason. And that turns out to be enough. Philosophy has become a much more normal phenomenon, more adapted to real life. After all, this adaptation has been going on for two and a half millennia. And yet, we should not forget the rather rakish history of the birth of self-consciousness.
You see how complicated it all is, how contradictory. Paradox atop a paradox. A young man goes to a literary club, is fond of weightlifting, wanders around city parks at night in a woman's dress, writes abstruse poetry, and has cut his arms with a razor. Who would have ever expected it! What a complex young man! And yet he's not complex. He's a pederast. A pederast is always busy, his eyes are always restless, a young man is ceaselessly in search. Homosexuals, after all, don't and can't have love. Only partners. Partners are exchanged quickly, sought by the lavatories at night. That's why you have to hustle day and night, "look for a dose".
There is such a profession - spies. They are complicated, contradictory…
(Galkovsky has a wife and three children, lives a very «boring» life, and generally despises people in unconventional relationships).
I am exceedingly clear in my accusation above, I believe.
You're a uniquely fake person, Hlynka. It's incredible how you falsify your down-to-earth practically-thinking red-triber creds, but actually hinge your beliefs on a half-understood galaxy brained theory and despise evidence or actual learning. You're much more similar to Eliezer Yudkowsky than to an average Joe in this, but at least Yud is sincere.
I think there's been a lot of foolishness in history but conflating consciousness and intelligence/formidability at solving consequentialist tasks is just too indefensible to bring up.
I think CIA people (Will Hurd), RAND people (Tasha McCauley) and Georgetown people (Helen Toner) on the board of OpenAI were keeping them informed at least a little bit, but who knows how they'll do now!
@HlynkaCG may be stretching the definition of «regression» past the breaking point in my view. But if one wants to argue that attention over 80 layers is «regression» over a trained collection of regressors, then fine, I won't stop it – categories were made for man, not… and all that. I think at this point it's a fool's errand to fight over such stuff manually instead of…
Well, typing some prompt like «How do large language models (transformers) correspond to regression-based ML algorithms? Answer at the level of a CS PhD adjunct professor. Focus on mechanistic details, not use cases» into a frontier model of your fancy. I quite like Claude 2's style but GPT-4 is still king.
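For what it's worth, the «attention is regression» framing does have one concrete, uncontroversial instance: a single softmax-attention head computes exactly a Nadaraya-Watson kernel regression over its keys and values, with an exponential kernel. A minimal NumPy sketch (shapes and names here are purely illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 4                          # head dimension (illustrative)
K = rng.normal(size=(8, d))    # keys for 8 context tokens
V = rng.normal(size=(8, d))    # values
q = rng.normal(size=(d,))      # a single query

# Attention head: softmax-weighted average of values.
attn_out = softmax(q @ K.T) @ V

# Nadaraya-Watson kernel regression with kernel k(q, k_i) = exp(q·k_i):
# identical weighting, hence identical output.
w = np.exp(q @ K.T)
nw_out = (w @ V) / w.sum()

assert np.allclose(attn_out, nw_out)
```

This says nothing about whether the full 80-layer stack «is regression» in any interesting sense; it only shows the per-head correspondence is exact, which is about as far as the charitable reading of the claim goes.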
Of course, that reference to regression is just a more specific way to diss the «complex statistical model», and a complex enough transformer model can approximate most anything in a compact domain (with some sane constraints, but as much can be said of the brain with its finite expressivity and learning capacity). Maybe we could talk about actual expressivity limits of some architectures, and orthodox Transformers can't learn to solve PARITY problem in the general case, but Universal Transformers do better, and path independent equilibrium models must do better still; at some point human+tool generalization will be comprehensively surpassed, and we'll be able to confidently say that an AI of such and such design and hyperparameters can learn everything a human mind can learn and more, and even does that in practice, and the question will be moot. Or is the question about the possibility to establish the correspondence between some types of data and some types of things, like, symbol sequences and thoughts?
I am not aware of some strong information-theoretical or broadly mathematical reason, which Hlynka and some other guy (@IGI-111 maybe?) alluded to, for believing this won't be done with known ML primitives in a few years. It looks to be about the «just» fallacy: some people think that if they understand the primitives (like regression, or gradient descent, or matmul – whatever abstraction layer they want to squint at), the full thing is «just» the interaction of those primitives and thus… something something… cannot be intelligent/conscious/superhuman/your option. I can't understand this way of thinking, it seems mainly ego-driven to me but that's a hypothesis, I literally cannot comprehend it, it does not compute.
This is all progressively far from the high-level generator of disagreement, which is… what is it again? And how many are there?
That said, I also do not share your theory of consciousness/personal identity, my views are closer to Christof Koch's. I think a high-quality computable upload of myself would be able to output thoughts in the distribution of my own (hell, one can finetune an LLM and see the resemblance already, it would even fool some); but it would be, for most intents and purposes, a p-zombie, even if you throw an «agentic» for loop on top. I do not subscribe to the Lesswrongian purely computational doctrine; I am a specific subject, not information about an object. For the same reason I would not use destructive teleportation nor advocate it to anyone, I think humans are causal entanglements, not blueprints for those.
It's one of many cases where the news media (at least here in Australia) technically report the story factually accurately, but omit some details and frame it in such a way as to lead you to only one conclusion. They can avoid claims of editorialising by saying they are merely quoting and reporting on statements made by politicians, which is also true.
I would like to hear a journalist's perspective on this some day. Is it taught? Is the intuitive grokking of those rules – condemn the far-right mob, but don't explicitly spell out their casus belli, so the impression is that far-righters are just spontaneously violent – a job requirement? Am I too deep in a bubble and it's just common sense already to speak this way here and the other way around about George Floyd?
I suspect the tactic actually works – remember, 50% of people are below average, and the average ain't that high, and it's white people who are the target audience, so they just trust journalists to do an honest job.
It is entirely legitimate to value a people by the quality of civilization they are able to create.
Value is arbitrary, as are dimensions of quality under consideration. The particular kind of value system implied here is very Western and not Israeli at all. I suppose West Bank settlers create a certain kind of civilization; from the modern Western perspective, it's worthy of sanctioning, while from the Israeli one it deserves being subsidized and protected, even as they rebel against the secular powers. Because the latter's function of «quality» more or less collapses into boolean «Jewish or not».
More complex matters could be discussed but it doesn't matter. Only Westerners are weird enough to forget that, before all admiration for «civilization» measured in homing missiles or GDP or Nobel prizes per capita, there's the very simple concept of the political. Neither Arabs nor Jews will ever forget it.
You know, there's something uniquely wrong in this popular genre of «Western» «conservative» reasoning, that goes beyond what Seldowitz or any Arab terrorist could do. You serve unashamed tribalism of another tribe, and defend it with lofty universalist rhetoric about «civilization and barbarism».
What are those «our» values that the civilized abuser Seldowitz represents and the street vendor, whose transgression is in line with transgressions of white leftists, does not? Civilization is not about being educated enough to do more than sell street food. If there is anything to the Western civilization as a proposition and not pure empirical capability, it's the belief that «values» go beyond alliances of convenience, brute kinship and right of the mighty, that there exist principles and morally sound laws. Who plays more by the rules and laws of the West, and who exploits them more in this situation? Who reciprocates goodwill, and who has defected barbarically?
And, as you say, an attack on the Israeli is an attack on you, but does this work the other way around? Say, Bari Weiss, the kind of person who generates pretraining data for your soul, argues that antisemitism is a sign that the society itself is breaking down. Was the long culture war against whites and «the West», discussed in this community for so long, seen as a dire sign for the Jews? I suppose some clever and provocative ones saw it this way – in outlets so radioactive nobody would in their right mind cite them. Most others were just content to clarify they're not white, or at least not the hate-deserving shade of white.
And, I mean, that's fair enough. Every bloodline for itself, that's how the game is played since protozoa. I'm not under the impression that the CNN and the Guardian are paragons of «The West» or advocate for equal standards either: they report on Seldowitz solely because the progressive faction they represent and pander to is currently more sympathetic to Muslims, even Hamas supporters, than to Zionist Jews. But I appreciate that they do not invoke those ideas which I think would really deserve protection.
There is a massive jump between «power-seeking is an emergent property of intelligence» and «arranging stuff so that your goals are reachable cheaper is rewarded in competitive conditions». Though some see it as the same thing.
How's it going my dude?
Cynicism is a cope of dysfunctional people.
unearned
Altman has demonstrated extreme willingness to help a great number of people; this isn't about numbers on a screen but about demonstrated goodwill. I've yet to learn as much about Shear.
Anyway, I'll leave this to munch on
Two can play this game. «Motte: AI safety is about safeguarding humanity. Bailey: we're building our AI god to shape the light cone to our fancy.» It's remarkable how you can't conceive of a less totalizing vision.
It seems I failed to send my comment. A pity.
The situation is developing rapidly, and to be honest I think it's more interesting than the Israeli-Palestinian thing or even the election of Milei. Wonder what we'll learn today.
Ok but he's literally having some fixation on this Roscoe person, I had to check https://twitter.com/search?q=roscoe%20from%3Agadsaad&src=typed_query
Come on now. Gad Saad's long-running beef with the archetypal "Roscoe" is entirely motivated by Roscoe being a rural white hick on whom it's acceptable, fashionable and fun to dunk, not by him buying into SS-style politics.
If the US military goes blue top to bottom, any kind of red tribe insurrection in the US becomes substantially more difficult.
Wouldn't you argue that optimizing for insurrection conditions, by adding your body to the mutiny pile at that, is a ludicrous political agenda in any case? I would. Like, this is some 1907 Russian sailor shit.
I think I did get a few to move off it.