
Primaprimaprima

Aliquid stat pro aliquo

3 followers   follows 0 users  
joined 2022 September 05 01:29:15 UTC

"...Perhaps laughter will then have formed an alliance with wisdom; perhaps only 'gay science' will remain."


				

User ID: 342


What if we had a ‘hear the other side’ cwr

…this IS the “hear the other side” thread.

For every Hlynka-stan who misses him, there is someone who was screaming at us to ban him for years.

"50% of the forum loves them and 50% hates their guts" is practically the definition of an interesting poster. If there's unanimous agreement that someone is a good contributor, then they may indeed be a "good" poster, but there's a cap on how interesting they can be.

And I've already written several times about how we did everything we could, short of just literally saying "The rules don't apply to Hlynka," to avoid having to permaban him.

My suggestion has always been that bans are capped at a length of one year, except in incredibly egregious cases (e.g. spam bots, or the person launched cyberattacks on the forum or something). I don't expect that this suggestion will ever actually be implemented, but it is a possibility nonetheless.

Go on, tell me who on this list was a valuable contributor who you think should be granted amnesty?

Hlynka is the primary example of course, also fuckduck9000, AhhhTheFrench, AlexanderTurok.

In fairness, people have been saying “the forum will die because you’re banning all the interesting people” for at least 5 years now.

On the other hand, we actually have banned some interesting people, and the forum is worse for their absence.

Can anyone explain the government shutdown to me? I haven't followed the story at all. If you consider yourself to be aligned with the Democrats, I'd especially like to hear your perspective.

After not following the news at all since the beginning, I casually overheard on Fox News that "the Democrats are keeping the government shut down over Obamacare". I assumed that that couldn't be right. Surely the whole thing couldn't be happening because of any one policy issue; there had to be more to the Democrats' side of the story. But then I started reading reddit comments and the consensus from leftists seemed to be that, yes, we really are keeping the government shut down over Obamacare, and this is Good and Righteous.

My initial reaction is that this seems rather petulant and childish on the part of the Democrats, because I think the minority generally should be expected to make concessions to the majority, but that's where my factual knowledge essentially ends so I'll let other people argue the case.

Well, it's a variation of the goat fucker problem. You can be an upstanding citizen your whole life, but if you fuck one goat, you're still a goat fucker. Similarly, it doesn't matter how many complex problems you can correctly solve; if you say that "entropy" spelled backwards is "yporrrtney" even once (especially after a long and seemingly lucid chain of reasoning), it's going to invite accusations of stochastic parrotism.
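For contrast, the deterministic version of the task is a one-liner that any interpreter (or programming-literate human) gets right every time; a minimal sketch in Python:

```python
# Reversing a string deterministically: slice with a step of -1
# walks the sequence back-to-front.
def reverse(s: str) -> str:
    return s[::-1]

print(reverse("entropy"))  # prints "yportne"
```
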

Humans make mistakes too, all the time. But LLMs seem to make a class of mistakes that humans usually don't, which manifests as them going off the rails on what should be simple problems, even in the absence of external mitigating factors. The name that people have given to this phenomenon is "stochastic parrot". It would be fair for you to ask for a precise definition of what the different classes of mistakes are, how the rate of LLM mistakes differs from the expected rate of human mistakes, how accurate LLMs would have to be in order to earn the distinction of "Actually Thinking", etc. I can't provide quantitative answers to these questions. I simply think that there's an obvious pattern here that requires some sort of explanation, or at least a name.

Another way of looking at it in more quantifiable terms: intuitively, you would expect that any human with the amount of software engineering knowledge that the current best LLMs have, and who could produce the amount of working code that they do in the amount of time that they do, should be able to easily do the job of any software engineer in the world. But today's LLMs can't perform the job of any software engineer in the world. We need some way of explaining this fact. One way of explaining it is that humans are "generally intelligent", while LLMs are "stochastic parrots". You're free to offer an alternative explanation. But it's still a fact in need of an explanation.

Of course this all comes with the caveats that I don't know what model the OP used, a new model could come out tomorrow that solves all these issues, etc.

I think there's a missing personality trait that I thought was conscientiousness, but it turns out that means something different (being organized and careful). The trait I am thinking of is more like "conscious awareness of reality," which is like, can you tell how your behavior is interacting with the people around you, do you work with theories of mind, are you able to weigh your thoughts and feelings and choose what to say next, etc.

Jungian typology splits conscientiousness and agreeableness into a few different dimensions in order to provide a finer-grained analysis of situations like the one you described. In Jungian terms, the disposition to rapidly respond to emotional cues and fluctuations in group mood would fall under the heading of “extroverted feeling”.

People who are low in extroverted feeling might still come off as quite friendly and gregarious (i.e. high agreeableness on most measures), but they’ll tend to have a lower “refresh rate” when polling the external environment for emotional and social cues (particularly when it comes to discerning a group average), and they’ll assign lower salience to this information, which is one way you can end up with two people excitedly exchanging stories while being oblivious to, or unmoved by, the disinterest of everyone else. See also Michael Pierce’s ENFP example.

This is all part of normal human variation and should emphatically not be confused with autism or other deficiencies in socialization.

Maybe this all boils down to rising autism numbers but I feel like this is something that is supposed to be learned, and I would hope that if you haven't learned this by the time you are an adult there is something wrong with you.

Nope! Nothing wrong with them (usually). It’s mind-bending how different we all are from each other. Things that seem basic and obvious to you are things that other people have never even thought about, and vice versa.

My gut feeling on this is that it's not just an autism-style, drug- or biologically-induced disease; it's more a symptom of cultural decay, and seems more like a "we have bad values -> we get worse people" type of movement over time.

Nope; not necessarily. We’re supposed to be shockingly different from each other, and we’re supposed to cause friction with each other; that’s nature working as intended. It takes all kinds to make up the world.

That the surveillance state will be used for good. The narrative is seductive. If we could just see everything in 4K, disputes over what really happened would collapse, the thinking goes. If everything in life is videotaped and archived, then the real truth of these messy situations would be indisputable. But Hassan Piker's dog collar incident shows that this theory is catastrophically wrong.

Weird take.

Seeing everything in 4k isn't bad because it makes us realize that our delusions of truth and stability fade into so much airy nothingness when exposed to the might of the simulacrum panopticon. Seeing everything in 4k is bad because it truthfully shows people the actual truth of what you did (e.g. shouting racial slurs while you were having a really bad trip at that frat party that one time), which then causes them to actually harm you, in a truthy way.

There is no post-truth, there is no collapse of values and morality, there's none of that. It's clear that the discourse on "post-truth" is a product of wishful thinking. It sure would be nice if the truth would stop smashing its boots into our faces for just a little while. But that's not the reality we live in unfortunately. For people whose physical survival depends on the truth, it's actually shockingly rare for the truth to come into serious question (regarding issues that actually matter, anyway).

You know how when you were a kid on the playground, people could make up anything about the new Pokemon game and you would believe them, or at least you would hold open the possibility that the rumors could be true, because you were a dumb kid and you didn't know anything, and the internet was in its infancy so you didn't just have a source of infinite authoritative knowledge that you could verify everything against? All of reality used to be like that. "Lightning is just what Zeus does when he gets mad", "Well zamn, I don't know whether that's really plausible or not, but I'm illiterate and I only know about the existence of one or two city-states besides my own and the rest of the world is shrouded in mystery, so for all I know, it certainly could be true". That was post-truth. Or pre-truth, rather. Now we're living in the age of truth, and it sucks. (It would suck too if we actually did enter the post-truth reign-of-nihilism age, but for different reasons.)

There's nothing "seductive" about truth; the truth is real, it's dangerous, and most of the time I would very much like it to stay over there, away from me. Alternative perspectives on the matter are typically coming from, as the kids like to say, a place of "privilege".

Even the most attractive woman on earth doesn't come with a literal motor in her pussy.

It took me a long time, but I eventually realized that the feminists were right all along. Sex (for men) isn’t about sex, it’s about power.

The motorized pussy might feel good. But you’ll know that you didn’t make another soul submit to you.

It’s almost identical to the real show and some of them do just look like normal scenes (except for when it completely glitches out).

It was a hot topic in the earliest days of the forum (like around the 2016 election) because a lot of people were suddenly encountering these ideas for the first time.

Interest has dropped a lot since then: the old regulars have simply had those arguments many times now and don't find them as interesting as they used to, and it's not quite as hot a topic in internet discourse as it once was, so we don't have as many new people wanting to learn about it for the first time. Even after the move to themotte.org it still came up sometimes, but over the last year or two it's been quiet.

AI is Too Big to Fail

You've probably been hearing that we're in an AI bubble. I think that framing is both loaded and reductive, and I'd like to take some time to help people understand the nuances of the situation we're currently in, because it's deep. To be clear, I am pro-AI as a technology and I have an economic interest in its success (and for reasons I'll discuss, so should you); however, there is a lot more going on that I don't agree with, and I'd like to raise awareness of it.

AI capital investments are running far ahead of expected returns, and the pace of investment is accelerating. Analysts estimate AI-linked activity drove roughly 40–90% of H1-2025 U.S. GDP growth and 75–80% of S&P 500 gains. If it weren't for AI investments, it's likely the United States would be in a recession right now. According to Harris Kupperman of Praetorian Capital, “the industry probably needs a revenue range that is closer to the $320 billion to $480 billion range, just to break even on the capex to be spent this year.” It sure sounds like a bubble; however, treating it as just another bubble would be doing a disservice to the magnitude of the dynamics at play here. To understand why, we have to explore the psychology of the investors involved and the power circles they're operating in.

The elites of Silicon Valley have cozied up to Donald Trump in a way that's unprecedented in the history of modern democracy. They've lined the pockets of his presidential library foundation, supported his White House renovations, paid for his inauguration, and provided a financial lifeline for the Republican party. Between Elon Musk, David Sacks, Sriram Krishnan, Peter Thiel and his acolyte J.D. Vance, Trump has been sold the story that AI dominance is a strategic asset of vital importance to national security (there's probably also a strong ego component; America needs "the best AI, such a beautiful AI"). I'm not speculating; this is clearly written into the BBB and the language of multiple executive orders. These people think AI is the last thing humans will invent, and the first to have it will reap massive rewards until the other powers can catch up. As such, they're willing to bend the typical rules of capitalism. Think of this as the early stages of a wartime economy.

[...]

I'm going to say something that sounds a little crazy, but please bear with me: from a geopolitical perspective, what we're doing is a rational play, and depending on how valuable/powerful you expect AI to be and how hostile you expect a dominant China to be, possibly a near optimal one. If you're a traditional capitalist, it probably looks like a bad move to you regardless of your beliefs about AI; you're going to need to put those aside. This is not a traditional economic situation. We're in an arms race, and we're veering into a wartime economy, or at least that's how the powerful view it.

[...]

Returning to the traditional capitalists, I'd like to note that they aren't wrong; this AI push is unsustainable (for us). I'm not sure how long we can run our economy hot and directed before the wheels come off, but my napkin estimate is 5 to 10 years, though it's likely we'll lose the political will to keep pushing before that point if the AI transformation is underwhelming and we still have a democracy. To further support the traditional capitalists' position: if AI unwinds at that point having under-delivered, the economic damage will probably be an order of magnitude greater than if we had just let the bubble deflate naturally. This will be exacerbated by the favorable treatment the administration will make sure the oligarchs receive; we will suffer, they will coast.

Where does all this leave us? For one, you better hope and pray that AI delivers a magical transformation, because if it doesn't, the whole economy will collapse into brutal serfdom. When I say magic here, I mean it; because of the ~38T national debt bomb, a big boost is not enough. If AI doesn't completely transform our economy, the massive capital misallocation combined with the national debt is going to cause our economy to implode.

I don't have the expertise needed to evaluate the economic arguments, so I'm mainly posting this here to solicit feedback on the linked article.

It's probably too late to avoid a future of "brutal serfdom" regardless of what happens, even if we reach singularity escape velocity. Power will do what it always has done, which is centralize in the hands of a few to the detriment of the many; turning every human into a cyborg god won't change that (you simply have the problem of organizing the coexistence of cyborg gods rather than the problem of organizing the coexistence of baseline humans). To think otherwise is to implicitly rely on a Rousseauian (and anti-Hobbesian, channeling Hlynka) presupposition that people are basically good and just, and that suffering is merely an incidental byproduct of material lack, which we have reason to be skeptical of. The second half of the 20th century provided what were probably the most fertile material and social conditions for freedom that have ever been seen in human history; regardless of where we're going now, we're leaving freedom in the rear-view mirror.

I was gonna say that if this all blew over with nothing happening then I would finally admit that the pendulum is swinging and woke is on the downturn. But I see that a few people have already stepped down, so, yeah. Anyone trying to say that “the era of woke is over” is coping.

Why are schizophrenia and depression mutually exclusive?

Thank you, that was highly interesting.

This same concept has been independently rediscovered in multiple communities (including the link to the historical practice of shamanism) which increases my confidence that there's something to it.

I suppose that was ambiguous.

In terms of the sheer number of people around the world who (claim to) adhere to his ideas, no one can really touch Marx. But within academic circles, I think self-professed "followers of Marx" are more willing to be critical of Marx than followers of certain other philosophers are.

I believe it is a significant outlier yes, in terms of providing a comprehensive metaphysical worldview, an ethics, an eschatology, etc. I think it's more of a religion than any historical form of fascism is for example.

Nietzsche certainly. Kierkegaard too.

I don't think that the way Marx is treated is all that out of the ordinary compared to how other canonical historical philosophers are treated (and you can find other historical thinkers who have a bigger cult of personality, like Lacan imo). I think the locus of emotional investment is more in the cause of socialism itself rather than Marx as a person.

The particular attention paid to Marx's writings and Marx as a person may seem strange to people with a STEM background, where primary historical sources are never read by anyone except dedicated historians. But that's simply how things are done in philosophy. If you want to do serious scholarly or intellectual work using X thinker’s ideas, then you're expected to read what X actually wrote.

No one treats Marx's thought as an infallible edifice which can never be criticized or amended. The Frankfurt school thought that Marxism had to be supplemented with psychoanalysis and cultural criticism in order to address some of its blind spots. Wokes are intrinsically suspicious of Marx because he was white and male. Etc.

Marxism and the History of Philosophy:

Most people in most places, including intellectuals, have never worked out their basic worldviews, and thus, they flounder without foundations. This is what Marxism has to offer: foundations and meaning.

We have a worldview that is clear, coherent, comprehensive, and credible. We bring a way to think that combines totality with historicity, a way of processing experience that is both integrative and empirical, and a way of synthesizing that is not an abstract unfolding of a mystified idea, but a constant and dynamic interaction with nature and with labor in a material historical process.

We need to show how the system structuring people’s lives, capitalism, is responsible for the terrible injustices of the world, the ecological destruction of the world as well as for the cultural decadence and psychological disorder of the world. We offer not only analysis in understanding the nature of the system generating the most basic problems, but also a solution in a movement to expose this system and to bring about an alternative system, socialism. We offer both meaning and purpose. [emphasis mine]

If this sounds a lot like a religion, then that's because it should. Marxism undoubtedly shares many structural features with traditional religions in its fundamentals.

(I have argued previously that wokeism is not identical with Marxism. The relationship between wokeism and Marxism should be understood as being something like the relationship between Christianity and Judaism. Adherents of the newer religion incorporate the sacred texts of the older religion as their own, but they also make a number of modifications and additions that adherents of the older religion would stridently reject. Nonetheless, the two traditions are united in certain ethical and philosophical commitments that more distant outsiders would find baffling.)

Much ado has been made about the "crisis of meaning" in the contemporary West, and how "we", as a civilization, "need" religion (and how in its absence, people will inevitably seek out substitutes like wokeism). But speaking at this level of generality obscures important and interesting psychological differences between individuals. Many, perhaps most, people are actually perfectly fine with operating in the absence of meaning. And they can be quite happy this way. They may be dimly aware that "something" is missing or not quite right, but they'll still live docile and functional existences overall. They achieve this by operating at a persistently minimal level of sensitivity towards issues of meaning, value, aesthetics, etc., a sort of "spiritual hibernation".

It is only a certain segment of the population (whose size I will not venture to estimate -- it may be a larger segment than the hibernators, or it may be smaller, I don't know) that really needs to receive a sense of purpose from an authoritative external social source. And this segment of the population has an outsized effect on society as a whole, because these are the people who most zealously sustain mass social movements like Christianity and wokeism.

Finally there are individuals who are seemingly capable of generating a sui generis sense of meaning wholly from within themselves. This is surely the smallest segment of the population, and it's unlikely that you could learn to emulate their mode of existence if you weren't born into it -- but you wouldn't want to anyway. Such individuals are often consumed by powerful manias to the point of self-ruin, or else they become condemned to inaction, paralyzed with fear over not being able to fulfill the momentous duties they have placed upon themselves.

TheMotte was intended to be a neutral meeting ground where different factions of the culture war could come together and have cordial discussions.

If the forum puts forward the appearance of a consensus on sensitive issues (e.g. “we all know the 2020 election was stolen”) then that would be antithetical to our goals because it would make the atmosphere more hostile to factions with different opinions.

I just wanted you to know that I’m not ignoring this, but I only have so much time in the day for typing long replies, and this thread is already buried. I’ll save my thoughts on this for the next time this topic recurs.

I didn’t take it as an insult lol, just thought it might be a fun factoid for people who didn’t know how much overlap we had. (I know of a couple of high-profile mottizens who have confirmed they post there.)

You can talk to someone and think 'yeah, they would make a good Mottizen' and look at a 4channer or blueskydiver and think that they wouldn't.

I am literally the guy writing the posts on 4chan that make you think “that guy wouldn’t make a good mottizen”.