FCfromSSC

Nuclear levels of sour

34 followers   follows 3 users   joined 2022 September 05 18:38:19 UTC

User ID: 675

Let's just wind this thing down. I am honestly tired of explaining the value of an alliance with a bloc with a huge economy and population and very similar interests.

Why would you suppose we have very similar interests?

despite being an Americanophile through and through.

What sort of American?

I want Merz and everyone to tell Trump to fuck off in no uncertain terms and stop giving him face-saving exits.

There's a considerable number of Americans who would welcome this, I'd imagine.

I don't want to live in Europe. I don't want to live anywhere like Europe. I don't want where I live to become more like Europe, even marginally. I would prefer actual war against the authorities to this happening. Your entire social consensus is inimical to what I view as fundamental human rights and basic principles of liberty. We are not friends in any meaningful sense; you are allied with my tribal enemies, and will be for the foreseeable future.

Again, Carney says it best:

You cannot “live within the lie” of mutual benefit through integration when integration becomes the source of your subordination.

I perceive integration with Europe as one of the major sources of my subordination.

Here you go. Or perhaps that's not a "serious manner"? Do you disagree that the BBC and the social consensus it represents is deeply hostile to America's Red Tribe?

Have you ever considered that you are a little bitch?

This is not acceptable. The rest of your post is fine, but you are being deliberately inflammatory.

You have no notes either way on your moderation log. I get that you are using the insult for dramatic effect, and so I am giving you a warning. Do not post like this in the future, or you will receive a ban.

You are welcome to reject the inevitability of extinction. You are not welcome to use your rejection of extinction to claim a divine right to get everything you want the way you want it. If you need things from other people, resources, cooperation, whatever, you have to actually negotiate for them, not declare that they must do what you want or else they're damning all humanity.

I am more worried about current power allocation than I am about hypothetical hostile super intelligent AGI. Maybe I'm wrong to think that, but given that the current AI safety alliance does not see a place in the future for me and mine anyway, it doesn't seem like I've got much of a choice.

You cannot “live within the lie” of mutual benefit through integration when integration becomes the source of your subordination.

This is straightforwardly true. The problem is that it runs the other way also. The political problem facing Red Tribe has been obvious for some time:

  • We have to win a conflict against Blue Tribe, or we will be ruled for the foreseeable future by people who hate us.
  • We have to fund our side of this conflict out of our own pockets.
  • Blue Tribe funds themselves out of our tax money.
  • Blue Tribe is allied with the Blue-Tribe analogues in pretty much every Euro country, most of which are also funded to a considerable degree out of our tax money.
  • Those allied Blue-Tribe analogues have already won their tribal fight in their home countries, so completely that their operations are effectively uncontested.
  • Those Blue-Tribe analogues interfere directly in our domestic politics in ways that give our Blue Tribe additional considerable advantages.
  • Those Blue-Tribe analogues have repeatedly and obviously broken some of the rules we care about the most, and have been openly and quite effectively coordinating to help Blue Tribe break those same rules.

As the man says, integration became the source of our subordination. European governments have been actively cooperating with Blue Tribe to close the door on us and our values for at least the last decade. We have already been fighting them for at least the last decade. There is very little hope that this will change, and there is very little observable value in maintaining a situationship that will never, ever break to our advantage.

The multilateral institutions on which middle powers relied (the WTO, the UN, the COP, the architecture of collective problem solving) are greatly diminished.

Yeah, that's sort of the point, isn't it? Why do I want this "architecture of collective problem solving" stronger, when in fact a lot of the "collective problems" it "solves" appear to involve my tribe's continued existence?

I am not sure who's going to be America's ally in WWIII now.

How about we sit WWIII out? I for one am not particularly interested in seeing the sons of my friends and family fed into a droneswarm Armageddon.

Five years ago, even two years ago, it was taken as obvious that we (meaning primarily the US) were going to fight a major war with China and/or Russia. How does the above shift the probabilities, in your estimation? Do you think the crackup of the previous Rules-Based order makes an imminent fight with China less likely or more?

The obvious problem is economics. Does this end the Dollar as reserve currency? Does this crash the global economy? Are we Americans going to get super poor forever? ...I've been thinking about writing a post, collecting some of the economic predictions made here in the runup to the 2024 election, and comparing them to what's happened since, with comparison and contrast to the economic predictions about Brexit. To boil it down, I note that the economic predictions and even current assessments seem fundamentally unreliable, that the previous order seemed obviously unsustainable, and that the risk is worth it given the current trajectory.

I do not want America to rule the world, especially not if the version of "America" that rules is a Blue Tribe that has secured itself permanent unaccountable power. Even if it were my tribe ascendant, the value seems quite limited. I do not want to be subjugated by the Chinese, but I do not want to fight a major war with them either, and my assessment is that as of a year ago, pretty much everyone in this forum considered such a war to be an obvious inevitability. And for what? I do not want my country to be poorer, but I note that our previous economic model seemed to have very obvious problems that only ever got worse, and the only solution anyone could even begin to imagine was to keep doing the same things even harder, as pressure built toward an inevitable blowout.

I wanted change. This is change. It is scary and somewhat horrifying change... but it's not obvious what the alternative was supposed to be, and what seem to me to be plausible guesses seem worse.

All available evidence indicates that you and all your descendants will someday die no matter what anyone does. All available evidence indicates that humanity will go extinct, and that extinction being soon is a distinct possibility, again no matter what anyone does.

I am not building AI. I am pointing out that Yudkowsky's proposed solution seems both very unlikely to work and also very likely unnecessary for a whole host of reasons, and that there appears to me approximately zero reason to play along with his schemes. I am not gambling with your life, or that of your descendants. You do not get to stack theories a hundred layers high and then declare that therefore, everyone has to do what you say or be branded a villain.

I say Yudkowsky demands unaccountable power, because it is obvious that this is, in fact, exactly what he's demanding. Neither he nor you get to beg out of the entire concept of politics because you've invented a very, very scary ghost story.

Neither Yudkowsky nor you are the first humans to discover that "living" requires amassing unaccountable power. Time is not used well under a rock.

In any case, I hear Pascal also has a pretty good wager.

My determination to close off the effect zone would depend on my assessment of two probabilities: first, that such a lockdown could actually be effected, and second, that apocalyptic destruction arrives from other sources regardless. If lockdown seems unlikely to work, and there are numerous other, similar threats besides, then it seems to me I would do better to spend what time I have well.

Groups of humans such as the United States are able to blow up a target from so high up in the air that you can't see where the bomb was launched from. A medieval king couldn't even fathom defending against this sort of attack.

And yet, humans have figured out how to defend against this sort of attack, to the point that we decisively lost the war in Afghanistan.

If you'll allow me to quote myself:

Coin-op payphones granted, there's something to Gibsonian cyberpunk, something between an insight and a thesis, that sets his work apart from the stolid technothrillers of Clancy and company. Something along the lines of "technology is useful, not merely because they have a rock and you have a gun, but because it inherently and intractably complicates the arithmetic of power." His stories are built on a recognition that people are not in control, that our systems reliably fail, that our plans are dismayed, and that far from ameliorating these conditions, technology only accelerates them.

"AI Safety" operates off a fundamentally Enlightened axiom that chaos and entropy can, with sufficient intelligence, be controlled. I think they are wrong, for roughly the same reasons that all previous attempts to create perfect order have been wrong: reality is too complicated.

I am not arguing that AI can't kill us all. I'm pretty sure we can kill us all, and I think the likelihood of us doing so is considerable.

Yudkowsky does not want to rule you, he just wants to keep you, or anyone including himself, from massing billions of dollars worth of compute and using it to end humanity.

He wants to invent a new category of crime with global jurisdiction and ironclad, merciless enforcement. I am 100% on board, provided that it is me and mine given exclusive control of the surveillance and strike capabilities needed to enforce this regime. Don't worry, we'll be extremely diligent in ensuring that dangerous AI is suppressed.

It seems to me that there is a long tradition of smart people coming together and inventing new weapons and technologies that were not foreseen even in the recent past.

There's also a long tradition of smart people "foreseeing" weapons that aren't physically possible.

There's also a long tradition of smart people failing to recognize that weapons or other tech can stagnate due to basic physical laws.

"Maybe the AI will figure out how to hack the simulation" or "maybe the AI will kill us all in the same second with hypertech nanobots" are not scenarios that we can plan for in any meaningful way, but much AI safety messaging uses them as examples. They do this because they are worried about out-of-context problems, and want to handle such problems rationally. But the core problem is that out-of-context problems cannot in fact be handled rationally, because our resources are finite and the out-of-context possibility space is infinite.

They argue that Superintelligence will give the AI an unbridgeable strategic advantage, that intelligence allows unlimited Xanatos Gambits, but this doesn't in fact appear to be true. Planning involves handling variables, and it seems obvious to me that variables scale much, much faster than intelligence's capacity to solve for their combinations. And again, we can see this in the real world now, because we have superintelligent agents at home: governments, corporations, markets, large-scale organizations that exist to amplify human capabilities into the superhuman, to gather, digest, and coordinate on masses of data far, far beyond what any human can process. And what we see is that complexity swamps these superintelligences on a regular basis.
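
To make the scaling intuition concrete, here is a minimal back-of-the-envelope sketch in Python (illustrative only; every number and name in it is invented, and nothing in it comes from the original argument): a planner whose evaluation capacity grows a hundred-fold is still swamped once the joint configuration space of the variables it must reason over grows exponentially.

    # Back-of-the-envelope sketch: linear gains in planning capacity
    # versus exponential growth in the space of variable combinations.
    # All numbers are invented for illustration.

    def state_space(num_variables: int, states_per_variable: int = 2) -> int:
        # Joint configurations of num_variables variables: k ** n.
        return states_per_variable ** num_variables

    def planner_capacity(base: int = 10**6, intelligence_multiplier: int = 100) -> int:
        # Configurations a planner can evaluate, scaling linearly with "intelligence".
        return base * intelligence_multiplier

    for n in (20, 40, 60):
        space = state_space(n)
        capacity = planner_capacity()  # a planner made 100x smarter
        print(f"{n:>2} variables: {space:.1e} configurations, "
              f"capacity {capacity:.1e}, coverage {capacity / space:.1e}")

Tripling the variable count from 20 to 60 erases the hundred-fold capacity gain many times over, which is the shape of the claim: complexity compounds multiplicatively while capacity grows by a constant factor.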

And there is of course the more mundane issue that a sufficiently advanced AI merely willing to hand cranks the already-known ability to manufacture superweapons could be existential.

You frame this as though we are in some sort of stable environment, and AI might move us to an environment of severe risk. But it appears to me that we are already in an environment of severe risk, and AI simply makes things a bit worse. We are already living in the vulnerable world; the vulnerabilities just aren't evenly distributed yet.

Meanwhile, "AI Safety" necessarily involves amassing absolute power, and as every human knows, I myself am the only human that can be truly trusted with absolute power, though my tribal champions might be barely acceptable in the final extremity. I am flatly unwilling to allow Yudkowsky to rule me, no matter how much he tries to explain that it's for my own good. I do not believe Coherent Extrapolated Volition is a thing that can possibly exist, and I would rather kill and die than allow him to calculate mine for me.

Where do these diminishing returns kick in?

Within the human scale, at the point where Von Neumann was a functionary, where neither New Soviet Man nor the Thousand Year Reich arrived, where Technocracy is a bad joke, and where Sherlock Holmes has never existed, even in the aggregate.

Or maybe you mean the application of intelligence, in which case I'd say that just within our current constraints it has given us the nuclear bomb, it can manufacture pandemics, and it can penetrate and shut down important technical infrastructure.

We can do all those things. Can it generate airborne nano factories whose product causes all humans to drop dead within the same second? I'm skeptical.

It seems to me that it does, yes. If your intelligence scales a hundred-fold, but the complexity of the thing you want to do scales a billion-fold, you have lost progress, not gained it. The AI risk model is that intelligence scales faster than complexity and that hard limits don't exist; it's not actually clear that this is the case, and the general stagnation of scientific progress gives some evidence that the opposite is the case. It seems entirely possible to me that even a superintelligent AI runs into hard limits before it begins devouring the stars.

Now on the one hand, this doesn't seem like something I'd want to gamble on. On the other hand, it's obviously not my choice whether we gamble on it or not; AI safety has pretty clearly failed by its own standards, there is no particular reason to believe that "safe" AI is a thing that can even potentially exist, and we are going to shoot for AGI anyway. What will happen will happen. The question is, how should AI doomsday worries affect my own decisions? And the answer, it seems to me, is that I should proceed from the assumption that AI doomsday won't happen, because that's the branch where my decisions matter to any significant degree. I can solve neither AI doomsday nor metastable vacuum decay. Better to worry about the problems I can solve.

With an arbitrarily large amount of intelligence deployed to this end, unless there is something spooky going on in the human brain, we should expect rapid and recursive improvement.

...Or unless intelligence suffers from diminishing returns, which actually seems fairly likely.

you can make your own black powder, and your own cannons to shoot it out of.

Do you oppose the use of public resources to subsidize their lifestyle? Can you actually prevent public resources from being used to subsidize their lifestyle? Or is this just policy arbitrage, where we appeal to atomic individualism or social unity, whichever is convenient at the moment?

But in the same way that prediction markets help to reveal true beliefs, free economic markets reveal true preferences.

Would you agree that most poor people have a revealed true preference to invest most of the money they receive into credit card payments and similar fees, and that the people who receive those fees are benevolent actors working tirelessly to help such poor people live their very best life?

If not, I'm curious as to why you view the market as "revealing true preferences" in the one case and not the other.

That seems like an extremely bad question to ask. Do you interrogate all your moral intuitions off a similar framing, starting with what you wish was true and working from there? And note that you are treating "poor" and "unfortunate" as philosophical primitives, states that simply exist ex nihilo.

Suppose I assert that all humans deserve justice. How does this interact with your "how much would I want the poor and unfortunate to get, in a vacuum where it's no skin off my nose or anyone else's"? Because my understanding is that what some humans deserve from justice is swift, merciless death.

The specific speech that brought the question to mind was Alexander's purported speech to his mutinous army at Opis. A neat parallel to your own choice, it seems.

I feel both these examples are quite distant, and that I have seen and heard many examples of leaders or prominent men being noted for addressing hostile audiences in circumstances of significant danger, and nonetheless persuading the audience by their appeal. Unfortunately, I can't recall them; as with our two examples here, it would be interesting to see what elements of shared culture people appeal to under duress, and assess whether those elements are meaningfully shared under current conditions.

The point is that happiness does not derive from material circumstances, in opposition to the underpinnings of the argument that all people "deserve to be happy", contrasted with "every person deserves to be as happy and safe as they can accomplish themselves". I'm not sure the latter is the precise wording I'd nail my flag to, but the former seems profoundly untrustworthy and dangerous.

My concern is that WhiningCoil does not recognize that all else being equal it is always good, rather than neutral, for sentient beings to have nice things.

It seems to me more likely that they recognize that all else is, in fact, never equal, never has been, and likely never will be.

Solzhenitsyn figured out how to be happy in a death camp. Some Ukrainians in the Holodomor figured out how to be happy while they and their families were intentionally starved to death. These apparent historical facts appear to me to support @WhiningCoil's model of happiness, and undermine the one you are presenting.

This world. 14th Amendment, baby. You don’t get to pick one line from the Constitution and ignore the rest.

Why not? Everyone else does, and whatever objections you and I might muster have clearly failed.

To be clear, I do not endorse the assessment described above. I do not believe that "American" is a boundary that can be effectively drawn on racial or ethnic lines. Unfortunately, that agreement is downstream from my assessment that "American" is not a boundary that can be effectively drawn at all.

I think this is a pretty good effort at defining "American culture", and do not believe that I could do better.

Suppose you are confronted by an angry and possibly violent mob of Americans. Which of these features you have listed would you appeal to in attempting to talk them down and convince them to disperse? That is to say, which of these features provide serious, reliable traction on an interpersonal level?

Talking down angry mobs is something notable leaders have needed to do many times throughout history, and generally "culture" is what has allowed them to do it. Do you believe you are describing that sort of culture above?

Good lord no it didn't.

I watched it happen. I lived through it happening. The GWOT drove me into the Blue Tribe for a decade, and I only returned when the existing Red establishment was driven out in turn. 2000s Republican leaders now mostly vote Democrat.

As for the destruction of America...

If anything, since it became a bipartisan thing to criticize it ought to be a unifying factor, right?

We don't have to appeal to theory when we can observe what actually happened. The GWOT burned the Reagan coalition to the ground and supercharged progressivism. Progressive overreach has, in turn, destroyed the nation. The Constitution is dead. Our system of government is pretty clearly dead. Tribal values are now mutually incoherent and mutually intolerable, and the stress of tribal conflict is blowing out what institutions remain to us one after another. Reds and blues hate each other, wish to harm each other, and are gleefully seeking escalation to subjugate each other. This process takes time, but the arc is not ambiguous, and neither is where it leads. At some point in the next few years, it will be Blue Tribe's turn to wield federal power, and Red Tribe's turn to resist it, and at that point, if not sooner, things will get significantly worse. It is insanity at this point to think either that the tribes are going to coordinate a halt to the escalations, or that our society can survive another decade of accumulated escalations. The peace is not going to last.

But also, intervening in Iran doesn't have to involve an invasion and occupation. That is learning.

As we have previously discussed, Libya also did not involve an invasion and occupation.

You appear to be assuming that the general population of Iran is some sort of generic huddled mass, yearning to breathe free, that the problem is just the mullahs and that if we sweep the mullahs out of the way, Iran magically transforms into Michigan. But Iran is not Michigan; at this point, even Michigan is not Michigan. Iran's current government are not alien space invaders, but rather Iranians who emerged from the population of Iran, and are thus at least somewhat representative of the sort of leadership that population produces. The Shah was an Iranian leader who operated torture dungeons. He was overthrown by Iranian Muslim communists(?), who... then also operated torture dungeons. Why do you believe that radical change in the government will produce a totally new sort of government, when it did not do so previously?

Your confidence that an intervention likely leads to a better situation for all involved is contradicted by recent experience, which you are dismissing out of hand. I have no reason to believe that "this time, it will be different", because it has not in fact been different any of the previous times. I do not care that the mobs are crying out for our aid; mobs cry out for lots of things when such appeals are obviously in their immediate interest, but that does not mean what they are crying out for today is a reliable indicator of their future preferences, and intervention has a grim track record.

I am not questioning whether we can bomb a second-tier power. I am questioning whether bombing will do any good, with the full knowledge that if I and people like me consent to bombing, and things go sideways, next we will be arguing over whether we should bomb them more, or maybe send just a few troops, and then just a few more. I note that the US and Israel "dominated a second-tier power" less than a year ago, and yet here you are, demanding we bomb them again. Did we not dominate them hard enough last time? If so, why are you claiming that this current domination will succeed where the previous domination failed?

I think any objective observer who isn't suffering from Iraq Syndrome or a committed isolationist can see this is a good case for it.

Any observer who does not suffer from "Iraq Syndrome" is not thinking objectively. The GWOT destroyed the Republican party as an institution, and arguably destroyed America as a nation. It was ruinously expensive by every possible measure, for little to no perceivable benefit. Those responsible have taken no accountability and have suffered no consequences, and there is not even the slightest reason to be confident that Lessons have been Learned. And that was before we entered a fundamental revolution in military affairs, wherein it is questionable whether our comically expensive military is actually capable of surviving, much less dominating.

You should not need to stick your dick in a blender three times (four? Five?) to learn not to do that, but apparently some people need to go all the way down to the angriest inch.

What does the Alternate history look like if America stays entirely out of World War I? It's hard for me to imagine things working out worse than they did in our timeline. Is it enough change that WWII doesn't happen, or ends up as the West vs the Commies?

Again, I think there's a strong case to be made that our current position is pretty similar to 1910 or so, for a whole variety of reasons. I think we should try to lean hard into isolationism this time around, not least from observing how WWI and WWII went for the sclerotic, unwieldy empires that rolled into them. Modelling our current choices off WWII history is like a 55-year-old morbidly obese former athlete with a bad back and a bum knee thinking he can throw down like he did when he was an 18-year-old in peak condition. We should be considering our future more from the perspective of Tsarist Russia or the Austro-Hungarian Empire, not from that of a vital, highly cohesive, highly motivated state gifted with secure borders and unlimited, untapped natural resources.