Shrike

0 followers   follows 0 users   joined 2023 December 20 23:39:44 UTC

User ID: 2807

To destroy their AI clusters, lowering the chance of a misaligned singularity, of course!

But seriously, there are lots of potential reasons. One of them is semiconductor control, although I think this gets less relevant every year. Another is that China might attack the US as part of its opening salvos against Taiwan, out of uncertainty as to what the US would do, which would essentially render US desires moot – there's no world where we don't respond to that.

One of the most overlooked reasons, in my humble opinion, is to stop nuclear proliferation – my understanding is that Japan views a non-CCP-aligned Taiwan as a core national security interest. I think there's a nonzero chance that if Taiwan reunifies with China, Japan acquires nuclear weapons. If Japan acquires nuclear weapons, South Korea may follow.

Nuclear weapons proliferation is arguably bad for a lot of reasons but probably one of the core ones from the perspective of US policymakers is that it undermines American power relative to the rest of the world.

No, what I mean is that it is possibly already baked in – I dunno how likely this is but Trump, as POTUS, may know that we're going to war with China in less than four years.

Also you're not obliterating the industrial capacity of China WW2-style with anything less than carpet-nuking.

On the one hand, touché.

On the other hand, Chinese trade flows through overseas shipping. A war over Taiwan might involve carpet-nuking levels of destruction (the Three Gorges Dam is within Taiwanese striking range) but is more likely to involve interdicting Chinese trade routes, and might also involve striking their port assets. If China loses the war, along with its fleet, its merchant marine, and its port infrastructure, its exports will be hampered for years even if its industrial capacity and critical infrastructure are left untouched.

After the Chinese kill American service members? Forget Trump declaring war, Congress will.

You should consider that the odds of "literal war with China" happening in the next four years are relatively high, possibly 100%, and the odds of the US winning are decent. If Trump gets the US started on onshoring before that war breaks out – and the war then obliterates the industrial capacity of our main rival, the same dynamic that gave the US such a nice industrial run between 1945 and 1979 – he might be hailed as a hero and genius.

it seems profoundly stupid to deliberately crash the good times in the hopes of producing strong men, instead of finding a way to preserve them for as long as possible, when we are on the cusp of technologies (AI, eugenics, etc.) that may allow us to do just that.

If you actually think that AI is going to make a big impact on the economy, it seems rational to try to onshore industry and crash the email/finance/coding class. In an AI boom scenario, the email/finance/coding classes will be out of a job first, and it will take time and human elbow grease to get the AI-run-and-assisted factories up and running. Whatever happens with the tariffs will be gentler than what would happen if every single corporation in America replaced ~everyone whose primary job is done on a computer with GPT-7 Pro once it demoed.

If you are an AI near-term-ist, it makes a ton of sense to blow up an industry that is going to be destroyed anyway in order to begin rebuilding an industrial base that may or may not be accelerated to stratospheric levels by AI. If we assume that AI can radically reshape industry, we might as well start working on that project now, particularly if we are in a competition with China, which is roughly on par with us in AI and already has a large industrial base. Supposing that AI makes industry 100% more powerful over the course of ten years: the United States needs as big an industrial base as possible when that AI drops, or it potentially loses in meatspace very badly to countries like China.
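To make the head-start logic concrete, here's a minimal back-of-the-envelope sketch (the growth rate, timeline, and multiplier are all invented for illustration, not forecasts): whatever base exists when the AI multiplier arrives is what gets multiplied, so growth beforehand compounds.

```python
# Toy model: industrial capacity under an AI productivity multiplier.
# All numbers are illustrative assumptions, not forecasts.

def capacity_when_ai_lands(base, annual_growth, years, ai_multiplier):
    """Grow the base for `years` years, then apply the AI multiplier."""
    return base * (1 + annual_growth) ** years * ai_multiplier

AI_MULTIPLIER = 2.0  # "AI makes industry 100% more powerful"
YEARS = 10

onshore_now = capacity_when_ai_lands(100, 0.05, YEARS, AI_MULTIPLIER)
wait_for_ai = capacity_when_ai_lands(100, 0.00, YEARS, AI_MULTIPLIER)

print(f"Start onshoring now: {onshore_now:.0f}")  # ~326
print(f"Wait for AI:         {wait_for_ai:.0f}")  # 200
```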

Eugenics will not meaningfully affect anything for 40+ years (and that's assuming it takes off, which is unlikely).

I am not an AI near-term-ist, but if you really think AI is going to take off, it's worth considering what that might mean.

It shows that even when you control for education level, how much someone followed the race was negatively correlated with support for Trump in 2024.

I realize this is a tangent but I find this incredibly funny. To me, because of the very specific news media dynamics in the last couple of elections, that is like suggesting we should trust cocaine addicts to set drug policy.

I definitely wonder if a smarter Jones Act (e.g. tariff discounts for goods delivered to US ports on US vessels) would work better than the Jones Act as implemented.
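A minimal sketch of what that discount mechanism might look like, with purely hypothetical rates (the actual Jones Act restricts domestic routes to US-built, US-flagged vessels rather than using tariff incentives):

```python
# Hypothetical "smarter Jones Act": rather than banning foreign vessels
# from domestic routes, discount the tariff on cargo landed at US ports
# by qualifying US vessels. All rates here are made up.

BASE_TARIFF = 0.10         # assumed 10% baseline tariff
US_VESSEL_DISCOUNT = 0.50  # assumed 50% off for qualifying vessels

def tariff_due(cargo_value: float, us_built: bool, us_flagged: bool) -> float:
    rate = BASE_TARIFF
    if us_built and us_flagged:
        rate *= 1 - US_VESSEL_DISCOUNT
    return cargo_value * rate

print(tariff_due(1_000_000, us_built=True, us_flagged=True))    # 50000.0
print(tariff_due(1_000_000, us_built=False, us_flagged=False))  # 100000.0
```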

Absolutely. For fun I'd even add the AI in Alien (1979), which is programmed perfectly to serve its masters but by that very token is indifferent to its fellow humans and even its own survival in a way a rational human would not be.

100%. I'd add that "AI going bad" arguably predates the computer as a trope, with Frankenstein unambiguously serving as a model for "humans create cool modern scientific innovation that thinks for itself and turns on them" and I am pretty sure that Frankenstein isn't even the oldest example of that trope, just a particularly notable one.

As HBD advocates, they believe in a relatively static human nature that cannot be reshaped by social institutions.

If HBD advocates believe this then they have not thought their position through.

In fact I think HBD postulates a much more flexible view of human nature than blank-slate theories do. It seems to me the blank-slate worldview suggests that all humans are basically alike: we absorb what is in our environment and reflect it outwards, and thus changes in groups are the result of external material forces. This unavoidably leads to messy questions about to what degree "the environment" means culture versus, uh, guns, germs, and steel, I guess – but the point is that humans are basically biologically the same.

Whereas the HBD line is that there are genetic differences between groups that meaningfully affect outcomes. Interestingly, HBD people also tend to kick the can back up a level by suggesting those differences are themselves downstream of the environment, but they also point to culture – e.g. it is HBD people you see suggesting that things like "banning cousin marriage" or "executing violent offenders" have meaningful downstream genetic impacts on groups of people. Which means, pretty obviously, for both biological and logical reasons, that anyone who believes in HBD believes that meaningful parts of human nature can be reshaped by social institutions, given sufficiently aggressive methods.

In fact I'd go further and suggest that believing in HBD – or really believing that genes have any downstream effects on culture or people groups – commits you to believing that social institutions can reshape human nature: perhaps not the big parts of Human Nature that people make movies and write novels about, but the meaningful parts that make it possible to divide people into groups with different natures.

I think that some cultural aspects of the modern US would shock and appall them but the big picture would look very familiar to them from a historical level. I imagine they would immediately start making familiar analogies to the Roman Republic and its transition to empire.

Not that I necessarily think that those analogies are 100% correct, but I suspect it would pattern-match for them quickly. I'm not sure they would think it was good but it probably would feel familiar.

Unlike the other two, literacy is not an undisputed good. It is a difficult mode of communication that takes years to learn, and about 1/5th of adults in the developed world never learn to read for comprehension. We prize literacy because, for now, it's required to navigate our society.

Literacy also seems to contribute to poor memory skills* at a cultural level and, if overindulged in, poor eyesight at the individual level.

*unfortunately I suspect that replacing literacy with TikTok will make the problem worse, not better

Yeah, people forget that the original theoretical question of the American Revolution was whether the British Parliament in England or the British Parliaments in America could govern the Englishmen in America.

That's definitely not true for tactical weapons – the Russians (allegedly) almost fired a nuclear torpedo off Cuba during the Cuban Missile Crisis, a decision that was up to three naval officers aboard the ship.

I would not be tremendously surprised if modern nuclear submarines had the ability to launch strategic weapons under their own recognizance, although almost certainly not on the initiative of merely one officer. Perhaps they don't have the codes, but it seems plausible that the message they are actually waiting for is not mechanically necessary to use the nuclear weapons – it is merely an authentication code.

Now, under this circumstance, if the entire C&C chain were wiped out instantly, their response would be delayed. But presumably they would still be able to come to a decision once the BBC announced which world cities had been destroyed, and by whom.

It doesn't really matter where the subs are if their missiles have worldwide reach and you are just using them for second-strike.

I wouldn't necessarily bet that the US can flawlessly eliminate the entire deployed Russian SSBN fleet in their bastion behind the Russian ASW wall. I'm sure the US tries to track them but from what I understand Underwater Ain't Easy.

I for one like the Hlynkaposting.

If the US is actually going to reindustrialize, seeing a mass exodus of basically intelligent people from email jobs could be extremely beneficial.

They realized what was happening far too late

I think this is correct, and in areas where conservatives have made a concerted effort (particularly in law) they've been able to do very well.

Something I found a bit funny about the Woodgrains position of "shouldn't you build it up rather than tearing it down" is that it seems to imply that Trump et al. should redirect all of those funds straight into right-wing institutions. Which I somehow doubt would make people very happy. But if those institutions were run by serious conservatives instead of grifter conservatives, I think they would do just fine.

Thank you :)

I dunno. It seems to me that lefties live in the shadow of Marx much more so than the right lives in the shadow of, say, Aquinas.

You're not exactly wrong about conservatives definitionally (although consider conservative hero Edmund Burke – not exactly a hidebound anti-reformer), but righties per se have no problem with new and innovative ideas. Look at science fiction (which is very forward-looking): is it more "conservative" or "rightie" than other areas of literature, or less? Now look at mainstream film, media, literature, etc. Is it eaten up with retreads, remakes, retellings of fairy tales, and people reliving their childhoods? Where is the innovation, truly?

Or look at politics - is there really more innovation in the Democratic national platform than "we should make Greenland a US territory?"

The fundamental problem the Red Tribe/American conservatism faces is a culture of proud, resentful ignorance. They can't or won't produce knowledge and they distrust anyone who does.

I do not think this is true at all. The right is very good at producing knowledge, it is just unevenly distributed. If you spend any time reading Supreme Court briefs, you'll see rightie knowledge production in action, as this is an area where the right has (very successfully!) focused much of their energy and attention.

I think that the right-wing intellectual capital is considerably better than that on the left, if considerably smaller. Conservative or conservative-friendly educational institutions I think can be very good, just dwarfed in number by default-left-wing ones. (Some of this depends on what counts as "right" and "left" of course.)

Personally I find "spreadsheets" very apt so far. I think they definitely have the potential to disrupt some jobs. But if I'm being honest I think a lot of the "email jobs" are begging for disruption anyway, for other reasons. I would not be surprised if "AI" takes the blame for something that was more-or-less going to happen anyway.

I think robotics (which obviously has a lot of overlap with AI!) is potentially vastly more impactful than just "an AI that can do your email job." If you started randomly shooting "email job holders" and "guys who maintain power lines and fiber optic cables" you would notice the disruption in the power lines and fiber optic cables much sooner unless you got weirdly (un?)lucky shooting email jobbers. Similarly, AI will have a much bigger impact if it comes with concrete physical improvements instead of just better video games, or more website code, or better-written emails, or whatever, notwithstanding the fact that a lot of people work in the video game/coding/email industry.

(I hope I am right about that. I guess wireheading is kinda an option...)

In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't.

Yes, I lean towards thinking that AI is often overblown, but at least part of my point here is that probably a lot more automation was possible even prior to AI than has actually been embraced so far. Just because something is possible does not mean that it will be implemented, or implemented quickly.

A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case.

I think this is pretty analogous to my experience with it (which doesn't involve programming). Force multiplier, yes, definitely. But so is Excel. And what happened with Excel wasn't that accountants went out of business, but rather that (from what I can tell, anyway) fairly sophisticated mathematical operations and financial monitoring became routine.

Why hasn't it already?

My wife worked about five years ago as a credit analyst, where part of her job involved determining whether or not to extend extra lines of credit: the easiest thing in the world (I would think) to automate. Really, a very simple algorithm based off of known data should be able to make those decisions, right? But my wife, using extremely outdated software, at a place with massive employee retention problems due to insanely high workloads, was tasked with following a set of general guidelines to determine whether or not to extend additional credit. In some cases the guidelines were a bit ambiguous. She was instructed by her manager to use her gut.
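For a sense of how simple that automation could be, here is a minimal sketch of a rule-based decision of the kind involved – the field names and thresholds are invented for illustration, not her employer's actual guidelines:

```python
# Toy rule-based credit-extension decision. Field names and
# thresholds are hypothetical, for illustration only.

def extend_credit(score: int, utilization: float, late_payments_12mo: int) -> str:
    if score >= 720 and utilization < 0.30 and late_payments_12mo == 0:
        return "approve"
    if score < 620 or late_payments_12mo >= 3:
        return "deny"
    # The ambiguous middle band the guidelines never resolved:
    # the part she was told to decide with her gut.
    return "manual review"

print(extend_credit(750, 0.15, 0))  # approve
print(extend_credit(600, 0.80, 4))  # deny
print(extend_credit(680, 0.45, 1))  # manual review
```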

As I think I've mentioned before, I work with AI for my IRL job fairly extensively, although mostly second-hand. The work we do now would have required much more human effort prior to modern AI models, and having been involved in the transition between "useless-to-us-GPT" and "oh wow this is actually good" I can tell you that our model of action pivoted away from mass employment. But we still need people - the AI requires a lot of hand-holding, although I am optimistic it will improve in that regard - and AI can't sell people on a product. You seem to be envisioning a world where an AI can do the work of 10 people at a 14 person company, so the company shrinks to 4 people. I'm living in a world where AI can do the work of 10 people, so we're likely to employ (let's say) 10 people instead of 20 and do 100x the work the 20 people would have been able to do. It's quite possible that in our endeavor the AI is actually the difference between success and failure and when it's all said and done by 2050 we end up employing 50 people instead of zero.

How far that generalizes, I do not know. What I do know is that "capitalism" is often extraordinarily inefficient already. If AI ends up doing jobs that could have been replaced in whole or in part by automation a decade before anyone had ever heard of "ChatGPT" it will be because AI is the new and sexy thing, not because "capitalism" is insanely efficient and good at making decisions. It seems quite plausible to me that people will still be using their gut at my wife's place of employment at the same time that AI is giving input into high-level decisions in Silicon Valley boardrooms.

I definitely believe that AI and automation change the shape of industry over the next 50 years - and yes, the next 5. What I would not bet on (absent other factors, which are plenteous) is everyone waking up the same day and deciding to fire all their employees and replace them with AI, mass pandemonium in the streets. For one thing, the people who would make the decision to do that are the people least likely to be comfortable with using AI. Instead, they will ask the people most likely to be replaced by AI to study the question of whether or not to replace them with AI. How do you think that's going to go? There's also the "lobster dominance hierarchy" - people prefer to boss other people around rather than lord it over computers. Money and personnel are a measuring stick of importance and the managerial class won't give up on that easily.

Yeah there's not a world where you accidentally invite a journo to a chat and brag about your solid OPSEC and it's not embarrassing...except for the 16D chess situation. Which I uh wouldn't bet on.