If the US is actually going to reindustrialize, a mass exodus of basically intelligent people from email jobs could be extremely beneficial.
They realized what was happening far too late
I think this is correct, and in areas where conservatives have made a concerted effort (particularly in law) they've been able to do very well.
Something I found a bit funny about the Woodgrains position of "shouldn't you build it up rather than tearing it down" is that it seems to imply that Trump et al. should redirect all of those funds straight into right-wing institutions. Which I somehow doubt would make people very happy. But if those institutions were run by serious conservatives instead of grifter conservatives, I think they would do just fine.
Thank you :)
I dunno. It seems to me that lefties live in the shadow of Marx much more so than the right lives in the shadow of, say, Aquinas.
You're not exactly wrong about conservatives definitionally (although consider conservative hero Edmund Burke - not exactly a hidebound anti-reformer), but righties per se have no problem with new and innovative ideas. Look at science fiction (which is very forward-looking) - is it more "conservative" or "rightie" than other areas of literature? Or less? Now look at mainstream film, media, literature, etc. Is it eaten up with retreads, remakes, retellings of fairy-tales and people reliving their childhoods? Where is the innovation truly?
Or look at politics - is there really more innovation in the Democratic national platform than "we should make Greenland a US territory?"
The fundamental problem the Red Tribe/American conservatism faces is a culture of proud, resentful ignorance. They can't or won't produce knowledge and they distrust anyone who does.
I do not think this is true at all. The right is very good at producing knowledge, it is just unevenly distributed. If you spend any time reading Supreme Court briefs, you'll see rightie knowledge production in action, as this is an area where the right has (very successfully!) focused much of their energy and attention.
I think that the right-wing intellectual capital is considerably better than that on the left, if considerably smaller. Conservative or conservative-friendly educational institutions I think can be very good, just dwarfed in number by default-left-wing ones. (Some of this depends on what counts as "right" and "left" of course.)
Personally I find "spreadsheets" very apt so far. I think they definitely have the potential to disrupt some jobs. But if I'm being honest I think a lot of the "email jobs" are begging for disruption anyway, for other reasons. I would not be surprised if "AI" takes the blame for something that was more-or-less going to happen anyway.
I think robotics (which obviously has a lot of overlap with AI!) is potentially vastly more impactful than just "an AI that can do your email job." If you started randomly shooting "email job holders" and "guys who maintain power lines and fiber optic cables" you would notice the disruption in the power lines and fiber optic cables much sooner unless you got weirdly (un?)lucky shooting email jobbers. Similarly, AI will have a much bigger impact if it comes with concrete physical improvements instead of just better video games, or more website code, or better-written emails, or whatever, notwithstanding the fact that a lot of people work in the video game/coding/email industry.
(I hope I am right about that. I guess wireheading is kinda an option...)
In my opinion, it hasn't because (contrary to what AI hype proponents say) it can't.
Yes, I lean towards thinking that AI is often overblown, but at least part of my point here is that probably a lot more automation was possible even prior to AI than has actually been embraced so far. Just because something is possible does not mean that it will be implemented, or implemented quickly.
A skilled programmer can use AI tools as a force multiplier in some situations, so they do have a (fairly narrow) use case.
I think this is pretty analogous to my experience with it (which doesn't involve programming). Force multiplier, yes, definitely. But so is Excel. And what happened with Excel wasn't that accountants went out of business, but rather that (from what I can tell, anyway) fairly sophisticated mathematical operations and financial monitoring became routine.
Why hasn't it already?
My wife worked about five years ago as a credit analyst, where part of her job involved determining whether or not to extend extra lines of credit: the easiest thing in the world (I would think) to automate. Really, a very simple algorithm based on known data should be able to make those decisions, right? But my wife, using extremely outdated software, at a place with massive employee retention problems due to insanely high workloads, was tasked with following a set of general guidelines to determine whether or not to extend additional credit. In some cases the guidelines were a bit ambiguous. She was instructed by her manager to use her gut.
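To make the point concrete, the kind of "very simple algorithm" in question could be sketched in a few lines. Everything here is invented for illustration: the field names, the thresholds, and the approval cap are all hypothetical, not anyone's actual underwriting rules.

```python
# Hypothetical sketch of a rules-based credit-line decision.
# All thresholds and field names are made up for illustration.

def extend_credit(payment_history_months: int, missed_payments: int,
                  utilization: float, requested_increase: float) -> bool:
    """Return True if an additional line of credit should be extended."""
    if missed_payments > 2:          # too many missed payments
        return False
    if payment_history_months < 12:  # not enough history on file
        return False
    if utilization > 0.8:            # existing credit already heavily used
        return False
    return requested_increase <= 5000  # cap on automatic approvals

print(extend_credit(24, 0, 0.3, 2000))  # clean history -> True
print(extend_credit(6, 1, 0.2, 1000))   # too little history -> False
```

The point is not that these particular rules are right, but that decisions of this shape were automatable with decades-old technology; no gut required.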
As I think I've mentioned before, I work with AI for my IRL job fairly extensively, although mostly second-hand. The work we do now would have required much more human effort prior to modern AI models, and having been involved in the transition between "useless-to-us-GPT" and "oh wow this is actually good" I can tell you that our model of action pivoted away from mass employment. But we still need people - the AI requires a lot of hand-holding, although I am optimistic it will improve in that regard - and AI can't sell people on a product. You seem to be envisioning a world where an AI can do the work of 10 people at a 14 person company, so the company shrinks to 4 people. I'm living in a world where AI can do the work of 10 people, so we're likely to employ (let's say) 10 people instead of 20 and do 100x the work the 20 people would have been able to do. It's quite possible that in our endeavor the AI is actually the difference between success and failure and when it's all said and done by 2050 we end up employing 50 people instead of zero.
How far that generalizes, I do not know. What I do know is that "capitalism" is often extraordinarily inefficient already. If AI ends up doing jobs that could have been replaced in whole or in part by automation a decade before anyone had ever heard of "ChatGPT" it will be because AI is the new and sexy thing, not because "capitalism" is insanely efficient and good at making decisions. It seems quite plausible to me that people will still be using their gut at my wife's place of employment at the same time that AI is giving input into high-level decisions in Silicon Valley boardrooms.
I definitely believe that AI and automation change the shape of industry over the next 50 years - and yes, the next 5. What I would not bet on (absent other factors, which are plenteous) is everyone waking up the same day and deciding to fire all their employees and replace them with AI, mass pandemonium in the streets. For one thing, the people who would make the decision to do that are the people least likely to be comfortable with using AI. Instead, they will ask the people most likely to be replaced by AI to study the question of whether or not to replace them with AI. How do you think that's going to go? There's also the "lobster dominance hierarchy" - people prefer to boss other people around rather than lord it over computers. Money and personnel are a measuring stick of importance and the managerial class won't give up on that easily.
Yeah there's not a world where you accidentally invite a journo to a chat and brag about your solid OPSEC and it's not embarrassing...except for the 16D chess situation. Which I uh wouldn't bet on.
I don't think the Signal Chat Debacle is Fine, Actually, even if it didn't involve classified intel (unless it was on purpose as a 16-dimensional chess move, in which case hilarious), but I am old enough to recall when Germany had a video call about selecting Russian targets for weapons systems (which also mentioned that the UK had troops in-country!) that was recorded and released by the Russians, and I don't recall any news stories about how NATO allies were nervous about trusting the Germans with classified intel afterward. Instead, the German defense minister announced that said NATO allies weren't annoyed and that he probably wouldn't be firing anyone.
My point isn't to downplay the seriousness of the incident so much as it is to suggest that perhaps other countries are selective about their criticism of intel goof-ups.
ETA: to be clear, it seems quite possible to me that regardless of what Congress was told, classified material was disclosed in the Signal Affair. My point is that even if it wasn't, it's worthy of criticism.
Yes - I appreciate the invocation of cybersecurity principles (which I know little about) here, but yeah I think that's right, and a real problem.
If there's an inhouse Signal equivalent would it be cleared for use on your garden-variety cell phone?
(Anyway yes I bet it sucks either way).
Wiping message history without record-keeping is a problem, because all text messages about official acts from federal agencies must be preserved. I guess politicians across the spectrum have decided this is not actually an important accountability feature in a democracy, nor are historical records important enough to bother with. Fair enough.
I have a bit of a rant about this, but the TL;DR:
- I think this is a very common problem and suspect "using Signal with messages set to delete" is fairly typical even at lower ranks, and
- Modern records are made at a MUCH faster rate than dated records-keeping laws anticipated. Arguably either all records-keeping laws or associated technology needs to be completely revamped to account for modern electronic messaging capability.
I've heard extremely hair-raising anecdotes set both inside high-level Pentagon circles and big military contractor circles where high-level political types probably weren't a problem (although political correctness might be). Think things along the lines of knowingly improper access controls on HUMINT or phone calls to foreign countries placed in secure areas.
Yeah that sounds about right, and I 100% think it nudges (in the mind of the practitioners) OPSEC out of the category of "important to prevent people from dying" into "more of this dumb bureaucratic paperwork stuff."
Which is really bad if it's actually important.
IDK I don't actually think a very large portion of the US security apparatus actually cares about or follows the ostensible security protocols, and SCIFs are only as good as said protocols.
I think it's good that it's out there. But I would like to see the intellectual center of gravity move towards(?) or remain(??) at places like Substack and, well, here.
Let me register my response to this: AAHHHHHHH
Let me register a more mature response. I think it is good that there is a move-fast-and-break-things outlet in the world. There's more than zero good in that.
But it is also good when people spend several months researching a high-confidence story or essay and write it carefully, thoughtfully, and deliberately. I think our society would be much better off as a whole if they were willing to wait on things before having an opinion.
I think people have come around to distrusting the centralized system because they recognize that a centralized system is a bottleneck of information that, if tainted, corrupts the entire information ecosystem. But what I think is overlooked is that propaganda has for decades been able to work by being fast. Think of the "Iraqi soldiers threw babies out of incubators" story. A lie that succeeded in part, I would say, due to corruption in the centralized nodes, but the mainstream media eventually did call BS on the story! The problem was that by then it was too late; the story had already succeeded.
And optimizing for speed over centrality doesn't shut propaganda out, but lets propaganda shift into moving quickly rather than corrupting a node. It's the classic "headline lies, correction on page 20" problem, just retooled for the information age.
How hard would it be to leak the relevant documents to press and leave out the parts you don't like?
I feel like dirty political hits are not exactly rocket science, here.
This by itself doesn't seem insane to me but the corollary to this theory seems to be that Team Obama/Biden are just insanely incompetent not to release it or are themselves implicated. If it's the latter, then Epstein's network was insanely effective and (based on US policy fluctuations over the past five terms) there's only one or maybe a few things they really care about (because US policy in many areas has widely varied in ways you would not expect if a Secret Group had turbo-blackmail over literally every single President).
And frankly I'm not sure Biden or Trump treated Israel with the deference one would expect if they knew Israeli intelligence services had turbo-blackmail on them. Definitely not the Obama administration, unless Israel is engaging in a lot of kayfabe over the Iran deal and such.
Yeah, great example. I would not be shocked if there was something similar here.
Acosta supposedly said that Epstein "belonged to intelligence" - why the assumption that it's Israeli intelligence?
I think the reason is because of some of Maxwell's connections, but it seems plausible that Epstein was a U.S. intelligence asset. Not mutually exclusive with working for Israeli intelligence!
If they actually did this it would be the start of a nuclear war which ends global civilisation. Why exactly would Russia just blow up a carrier group unprovoked? If I said that body armor doesn't protect against powerful firearms "Well if that was true why wouldn't you just go shoot an antimateriel rifle at the local SWAT team?" would not be a very compelling argument.
At least in this example you could shoot the antimateriel rifle at a bulletproof vest. I actually think this is a good analogy: the antimateriel rifle definitely wins the match-up, but claiming that a bulletproof vest provides zero protection is overstating it.
In this situation it is actually Vladimir Putin you're accusing of overstating the capabilities of hypersonic missiles. Whatever else you can say about the man, I believe he's quite knowledgeable about the capabilities of Russian weapon systems.
I'm quite confident that if I was chatting with Vladimir Putin in person he would agree with me that it is possible to intercept his hypersonic missiles in the boost phase (this is part of why Russia does not like our missile defense systems in Eastern Europe, I believe). He would then point out that as a practical matter that is very difficult to do. I think he would agree with me that while he is quite knowledgeable about Russian weapons systems, his knowledge on US weapons systems is necessarily somewhat limited (though perhaps still better than mine).
I don't see how the combination of hypersonics and throwing large numbers of cheap crap along as well could fail to defeat any modern missile defence system. Both of these are known weak points, and I don't quite understand how it'd be possible to overcome the two strategies in combination.
Yeah, if you look at my comment history you'll see me saying similar things. I think you're overindexing on the big picture (offensive weapons are hard to defeat with missile defense) and overlooking my extremely narrow technical argument.
Now, there is a solution to the large numbers of cheap crap: old-fashioned AAA, laser-guided 5-inch rockets, and lasers. All of these are very cheap. But the West doesn't field AAA in numbers, is just now getting the laser-guided rockets up and running, and is still fooling around with laser systems. (Also, both of the laser-involved systems don't work very well if, for instance, it's foggy outside, which sucks!)
The Russians, with their layered approach to integrated air defense, are arguably ahead of the West in defeating the "mixed" approach you're talking about here, but they still struggle against low-observable cruise missiles. (They really need more A-50 AWACS aircraft.)
The US has denied it, but the Houthis claimed that they managed to damage an aircraft carrier recently. The Houthis seem substantially more trustworthy than US officials to me, but I think we'll have to wait and see for more information on this one. The last time the Houthis claimed to have hit an aircraft carrier and the US denied it, the carrier then left the region. For the record, I doubt this was an actual direct strike - I think the damage in this instance would be caused by a delayed interception that led to some minor damage rather than a direct hit.
I'm like 50/50 on whether or not it would have leaked. I will believe it when there is good proof of it.
As for ISR assets I wasn't aware that Yemen had a space program.
The Houthis in fact reportedly used Russian satellite data in their attacks. They also reportedly got targeting data from Iran, IIRC.
It's also worth noting as a practical matter that there's a big difference (if you're a ship) between being deployed to an area like the Red Sea versus an area like "the middle of the Pacific" with considerably more room to maneuver.
So basically despite having satellite ISR data and an ideal situation in which to engage a carrier (I believe the entire battle group went into the Red Sea, correct me if I am wrong) they failed to sink a carrier or its escorts. In fact the most damage done to the CBG (so far) was due to friendly fire.
I don't think there's any real way to prevent a modern nation from shooting down satellites just yet, especially surveillance satellites directly above their heads.
There are a couple of ways to deal with this problem. One of them is by fielding lots of little cubesats so that you're putting more assets in orbit faster than your enemy can shoot them down. This might not work for all applications but it can for some, like communications. (For instance I doubt the US could destroy the Starlink constellation with its ASAT stockpile, it would need to use other methods). Another alternative is to use maneuvering space assets like the X-37 or high-altitude high-speed ISR assets like the totally-not-already-built-and-tested SR-72 and the very real Chinese WZ-8, which will be more difficult to shoot down.
I can't see any more likely motivation for the US to have left the area without achieving their goals. What other reason would they have to run away like that?
Off the top of my head, a very good explanation for US behavior is that they ran low on ammo.
Yes, it's an interesting theory. I guess my point is that due to information friction I think humans can carry out plans - perhaps ones that might not be as good as those of a theoretical superintelligence, but still plans that confound observers. I mean shoot there's still (good faith?) arguments about whether COVID-19 was a lab leak or not despite all the evidence there.
I apologize for the tangent, but if the scenario you describe came about (or even became plausible) it would be unfalsifiable, leading to a world where Superintelligence replaces the Illuminati as the hidden hand behind world events.
When we use them in practice we have to cut up the content that we feed them because we have much more content (gigabytes worth) than they can handle.
As I said, I think this is a solvable problem. But a lot of AI enthusiasts are, in my impression, just using them as personal assistants and not necessarily engaging with them in more strenuous real-world use cases.
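For anyone who hasn't hit this wall: the "cutting up" step looks roughly like the sketch below. The chunk size and overlap here are illustrative assumptions (real limits depend on the model's context window and are measured in tokens, not characters), and `chunk_text` is a name I made up, not a real library call.

```python
# Illustrative sketch of splitting oversized content into overlapping chunks
# small enough to feed a model one piece at a time. Sizes are assumptions.

def chunk_text(text: str, chunk_chars: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most chunk_chars, each overlapping the
    previous one by `overlap` characters so context isn't lost at the seams."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

doc = "x" * 10_000  # stand-in for a large document
pieces = chunk_text(doc)
print(len(pieces))  # -> 3 chunks to process sequentially
```

At gigabyte scale you also need something smarter than sequential chunking (indexing, retrieval, summarization passes), which is part of why "just paste it into the chat window" doesn't survive contact with real workloads.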
I for one like the Hlynkaposting.