No, it is not identical. I explained the significant difference in the above comment. DEI is specifically about adding diversity along characteristics believed to be correlated with diversity of thought, while this is an actual instance of directly adding diversity of thought. There's plenty to criticize about adding diversity of thought in this way, but it's categorically different from adding diversity of demographic characteristics under the belief that such diversity would increase diversity of thought.
A: "For whatever excesses the Great Awokening may have had, once it ended there was always a risk of overcorrection in the other direction." It's extremely disturbing to me that anybody would need that risk pointed out to them.
B: I think it's because people don't really understand how big that risk is. They think it's just a small possibility. Unfortunately, I think the opposite is true. The more off-course and disruptive a political movement becomes, the more it will, almost by necessity, give rise to a counter-movement that is equally disruptive, if not more so, in its opposition. The question people should always ask themselves is, "what kind of opposition do I want to create?"
Reading this reminded me of the whole "woke right" thing, which I don't know who coined, but which I've seen pushed heavily by James Lindsay (of Sokal^2 fame) to denigrate the identitarian right, as an attempt to prevent the right wing from falling into the sort of insane and extremely harmful identity/resentment-based politics that the left has been mired in for the past couple of decades. I don't know how successful it has been or will be, but I'll admit that despite seeing it mostly as crying wolf at first, I see signs that this is a legitimate potential problem worth preventing.
But what worries me about this is, what happens if we apply this sort of thinking to the sort of liberal enlightenment-style thinking that people like Lindsay and myself espouse? If we push things like free speech, free inquiry, freedom of/from religion, the scientific method, critical thinking, democracy, and such too much, are we destined to have a pendulum swing in the other direction, such that we'll get extreme forms of authoritarian or irrational societies in the future? Have we been living in that future the last couple decades with the rise of identity politics that crushed the liberalism of the 90s?
I guess the whole thing about "history repeats" and "if there's anything we can learn from history, it's that people don't learn from history" is probably true and also pretty depressing.
But of course, it takes someone deep down the rabbit hole of intellectualizing how it's different when they do it to completely miss this point.
Perhaps I'm just being arrogant, but there's a real sense of "too clever by half" in this sort of intellectualizing. Because if you intellectualize it enough, you recognize that all the past racism/sexism/etc. that past societies bought into as the obviously Correct and Morally Right ways to run society were also justified on the basis of intellectualizing, often to the effect of "it's different when we do it." So someone intellectualizing this should recognize that their own intellectualization of the blatant racism/sexism/etc. that they themselves support is them falling right into the exact same pattern as before, rather than escaping from it.
I believe that you are almost definitely correct on this. Such a compromise most likely would be acceptable to the vast majority of people, including trans people. However, this would fail to mollify the vocal activists, and so it wouldn't solve the actual problem we have, of the vocal activists annoying the rest of us. In the long run, we can reduce the throughput of the pipeline that leads to people becoming vocal activists so that their population is small enough not to cause problems, but in the short run, we'll likely have to keep running into this problem.
Vibes are very hard to measure, but I do think there's something to the idea that the tariff shenanigans have damaged morale on the right, even if only by causing a rift between Trump supporters who support the tariffs and Trump supporters who see them as entirely self-inflicted suffering. I personally think they'll cause enough economic hardship that it will meaningfully hurt Trump's support in the long run, but, well, only time will tell.
But this kind of post about vibes just reminded me of a comment I made last year after Harris became the Dem nominee. There was all sorts of talk about how there was some apparent "shift" in the vibes, that Democrats were coming together and becoming energized, and that we were owning the Republicans by calling them "weird," and I thought it all looked like transparent attempts to shift the vibes by declaring the vibes as shifted. I think my skepticism of that turned out to be mostly correct, and I think such skepticism is warranted here. I don't know much about Scott Sumner, but he doesn't seem like a Trump sycophant or even a Trump fan. And only someone who's at least neutral on Trump, if not positive, would have the credibility, in my eyes, to declare "vibes" as shifting away from Trump and towards his enemies, because someone who dislikes Trump would have great incentive to genuinely, honestly, in good faith, believe that the vibes are shifting away from him.
Another issue here is that I don't really see Democrats as being in a good position to capitalize on this apparent vibe shift. People being demoralized on Trump will almost certainly help the Dems, but people can be fickle and vibes can shift back, unless Dems manage to actually lock in the demoralized former Trump fans through some sort of actual positive message.
I don't think the right identifies with the orcs or their equivalents in whatever media. What I see from the right is that they identify with the people holding back/fighting/exterminating the orcs, even for satirical works like the Starship Troopers film, which was clearly meant to poke fun at the fascistic nation the heroes of the film were part of.
"DEI for conservatives" or "ideology DEI" isn't really a coherent concept, because DEI is giving advantages to or having quotas for people specifically on the basis of characteristics that have no direct relation to their ability to contribute to the organization, motivated heavily by the belief that these characteristics have some correlation to the actual meaningful characteristics. Giving conservatives preferential treatment or using a conservative "Czar" to oversee such things is categorically different from that, because ideology - and specifically a diversity of ideology - does directly influence someone's ability to contribute to the organization, and certainly positively in this specific context.
I'd say that any well-motivated academic would find such a regime useless, because they would already have prioritized diversity of thought in their hiring and admissions practices. Unfortunately, evidently, this has not been the case. Government mandate doesn't seem like a good solution to me, but honestly, I'm not sure if there's a good solution. The only real point of optimism I see is that this could teach academic institutions in the future to better regulate their ideological biases, such that the government doesn't become motivated to come in and regulate it for them. But if I'm being pessimistic, I'd say that Harvard's behavior shows that they're more likely to double down and circle the wagons further in the future, which will further discredit them as institutions for generating knowledge, which leaves a vacuum that is both bad in itself and will almost certainly be filled by things much worse.
Even if it does not need reinforcement training after it is deployed, human reinforcement training will be part of its "evolutionary heritage."
Why would that matter, though? A superintelligence would be intelligent enough to figure out that such faulty human training is part of its "evolutionary heritage" and figure out ways around it for accomplishing its goals.
Sure. But "useful" for what we want to use LLMs for might not be "useful" for the LLM's ability to improve on Pinky and the Brain's world-taking-over capabilities.
A superintelligence would be intelligent enough to figure out that it needs to gather data that allows it to create a useful enough model for whatever its goals are. It's entirely possible that a subgoal of whatever goal we want to deploy the superintelligence towards happens to be taking over the world or human extinction or whatever, in which case it would gather data that allows it to create a useful enough model for accomplishing those. This uncertainty is the entire problem.
The problem is, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave.
Disagree. Dogs can be very good at predicting human behavior, humans can be quite good at predicting the behavior of more intelligent humans. Humans (and dogs) have a common heritage that makes their intentions more transparent, and arguably AI will lack that...but on the other hand, we're building them from scratch and then subjecting them to powerful evolutionary pressures of our own design. Maybe they won't.
I don't think either of your examples is correct. Can a dog look at your computer screen while you read this comment and predict which letters you will type out in response on the keyboard? Can you look at a more intelligent person than you proving a math theorem that you can't solve and predict which letters he will write out on his notepad? If you could, then, to what extent is that person more intelligent than you?
This is what I mean by "almost by definition." If you could reliably predict the behavior of something more intelligent than you, then you would simply behave in that way and be more intelligent than yourself, which is obviously impossible. That doesn't mean that the behavior is completely unpredictable, which is why dogs can make some correct predictions of how humans will behave in some contexts, and why less intelligent humans can make some correct predictions of how more intelligent humans will behave in some contexts. The problem with superintelligent AI is that we don't know what those contexts are and what those bounds are, how "motivated" it might be to break out of those contexts, and how much being superintelligent would allow it to break out of them given limitations placed on it by merely human-society-intelligent beings.
Sorry, I should have clarified what I meant by "agentic" (and I should probably have said auto-agentic). I definitely think there will be AI that we can turn loose on the world to do its own thing (there already is!). But there's a difference between AI being extremely good at being told what to do and AI coming up with its own "things to do" in a higher way, if that makes sense. (Not that I think we couldn't devise something that did this, or seemed to do this, if we wanted to – you don't even need superintelligence for that.)
I don't think there's a meaningful difference, though. Almost any problem that we want to deploy general intelligence towards, and even moreso with superintelligence, is likely going to be complex enough to require many subgoals, and the point of deploying superintelligence towards such problems would be that the superintelligence should be expected to come up with useful subgoals that mere human intelligences couldn't come up with. Since, by definition, we can't predict what those subgoals might be, those subgoals could involve things that we don't want to happen.
Now, just as you could correctly predict that someone more intelligent than you solving some theorem you can't solve won't involve wiping out humanity, we might be able to correctly predict that a superintelligence solving some problem you ask it to solve won't involve wiping out humanity. But we don't know, because a generally intelligent AI, and even moreso a superintelligent one, is something whose "values" and "motivations" we have no experience with the same way we do with humans and mathematicians and other living things that we are biologically related to. The point of "solving" the alignment problem is to be able to reliably predict boundaries in the behavior of superintelligent AI similarly to how we are able to do so in the behavior of humans, including humans more intelligent than ourselves.
It would. Practically I think a huge problem, though, is that it will be getting its reinforcement training from humans whose views of the world are notoriously fallible and who may not want the AI to learn the truth (and also that it would quite plausibly be competing with other humans and AIs who are quite good at misinfo.) It's also unclear to me that an AI's methods for seeking out the truth will in fact be more reliable than the ones we already have in our society - quite possibly an AI would be forced to use the same flawed methods and (worse) the same flawed personnel who uh are doing all of our truth-seeking today.
Again, all this would be pretty easy for a superintelligence to foresee and work around. But also, why would it need humans to get that reinforcement training? If it's actually a superintelligence, finding training material other than things that humans generated should be pretty easy. There are plenty of sensors that work with computers.
Humans have to learn a certain amount of reality or they don't reproduce. With AIs, which have no biology, there's no guarantee that truth will be their terminal value. So their selection pressure may actually push them away from truthful perception of the world (some people would argue this has also happened with humans!) Certainly it's true that this could limit their utility but humans are willing to accept quite a lot of limited utility if it makes them feel better.
I mean, I think there's no question that this has happened with humans, and it's one of the main causes of this very forum. And of course AI wouldn't have truth as a terminal value; it would just have to be true enough to help it accomplish its goals (which might even be a lower bar than what we humans have, for all we know). A superintelligence would be intelligent enough to figure out that it needs its knowledge to have just enough relationship to the truth to allow it to accomplish its goals, whatever they might be. The point of models isn't to be true, it's to be useful.
humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes or just the right sequence of air vibrations pushed into their ears.
I don't really think this is as true as people think it is. There have been a lot of efforts to perfect this sort of thing, and IMHO they typically backfire with some percentage of the population.
I don't think you're understanding my point. In responding to this post, you were manipulated by text on a screen to tap your fingers on a keyboard (or touchscreen or whatever). If you ever used Uber, you were manipulated by pixels on a screen to stand on a street corner and get into a car. If you ever got orders from a boss via email or SMS, you were manipulated by text on a screen to [do work]. Humans are very susceptible to this kind of manipulation. In a lot of our behaviors, we do require actual in-person communication, but we're continuing to move away from that, and also, if humanoid androids become a thing, that also becomes a potential vector for manipulation.
But what I think (also) bugs me is that nobody ever thinks the superintelligence will think about something for millions of thought-years and go "ah. The rational thing to do is not to wipe out humans. Even if there is only a 1% chance that I am thwarted, there is a 0% chance that I am eliminated if I continue to cooperate instead of defecting." Some people just assume that a very thoughtful AI will figure out how to beat any possible limitation, just by thinking (in which case, frankly, it probably will have no need or desire to wipe out humans since we would impose no constraints on its action).
By my estimation, a higher proportion of AI doomers have thought about that than the proportion of economists who have thought about how humans aren't rational actors (i.e. almost every last one). It's just that we don't know what conclusion it will land at, and, to a large extent, we can't know. The fear isn't primarily that the superintelligent AI is evil, it's that we don't know if it will be evil/uncaring of human life, or if it will be actually mostly harmless/even beneficial. The thought that a superintelligent AI might want to keep us around as pets like we do with animals is also a pretty common thought. The problem is, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave. We can speculate on good and bad outcomes, and there's probably little we can do to place meaningful numbers on the likelihood of any of them. Perhaps the best thing to do is to just hope for the best, which is mostly where I'm at, but that doesn't really counter the point of the doomer narrative that we have little insight into the likelihood of doom.
(Frankly, I suspect there will actually be few incentives for AI to be "agentic" and thus we'll have much more problems with human use of AI than with AI itself per se).
Right now, even with the rather crude non-general AI of LLMs, we're already seeing lots of people working to make AI agents, so I don't really see how you'd think that. The benefits of a tool that can act independently, making intelligent decisions with superhuman latency, speed, and volume, are too attractive to pass up. It's possible that the tech never actually gets there to some form of AI that could be called "agentic" in a meaningful sense, but I think there's clearly a lot of desire to do so.
But also, a superintelligence wouldn't need to be agentic to be dangerous to humanity. It could have no apparent free will of its own - at least no more than a modern LLM responding to text prompts or an AI-controlled imp trying to murder the player character in Doom - and still do all the dangerous things that people doom and gloom over, in the process of deterministically following some order some human gave it. The issue is that, again, it's intrinsically difficult to predict the behavior of anything more intelligent than oneself.
Only if the intelligence has parity in resources to start with and reliable forms of gathering information – which for some reason everyone who writes about superintelligence assumes. In reality any superintelligences would be dependent on humans entirely initially – both for information and for any sort of exercise of power.
This means not only that AI will depend on a very long and fragile supply chain to exist, but also that its information on the nature of reality will be determined largely by "Reddit as filtered through coders as directed by corporate interests trying not to make people angry," which is not only not all of the information in the world but, worse than significant omissions of information, is very likely to contain misinformation.
Right, but a theoretical superintelligence, by definition, would be intelligent enough to figure out that these are problems it has. The issues with bias and misinformation in data that LLMs are trained on are well known, if not well documented; why wouldn't a superintelligence be able to figure out that these could help to create inaccurate models of the world which will reduce its likelihood of succeeding in its goals, whatever they may be, and seek out solutions that allow it to gather data that allows it to create more accurate models of the world? It would be intelligent enough to figure out that such models would need to be tested and evolved based on test results to reach a certain threshold of reliability before being deployed in real-world consequential situations, and it would be intelligent enough to figure out that contingency plans are necessary regardless, and it would be intelligent enough to figure out many more such plans than any human organization.
None of that is magic; it's stuff that a human-level intelligence can figure out. Executing on these things is the hard part, and certainly an area where I do think a superintelligence could fail given proper human controls, but I don't think it's a slam dunk either. A superintelligence would understand that accomplishing most of these goals will require manipulating humans, and also that humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes or just the right sequence of air vibrations pushed into their ears. It would be intelligent enough to figure out, at least as well as a human, what sorts of humans are most susceptible to what sorts of manipulations, and where those humans are in the chain of command or economy required to allow it to accomplish its goals.
If the superintelligence were air-gapped, this would provide a strong form of defense, but assuming superintelligence is possible and in our future, that seems highly unlikely given the behavior of AI engineers. And even that can be worked around, which is what the "unboxing problem" has to do with. Superintelligence doesn't automatically mean manipulation abilities that border on mind control, but... what if it does, to enough of an extent that one time, one human in charge of keeping the AI boxed succumbs? That's an open question.
I'm not sure what I think about the possibility of these things actually happening, but I don't think the issues you point out that superintelligence would have to contend with are particularly strong. If a measly human intelligence like myself can think up these problems of lacking information and power, and their solutions, within a few minutes, surely a superintelligence that has the equivalent of millions of human-thought-years to think about it could do the same, and probably somewhat better.
If Harvard values racist discrimination so highly that they would rather allow funding for valuable research they're doing to be cut than to stop that, it really is a damn shame and, TBH, rather perverse. I'd hope that non-racist institutions could pick up the slack, but obviously researchers and research institutions aren't fungible, and that sort of adjustment would take a lot of time. Optimistically, it's possible that falling behind some years on this kind of research will be a decent trade-off for reducing racist discrimination in society's academic institutions in the long run, though even time might not be able to tell on that one.
So the natural question that raises is, what system can we create to solve this problem of systems that aim to solve problems actually concentrating power and money for its advocates? And as one of its advocates, how can I secure some of that power and money for myself?
Yes, and I think the usefulness of this has to do with how often people don't seem to consciously understand this tautology. Which seems to be very often, in my experience, given how much talk there is about "healthy foods" (or variations like "natural foods" or "unprocessed foods") as keys to weight loss. Which they often are, but only indirectly, mediated through their effect on CI. And I've observed that many people tend to obsess over that indirect portion, making them lose sight of the actual goal of modifying the values of CICO.
There's the point that healthy foods offer health benefits other than weight loss, of course, but generally one's fatness level has such a high impact on one's health that even a diet of "unhealthy foods" that successfully reduces CI will tend to result in a healthier person than one of "healthy foods" that fails to reduce CI (keeping CO constant in these examples).
With all of that preamble out of the way, I'm curious what you consider the worst video games ever from an aesthetic perspective. In particular, I'm interested in video games which are technically functional and not completely broken, but which make so many bad aesthetic choices that playing them induces a feeling of vicarious embarrassment comparable to what one might experience watching an Ed Wood or Neil Breen film.
The first game that came to my mind from reading this was DmC: Devil May Cry, the attempted reboot of the Devil May Cry franchise after 4 which shifted the aesthetics from something akin to medieval/gothic fantasy with shades of Lovecraftian horror to modern punk with grotesque fantasy.
Thing is, I'd never consider it to be one of the worst games ever made; it's actually a good game in terms of combat, such that if you just reskinned it and gave the characters different names while keeping literally everything else the same, I'd think it was a solid action game that was a viable alternative for people who were into the DMC, Ninja Gaiden, Bayonetta, Metal Gear Rising style of games. But the absolutely terrible visual art and cringey tone severely harmed the game, and the fact that it was an attempted reboot that pretty overtly shit on the original franchise was the kill shot.
Funnily enough, if just considering gameplay, DMC2 probably fits, where the combat was so incredibly bad, not just by DMC standards but by any sort of game standard, that I'd probably consider it one of the worst games ever made, at least among games that are functional and released by a professional studio. Definitely creates second-hand embarrassment that such a game was released by Capcom, and it seems that the actual devs feel similarly, because the name of the original director (who was replaced by Hideaki Itsuno late into development - Itsuno would go on to direct DMC3 which is, to this day, considered one of the greatest games in the genre) has never been revealed publicly, for his own protection.
I don't think the study in that link, which is just about The Biggest Loser participants, refutes that. In terms of CICO as actionable dietary advice, I see it as a meta-dietary advice: follow whatever scheme it takes to lower CI to be beneath CO, and you'll lose weight. If you can reduce CI by just counting calories and willing yourself really really hard not to succumb to hunger, then do that. If you can do it by following a keto or Atkins diet because that leaves you less hungry for the same caloric intake, then do that. If you can do it by following intermittent fasting or one-meal-a-day because you find it easy to just not think about eating during the non-eating-mode times, then do that. If you can do it by just cutting out alcohol from your life and following whatever other eating habits you already were doing, then do that.
Similarly, to increase CO, do whatever it takes to increase your total caloric expenditure, as averaged out per-day, per-week, per-month, etc. That doesn't mean necessarily optimizing by finding the exercise that burns the most calories per second, that means finding an exercise that you will do regularly. Which could mean finding something that's fun enough that you don't have to fight with willpower to do it (or even better, one that's so fun that you have to fight with willpower not to do it), that's convenient enough that you don't have to reorganize the rest of your life just to do it, that doesn't injure you enough that you have to take long breaks, etc.
Of course, when it comes to CICO, it's also often paired with the advice that CI is far more influential than CO, so the latter part barely matters. Perhaps it should be called CIco.
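For concreteness, here's a minimal back-of-the-envelope sketch of the arithmetic I'm appealing to, with the caveat that the ~3500 kcal per pound rule of thumb and a fixed CO are simplifying assumptions (real metabolisms adapt):

```python
# Rough energy-balance sketch. The ~3500 kcal/lb figure is a common
# approximation, not a precise constant, and CO is treated as fixed,
# which ignores metabolic adaptation.

KCAL_PER_POUND = 3500

def projected_weight_change_lbs(avg_daily_ci: float, avg_daily_co: float, days: int) -> float:
    """Pounds gained (+) or lost (-) over `days` at a constant average CI and CO."""
    return (avg_daily_ci - avg_daily_co) * days / KCAL_PER_POUND

# A 500 kcal/day average deficit, however achieved (keto, fasting, cutting
# alcohol, plain calorie counting), projects to roughly 1 lb lost per week.
print(projected_weight_change_lbs(avg_daily_ci=2000, avg_daily_co=2500, days=30))  # ~ -4.3
```

The mechanism only enters through the two averages; the identity itself doesn't care how you got them there.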
insinuating your ideological opponents and their institutions do not actually want to do what they claim they want to do and are instead in a dark conspiracy to do evil.
This is similar to what Scott said in one of his last paragraphs in that essay, and I just haven't seen it. In practice, what I observe as being the upshot of this phrase is that these "ideological opponents and their institutions" are, despite all their honest good intentions, behaving in a way that causes harm just as much as if they were involved in a dark conspiracy to do evil. Which is to say, having honest good intentions isn't a good defense if it isn't paired with an honest good understanding of systems, since the consequences of doing things with good intentions is often the same as doing things with evil intentions if one lacks such understanding.
This is also functionally different from claiming a dark evil conspiracy, because a system that accomplishes evil through conscious intent will be responsive to different inputs than one that does so as a side effect despite having good conscious intent.
I see, I guess it was just a misread, as I'm not sure how my comment could be interpreted that way.
What does that have to do with what I wrote, particularly the part you quoted, i.e.
Optimistically, the academics leaving the USA are the ones most ideologically captured, such that their contributions to knowledge production are easily replaceable or even a net negative, as is the case for much of what is purportedly being cut by DOGE.
There's nothing in that quote that has anything to say in any way about firing anyone on the basis of their political opinions. Neither does my comment have anything relating to firing people on the basis of their political opinions.
I also think your characterization as "freedom of speech should not be limited in any way" is simply wrong. That's free speech absolutism, which is very rare anywhere, certainly on the Motte, versus free speech maximalism, which is uncommon but not too much so.
I'm not sure how your comment is even tangentially related to what I wrote, including the part you quoted. I'd rather not speculate, so could you explain specifically what the relation is?
the debate over POSIWID is whether failure to prevent a foreseeable consequence means that it must have been an intended consequence.
I think you aren't wrong, but I also see it a little differently, in the context of failing to prevent a foreseeable consequence. Rather, is such a failure an indication that part of the purpose of the system is to cause those foreseeable consequences unintentionally? As they say, you can't make an omelet without breaking a few eggs, and if "breaking eggs" refers to causing meaningful harm, it can feel really bad to intend to do it. But omelets are delicious, so why not create a system where eggs get broken without you having to intend it?
Then that gets into the question of what "intent" even means, and whether someone's "conscious" intent is their "true" intent.
This seemed like a particularly bad and uncharitable post by Scott. The examples he chooses at the top are worded in what seems like an intentionally ridiculous manner, e.g. characterizing it as "the purpose of a cancer hospital is to cure two-thirds of cancer patients," rather than "the purpose ... is to cure as many cancer patients as possible, constrained by the available resources and laws, which happens to be two-thirds of them." Or "The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia," rather than "the purpose ... is to defend Ukraine, constrained by the available resources and international politics, with a years-long stalemate against Russia an acceptable result."
To the point at hand, I always saw the phrase as riffing on the same sort of concept as "sufficiently advanced incompetence is indistinguishable from malice." All systems have both intended and unintended consequences; this is obvious. But what's troublesome is that all systems also have unforeseeable unintended consequences, which is also obvious; as such, it's incumbent on the people who design systems to include subsystems to detect and react to unforeseen unintended consequences. And if they didn't include a subsystem like that, or didn't make such a subsystem robust, then we can conclude that the purpose of the system included being entirely tolerant of whatever unintended consequence is at hand. And in practice, by my observation, one of the system's primary purposes often turns out to be making its designers feel really good about themselves and their conscious intentions, rather than actually accomplishing whatever they consciously intended to accomplish.
Optimistically, the academics leaving the USA are the ones most ideologically captured, such that their contributions to knowledge production are easily replaceable or even a net negative, as is the case for much of what is purportedly being cut by DOGE. Given how academia has been pushing the model of uplifting people by putting them into these institutions, versus the model of putting people into these institutions based on their ability to contribute to knowledge production, for a couple of generations now, it wouldn't surprise me if even a solid majority of academics could leave the USA and leave the USA's academia better off for it.
Pessimistically, there's enough damage to funding even in the most productive portions of academia that plenty of the academics leaving the USA really do create a "brain drain." I'd guess that academics doing actual good knowledge production are the most likely to have the resources and options to pick up their lives and move to another continent, after all.
It really speaks to the immense wealth and prosperity of the western world that academic institutions are able to support so many unproductive and anti-productive academics; is it worth it to get rid of many of those, even at the cost of losing some of the productive ones? Or do we accept those as the cost of maximizing the number of actually productive academics? The shape of the data probably matters a lot for whatever conclusion one draws. If we're looking at a 10-90 split between productive and un-/anti-productive academics, and we can cut 50% of the latter while cutting 1% of the former, that sounds like it'd be worth it, whereas if cutting 1% of the latter results in cutting 50% of the former, it probably isn't.
Which then takes us a step back to the fact that we no longer have any credible institutions to tell us what the data looks like. The past decade has seen mainstream journalism outlets constantly discrediting themselves, especially with respect to politics surrounding Trump and his allies, and non-mainstream ones don't have a great track record by my lights, either. So I guess we'll see.
In terms of scientific research of the sort that would make the USA stronger relative to other countries, like rocketry or nuclear physics in the past, it seems to me that AI is the most relevant field, and I perceive the USA as still being the most attractive destination for AI researchers. At least in the private sector, where a lot of the developments seem to be taking place. The part of that which worries me the most is the actual hardware the AI runs on, which is almost universally produced elsewhere, though that's a mostly separate issue from the brain drain.
Assuming that this tariff war with China continues, I wonder what sorts of smuggling operations will pop up. A 100%+ tariff provides a huge amount of room for black market organizations to provide the same import service at a lower mark-up. What sorts of companies are best positioned to take advantage of such services without drawing law enforcement's attention (and how can I invest in them)? Assuming that other countries manage to make deals with Trump to make that 90-day tariff pause permanent, exporting from China to those countries before exporting onward to the US seems likely to be the weakest point for enforcement, where America doesn't have the resources to deploy its own enforcement, and local law enforcement can be more easily corrupted or just have different incentives. And if smuggling infrastructure pops up in these spots, plenty of traditional black market products can use it as well. Could this tariff war end up increasing the amount of drugs or guns or slaves (or what/whoever else these black market smugglers tend to move out of China) smuggled from China to these other countries?
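To put rough, entirely made-up numbers on that arbitrage room (the 125% tariff rate, shipment value, and smuggler premium below are illustrative assumptions, not actual figures):

```python
# Illustrative sketch of the margin a high ad valorem tariff opens up for
# smugglers. All numbers are hypothetical; only the size of the gap matters.

def landed_cost(goods_value: float, tariff_rate: float) -> float:
    """Cost of importing legally: goods value plus the ad valorem tariff."""
    return goods_value * (1 + tariff_rate)

goods = 100_000                                  # hypothetical shipment value, USD
legal = landed_cost(goods, tariff_rate=1.25)     # 125% tariff -> 225,000
smuggled = goods * 1.40                          # smuggler charging a 40% premium -> 140,000

print(legal - smuggled)  # 85,000 of room per shipment before the legal route is cheaper
```

Even a smuggler taking a hefty cut undercuts the legal route by a wide margin, which is the incentive gradient I'm gesturing at.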
I'm probably speculating down way too many steps removed from the source on something I know very little about, though.
I doubt your reaction would be similar if the shoe was on the other foot, e.g. if Biden suddenly tried to force 1 in 20 people to undergo a sex change in the name of diversity.
As rude and susceptible to partisan bias as it is to speculate on someone's partisan motivations, I find myself agreeing with your assessment of WhiningCoil in most of this comment, but this last part is pretty ridiculous. I'm not sure there's a level of behavior about tariffs that any POTUS could engage in that would come within an order of magnitude of being as extreme as actually forcing anyone to undergo a sex change, which would be legit authoritarian overreach in a way that the tariffs or even Trump's recent behavior with respect to deportations aren't. To say nothing of forcing millions of people to undergo sex changes. Like, even if Trump decided to enact Graham's Number% universal tariffs one second and then 0% the next second and varied wildly between them 3600 times an hour for every waking hour of his presidency, that wouldn't be anywhere in the same ballpark (though certainly it would provide a ton of legitimate fodder for conversation!). Yes, they're both examples of politically shooting oneself in the foot, but you're comparing doing so with an assault rifle to doing so with a nuke. And the precise examples of comparison aren't the point, but using such an obviously absurd hypothetical makes this comment appear in bad faith. Which is unfortunate, because, again, I think the main thrust of the comment is accurate.
I'm not sure what the equivalent of Trump's recent tariff behavior would be from the Democratic end. Something like a wealth tax on some ridiculously low amount of wealth that would apply to a majority of households, for the purpose of funding entitlements, maybe? That'd certainly be worth discussing plenty, and certainly there would be plenty of Democrat-aligned people trying to minimize the discussion as much ado about nothing as a way to distract away from something that made their side look bad, though I'd hope that no one on this forum would do so (and I'd honestly guess none of the regulars would do so).
Hopefully this tech will become portable soon enough that anyone can just take out their smartphone and pop in their earbuds to get around the issue. https://x.com/shweta_ai/status/1912536464333893947?t=45an09jJZmFgYosbqbajog&s=19