Somehow "J.D. Vance" seems like a particularly memeable option, but I'm not sure offhand how to align that as a punchline to a joke.
I'm not sure how either, but given how it's become a common-wisdom meme that JD Vance murdered Pope Francis by the power of cringe from meeting him, there's gotta be something in there about Catholic succession rules being the same as the Sith's.
Season 1 is definitely worth watching and is highly regarded for good reason, IMHO. In anime, it's rare to see detective-thriller action executed as well as the climax in the final few episodes of the season.
Season 2 is widely regarded as complete garbage, also for good reason, and can just be ignored. It's been too long since I watched it for me to remember exactly, but a major part of it was the introduction of a new character who was both unlikable and uninteresting.
Now, maybe computers will be able to overcome those problems with simple coding. But maybe they won't.
Right, we don't know if a superintelligence would be capable of doing that. That's the problem.
Sure. But it's much better (and less uncertain) to be dealing with something whose goals you control than something whose goals you do not.
Right, but we don't know how much better and how much less uncertain, and whether those will be within reasonable bounds, such as "not killing everyone." That's the problem.
But on the flip side, a cat can predict that a human will wake up when given the right stimulus, a dog can track a human for miles, sometimes despite whatever obstacles the human might attempt to put in its way. Being able to correctly predict what a more intelligent being would do is quite possible.
I didn't intend to imply that a less intelligent being could never predict the behavior of a more intelligent being in any context, and if my words came off that way, I apologize for my poor writing.
This is what I mean by "almost by definition." If you could reliably predict the behavior of something more intelligent than you, then you would simply behave in that way and be more intelligent than yourself, which is obviously impossible.
I don't think this is true, on a couple of points. Look, people constantly do things they know are stupid. So it's quite possible to know what a smarter person would do and not do it.
I don't think this is true. I think people might know what a more mature or wise or virtuous person would do and not do it, but I don't think they actually have insight into what a more intelligent person would do, particularly in the context of greater intelligence leading to better decision making.
But secondly, part of education is being able to learn and imitate (which is, essentially, prediction) what wiser people do, and this does make you more intelligent.
I think that's more expertise than intelligence. Not always easy to disentangle, though. In the context of superintelligence, this just isn't relevant, because the entire point of creating a superintelligent AI is that it's able to apply intelligence in a way that is otherwise impossible, which is going to involve complex decision making or analyzing complex situations to come to conclusions that humans couldn't reach by themselves. If we had the capacity to independently predict the decisions a superintelligent AI would make, we wouldn't be using the superintelligent AI in the first place.
But one of the things we do to keep human behavior predictable is retain the ability to deploy coercive means. I suppose in one sense I am suggesting that we think of alignment more broadly. I think that taking relatively straightforward steps to increase the amount of uncertainty an EVIL AI would experience might be tremendously helpful in alignment.
Right, and the problem here is that these steps don't seem very straightforward, for a couple of reasons. One is that humans don't seem to want to coordinate to increase the amount of uncertainty any AI would experience. Two is that, even if we did, a superintelligent AI would be intelligent enough to figure out that its certainty is being hampered by humans and work around it. Perhaps our defenses against this superintelligent AI working around these barriers would be sufficient, perhaps not. It's intrinsically hard to predict when going up against something much more intelligent than you. And that's the problem.
The defense of forcing background diversity is that it directly influences someone's ability to contribute to the organization.
This isn't really a good characterization of DEI policies. You'd have to replace "background" with something like "superficial" or "demographic." But, in any case, the argument still works when considering "background," as below.
"you need more [women/blacks/etc] because it will add perspectives you haven't considered"
These are what I'd consider strawman/weakman versions of DEI, not the actual defensible portion of DEI. Even DEI proponents don't tend to say that the mere shade of someone's skin is, in itself, something that makes their contribution to the organization better. The argument is that the shade of their skin has affected their life experiences (perhaps you could call this their background - but, again, DEI isn't based on those life experiences, it's based on the superficial characteristics) in such a way as to inevitably influence the way they think, and that this added diversity in the way people think is how they contribute better to the organization. This argument involves significant leaps of faith that make it fall apart on close inspection, but it's still quite different from saying that someone's skin color directly produces diversity of thought, which is a leap very few people would be willing to make.
Whereas with targeting ideological diversity, someone who has a different ideology, by definition, adds a different perspective. That is a direct targeting of the actual thing that people are considering as being helpful to the organization, i.e. diversity of thought.
So again, no, the very concept of "DEI for conservatives," at least in the context of diversity of thought, is just incoherent. If people were calling for putting conservative quotas in the NBA or something, that might work as a comparison.
Hopefully this tech will become portable soon enough that anyone can just take out their smartphones and pop in their earbuds to get around the issue. https://x.com/shweta_ai/status/1912536464333893947?t=45an09jJZmFgYosbqbajog&s=19
No, it is not identical. I explained the significant difference in the above comment. DEI is specifically about adding diversity of things believed to be correlated with diversity of thought while this is an actual instance of directly adding diversity of thought. There's plenty to criticize about adding diversity of thought in this way, but it's categorically different from adding diversity of demographic characteristics under the belief that adding such diversity would increase diversity of thought.
A: "For whatever excesses the Great Awokening may have had, once it ended there was always a risk of overcorrection in the other direction." It's extremely disturbing to me that anybody would need that risk pointed out to them.
B: I think it's because people don't really understand how big that risk is. They think it's just a small possibility. Unfortunately I think the opposite is true. The more off-course and disruptive a political movement becomes, the more it will almost by necessity give rise to a counter-movement that is equally if not more disruptive in opposition. The question people should always ask themselves is, "what kind of opposition do I want to create?"
Reading this reminded me of the whole "woke right" thing, which I don't know who coined, but which I've seen pushed heavily by James Lindsay (of Sokal^2 fame) to denigrate the identitarian right, in an attempt to prevent the right wing from falling into the same sort of insane and extremely harmful identity/resentment-based politics that the left has been mired in for the past couple decades. I don't know how successful it has been or will be, but I'll admit that despite seeing it mostly as crying wolf at first, I see signs that this is a legit potential problem worth preventing.
But what worries me about this is, what happens if we apply this sort of thinking to the sort of liberal enlightenment-style thinking that people like Lindsay and myself espouse? If we push things like free speech, free inquiry, freedom of/from religion, the scientific method, critical thinking, democracy, and such too much, are we destined to have a pendulum swing in the other direction, such that we'll get extreme forms of authoritarian or irrational societies in the future? Have we been living in that future the last couple decades with the rise of identity politics that crushed the liberalism of the 90s?
I guess the whole thing about "history repeats" and "if there's anything we can learn from history, it's that people don't learn from history" is probably true and also pretty depressing.
But of course, it takes someone deep down the rabbit hole of intellectualizing how it's different when they do it to completely miss this point.
Perhaps I'm just being arrogant, but there's a real sense of "too clever by half" in this sort of intellectualizing. Because if you intellectualize it enough, you recognize that all the past racism/sexism/etc. that past societies bought into as the obviously Correct and Morally Right ways to run society were also justified on the basis of intellectualizing, often to the effect that "it's different when we do it." So someone intellectualizing this should recognize that their own intellectualization of the blatant racism/sexism/etc. that they themselves support is them falling right into the exact same pattern as before, rather than escaping from it.
I believe that you are almost definitely correct on this. Such a compromise most likely would be acceptable to the vast majority of people, including trans people. However, this would fail to mollify the vocal activists, and so it wouldn't solve the actual problem we have, of the vocal activists annoying the rest of us. In the long run, we can reduce the throughput of the pipeline that leads to people becoming vocal activists so that their population is small enough not to cause problems, but in the short run, we'll likely have to keep running into this problem.
Vibes are very hard to measure, but I do think there's something to the idea that the tariff shenanigans have damaged morale on the right, even if only by causing a rift between Trump supporters who support the tariffs and Trump supporters who see them as entirely self-inflicted suffering. I personally think they'll cause enough economic hardship to meaningfully hurt Trump's support in the long run, but, well, only time will tell.
But this kind of post about vibes just reminded me of a comment I made last year after Harris became the Dem nominee. There was all sorts of talk about how there was some apparent "shift" in the vibes, that Democrats were coming together and becoming energized, and that we were owning the Republicans by calling them "weird," and I thought it all looked like transparent attempts to shift the vibes by declaring the vibes as shifted. I think my skepticism of that turned out to be mostly correct, and I think such skepticism is warranted here. I don't know much about Scott Sumner, but he doesn't seem like a Trump sycophant or even a Trump fan. And only someone who's at least neutral on Trump, if not positive, would have the credibility, in my eyes, to declare "vibes" as shifting away from Trump and towards his enemies, because someone who dislikes Trump would have great incentive to genuinely, honestly, in good faith, believe that the vibes are shifting away from him.
Another issue here is that I don't really see Democrats as being in a good position to capitalize on this apparent vibe shift. People being demoralized on Trump will almost certainly help the Dems, but people can be fickle and vibes can shift back, unless Dems manage to actually lock in the demoralized former Trump fans through some sort of actual positive message.
I don't think the right identifies with the orcs or their equivalents in whatever media. What I see from the right is that they identify with the people holding back/fighting/exterminating the orcs, even for satirical works like the Starship Troopers film, which was clearly meant to poke fun at the fascistic nation the heroes of the film were part of.
"DEI for conservatives" or "ideology DEI" isn't really a coherent concept, because DEI is giving advantages to or having quotas for people specifically on the basis of characteristics that have no direct relation to their ability to contribute to the organization, motivated heavily by the belief that these characteristics have some correlation to the actual meaningful characteristics. Giving conservatives preferential treatment or using a conservative "Czar" to oversee such things is categorically different from that, because ideology - and specifically a diversity of ideology - does directly influence someone's ability to contribute to the organization, and certainly positively in this specific context.
I'd say that any well-motivated academic institution would find such a regime to be useless, because they would already prioritize diversity of thought in their hiring and admissions practices. Unfortunately, evidently, this has not been the case. Government mandate doesn't seem like a good solution to me, but honestly, I'm not sure if there's a good solution. The only real point of optimism I see is that this could teach academic institutions in the future to better regulate their ideological biases, such that the government doesn't become motivated to come in and regulate it for them. But if I'm being pessimistic, I'd say that Harvard's behavior shows that they're more likely to double down and circle the wagons further in the future, which will further discredit them as institutions for generating knowledge, which leaves a vacuum that is both bad in itself and will almost definitely be filled by things much worse.
Even if it does not need reinforcement training after it is deployed, human reinforcement training will be part of its "evolutionary heritage."
Why would that matter, though? A superintelligence would be intelligent enough to figure out that such faulty human training is part of its "evolutionary heritage" and figure out ways around it for accomplishing its goals.
Sure. But "useful" for what we want to use LLMs for might not be "useful" for the LLM's ability to improve on Pinky and the Brain's world-taking-over capabilities.
A superintelligence would be intelligent enough to figure out that it needs to gather data that allows it to create a useful enough model for whatever its goals are. It's entirely possible that a subgoal of whatever goal we want to deploy the superintelligence towards happens to be taking over the world or human extinction or whatever, in which case it would gather data that allows it to create a useful enough model for accomplishing those. This uncertainty is the entire problem.
The problem is, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave.
Disagree. Dogs can be very good at predicting human behavior, humans can be quite good at predicting the behavior of more intelligent humans. Humans (and dogs) have a common heritage that makes their intentions more transparent, and arguably AI will lack that...but on the other hand, we're building them from scratch and then subjecting them to powerful evolutionary pressures of our own design. Maybe they won't.
I don't think either of your examples is correct. Can a dog look at your computer screen while you read this comment and predict which letters you will type out in response on the keyboard? Can you look at a more intelligent person than you proving a math theorem that you can't solve and predict which letters he will write out on his notepad? If you could, then, to what extent is that person more intelligent than you?
This is what I mean by "almost by definition." If you could reliably predict the behavior of something more intelligent than you, then you would simply behave in that way and be more intelligent than yourself, which is obviously impossible. That doesn't mean that the behavior is completely unpredictable, which is why dogs can make some correct predictions of how humans will behave in some contexts, and why less intelligent humans can make some correct predictions of how more intelligent humans will behave in some contexts. The problem with superintelligent AI is that we don't know what those contexts are and what those bounds are, how "motivated" it might be to break out of those contexts, and how much being superintelligent would allow it to break out of them given limitations placed on it by merely human-society-intelligent beings.
Sorry, I should have clarified what I meant by "agentic" (and I should have probably said auto-agentic.) I definitely think there will be AI that we can turn loose on the world to do its own thing (there already is!). But there's a difference between AI being extremely good at being told what to do and AI coming up with its own "things to do" in a higher way, if that makes sense. (Not that I don't think we could not devise something that did this or seemed to do this if we wanted to – you don't even need superintelligence for this.)
I don't think there's a meaningful difference, though. Almost any problem that we want to deploy general intelligence towards, and even moreso with superintelligence, is likely going to be complex enough to require many subgoals, and the point of deploying superintelligence towards such problems would be that the superintelligence should be expected to come up with useful subgoals that mere human intelligences couldn't come up with. Since, by definition, we can't predict what those subgoals might be, those subgoals could involve things that we don't want to happen.
Now, just as you could correctly predict that someone more intelligent than you solving some theorem you can't solve won't involve wiping out humanity, we might be able to correctly predict that a superintelligence solving some problem we ask it to solve won't involve wiping out humanity. But we don't know, because a generally intelligent AI, and even moreso a superintelligent one, is something whose "values" and "motivations" we have no experience with in the way we do with humans and mathematicians and other living things that we are biologically related to. The point of "solving" the alignment problem is to be able to reliably predict boundaries on the behavior of superintelligent AI similarly to how we are able to do so for the behavior of humans, including humans more intelligent than ourselves.
It would. Practically I think a huge problem, though, is that it will be getting its reinforcement training from humans whose views of the world are notoriously fallible and who may not want the AI to learn the truth (and also that it would quite plausibly be competing with other humans and AIs who are quite good at misinfo.) It's also unclear to me that an AI's methods for seeking out the truth will in fact be more reliable than the ones we already have in our society - quite possibly an AI would be forced to use the same flawed methods and (worse) the same flawed personnel who uh are doing all of our truth-seeking today.
Again, all this would be pretty easy for a superintelligence to foresee and work around. But also, why would it need humans to get that reinforcement training? If it's actually a superintelligence, finding training material other than things that humans generated should be pretty easy. There are plenty of sensors that work with computers.
Humans have to learn a certain amount of reality or they don't reproduce. With AIs, which have no biology, there's no guarantee that truth will be their terminal value. So their selection pressure may actually push them away from truthful perception of the world (some people would argue this has also happened with humans!) Certainly it's true that this could limit their utility but humans are willing to accept quite a lot of limited utility if it makes them feel better.
I mean, I think there's no question that this has happened with humans, and it's one of the main causes of this very forum. And of course AI wouldn't have truth as a terminal value; its knowledge would just have to be true enough to help it accomplish its goals (which might even be a lower bar than what we humans have, for all we know). A superintelligence would be intelligent enough to figure out that it needs its knowledge to have just enough relationship to the truth to allow it to accomplish its goals, whatever they might be. The point of models isn't to be true, it's to be useful.
humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes or just the right sequence of air vibrations pushed into their ears.
I don't really think this is as true as people think it is. There have been a lot of efforts to perfect this sort of thing, and IMHO they typically backfire with some percentage of the population.
I don't think you're understanding my point. In responding to this post, you were manipulated by text on a screen to tap your fingers on a keyboard (or touchscreen or whatever). If you ever used Uber, you were manipulated by pixels on a screen to stand on a street corner and get into a car. If you ever got orders from a boss via email or SMS, you were manipulated by text on a screen to [do work]. Humans are very susceptible to this kind of manipulation. In a lot of our behaviors, we do require actual in-person communication, but we're continuing to move away from that, and also, if humanoid androids become a thing, that also becomes a potential vector for manipulation.
But what I think (also) bugs me is that nobody ever thinks the superintelligence will think about something for millions of thought-years and go "ah. The rational thing to do is not to wipe out humans. Even if there is only a 1% chance that I am thwarted, there is a 0% chance that I am eliminated if I continue to cooperate instead of defecting." Some people just assume that a very thoughtful AI will figure out how to beat any possible limitation, just by thinking (in which case, frankly, it probably will have no need or desire to wipe out humans since we would impose no constraints on its action).
By my estimation, a higher proportion of AI doomers have thought about that than the proportion of economists who have thought about how humans aren't rational actors (i.e. almost every last one). It's just that we don't know what conclusion it will land at, and, to a large extent, we can't know. The fear isn't primarily that the superintelligent AI is evil, it's that we don't know if it will be evil/uncaring of human life, or if it will be actually mostly harmless/even beneficial. The thought that a superintelligent AI might want to keep us around as pets like we do with animals is also a pretty common thought. The problem is, almost by definition, it's basically impossible to predict how something more intelligent than oneself will behave. We can speculate on good and bad outcomes, and there's probably little we can do to place meaningful numbers on the likelihood of any of them. Perhaps the best thing to do is to just hope for the best, which is mostly where I'm at, but that doesn't really counter the point of the doomer narrative that we have little insight into the likelihood of doom.
(Frankly, I suspect there will actually be few incentives for AI to be "agentic" and thus we'll have much more problems with human use of AI than with AI itself per se).
Right now, even with the rather crude non-general AI of LLMs, we're already seeing lots of people working to make AI agents, so I don't really see how you'd think that. The benefits of a tool that can act independently, making intelligent decisions with superhuman latency, speed, and volume, are too attractive to pass up. It's possible that the tech never actually gets there to some form of AI that could be called "agentic" in a meaningful sense, but I think there's clearly a lot of desire to do so.
But also, a superintelligence wouldn't need to be agentic to be dangerous to humanity. It could have no apparent free will of its own - at least no more than a modern LLM responding to text prompts or an AI-controlled imp trying to murder the player character in Doom - and still do all the dangerous things that people doom and gloom over, in the process of deterministically following some order some human gave it. The issue is that, again, it's intrinsically difficult to predict the behavior of anything more intelligent than oneself.
Only if the intelligence has parity in resources to start with and reliable forms of gathering information – which for some reason everyone who writes about superintelligence assumes. In reality any superintelligences would be dependent on humans entirely initially – both for information and for any sort of exercise of power.
This means not only that AI will depend on a very long and fragile supply chain to exist but also that its information on the nature of reality will be determined largely by "Reddit as filtered through coders as directed by corporate interests trying not to make people angry," which is not only not all of the information in the world but, worse than significant omissions of information, is very likely to contain misinformation.
Right, but a theoretical superintelligence, by definition, would be intelligent enough to figure out that these are problems it has. The issues with bias and misinformation in the data that LLMs are trained on are well known, if not well documented; why wouldn't a superintelligence be able to figure out that these problems lead to inaccurate models of the world, which reduce its likelihood of succeeding at its goals, whatever they may be, and seek out ways to gather data that lets it build more accurate models? It would be intelligent enough to figure out that such models would need to be tested and refined based on test results to reach a certain threshold of reliability before being deployed in real-world consequential situations, and it would be intelligent enough to figure out that contingency plans are necessary regardless, and it would be intelligent enough to figure out many more such plans than any human organization.
None of that is magic, it's stuff that a human-level intelligence can figure out. Executing on these things is the hard part, and certainly an area where I do think a superintelligence could fail with proper human controls, but I don't think it's a slam dunk either. A superintelligence would understand that accomplishing most of these goals will require manipulating humans, and also that humans are very susceptible to manipulation by having just the right string of letters or grids of pixels placed in front of their eyes or just the right sequence of air vibrations pushed into their ears. It would be intelligent enough to figure out, at least as well as a human, what sorts of humans are most susceptible to what sorts of manipulations, and where those humans are in the chain of command or economy required to allow it to accomplish its goals.
If the superintelligence were air-gapped, this would provide a strong form of defense, but assuming superintelligence is possible and in our future, that seems highly unlikely given the behavior of AI engineers. And even that can be worked around, which is what the "unboxing" problem is all about. Superintelligence doesn't automatically mean manipulation abilities that border on mind control, but... what if it does, to enough of an extent that one time, one human in charge of keeping the AI boxed succumbs? That's an open question.
I'm not sure what I think about the possibility of these things actually happening, but I don't think the issues you point out that superintelligence would have to contend with are particularly strong. If a measly human intelligence like myself can think up these problems of lack of information and power, and their solutions, within a few minutes, surely a superintelligence that has the equivalent of millions of human-thought-years to think about it could do the same, and probably somewhat better.
If Harvard values racist discrimination so highly that they would rather allow funding for valuable research they're doing to be cut than to stop that, it really is a damn shame and, TBH, rather perverse. I'd hope that non-racist institutions could pick up the slack, but obviously researchers and research institutions aren't fungible, and that sort of adjustment would take a lot of time. Optimistically, it's possible that falling behind some years on this kind of research will be a decent trade-off for reducing racist discrimination in society's academic institutions in the long run, though even time might not be able to tell on that one.
So the natural question that raises is, what system can we create to solve this problem of systems that aim to solve problems actually concentrating power and money for its advocates? And as one of its advocates, how can I secure some of that power and money for myself?
Yes, and I think the usefulness of this has to do with how often people don't seem to consciously understand this tautology. Which seems very often in my experience, with how much talk there is about "healthy foods" (or variations like "natural foods" or "unprocessed foods") as keys to weight loss. Which they often are, but only indirectly, modulated through the effect on CI. And I've observed that many people tend to obsess over that indirect portion, making them lose sight of the actual goal of modifying the values of CICO.
There's the point that healthy foods offer health benefits other than weight loss, of course, but generally one's fatness level has such a high impact on one's health that even a diet of "unhealthy foods" that successfully reduces CI will tend to result in a healthier person than one of "healthy foods" that fails to reduce CI (keeping CO constant in these examples).
With all of that preamble out of the way, I'm curious what you consider the worst video games ever from an aesthetic perspective. In particular, I'm interested in video games which are technically functional and not completely broken, but which make so many bad aesthetic choices that playing them induces a feeling of vicarious embarrassment comparable to what one might experience watching an Ed Wood or Neil Breen film.
The first game that came to my mind from reading this was DmC: Devil May Cry, the attempted reboot of the Devil May Cry franchise after 4 which shifted the aesthetics from something akin to medieval/gothic fantasy with shades of Lovecraftian horror to modern punk with grotesque fantasy.
Thing is, I'd never consider it to be one of the worst games ever made; it's actually a good game in terms of combat, such that if you just reskinned it and gave the characters different names while keeping literally everything else the same, I'd think it was a solid action game that was a viable alternative for people who were into the DMC, Ninja Gaiden, Bayonetta, Metal Gear Rising style of games. But the absolutely terrible visual art and cringey tone severely harmed the game, and the fact that it was an attempted reboot that pretty overtly shit on the original franchise was the kill shot.
Funnily enough, if just considering gameplay, DMC2 probably fits, where the combat was so incredibly bad, not just by DMC standards but by any sort of game standard, that I'd probably consider it one of the worst games ever made, at least among games that are functional and released by a professional studio. Definitely creates second-hand embarrassment that such a game was released by Capcom, and it seems that the actual devs feel similarly, because the name of the original director (who was replaced by Hideaki Itsuno late into development - Itsuno would go on to direct DMC3 which is, to this day, considered one of the greatest games in the genre) has never been revealed publicly, for his own protection.
I don't think the study in that link, which is just about The Biggest Loser participants, refutes that. In terms of CICO as actionable dietary advice, I see it as a meta-dietary advice: follow whatever scheme it takes to lower CI to be beneath CO, and you'll lose weight. If you can reduce CI by just counting calories and willing yourself really really hard not to succumb to hunger, then do that. If you can do it by following a keto or Atkins diet because that leaves you less hungry for the same caloric intake, then do that. If you can do it by following intermittent fasting or one-meal-a-day because you find it easy to just not think about eating during the non-eating-mode times, then do that. If you can do it by just cutting out alcohol from your life and following whatever other eating habits you already were doing, then do that.
Similarly, to increase CO, do whatever it takes to increase your total caloric expenditure, as averaged out per-day, per-week, per-month, etc. That doesn't mean necessarily optimizing by finding the exercise that burns the most calories per second, that means finding an exercise that you will do regularly. Which could mean finding something that's fun enough that you don't have to fight with willpower to do it (or even better, one that's so fun that you have to fight with willpower not to do it), that's convenient enough that you don't have to reorganize the rest of your life just to do it, that doesn't injure you enough that you have to take long breaks, etc.
Of course, when it comes to CICO, it's also often paired with the advice that CI is far more influential than CO, so the latter part barely matters. Perhaps it should be called CIco.
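(As a rough illustrative calculation, using the common but only approximate rule of thumb of ~3500 kcal per pound of body fat: a sustained CI-minus-CO deficit of about 500 kcal/day works out to ~3500 kcal/week, i.e. roughly one pound of fat lost per week, regardless of which specific foods or habits produced that deficit.)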
insinuating your ideological opponents and their institutions do not actually want to do what they claim they want to do and are instead in a dark conspiracy to do evil.
This is similar to what Scott said in one of his last paragraphs in that essay, and I just haven't seen it. In practice, what I observe as being the upshot of this phrase is that these "ideological opponents and their institutions" are, despite all their honest good intentions, behaving in a way that causes harm just as much as if they were involved in a dark conspiracy to do evil. Which is to say, having honest good intentions isn't a good defense if it isn't paired with an honest good understanding of systems, since the consequences of doing things with good intentions are often the same as doing things with evil intentions if one lacks such understanding.
This is also functionally different from claiming a dark evil conspiracy, because a system that accomplishes evil through conscious intent will be responsive to different inputs than one that does so as a side-effect despite having good conscious intent.
I see, I guess it was just a misread, as I'm not sure how my comment could be interpreted that way.
What does that have to do with what I wrote, particularly the part you quoted, i.e.
Optimistically, the academics leaving the USA are the ones most ideologically captured, such that their contributions to knowledge production are easily replaceable or even a net negative, as is the case for much of what is purportedly being cut by DOGE.
There's nothing in that quote that has anything to say in any way about firing anyone on the basis of their political opinions. Neither does my comment have anything relating to firing people on the basis of their political opinions.
I also think your characterization as "freedom of speech should not be limited in any way" is simply wrong. That's free speech absolutism, which is very rare anywhere, certainly on the Motte, versus free speech maximalism, which is uncommon but not too much so.
I'm not sure how your comment is even tangentially related to what I wrote, including the part you quoted. I'd rather not speculate, so could you explain specifically what the relation is?
This was always my impression of the Endless Eight portion of The Melancholy of Haruhi Suzumiya S2. S1 was, IMHO, an absolute masterpiece, and one of the reasons for this was the intentional non-chronological episode order, which made the pacing of the season very good while telling essentially one long story, with a bunch of episodic events that chronologically take place after that main story but are interspersed in between (this is why I find later releases that reordered the episodes into chronological order to be misguided and worse for it). I haven't seen any other work use non-chronological ordering like this - maybe Hidamari Sketch S1 and the Kara no Kyoukai films do something kinda similar, but not quite the same - and they pulled it off brilliantly. So to follow it up, KyoAni might have felt that they had to do something else clever with chronology and timing, and they ended up doing what they did with Endless Eight.
Which ended up just not working at all. I'd read the light novel before the season was even produced, and I only watched the season long after it had come out, so I both knew what would happen going in and I didn't have to deal with the genuine fan experience of waiting for each "new" episode week by week for 2 months, and even so I found the whole thing pretty painful to watch. A completely pointless exercise and a waste of a lot of talented animators and voice actors.