sodiummuffin
About the same time, Starlink terminals stopped working in newly liberated territories at the Ukraine-Russia front lines in the Kherson region. Ukrainian officials later said that was because the speed of their reconquest had pushed forces into areas that Starlink had “geo-fenced” to prevent Russia from using the service.
It was remarkably difficult to find this. Most of the news coverage, especially more recent news coverage, presents it as implicitly nefarious and either doesn't know or doesn't bother to mention that Ukrainian officials have stated what the issue was. Other than this Associated Press article the only other one I saw mentioning the actual reason was this Financial Times article quoting a third party.
My guess would have been that access was controlled by some method of authentication, so that Ukrainian terminals would work anywhere but anything held by Russians wouldn't work at all, making such a geofence unnecessary.
Starlink was made free throughout Ukraine, so I think it just works if you have a terminal, without needing an account. Doing authentication separately from owning the device seems impractical: for many military purposes you want it running continuously, and you don't want it demanding a password (that soldiers have to memorize) every time it loses power. By comparison, Ukraine has apparently been supplied with some SINCGARS encrypted radios, which work like this:
https://en.wikipedia.org/wiki/SINCGARS
When hailing a network, a user outside the network contacts the network control station (NCS) on the cue frequency. In the active FH mode, the SINCGARS radio gives audible and visual signals to the operator that an external subscriber wants to communicate with the FH network. The SINCGARS operator must change to the cue frequency to communicate with the outside radio system. The network can be set to a manual frequency for initial network activation. The manual frequency provides a common frequency for all members of the network to verify that the equipment is operational.
But something like that doesn't work for Starlink: you can't have someone at SpaceX talk to the user and confirm he's Ukrainian every time a Starlink terminal is turned on.
Instead, he started messing around with the service itself
No he didn't.
By then, Musk’s sympathies appeared to be manifesting on the battlefield. One day, Ukrainian forces advancing into contested areas in the south found themselves suddenly unable to communicate. “We were very close to the front line,” Mykola, the signal-corps soldier, told me. “We crossed this border and the Starlink stopped working.”
They are geofenced not to work in Russian-controlled areas so that Russia can't use them. Starlink continually updates this to match the situation on the ground, presumably with some allowance for contested areas. Occasionally Ukrainian advances have outpaced Starlink employees learning about the situation and updating the geofence, particularly during the period being referred to, when Ukraine made rapid advances.

"Appeared to be" is the giveaway to be maximally skeptical, even if you don't already know about the incident in question. "The media very rarely lies", but "appeared to be" here functions as journalist-speak for reporting Twitter rumors without bothering to mention whether those rumors were true. The New Yorker doesn't feel the need to verify the factual accuracy of the claim because the author isn't saying the appearance was true, just referring to the fact that it seemed true to thousands of people on Twitter who already hated Musk for his politics and jumped to conclusions after hearing that some rapid Ukrainian advances had their Starlink service cut out. The only plausible story of political interference (aside from sending the Starlink terminals at all) has been the claim that he refused to disable Starlink geofencing for proposed Starlink-piloted suicide drones striking Crimea, out of fears of escalation.
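SpaceX hasn't published how its geofencing is implemented, but the failure mode is easy to picture with a minimal sketch (everything below, polygon coordinates and names included, is hypothetical illustration): service is denied inside manually maintained polygons of territory believed to be Russian-held, so a terminal advancing past a stale polygon boundary loses service even though the ground has changed hands.

```python
# Hypothetical illustration only: SpaceX hasn't published how Starlink's
# geofencing works, and these polygon coordinates are made up.

def point_in_polygon(lat, lon, polygon):
    """Standard ray-casting test: is (lat, lon) inside the polygon?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        if (lon_i > lon) != (lon_j > lon) and \
                lat < (lat_j - lat_i) * (lon - lon_i) / (lon_j - lon_i) + lat_i:
            inside = not inside
        j = i
    return inside

# Deny-list of areas believed to be Russian-held. The key point: someone has
# to update this as the front line moves, so a rapid advance can strand
# terminals inside a stale polygon with no service.
denied_areas = [
    [(46.0, 32.0), (46.0, 34.0), (47.5, 34.0), (47.5, 32.0)],  # made-up box
]

def service_allowed(lat, lon):
    return not any(point_in_polygon(lat, lon, area) for area in denied_areas)
```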
alleged to have engaged in a little amateur diplomacy that resulted in his publicly proposing a settlement to the war that he had to have known the people he was ostensibly helping would find unacceptable
The article doesn't mention it but of course he has said exactly why he wants a settlement: he is concerned about a proxy war between the U.S. and Russia escalating into nuclear war and posing a major risk to humanity. His way of thinking here should be more understandable to this forum than most, since he has taken considerable inspiration from the same intellectual environment as LessWrong/Effective Altruism/Scott Alexander. His underlying motive is the same as his motive for Tesla/SolarCity (global warming), SpaceX (mitigate existential risk by making humanity a two-planet species), OpenAI (mitigate AI risk by having the developers take the risk seriously), Neuralink (mitigate AI risk through interfaces between AI and the human brain), and Twitter (mitigate political censorship and the risks that enables). Not to mention sending the Starlink terminals to Ukraine in the first place, though that was more small-scale than his usual concerns.
He didn't try to personally negotiate a settlement because he sent the Starlink terminals and felt that gave him the right to; he would have done it anyway. He did it because, having made more money than he could ever personally use, he has been working to defeat what he perceives as threats to humanity. You might criticize his arrogance in believing he is capable of doing so, but Tesla and (especially) SpaceX have accomplished things that conventional wisdom considered impossible, so it is perhaps understandable that he thought it was worth trying. There is obviously nothing wrong with criticizing him, I think he has made plenty of mistakes, but I wish people actually engaged with his reasoning rather than being like this article and rounding him off as a Putin sympathizer or whatever.
During the pandemic, Musk seemed to embrace covid denialism, and for a while he changed his Twitter profile picture to an image of the protagonist of [Deus Ex], which turns on a manufactured plague designed to control the masses. But Deus Ex, like “The Hitchhiker’s Guide to the Galaxy,” is a fundamentally anti-capitalist text, in which the plague is the culmination of unrestrained corporate power, and the villain is the world’s richest man, a media-darling tech entrepreneur with global aspirations and political leaders under his control.
I just skimmed the latter part of the article, but this bit stood out. We get a "seemed to", and it's implied he...believes in a specific conspiracy theory because he once changed his Twitter avatar to the protagonist of an iconic videogame in which a bunch of conspiracy theories are true? While at the same time the article tries to claim Deus Ex as an anti-capitalist game whose point he is implied to be missing? If Deus Ex is so leftist, why does using it as a Twitter avatar signal a specific conspiracy theory rather than signaling leftism, not to mention signaling neither?
One of the problems with excusing misrepresentations that you think are directionally correct is that many of the people doing so don't know how their own views have been shaped by lies or misrepresentations, building a new layer of bullshit on top of the old one. For instance:
It is undeniable that the Canadian government in association with the Catholic Church basically kidnapped tens of thousands of native children and stuffed them into places like Kamloops, where the conditions were pretty awful (though perhaps not so awful by the standards of the time).
This is how it is often described, but sending your children to residential school was optional.
https://fcpp.org/2018/08/22/myth-versus-evidence-your-choice/
Even the Truth and Reconciliation Commission has helped spread erroneous information. At the final National Gathering in Edmonton, one of the Commission’s information displays stated that, after 1920, criminal prosecution threatened First Nations parents who failed to enrol their children in a residential school. This falsehood, one frequently repeated by supposedly reputable journalists, is a reference to a clause in the revised Indian Act that said children had to be enrolled in some kind of school, a clause that was little different from the Ontario government’s 1891 legislation — nearly 30 years earlier — that made school attendance compulsory for that province’s children up to the age of 14, with legal penalties for failure to comply. Other provinces had similar laws.
And the “criminal prosecution”? The penalty specified by the Indian Act for the “crime” of not sending a child to school was “a fine of not more than two dollars and costs, or imprisonment for a period not exceeding ten days or both.” And as with provincial laws regarding school attendance, there would be no penalty if the child was “unable to attend school by reason of sickness or other unavoidable cause... or has been excused in writing by the Indian agent or teacher for temporary absence to assist in husbandry or urgent and necessary household duties.”
Now, if you lived in a location without local day schools, residential schools were the only ones available, and the percentage of natives living in such locations was higher than for the general population. But conversely, getting out of sending your children to school was easier than it is today, and indeed native enrollment was low:
In 1921, when the revised Indian Act solidified the compulsory attendance of Indigenous children in some kind of school, about 11 percent of First Nations people were enrolled in either a residential school or a federal day school. By 1939, that figure had risen to approximately 15 percent of the First Nations population, but the total enrolment of 18,752 still represented only 70 percent of the 26,200 First Nations children aged 7 to 16. Not until the late 1950s were nearly all native children — about 23 percent of the First Nations population — enrolled in either a residential school (in 1959, about 9,000), a federal day school (about 18,000) or a provincial public school (about 8,000).
And absenteeism among those enrolled was high:
For most of the years in which the IRS operated, between 10 and 15 percent of residential students were absent on any given day
Day school attendance was far worse. In the 253 day schools operating in 1921, only 50 percent of native students were showing up, and until the 1950s, these poorly-funded, inadequately-staffed schools consistently had absentee rates in the 20 percent and 30 percent range. In the 1936-37 academic year, to choose just one example, attendance in Indian day schools sank as low as 63 percent. The only residential school in Atlantic Canada, at Shubenacadie, Nova Scotia, was established in part because two previously-established day schools had been forced to close due to poor attendance. Some of the reasons for this absenteeism — the movement of families to areas where seasonal work beckoned, the need to help out at home during the Depression, and the opportunity to take labouring jobs left vacant by servicemen — are understandable, and it is worth noting that the TRC Report acknowledges that very few parents were ever charged or convicted for keeping their children out of school. But children who aren’t in school aren’t getting an education.
The punishment for your children being truant was mild, seems to have been easily avoided by giving an excuse like chronic illness, and most importantly was hardly ever enforced to begin with. That is not the sort of coercion required to get parents to send their children to a concentration camp. Native children didn't go to residential schools because they were "kidnapped", they went because their parents believed it was better than the alternatives, including the alternative of not going to school at all. That is compatible with them being low-quality schools; it isn't compatible with the insane rhetoric about them that is prevalent today.
Many deaths resulted.
Many deaths resulted from Native Americans being biologically more vulnerable to diseases like tuberculosis. Is there even any evidence that the death rate of native children at residential schools was higher than the death rate of native children elsewhere? Skimming chapter 16 ("The deadly toll of infectious diseases: 1867–1939") of the report of the Truth and Reconciliation Commission, it looks like the closest they come to an overall comparison, instead of talking about individual outbreaks, is this:
https://publications.gc.ca/site/eng/9.807830/publication.html
In response to the issues Tucker had raised, Indian Commissioner David Laird reviewed the death rates in the industrial schools on the Prairies for the five-year period ending in the summer of 1903. He concluded that the average death rate was 4%. He compared this to the 4.4% child mortality rate for the ten Indian agencies from which students were recruited for 1902. On this basis, he concluded that “consumption and other diseases are just as prevalent and fatal on the Reserves as in the schools.”
removed the "nat 1/20 is an auto-fail/success on skill checks"
Note this isn't actually a rule in 5e for skill checks, only for attack rolls automatically hitting/missing. It wasn't a rule in 3.x either. It's just that people keep misapplying the attack-roll rule to other rolls and inadvertently houseruling it, even though it's a stupid change, sometimes including D&D developers and now apparently including Larian Studios developers.
In 3rd edition it only applied to attack rolls, but then in the Deities and Demigods supplement they added a special rule for gods:
Deities of rank 1 or higher do not automatically fail on a natural saving throw roll of 1.
Yes, if you attain godhood you don't automatically fail saving throws on a 1, just like everyone else. Then in 3.5 they actually did add automatic success/failure to saving throws (which I would argue was a negative change) but still didn't have it for skill checks. (3.5 came out a year after Deities and Demigods, so they could have been consciously trying to make it backwards compatible, but I'd guess they just forgot it didn't work like that and then in 3.5 rewrote the rules to match the way they played it.)
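To make the rules-as-written distinction concrete, here's a minimal sketch (my own illustration of the 5e rules described above, not anything from the books): natural 1s and 20s are special on attack rolls only, while checks simply compare the total to the DC.

```python
import random

def attack_roll(bonus, armor_class):
    """5e attack roll: a natural 20 always hits and a natural 1 always misses."""
    roll = random.randint(1, 20)
    if roll == 20:
        return True
    if roll == 1:
        return False
    return roll + bonus >= armor_class

def skill_check(bonus, dc):
    """5e skill check, rules as written: no natural 1/20 special case.
    A +15 modifier always beats DC 10 and a +0 can never reach DC 25,
    which is exactly what the common houserule inadvertently changes."""
    return random.randint(1, 20) + bonus >= dc
```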
Interesting. I thought it might correlate with being a lower-trust society and surveys like these, especially because of the stereotype of Russians being vocally cynical, but maybe not. Though I probably shouldn't conclude anything from non-randomized social media polls.
Even the real surveys are dubious (different countries probably radically differ in how they interpret the question, especially when it's being translated) and looking at the link above Russia isn't as low on them as I thought. For instance 23.3% of surveyed Russians agreed with "most people can be trusted", which is lower than the U.S. (39.7%) or Sweden (63.8%) but slightly higher than France (18.7%) or Spain (19%), let alone Brazil (6.5%) or Zimbabwe (2.1%). It's hard to tell how meaningful any of this is.
I think this is the intended line of thinking, but red doesn't require any cooperation: pure self-interest can grant it too.
The issue is the extreme difficulty of that level of coordination, not their specific motives. Imagine I said "coordination" instead of "cooperation" if you prefer. If you place an above-zero value on the lives of people who might press blue, then the optimal outcome is either >50% blue or exactly 100% red, with every other possibility being worse.
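To spell out the payoff structure as the question has been posed in this thread (blue voters die unless blue reaches at least 50%; red voters never die), a minimal sketch:

```python
def deaths(total_voters, blue_votes):
    """Blue voters die unless blue reaches at least 50%; red voters never die."""
    if 2 * blue_votes >= total_voters:
        return 0
    return blue_votes

# Every outcome strictly between 100% red and 50% blue is worse than either end:
for blue in (0, 1, 499_999, 500_000, 1_000_000):
    print(blue, deaths(1_000_000, blue))
# 0 -> 0, 1 -> 1, 499_999 -> 499_999, 500_000 -> 0, 1_000_000 -> 0
```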
You can't rely on 100% to do pretty much anything, including act on self-interest. People in real life do things like commit suicidal school shootings, and you have to make decisions taking that into account. As I pointed out, even most mundane crime is self-destructive and yet people do it anyway. In this case, as people have pointed out, some people will pick blue by mistake, because they are momentarily suicidal enough to take a risk even though they wouldn't carry out a normal suicide, or (most of all) because they realize the above and want to save everyone.
Right, but the probability of success seems more than high enough to compensate. Not only is 50% blue better than 95% red, it's also easier because you only need 50% instead of 95%. It's especially high if communication is allowed, but even without communication "the most obviously pro-social option" is a natural Schelling point.
Now this is fairly fragile: it's plausible that with different question wording, or a society with a more cynical default conception of other people (Russia?), or the wrong set of memes regarding game theory, red would seem enough of a natural Schelling point to make aiming for blue not worth it. This would of course be a worse outcome, so if you did have access to communication it would make sense to rally people around blue rather than red if doing so seemed feasible.
Red requires 100% cooperation for the optimal outcome, blue requires 50% cooperation for the optimal outcome. It is near-impossible to get 100% cooperation for anything, particularly something where defecting is as simple as pressing a different button and has an actual argument for doing so. Meanwhile getting 50% cooperation is pretty easy. If blue required 90% or something it would probably make more sense to cut our losses and aim for minimizing the number of blue, but at 50% it's easy enough to make it worthwhile to aim for 0 deaths via blue majority.
If we are to compare to politics, I think the obvious comparison is to utopian projects like complete pacifism that only work if you either have 100% cooperation (in which case there is no violence to defend against or deter) or if you have so little cooperation that everyone else successfully coordinates to keep the violence-using status-quo (akin to voting for red but blue getting the majority). Except that such projects at least have the theoretical advantage of being better if they got 100% cooperation, whereas 100% cooperation on red is exactly the same as 50%-100% cooperation on blue.
In real life serious crime is almost always a self-destructive act, and yet people do it anyway. "Just create a society where there's no incentive to do crime and we can abolish the police because 0 people will be criminals" doesn't work, not just because you can't create such a society, but because some people would be criminals even if there was no possible net benefit. We can manage high cooperation, which is why we can coordinate to do things like have a justice system, but we can't manage 100% cooperation, that's why we need a justice system instead of everyone just choosing to not be criminals.
It might help to separate out the coordination problem from the self-preservation and "what blue voters deserve" aspects. Let us imagine an alternative version where, if blue gets below 50% of the vote, 1 random person dies for each blue vote. Majority blue is once again the obvious target to aim for so that nobody dies, though ironically it might be somewhat harder to coordinate around since it seems less obviously altruistic. Does your answer here differ from the original question? The thing is, even if you think this version favors blue more because the victims are less deserving of death, so long as you place above-zero value on the lives of blue voters in the first question the most achievable way to get the optimal outcome is still 50% blue.
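A sketch of that modified game for comparison (again my own illustration): below the 50% threshold the death count is identical to the original, and all that changes is that the victims are drawn at random rather than being the blue voters themselves, so the coordination target is unchanged.

```python
import random

def victims(votes, random_victims=False):
    """votes: a list of 'red'/'blue' choices. Below a 50% blue share, one
    person dies per blue vote: the blue voters themselves in the original
    game, random people in the modified one. At or above 50% blue, nobody
    dies in either version."""
    blue = [i for i, v in enumerate(votes) if v == "blue"]
    if 2 * len(blue) >= len(votes):
        return []
    if random_victims:
        return random.sample(range(len(votes)), len(blue))
    return blue
```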
The linked study is based on scoring higher on scales for "Hostility Towards Women", "Rape Myth Acceptance", and "Sexual Objectification". Reading the appendix, these scales are sufficiently low-quality that it is difficult to conclude much from them, at least not without the data for how people responded to individual questions.
Some of the 10 items on the "hostility towards women" scale include "I feel that many times women flirt with men just to tease them or hurt them.", "I am sure I get a raw deal from the women in my life. ", and "I usually find myself agreeing with women. (Reverse coded)". It doesn't really provide novel information to learn that someone romantically unsuccessful has worse experiences with women and is less likely to have someone like a wife in his life that he is more likely to agree with than if the women he interacts with are strangers. (It's also a bit funny to imagine someone making a "hostility towards men" scale and making one of the items "I usually find myself agreeing with men. (Reverse coded).")
Meanwhile large sections of "Rape Myths" and "Sexual Objectification" are things the now-successful Hanania would presumably agree with. Questions like that are going to pick up on very broad demographic correlations with ideology. The ideological bias on display also makes me more skeptical about the people conducting these studies. Examples of the 11 "Rape Myths" include "To get custody for their children, women often falsely accuse their ex-husband of a tendency toward sexual violence.", "Many women tend to exaggerate the problem of male violence." and "It is a biological necessity for men to release sexual pressure from time to time.". (The last would naturally correlate with high sex drive and thus sexual dissatisfaction.) Examples of the 10 "Sexual Objectification" items include "Being with an attractive woman gives a man prestige.", "Using her body and looks is the best way for a woman to attract a man.", and "Sexually active girls are more attractive partners.".
Also, some of these seem sufficiently unarguable that the responses may be heavily influenced by the respondents' social desirability bias. For instance, if many of the men disagreeing that "Being with an attractive woman gives a man prestige." or "Sexually active girls are more attractive partners." believe otherwise but are the type to answer surveys with what they perceive as the most socially desirable answer, are they also more likely to misrepresent how sexually satisfied they are? And the second item would also measure sex drive.
It's an alternative front-end for LessWrong.
https://www.lesswrong.com/posts/aHaqgTNnFzD7NGLMx/reason-as-memetic-immune-disorder
You start out talking about not writing a political statement but then end up talking about how to write political propaganda that, unlike most political propaganda, isn't poorly-written or obnoxious. Those are different goals that involve going down diverging pathways. In particular, if you're going to spend time and effort thinking about this sort of thing, how about spending it thinking about the ideologies that exist within your fictional world? Not as an allegory, not as an insertion of current issues with or without commentary, but as part of the worldbuilding. And then instead of deciding ahead of time whether an ideology or political faction is "right" or "wrong" or "it's complicated" based on how it maps to the civil-rights movement or transgenderism or whatever, evaluate it (and let your audience evaluate it) on its own terms, as an outgrowth of relevant issues in the world you have created.
Jeff Vogel of Spiderweb Software talks about something similar:
I put a ton of politics into my games, but I write political philosophy, not comments on current events. My games are not about any one Big Issue Of The Day. They are about the base principles we have that help us make our own opinions about those issues.
Instead of looking to contemporary political controversies for your inspiration, you can try looking elsewhere. You can look to history, to political conflicts where every side and even the issues they consider important are likely to be one or both of "alien" or "timeless" to modern perspectives. Similarly you can look to old political philosophy. Or to fiction that is at least old enough to not be part of the current political zeitgeist. You can look to science and technology, to the sorts of things that societies could theoretically be doing if they had different values or structures. You can look at all the setting elements you have for other reasons, for game mechanics or because they're cool or because they're part of the genre or because you had to make some sort of map/factions/history, and seriously think through how people in that world would relate to them.
Think about questions like what views are functional, whether functional for society or the individual or for some subgroup. For a recent example imagine if, before the invention of AI art, you wrote a setting where AI art was possible. I think you probably could have predicted the backlash from some artists, on grounds like economic self-interest and their self-conception, and predicted a lot of the specific rhetoric. Or, if it was invented a while ago, there's other questions like what sort of economic role it ends up fitting into long-term. I don't think this would necessarily be the most compelling setting element, it probably wouldn't be central, but I think it would probably be more interesting than inserting either contemporary politics or a metaphor for them. Maybe some reviewer would interpret it as you criticizing real-world automation as stripping meaning from work, but I don't think it would benefit from you approaching the writing as a metaphor, except perhaps by using history as a reference for how these conflicts can play out.
You don't have to do this, not every work (especially videogames) needs to have ideologies and political conflicts invented for its worldbuilding. The Law of Conservation of Detail is a very real concern, though it can enhance even briefly-mentioned details if you've put more thought into them than the audience expects. But if you don't want to do this you probably shouldn't be wasting your time and the audience's attention-span on contemporary politics either. In that case just use the superficial details that seem to match your setting/genre/aesthetic and don't do anything more. It is unlikely anyone will care. Yes, there have been cases like Kingdom Come: Deliverance (targeted by Tumblr pseudohistorian medievalpoc and then game journalists for not having "POC" in their piece of medieval Europe), but there are too many games coming out for people to create controversies like that about a meaningful fraction. Especially if you're not dumb enough to respond on social media or release a statement/apology.
I'm trying to figure out how I would make either characters that are never called attention to, or characters that are an allegory . . . for trans people.
One reason transgenderism tends to be particularly badly written in fiction, particularly fiction not set in a western country in 2023, is because it entails an ideological framework that is highly specific and restricted to a particular place and time. People will write a medieval fantasy setting and give characters views popularized on the internet less than a decade ago. Even people who don't think they're writing fiction, like Wikipedia editors writing about historical women who disguised themselves as men, will try to fit it into the trans framework (sometimes resulting in the Wikipedia article having male pronouns). Historical eunuchs and the ideological viewpoints regarding them are more genuinely alien than "what if aliens had...4 genders" or "what if aliens were genderfluid shapeshifters", because neither eunuchs nor the viewpoints regarding them were based on contemporary ideas like gender identity to begin with.
sitting members of Congress - who are saying "yeah I've seen some of the evidence, and it's crazy, and there's something here we need to look into", then it makes explanations involving hallucinations and weather balloons less plausible.
It makes hallucinations much less plausible, but I don't think it really does much about misidentified balloons, glare, something on the camera lens, camera image-sharpening algorithms combining with those others, etc. See the videos in this Metabunk thread for examples. I don't think congressmen have any special skills to distinguish whether a fast-moving blob on an aircraft camera is a spacecraft or a visual artifact. And while people like pilots and intelligence agents might be better, it isn't really their area of expertise either. They're focused on dealing with real planes, not every weird visual artifact that can happen. On the scale of a country you can cherrypick enough things that coincidentally seem alien-like to be convincing to many people, including many government officials. But if it's ultimately all formed out of random noise you'll never get that definitive piece of evidence, just lots of "that blob was crazy and we couldn't figure out another explanation", which is the pattern we've seen.
No, I meant to reply to cake's OP comment.
The binding force behind all "woke" modern movements is anti-whiteness.
A handful of years ago the most prominent SJW focus was feminism, by far. Race got some obligatory mumbling about intersectionality and how white feminists need to listen to the lived experiences of women of color, but then everyone went back to what they really cared about. For that matter the SJW community has been a breeding ground for new identities to champion, like non-binary, demisexuals, otherkin, and plurals, with non-binary being the main one to get traction outside of a handful of sites like Tumblr. The SJW memeplex has relatively little to do with the specifics of the groups it claims to champion, making it quite mutable.
That doesn't make the anti-whiteness any less real, race-based prioritization of the COVID-19 vaccine alone killed tens or hundreds of thousands of white people. Even if future SJWs refocus on plurals or something, it is likely that without sufficient pushback captured organizations like the CDC will continue quietly making decisions like that about race. But don't assume they're dependent on any particular identity group or expect them to remain the same while you try to position yourself against them.
Subliminal messaging doesn't work, ideological messaging does. Both the "look at Falwell saying crazy stuff about Tinky Winky" rhetoric and to a much lesser extent the "look at Tinky Winky being a gay icon" rhetoric presumably contributed to strengthening the social-justice ideological framework in which homosexuality is high-status, leading people to identify as gay and then sometimes even have gay sex. But there's no reason to believe the character himself did, because whether his supposed gay associations were intentional or not (probably not) the vast majority of people looking hard enough to see it already had strong ideological views on the subject. The existence of a character like that does nothing to strengthen those views, while a news story about how one of the enemy is stupid does. Same way crossdressing stories like Mulan aren't what caused the massive surge in transgenderism. Or antifa people attacking people at conservative protests and claiming to be inspired by Captain America or historical WW2 veterans - what inspired them is the antifa memeplex itself.
It is fundamentally missing the point of the recent surge in social-justice "identities", because for the most part it isn't even about the actual features of those groups, it is about the ideology itself. Thus the popularity of things like "grey-asexual" identities that let you be asexual while having sex or "non-binary" identities that let you be transgender without transitioning. That doesn't mean the surge in those identifications isn't connected to behavior, there really are a lot more people having gay sex even if they're a smaller percentage of those identifying as gay. This increase is of course most dramatic with transgenderism, where it's looking like (contrary to the concept of gender identity) there isn't much stopping people from transitioning when their ideology and social circle pushes them towards it. But this transmits through the ideological memeplex, not fictional characters being vaguely non-masculine.
Wouldn’t the rarity of the catastrophic failure matter as well?
Which is why you do enough math to sanity-check the comparison. As I mentioned, Fukushima released more radioactivity than would be released by burning all the coal deposits on Earth. Nuclear power plants involve relevant amounts of radioactivity, coal plants don't. The fact that a release like Fukushima happened even once implies the odds aren't low enough to overcome the massive difference in radioactivity. Nuclear has plenty of advantages, and the risk of catastrophic failure is low enough that those other advantages might easily outweigh it, but being less of a radiation risk than coal is not one of them.
I addressed this in the footnote.
But it's not true that "for the energy generated, more radiation is given out by fly ash". You didn't say "so long as nothing goes wrong", so the average amount of radiation released per energy produced includes the risk of disaster. And since nuclear power plants involve significantly radioactive material and coal plants don't, even a tiny risk is enough to push the average way above coal plants. The fact that Fukushima alone released more radioactivity than the fly ash we would get from burning all coal deposits on Earth makes this clear.
It is a quite common myth that living near a nuclear power plant emits radiation during ongoing operations.
Then just say "nuclear power plants release virtually no radiation under normal operation". Don't try to make it sound like nuclear beats coal in terms of radiation, on a technicality sufficiently narrow that both you and the Scientific American article you link (and the people I've seen bring up this talking point before) stumble into outright falsehood. Nuclear beats coal on plenty of metrics, there is no need to compare them in terms of radioactivity besides the appeal of being counterintuitive.
Scientific American: Coal Ash Is More Radioactive Than Nuclear Waste
the study i linked found that for the energy generated, more radiation is given out by fly ash, which contains trace amounts of uranium and thorium. while the amount of radiation that makes it into people from both of these sources isn't dangerous, it's worth pointing out when given the concerns of "gonna be irradiated."
The title of that article is laughably false. The underlying point it is based on, that under normal operation a nuclear plant releases less radioactive material into the environment than a coal plant, is technically true but grossly misleading. Under normal operation nuclear plants release essentially no radioactive material; the radioactivity concern is concentrated purely into the possibility of something going wrong. Here's a sanity check I did after encountering this argument a decade ago:
The EPA gives the radioactivity of average fly ash as 5.8 picocuries per gram, and the U.S. produces around 125 million tons of coal-combustion byproducts per year as of 2006. If we overestimate and assume all coal-combustion byproducts are the more-radioactive fly ash, that comes to around 658 curies worth of material per year. By comparison, a year after the Fukushima disaster TEPCO estimated total radiation releases as 538,100 terabecquerels - equivalent to 14,543,243 curies. Note that this assumes all fly ash is being released into the environment when modern first-world plants safely capture most of it. So one year after the Fukushima disaster it had already released more radiation than 22,000 years of 2006-era U.S. coal radiation emissions, under very pessimistic assumptions. Which means we can confidently estimate Fukushima has released far more radiation than all the coal burned in human history and all the coal remaining in the ground that could be burned combined.
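Reproducing that arithmetic (short tons; 1 curie = 3.7 × 10^10 becquerels):

```python
# Figures from the EPA and TEPCO estimates cited above.
PCI_PER_GRAM = 5.8            # average fly ash radioactivity, picocuries/gram
TONS_PER_YEAR = 125e6         # U.S. coal-combustion byproducts, 2006
GRAMS_PER_SHORT_TON = 907_185

ash_curies_per_year = PCI_PER_GRAM * 1e-12 * TONS_PER_YEAR * GRAMS_PER_SHORT_TON
print(round(ash_curies_per_year))                     # ~658 Ci/year

fukushima_becquerels = 538_100e12                     # TEPCO estimate
fukushima_curies = fukushima_becquerels / 3.7e10
print(round(fukushima_curies))                        # ~14,543,243 Ci

print(round(fukushima_curies / ash_curies_per_year))  # ~22,000 years of coal
```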
This doesn't mean that nuclear power is overall a bad idea, but it's definitely not because coal is a worse radioactivity concern. From what I've heard, this particular misleading talking point has been going around since before it started circulating on the internet; I remember someone telling me it was making the rounds at Stanford decades ago. People should be cautious with counterintuitive factoids like this, because often they spread because they are too good to check.
My problem is, while I'm sure that not all the examples of GPT-4 seeming to get complex reasoning tasks are fake, if they cannot be replicated, what good are they?
I am saying they can be replicated, just by someone who unlike you or me has paid the $20. I suppose it is possible that the supposed degradation in its capabilities has messed up these sorts of questions as well, but probably not.
If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for its monkey overlords?
There is a big difference between random guessing and having a capability that sometimes doesn't work. In particular, if the chance of randomly getting the right result without understanding is low enough. Text generators based on Markov chains could output something that looked like programming, but they did not output working programs, because such an outcome is unlikely enough that creating a novel program is not something you can just randomly stumble upon without some idea of what you're doing. In any case, as far as I know GPT-4 is not that unreliable, especially once you find the prompts that work for the task you want.
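For contrast, this is essentially all a Markov-chain text generator does (a minimal word-level sketch): nothing outside the last few tokens informs the next one, which is why its output can look like code without ever being a program that runs.

```python
import random
from collections import defaultdict

def train(tokens, order=2):
    """Map each n-gram to the list of tokens that followed it in training."""
    model = defaultdict(list)
    for i in range(len(tokens) - order):
        model[tuple(tokens[i:i + order])].append(tokens[i + order])
    return model

def generate(model, seed, length=50):
    """Repeatedly sample a continuation of the current n-gram: locally
    plausible, globally planless."""
    out = list(seed)
    order = len(seed)
    for _ in range(length):
        continuations = model.get(tuple(out[-order:]))
        if not continuations:
            break
        out.append(random.choice(continuations))
    return " ".join(out)
```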
Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.
How well it reasons is a different question from whether it reasons at all. It is by human standards very imbalanced in how much it knows vs. how well it reasons, so yes people who think it is human-level are generally being fooled by its greater knowledge. But the reasoning is there and it's what makes a lot of the rest possible. Give it a programming task and most of what it does might be copying common methods of doing things that it came across in training, but without the capability to reason it would have no idea of how to determine what methods to use and fit them together without memorizing the exact same task from elsewhere. So practical use is generally going to involve a lot of memorized material, but anyone with a subscription can come up with novel questions to test its reasoning capabilities alone.
Despite being based on GPT-4 Bing is apparently well-known for performing dramatically worse. There have been some complaints of GPT-4's performance degrading too, presumably due to some combination of OpenAI trying to make it cheaper to run (with model quantization?) and adding more fine-tuning trying to stop people from getting it to say offensive things, but hopefully not to the extent that it would consistently fail that sort of world-modeling. (If anyone with a subscription wants to also test older versions of GPT-4 it sounds like they're still accessible in Playground?)
I don't think it's plausible that all the examples of GPT-4 doing that sort of thing are faked, not when anyone shelling out the $20 can try it themselves. And people use it for things like programming, you can't do that without reasoning, just a less familiar form of reasoning than the example I gave.
He is likely referring to this from pages 11-12 of the GPT-4 whitepaper:
GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, after the post-training process, the calibration is reduced (Figure 8).
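"Calibrated" has a precise meaning here, roughly what a reliability diagram like the whitepaper's Figure 8 plots. A minimal sketch of the check, assuming you have (confidence, was-it-correct) pairs from the model:

```python
def reliability_table(predictions, bins=10):
    """predictions: (confidence, correct) pairs with confidence in [0, 1].
    A well-calibrated model has mean confidence close to actual accuracy
    within each confidence bin."""
    table = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        bucket = [p for p in predictions
                  if lo <= p[0] < hi or (hi == 1.0 and p[0] == 1.0)]
        if bucket:
            mean_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
            table.append((lo, hi, mean_conf, accuracy, len(bucket)))
    return table
```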
In any case, the articles you quote are oversimplified and inaccurate. Predicting text (and then satisfying RLHF) is how it was trained, but the way it evolved to best satisfy that training regime is a bunch of incomprehensible weights that clearly have some sort of general reasoning capability buried in there. You don't need to do statistical tests of its calibration to see that, because something that was truly just doing statistical prediction of text, without having developed reasoning or a world-model to help with that task, wouldn't be able to do even the most basic reasoning like this unless it already appeared in the text it was trained on.
It's like saying "humans can't reason, they're only maximizing the spread of their genes". Yes, if you aren't familiar with the behavior of LLMs/humans, understanding what they evolved to do is important to understanding that behavior. It's better than naively assuming that they're just truth-generators. If you wanted to prove that humans don't reason you could point out all sorts of cognitive flaws and shortcuts with obvious evolutionary origins and say "look, it's just statistically approximating what causes more gene propagation". Humans will be scared of things like spiders even if they know they're harmless because they evolved to reproduce, not to reason perfectly, like an LLM failing at Idiot's Monty Hall because it evolved to predict text and similar text showed up a lot. (For that matter humans make errors based on pattern-matching ideas to something they're familiar with all the time, even without it being a deeply-buried instinct.) But the capability to reason is much more efficient than trying to memorize every situation that might come up, for both the tasks "predict text and satisfy RLHF" and "reproduce in the ancestral environment", and so they can do that too. They obviously can't reason at the level of a human, and I'd guess that getting there will involve designing something more complicated than just scaling up GPT-4, but they can reason.
Is that your real objection? If it was instead a serial killer who you believe doesn't have any particularly inaccurate beliefs about his victims, but simply enjoys killing people and has been hunting the person you're hiding as his next target, would you tell him the truth or would you come up with a different excuse for why it's acceptable to lie?
It seems like probably the real reason you don't tell the truth is simply that if you do it'll result in someone's death and no real gain, just adherence to the "don't lie" rule. But if that's your reason then just say that's your reason, rather than obscuring it behind excuses specific to the situation.
The proponents were saying 'let's get rid of Saddam it'll be easy and stabilize the Middle East, spread democracy, make new allies...'.
Helping Iraqis and the Middle East doesn't significantly materially strengthen the U.S.; it's expending U.S. resources and power for the sake of charity. This is inherently self-limiting: the U.S. has resources to waste on things like this, but in the end it is left with less capability to wage war than it started with. Having Iraq as an ally or vassal was never going to be valuable enough to be worth a war, even if it was as easy as proponents thought it would be, and proponents of the war instead justified it in terms of humanitarian (Saddam, democracy) or threat-reduction (WMDs) concerns. And the U.S. didn't even really turn Iraq into a vassal: it's a democracy that has at times been vocally critical of the U.S., and there is no guarantee that U.S./Iraq relations won't worsen further in the future. It would have been far easier to turn it into an ally in some other way, like buddying up to Saddam or replacing him with some other dictator. Proponents of the Iraq war didn't say they would turn Iraq into a vassal; they said they would turn it into a democracy, and that is indeed what they did. It was the opponents of the war, the "No blood for oil" people, who said the U.S. would materially benefit, but that was never remotely realistic and the proponents didn't claim it was.
"Anti-woke" includes many things that are beneficial to black people, most obviously in that it opposes wokeness in areas that have nothing to do with race, but also even within the realm of race. For instance, consider the CDC's COVID-19 vaccine prioritization policy. They deprioritized older people relative to essential workers because older people are more white, even though they estimated this would result in many additional deaths (especially if the vaccine was less effective at preventing infection than serious disease, which turned out to be the case). This policy killed more black people it just killed even more white people so the proportion of the deaths was more white. How did it benefit black people that more of them died so that more white people would die so that the percentages looked better to woke ACIP/CDC officials? Take the argument from the expert on ethics and health-policy the NYT quoted:
“Older populations are whiter,” Dr. Schmidt said. “Society is structured in a way that enables them to live longer. Instead of giving additional health benefits to those who already had more of them, we can start to level the playing field a bit.”
I don't think the average black person would really be sympathetic to this argument, even before you pointed out it was also going to kill more black people. These sorts of arguments are mostly only appealing to the woke. And of course the same is true for plenty of less life-or-death issues, like Gamergate's NotYourShield consisting of women and minorities who didn't think they benefited from journalists defending themselves by accusing critics of being sexist/racist/etc.
Furthermore, even within the limited realm of affirmative-action I don't think wokeness genuinely serves the racial self-interest of black people. There are many more black people who benefit from infrastructure than from racial quotas in infrastructure contracts, more who need medical care than who go to medical school, more who use Google than who work for Google. It isn't just the principles that want the black percentage to be high vs. the ones that want it to be low, there is an inherent asymmetry because meritocracy isn't just an arbitrary "principled libertarian stance", it serves an important functional purpose.
Of course diversity advocates also sometimes say that affirmative-action/etc. benefits everyone, it's just that they're wrong. Other times racial resentment and malice clearly play a role, but even then that doesn't mean it actually serves racial self-interest. In general I think ideological conflicts have a lot more true believers and a lot less people cynically pursuing their interests than people tend to think they have.
That quote was specifically about not allowing them to directly control drones via Starlink, not "use of Starlink for military purposes" in general. They're fine with allowing them to be used for military communication but apparently not with drones carrying Starlink terminals so that they can be controlled by satellite without worrying about range and with less concern about jamming.
Reuters: SpaceX curbed Ukraine's use of Starlink internet for drones -company president
The Economist: Ukraine is betting on drones to strike deep into Russia
Aside from Starlink's apparent desire to not directly serve as the command and control system for drones and Musk's stated fears about escalation, I wonder if the U.S. government played some part in that decision, like how the U.S. has been reluctant to provide Ukraine with long-range missile systems capable of striking inside Russia.
Washington Post: U.S. in no hurry to provide Ukraine with long-range missiles