sodiummuffin
For instance, women seem more able to put themselves in the shoes of male protagonists in fiction, while men generally seem uninterested in female protagonists.
In anime and manga there are entire genres, most obviously slice-of-life comedies, where it is typical to have nearly 100% female casts (and a 50% or higher male audience). Female characters are a publishing requirement at plenty of manga magazines, and not for ideological reasons. Here is a relevant extra from the comedy manga/anime D-Frag, which ended up with a main cast that looks like this. The same is true for anime-style videogames, in particular gacha games which have an emphasis on character design. Even aside from the subsets of Japanese/Japanese-inspired media doing their best to tile the universe with cute girls, plenty of stories from times and places unconcerned with feminism have gone out of their way to incorporate female characters into roles like "warrior" which would realistically be all male, from ancient myths to modern fantasy.
If a subset of modern western characters like the female Captain Marvel aren't appealing to men, perhaps it is because none of the people involved with creating them designed them to be. That doesn't mean they can't be "strong" or whatever; female anime/manga characters are varied and include those with nearly every kind of "strength" imaginable, both the kinds of strength primarily associated with men and the kinds that aren't. But it does mean they shouldn't be designed by people who view "making a strong female character" or "making sure not to incorporate misogynistic tropes" as primary goals in character writing, goals which often take precedence over concerns like making the character likable or interesting. Indeed, most of those strong female anime/manga characters were written by people who have probably never encountered a phrase like "strong female character" in their lives, let alone had it as an important category shaping how they think about writing fiction.
Why do you think this has anything to do with utilitarianism? Utilitarianism doesn't value the lives and well-being of mass-murderers any less than it values anyone else. It only recommends harming them as an instrumental goal to serve a more important purpose, such as saving the lives of others. A 20-year-old who raped and killed a dozen children still has plenty of potential QALYs to maximize, even adjusting his life-quality downward to account for being in prison. It's expensive, but governments spend plenty of money on things with lower QALY returns than keeping prisoners alive. Also, OP only differs from conventional death-penalty advocacy in that he seems concerned with the prisoners consenting, proposing incentivizing suicide instead of just executing them normally, and once again that is not something utilitarianism is particularly concerned with except in instrumental terms.
The utilitarian approach would be to estimate the deterrent and removal-from-public effect of execution/suicide-incentivization/life-in-prison/etc. and then act accordingly to maximize the net welfare of both criminals and their potential victims. It doesn't terminally value punishing evil people like much of the population does, though I think rule-utilitarianism would recommend such punishment as a good guideline for when it's difficult to estimate the total consequences. (In Scott's own Unsong the opposition of utilitarians to the existence of Hell is a plot point, reflecting how utilitarianism doesn't share the common tendency towards valuing punishment as a terminal goal.) But neither is utilitarianism like BLM in that it cares more about a couple dozen unarmed black people getting shot in conflicts with police than about thousands of additional murder victims and fatal traffic accidents per year from a pullback in proactive policing. That's just classic trolley-problem material: if one policy causes a dozen deaths at the hands of law-enforcement, and the other policy causes thousands of deaths but they're "not your fault", then it's still your responsibility to make the choice with the best overall consequences. There are of course secondary consequences to consider like the effect on police PR affecting cooperation with police, but once you're paying attention to the numbers I think it's very difficult to argue that they change the balance, especially when PR is driven more by media narratives than whether the number is 12 or 25 annually.
Notably, when utilitarians have erred regarding prisoners it seems to have been in the exact opposite direction you're concerned about. A while back someone here linked a critical analysis of an EA organization's criminal-justice-reform funding. The funders were primarily concerned with the welfare of the criminals rather than with secondary effects like the crime rate. The effect on the criminals' welfare is easier to estimate, after all; it's an easy mistake to make, and one that shows why utilitarians need to avoid the streetlight effect. The program was also grossly inefficient compared to other EA causes like third-world health interventions. They did end up jettisoning it (by spinning it off into an independent organization without Open Philanthropy funding), but not before spending $200 million, including $50 million on seed funding for the new organization. However, I think a lot of that can be blamed on the influence of social-justice politics rather than on utilitarian philosophy, and at least they ultimately ended up getting rid of it. (How many other organizations blowing money on "criminal justice reform" that turns out to be ineffective or harmful have done the same?) In any case, they hardly seem like they're about to start advocating for OP's proposal.
Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own).
Sure, but of course such measures being possible doesn't mean they'll actually be done.
Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets
This seems like too much certainty about the nature and difficulty of the task, which in turn influences whether significant delay actually increases the odds of success. For instance, if we turn out to live in a universe where superhuman AI safety isn't that hard, then the important thing is probably that it be done by a team that considers it a serious concern at all. Right now the leading AI company is run by people who are very concerned with AI alignment and who founded the company with that in mind; if we ban AI development and then the ban gets abandoned in 30 years, there's a good chance that won't be the case again.
A candidate for such a universe would be if it's viable to make superintelligent Tool AIs. Like if GPT-10 can mechanistically output superhuman scientific papers but still doesn't have goals of its own. Such an AI would still be dangerous and you certainly couldn't release it to the general public, but you could carefully prompt it for papers suggesting more resilient AI alignment solutions. Some have argued Agent AIs would have advantages compared to Tool AIs, like Gwern arguing Tool AIs would be "less intelligent, efficient, and economically valuable". Let's say we live in a future where more advanced versions of GPT get routinely hooked up to other components like AgentGPT to carry out tasks, something which makes them significantly better at complicated tasks. OpenAI just developed GPT-10, which might be capable of superhuman scientific research. They can immediately hook it up to AgentGPT+ and make trillions of dollars while curing cancer, or they can spend 2 years tweaking it until it can perform superhuman scientific research without agentic components. It seems plausible that OpenAI would take the harder but safer route, but our 2050s AI company very well might not bother. Especially if the researchers, having successfully gotten rid of the ban, view AI alignment people the same way anti-nuclear-power environmentalists and anti-GMO activists are viewed by those respective fields.
Regarding talk of 100-year bans on AI while people steadily work on supposedly safer methods, I'm reminded of how 40 years ago overpopulation was a big mainstream concern among intellectuals. These ideas influenced government policy, most famously China's One Child policy. Today the fertility rate is substantially reduced (though mostly not by the anti-overpopulation activists), the population is predictably aging, and...the plan is completely abandoned, even though that was the entirely predictable result of dropping fertility. Nowadays if a country is concerned with fertility either way, it will want it to increase rather than decrease. Likewise the eugenics movement had ambitions of operating across many generations before being erased by the tides of history. In general, expecting your movement/ideas to retain power that long seems very risky.
people who do not and who never have existed can't be said to have "wants" in any meaningful sense
You should include people who will exist as well, as opposed to people who could potentially exist if you took other actions but will never actually exist. Otherwise something like "burying a deadly poison that you know will leach into the water table in 120 years" would be perfectly moral, since the people it will kill don't exist yet.
This kind of idiotic one-dimensional thinking is why I maintain that utilitarianism is fundamentally stupid, evil, and incompatible with human flourishing.
As I mentioned, Preference Utilitarianism and Average Preference Utilitarianism are also forms of utilitarianism. And Total Utilitarianism doesn't imply wireheading either. Wireheading is only an implication of particularly literal and naive forms of hedonic utilitarianism that not even actual historical hedonic utilitarians would endorse; they would presumably either claim it isn't "real" happiness or switch to another form of utilitarianism.
Honestly, I think the main rhetorical advantage of non-utilitarian forms of ethics is that they tend to be so incoherent that it is harder to accuse them of endorsing anything in particular. But people being bad at formalizing morality doesn't mean they actually endorse their misformalization's implications. You just tried to express your own non-utilitarian beliefs and immediately endorsed sufficiently-delayed murders of people who aren't born yet; that doesn't mean you actually support that implication. But having non-formalized morality is no advantage in real life, and it often leads to terrible decisions by people who have never rigorously thought about what they're doing, because you really do have to make choices. In medicine, utilitarianism gave us QALYs while non-consequentialism gave us restrictive IRBs that care more about the slightest "injustice" than about saving thousands of lives; as a human who will require medical care, I know which of those I prefer.
omnicide
The view he is expressing is of course the opposite of this - that humanity surviving until it ultimately colonizes the galaxy is so important that anything that improves humanity's safety is more important than non-omnicidal dangers. Of course that would still leave a lot of uncertainty about what the safest path is. As I argued, significant delays are not necessarily more safe.
My 1e999999999999999 hypothetical future descendants who see utilitarian AIs as abominations to be purged with holy fire in the name of the God-Emperor are just as real as your "10^46 hypothetical people per century after galactic colonization" and their preferences are just as valid.
To be clear the "preference" framing is mine, since I prefer preference utilitarianism. Bostrom would frame it as something like trying to maximize the amount of things we value, such as "sentient beings living worthwhile lives".
Both. Mostly I was contrasting it with the opposite position: that risking nuclear escalation would be unthinkable even if it were a purely harmful doomsday device. If it were an atmosphere-ignition bomb being developed for deterrence purposes that people thought had a relevant chance of going off by accident during development (even if it was only a 1% risk), then aggressively demanding an international ban would be the obvious move even though it would carry some small risk of escalating to nuclear war. The common knowledge about the straightforward upside of such a ban would also make it much more politically viable, making it more worthwhile to pursue a ban rather than focusing on trying to prevent accidental ignition during development. Also, unlike ASI, developing the bomb would not help you prevent others from causing accidental or intentional atmospheric ignition.
That said, I do think that is the main reason that pursuing an AI ban would be bad even if it were politically possible. In terms of existential risk I have not read The Precipice and am certainly not any kind of expert, but I am dubious about the idea that delaying for decades or centuries attempting to preserve the unstable status quo would decrease rather than increase long-term existential risk. The main risk I was thinking about (besides "someone more reckless develops ASI first") was the collapse of current civilization reducing humanity's population and industrial/technological capabilities until it is more vulnerable to additional shocks. Those additional shocks, whether over a short period of time from the original disaster or over a long period against a population that has failed to regain current capabilities (perhaps because we have already used the low-hanging fruit of resources like fossil fuels), could then reduce it to the point that it is vulnerable to extinction. An obvious risk for the initial collapse would be nuclear war, but it could also be something more complicated, like dysfunctional institutions failing to find alternatives to depleted phosphorus reserves before massive fertilizer shortages. Humanity itself isn't stable either; it is currently slowly losing intelligence and health to both outright dysgenic selection from our current society and to lower infant mortality reducing purifying selection, so the humans confronting future threats may well be less capable than we are. Once humans are reduced to subsistence agriculture again, the obvious candidate to take them the rest of the way would be climate shocks, which have greatly reduced the human population in the past.
Furthermore, I'm not that sympathetic to Total Utilitarianism as opposed to something like Average Preference Utilitarianism, I value the preferences of those who do or will exist but not purely hypothetical people who will never exist. If given a choice between saving someone's life and increasing the number of people who will be born by 2, I strongly favor the former because his desire to remain alive is real and their desire to be born is an imaginary feature of hypothetical people. But without sufficient medical development every one of those real people will soon die. Now, wiping out humanity is still worse than letting everyone die of old age, both because it means they die sooner and because most of those people have a preference that humanity continue existing. But I weigh that as the preferences of 8 billion people that humanity should continue, 8 billion people who also don't want to die themselves, not the preferences of 10^46 hypothetical people per century after galactic colonization (per Bostrom's Astronomical Waste) who want to be born.
If Russia invaded Alaska and said "if you shoot back at our soldiers we will launch nuclear weapons", letting them conquer Alaska would be better than a nuclear exchange. Nonetheless the U.S. considers "don't invade U.S. territory" a red line that it is willing to go to war with a nuclear power to protect. The proposal would be to establish the hypothetical anti-AI treaty as another important red line, hoping that the possibility of nuclear escalation remains in the background as a deterrent without ever manifesting. The risk from AI development doesn't have to be worse than nuclear war, it just has to be worse than the risk of setting an additional red line that might escalate to nuclear war. The real case against it is that superhuman AI is also a potentially beneficial technology (everyone on Earth is already facing death from old age, after all, not to mention non-AI existential risks); if it were purely destructive, then aggressively pursuing an international agreement against developing it would make sense for even relatively low percentage risks.
More developments on the AI front:
Big Yud steps up his game, not to be outshined by the Basilisk Man.
It is a video from over two months ago in which he hyperbolically describes how implausible he thinks it is that the world imposes strict international regulations on AI development. It is not a new development because someone on Twitter decided to clip it. He mentions nuclear weapons to illustrate how enforcing a treaty against a nuclear power is a hard problem. Of course, in reality if one side considered it a high priority it is very likely an agreement could be found before escalating to that point, same as the existing agreements between nuclear powers. There isn't going to be a treaty banning AI development because not even the U.S. wants one, in part because the outcome of developing superhuman AI is so uncertain and controversial, not because "bright line that we will risk nuclear exchange to prevent you crossing" is something unimaginable in international relations.
Same as everywhere else, the people who made the decision are true believers who think this is a great idea for the Navy and/or for their moral/ideological goals.
As I see it, the military is probably the last place that would be under pressure to go woke - the Left hates it unconditionally and passionately anyway, it is impossible to "cancel" it in any meaningful way, you can not really orchestrate an ideological boycott against the military...
There's a weird tendency to personify institutions and act like principal-agent problems don't exist, like how people will tie themselves in knots trying to come up with explanations about how corporations with SJW institutional capture are actually profit-maximizing. Why would someone with a Navy recruitment job care more about "doing the best job possible to slightly improve Navy recruitment numbers" than "making the world a safe place for LGBTQ+ people"? Even more importantly, why wouldn't someone with such an ideology sincerely believe that he can do both? People are biased about the merits of their ideology in other circumstances, they don't turn that off when they're making decisions on behalf of an institution. They can tell themselves something like "This will boost recruitment by showing the Navy is an inclusive place for young people, anyone bigoted enough to object is an asshole who would cause problems anyway." and believe it.
I was going to point out that people who got the vaccine were older and had a higher base death rate than those who didn't, so there is selection bias in any comparison. But then I actually clicked your link, and it's way dumber than that! It isn't comparing to people who didn't get the vaccine, it's comparing VAERS reports by length of time since vaccination. Whether to make a VAERS report is an arbitrary decision, and obviously doctors will be more likely to do it the closer to vaccination it happened. If someone has a heart-attack a few hours after being vaccinated there will almost certainly be a VAERS report, if someone has a heart-attack months after being vaccinated there probably won't be, and that is true even if the risk of heart attack on day 0 and day 90 is exactly the same.
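To illustrate why counting reports by time-since-vaccination is meaningless, here is a minimal simulation sketch. All of the numbers (event rate, reporting probabilities, cohort size) are made up purely for illustration; the point is that a constant daily heart-attack risk combined with reporting that fades over time still produces a spike of reports right after vaccination:

```python
import random

random.seed(0)

DAYS = 180                    # follow-up window after vaccination
PEOPLE = 100_000              # hypothetical vaccinated cohort
DAILY_EVENT_RATE = 0.0005     # constant heart-attack risk: identical on day 0 and day 179

def report_probability(day):
    # Assumed reporting behavior: doctors almost always file a VAERS report for an
    # event right after the shot, and almost never for one months later.
    return max(0.02, 0.9 * (1 - day / DAYS))

reports = [0] * DAYS
for day in range(DAYS):
    events = sum(random.random() < DAILY_EVENT_RATE for _ in range(PEOPLE))
    reports[day] = sum(random.random() < report_probability(day) for _ in range(events))

print("reports filed for days 0-6:    ", sum(reports[:7]))
print("reports filed for days 173-179:", sum(reports[-7:]))
# The first week shows far more reports than the last, even though the underlying
# risk never changed.
```

A real analysis would compare event rates against a baseline population rather than counting reports by days since the shot, which is exactly what the linked comparison fails to do.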
Not a reaction of someone who is not even slightly worried.
Sure it is. Yudkowsky is exactly the sort of person who would be outraged at the idea of someone sharing what that person claims is a basilisk, regardless of whether he thinks the specific argument makes any sense. He is also exactly the sort of person who would approach internet moderation with hyper-abstract ideas like "anything which claims to be a basilisk should be censored like one" rather than in terms of PR.
Speaking or writing in a way where it's difficult to use your statements to smear you even after combing through decades of remarks is hard. It's why politicians use every question as a jumping-off point to launch into prepared talking points. Part of Yudkowsky's appeal is that he's a very talented writer who doesn't tend to do that; instead you get the weirdness of his actual thought processes. When presented with Roko's dumb argument, his thoughts were about "the correct procedure to handle things claiming to be basilisks", rather than "since the argument claims it should be censored, censoring it could be used to argue I believe it, so I should focus on presenting minimum attack-surface against someone trying to smear me that way".
Again, I deleted that post not because I had decided that this thing probably presented a real hazard, but because I was afraid some unknown variant of it might, and because it seemed to me like the obvious General Procedure For Handling Things That Might Be Infohazards said you shouldn't post them to the Internet. If you look at the original SF story where the term "basilisk" was coined, it's about a mind-erasing image and the.... trolls, I guess, though the story predates modern trolling, who go around spraypainting the Basilisk on walls, using computer guidance so they don't know themselves what the Basilisk looks like, in hopes the Basilisk will erase some innocent mind, for the lulz. These people are the villains of the story. The good guys, of course, try to erase the Basilisk from the walls. Painting Basilisks on walls is a crap thing to do. Since there was no upside to being exposed to Roko's Basilisk, its probability of being true was irrelevant. And Roko himself had thought this was a thing that might actually work. So I yelled at Roko for violating basic sanity about infohazards for stupid reasons, and then deleted the post. He, by his own lights, had violated the obvious code for the ethical handling of infohazards, conditional on such things existing, and I was indignant about this.
Then the argument moves to, well isn't puberty blockers irrecoverable harm to the child because of sterilization just like cutting off an arm? I'd say no, the issue isn't the loss of tissue it's the loss of capabilities.
There is good reason to believe that puberty blockers permanently hinder brain development, which hormones during puberty play an important role in. Unfortunately there are zero randomized controlled trials examining this, and even less evidence regarding using them to prevent puberty entirely rather than to delay precocious puberty a few years, but they have that effect in animal trials:
The long-term spatial memory performance of GnRHa-Recovery rams remained reduced (P < 0.05, 1.5-fold slower) after discontinuation of GnRHa, compared to Controls. This result suggests that the time at which puberty normally occurs may represent a critical period of hippocampal plasticity. Perturbing normal hippocampal formation in this peripubertal period may also have long lasting effects on other brain areas and aspects of cognitive function.
That study also cites this study in humans, which found that a 3-year course of puberty blockers to treat precocious puberty was associated with a 7% reduction in IQ, but since it doesn't have a control group I wouldn't put much weight on it.
Similar concerns were mentioned by the NHS's independent review:
A further concern is that adolescent sex hormone surges may trigger the opening of a critical period for experience-dependent rewiring of neural circuits underlying executive function (i.e. maturation of the part of the brain concerned with planning, decision making and judgement). If this is the case, brain maturation may be temporarily or permanently disrupted by puberty blockers, which could have significant impact on the ability to make complex risk-laden decisions, as well as possible longer-term neuropsychological consequences. To date, there has been very limited research on the short-, medium- or longer-term impact of puberty blockers on neurocognitive development.
You're comparing diagnoses per year for those aged 6-17 to the total number of children. You have to multiply the yearly figure by 12 for the whole time period. The U.S. population aged 6-17 is apparently 49,466,485, which would put the percentage who end up with gender-dysphoria diagnoses before the age of 18 at 1.02%.
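Here's the same arithmetic spelled out as a quick sanity check; the roughly 42,000-diagnoses-per-year figure is back-calculated from the 1.02% above rather than taken from the linked source:

```python
# Back-of-the-envelope check of the arithmetic above.
# diagnoses_per_year is inferred from the 1.02% result, not from the linked source.
diagnoses_per_year = 42_000        # approximate yearly gender-dysphoria diagnoses, ages 6-17
years_of_exposure = 12             # each cohort spends 12 years in the 6-17 window
population_6_to_17 = 49_466_485    # U.S. population aged 6-17, from the comment

share_diagnosed_by_18 = diagnoses_per_year * years_of_exposure / population_6_to_17
print(f"{share_diagnosed_by_18:.2%}")   # ~1.02%
```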
If Bach did not have fans when he was alive, that seems to have more to do with when he lived than anything; I know Beethoven had fans. Or is he specifically talking about The Well-Tempered Clavier and not including more general fans of Bach's work, or for that matter modern fans of classical music? Because it seems like there are better factors than "badness" to explain the distinction: one or more of whether a work is serialized, whether a work is long, and whether a work is well-suited to additions by fans and other third parties. Factors like those mean there is more to discuss on an ongoing basis, rather than just reading a book or listening to a specific piece, saying it's good, and that's it. Notice how elsewhere he has to group together "Japanese kiddie-cartoons" - because anime and manga are mostly a lot of different creator-written works, rather than a handful of continually reused IPs, most individual anime don't have a fandom, or only have a miniature fandom/discussion-group in the form of some /a/ and /r/anime threads during the season they air. Anime movies have even less. Similarly, in the era of sci-fi short stories there was a sci-fi fandom, but not fandoms for individual short stories and little for individual novels.
I haven't followed him closely; I mostly just heard about how he has lost so many viewers and subscribers that he did an "Under 800k Subscriber Special" (and previously an "Under 900k" one) in which he apparently blames transphobia and claims Youtube suppresses LGBT content. I also heard that his wife left him. (Supposedly he has said that she decided to leave him for another guy. I know they were already in an open relationship before the trans stuff, but I don't know what role either that or the trans stuff might have played.) A quick search finds this thread discussing the channel decline. Hurting his Youtuber career and losing his wife isn't as bad as Cosmo/Narcissa's degeneration, and he still makes good money on Patreon, so maybe saying "blow up" was going too far. But I remember when he was a big mainstream gaming personality who was incidentally a SJW, and now I get the impression that it has consumed his whole self-conception and narrative of his life while leading him to do self-destructive things.
Tiktok videos regarding something only tell you about the people who cared enough to make or watch Tiktok videos about that thing. Not only is counting Tiktok videos about some specific event much less rigorous than a poll, but it isn't even really trying to do the same thing as an opinion poll. I think the better explanation would be that, as my comment below suggests, there is greater polarization. India has more passionate anti-Hinduism than the vast majority of countries, a Youtube video about Hindu atrocities would presumably do better there than America. That doesn't call into question the statistics saying India is 80% Hindu.
"Immunized" is taking it much too far given how the percentage of teenagers who identify as trans/non-binary/etc. has exploded. And I would guess that, by most measures, their net positions on trans issues are more pro-trans as well. Rather I would say that they are much more polarized due to the increased salience of transgenderism and transgender ideology.
If your contact with the concept of transgenderism is learning that the T in LGBT refers to crossdressers and once hearing a joke about thai ladyboys, you are likely to be tolerant and not care about weirdos doing weirdo things. If instead it is seeing a whole friend group at school trying to convince a member that he's trans/non-binary because he has long hair and isn't masculine enough, or seeing an attention-seeking person go trans and police "misgendering", or encountering the trans part of the online SJW community, or seeing public figures like Cosmo/Narcissa and Jim Sterling blow up their lives, or at least hearing about it in the media with the constant drumbeat of pro-trans rhetoric and with news stories like MTFs competing in women's sports, you are going to approach the issue in a different way.
require gas, which is a fossil fuel – do I need to explain why fossil fuels are bad?
Gas stoves burn gas to produce heat directly. This is dramatically more efficient than burning gas to spin a turbine to produce electricity to send over the electric grid before finally turning it into heat. (Even the couple percent of gas lost to leaks is less than the 6% loss from sending electricity over the grid.) It's not like an electric car, where power plants are much more efficient than a portable gasoline engine (plus there's regenerative braking), so electric cars end up being more efficient overall. Making heat is inherently very efficient because you're not fighting thermodynamics; making electricity isn't. As a result, under the electricity-generation mix currently typical in the U.S., induction stoves cause more CO2 emissions than gas stoves.
https://home.howstuffworks.com/gas-vs-electric-stoves.htm
The clear winner in the energy efficiency battle between gas and electric is gas. It takes about three times as much energy to produce and deliver electricity to your stove. According to the California Energy Commission, a gas stove will cost you less than half as much to operate (provided that you have an electronic ignition--not a pilot light).
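To make the comparison explicit, here is a minimal sketch of the primary-energy arithmetic. All of the efficiency figures are illustrative assumptions of mine (roughly 40% of a gas burner's heat reaching the pan, about 40% fuel-to-electricity conversion at the marginal fossil plant, the 6% grid loss mentioned above, and roughly 85% transfer efficiency for induction); they are not taken from the linked article:

```python
# Primary fuel energy needed to deliver 1 unit of heat to the pan.
# All efficiency figures below are illustrative assumptions, not measured values.
GAS_BURNER_TO_PAN = 0.40           # share of the flame's energy reaching the cookware
PLANT_FUEL_TO_ELECTRICITY = 0.40   # assumed marginal fossil-plant efficiency
GRID_DELIVERY = 0.94               # 6% transmission/distribution loss
INDUCTION_TO_PAN = 0.85            # share of delivered electricity reaching the cookware

fuel_per_unit_heat_gas = 1 / GAS_BURNER_TO_PAN
fuel_per_unit_heat_induction = 1 / (PLANT_FUEL_TO_ELECTRICITY * GRID_DELIVERY * INDUCTION_TO_PAN)

print(f"gas stove:       {fuel_per_unit_heat_gas:.2f} units of fuel per unit of heat")
print(f"induction stove: {fuel_per_unit_heat_induction:.2f} units of fuel per unit of heat")
```

With those assumed numbers the gas stove needs about 2.5 units of fuel per unit of heat delivered versus roughly 3.1 for induction, though the conclusion is sensitive to the assumptions, especially what the marginal generation source actually is.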
Now, maybe the higher CO2 emissions to power induction stoves are worthwhile for whatever indoor air quality benefits there are. And maybe power generation will change so that generating marginal electricity rarely involves spinning up a gas turbine. But remember that stoves don't last forever; if this change doesn't happen for a while, then the induction stove will emit more CO2 over its lifespan regardless. I get the sense that a lot of people are vaguely anti-gas-stove because they assume it causes more CO2 emissions due to directly burning a fossil fuel, even though this is the opposite of the case.
Regarding the indoor air quality aspect, it would be nice if there were a decent literature review of the issue, like Scott's "Much more than you wanted to know" series. As a matter of common sense, it seems like gas stoves must be at least marginally worse. But from what I've read, this doesn't seem dramatic enough to show up in aggregate health outcomes in the more rigorous studies. The main difference is only in terms of nitrogen dioxide and carbon monoxide, not the particulate matter you might expect. Most particulate matter comes from the food, so it's plausible that consistently using a range hood that vents to the outside is actually much more important than gas vs. induction. But it's hard to synthesize the available information into a general sense of how much of an issue it is.
Elections are a bad gauge because, if sufficiently democratic, they are close to being public-opinion polls. When people talk about wokeness being powerful, they usually mean it is disproportionately powerful compared to its popularity (or at least compared to its success, if they think public opinion on something is being driven by dishonest media coverage). It routinely gets institutions to act as if its dictates are universally popular, just the way society has decided things are done nowadays, even when they are unpopular or at least highly controversial. By comparison, essentially nobody talks about how pro-agriculture ideology is influential. When a public opinion poll finds that colleges discriminating against white/asian people is unpopular, but they do it anyway, that isn't cited as evidence that wokeness is weak. Now, it is true that elections have more direct impact than public-opinion polls. But lots of sources of power aren't elections - corporate policies, sympathetic media coverage, unelected government bureaucrats, etc. It didn't take an election for hospitals to ration healthcare based on racial "equity" or for the CDC/ACIP to recommend a COVID-19 vaccine-distribution plan that they estimated would result in thousands of additional deaths so that a larger fraction of those deaths would be white. So long as wokeness holds such a disproportionate influence over unelected institutions, I don't think it makes a lot of sense to assume it is waning just because it is unpopular with the general public and thus sometimes loses elections.
An obvious but unmentioned reason for 4900 people to text the number is that they wanted to see if doing so would provide a continuation of the joke. Texting the message didn't provide an automated response, but it easily could have, and trying it out lets you see if it does. The same way that, for example, when Grand Theft Auto marketers spread around phone numbers of in-game business/characters, the vast majority of the people calling those numbers did so because they wanted to hear a funny answering-machine message, not because they thought they were real.
He mentioned Soros, who is Jewish. Anti-semitic conspiracy theorists on /pol/ also don't like Soros, so complaining about him must mean DeSantis is dogwhistling to them. Unlike when people complain about the Koch brothers or Peter Thiel, which is just expressing justifiable anger at billionaires subverting our democracy.
On a largely unrelated note, it just occurred to me that the whole "vampire harvesting the blood of the young" smear directed at Peter Thiel (for the offense of investing in medical research companies that did longevity research investigating the thing where mice given blood transplants live longer) would 100% have been pattern-matched as "anti-semitic blood libel" if he was Jewish. Somehow I never made that connection before. Here's a list of articles I had saved for those unfamiliar:
Peter Thiel Is Interested in Harvesting the Blood of the Young - Gawker
Billionaire Peter Thiel thinks young people’s blood can keep him young forever - Raw Story
Peter Thiel Isn't the First to Think Young People's Blood Will Make Him Immortal - The Daily Beast
Peter Thiel is Very, Very Interested in Young People's Blood - Inc
The Blood of Young People Won’t Help Peter Thiel Fight Death - Vice
Hey, Silicon Valley: you might not want to inject yourself with the blood of the young just yet - Vox
Peter Thiel Wants to Inject Himself with Young People's Blood - Vanity Fair
Is Peter Thiel a Vampire? - New Republic
Non-state violence has essentially no possibility of indefinitely stopping all AI development worldwide. Even governmental violence stopping it would be incredibly unlikely (it seems politically impossible that governments would treat it with more seriousness than nuclear proliferation and continue doing so for a long period), but terrorists have no chance at all. Terrorists would also be particularly bad at stopping secret government AI development, and AI has made enough of a splash that such a thing seems inevitable even if you shut down all the private research. If at least one team somewhere in the world still develops superintelligence, then what improves the odds of survival is that they do a good enough job and are sufficiently careful that it doesn't wipe out humanity. Terrorism would cause conflict and alienation between AI researchers and people concerned about superintelligent AI, reducing the odds that they take AI risk seriously, making it profoundly counterproductive.
It's like asking why people who are worried about nuclear war don't try to stop it by picking up a gun and attacking the nearest nuclear silo. They're much better off trying to influence the policies of the U.S. and other nuclear states to make nuclear war less likely (a goal the U.S. government shares, even if they think it could be doing a much better job), and having the people you're trying to convince consider you a terrorist threat would be counterproductive to that goal.
What would be accomplished during a "six-month pause" that would make it worth the enormous difficulty of getting that sort of international cooperation, even if the petition had any chance of success at all? Why should people concerned about unaligned AI consider this the best thing to spend their credibility and effort on? It's not like "alignment research" is some separate thing with a clear path forward, where if only we pause the AI training runs we'll have the time for a supercomputer to finish computing the Alignment Solution. Alignment researchers are stumbling around in the dark trying to think of ideas that will eventually help the AI developers when they hit superintelligence. It is far more important to make sure that the first people to create a superintelligence consider "the superintelligence exterminates humanity" a real threat and try to guide their work accordingly, which if anything this interferes with by weakening the alignment-concerned faction within AI research. (The petition also talks about irrelevant and controversial nonsense like misinformation and automation; the last thing we want is alignment bureaucratized into a checklist of requirements for primitive AI while sidelining the real concern, or politicized into a euphemism for left-wing censorship.) Right now the leading AI research organization is run by people who started off trying to help AI alignment; that seems a lot better than the alternative! To quote Microsoft's "Sparks of Artificial General Intelligence: Early experiments with GPT-4" paper:
Equipping LLMs with agency and intrinsic motivations is a fascinating and important direction for future work.
Here is the baseline: if the first people to create superintelligence aren't concerned with alignment, there's a decent chance they will deliberately give it "agency and intrinsic motivations". (Not that I'm saying the Microsoft researchers necessarily would; maybe they only said that because LLMs are so far from superintelligence, but it isn't a promising sign.) Personally I'm inclined to believe that there's no reason a superintelligent AI needs to have goals, which would make "create a Tool AI and then ask it to suggest solutions to alignment" the most promising alignment method. But even if you think otherwise, surely the difference between having superintelligence developed by researchers who take alignment seriously and researchers who think "let's try giving the prospective superintelligence intrinsic motivations and write a paper about what happens!" matters a lot more than whatever "alignment researchers" are going to come up with in 6 months.
Your summary of the Grayson/Quinn conflict of interest is good, and illustrates some of the video's overt misrepresentations, but I'd note there is also dishonesty through omission. GG uncovered a lot of cases of game journalists engaging in undisclosed conflicts of interest, alongside other complaints like sensationalism and ideological witch-hunts against developers. For instance, very early on they discovered that Kotaku's Patricia Hernandez had repeatedly given coverage to both her friend and former roommate Anna Anthropy and to her former girlfriend Christine Love. Hernandez is now Kotaku's editor-in-chief. This image was circulating days before the Gamergate hashtag was even coined. (The expansion of the scandal beyond Grayson/Quinn is part of why people were eager to jump on the Gamergate hashtag when Baldwin coined it rather than continuing to use "Quinnspiracy", other hashtags were already being brainstormed and various strawpolls posted in the days prior to Baldwin's tweet.)
The articles on Deepfreeze are a decent summary from the GG perspective, with the one titled "Unfair advantage" being the one focused on personal conflicts of interest.
He is an obvious crank (or troll pretending to be a crank) making terrible arguments, but your response is not a good one. I doubt there is anyone here who finds his arguments convincing, but if there were, your post would not be a good reason to think otherwise. There are many people who believe that, for instance, they have lost a family member to the COVID-19 vaccine because he had a stroke months later or something. If we go by a survey from a while back, and assume not all respondents misread the question, many millions of people say they lost a family member to the vaccine (unfortunately I didn't save a link; it might have been posted here or somewhere else like Zvi's blog), implying numbers that are completely insane unless we assume that all the official studies and statistics are outright fake. For that matter, there are plenty of people who will tell you stories like "my father got the vaccine and had a heart attack days later", something which is biologically plausible to attribute to the vaccine, and yet even then the statistics probably work out such that it is a coincidence most of the time. Their anger at vaccination supporters for killing people they loved, though based on far weaker evidence, is in many cases just as sincere and wholehearted as yours. Those personal experiences and feelings aren't a convincing argument when they use them, and they don't become a convincing argument just because you are supporting a position that happens to actually be true.
Yes. Wars of annexation materially strengthen aggressors and incentivize further war; they are a sort of geopolitical positive feedback loop. In the modern era, by contrast, going to war generally makes you weaker and poorer, less capable of waging war rather than more. Sometimes countries are willing to do it anyway, and of course there is gaming of the boundaries, but keeping the feedback loop negative rather than positive helps stop this getting too out of hand. How harmful (or beneficial) the war is to the country being invaded isn't really relevant to that; the important thing is that it be harmful to the aggressor. For instance, the invasion of Iraq imposed a cost rather than a benefit on the U.S. (as even most of its proponents knew it would), so it didn't result in a series of further U.S. invasions, but the Russian invasion of Crimea was sufficiently beneficial that it chain-reacted into the invasion of the rest of Ukraine.
Wars must have no winners, only losers, and to ensure this remains the case, countries are willing to take losses themselves so that attempted wars of annexation leave the aggressor indisputably worse off. Complaining that countries are "irrationally" willing to harm themselves for the sake of deterrence is deeply silly; it's basic game theory and central to the logic of modern war. If Russia thought countries wouldn't really be willing to harm themselves for no benefit besides vague principles of game-theoretic value, that's just another way that Russia's status as a low-trust society has made it weaker.
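A toy expected-value calculation shows the game-theoretic logic; all payoffs and probabilities here are invented purely for illustration:

```python
# Toy deterrence model; every number below is made up for illustration.
GAIN_FROM_ANNEXATION = 10      # aggressor's payoff if the invasion goes unanswered
COST_OF_BEING_RESISTED = -15   # aggressor's payoff if defenders impose costs on it
P_RESPONSE_CREDIBLE = 0.9      # chance defenders follow through when credibly committed
P_RESPONSE_NOT_CREDIBLE = 0.2  # chance they follow through without that commitment

def expected_payoff_of_invading(p_response):
    return p_response * COST_OF_BEING_RESISTED + (1 - p_response) * GAIN_FROM_ANNEXATION

print("invade vs. committed defenders:  ", expected_payoff_of_invading(P_RESPONSE_CREDIBLE))
print("invade vs. uncommitted defenders:", expected_payoff_of_invading(P_RESPONSE_NOT_CREDIBLE))
# Negative in the first case, positive in the second: committing to take losses
# is what makes "not invading" the aggressor's best option.
```

The commitment to respond is costly for the defenders if it is ever tested, but having it credibly in place is what makes invasion a losing proposition in expectation.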