SnapDragon

0 followers   follows 0 users
joined 2022 October 10 20:44:11 UTC
User ID: 1550
Verified Email

No bio...

Note that in this case, Rotten Tomatoes shows you the all audience score, not a verified score. That movie was also the subject of woke controversy due to race and gender swapping a bunch of characters, so a lot of the negative scores probably come from people who were unhappy about those changes. This isn't an apples-to-apples comparison.

Oh, good catch! I didn't even notice that (showing how insidious that "verified audience" marker is). For some reason I thought Peter Pan & Wendy was another theatrical release, but apparently it went straight to Disney+, so there are no verified reviews. So it's a lot less comparable to TLM than I thought.

Yup, that's my dilemma. The whole point of these aggregation sites was to try to get a more objective measure of how good a movie actually is. But there's no paper trail for any of these sites' scores (it's not just RT), and it's become common practice to fudge the numbers with a special "algorithm". I guess I mostly just accepted this before, but TLM is such a ridiculous outlier that I'm starting to doubt whether there's any useful signal left.

Not sure if this has been mentioned before, but on the topic of The Little Mermaid, I am extremely confused by the Rotten Tomatoes score. The "audience score" has been fixed at 95% since launch, which is insanely high. The critics score is a more-believable 67%. Note that the original 1989 cartoon - one of my favorite movies growing up, a gorgeous movie that kickstarted an era of Disney masterpieces - only has an 88% audience score. Also, Peter Pan & Wendy, another woke remake coming out at almost the same time, has an audience score of 11%. And recall that the first time Rotten Tomatoes changed their aggregation algorithm was actually in response to Captain Marvel's "review bombing", another important and controversial Disney movie.

If you click through to the "all audiences" score, it's in the 50% range. And metacritic's audience score is 2.2 out of 10. The justification I've heard in leftist spaces is that the movie's getting review bombed by people who haven't seen it. And there certainly is a wave of hatred for this movie (including from me, because the woke plot changes sound dreadful). How plausible is this? I haven't seen the movie myself, so it's possible that it actually is decent enough for the not-terminally-online normies to enjoy. But even using that explanation, how is 95% possible?

Right now I only see two possibilities:

  • Rotten Tomatoes has stopped caring about their long-term credibility, and they're happy to put their finger on the scale in a RIDICULOUSLY obvious way for movies that are important to the Hollywood machine. I should stop trusting them completely and go to Metacritic.

  • People like me who have become super sensitive to wokeness already knew they'd hate the movie and didn't see it; for the "verified" audience, TLM is actually VERY enjoyable, and the 95% rating is real.

But, to be honest, I would have put a low prior on BOTH of these possibilities before TLM came out. Is there a third that I'm missing?

Yeah, Keanu Reeves (John Wick) is 58, Vin Diesel (Fast X) is 55 and Tom Cruise (MI) is 60. These are fun action franchises, but where are the fun action franchises with up-and-comers who are 20-30? I sure hope Ezra Miller isn't representative of the future of Hollywood "stars"...

Yeah, there's a very relevant xkcd. There are thousands of times more cameras at hand to the general public than there were 50 years ago. If 9/11 happened today we'd have hundreds of videos of the FIRST plane impact - which happened with only seconds of warning - instead of just one. Only 12 years later, there was a huge amount of footage of the Chelyabinsk meteor. Even tsunamis - relatively common events that come with more warning - hadn't really been captured on video much before Japan's in 2011.

Real phenomena, even rare ones, get easier and easier to find footage of as technology increases. "Aliens flitting around the skies in spaceships" does not fit this profile at all.

I remember doing a double-take when I realized that Dudley's actor, of all people, was playing one of the love interests in The Queen's Gambit. Amazing that Harry Melling could successfully grow out of a role like that. (Incidentally, The Queen's Gambit is another contemporary example of a female-led story that everyone loved.)

Oh wow, that article has yet another brilliant bit of statistical legerdemain.

The report found that suicides are responsible for half of all violent deaths in men and 71% of violent deaths in women.

The second number is higher, so clearly it's Women Most Affected! ...Except, of course, that the base rates are completely different. Slicing the data this way means that the more men die violently, the more victimized women look by this metric.
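To see how the trick works, here's a toy calculation with numbers I invented purely for illustration (they're not the real figures, just shaped to make the point): men can have three times as many suicides as women and still come out with the lower "share of violent deaths" number, simply because they also suffer far more homicides.

```python
# Invented illustrative numbers, NOT real statistics.
male_suicides, male_other_violent_deaths = 30_000, 30_000
female_suicides, female_other_violent_deaths = 10_000, 4_000

male_share = male_suicides / (male_suicides + male_other_violent_deaths)          # 50%
female_share = female_suicides / (female_suicides + female_other_violent_deaths)  # ~71%

print(f"Suicides as a share of violent deaths: men {male_share:.0%}, women {female_share:.0%}")
print(f"Actual suicide counts: men {male_suicides:,}, women {female_suicides:,}")
```

The headline statistic is driven entirely by the denominator.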

And when I see articles like this, I can't help but wonder. I usually assume that journalists are stupid rather than actively malicious. But the author had to have done some research to get the stats she's playing with, right? I'm sure she's living in a bubble, but even so it seems hard to imagine that she's never encountered the fact that suicide rates are actually higher for men. The article so carefully tap-dances around this fact, it seems like it has to be a purposeful omission ... which is just so damn evil. It makes me sad.

Indeed, journalistic standards are loose enough that absolutely anything can be framed to make men look inferior or women victimized.

  • "Men are discriminated against in college admission" -> "Men aren't applying themselves in school"

  • "Women are saved first in emergencies" -> "Men treat women as weak and lacking agency"

  • "Women are admired for their beauty" -> "Women are objectified"

  • "Men commit violence more" -> "Men commit violence more" (no dissonance here!)

  • "Men are more often the victims of violence" -> "Women feel less safe than ever, study finds"

  • "Men die in wars" -> "Women lose their fathers, husbands, sons"

  • "Men commit suicide more" -> "Women attempt suicide more"

  • "Men literally die younger" -> "Women are forced to pay more for health insurance" (honestly, I've admired the twisted brilliance of this framing ever since the Obamacare debates)

I'll just point out that you - not me - used the phrase "what you want to hear". Note that the thing you most "want to hear" is useful information. Please, you're on The Motte, just try to think logically about this rather than believing what you really hope to be true.

Just off the top of my head, suppose you have 5 suspects and you need the address of their base, and they're not talking. You torture them and they give you some addresses. Do you say "welp, a lot of these are false positives, shucks, into the garbage with you"? No, of course not. You can surveil all the addresses, you can correlate their stories, you can torture them more if it doesn't match up, etc. I'm making up an armchair scenario which doesn't come close to capturing the complexity of real-world intelligence work, but that's ok, because I'm not the one trying to make a sweeping claim. All it takes is one situation where torture works for your motivated reasoning to fall apart.

I'm sorry: torture is a horrible practice that we shouldn't do, but it also works. It just does. If I had info I didn't want to reveal, it would work on me. It would work on you. This isn't a political question. It's a simple fact, and it's one that the average person just knows, because they haven't heard the "clever" contrarian arguments that let you talk yourself out of common sense.

I think your hypothetical scenarios are a little mixed up. You mention confessions in your first case, because (yes, of course) confessions gained under torture aren't legitimate. Which has nothing to do with the War on Terror argument, or the second part where you mention finding an IED cache. That's information gathering, and that's the general case.

Note that:

  • All information you get from a suspect, voluntary, coerced, or via torture, is potentially a lie. Pretending that torture is different in this way is special pleading.

  • You invented a highly contrived scenario to show the worst-case consequences of believing a lie. There are dozens of ways of checking and acting on information that are less vivid.

  • The main difference with torture is that for some suspects it's the only way of getting useful information. It sucks, but this is the Universe we live in.

As for the "ticking time bomb" thought experiment, that's not highlighting one special example where torture works. That's just showing where the torture-vs-not distinction (the ethical conundrum, like you said) becomes sharpest. Most people have some threshold X at which saving X lives is worth torturing one person. It arguably shouldn't make a difference whether those lives are direct (a bomb in a city) or indirect (stopping a huge attack 2 years down the line), but we're human, so it does.

I very much agree with his assertion in the second article that analysts often try to avoid mentioning (or even thinking about) tradeoffs in political discussions, even though that's almost always how the real world works. Being honest about tradeoffs is a good strategy for correctly comprehending the world, but not for "winning" arguments.

Somewhat related to the civil rights violations of prisoners, I remember the arguments about Guantanamo back in the War on Terror days. It was common to hear politicians and pundits - in full seriousness - make the claim that "torture doesn't work anyway." I hated the fact that, post-9/11, it was politically impossible to say "torture is against our values, so we won't do it even though this makes our anti-terror efforts less effective and costs lives." Despite the fact that (I suspect) most people would agree privately with this statement...

Annoyingly, this paper references the Doomsday Argument, which is completely wrong (it does mention some of the arguments against it, but that's like mentioning the Flat Earth Hypothesis and then saying "some people disagree"). I went on a longer rant about the Doomsday Argument here if you're curious.

The central question is interesting, though. Basically, if you believe (sigh) Yudkowsky, then any civilization almost certainly turns into a Universe-devouring paperclip maximizer, taking control of everything in its future light cone. This is different than the normal Great Filter idea, which would (perhaps) destroy civilizations without propagating outwards. I was originally going to post that the Fermi paradox is thus (weak) evidence against Yuddism, because the fact that we're not dead yet means either a) civilizations are very rare, or b) Yudkowsky is wrong. So if you find evidence that civilizations should be more common, that's also evidence against Yuddism.

But on second read, I realized that I may be wrong about this if you apply the anthropic argument. If Yuddism is true, then only civilizations that are very early to develop in their region of the Universe will exist. Being in a privileged position, they'll see a Universe that is less populated than they'd expect. This means that evidence that civilizations should be more common is actually evidence FOR Yuddism.

Kind of funny that the anthropic argument flips this prediction on its head. I'm probably still getting something subtly wrong here. :)
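For what it's worth, here's the toy Bayes update I have in my head (every number is invented; it's only meant to show the direction of the flip, not any real probabilities):

```python
# Toy Bayes sketch of the anthropic "flip" described above. All numbers are made up.
# Hypotheses:
#   rare   - civilizations are rare
#   quiet  - civilizations are common, but none expand to sterilize their light cone
#   grabby - civilizations are common AND each becomes an expanding maximizer (the Yuddist picture)
# Observation E: we exist and see an empty, uncolonized sky. Under "grabby", the only
# observers who exist at all are the very early ones, so conditional on existing,
# an empty sky is exactly what they see.

def posterior(priors, likelihoods):
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: round(p / total, 2) for h, p in unnorm.items()}

likelihood_E = {"rare": 0.9, "quiet": 0.1, "grabby": 0.9}

# Before any evidence about how common life should be:
print(posterior({"rare": 0.6, "quiet": 0.2, "grabby": 0.2}, likelihood_E))
# {'rare': 0.73, 'quiet': 0.03, 'grabby': 0.24}

# After independent evidence that civilizations should be common
# (prior mass shifts from "rare" to the two "common" hypotheses):
print(posterior({"rare": 0.2, "quiet": 0.4, "grabby": 0.4}, likelihood_E))
# {'rare': 0.31, 'quiet': 0.07, 'grabby': 0.62}
```

The empty sky crushes "common but quiet", so any evidence that pushes probability mass toward "civilizations are common" ends up flowing mostly to the paperclip-maximizer hypothesis. That's the flip.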

Does anyone around him tell him (in a friendly way) to maybe start practicing some Methods of Rationality? Question a couple of his assumptions, be amenable to updating based on new evidence? Because that would also be nice.

Eh. I gave him some respect back when he was simply arguing that timelines could be short and the consequences of being wrong could be disastrous, so we should be spending more resources on alignment. This was a correct if not particularly hard argument to make (note that he certainly was not the one who invented AI Safety, despite his hallucinatory claim in "List of Lethalities"), but he did a good job popularizing it.

Then he wrote his April Fool's post and it's all been downhill from there. Now he's an utter embarrassment, and frankly I try my best not to talk about him for the same reason I'd prefer that media outlets stop naming school shooters. The less exposure he gets, the better off we all are.

BTW, as for his "conceptualization of intelligence", it went beyond the tautological "generalized reasoning power" that is, um, kind of the definition. He strongly pushed the Orthogonality Thesis (one layer of the tower of assumptions his vision of the future is based around), which is that the space of possible intelligences is vast and AGIs are likely to be completely alien to us, with no hope of mutual understanding. Which is at least a non-trivial claim, but is not doing so hot in the age of LLMs.

Technically Bing was using it before then, but good point. It's insane how fast things are progressing.

"Long ago"? ChatGPT is 5 months old and GPT4 is 3 months old. We're not talking about a technology long past maturity, here. There's plenty of room to experiment with how to get better results via prompting.

Personally, I use GPT4 a lot for my programming work (both coding and design), and it still gets things wrong and occasionally hallucinates, but it's definitely far better than GPT3.5. Also, as mentioned above, GPT4 can often correct itself. In fact, I've had cases where it says something I don't quite understand, I ask for more details, and it freely admits that it was wrong ("apologies for the confusion"). That's not perfect, but still better than if it doubles down and continues to insist on something false.

I'm still getting the hang of it, like everyone else. But an oracle whose work I need to check is still a huge productivity boon for me. I wouldn't be surprised if the same is true in the medical industry.

Good points. I don't think we really disagree, then. I happen to really enjoy entertainment that takes hundreds of people to produce (AAA movies and games), and there just wouldn't really be any way for those to exist without IP. But music and fiction aren't like that, and it would indeed be interesting if there were no limits on fanfic. (Would people still gravitate to the original author - or their descendants - to add the "canonical" imprimatur to particular stories, a la Cursed Child? Or would the "oral history" aspect win out? I wonder.)

Yeah, IP law is almost certainly not perfectly optimized for its intended function. Like so many other laws, it's a mess. It doesn't help that we allow corporations like Disney to have outsized influence on the legal process. If copyrights lasted for a flat 20 years (like patents) I think it'd still do fine at incentivizing creation. (And, more generally, if we had a political system that incentivized simple and straightforward laws, that'd be nice too...)

Really? Name the centuries-old historical counterpart to movies on DVD, music on CD, videogames, software suites, drug companies, ... I could go on. Sure, people used to go to live plays and concerts. Extremely rich patrons used to personally fund the top 0.1% of scientists and musicians. It was not the same.

Yup, and "he" was also commonly used as a gender-neutral pronoun. But this subtle linguistic point has the unfortunate quality of looking problematic, so it attracts ignorant activists. I haven't seen the word "niggardly" used in a long time, either - I suspect that even people who know what it means self-censor, because it's just not worth attracting that kind of attention when it's low-cost to just use a different word. Thus language drifts on...

Maybe IP can be justified because it brings value by incentivizing creation?

Um, yes? This is literally the entire and only reason IP exists, so the fact that you have it as one minor side point in your post suggests you've never actually thought seriously about this. A world without IP is a world without professional entertainment, software, or (non-academic) research. Capitalism doesn't deny you the free stuff you feel you richly deserve... it's what enables that stuff to exist in the first place.

Yudkowsky's ideas are repulsive because the "father of rationality" isn't applying any rationality at all. He claims absolute certainty over an unknowable domain. He makes no testable predictions. He never updates his stance based on new information (as if Yud circa 2013 already knew exactly what 2023 AI would look like, but didn't deign to tell us). Is there a single example of Yudkowsky admitting he got something wrong about AI safety (except in the thousand-Stalins sense of "things are even worse than I thought")?

In a post-April-Fool's-post world I have no idea why people still listen to this guy.

I am already getting tremendous value out of GPT4 in my work as a programmer. Even if the technology stops here, it will change my life. I have still never ridden in an AV. I reject your analogy, and your conclusion, completely.

The idea of accepting election results was uncontroversial on both sides until Trump talked. The benefits of polarization.

This seems like a strange claim to me. Would you classify the two-year investigation of "Russian interference" by a Special Prosecutor as "accepting election results"? "Not My President"? Hillary - the actual losing candidate - calling Trump an illegitimate President? Sadly, the civilized norms had already been well eroded by 2020.

Experimenting with giving ChatGPT-4 a more structured memory is easy enough to do that individuals are trying it out: https://youtube.com/watch?v=YXQ6OKSvzfc

I find his estimate of AGI-in-18-months a little optimistic, but I can't completely rule out the possibility that the "hard part" of AGI is already present in these LLMs and the remainder is just giving them a few more cognitive tools. We're already so far down the rabbit hole.
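If anyone wants to play with the general idea without watching the video, the simplest version is just a loop that keeps a rolling summary and feeds it back in as context. Here's a rough sketch (my own toy version, not the setup from the video; it assumes the openai Python package as it exists today, with an API key already configured):

```python
# Toy sketch of "LLM with external memory": keep a rolling summary of the
# conversation and prepend it to every request. Not the video's actual setup.
import openai  # assumes openai.api_key is already set

MODEL = "gpt-4"
memory = ""  # the "structured memory" - here just a running text summary

def chat(user_message: str) -> str:
    global memory
    reply = openai.ChatCompletion.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": f"Long-term memory of this conversation:\n{memory}"},
            {"role": "user", "content": user_message},
        ],
    )["choices"][0]["message"]["content"]

    # Ask the model itself to fold the new exchange back into the stored memory.
    memory = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                f"Current memory:\n{memory}\n\n"
                f"New exchange:\nUser: {user_message}\nAssistant: {reply}\n\n"
                "Rewrite the memory to keep anything worth remembering, in under 200 words."
            ),
        }],
    )["choices"][0]["message"]["content"]
    return reply
```

A real version would use embeddings and a proper store instead of one blob of text, but even this crude loop makes the point: the "memory" lives outside the model, and the model itself maintains it.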