07mk

1 follower · follows 0 users
joined 2022 September 06 15:35:57 UTC
Verified Email
User ID: 868


Problem is that a shove isn't really an escalation to deadly force. Just because you end up on the ground you're not really able to say "oh I thought he was going to kill me."

I don't know how the law sees it, but if I'm standing on a hard surface like a sidewalk or even asphalt, I would consider an unprompted shove an escalation to deadly force. A simple fall that results in your head smacking the ground can be fatal, and often is, and someone shoving you with intent to disturb you is someone who is clearly fine with a very high probability of you falling over, with a high likelihood of you lacking enough control to protect your head during the fall.

Zero Time Dilemma is certainly the weakest of the 3, and it's not close. And I didn't even find most of the scifi/philosophizing to be interesting in 999, especially compared to ZTD. Yet the characters, presentation, and gameplay all were far better in the former (and better still in VLR IMHO), to the extent that I'd say 999 is by far the better game. So I'd say you're not missing out on a whole lot.

When you see Blues/progressives/women in jubilation over the killing of Charlie Kirk, this is what and why they're celebrating: a strike against the people who cheer on, encourage, and legitimize killing bad sons [and by extension, them]. (The fact that it involved killing someone else's son is not relevant.)

I know that logical thinking is explicitly and openly anathema to a lot of these people, and you haven't claimed otherwise, but how does this square with the fact that jubilation over the killing of Kirk is also an act of cheering on, encouraging, and legitimizing the killing of a son that has been judged to be bad? Doesn't that point to just different ideas of what constitutes "bad" rather than a rejection of the notion that it is possible for a son to be bad enough to deserve killing? That is, things judged to be "bad" by conservative, traditional, "common sense" morality aren't actually bad, while things that they judge to be "bad" by their own personal shiny new progressive morality are actually bad, and sons who are actually bad deserve killing, not sons that have merely been judged by traditional morality.

at least as useful as the equivalent Wikipedia

Not sure what you mean by useful in this case.

I don't mean anything specific, merely the fact that any tool, like an encyclopedia, exists to be used for accomplishing some goal, and, as such, its value comes from its being useful. There are a trillion different metrics that can apply to any given case, but ultimately, it all comes down to, "Does the user find the tool useful for accomplishing the user's goals?"

Wikipedia has some level of usefulness, as determined by people who use it, including, presumably, yourself. My question was, if, according to whatever metric you personally use to determine if some encyclopedia is useful for the goals you want to accomplish, Grokipedia consistently (or possibly even strictly) outperformed Wikipedia, would you consider the project of Grokipedia to be worthwhile?

Your answer tells me yes, that your criticism is based around the usefulness (or lack thereof) of the text, rather than around how that text was produced. Which satisfies my curiosity.

No idea what Elon is hoping to accomplish with this but I'm going to call him a huge dum dum for releasing this nonsense.

I'm curious, since most/all of your complaints about Grokipedia seem to be about its current (in)ability to consistently produce useful text for an encyclopedia entry: if xAI, through some sort of engineering ingenuity, were able to improve Grokipedia by the time it hits version 1.0, using only modern and plausible near-future AI tech (i.e. almost certainly something LLM-based), such that any given text produced for an entry is provably at least as useful as the equivalent Wikipedia (or other reference of your choice) text, as measured by any and all metrics you personally find meaningful in this context, without resorting to fuzzy copy-paste or summarizing of the existing Wiki (or other reference) text, would you see this endeavor by Elon as worthwhile?

If not, then what would Grokipedia have to accomplish, or what would its underlying technology have to be based on (or at least not be based on), for you to consider it to be a useful AI-based encyclopedia?

I just want to say, given all the talk about the Sleeping Beauty Problem here, I think the ~10 year old video game Zero Time Dilemma, which is where I learned of it, might be up the alleys of many people here. It's the 3rd game in a series, with the 2nd one, Virtue's Last Reward, being focused around the prisoner's dilemma. All 3 are escape-room games with anime-style art and voiced visual novel cut scenes, with the scenarios being Saw-ish where characters awaken trapped in a death game.

Indeed, it is. And by many people's lights, including mine, basically every major religion is isomorphic to a malicious cult. That's a completely irrelevant point to the one that's being made in that comic, though.

I've always taken that pithy line in the comic to be making the point that someone who lacks faith in the religion and uses the belief as a tool to manipulate others into doing what he wants is someone who likely doesn't understand the thinking of someone of the faith, to such an extent that his arguments based on the religion are faulty. In a Dunning–Kruger way, someone who believes he knows enough about a religion he has no faith in to manipulate believers into doing things based on their faith in it is someone who doesn't understand what he doesn't understand.

What further AI development would avoid is including a record that no one really cares about in prime real estate within the article.

For something like this, I don't think any reasoning would be needed, or any significant further developments in AI. I don't see why simple reinforcement learning from human feedback wouldn't work. Just have a bunch of generated articles judged on the many factors that go into how well written an encyclopedia entry is, including good use of prime real estate to provide information that would actually be interesting to the typical person looking up the entry rather than throwaway trivia. Of course, this would have to be tested empirically, but I don't think we've seen indications that RLHF is incapable of eliciting such behavior from an LLM.
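To make that concrete, here's a toy sketch of the kind of preference signal such a setup might use. Everything here (the factor names, weights, and scores) is made up for illustration, not anything xAI has described: human raters judge article pairs, and a Bradley–Terry-style model turns the score gap into a preference probability that can serve as a reward signal.

```python
import math

def article_score(factors, weights):
    """Weighted sum of judged quality factors (e.g. lede relevance)."""
    return sum(weights[k] * v for k, v in factors.items())

def preference_prob(score_a, score_b):
    """P(raters prefer A over B) under a Bradley-Terry model."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

# Hypothetical factor judgments on a 0-1 scale from human raters.
weights = {"lede_relevance": 2.0, "accuracy": 3.0, "trivia_penalty": -1.5}
good = {"lede_relevance": 0.9, "accuracy": 0.8, "trivia_penalty": 0.1}
bad  = {"lede_relevance": 0.2, "accuracy": 0.8, "trivia_penalty": 0.9}

p = preference_prob(article_score(good, weights), article_score(bad, weights))
# p > 0.5: the model agrees with raters that `good` uses its lede better.
```

In an actual RLHF pipeline the hand-picked weights would be replaced by a learned reward model, but the shape of the signal (pairwise human judgments on "which entry reads better") is the same.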

But when and how do you sound the alarm when a dictator is slowly installing an authoritarian regime over a country? American leftists warned everyone against this from day one, with poor results. Alarm fatigue set in, people became habituated to the steady erosion of democratic norms because there wasn't a single act to push them over the edge, just a slowly boiling of the frog of democracy.

Alarm fatigue setting in wasn't a force of nature. It was created by the behavior of the American leftists observing how their alarm wasn't getting the alarmed response they wanted and then doubling down on the alarm in the apparent belief that the level of alarm of the response would be proportional to the level of alarm that they're raising. This, of course, led to a vicious cycle where they would keep doubling down on the alarm, which would further reduce their credibility, which would further lead to less alarmed responses, which would further lead to American leftists doubling down on the alarm, etc. As best as I can tell, we're still in this cycle.

I'm not sure how to break the cycle, but to prevent the cycle, presuming for the sake of argument that Trump is a dictator who had been slowly installing an authoritarian regime in the USA since his first presidency in 2017, I think preventing alarm fatigue by carefully calibrating the alarm raised to be exactly appropriate to Trump's behaviors would have done it, such that the odds of Trump succeeding in his quest to install authoritarianism in the USA with himself as dictator would be significantly lower now in this alternate universe. Unfortunately, we don't live in this alternate universe, but fortunately, we don't need to believe the presumption that we took for the sake of argument.

I agree with this.

After seeing just the initial video, I was probably around 51-60% sure that he used a shock collar.

After his attempt at explaining away the collar the next day, it shot up to 95-99%. Without taking into account the fact that all analysis of the video shows that the collar he showed is consistent with a shock collar with its removable prongs removed and then taped over, and is not known to be consistent with any vibrating-but-not-shocking collar as he claimed, the simple fact that he presented the collar the way he did, briefly showing it in his hand, with huge chunks of it covered by his fingers, barely holding it still for more than 0.3 seconds before taking it away from camera view, was enough. Someone who's been on camera as much as Piker knows how things look on screen, and if he were genuinely motivated to reveal the honest truth that he truly was not using a shock collar, he would have shown the collar in a different manner: by holding it by part of the strap and slowly rotating it around in front of the camera after verifying its focus, while making sure there was minimal movement besides the rotation, so that every part of the collar could be seen clearly. And he would have done this during the stream in which he was accused, not on the next stream (the fact that a multimillionaire like him didn't find, in that time, a decoy non-shock collar that looked similar to the shock collar when worn by a dog is curious - either hubris or just the limits of physical reality).

And, of course, this was also after he and/or his followers claimed that the yelp was caused by the dog clipping its nail on the bed - something quite possible, but also something quite non-evident in the video. This claim was memory-holed basically within a day, replaced with the "it's a vibration collar, not shock collar" claim.

This sort of behavior is consistent with someone who believes he was caught shocking his dog and highly inconsistent with someone who believes he was falsely accused of shocking his dog. If Piker believes that he was caught shocking his dog, then I believe it too.

I'll also add that, given how easy it is to look up and see the primary sources for oneself, anyone who's defaulting to ignorance and just listening to what people on various "sides" are telling them to think is someone I believe is motivated to remain ignorant for fear of finding out the truth (or just someone who's not interested in it).

59% to 56% doesn't measure partisanship, though. The significance of the 59% in

The non-partisan takeaway from the poll, so far as I can tell, is that "The United States should recognize Palestine as a country" is close to non-partisan, with only 53% of Republicans opposed and 58% and 59% agreement from "Other" and "All adults," respectively.

isn't that 59% of "all adults" agree with that statement, it's that the difference between the % of "all adults" and the % of Republicans is small (not trivially so, but in this context, I think 53% to 59% would generally be considered significant but small). So, to say that gun control is "less partisan" than this because it only has 56% support would require also providing information that shows that Republicans and Democrats both cluster pretty closely around 56% support (though what counts as "small" becomes even more arguable when crossing the 50% threshold), more closely than the 53% to 59% difference in the Palestine example.

quiet_NaN seems to be making the same point that VoxelVexillologist does above, that "non-partisan" doesn't just mean "not partisan" but has an additional, extended meaning that has little to do with partisanship, meaning "generally agreed upon."

Is the woke left actually arguing that he is innocent?

Immediately after the murder blew up on social media (which IIRC was a few weeks after it happened), there was some quote going around, from some Dem politician or commentator IIRC, saying that, judging by his actions in this murder, Brown was clearly hurting and that we have a responsibility to understand the circumstances that led the man to such a state. Most likely I got some details wrong in that summary.

As expected, this was spread around widely in right-wing media, while it was ignored by the left. As best I can tell, this framing of Brown as a victim of circumstance was never popular or common among the woke left. However, because of how epistemics work according to woke-left ideology, no one on the woke left had grounds to challenge this notion that Brown was a victim of circumstance without also getting punished by/expelled from the woke left.

It's quite unfortunate because on twitter, more and more idiots have taken to posting screenshots of the Google "AI summary" which is just slop. I'm sure that if the chatgpt browser catches on, it will lead to more proliferation of this factually unreliable slop.

This pattern has been one of the most worrisome things for me about the AI revolution we're in the midst of. I think AI summaries with citations are extremely useful, one of the most impactful use-cases of modern LLMs in my everyday life, and that's because I assume that anything the AI writes is a hallucination, which might coincidentally be correct, but which I need to verify by following up on the citation. If it's anything where I care about the veracity, this is what I do.

Yet my social media feed indicates that a great number of people use AI summaries as-is, trusting them enough to present naked LLM-produced results as if they're anything more than strings of letters (tokens) put together that might be useful. I'd want a norm to develop where people who present information like this become as mocked as someone citing World News Daily, but I'm afraid there's nothing I can do to make that happen.

I just want to say, this comment describes almost exactly how I feel about HBD. I see the progressive/leftist/liberal principles I subscribe to and try to follow as being completely orthogonal to whether HBD is true or not. But whether HBD is true does heavily influence how we would go about accomplishing our goals. Which is why I want my side to openly accept HBD as being possible and begin investigating it using actual science. Because if we actually want to accomplish our goals, then we need to get as accurate and precise a map of the landscape as possible. How true HBD is and to what extent it influences our society are things that we need to actually investigate, because right now, it's been declared by fiat that they're False and 0 respectively, and our strategies for achieving our goals using this faith have left something to be desired.

Rather than "replaced," I see it more as "flattened & equated." In the 00s, when you called someone a White Supremacist, it meant something more than just being the type of "racist" who might laugh at some offensive joke or something. In the 10s - far before 2021, by my observation - White Supremacist became one of many "correct" terms to refer to the latter type of person.

This is because it was around then that the whole "racism is prejudice plus power" definition broke out into the mainstream, which put forth that racism wasn't merely treating an individual unfairly on the basis of their race, it was being part of a structure that oppressed black people and other people of color (specifying whites and white-adjacents as not capable of being subject to racism). As such, all racism was declared a part of White Supremacy, and thus some random 4 year old with no understanding of race or racial history who shows any distaste for anyone with dark skin is exactly as much of a White Supremacist as Nathan Bedford Forrest. And Forrest was exactly as racist as that 4 year old, no more, no less.

Around the same time, I saw the same thing happening with "sexism" and "misogyny." In the 00s and before, the latter term was understood to mean someone with a true, unambiguous antipathy for women as women, and "misandry" as the reverse. But because "sexism" was declared a part of being of the misogynistic patriarchy, it was deemed that some 4 year old saying "girls are icky" is exactly as much of a misogynist as Andrew Tate. And Tate is exactly as sexist as that 4 year old, no more, no less.

30 years ago I may have done my fair share of "noticing" but dismissed it without a community of noted race scientists like the Motte to further radicalize me. It seems obvious to me that while "haha just joking" extremism doesn't literally mean the jokers hold those specific beliefs in earnest, it does meaningfully shift the Overton Window and creates a space where serious discussion of previously taboo beliefs can blend with the jokes. If you believe that White Nationalism and Antisemitism are very evil then it is reasonable IMO to be concerned about these jokes and want to stamp them out.

I think this is an eminently reasonable position. However, I disagree with it, and I have another position which I consider just as reasonable, but more convincing. Which is that, without overwhelming tyrannical force, no one individual, organization, or even side can control the Overton Window. Despite the recent performance by the modern left, such tyrannical force just isn't viable in America, certainly not in the long run (except maybe in the really long run where America as we know it doesn't exist anymore). As such, we can't rely on our ability to keep Nazism out of the Overton window; so we should have ample protections against it when it does enter the Overton window, so that it doesn't go from "within the Overton window" to "ruling us by convincing enough people."

And I see no better way to prepare such defenses than analyzing and practicing against the best, strongest, most well-developed and convincing versions of their arguments, put forth by their smartest, most charismatic proponents. Just like how any professional athlete will tell you that no amount of practice scrimmages against teammates can make up for actual playtime against an opposing team in terms of teaching one's flaws and building resilience and grit. For that to occur, we need these people to argue with each other and with us, so as to better refine their ideas and arguments. This can only happen openly if their ideas are in the Overton window. So I want it in there. Otherwise, I'd be dealing with artificially weak versions of their arguments and/or be ignorant of what they're cooking up outside my view. Leaving me worse prepared for defending against them.

I don't think my reasoning is foolproof or proven beyond a reasonable doubt. And I certainly wouldn't condemn people who want to keep Nazism outside the Overton window as being secret Nazi sympathizers who want to leave society vulnerable for when they've gathered enough power on the margins outside the Overton window to pounce. Because I know that they have their own thinking, a way of thinking that I think is reasonable that makes them believe that they're actually helping to prevent the rise of Nazism in our society in the future. I think they're reasonable and wrong, but it's being wrong that is their crime, not supporting Nazism.

Does anyone play racing games on the PS5? I've had a PS5 for about a year, primarily as a 4K Bluray player, and recently I got the hankering for a good-looking racing game to play on the big TV in the living room, likely caused by me playing FF7 Rebirth & enjoying the Chocobo racing minigame a lot.

On PlayStation, Gran Turismo is the gold standard for racing sims, but I noticed a lot of fans complaining about poor QOL and poor design for the single-player experience in the latest, GT7. Instead, it seems to have gone the way of Live Service Slop, with seasonal events and limited-time cars that you have to buy and such. I enjoyed GT3 a lot on the PS2 back in the day, and GT7 definitely looks beautiful, but I'm not sure I want to spend $35 (on sale right now - normally $70) on a Live Service Slop game.

I also noticed that Ubisoft's The Crew: Motorfest was a big racing game on all the consoles, and I did enjoy playing the demo a bit, but also, I fucking hate open world in a racing game. I want either actual tracks or closed loops set in real-life locales, I don't want to be navigating city blocks and "immersing" myself into the world. Graphically, it doesn't look as good as GT7, but certainly not bad.

I also saw that Forza Horizon 5 was on PS5, but that one's also an open world racing game. I also saw reviews saying that it had terrible progression in single player gameplay, since it gave the player top-of-the-line cars right at the start. Doesn't seem like a bad thing, but then again, the whole Career Mode feel of getting better and better, going to higher and higher level matches, has added enjoyment to my playing of racing games in the past.

Those are the big 3 that I looked into, and none of them seemed to really nail the sweet spot of what I was looking for. If they just graphically upgraded GT3 and added more cars and tracks, I'd probably buy that for $70 easily, but I don't see that in the market today. I was just wondering if anyone has more experience with this and knows any hidden gems or qualities of these 3 games that I missed?

If the choice is between Nazis and the modern left

As some Spartan once allegedly wrote in a message,

If

Even if the "preferrer" intends to try and slow the swing of the pendulum from the left to Nazism once it starts being too Nazi for his taste, at the moment he's all too happy to help it gain momentum.

I think the issue is that, generally (dunno about Southkraut himself), people who genuinely prefer Nazism to modern leftism see modern leftism as having the same sins as Nazism, but worse, or perhaps more dangerous. So if things became too Nazi for their taste, it wouldn't make sense to push for modern leftism, since modern leftism is even further along the spectrum in the direction they don't want to go.

Again, if you believe that someone saying

Bring on the nazis. I'd rather have literal Hitler spread his brain-rot than give the left another day to spread theirs.

means "Hitlerism and Nazism is growing," then your standards of evidence show me just how much this sort of conclusion requires grasping at straws. Preferring literal Hitler brain-rot over leftist brain-rot doesn't mean that the person is either into Hitlerism or Nazism.

It's a great way to establish that these aren't "leftist concern trolls" but conservative aligned names with proven history in right wing spaces.

No it isn't. That Hanania wrote "The Origins of Woke" tells us nothing about his alignment (which does happen to be conservative, though a highly idiosyncratic one), nor does it prove history in right wing spaces. I'm not even sure what the reasoning could be, other than the idea that "woke" is something only used by "woke's" enemies, which is ridiculous, since I was one of the very people who self-identified as "woke" when that term was getting mainstream popularity to describe this ideology (though I admit this was right around when I started more fully rejecting such illiberal ideas).

Let me second Southkraut's comment and say that, if this serves as evidence for you, then this comment of yours along with the rest of your comments on this thread have convinced me more than ever that Nazism being a problem in the right is basically entirely the invention of motivated reasoning by their political enemies. This is due to seeing the type of reasoning that you employ that leads you to such a conclusion.

Well, according to much of the left, Nazis are notorious for being friendly with and accepting anyone into their group, whether they be black, brown, yellow, Jewish, atheist, etc.

But in the long run, being a sex worker for an AI billionaire with a human fetish might be the only paid profession left, and that will obviously not scale to billions of people.

I feel like, if we're far along in the scifi AI future where the oldest profession becomes the final profession, then this is likely to scale not just to billions but trillions, and there would be plenty of incentives by these billionaires to create the technology that enables this.

Why limit yourself to a harem of mere hundreds or millions when that doesn't differentiate you from other billionaires who could do the same? Surely having a billion living, breathing, suffering humans who are willing to go through the experience of having sex with you is higher status than having mere millions. And certainly more than a harem of any number of unthinking, unfeeling, unsuffering android sex bots, no matter how "hyperpalatable" (a la modern fast food relative to pre-historic food, or modern porn relative to pre-historic sexual content) these sex bots might become.

Of course, having that harem not require money would be even higher status, so being one of billions of stay-at-home wives to a single Morbillionaire might actually be more accurate as the final profession, not prostitution.

And, also of course, if we invent consciousness and the ability to suffer in AI, then all bets are off.

Trans genocide is caused by microaggressions (and macroaggressions as well). If you accidentally call a transwoman who identifies as "xe/xer" a "he," then that can worsen xer dysphoria, resulting in xer's mood being slightly more negative than otherwise, which could be the difference maker in crossing the threshold to successfully acting on suicide from just thinking about/play-acting it, which would mean that xer death would be entirely the fault of every individual who committed a microaggression against xer with respect to xer gender identity.

It could also discourage trans-curious youths from going forward with a transition, pushing them instead to embrace their original sex and try their best to live a happy life within it, because they would observe how trans people tend to have convinced their brains to feel wronged when someone else does something with entirely good intentions. This would mean one less trans-curious person becoming trans, which is a form of genocide other than just killing the already-trans.

More to the point, there's no rational or consistent hierarchy of heinousness of crimes in this worldview. Rationality and logical consistency (not to mention hierarchy) are inventions of White Supremacy and Patriarchy for the purpose of oppressing minorities and women. Any crime is the most heinous if it's useful to you for everyone else to believe that it's the most heinous thing ever, and vice versa.