Sticking only to the sports aspect, I personally don't like the use of AI or non-AI computer tech to make officiating decisions more accurate. I see sports as fundamentally an entertainment product, and a large part of the entertainment is the drama, and there's a ton of entertaining drama that results from bad officiating decisions, with all the fallout that follows. It's more fun when I can't predict what today's umpire's strike zone will be, and I know that there's a chance that an ace pitcher will snap at an umpire and get ejected from the game in the 4th inning, to be replaced with a benchwarmer. It's more fun if an entire team of Olympians with freakish genes and even freakier work ethic, who trained their entire waking lives to represent their nations on the big stage, have their hopes and dreams for gold be dashed by a single incompetent or corrupt judge. It's more fun if a player flops and gets away with it due to the official not recognizing it and gets an opponent ejected, resulting in the opposing team's fans getting enraged at both the flopper and the official, especially if I'm personally one of those enraged fans.
Now, seeing a match between athletes competing in a setting that's as fair as possible is also fun, and making it more fair makes it more fun in that aspect, but I feel that the loss of fun from losing the drama from unfair calls made by human officials is too much of a cost.
That just puts the cart before the horse. Trump has every incentive not to be fair or unbiased, not only because he could keep alive some small hope of actually overturning the result, but also to muddy the waters and retain his clout within the Republican party ("I'm not a loser, I'm a victim!").
Right, and that's exactly why if Trump himself were to say that he lost fair and square, that would convince the vast majority of Republicans, I believe. Very few people outside of him and others vetted by him have the credibility to certify a Trump loss as legitimate, and I don't really see a way for Democrats or other elements of the government to gain that kind of credibility in a short enough time frame to matter.
I think if the reins of any election audit were handed over to Trump himself, with the final report requiring his sign-off, this would convince all but the most hardcore of Republican partisans/conspiracy theorists that a Trump loss in the election was the correct result. I don't know if such a thing would be unconstitutional, though; if it isn't, then Congress should be able to pass whatever laws are necessary to allow such an audit to happen.
Au contraire, I think if this guy were placed in a position by Harris to ask the exact same question of her for a full hour, this would be of great benefit to all American voters.
I had a vague recollection of the same thing, but I thought I might be confusing it with Romney's 47% remark that was surreptitiously recorded and released. From my Googling, Time doesn't mention any secret recordings for Hillary's remark. The article says it was at a fundraiser, but it doesn't seem like it was a closed event, and a full transcript of her speech is also in the article, which points to it not being surreptitiously recorded.
What if we take the hardball metaphor seriously, and the interviewer tells the interviewee straight-up, after each non-answer, "Okay, so you just struck out. Want to try again?" And as the interview progresses, the interviewer brings up the scorecard and reminds the interviewee of their performance so far and perhaps their need to hit a grand slam now if they want to win?
More realistically, when a politician non-answers a question like in some of these clips, I'd like it if the interviewer explicitly called it out and refused to move on until it was answered in a way that a reasonable layman would understand as "answered." There are probably too many incentives against any interviewer in a position to interview anyone who matters actually doing this sort of thing, though.
This is the question I always have in mind when I see ideologues of any stripe popularize poll numbers that show their preferred candidate for an upcoming election in the lead and denigrate polls that show the opposite. There's certainly a "celebrate the home team winning" aspect that I understand - when the Red Sox have a good record or have a big lead against the other team, I, too, like to remind other Red Sox fans of this if the context is appropriate, and we both have a positive experience from it.
But unlike professional sports, elections are things that the fans actually have pretty direct input on the outcome of. So one would think that a committed fan of a particular politician or party would try to behave in such a way as to increase the odds of their preferred team winning. Which, I think, would lead one to the exact opposite of the abovementioned behavior: present the polls that show your favorite team losing as the most dependable, most reliable polls that everyone needs to be paying attention to all the time, and present the other ones as fake and flawed and maybe even part of a conspiracy to keep people on your side complacent.
But I also see an argument for the opposite case: as one Osama Bin Laden said, IIRC, "When people see a strong horse and a weak horse, they instinctively side with the strong horse." So presenting your favored politician as "stronger" in the polls could actually lead other people to discover that they genuinely, in good faith, agree with that politician's ideology, and thus become more likely to vote for them.
I admit I haven't looked hard, but I've yet to see any empirical evidence for which factor is stronger and by how much. As it is, the fact that it feels really really good to celebrate your favorite team politician being in the lead (certainly it feels far better than "celebrating" the opposite) makes me highly skeptical of any evidence or arguments that such celebration also conveniently helps your favorite team politician win in the upcoming election.
I recall hearing that the production of Aliens was a complete shitshow, with James Cameron allowing issues with his personal life to interfere with production in negative ways. Also, some of the best Mission Impossible films, including 4, 5, and 6, apparently had 3-page-long scripts at the beginning of filming, with just an overarching narrative and various ideas for scenes in Tom Cruise's head, requiring the scripts to be written the night before the actual filming of the individual scenes, along with a ton of work by the editors to actually piece together a coherent narrative (Chris McQuarrie, the current director of the movies, got that role in large part due to being an uncredited script doctor for 4 who was apparently brought in to fix it up during shooting).
So a production being a trainwreck from the inside certainly doesn't guarantee that the film will end up a historical megaflop like Waterworld rather than one of the greatest films ever made.
But I think the issue with something like Borderlands or Joker 2, or any number of other recent flops like Madame Web, The Marvels, or, on TV, The Acolyte or Rings of Power, is that the trainwrecks aren't in the production, but rather in the fundamental artwork that's being expressed, mainly the script and perhaps also the cast. E.g. for Borderlands, it should be obvious to any layman, and certainly to any studio exec, that it's not a winning move to cast 50+ year-old dramatic actor Cate Blanchett as an action lead and famously short comedian Kevin Hart as a no-nonsense serious big tough-guy soldier in a movie based on a video game aimed at teenage boys and young men. Even if the production had gone completely smoothly, it was just fundamentally doomed from the start, unless it relied on some other gimmick, such as having outrageously good action scenes (this is sorta what the Mission Impossible films rely on, which has worked for films 4-6, but not so much for 7, IMHO). Likewise, any layman could've read the outline for the story of most of these films and immediately pointed out major problems that would lose the audience.
It seems to me that, to be blind to these glaring issues and obvious red flags - so blind that you're willing to place hundreds of millions of your company's dollars on a losing bet - requires a lot of motivated reasoning which results from elevating one's own ideological biases over one's love of profit.
Hollywood looks the way it does because Hollywood has always been full of both "creatives" and studio execs who are actually very bad at their jobs and make bombs regularly. (And, in fairness, sometimes they just genuinely mistime or miscalculate the appeal of a film.) It's a very Current Year thing for you to read every box office failure as an intentional devious scheme by the studios to set money on fire just because they hate you.
No, it's not some intentional devious scheme to set money on fire. And yes, Hollywood has always been full of decisionmakers who are very bad at their jobs. And setting money on fire in an effort to humiliate the audience they hate - and being surprised that that's the result - is how they're being bad in this instance and other recent instances. Holding onto the false but genuine belief that they can make a profit by releasing these awful products that overtly shit on things the audience is known to like is the sense in which they don't care about making profit. Instead of analyzing what the market wants in a way designed to make accurate predictions, they analyze it in a way filtered by their own biases shaped by their ideological bubbles, which leads them to believing that they can release these "humiliation rituals" or whatever and still make money. I can't honestly say that someone who behaves like that is someone who cares about profit more than they do about their ideology; part of caring about making profit - or about accomplishing anything, really - is making sure you get an accurate-enough lay of the land so as to navigate it in a way that allows you to accomplish your goal. If you allow your biases to get in the way of getting that accurate lay of the land, then you clearly care about your biases more than you care about profit.
And, of course, there's no need to posit any sort of conspiracy. You just need enough decisionmakers with enough power all being part of the same echo chambers and lacking enough self-awareness and love of money to overcome their own biases.
Well yeah, the fact that basically everyone would answer those questions with the same answer, despite the fact that those questions ask 2 fundamentally different things - the former being a question about objective reality and the latter being about subjective perception - was kind of the entire point I was making.
They also polled students on things like "how often do you ask questions in class?" and "how often do you explore topics on your own, even if they're not required for a class?". Those seem like reasonable things that could be self-reported, if we think that self-reports can ever have value at all.
I think self-reports could have value for determining answers to questions like "how often do you believe you ask questions in class?" and "how often do you believe you explore topics on your own, even if they're not required for a class?", but those poll questions you quoted don't seem like reasonable things that could be self-reported at all. I don't think there's any good reason to believe that one's belief about how often one does these things has much correlation with how often one actually does these things, outside of the extremes, like literally never doing it or doing it constantly. I'd guess that they'd be more correlated with how high-status the reporter believes these activities to be and how highly the reporters think of themselves. But that's just my pet conjecture, and in any case, I don't see a way to measure these potential correlations just from the self-reporting patterns without actually measuring the underlying activity.
I personally quit diet soda as part of quitting caffeine altogether after being an addict during high school/college, so I just went with drinking only water as my source of liquids. I had terrible headaches and trouble staying awake at work for about 2-3 weeks, but after powering through that, it was pretty easy. I have no idea how effective that would be for anyone else, though. One advantage I had is that I really dislike carbonation in liquids and take steps to flatten my soda before drinking it if it's an option, and so it was just the taste and caffeine I missed.
Quitting it in favor of the real sugary deal, or quitting it in favor of something less flavorful/carbonated like water/tea/coffee?
I find this partly fascinating, but also mostly depressing at this point. One can only be fascinated by the exact same thing so many times before just learning to accept that this is the norm. This failure of intellectualism seems almost identical to the phenomenon of autoethnographies and similar essays of ostensible self-reflection being essentially the basis of the modern ideology that's been called many things, including woke, identity politics, social justice, and perhaps most appropriately in this context, critical (race) theory. Perhaps the most famous and influential of these is White Privilege: Unpacking the Invisible Knapsack by Peggy McIntosh, which is just the author making grand sweeping conclusions about the structure of society based on her perceptions of her own experience and almost nothing else.
The part that causes both fascination and depression is that these are academics doing intellectual work in academia, and one of the core pillars of such pursuits is that everyone is biased and susceptible to mental pitfalls, and as such, truth can only be pursued effectively by checking against objective reality, and even then, it must be corroborated by multiple disinterested or adversarial parties (e.g. we need multiple sides that each have incentives to prove each other incorrect to all agree on it before we can conclude that it's likely true). Academia is much more than this, but certainly this is one load-bearing pillar that, if removed, causes the whole thing to collapse.
Thus self-reports are very valuable for determining what people consciously believe, but for drawing any conclusions beyond that, they're close to worthless or outright worthless. This should be obvious as a baseline to any academic, in the same way that "if we score more points than the other team, then we win" should be obvious to any NBA player, or "if I point my gun at someone and pull the trigger with the safety off, then it will fling a small clump of lead really fast at that person" should be obvious to any soldier. An individual who fails at realizing these things would be interesting and bad, but it looks like we have entire leagues and armies of these people, who have power and influence equal to that of any other similar institution, and that's just depressing.
It appears to me a lot like a sort of cargo cult, where people mime out the motions without understanding the underlying mechanisms. Here, these philosophical academics seem to be aware that proving that something is (likely to be) true requires gathering data and publishing a paper and such, but unaware of how that happens. I feel like I've noticed this kind of thing in the very different, but related, field of entertainment, with the notable commercial/popularity failures of lots of recent movies, TV shows, and video games (a few examples: the films The Marvels and Borderlands; the TV shows The Acolyte, Rings of Power, Mandalorian S3, Echo, and She-Hulk; the video games Concord and Star Wars: Outlaws). Many of them seemed to mime the things that more successful predecessors did, while getting so many fundamental things wrong that the stupidity just made the audience check out. It's as if the writers and producers don't understand that making a good work of visual media isn't just about the spectacle, but also about the underlying logic of the things that the spectacle is representing. In fact, the latter is far more important. These are professionals whose entire 9-to-5 job it is to get these things right, in order to extract as much money from the audience as possible by entertaining them, and they're putting out stuff that someone who got a C- in a creative writing class would see as full of huge red flags (though now that I think about it, I wonder if modern creative writing classes are also plagued by the issue that I pointed out up above, and so even someone who got an A+ couldn't be trusted to notice these issues).
If Democrats have an opportunity to put their thumb on the scales without completely invalidating the election, then it should be their duty to do so. One or two somewhat shady elections is a small price to pay for stopping Trump.
My issue with this line of thinking is that this bold part just seems incoherent to me. There's no such thing as putting one's thumb on the scales without completely invalidating the election. The thinking that it's possible to slightly invalidate the election but not completely or not enough to count for whatever enough might mean here is just pure motivated reasoning if it's coming from the party that would stand to gain from such subversion (which is to say, a Democrat who supports putting the thumb on the scale in order to get a Republican to win so as to save democracy might have some credibility in their reasoning, as well as vice versa, but not if the same sides are involved).
If we accept that being really really sure that the opposing side will destroy the whole game means that cheating is justified, then all that means is that everyone will always be, in good faith, really really sure, cross their heart, no cap, on god sure that their opponents will destroy the whole game, thus justifying their own cheating. This is the exact same sort of phenomenon as the whole "tolerance doesn't mean tolerating intolerance" leading to everyone concluding, in good faith, that [position they don't like] is some form of intolerance, so as to justify being intolerant of it.
You just need at least 1 consumer, right? Maybe the future is just one person who owns the entire Earth or perhaps even the universe, the sole producer and customer that dictates what is and isn't by his control of all the AI-powered robots. Well, I imagine even if someone had amassed the power to accomplish this, they would find such an existence rather lonely.
This, I think, points to the one job that AI and robots can't ever replace humans in, which is providing a relationship with a human who was born the old-fashioned (i.e. current) way and grew up in a human society similar to the one the human customer did. I've said it before, but it could be that the world's oldest profession could also become the world's final profession.
But also, if we're positing ASI, it's quite possible that the AI could develop technology to manipulate the brain circuitry of the one remaining human to genuinely believe that he is living in a community of real humans like himself. I believe this kind of thing is often referred to as a "Lotus-Eater Machine," after the lotus-eaters episode in the Odyssey. If this gets accomplished, then perhaps all of humanity going down to just one person might be in our future.
Yes, I think this is what it actually comes down to for a lot of people. The claim is that our current course of AI development will lead to the extinction of humanity. Ok, maybe we should just stop developing AI in that case... but then the counter is that no, that just means that China will get to ASI first and they'll use it to enslave us all. But hasn't the claim suddenly changed in that case? Surely if AI is an existential risk, then China developing ASI would also lead to the extinction of humanity, right? How come if we get to ASI first it's an existential risk, but if China gets there first, it "merely" installs them as the permanent rulers of the earth instead of wiping us all out?
The way this could work is that, if you believe that any ASI or even AGI will have a high likelihood of leading to human extinction, then you want to stop everyone, including China, from developing it. But it's difficult to prevent them from doing so if their pre-AGI AI systems are better than our pre-AGI AI systems. Thus we must make sure our own pre-AGI AI stays ahead of China's pre-AGI AI, to better allow us to prevent them from evolving their pre-AGI AI into actual AGI.
This is quite the needle to try to thread, though! And likely unstable, since China isn't the only powerful entity with the ability to develop AI, and so you'd need to keep evolving your pre-AGI AI to keep ahead of every other pre-AGI AI, which might be hard to do without actually turning your pre-AGI AI into actual AGI.
If you don't think that foods can contain twice the amount of listed calories, then why is it reasonable to assume so?
I never said it's reasonable to assume that they contain 2x the listed calories. I said that 2x the listed calories is a reasonable estimate for an upper bound when your goal is weight loss. It's not the only reasonable estimate, and how reasonable other estimates are would depend greatly on the specific goals.
Suppose you do this, and eat 800 calories with a 1600 BMR, but don't lose any weight. What should you do, go down or up in calories? Maybe you go down to 500 or something, but this week the portions are actually accurate, and you lose weight, but too much weight. So you go back up, and next week you don't lose weight, so you have to go back down... What information is being gained here? On any given week, you don't actually know if you're going to lose weight at all or too much weight.
If your goal is weight loss, I contend that losing "too much weight" is such a low-risk event, both in terms of likelihood and in terms of the "harm" that comes from it, that you might as well treat it like it's not a thing. So yeah, if you somehow maintain weight at 800 calories at 1600 BMR, then you go down even lower, and if 500 makes you lose "too much weight," then you celebrate and keep at it. Or you go back to 800 calories with the knowledge that since you lost "too much" weight last week, it's okay to not lose weight this week*. CICO in weight loss just means keeping CI below CO; I didn't say that you want to keep CI at some specific amount below CO so that you can lose weight at some specific, predictable rate of X pounds per week or whatever. CICO is certainly helpful for that as a guide, but, as I've alluded to before, the accuracy of calorie labels, the accuracy of calorie expenditure measurements, and the regular fluctuations of weight that people experience through daily life make it so that you can't make very precise predictions about your weight loss, especially in short timeframes.
*You wouldn't adjust week-by-week anyway; that's just not enough time to see if there's signal in the noise. If your weight loss goal is based around losing weight each and every week rather than losing weight long term, then CICO isn't helpful anyway; you should be looking at things like fasting and dehydration, since in a week-long timespan, the literal physical mass of food and liquids you put in your body dominates over the mass that your body converts into itself in the form of fat, muscle, etc.
I think calorie counting works for weight loss, though I am less sure now that you have told me how inaccurate nutritional labels are.
In that case, it seems that you have no problem with the concept of CICO. I'd also note that, again, I have not told you anything about how inaccurate nutritional labels are, because I have no special knowledge that I can provide about how inaccurate nutritional labels are. My layman's understanding is that they're mandated by the FDA in the USA to be within some reasonable range of error and tested for compliance, but I have no idea how good the enforcement of the compliance is, and I have little idea of what the range of error is (guessing I could probably look this up if I wanted to). I just use 2x as a reasonable estimate for a multiplier when trying to lose weight, since if nutritional labels were often underestimating their calories by 2x, it seems doubtful that this wouldn't have been caught and become a major enough scandal that I'd have heard about it.
According to you, food can have up to twice the amount of calories listed on the packaging.
This is false. I said that 2x is a reasonable upper bound to place for the actual caloric content of food compared to the nutritional listing when you're trying to estimate CICO for the purposes of weight loss.
Leaving aside whether this is reasonable, that means that the person who thinks he's eating 2000 calories could actually be eating 4000 calories. This is obviously a large enough variation as to blow any attempt at tracking calories out of the water, to say nothing of variations in BMR.
This is built on a misinterpretation of my statement, but regardless, this is also false. Even if it were common for foods to contain 2x as many calories as listed (I doubt that this happens often enough to matter, but I have no actual data on this), this wouldn't, in any way, be enough variation to blow any attempt at tracking calories out of the water. You can just... eat less than half as many (listed) calories as your BMR. So, e.g. if your BMR is calculated at 1,600/day, you can just limit yourself to 800 calories per day. Again, in practice, I doubt that it's so extreme that that's necessary, but also in (my) practice, taking that extreme assumption and acting on it does work.
It's like piloting a plane with only one button and no altimeter. The interface being simpler doesn't make it easier - it can make it harder, because you don't get feedback! As you yourself suggest, it can take many months to get accurate predictions of weight loss, and maybe never get accurate predictions of weight gain.
But there is an altimeter - your scale and your tape measure! The feedback isn't instant, but it's also not many months. A week is often enough to see the signal in the noise, since natural daily weight fluctuations can easily be larger than the expected weight loss in a week, even when weighing oneself at the same time under the same conditions every day (for this reason, I personally found it good to weigh myself multiple times a day in order to get a range for the day instead of a single number). Two weeks is plenty in the vast majority of cases, and four weeks is easily enough.
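To make the kind of smoothing I mean concrete, here's a minimal sketch with entirely made-up numbers (the weights and the two-week layout are hypothetical, purely to illustrate daily ranges and week-over-week averages):

```python
# A toy illustration with made-up weights (lbs): collapse each day's
# weigh-ins into a range, then compare weekly averages so the trend
# isn't drowned out by daily fluctuations.

week1 = {
    "Mon": [182.4, 183.9], "Tue": [182.0, 183.5], "Wed": [181.8, 183.6],
    "Thu": [181.5, 183.0], "Fri": [181.9, 183.2], "Sat": [181.2, 182.8],
    "Sun": [181.0, 182.5],
}
week2 = {
    "Mon": [180.9, 182.4], "Tue": [180.6, 182.2], "Wed": [180.4, 181.9],
    "Thu": [180.1, 181.8], "Fri": [180.3, 181.6], "Sat": [179.8, 181.4],
    "Sun": [179.6, 181.1],
}

def daily_ranges(week):
    # A (low, high) range per day instead of a single misleading number.
    return {day: (min(vals), max(vals)) for day, vals in week.items()}

def weekly_average(week):
    readings = [v for vals in week.values() for v in vals]
    return sum(readings) / len(readings)

print(daily_ranges(week1)["Mon"])  # (182.4, 183.9)
drop = weekly_average(week1) - weekly_average(week2)
print(round(drop, 2))  # 1.44 lb week-over-week: larger than the daily
                       # noise, so plausibly a real signal.
```

Note that any single Monday-vs-Monday comparison in those made-up numbers could mislead, while the weekly averages show the trend.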
And indeed, the interface being simpler doesn't make it easier - specifically, it doesn't make it easier to motivate oneself to press the button at the right time, if we're going with this one-button analogy. Knowing how and when to press or not press this metaphorical button is pretty easy; the tough part is actually finding the will to press it or not at those correct times that you figured out. The way I see it, the value in so many different diets is that they help to reinforce that will and to reduce the amount of will needed, kinda like having a teacher who motivates you to study by giving regular quizzes and homework and also guides you on how to study through lessons.
The body's system of weight, hunger, and energy regulation is of comparable complexity to the forces on a modern aircraft. It is, of course, designed to be simple enough to interact with that even dumb apes can feed themselves, but it is also not foolproof, which is why dumb apes in a food rich environment sometimes turn into 600lb whales.
Yes, and none of these complex systems require you to have much knowledge or expertise about anything in order to control how many calories you eat or expend accurately enough to lose weight. The control levers for piloting a plane are extremely complex and require lots of training to use properly. The control levers for placing food into your mouth and chewing it and swallowing and for moving around are extremely simple, so simple that almost everyone does it by default with minimal training.
A better analogy would be to, say, studying. Studying isn't trivially easy, but it's still very easy and simple in many contexts. And everyone knows that studying is useful for helping to pass a class. But the hard part is getting the motivation and discipline required to study consistently. Like how the tough part of managing weight is getting the motivation and discipline required to control one's food intake and exercise.
The person eating 2000 calories a day could, according to what you've written, be in anything between a 2500 calorie surplus (4000 calories in, 1500 out) and a 1000 calorie deficit (2000 calories in, 3000 out), which would correspond to gaining five pounds or more of dry body weight in a week or losing two or more pounds of dry body weight in a week - a prediction so vague as to be totally useless.
This has no relationship to what I wrote, from what I can tell, so I honestly have no idea how to respond to this. This is a complete nonsense non sequitur.
I don't calorie count and I never find my weight fluctuating that much. So what good actually is this method?
If you have no issues maintaining weight without calorie counting, then it sounds like you don't need to count calories to successfully implement CICO. Great!
Because it seems by what you're saying, that it's hopelessly imprecise to measure either calories in or calories out.
Please walk it through for me how anything I wrote could be interpreted as such, with an emphasis on the "hopelessly" part.
I have a similar understanding of the published literature to you, I think - but knowing that planes crash when their altitude decreases is not enough to avoid crashing a plane. The published literature tells us, for example, that calories out should probably exceed calories in by about 500 and then you'll lose weight. But as I've heard in this thread there is no reliable way to measure either, calories out has been shown to change in response to calories in, so you are in effect chasing a constantly moving target.
Dictating the food that one eats and the calories one expends is nowhere near as complex as piloting a plane. There's a reason why there are very few plane pilots, most of whom had to train a long time even before ever flying a real plane, while basically everyone, even many children, choose what to eat and how much to move.
And it absolutely is possible to get reliable enough measurements of both in order to accomplish certain goals, specifically weight loss. It's not that common for packaged foods to have multiple times the calories their label indicates, and so one can pretty accurately place an upper bound on CI by adding up all the calories on those labels and then applying some multiplier >1. I like to use 2. It's also not that common for one's real caloric expenditure to be lower than their calculated BMR, especially if they do things like stand or walk during the day, so one can pretty accurately place a lower bound on CO by just calculating BMR. Get the upper bound of CI lower than CO, and you can be quite confident that true CI is lower than true CO.

For weight gain, it's more tricky, because of the physical ability of the body to reject food, as well as its ability to involuntarily expend energy through heat, but generally CICO isn't talked about when it comes to weight gain; people looking to gain weight are rarely just concerned with weight, but rather specifically with gaining muscle more than fat (or even not gaining fat at all, or even losing fat, which, despite some myths, is possible simultaneously with gaining muscle), and the composition goals tend to take precedence over pure mass goals, which tacks on a whole host of other requirements. The mirror image is also true, of course, in that people looking to lose weight tend to want to lose fat while maintaining muscle, but due to how weight affects joint stress, simply losing the mass is often beneficial in itself even if it's muscle, and general everyday life often tends to provide enough exercise to maintain enough muscle (still, a lot of the advice around weight loss does push people towards doing resistance training to better help maintain that muscle while losing weight).
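Here's a minimal sketch of that bounding logic; the 2x multiplier is just my personal pessimistic choice from above, and the label values and BMR are hypothetical:

```python
# Bounding approach: compare the worst-case calories in against the
# most conservative calories out. All numbers here are illustrative
# assumptions from this thread, not measured data.

LABEL_MULTIPLIER = 2.0  # pessimistic upper bound on label underreporting

def ci_upper_bound(listed_calories):
    # Worst case: assume every label understates calories by 2x.
    return sum(listed_calories) * LABEL_MULTIPLIER

def co_lower_bound(bmr):
    # Conservative: ignore all activity above the basal metabolic rate.
    return bmr

listed = [350, 250, 180]  # hypothetical labels for a day's food
bmr = 1600                # hypothetical calculated BMR

worst_case_in = ci_upper_bound(listed)  # 1560
floor_out = co_lower_bound(bmr)         # 1600

# If even the worst-case CI sits below the most conservative CO,
# then true CI < true CO, and weight loss should follow.
print(worst_case_in < floor_out)  # True
```

The point of the design is that neither bound needs to be precise; they only need to err in the safe direction.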
But all this amounts to is a fine motte. The actual bailey of CICO is that everyone who follows a calorie tracker and gets an incorrect result is lying or denying science, that it's physically impossible to fail to lose weight on 1800 calories or to fail to gain weight on 4000 calories, and that hormones don't affect weight.
Are there any specific comments on this forum that are in this bailey? Without specific references to such, this just seems like, at best, weakmanning, and likely strawmanning, based on what I've observed from people talking about CICO.
I've heard that it was actually started by and for men who love overweight women, wanting both to encourage more overweight women and to have a group where it's easy to meet a lot of overweight women, though I haven't checked deeply into its veracity. Perhaps it's been fully coopted by overweight women by now, though, regardless of the origins.
At least, in comparison to what it will take to keep that weight off. That's not one big sustained decision, but countless smaller ones across much longer spans of time, and often in complex and inconvenient situations.
I think this is why fad diets are effective regardless of what the fad actually is; they take the countless smaller decisions and roll them up into one big decision to follow [fad diet]. In 2020, in order to help maintain my weight during lockdowns, I decided to experiment with keto (conclusion: really effective for me for losing fat while maintaining muscle; not worth it in the long run for losing out on so many sweets and baked goods, but a good tool to have in the toolbox for future body recomposition goals), and I found that it made the decisionmaking when food shopping or eating very simple and easy on the mind. Does it have carbs? Then I don't buy it, and I don't eat it. I didn't have to wrack my brain or fight my willpower to justify or to make that decision; I just had to defer to the one big decision that I had made weeks/months back.
When I encountered HBD for the first time, this sort of thing was also my conclusion about how a good, fair system would work. From what I can tell, one of the most prominent mainstream faces of HBD, Charles Murray, largely follows the same reasoning, leading to him supporting UBI (which isn't IQ-based affirmative action, but is meant to alleviate some of the same problems, by guaranteeing that no matter how bad you are at making money due to any reason, including low intelligence, you have some guaranteed income you can depend on for survival).
This is one reason why I find the argument that HBD needs to be suppressed, lest people use it to justify racism, unconvincing. Believing that belonging to a race that happens to have a high average IQ, or even having a high IQ oneself, entitles one to greater rights and privileges than those who don't happen to belong to such a race or don't happen to have a high IQ is something separate and distinct from believing that different races have different average IQs, and the latter doesn't cause the former.
The econ 101 explanation, which I believe is pretty much correct, is that we always want some inflation and never want deflation. Deflation means that cash just naturally accrues value - the $100 in my hand today is worth more tomorrow if I don't spend it. So I don't spend it unless absolutely necessary. And if everyone does this, then the economy slows down, there's less growth, and everyone suffers from the fewer goods being produced and services being rendered. Inflation has the opposite effect, where currency constantly loses value; so the most value you can get out of that $100 is right now, which puts pressure on you to look for the right opportunity to trade it for a useful product or service right now.
Of course, too much inflation is well known to cause serious issues. Income tends to be sticky and laggy - most people's salaries aren't pegged to inflation, but rather get raised once every so often - and so if inflation is very high, then most workers suffer. Which is kinda what happened the last few years. This is why the US Federal Reserve's dual mandate includes hitting a 2% inflation rate - it's positive, but not so positive as to cause issues, at least most people seem to think so.
So eventually, yeah, a loaf of bread will cost a million dollars. If a loaf of bread were to cost $2 today, then at 2% inflation, that'd take a little over 660 years. Much like how movie tickets cost a literal nickel a century ago and cost $15 today. But the idea of inflation is that, when a loaf of bread costs $1,000,000, the average income will also be (1,000,000/2) times today's average income, so things will remain the same in relative terms. In practice, it doesn't work that perfectly.
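To spell out the compounding arithmetic behind that "little over 660 years" figure, here's a quick sketch (the $2 loaf and the 2% rate are just the illustrative numbers from above):

```python
# Years for a $2 loaf to reach $1,000,000 at steady 2% annual inflation:
# solve 2 * 1.02**n = 1_000_000 for n.
import math

price_now = 2.0           # illustrative price of bread today
price_then = 1_000_000.0  # target future price
rate = 0.02               # the Fed's 2% inflation target

years = math.log(price_then / price_now) / math.log(1 + rate)
print(round(years, 1))    # ~662.6 years, i.e. "a little over 660"
```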