Historical nugget: Philip the Arab, emperor of Rome, will always be remembered for his celebrations of Rome's 1,000-year anniversary in 248 AD.
First I've heard of him. That you described it as a "historical nugget" somewhat gives away that it's not very significant.
I feel like we're living in the dark ages of dentistry. There's apparently also no data to support flossing or dental x-rays.
I stopped flossing about a decade ago. Yep, still no cavities. I'm going to drop the x-rays too. Cavities seem to be mostly a function of what kind of bacteria lives in your mouth + sugar consumption. I have the good bacteria, lucky for me.
In terms of fluoride, I'm going to do some research on how people suppose that fluoride is actually supposed to prevent cavities. Is it via consuming it orally? Because, if so, it's strange that there are also a bunch of products that APPLY IT DIRECTLY TO THE TEETH. Like you, I feel like there's not a lot of good information. The comments in defense of fluoride here have definitely not reassured me.
Anecdotally, I have used a high fluoride toothpaste in the past and it reduced teeth sensitivity.
Why can't we just put it in mouthwash and toothpaste and let people make their own choices?
Edit: I spent some time talking with Claude AI about this issue and it was strangely non-cucked and helpful. Yes, it would seem that topical application of fluoride is likely to confer more benefits than drinking it, and without the potential downsides. This seems like a no-brainer. The one exception is that children may need some fluoride while teeth are forming. Nevertheless, they get it from food anyway. I might look into a way to test and filter fluoride levels.
I feel like you are confusing several separate issues. Nothing I've done in this thread is aimed at "protecting a user from criticism." Coffee_enjoyer was breaking the rules and obnoxiously axe-grinding.
I can't discern him breaking any rules, or you explicitly accusing him of breaking any rules, apart from the subjective "wildcard rule" about obnoxiousness. It's fine to have a wildcard rule that essentially says "don't do things we don't like", but to then try to pin the "breaking the rules" label on someone who only ran afoul of that rule is somewhere between a case of the noncentral fallacy and plain self-aggrandizement, where you expect other people to treat your taste with the same reverence as a written rule.
Your complaints are not at that level, but your candor over your distaste for Dean suggests to me that you are making a similar mistake: allowing animus toward a user to blind you to the fact that this is not ultimately about the user, but about the rules.
I think hounding other posters for evidence and forcing them to produce more evidence in a more legible way is an unalloyed good, actually. I'd love for you to prove me wrong, and show me an instance where someone is doing the same thing for a position that I agree with or user that I like where I think that it would be appropriate to moderate the pursuers. The closest example I can remember is where back in the Reddit era, people were piling up on darwin2000 (might have gotten the number part wrong) over not taking responsibility for boldly wrong predictions (in contexts such as the Smollett case). I was rather fond of him as a user and thought that he was an asset by virtue of putting out some overly welcoming hearths by merely existing, but was absolutely in favour of him being held accountable in the way he was.
I initially didn't want to make an argument based on accusations of bias, but looking through your posting history it seems plainly evident that you are deeply aligned with Dean on the Israel/Palestine question, and back the Israeli side in a way that can't be described as dispassionate. Are you sure that you are not letting your animus towards a side blind you to the fact that you are just using the rule that basically says "excuse to be deployed in edge cases" as an excuse in a case that is not particularly an edge case? It's not like not being candid about this, or mostly avoiding engagement on substance (easy when an "excellent poster" is around to make your case for you anyway), magically makes you neutral. The least you could have done to make this look less bad would have been to recuse yourself and let this be handled by another moderator who can express his views of the object-level issue with fewer expletives than this.
Separately, everything I said about Dean being a good poster was in direct response to coffee_enjoyer's obnoxious, overwrought, and rhetorical "is this the kind of posting you want!?" The answer was "yes, that's the point of the AAQCs, these are the kinds of posts we want." I was trying to find a way to help coffee_enjoyer understand why he was being moderated. Ultimately, I seem to have failed to find such a way; coffee_enjoyer seems to me far more interested in being angry about the disagreement between him and Dean (and, by extension, my moderating him over his approach) than in understanding that the problem is not the substance but the uncharitable and antagonistic nature of his engagement.
Well, forget about him. Can you explain to me, or anyone else, why he was being moderated? My current understanding is that you like Dean's posts in general and are moreover extremely unsympathetic to the anti-Israel position, and therefore perceive any persistent attempt to impose a tax on Dean's pro-Israel posting in its present shape as something that needs to be suppressed using the wildcard rule. Is this accurate?
You've left out my quite explicit point that AAQCs are not a bar to banning. Users cannot get away with "more extreme posts" indefinitely.
The clause doesn't have to be parsed as "(more extreme) posts" for the cycle to hold; it is absolutely sufficient for it to be "more (extreme posts)". Plenty of completely normal posts these days would have been moderated 5 years ago - and the way in which they are bad was originally trailblazed by "quality posters" who evidently were so favoured that unless someone took one for the team and raised a stink out in the open, you wouldn't even know that reports were just being redirected into the trash due to their standing, as opposed to nobody seeing a problem at all to begin with. Once the prolific and beloved posters all do it, the nobodies are free to follow suit.
In the end, we can't maintain this space at all if we worry too much about what might or might not "drive users away."
Is this a belief that's based on a concrete observation of bad things that happened when you "worried too much", or just rationalising the easy option of going with your gut?
One person's final straw is someone else's welcoming hearth.
One does not make up for the other. People can still make good posts and interesting conversation away from a welcoming hearth, but by definition they won't after they had to bear their final straw. You can run a good version of this forum while being a welcoming hearth to nobody, but you can't run one while putting the final straw on too many, especially if you selectively do so on just about everyone except those having a particular gamut of opinion.
Do you imagine there is any argument or evidence at all that could persuade you to change your current approach to moderation, or is it a matter of either having to take your ride to wherever it leads or getting off?
I don't think Richard Spencer's endorsement matters, but I think his motivation may.
It reads to me like the 'Sex traffickers for Harris' yard signs I saw. Mocking.
They are not. Others here are probably more cognizant of the machinations of politics in Japan than I. On the macro level the LDP or Jimintō party is typically the winner, with only a few brief periods of upset. The LDP is weirdly partnered with (New) Komeito, which is affiliated with the Soka Gakkai sect (some might say cult) of Buddhism, which has great social and political sway in Japan (if to some degree implicitly).
There are of course randos on Twitter who have opinions, but typically elections pass without great interest; voter turnout is not great, but similar to that of the US.
It's nowhere near as circus-like as in the US. Elections do make the news, and on election nights the results are covered on the Japanese TV networks (which still receive considerable viewership despite Netflix), but there's not the wild and woolly atmosphere. It's rare (for me) to hear anyone discuss politics openly, which may or may not be for cultural reasons (e.g. a desire for social harmony).
Sorry to disappoint. Mine was not an attempt at comeback, simply a suspicion that we actually do agree in the essentials and that any point of argument isn't really worth the candle for either of us.
A Moronically Detailed Explanation
Buckle up, because the answer is complicated. In the first half of the twentieth century, there was a clear delineation between performers and songwriters. There were obvious exceptions like Duke Ellington, but writing and performing were considered separate roles. When recording, record labels would pair performers up with A&R men. The primary job of the A&R man was to select material for the performer, based on the performer's strengths and what they thought would sell. The repertoire largely came from American musical theater, and popular songs would be recorded by numerous artists. "Covers", as such, weren't really a thing in those days, as the earliest recorded version often wasn't the most well-known. For example "The Song Is You" is most associated with Frank Sinatra and his time with Tommy Dorsey, but it was first recorded ten years earlier. No one, however, thinks of the Sinatra version as a "cover" of a song by Jack Denny and the Waldorf Astoria Orchestra. Another way to think about it is that if the Cleveland Symphony Orchestra releases a new recording of Beethoven's Fifth next week, it won't be described as a cover of the Berlin Philharmonic's "original" 1913 recording. It's also worth noting that the heyday of the Great American Songbook was also the period when jazz was effectively America's Popular Music, and the focus wasn't so much on songwriting as it was on individual style and interpretation.
A critical factor in all of this is royalties. Every time a song is included on an album, played in public, played on the radio, etc., the songwriter gets a flat fee that is set by the Copyright Royalty Board. For example, if you record a CD the rate per track is 12.4 cents or 2.39 cents per minute of playing time, whichever is greater. So if a songwriter has a song included on an album that sells 40,000 copies, they'd get $4,960 in royalties. With the development of the album following WWII, the industry limited albums to ten tracks to keep royalty costs down. This is why the American versions of Beatles albums are significantly different than the British versions—the UK industry allowed 14 tracks. The Beatles hated this practice (and rebelled against it with the infamous "Butcher Cover"), and by 1967 they had enough clout to ensure that the American albums would be the same as the British albums.
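The statutory-rate arithmetic above can be sketched in a few lines. This is only an illustration of the figures quoted in the text (12.4 cents per track, or 2.39 cents per minute if greater); actual Copyright Royalty Board rates change over time, and the function name here is my own.

```python
# Sketch of the mechanical (physical-media) royalty math described above.
# Rates are the figures quoted in the text, not a current rate schedule.

def mechanical_royalty_per_copy(minutes, per_track=0.124, per_minute=0.0239):
    """Royalty owed to the songwriter for one track on one sold copy:
    the flat per-track rate or the per-minute rate, whichever is greater."""
    return max(per_track, minutes * per_minute)

# A typical 3-minute song: the flat 12.4-cent rate wins.
three_min = mechanical_royalty_per_copy(3.0)

# A long 7-minute track: the per-minute rate wins (7 * 2.39c = 16.73c).
seven_min = mechanical_royalty_per_copy(7.0)

# One track on an album that sells 40,000 copies, per the text's example:
total = three_min * 40_000
print(f"${total:,.2f}")  # $4,960.00
```

This matches the $4,960 figure in the paragraph above for a 40,000-copy run.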
The more important legacy the Beatles left on the music industry was that they mostly wrote their own material. From the beginning, rock and roll musicians like Chuck Berry and Buddy Holly had been writing their own material, but the Beatles made the practice de rigueur. A&R men were now called producers and were beginning to take a more involved role in the recording process. Beginning in the 1950s, Frank Sinatra had been turning the new album format into a concept of its own. While most albums were simply collections of songs, Sinatra had the idea to select and program the songs thematically to give the albums a cohesive mood. He also made sure his fans got their money's worth, and didn't duplicate material from singles. For the most part, though, albums remained an "adult" format, with more youth-oriented acts focusing on singles. While albums did exist, they were often mishmashes of miscellaneous material. A band didn't go into the studio to record an album; they went into the studio to record, and the record label would decide how to release the resulting material. The best went to single A-sides. Albums were padded with everything else—B-sides, material that didn't quite work, and, of course, covers recorded for the sole purpose of filling out the album. These would often be of whatever hits were popular at the time, and maybe a rock version of an old classic.
The Beatles and other British acts took up the Sinatra mantle of recording cohesive albums, but other musicians, particularly in the US, didn't have that luxury. The strategy of American labels in the 1960s was to shamelessly flood the market with product to milk whatever fleeting success a band had to the fullest extent possible. My favorite example of this trend, and how it interacted with the changing trends, is the Beach Boys. They put out their first studio album in 1962, 3 in 1963, and 4 in 1964. These were mostly short and laden with filler, but by the time of All Summer Long Beatlemania had hit and they were upping their game. As rock music became more sophisticated in 1965, Brian Wilson began taking a greater interest in making good albums, but Capitol still required 3 albums from them that year. By this time they were deep into Pet Sounds and didn't have anything ready for the required Christmas release. So they got a few friends into the studio to stage a mock party and recorded an entire album of lazy covers, mostly just acoustic guitar, vocals, and some simple percussion. It's terrible; even the hit single (Barbara Ann) is probably the band's worst. Two years later ripoffs like this were going out of fashion, but, with the band in disarray and no new album forthcoming, Capitol wiped the vocals from some old hits and released it as Stack-o-Tracks, including a lyric sheet and marketing it as a sing-along record. They were truly shameless.
During this period, there was still a large contingent of professional songwriters who specifically wrote for pop artists. The Brill Building held Carole King/Gerry Goffin, Barry Mann/Cynthia Weil, Neil Diamond, and Bert Berns, among others. Motown had its own stable of songwriters to pen hits for its talent (and when they had to put together albums they covered other Motown artists' hits, Beatles songs, Broadway tunes, and whatever else was popular at the time). But times were changing. By the time Sgt. Pepper came out in 1967, rock bands were thinking of themselves as serious groups who played their own instruments, wrote their own material, and recorded albums as independent artistic statements. What criticism of rock existed was limited to industry publications like Billboard and teen magazines like Tiger Beat; the former focused on marketability and the latter on fawning adoration. Rolling Stone was launched in 1967 as an analog for what Down Beat was in jazz—a serious publication for serious criticism of music that deserved it. Major labels clung to the old paradigm for a while, but would soon yield to changing consumer taste. Even R&B, which largely remained aloof from this trend, saw people like Marvin Gaye, Stevie Wonder, and Isaac Hayes emerge as album artists in the 1970s.
This was the status quo that continued until the early 2000s. Pop music meant rock music, rock music meant albums, and albums meant cohesive, individual statements. Covers still existed throughout this period, but the underlying ethos had changed. If a serious rock band records a cover, there's a reason behind it. The decision to record the cover is an artistic one in and of itself, unlike in the 1940s, when you recorded covers because you had no other choice, or in the 1960s, when you recorded covers because the record company needed you to fill out an album. At this time, the industry itself was in a golden age, as far as making money was concerned. The introduction of the CD in the 1980s eliminated a lot of the fidelity problems inherent to analog formats. When they were first introduced, CDs were significantly more expensive to manufacture than records. But by the 1990s, the price of the disc and packaging had shrunk to pennies per unit. The cost of the disc itself was no longer a substantial part of the equation. And the increased fidelity led to increased catalog sales, as people, Baby Boomers especially, began repurchasing their old albums. These were heady times indeed.
And then Napster came along and ended the party. The industry spent the next decade flailing: going before Congress, suing software developers, suing their own potential customers, and implementing bad DRM schemes, all in a vain attempt to stop the tidal wave. Music was no longer something you bought, but something you expected to get for free. After a decade of this nonsense, the industry finally did what it should have done all along and began offering access to a broad library for a reasonable monthly price. For once, it looked like there would be some degree of stability; profits went up, and piracy went down. Which brings us back to those royalties.
Earlier I gave an example where a songwriter gets paid a statutory fee for inclusion of a song on physical media. This doesn't work for streaming; if I buy a CD I pay the 12 cents but I have unlimited access to the song. Streaming works differently because I technically have access to millions of songs, but the artist only gets paid for the ones I play. Paying them 12 cents a song doesn't make sense. So instead, streaming relies on a complicated formula involving percentage of streams compared with total revenue blah blah blah. The thing about royalties is, they come off the top. If a label releases an album with 12 songs by 12 different songwriters, none of whom have any relation to the label, that's $1.44 right there. But if the songwriter is also the performer under contract to the label, then the label can negotiate a lower songwriting royalty (these are called Controlled Compositions). But it gets better. Songwriting royalties don't go entirely to the songwriter, but are split between the songwriter and the publishing company. An artist signed to a label is probably required to use a publishing company owned by the label, so there's a 50% discount right there. A typical record contract grants a 25% discount on controlled compositions, so that $1.44 the label owes in royalties is down to 54 cents if the artist writes all his own songs.
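The controlled-composition discount described above can be sketched as follows. The percentages (a 50% publisher share captured by the label-owned publisher, plus a 25% contractual reduction) are the figures used in the text; real contract terms vary, and the variable names are my own.

```python
# Sketch of the controlled-composition arithmetic from the text's example.

FULL_RATE = 0.12   # per-track royalty used in the example, in dollars
TRACKS = 12

# 12 outside songwriters, no relation to the label: full statutory bill.
full_bill = FULL_RATE * TRACKS        # $1.44 per copy

# Artist-written songs run through the label-owned publisher (50% share)
# and take a further 25% controlled-composition reduction on top.
controlled = full_bill * 0.50 * 0.75  # $0.54 per copy

savings = 1 - controlled / full_bill  # fraction of the bill recouped
print(f"${full_bill:.2f} -> ${controlled:.2f} ({savings:.1%} saved)")
```

The savings fraction works out to 62.5% of the songwriting royalties the label would otherwise owe.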
In the world of streaming, where the royalties are paid every time a song gets played, this can add up quickly. There's no real downside to releasing a few covers for streaming as album tracks or as part of a miscellaneous release because the streaming totals aren't going to be that high. The problem comes when the recording becomes a hit; when there's a lot of money at stake, being able to recoup 62.5% of the songwriting royalties you'd otherwise have to pay means a lot of money. To be fair, most labels use outside songwriters to pen hits for their pop artists. But these songwriters are almost always affiliated with the publishers owned by the labels, and are thus cheaper on the whole than going out into the universe of available songs and picking one you like. The Great American Songbook existed in an entirely different world, where paying these royalties was an accepted fact of the industry. After the Rock Revolution, this was no longer the case, but the culture still existed, and the industry was making so much money that it didn't care. After 2000, extreme cost cutting became the norm, and songwriting royalties were an easy target in a world that had largely moved away from outside songwriters.
BTW I thought that "Fast Car" cover sucked. First, the song wasn't that good to begin with, and Tracy Chapman is possibly the most overrated singer-songwriter in history (aside from her two hits her material is the definition of generic). Second, a cover should try to reinvent the song in the performer's image. Here, it sounded like Luke was singing karaoke.
One of my neighbors asked me if I gave my child fluoride pills now that we live on well water. I stifled a gasp and asked her to describe what she was giving her kid. Apparently her dentist said kids without city water need fluoride, and this kid takes fluoride pills--like, swallows them. I asked the woman if she understood how they work and she admitted she didn't know. Rather than kill the party, I said it was interesting and went straight home and re-researched the topic to make sure I actually understood what I thought I understood: fluoride is a topical treatment that helps rebuild tooth enamel.
Not only did I remember how the stuff works and learned a bit more, but I discovered that it's insanely difficult to get good information. It's almost all propaganda that says, "fluoride prevents cavities! Trust us!" Effectively, fluoride ionizes existing chemicals in the mouth to boost enamel creation, which is a natural process. There is literally no benefit to consuming fluoride and it's clearly a dangerous chemical to ingest in large quantities. (https://journals.lww.com/jpcd/fulltext/2020/10020/how_fluoride_protects_dental_enamel_from.3.aspx)
I also learned that the guidelines for public water fluoridation had recently been dropped from 1 mg/L to 0.7 mg/L (https://www.cdc.gov/fluoridation/about/community-water-fluoridation-recommendations.html) and that there is actually a problem called fluorosis that will ruin your teeth. And besides...who even drinks tap-water anymore (except us Motters) amiright? It's looking bad for water fluoridation!
The thing that blew me away was the oft-repeated claim that fluoridation lowers cavities by 30%. The best I could find was that health experts in the '80s found that adding some fluoride to the water resulted in fewer cavities across the population, but I found no evidence discussing the actual effect per individual. It really seems like a case of bad numeracy to me, where no one bothers to ask, "30% of what? How?" and everyone just presumes they'll have 30% fewer cavities, if they even think about it that hard.
I feel like the real problem here is a lack of scientific curiosity on the part of dentists who just swallow the fluoride story in large breathless gulps.
My current strategy for Gleba is to think about it later. I'm not yet sure if it's going to be fun or "fun", but I'm hoping for the former.
Gleba is really driving me nuts. Unless you do pure bot logistics, the entire thing is a nightmare to automate.
It's deeply irrational trying to do it, but the whole thing has so many edge cases. E.g. I converted all my spoiled vegetation into nutrients. But that means the mechanical nutrient cycle starter plants can't provide nutrients to even start everything else.
etc.
I am enjoying Songs of a Lost World immensely thus far. The themes of aging, sadness, and loss speak directly to my experiences, even more so since 2024 has turned into another, "buckle up, buckaroos," kind of year of sweeping changes for me personally. I'm thrilled that The Cure has released an album this damn good in this day and age (and with multiple vinyl and cassette versions to boot!) and it's quite the poignant experience to listen to something that is so on point to my middle-aged self and that also makes my inner Goth Kid squee in delight.
The implementation I'm using - stable diffusion - is written in Python, leaks memory like a sieve, and basically somehow eats up 32 GB of RAM even though the process only ever uses up to 16 GB.
The only practical thing to do while it's running is shitposting, I've found.
Maybe, but he also endorsed Biden back in 2020; at what point does it matter?
Flashbacks to Planetary Annihilation ruining the best part of Supreme Commander by making bases guaranteed messy
https://www.donaldjtrump.com/agenda47/president-donald-j-trump-free-speech-policy-initiative So it's from Dec 2022, during the twitter files? Part of what seems strange is that he's aged appreciably since then, particularly after the shooting. Biden from 2-3 years ago also practically seems like AI when you're used to seeing him now.
To further ignite your AI epistemic crisis, I would suggest Egon Cholakian and reading about John Westbrook / Daphne Westbrook on the 4chan archive.
I mean, assuming you have a career and a family, you can moderate your heart's desire for freedom and also your gaming time. If you do some planning and analysis off-game, you can probably complete the game in 2-3 months of 1-2 hours in the evenings.
I think the devs are obsessed with the quality of the code and design in the game to such a degree that they believe 3D will never allow such precision and control of the player's viewpoint. I think they're right.
You can always just do a grid. Even if player POV was in a 3d grid, that'd still be improved. And 3d has way more options.
Especially in space, 2d just looks fucking weird. We are all flatlanders down here but up there?
Glad I could be of service.
How did you find out about your bomb range? How common are those? I’m in Texas, so I wouldn’t be surprised if there were a few nearby…
You don't need cops on every corner. They just patrol and shut down the businesses and chase people out of parks and such (just like they did during COVID). Hotheads with guns are irrelevant, there won't be enough of them and you can just have the cops shoot each of them with a rifle.
Yeah I looked at it, looked for the telltale glitches around his lips, tried to listen into the audio pattern. And it sounded slightly off to me, just a little bit. Presumably it's because he's speaking from a teleprompter.
On balance, I'd trust community notes more than my own eyeballing at this point.
I don’t think it’s clear to Putin at all that Trump will offer him a better deal than Biden. If Trump thinks he’s been shortchanged then he’s liable to reverse his position quickly.
For what it's worth, my experience mirrors yours. My eldest is 9 years old, and I remember that when they first started eating solid food ground beef was something like 3 dollars a pound; today it's closer to 8.
My admittedly imperfect impression is that the prices of food and gas have more than doubled since 2016. Yet the official line continues to be that inflation is minimal or an illusion.
Is anyone else here old enough to remember the 5 dollar foot-long promotion at Subway? What does a large Italian BMT cost you today?
What are the factions and members of those factions (both actual politicians and thought leaders/influencers) of the incoming Trump administration? Trump has developed quite a large tent while he was out of office, but I think they will have some big fights once they actually need to do things and not simply criticize the Democrats. (?) marks people I'm unsure how to categorize.
Tech right: Musk, Thiel, Yarvin(?)
Podcast bro: Joe Rogan, Theo Von, RFK (?)
Populist/Pro-worker right (Is this category even real?): Vance(?), Hawley, Tucker(?), Matt Walsh
Trump loyalist: Bannon, Miller, Trump Jr. (?)
Neocons/Deficit Hawks/Old GOP: Kushner, McConnell, Senate republicans that held their nose and supported Trump despite clearly hating him, Ben Shapiro
Are there any categories I'm missing? Is anyone placed wrong? Am I missing any key people? I also want news and podcasts to follow from each faction; here is my list so far, and I'm open to suggestions.
Tech right: A lot of the big and more intellectually interesting right wing accounts on X, Pirate Wires podcast, I'd guess most right wing people here are in this category
Podcast bro: JRE when he invites political people
Populist/Pro-worker right (Is this category even real?): Tucker(?)
Trump loyalist: Bannon's War Room
Neocons/Deficit Hawks: Wall Street Journal opinions, Fox, the token conservative on places like the NYT opinion articles
Skull size is a pretty clear signal for Erectus, I'm happy with skull size variations implying intelligence difference, ceteris paribus. I'm happy with a broad trend of rising intelligence under selection pressure. I do believe in evolution and genetics. But I don't believe that we can precisely chart IQ rising and falling over thousands of years like OP's charts suggest. The level of confidence is too high.
DNA methylation is absolutely relevant to working out which genes are expressed, it's a way of determining epigenetics.
The human body is a very complex piece of machinery that we don't fully understand. This article suggests that the heart can store memories (which are transferred with transplants) which I didn't believe in at all prior to this: https://www.sciencedirect.com/science/article/abs/pii/S0306987719307145
When dealing with such a complex system, with many facets barely known to us, we should be cautious before reaching conclusions - especially if there's no way to test them.
What signs?
According to the World Bank, Russia is now a high-income country. Real GDP per capita growth was at 3.6%! If an Australian politician could deliver that kind of growth, they'd be heralded as a living god and probably get Putin-level approval ratings (as opposed to negative approval ratings).
https://www.worldbank.org/en/about/leadership/directors/eds23/brief/russia-was-classified-as-high-income-country
https://carnegieendowment.org/russia-eurasia/politika/2024/05/russia-war-income?lang=en
Even the Carnegie Endowment is struggling to find much bad to say about Russian wages growth. If Biden had delivered positive real wages growth over his term, I think he would still be in office today. Just look at the chart on page 25. Apparently the crushing impact of Western sanctions in 2022 was less harmful to the Russian worker than whatever was going on in America (or the UK, Germany, Australia...) with inflation. And in 2023 Russia left the US in the dust in real wages.
https://www.ilo.org/sites/default/files/wcmsp5/groups/public/%40dgreports/%40inst/documents/publication/wcms_908142.pdf
China's struggling, failing economy was massively outperforming the vibrant, dynamic US economy in 2022 and 2023, presumably it's still doing so. Real wages, real GDP per capita are rising much faster in China and Russia. They're rising from a lower basis level but are rising fast nonetheless. Yet all we see in newspapers and television is stories of disaster, stagnation and decline over there.