hooser

0 followers   follows 0 users   joined 2022 October 02 12:32:20 UTC

User ID: 1399

No bio...

A major challenge for comparing literacy (or illiteracy) rates across time or across countries is that the measurements are very different. In the US, "functionally illiterate" means you can decipher the letters and sound words out, but if a sentence is sufficiently complex you can't understand it. (For example, some instructions on tax forms.) In developing countries, "illiterate" means you cannot decipher the alphabet (or kanji, as the case may be).

A while back, a student in my Liberal Arts Math class did a deep dive comparing literacy statistics for the US vs. Bangladesh, because some statistics she found suggested that the US was doing worse. It turned out that the US stats were for "functional illiteracy," while the study in Bangladesh asked its participants to sound out a few written words.

Not the same thing.

When I drive cross-country, McDonald's has the most reliably clean restrooms, and they don't insist on you buying something first. (The one exception I found was in a Denver suburb, where a sign on the bathroom said "For customers only." I asked a worker to let me and the kids in, and she did without any questions and without requiring a purchase. I guess the sign is there to discourage the local homeless.)

The food is also fine. I don't subsist on it, but an occasional chicken sandwich isn't going to kill me any faster than anything else I can get quickly on the road.

I am skeptical that IQ tests measure what we think they measure in developing countries, even those tests that purport to be context-free and that don't require one to be able to read. It takes intelligence and cunning to hunt and forage, or to run a homestead farm, or to navigate life in a shanty-town. I think an American with an IQ of 70 and a Papua New Guinean with an IQ of 70 differ greatly in how well they can take care of themselves.

The US Army doesn't specify the IQ cutoff; some people estimate it at 83 (that's what I remember from McNamara's Folly). The standard deviation of IQ is 15 and the mean is 100, so roughly 13% of people score below 83.

By law, the US Army restricts enlistment from the next band of scorers (at or above the 10th percentile and below the 31st) to no more than 20% of all enlistees in a fiscal year:

The number of persons originally enlisted or inducted to serve on active duty (other than active duty for training) in any armed force during any fiscal year whose score on the Armed Forces Qualification Test is at or above the tenth percentile and below the thirty-first percentile may not exceed 20 percent of the total number of persons originally enlisted or inducted to serve on active duty (other than active duty for training) in such armed force during such fiscal year.

The corresponding IQs would be roughly in the 81-93 range.
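For anyone who wants to check the arithmetic in the last couple of paragraphs, here is a minimal Python sketch of my own (assuming IQ is normally distributed with mean 100 and standard deviation 15, and that AFQT percentiles map straight onto that distribution):

```python
# Sanity check of the IQ/percentile arithmetic above,
# assuming IQ ~ Normal(mean=100, sd=15).
from scipy.stats import norm

MEAN, SD = 100, 15

# Fraction of the population scoring below IQ 83:
below_83 = norm.cdf(83, loc=MEAN, scale=SD)
print(f"Share below IQ 83: {below_83:.1%}")   # ~12.9%

# IQ scores corresponding to the 10th and 31st percentiles:
iq_10 = norm.ppf(0.10, loc=MEAN, scale=SD)
iq_31 = norm.ppf(0.31, loc=MEAN, scale=SD)
print(f"10th percentile ~ IQ {iq_10:.0f}")    # ~81
print(f"31st percentile ~ IQ {iq_31:.0f}")    # ~93
```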

I appreciate the reality check. I asked Perplexity AI for an estimate of the total number of institutionalized people in the US in the 1960s, including prisons, mental institutions, and institutions for the mentally impaired. The peak for mental institutions was about half a million a decade earlier (roughly 0.3% of the population), and the prison population was smaller than it is now, but as for the mentally disabled:

The search results do not provide specific numbers for institutions housing people with intellectual disabilities (then referred to as "mental retardation") in the 1960s. However, these institutions were common at the time, often housing large numbers of individuals.

I know of several institutions for the mentally disabled in my area. They range from assisted living to full-on can't-go-outside-without-an-escort (for those who are mobile). They tend to be out of the way, and people who don't have a family member there (or a close friend with one) tend not to think about these places.

So I am going to guess that a large portion of the bottom-10%-IQ men were indeed in some form of institution that would exclude them from the labor force participation metric.

I would be open to evidence that it was common for mentally challenged men to get hired and work. Maybe with a lower minimum wage, it makes sense. My friend's sister, for example, works at a doggie day-care for like half the federal minimum wage, something like 20 hours a week.

Whereas only 5% of prime age males weren't employed in 1968, today it's nearly 14%.

For your consideration: the US Army doesn't enlist anyone scoring below the 10th percentile on its aptitude test. That's 10% of men whom the US Army considers untrainable, despite having vastly more control over a soldier's life than any other employer has over an employee's. Based solely on that, I would expect there to be at least 10% of men who ought not to be employed.

Where were those men in 1968? Probably institutionalized, and thus not counted in the LFPR. There was massive de-institutionalization in the 70s.

I followed the links to the original reporter, and then did a not-too-deep dive into the FBI Uniform Crime Statistics.

The way I see it, there are three separate questions at hand:

  • 1. How many violent crimes were there in the US, in reality? (In the time-series sense.)

  • 2. How does the FBI collect and measure (or estimate) that statistic? Have they changed that methodology? How do their estimates compare to other good estimates (like the National Crime Victimization Survey, for example)?

  • 3. How have politicians used the FBI statistics?

The first question can only be glimpsed through the dark, distorted glass of statistics, and it's always important to remember that any specific estimate rests on a particular methodology, which can be more or less flawed.

For the second question, the important bit of info is that the FBI changed its methodology at the beginning of 2021 (PDF warning, my highlights):

Since 1930, the FBI has gathered and published annual crime statistics based on data voluntarily submitted by law enforcement to the Summary Reporting System (SRS) of the FBI’s Uniform Crime Reporting (UCR) Program, providing an authoritative perspective on the scope of crime reported to law enforcement in the nation. The SRS data collection was voluntary and not all law enforcement agencies provided data each year. To account for agencies that did not submit data, the FBI began estimating crime in the 1960s, using the reports of participating agencies to produce national and state crime estimates. The aggregate crime counts and estimates from the SRS served data users well over the years, but the growing need for more detailed information on crime known to law enforcement led to the development of NIBRS in the mid-1980s. After NIBRS was established, state crime reporting programs and local agencies could decide if they would report data using SRS or NIBRS. To accommodate that choice, the FBI’s UCR Program collected crime and arrest data through both SRS and NIBRS, and annual national estimates of reported crime were based on the aggregation of both sources of data. In 2016, with support from prominent law enforcement organizations, the FBI announced that the UCR Program would retire the SRS on January 1, 2021.

Looking at the graph of all violent offenses in the US over the past 5 years, it's clear that there are statistical artifacts. For example, before 2021 every year has a bump in December, which is unlikely to correspond to an actual huge increase in crime and more likely reflects police precincts catching up on their paperwork.

The other important bit of info is that in the transition year 2021, only about two-thirds of the population was covered by the reporting precincts, as opposed to roughly 95% before and 90% after. So any comparison to the year 2021 will be junk.

I can't find the links right now, but my recollection is that the stats for the year 2022 were adjusted because a lot more precincts caught up on their reporting for that year.

PS. Yes, the FBI should be more responsible in clearly communicating their updates to the public.

You are proposing an interesting metric, and I would like to see comparisons to other conflict zones before spinning explanations about how Israelis are uniquely predisposed to targeting children.

This opinion poll is a reasonable source of anecdotes based on a snowball sample:

Through personal contacts in the medical community and a good deal of searching online, I was able to get in touch with American health care workers who have served in Gaza since Oct. 7, 2023. Many have familial or religious ties to the Middle East. Others, like me, do not, but felt compelled to volunteer in Gaza for a variety of reasons.

This is not a representative sample--even of "American health care workers who have served in Gaza since Oct. 7, 2023"--nor does the author pretend that it is. As for the rest of the methodology, I have found none described in the article, except:

Using questions based on my own observations and my conversations with fellow doctors and nurses, I worked with Times Opinion to poll 65 health care workers about what they had seen in Gaza.

What were these questions? What was the structure of the interview? Times Opinion isn't exactly known for conducting unbiased research, qualitative or quantitative. In particular, this gives me pause:

Fifty-seven, including myself, were willing to share their experiences on the record. The other eight participated anonymously, either because they have family in Gaza or the West Bank, or because they fear workplace retaliation.

Confidentiality is a keystone of social science research; without it, a participant must consider how their responses will reflect on them in the eyes of a broader audience. Here, however, the vast majority agreed to have their full name, age, and city of residence displayed next to their responses in the New York Times, in an Opinion piece decrying Israeli violence in Gaza. So not only are these respondents not representative, they are advocates.

Now, it's still possible that a healthcare professional who is passionate about the cause will still report the truth. I am not discounting their specific anecdotes out of hand, though I do question their veracity and interpretation more than I would a more neutral observer's. Because these anecdotes come from advocates for a cause, I give more weight to objections like the kind raised by @The_Nybbler when analyzing the X-ray photographs.

So to summarize my point: this particular Opinion piece's evidence does not extend beyond the anecdotes of the specific medical professionals who chose to go to Gaza during wartime and who have demonstrated a willingness to advocate on behalf of Gazans. However, I agree that the metric (# pre-teens shot in the head with a single bullet) / (# pre-teens shot overall) could be a valuable indicator, so long as there is a reasonable attempt at meaningful comparison.

P.S. As an intuition pump: if we took one year in the US and looked at, say, teenage boys, I would expect that metric to be high (more than 50%) due to suicides.

P.P.S. The metric (# healthcare professionals who saw preteens with a bullet wound to the head) / (# healthcare professionals), on the other hand, is not as useful, except maybe to those considering healthcare work in that region as an occupation.

I have an effortpost planned on this point once my sons stop bringing viruses into the house.

Looking forward to the post, and best of luck with building up that immune system!

Thanks for clarifying your perspective!

How about:

Among men, status is gained by demonstrating situationally appropriate competence. When the group already has a clearly established hierarchy of competence, men defer along the hierarchical lines. If the hierarchy is not yet established, or new evidence suggests that the established hierarchy is no longer deserved, men jostle for status primarily in a confrontational style that calls into question the competence of the one who slipped up, as compared to the challenger.

Do you agree with this generalization? If not, what part would you change?

When status is gained from being right in this hierarchy, and the main method of jockeying is attacking their position for being wrong, then naturally this will align towards truth-seeking over time.

I am trying to parse your argument, since it's in the "if A then B" form and you didn't explicitly claim that A is true (here: A = "male status is gained from being right").

If you were asserting that male status is gained from being right, then your argument would imply that no male-dominated society could have top-down beliefs contrary to reality. How I wish that were true, but history proves otherwise. (See most religions, or North Korea.)

(case in point: every geek that got bullied by the popular jocks)

Rather, I assert that men as well as women gain status not from being right but from being convincing. Many men tend to do it in a more straightforward, argumentative way, and many women tend to do it by building coalitions and seeking consensus, I will give you that. (There are always exceptions; I know enough agreeable men and disagreeable women to know that the generalization doesn't always hold.)

Yes, you are right. I agree that when a field's goal is memetic potency, women are more likely to be drawn to it. Thanks for pointing that out.

On the other hand, there is a reinforcing factor at play, too. If someone falls ass-backwards into mathematics, they will still learn how to question assertions and demand proof. If someone gets steered into social studies, they will still learn how to test the waters with some friendlies--and to do it subtly, in an I-came-across-this-thought kind of way--before publicizing an idea more broadly.

The reinforcing factor is more like a loop: because most mathematicians are disagreeable, the confrontational style of argumentation gets more highly prized in that field; because most social studies teachers are agreeable, consensus-building styles get more highly prized in theirs.

Male decision making often tries to figure out what he thinks is true whereas female decision making tries to figure out what belief is most popular.

[citation needed]

Flippant quip aside, I partly agree with your assertion. In my experience, women are more likely to seek consensus. That probably generalizes, since women are about half a standard deviation higher on the agreeableness scale than men, on average.

I disagree that the gender analogy (women : popular ideas) is (men : true ideas). Quite frankly, I see as much popular bullshit spewing from men as I do from women. What I agree to, however, is that (in general) when women discuss a subject they are likely to converge towards a consensus opinion without much overt argument, whereas (again, in general) men will overtly argue for their takes and use the arguments as an opportunity to jockey for position within their group.

Here's the thing: I am a mathematician, and I have worked and argued with plenty of other women in math, tech, and engineering, who tend to be more disagreeable (in terms of Big-5 personality traits) than women in general. The disagreeable women are no more likely to gently gravitate to consensus than the equally disagreeable men.

Meanwhile, when I worked with teachers (who tend to be more agreeable), I had to employ extreme teaching techniques to encourage them to push back on one another's asinine assertions, men as well as women.

On truth-seeking versus popular-ideas-seeking: engineers and techs are just as motivated to determine the truth, be they men or women, because in those fields you test your ideas against reality, and reality doesn't care about the provenance of your ideas. Writers and philosophers are just as motivated to determine what will be popular (or better yet, viral), be they men or women, because in those fields the test of your ideas is their potency as memes--how well they compete for memetic space within your society (more precisely, within the part of that society that determines your social status).

So I assert that the pattern you observe--that women tend to gravitate to popular opinions while men appear to seek the truth--is best explained by two factors:

  • Women are more agreeable than men, in general;

  • Men are more concentrated in fields where ideas are tested against reality, and women are more concentrated in fields where the value of ideas lies in their memetic potency.

PS. This is also a response to @monoamine and @falling-star, giving an N = 1 sample of how a female Mottizen replies to the post.

That's a reasonable hypothesis. These three in particular were very well-adjusted young people with plenty of in-person friends, but maybe in general there is that feedback loop.

Let me generalize:

Suppose that it's costly and disadvantageous to be X, but there is a benefit to being X in certain circumstances; in particular, there is the benefit of in-group support from other X-ers if one is a recognized X. If that's the sole benefit, then only the criteria set by those other X-ers matter. Either there is a way for a non-X-er to join (restricted or not), or there isn't.

Now suppose that a powerful entity wants to benevolently help out the disadvantaged community of X. Then those who are not-recognized-X now have two different incentives for becoming recognized-X: in-group support, and/or a slice of the entity's largesse. If the form of that largesse is finite, then it incentivizes the already-recognized-X-ers to vigorously insist on the X-community criteria for recognition of X-ness. But the tighter those criteria are, and the higher the benefits flowing from the entity, the more likely someone not-recognized-X will insist that they're really X--to the entity or the larger community that entity is trying to impress with its benevolence--even if that gains them nothing from other X-ers but hostility.

So, I predict:

  • If top Chinese universities institute affirmative action quotas for Uygurs, there will be applicants claiming to be Uygur without any documented Uygur ancestry but whose grandma traveled to Xinjiang that one time.

  • If Medicaid becomes available to any recovering alcoholic, there will be applicants who insist they fit the bill because they used to make fools of themselves while tipsy at parties and are still embarrassed by that.

  • If UCLA decides to give scholarships to furries, there will be applicants who say they qualify because they once dressed up as a sexy fox for Halloween.

Yes, most youths watch videos that are already viral. What I observed is that, even then, a lot of the time the youth's motivation for watching those videos is social. Either someone they know shared the video with them, or it's the video that other kids are talking about. "Fear of missing out" is a key driver here. Yes, a lot of it is simply that the stuff on their phones--videos, games--is much more interesting than the boring class. But a lot of it is social, and specifically social with the kids they know in person rather than randos on the internet.

Once I was at this social gathering; my acquaintances brought their teens, the latter also friends. So the adults are all talking to each other, and these three teens are sitting quietly off in a corner, side-by-side, absorbed in their phones. So I watch them, and I see that occasionally one of them giggles, like out of the blue. I casually drift to their corner, peek at their phones. Sure 'nuf, they are texting each other, and sharing links. "Really?", I said, "y'all sitting next to each other, and still texting?" They just gave me this look, like, how-can-you-boomer-possibly-understand?

Kids these days.

The traditional music scenes where young people made up stuff themselves and performed live in front of other youth with no rules seems to be disappearing.

Wait, isn't that TikTok right now?

The middle school I volunteered at had to implement strict restrictions on cell phone use because the students were engrossed in social media, in particular TikTok. From what I saw, the kids weren't merely passively consuming it. Quite a few posted their own videos, and a lot of the classroom distraction / disruption resulted from these locally made posts rather than from viral videos. (Those happened too, of course.) That's youth communicating with one another and jockeying for status and recognition among themselves. Way more interesting than the official school curriculum. (Relevant Far Side cartoon.)

Thirty years ago, a way to impress other youths was to start a band, or to know some good local bands, or to know a local spot where good local bands would be playing. Now, a way to impress other youths is to post a video that goes viral (at least among your group), or to know someone whose video went viral, or to be plugged in to some lesser-known sites that share videos that may go viral and be the first to share them among your friends.

Plus ça change, plus c'est la même chose.

My husband and I headed off a lot of potential conflict by writing our own marriage contract (nothing legal, more like a "memorandum of understanding" of what's important to us). One of the most useful agreements is that besides a shared bank account, we'd each have our own separate accounts. That saved a lot of problems: I can go into a used book store and come out with a stack, and he can order whatever esoteric electronic gadgets he wants for his latest bike-improvement project, and no discussion need occur.

The biggest one, though, didn't make it into the memo, but fortunately happened anyway: we have separate toilet rooms. Separate toilet rooms save marriages.

“I just had so many friends who were roofied by guys that they trusted,” Emie said. It never happened to her. But other girls told her about experiences where they blacked out on a night when they didn't drink much or woke up somewhere with no memory of getting there.

Let's talk about alcohol-induced blackouts, because I don't think people realize how widespread those experiences have become:

Blackouts tend to begin at blood alcohol concentrations (BACs) of about 0.16 percent (nearly twice the legal driving limit)

So, how quickly does a typical young woman reach such a level of BAC? Well, the Department of Motor Vehicles puts out a helpful rule of thumb for half that BAC (0.08):

4 drinks for women and 5 drinks for men—in about 2 hours.

The sex difference matters: women are smaller and have less muscle mass, which affects how quickly the body breaks down alcohol.

A typical sorority girl is probably smaller (lighter?) than the average US woman, and is less likely to eat a heavy meal before going to a party, so I would guesstimate that 6 shots in 2 hours can easily bring her over the 0.16 BAC threshold. Six shots in two hours is just not that unusual. In fact, it's so not unusual that I would expect her friends to say "she didn't even drink all that much".
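As a rough sanity check on that guesstimate, here is a back-of-the-envelope Widmark-formula sketch; the body weights, the 14 g standard drink, the Widmark factor, and the elimination rate are all my own assumptions, so treat the outputs as ballpark figures only:

```python
# Rough Widmark-formula estimate of BAC after a drinking session.
# Assumptions (mine, not the DMV's): 14 g of alcohol per standard drink,
# Widmark factor r = 0.55 for women, elimination of ~0.015 %BAC per hour.

def estimate_bac(drinks, weight_lb, hours, r=0.55,
                 grams_per_drink=14.0, elimination_per_hour=0.015):
    weight_grams = weight_lb * 453.6
    peak_bac = (drinks * grams_per_drink) / (weight_grams * r) * 100
    return max(peak_bac - elimination_per_hour * hours, 0.0)

print(f"{estimate_bac(6, 125, 2):.2f}")  # ~0.24 for a 125 lb woman
print(f"{estimate_bac(6, 160, 2):.2f}")  # ~0.18 for a 160 lb woman -- both above the 0.16 blackout threshold
```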

Do roofies exist and do men use them? I am willing to suppose so. But in my experience, blackouts are simply way, way more common.

The most obvious irony here is how she wrote an entire article to tell us about how the girl friendship is more meaningful than her old boyfriend and her's, but it's clear to anyone who read it that she had much more thought and feeling for Him than for Her.

Yeah, the article doesn't pass the Bechdel test. That's par for the course for Elle, if I correctly remember my brief interest in popular women's mags during my teenage years. The articles (such as they were, between pictures of simpering semi-nudes) alternated between how-to-get-your-man and you-don't-need-a-man-(but-you-do-need-these-shoes).

This isn't feminism in any meaningful sense of the word. Any decent feminist (or someone passing the ideological Turing test for one) would recognize this as an instance of internalized heteronormative cisgender patriarchy, and those more attuned to the zeitgeist would also spot the glaring colonialist paternalism.

(Weirdly enough, I think I can actually defend all those terms as they apply to the OP's summary of the article. I have no desire to read the article, because the musings of a woman who realizes that an open relationship isn't so great lack any element of surprise.)

The US had something similar happen two decades ago: Ana Montes, the senior analyst at the Defense Intelligence Agency (DIA) responsible for Cuba (nicknamed the "Queen of Cuba"), turned out to have spied for Cuba throughout her entire career at the DIA.

This situation isn't as ironic as it sounds. Cuba recruited Montes back when she was a college student, and persuaded her to go into the career that would be of most benefit. I am sure there were others, but this one happened to be the one that succeeded.

Because none of the trivialities of my day mean anything to anyone here I'll get to the point.

I love hearing the trivialities; you tell them well, and they provide a humanizing dimension to the rest of your post. And this particular post is all about the challenges people have in humanizing their professional interactions, so it's very apropos.

I see that HR gets little love here, so I will defend them. The purpose of HR is to protect the company from the heat of human friction (metaphorically speaking). That means defusing interpersonal conflict when it may get out of hand, not escalating it.

For the most part, employees deal with normal interpersonal conflicts themselves, as people do. But occasionally someone can't, and it helps to have a clear process an employee can turn to. That's what a complaint to HR does: it starts this process. Someone from HR then hears the employee out, thanks her for bringing the matter to HR's attention, and assures her that the matter is handled. (The manner of that handling is confidential, but they'll assure her it's appropriate and in line with company policy.)

HR does not burn a valuable manager over one temp worker's complaint about what she sees as a deviation from professional behavior.

The US movement to abolish the death penalty goes back to the 18th century, when multicultural considerations weren't a thing. I will leave this link to Perplexity's quick summary that has further links.

You are correct that currently people are concerned with the death penalty in part because it affects black men more than white men. (And that nobody cares that it affects white men more than Asian men, or black women, etc.)

I'm going to disagree with Freddie on this one. Not about the two specific people he alludes to--for all I know his description is accurate there--but about his general observation of a trend toward self-effacement among successful people.

Success is very, very relative, and the more successful you are in some way, the more keenly you are aware how much more successful some other people are in that same field. Unless you're literally the apex, and who knows even then. The inner view of being successful is very different from the outer view.

When I was studying math in grad school, I was keenly aware how much faster and more prepared some of my fellow grad students were. (I paid much less attention to the students who were slower and less prepared.) When I got my PhD, I was successful (in getting the PhD). And yay for me! But I also understood just how much of a near-miss that success was, and that among my cohort there were other newly-minted PhDs who had much more impressive accomplishments under their belts.

Then I got a tenure-track position at my first choice (a small selective liberal arts college). Again, yay for me! But I understood how much that depended on the very generous bump I got for being a woman (a bump that was even more pronounced back then). I got that bump in getting the interview (I know the other candidate was also a woman, which was statistically unlikely). I got the bump when I got into my PhD program--that was right around the time when all the math departments started getting serious about recruiting women. I got the bump earlier still when, as an undergrad, I went to an NSA-funded summer program literally for women considering mathematics as a career, which generously funded travel to reunions every January at the Joint Math Meetings. The networking opportunities were so good that I got a network even though I suck at networking.

I could go on through other milestones, but I hope that by now I have made my point. Yes, I succeeded, but I have an internal view of what that success entailed and how it compares to others' in the same field. So if someone is impressed that I was a tenured math professor, my natural inclination isn't to run a victory lap.

PS. I do not have impostor syndrome. I figured that if I got accepted / hired / tenured and I wasn't up to snuff, that's their problem. If it didn't work out, I could always go make money.

Interesting Guardian article, thanks for posting! However, I don't see how it's different from using popular songs in enhanced interrogation. (I am supposing that those rooms were used similarly, which may not be the case.)

The Nikken Sekkei gymnasium evokes moon craters for me. I rather like it, but my one beef is that it looks too much like a rock-climbing gym without actually being one. Are the kids allowed to scale those cratered walls?

When I think of bomb shelters, I think of metro stations in Kyiv (e.g.), because of the grade-school drills of taking shelter there in case of a nuclear attack. I hear that US kids stopped doing hide-under-your-desk drills back in the '50s, but back in the USSR we were still doing drills in the '80s. Kids these days have active-shooter drills, though, which provide much more vivid fodder for the imagination.