MollieTheMare
There's a chance, but it's definitely not clear. He also supposedly donated to ActBlue. So his registration could have been for tactical voting purposes, given that Pennsylvania is a closed primary state.
Yeah, the WSJ has a pretty good handle on the pulse of what upper-middle-class people talk about, and provides sufficiently detailed, generally non-offensive takes that you can pass for knowing at least something about topics when they come up in conversation. There was a time for me when easily 9/10 topics that came up at a weekly happy hour with work people had already been discussed in that week's Journal.
Their credulity when it came to the Max Deutsch v. Magnus Carlsen chess match significantly diminished how seriously I take their analysis, but you will at least have some idea of what people are talking about if you keep up with it.
Daytime running lights are required on new cars sold in the EU, but in the US our overlords at NHTSA haven't been able to determine that lights make you more visible.
I'm with you on some form of active illumination whenever the car is in drive.
It's not clear to me why NHTSA takes a different stance than every other safety organization with respect to daytime running lights. Motorcyclists, US auto manufacturers, insurers, and state DOTs all think that active illumination increases visibility, but somehow NHTSA can't find an effect.
watching hundreds of drones driving dirty shitboxes without using their turn signals
Speaking of identical shitboxes with insufficient lighting: is it just me, or are there also far fewer colors of cars on the road now too? Even when they were less common, you used to see a variety of colors on new cars being sold: raspberry-colored Honda Fits, bright orange Mini Coopers, forest green Subarus, golden yellow Scion xBs, etc.
Now it seems like the vast majority of cars are some shade of grey/silver. There's something so bleak about seeing a never-ending stream of grey cars driving on grey pavement, with a backdrop of grey concrete buildings, all under a grey sky. The contrast is especially bad when people refuse to turn their lights on when it's raining.
Checking a few random 2024 models, it seems like cars are only manufactured in ~4-5 greyish colors (black, grey, white, silver, and bluish grey), plus red for some reason. What happened to just regular blue cars? I could have sworn that even 10 years ago every make had a blue option. And even if the manufacturer offers a non-grey color, it's a $1k up-charge.
For some sectors, though, I would imagine time spent learning how much things cost for an ordinary consumer would be valuable for a CEO?
e.g. If you are super out of touch, $15 vs $8 a month for Twitter Blue is approximately zero difference to you, but it could put you on opposite sides of the marginal elasticity curve.
This discussion reminds me of "Neutral hours: a tool for valuing time and energy" (pdf link) by Owen Cotton-Barratt
The central idea is not to count hours spent on an activity by the clock, but to weigh them according to how draining or recuperative they are.
Like whether clipping out a coupon is worth it depends on
- The expected net present value of additional post-tax marginal income from additional work
- Your ability to substitute time
- How long it takes
- How much you like or dislike clipping coupons
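Those factors can be folded into a toy break-even test. A minimal sketch in Python (the dollar figures, the single-wage model of time substitution, and the "enjoyment" adjustment are all hypothetical simplifications, not from the paper):

```python
def coupon_worth_it(savings, minutes, hourly_net_wage, enjoyment_per_hour=0.0):
    """Rough break-even test: clip the coupon if the savings beat the
    opportunity cost of the time spent, adjusted for how pleasant or
    unpleasant the clipping itself is (positive = you enjoy it)."""
    hours = minutes / 60.0
    opportunity_cost = hours * (hourly_net_wage - enjoyment_per_hour)
    return savings > opportunity_cost

# e.g. a $2 coupon taking 3 minutes at a $30/hr post-tax marginal wage:
# opportunity cost is 0.05 h * $30 = $1.50, so it clears the bar.
coupon_worth_it(2.0, 3.0, 30.0)
```

The "neutral hours" idea enters through the enjoyment term: an hour of draining work and an hour of pleasant puttering should not be priced the same.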
Or maybe in this situation an IUD is indicated? It depends on why OP's wife wants to go off hormonal birth control, but it's probably less invasive than either surgical option, mostly reversible, a long-term solution, and generally has high reported satisfaction. Or is that the different form of hormonal birth control OP was referring to? If so, does that also rule out non-hormonal IUDs?
Does this count as a sequel or reboot of The Gods Must Be Crazy? In the original they solve the problem by throwing the demonic device off the edge of the world, but I suppose an axe might suffice.
Typically 1.75-2x speed, with 1.25-1.5x for very fast speakers with reasonably condensed videos.
Greater than 2x is enabled by various browser extensions when "fast-forwarding" or when I failed to resist clickbait but still want to know the proposed answer. I find that at less than 4x I am still able to catch things I would have missed if I just tried to scrub through the video to find what I want.
Would a sufficiently detailed table of contents satisfy you? Assuming reasonably descriptive chapter and section headings of course.
Especially if there is also a table of figures and a list of tables, it seems pretty straightforward to flip through and see how familiar/tractable the content will be.
I do think there is a place for bullets sometimes, but bullets can also be symptomatic of a sort of powerpoint syndrome.
PowerPoint-style presentations somehow give permission to gloss over ideas, flatten out any sense of relative importance, and ignore the interconnectedness of ideas.
If you could get the full argument a book makes from reading a bulleted list, is there really a point to the book?
Or along the same lines there is the classic:
... bullet outlines can make us stupid
Ultimately it argues that poor use of bulleted lists contributed to the loss of the shuttle Columbia.
If people want to use the prevalence of top level female chess players as evidence for something, it probably is worth being somewhat familiar with the literature around it though.
Very roughly, the most common arguments around the gap follow this 2008 paper looking at the German federation. There they argue that participation fully explains the gap at the top level, though they don't really address whether the participation rate itself is caused by self-selection on ability, preference, social pressure, etc. There is a 2010 rebuttal arguing (probably correctly) that modeling Elo with a normal distribution is flawed.
There was a brief resurgence of this genre when The Queen's Gambit miniseries was released, during which the chessbase article was frequently referred to. Curiously, India appears to be the exception, as subsequent analysis on multiple federations reveals.
From these I conclude that:
- The Polgars are truly exceptional
- Part of the top-level performance gap is caused by participation rate, which itself is not really explained. My speculative view is that women are more attracted and/or funneled toward pursuits that are higher EV (in dollars, utilitons, social good, etc.). As the old Morphy quote goes, "The ability to play chess well is the sign of a wasted life."
- Probably there remains some residual gap, though smaller in magnitude than the apparent gap, between the top female and male players. I will equivocate, and not even speculate on the reason for this.
It is probably worth being prepared for the participation rate argument though, since it is the natural rebuttal to references to the composition of top level players as evidence of inherent weakness.
This is also part of his criticism of 1x1=1, in that it axiomatically assumes a rectilinear universe, which is not how the actual universe is.
It is possible to build up a thorough and comprehensive axiomatic theory starting from a geometry without the parallel postulate. But what's described here seems like an extremely painful, sloppy, and intentionally confusing usage of notation. Possibly just wrong, probably not even wrong.
In terms of foundational mathematics, building up from geometric definitions like crossing lines is an extremely cumbersome way of defining your axioms, even if you do not insist on using intentionally confusing notation like 1x1=2. As you said, you immediately run into annoyances defining basic things like the irrational sqrt(2).
If you wanted to make a serious attempt at analyzing alternatives to the conventional axiomatic assumptions, it would be much clearer to begin with variations on Zermelo–Fraenkel set theory, with or without the axiom of choice and the continuum hypothesis. This would be a much more rigorous and clear way of showing how your system produces a non-Peano arithmetic. If someone is unwilling to go through that work, it seems extraordinarily unlikely that they are producing anything interesting, correct, and non-trivial.
reality does not conform to our models
Though the foundational crisis may not be resolvable, the generally accepted formalism provides the necessary mathematical tools to do an extraordinarily good job of describing reality. If someone wants to propose a different formalism, it had better provide a better or more useful description of reality. Saying that the current formalism does not perfectly describe reality, so we should adopt a formalism that is less useful and more confusing, is pure nonsense.
To quote Hilbert (1927 The Foundations of Mathematics):
For this formula game is carried out according to certain definite rules, in which the technique of our thinking is expressed.
Laying out a formalism with overlapping but ill-defined versions of "spin" and "product" is not cleverness or some deep philosophical insight; it's an expression of sloppy thinking.
I assume someone else will be able to answer your specific questions better than me. My general impression is that with the federal tax credit they are fine. Especially as a second car for the household. Also, especially if you have garage parking where installing a charger will be easy. The tires are expensive, but it's more than offset by the cheaper fueling cost. People report mixed experiences with Autopilot, it's pretty good in easy conditions, but does get confused sometimes, with some notable crashes by drivers who overestimated its abilities. I guess Elon throttling you is fine, as long as he doesn't decide to have your car drive you into a jersey barrier.
If you travel a bunch on business and are signed up for your company's preferred vendor rewards program, it should be doable to arrange for one as a rental on your next trip. I know people who regularly get offered a Tesla "upgrade" from both Hertz and Avis. You can also rent them at somewhat reasonable rates if you are flexible with the date and a little lucky, if you want an extended test drive.
but not solve as many problems
This is a common issue: solving problems tends to be effortful, and people tend to avoid it when they don't "have to" do it for homework.
If you can't solve problems, you have a child's understanding. Children understand that things fall down under gravity; that doesn't make them experts on general relativity.
I don't think I have ever met someone in a technical field who is good and hasn't taken the time to do a bunch of calculation, even bona fide geniuses (like Putnam Fellow level).
From Grant Sanderson on self teaching:
I think where a lot of self learners shoot themselves in the foot is by skipping calculations by thinking that that's incidental to the core understanding. But actually, I do think you build a lot of intuition just by putting in the reps of certain calculations. Some of them maybe turn out not to be all that important and in that case, so be it, but sometimes that's what maybe shapes your sense of where the substance of a result really came from.
If you want to feel smart join Mensa. If you want to get something out of being smart, you have to put in the work.
I wasn't planning on publishing the source, since my code is a bit idiosyncratic, but I guess there seems to be enough interest.
A pastebin with the code. Uhh, I guess I didn't put a license statement. Let's say BSD Zero Clause License. Do what you want, but don't blame me if it ruins your love life.
Is there a way to publish a pseudonymous/anonymous gist on Github?
It's the "official" builtin board style forum of (like the 3rd cousin?) of themotte. I think the relationship is roughly like:
lesswrong
└─> slatestarcodex ──> astralcodexten <─> datasecretslox
└──> r/slatestarcodex ──> r/themotte ──> themotte
Obviously the full history is a bit more complicated and there is a bunch of cross mixing between the branches.
It's interesting that what you suggested is almost the opposite of the scenario @Felagund suggested. I suppose a hopeless romantic would not want to risk the potentially corrosive effect of having knowingly settled. I assume that in practice you would combine some knowledge of the current rate, the steepness of the expected falloff, some pure romantic inclination, and some fear of missing out into some heuristic.
The scenario where we keep n from above, but keep going if we still haven't found the one, I do think is interesting. If we set our benchmark at r=sqrt(n), 83% of the time you find your partner before n/e. Assuming (offset) exponentially distributed utility, the expected utility in this case is about the same as in the case where we assumed halting. I guess this is like the plethora of people who marry someone they meet in college? In about 10% of the cases there you manage to find a partner before the expected window closes, and patience is rewarded with about 50% more utility (4.5 vs 3).
I then assumed some very questionable things to set the next boundaries. First, we can transpose to time as above. Second, that we care about marriage with respect to producing children. Putting geriatric maternal age at 35-40, and assuming you would just offset paternal age so we don't have to deal with an extra set of scenarios, I find a new cutoff of 320/256. I think this sort of accommodates @jeroboam's point. In that case not stopping, but being willing to continue into the danger zone, 1.3% of the runs find the one by "40." Of course expected utility is higher at 5.2, but being willing to push age, but unwilling to settle only picked up a small number of additional "successes."
In the remaining 5% cases you eventually find your soulmate with an expected utility of 6.4. You do have to wait exponentially long though, with a median age equivalent of 67, and a mean of 343!
Setting the high water mark at n/e, but being unwilling to stop is similar in utility. Now you've eliminated the 3 unit of expected utility bucket, and the 4.5 unit utility bucket has 63% weight. Your willingness to go into the (questionably) age equivalent 35-40 bucket also preserves 7% of the trials. By setting your benchmark so late though, 30% of the time you miss the critical window. The higher expected utility, I guess, represents it being totally worth it to find your soul mate, assuming there was no penalty for waiting past geriatric pregnancy age.
@self_made_human don't worry, I know these simulations are entirely irrelevant to us denizens of themotte, hence the fun thread and why I included the note on n <= 7, ಥ_ಥ
The Fussy Suitor Problem: A Deeper Lesson on Finding Love
Inspired by the Wellness Wednesday post by @lagrangian, but mostly for Friday Fun: the fussy suitor problem (aka the secretary problem) has more to teach us about love than I initially realized.
The most common formulation of the problem deals with rank of potential suitors. After rejecting r suitors, you select the first suitor after r that is the highest ranking so far. Success is defined as choosing the suitor who would have been the highest ranking among the entire pool of suitors (size n). Most analyses focus on the probability of achieving this definition of success, denoted as P(r), which is straightforward to calculate. The “optimal” strategy converges on setting r = n/e (approximately 37% of n), resulting in a success rate of about 37%.
However, I always found this counterintuitive. Even with optimal play, you end up failing more than half the time.
In her book The Mathematics of Love, Hannah Fry suggests, but does not demonstrate, that we can convert n to time, t. She also presents simulations where success is measured by quantile rather than absolute rank. For instance, if you end up with someone in the 95th percentile of compatibility, that might be considered a success. This shifts the optimal point to around 22% of t, with a success rate of 57%.
Still, I found this answer somewhat unsatisfying. It remains unclear how much less suitable it is to settle for the 95th percentile of compatibility. Additionally, I wondered if the calculation depends on the courtship process following a uniform geometric progression in time, although this assumption is common.
@lagrangian pointed out to me that the problem has a maximum expected value for payoff at r = sqrt(n), assuming uniform utility. While a more mathematically rigorous analysis exists, I decided to start by trying to build some intuition through simulation.
In this variant we consider payoff in utilitons (u) rather than just quantile or rank information. For convenience, I assume there are 256 suitors.
The stopping point based on sqrt(n) grows much more slowly than the n/e case, so I don’t believe this significantly alters any qualitative conclusions. I’m pretty sure using the time domain here depends on the process and rate though.
I define P(miss) as the probability of missing out, i.e., accidentally exhausting the suitors and ultimately “settling” for the 256th suitor. In that case you met the one, but passed them up to settle for the last possible person. Loss is defined as the difference between the utility of the suitor selected by the stopping rule and the utility that would have been gained by selecting the actual best suitor. Expected Shortfall (ES) is calculated at the 5th percentile.
I generate suitors from three underlying utility distributions:
- Exponential: Represents scenarios where there are pairings that could significantly improve your life, but most people are unsuitable.
- Normal: Assumes the suitor’s mutual utility is an average of reasonably well-behaved (mathematically) traits.
- Uniform: Chosen because we know the optimal point.
For convenience, I’ve set the means to 0 and the standard deviation to 1. If you believe I should have set the medians of the distributions to 0, subtract log(2) utilitons from the mean(u) exponential result.
Running simulations until convergence with the expected P(r), we obtain the following results:
| gen_dist | r | P(r) | P(miss) | <u> | <loss> | sd_loss | ES_5 | max_loss |
|----------|---------|------|---------|-----|--------|---------|------|----------|
| exp | n/e | 37% | 19% | 2.9 | 2.2 | 2.5 | 7.8 | 14.1 |
| exp | sqrt(n) | 17% | 3% | 3.0 | 2.1 | 1.8 | 6.6 | 14.8 |
|----------|---------|------|---------|-----|--------|---------|------|----------|
| norm | n/e | 37% | 19% | 1.7 | 1.2 | 1.5 | 4.6 | 7.0 |
| norm | sqrt(n) | 18% | 3% | 2.0 | 0.8 | 0.8 | 3.3 | 6.3 |
|----------|---------|------|---------|-----|--------|---------|------|----------|
| unif | n/e | 37% | 19% | 1.1 | 0.6 | 1.0 | 3.2 | 3.5 |
| unif | sqrt(n) | 17% | 3% | 1.5 | 0.2 | 0.5 | 2.1 | 3.5 |
What was most surprising to me is that early stopping (r = sqrt(n)) yields better results for both expected utility and downside risk. Previously, I would have assumed that since the later stopping criterion (r = n/e) is more than twice as likely to select the best suitor, the expected shortfall would be lower. However, the opposite holds true. You are more than 6 times as likely to have to settle in the n/e scenario, so even if suitability is highly skewed as in the exponential case, expected value still favors the r = sqrt(n) case! This is a completely different result from the r = n/e rule I had long accepted as optimal, and the effect is far more extreme than even the quantile-time-based result.
All cases yield a positive expectation value. Since we set the mean of the generating distributions to 0, this implies that on average having some dating experience before deciding is beneficial. Don’t expect your first millihookup to turn into a marriage, but also don’t wait forever.
I should probably note that for low but plausible n <= 7, sqrt(n) is larger than n/e, but the whole number of suitors means the optimal r (+/-1) is still given in the standard tables.
One curious factoid is that actuaries are an appreciable outlier in having the lowest likelihood of divorce. Do they possess insights about modeling love that the rest of us don't? I'd be very interested if anyone has other probabilistic models of relationship success. What do they know that the rest of the life, physical, and social sciences don't? Or are they just more disposed to finding a suitable "good" partner than the one?
first grade teacher
Based only on this, isn't the average elementary education major's IQ 108 based on the old SAT data? The gap between 108 and 120 is still pretty healthy.
120 (80thp)
Am I messing up the IQ quantile conversion, or was there an error up-thread? Using a normal with mean 100 and SD 15, I get:
| IQ | p |
|-------|-------|
| 140 | 0.996 |
| 135 | 0.990 |
| 120 | 0.909 |
| 112.6 | 0.800 |
| 110 | 0.748 |
| 108 | 0.703 |
So a relaxation to a requirement of 120 would only be a 10x wider filter rather than 20x.
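For anyone who wants to check the conversion themselves, the percentiles fall out of the standard normal CDF, which can be computed with nothing but `math.erf`:

```python
import math

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile of an IQ score under a normal distribution,
    via the standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for iq in (140, 135, 120, 112.6, 110, 108):
    print(f"{iq:>5}: {iq_percentile(iq):.3f}")
```

The same function also gives the filter widths directly: 1 / (1 - iq_percentile(120)) versus 1 / (1 - iq_percentile(135)).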
I kind of assumed, based on a vague recollection of OP's claimed achievements, username (as implied major), and desire for a 135+ IQ partner, that their (maybe self-assessed?) IQ was at least 140, since if it were "only" 135 it would be unreasonable to set a lower bound at 135.
At 140, the gap to 110 is 30 points, which is the same gap as 100 to 70, or average to borderline intellectually disabled. I do think it's possible for 140 paired with 110 to work, which is why I put it as conditional on the relationship you expect with your children. There is a whole set of life experiences you likely will never be able to share with your children. That's sort of based off a crude model of averaging the parents' IQs and assuming a 10-point regression to the mean: (140 + 110) / 2 - 10 = 115. I'd be pretty interested if someone has a less ad-hoc way of calculating this.
It can at least work in fiction though, season 6 episode 9 of House "Ignorance Is Bliss" has an IQ 178 married to an IQ 87.
- Lower IQ filter down to >120 (80thp) as opposed to ~135(99p).
- Attractive.. okay keep this one, but don't be a k-drama protagonist about this
- Politics - For the most part, drop this.
I do wonder about these filters.
For IQ it depends on how much you value producing high-IQ children. I assume it's pretty hard to estimate the distribution of outcomes, but if you were to go down to 110 like @2D3D suggests, it might be unreasonably hard to relate to both your wife and your children... I suppose if you're willing to go down the embryo-selection road, but then you would also have to find a partner who would be into that.
Attractive—where does 1 in 3 here, conditional on 25-34 and high IQ, place them on absolute attractiveness? I would assume that given youth, IQ, and conscientiousness the person would be well above average in attractiveness to start with, even if only from the correlated socioeconomic advantages they likely enjoyed growing up. I mean, how many ugly people do you see walking around the campus of, say, Stanford?
For politics, a 1 in 2 filter would be compatible, but not necessarily exactly aligned? Given how niche the politics of someone who posts regularly on the Motte probably are, I suppose it would be hard to filter any more generously without admitting intolerable incompatibility.
It is quite ironic that one of the biggest contributors to CO2 emissions reductions is fossil fuel companies fracking so aggressively that they drove the price of natural gas negative at some points. Ultimately substituting NG for coal is probably a net benefit, but I'm pretty sure environmental activists are not happy with the growth of NG as an energy source.
It's been more than 15 years since An Inconvenient Truth came out and the IPCC won its Nobel Peace Prize. In that time it would have been totally possible to replace essentially all electricity production in developed countries with Gen III+ nuclear plants and make substantial progress on Gen IV plants. Instead, without utility-scale storage, the focus on growing intermittent renewables has only entrenched NG peaking plants as the dominant on-demand electricity generation source.
That's more or less what I was gesturing at with 0.005% of GDP.
Though I would prefer they not exhaust the world's supply of helium to do it. It's a very useful industrial gas, and basically not renewable. Using hydrogen balloons would be much more "sustainable," though not super feasible to do at small scale. Using their cost projections you actually get ~0.05% of gross world product per year. I assumed you could get the cost down with economies of scale by using some mix of calcite substituting for the sulfur and hydrogen substituting for the helium.
Are you in the US, and if so what region? I recall there was a blog in the early 2010s where a guy did $2 a day, but I don't think he hit 2g/kg bodyweight protein. That probably leaves some extra headroom, even with inflation, for a few extra grams of protein. I recall he had a bunch of open-face peanut butter and banana sandwiches. It did require a bunch of annoying couponing to hit his budget as well.
Along those lines, the "Big On A Budget" series from Animal is a bit of a cult classic. The ones with Evan Centopani are probably closest to what most people would consider a healthyish diet. The budget was $50 a week in early-2010s dollars, and they didn't include the supplements they are selling. Offsetting that, most people are probably not trying to feed a bulking 100kg+ bodybuilder. You generally see a bunch of oats, rice, eggs, broccoli (if they include vegetables), and chicken breast. I am a fan of using broccoli slaw to save prep time, as popularized by Chad Wesley Smith, but it's probably not worth it if your budget is that low.
Realistically $100/month is a very small food budget if you are in the US, especially for those protein goals. The USDA Thrifty Food Plan currently puts the budget for a 20-50 Male at $303.90/month. If you are US based and really only have a $100/month budget for food you likely qualify for SNAP benefits, and should also consider food banks.
Edit: I managed to dig up the couponing thing. It was $1 A Day, and it did involve annoying coupon shenanigans that are probably less common now. I think this site is also one I had thought of from the same era; there is a free PDF of a $4/day cookbook. I could have sworn there was another n$/day cookbook from that era that involved a bunch of baking, but apparently I never archived it.