It's plenty good. The prompt basically just says not to be too enthusiastic, and to keep it short. I've been writing down my manual messages to prompt it with as examples, but am not even using that yet.
E.g., responding to a match's picture of a woman standing in front of a field of sunflowers, captioned "My happy place," it wrote: "Sunflowers might just be the happiest flower there is. What's your favorite outdoor place to relax?" A little extra, but it shows "I" looked at the picture, makes it about her, and is easy to answer/start a conversation with - not bad imo.
If messaging is your bottleneck, you can use basic chat interfaces for response ideas. I would not overestimate how much the specific message matters - I'm pretty sure it's more of a rule-out than a rule-in kind of thing. Just don't be a creep basically.
I'm automating Hinge. Android emulator, pyautogui, PIL, GPT-4o. It's almost too easy.
The flow (a rough code sketch follows the list):
1. pyautogui: take a screenshot. With a 1:8 aspect ratio on the emulator, this gets the whole profile in one shot. Earlier versions scrolled and stitched screenshots together, which mostly worked, but boy do I feel stupid for not thinking of this sooner.
2. AI: prompt to extract information from the info section (height, job, age, education, etc), an assessment of personality (nerdy, travel loving, high fashion, etc), a physical description (weight, race, hair color), and an overall assessment of whether she's my personality/physical type. (Note: the goal is nerdy but hates travel and isn't high fashion!)
3. python: ignore literally all of #2 except the job and education. If either matches a whitelist of terms that signal smarts, proceed.
4. PIL: split the screenshot into sections, using the like buttons as delimiters.
5. AI: transcribe (for prompts) or describe (for images) each section separately and provide a response. (My favorite part: I have it refer to her as The Candidate, which is how we have to write interview feedback at work.)
6. AI: given all the transcriptions/responses, pick the best one.
7. pyautogui: click the heart button, type the response, send.
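For the curious, here's roughly what the glue looks like. This is a minimal sketch, not my actual script: the screenshot region, crop bounds, click coordinates, whitelist terms, and prompts are all made-up placeholders.

```python
import base64
import io

import pyautogui
from openai import OpenAI

client = OpenAI()

# Placeholder whitelist -- the real one is job/education terms that signal smarts.
SMART_WHITELIST = ["engineer", "phd", "professor", "researcher"]

def ask(prompt: str, image) -> str:
    """One GPT-4o vision call over a PIL image."""
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
    )
    return resp.choices[0].message.content

# 1. One tall screenshot of the 1:8 emulator window (region is a placeholder).
profile = pyautogui.screenshot(region=(0, 0, 400, 3200))

# 2-3. Extract the info section, then gate on the whitelist.
info = ask("Extract The Candidate's job and education from this profile.", profile)
if any(term in info.lower() for term in SMART_WHITELIST):
    # 4-5. Split into sections (fixed crops here for simplicity; the real
    # version splits on the like buttons) and draft a response for each.
    sections = [profile.crop((0, y, 400, y + 800)) for y in range(0, 3200, 800)]
    drafts = [ask("Transcribe or describe this section, then draft a reply.", s)
              for s in sections]
    # 6. Pick the best response.
    best = ask("Pick the single best reply, verbatim:\n" + "\n---\n".join(drafts),
               profile)
    # 7. Like, type, send (coordinates are placeholders).
    pyautogui.click(180, 2900)
    pyautogui.typewrite(best, interval=0.02)
    pyautogui.press("enter")
```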
Costs me about $0.04 to reject, $0.10 to message. I think I can get that down some. I only ran it for one batch, and it got a match faster than I normally do. Small sample size, but I am optimistic.
As to why #3 is so simple: I initially had a hand-written weighted average of all the extracted attributes, but looking at the actual behavior I realized that:
- Really all I care about is that she's smart and not terribly fat
- GPT-4o is not good at telling me if she's fat, so far.
Favorite kerfuffle: it messaged a woman, shown in a photo next to a giant 10 ft novelty plant pot: "is that enormous, or are you tiny?" It... was certainly not the latter.
This raises some questions for me:
- Do I let it do more than the first message? Probably not - it's just the endless swiping/messaging into the void I dislike. Conversion rate match -> date is tolerable.
- Could I let it literally do everything up to and including putting a calendar event on my calendar? Probably so - I'd say it'd cut my conversion rate in half.
- Do I admit I'm doing this? n=1, but I did, immediately, and it went well for me.
Wouldn't "we killed the patient through inaction" also leave them vulnerable to prosecution/malpractice claims? If I show up to the ER with a gunshot wound and they say they think it's illegal to treat me, go away, I imagine I have an easy lawsuit to win.
Is it a crippling fear of 🤓 emoji?
I'm old. What does this mean?
96% to win, 92% to win the popular vote. How is that strong a conclusion possible after all the last-minute surprises last year? There are huge amounts of uncounted votes in swing states.
Context: I strongly do not think there was a material amount of fraud last time around, and I don't expect there will be this time either
finalized tomorrow
Oh man, I wish. Back in the good old days, I think (?) we used to get same day election results, but not anymore.
Even barring the real possibility that this gets tied up in the courts for weeks or even months, I don't expect a same-day result. The last election was held on Tuesday, November 3, but results were not confidently projected until November 7, and the last swing state (Georgia) was not called until November 19. Per Wikipedia:
> The general election was held on November 3, with voters directly selecting their state's members to the U.S. Electoral College. On November 7, most national media organizations projected that Biden had clinched enough electoral votes to be named the U.S. president-elect.
More detailed timeline from 2020:
| State | Electoral votes | AP race projection | First votes reported |
|---|---|---|---|
| Arizona | 11 | Nov. 4 at 2:51 am | Nov. 3 at 10:02 pm |
| Georgia | 16 | Nov. 19 at 7:58 pm | Nov. 3 at 7:20 pm |
| Michigan | 16 | Nov. 4 at 5:58 pm | Nov. 3 at 8:08 pm |
| North Carolina | 15 | Nov. 13 at 3:49 pm | Nov. 3 at 7:42 pm |
| Nevada | 6 | Nov. 7 at 12:13 pm | Nov. 3 at 11:41 pm |
| Pennsylvania | 20 | Nov. 7 at 11:25 am | Nov. 3 at 8:09 pm |
| Wisconsin | 10 | Nov. 4 at 2:16 pm | Nov. 3 at 9:07 pm |
A comparison I haven't seen posed: Kamala vs Hillary. I think it points to a Donald victory: he beat Hillary, and Hillary was the stronger candidate, so he'll beat Kamala. (Meta: why is it that Trump is rarely referred to by first name?)
Hillary has the stronger resume: U.S. senator (2001–09) and secretary of state (2009–13) under Obama. Compare to Kamala: attorney general of California (2011–17), U.S. senator (2017–21), VP (2021–). Or maybe it's a tie, if you're somehow impressed by her time as VP.
Criticism of Hillary's demeanor centered on her being elitist and robotic, which still beats Harris's positionless word salad.
Trump 2016 was much scarier: as a total unknown, it was at least a little more credible that he'd do, uh, much more than hold office while three Supreme Court seats opened up.
I agree with the observation, but not the reason. I'd say it's more like: it's easy to be nice when life is nice and easy. That, and while the hypothetical job pays only a few dollars, it comes with a chance at Harvard per six months worked - an excellent hourly rate.
I find the "six figures" part interesting. If you take it to mean 100k, that's...not that impressive.
I'm seeing $100k at the 80th percentile in the US at age 30; $127k/$170k/$300k get you to the 90th/95th/99th. At age 27, $100k is the 91st percentile; $121k/$190k get you to the 95th/99th.
Presumably $100k sits at a lower percentile among men specifically (though probably a higher one among black men). Downgrade the impressiveness again in blue areas (cities).
This raises the question: what percentile or dollar amount is impressive enough to offset what degree of attractiveness? Or more broadly, what is the marginal utility as you move through those levels?
Can Bob afford a tutor? The instructional and executive functioning value there is huge (with a good tutor).
In particular, I think they could really help you know what is worth studying. E.g. probably you can skip trig identities, and you can certainly skip Cramer's rule.
Even better, find someone in the polisci program and ask them what you actually need to know. There is probably a big difference in what you need to know to get the job, and what you need to know to do the job. I'd focus on the former.
This week, in review:
High: had a first date that I think I'm actually more excited about than Ms. Definitely, and there will certainly be a second. Not quite as much in common, but still a lot, still brilliant and attractive, and just...better vibes. More stable and peaceful.
Low: buried the dog.
+1 to spaced repetition.
The other technique I will add, which I think underlies memory palaces, is...let's call it "deep engagement." Rather than just trying to remember by rote, deeply engage with the knowledge by connecting it to other knowledge. You've now added multiple recall points in your brain for the single fact, and as long as any of them is intact, you can get the fact back.
In the case of memory palaces (which I find overhyped and not personally useful), that other knowledge is a location in the memory palace. E.g. if yours is Pokemon and you are trying to remember a grocery list, maybe you picture Pikachu eating a watermelon. The element of the memory palace itself (Pikachu) is by design easy to remember. The visual of Pikachu eating a watermelon connects to enough other things (my memories of eating watermelon, a chuckle at the visual, etc) to provide redundant encoding of "watermelon."
In the case of learning physics, you can:
- (no deep engagement) cram equations in your brain, regurgitate them at the top of a test, then reference them, OR
- (deep engagement) derive them, graph them, do experiments to measure them, think about their asymptotics, etc
In the case of politics, you can:
- (no deep engagement) memorize the latest fact about the advancing troops in Ukraine
- (deep engagement) think about why that movement was made, what might happen next, the experience of the soldiers during it
But is this not just trivial recall that could be handled easily by a computer, or a scrap of paper?
So, no, it isn't - it's redundant encoding that gives you more threads by which to remember. In CS terms, I no longer have to linearly loop through my list-o-facts; instead, I map quickly to the needed fact via any of a number of hashes (connections).
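A toy illustration of that analogy (nothing here is a real model of memory, obviously):

```python
# Rote memorization: one fragile pathway, scanned linearly. Miss the
# exact cue and the fact is gone.
facts = ["watermelon", "eggs", "bread"]

def rote_recall(cue: str):
    for fact in facts:
        if fact == cue:
            return fact
    return None

# Deep engagement: many cues map to the same fact, so losing any one
# connection still leaves other routes in.
connections = {
    "pikachu": "watermelon",
    "picnic last summer": "watermelon",
    "that ridiculous visual": "watermelon",
}
print(connections["picnic last summer"])  # -> watermelon
```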
Link to landing: https://youtube.com/watch?v=nVNIoQUcFI4&t=48
Landing legs are heavy, and any mass you lift up lowers payload. Notably, the relationship is exponential, so it lowers payload more than you would expect.
The laziest possible search gives me 550,000 kg for the rocket, 8,000 kg for payload, and 2,000 kg for the landing legs - so about 0.36% of the total but an astonishing 25% of payload. Given the exponential relationship, even that fraction of a percent could have real impact, so 25% is somewhere between "holy shit" and "did lagrangian mess up the math and/or use poor data?"
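For intuition, a back-of-the-envelope pass with the Tsiolkovsky rocket equation; the exhaust velocity and delta-v below are assumed round numbers, not real figures for any particular rocket:

```python
import math

V_E = 3_000.0   # m/s, assumed effective exhaust velocity
DV = 9_000.0    # m/s, rough delta-v to orbit (assumed)

# Tsiolkovsky: dv = v_e * ln(m0 / mf), so the required wet/dry mass
# ratio grows exponentially with delta-v:
ratio = math.exp(DV / V_E)            # e^3 ~= 20x
print(f"wet/dry mass ratio: {ratio:.1f}")

# At a fixed ratio, every kg of dry mass you keep (legs, tanks,
# engines) comes straight out of what's left over for payload:
m0 = 550_000.0                        # kg, wet mass from the lazy search above
dry_budget = m0 / ratio               # ~27,000 kg for structure + payload
legs, payload = 2_000.0, 8_000.0
print(f"dry budget: {dry_budget:,.0f} kg")
print(f"legs as a fraction of payload: {legs / payload:.0%}")  # 25%
```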
I love my electric kettle with temperature/time settings for different kinds of tea. Keeps things warm, and can even have the tea ready for you at the time you ask for it. Less expensive models also exist.
Tutoring? Wyzant etc, or for that matter Craigslist (either posting or responding to posts)
Rough. On the bright side, nominative determinism may have a silver lining for your Alpha Gal.
It's not exactly hard to convince a doctor to prescribe you stimulants for "your ADHD," and then you're in the clear legally. Not trying to delegitimize real diagnoses or legitimate use of the meds (e.g. my own), but the fact remains.
Yeah, that's rough, and it's not like there's a store you can try it out at. I've had, I want to say, eight keyboards at about that price point purchased by employers...
Numpads are overrated. I won a nickel in a bet with a coworker that I could type numbers faster without one. I love my keyboard.io Model 100 split ortholinear walnut thumb-cluster keyboard.
Whatever you do, just remember that the correct time to start shaving your head is about two years before you finally start shaving your head.
Interesting, say more?
The note provides the inductive base case.
(Notation: number with blue eyes, number with brown eyes, whether the note has fallen from the sky saying someone has blue eyes)
(1, 0, False): No information on their eyes. They never leave.
(1, 0, True): No one else could possibly have blue eyes. They leave on day 1.
(1, 1, False): Same as (1, 0, False). No one leaves.
(1, n, False): Same as (1, n-1, False). No one leaves.
(2, 0, True): On day 1, each blue-eyed person reasons that if their own eyes are brown, the other person is in (1, 1, True) and will leave on day 1. The other person doesn't leave. They each leave on day 2.
(n, 0, True): On day n-1, each reasons that if their own eyes are brown, the other n-1 people are in (n-1, 1, True) and will leave on day n-1. This doesn't happen. All n people leave on day n.
(2, 0, False): On day 1, each reasons that whether they are blue or brown in (1, 1, False), the other person will never leave. The other person does not leave. This gives no information. No one ever leaves.
Etc
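The whole induction compresses into a few lines; here's a sketch in Python (my own framing of the cases above):

```python
def leave_day(blue: int, note: bool):
    """Day on which all blue-eyed islanders leave, or None if never.

    Encodes the cases above: without the note there is no base case
    and no one ever leaves; the brown-eyed count is irrelevant
    throughout, as the (1, n, False) line shows.
    """
    if not note or blue == 0:
        return None
    if blue == 1:
        return 1  # (1, *, True): sees no other blue eyes, leaves day 1
    # Each blue-eyed person waits to see whether the other blue - 1
    # leave on day leave_day(blue - 1); when they don't, everyone
    # concludes their own eyes are blue and leaves the next day.
    return leave_day(blue - 1, note) + 1

assert leave_day(1, True) == 1       # (1, 0, True)
assert leave_day(2, True) == 2       # (2, 0, True)
assert leave_day(5, True) == 5       # (n, 0, True)
assert leave_day(3, False) is None   # no note, no one ever leaves
```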
3. Interesting, I think saving it for the first date might be a good call. I think it's a good story at any time, but probably more useful there.
Goal is love for sure. If I just wanted to get laid, the algorithm would not need the "is very smart" filter, and even the "not very fat" filter could be relaxed...
Thanks, I've wanted to do it for a while. I really thought it would be harder. I'm curious to try o1-preview as well, although it's 3-4x the cost. GPT-4o-mini was definitely not adequate.