ThenElection

0 followers · follows 3 users · joined 2022 September 05 16:19:15 UTC
User ID: 622 · Verified Email

It's a shared culture of narcissism. People look for identity and meaning in stupid acts of protest, and they imagine the government's role is to provide the stage for it: symbolically totalitarian, but never actually using any violence against you personally, which would interfere with your bragging rights about how righteous and badass you are.

I don't think it's incoherent to say that protesting an evil regime is good and protesting a holy regime is evil, which seems to be the reason for the split view. It's even my view, and likely the majority view: although it might be unwise to publicly fight Hitler or Stalin, I'd definitely be rooting for someone who does. Where I'd differ is that I don't judge either administration as warranting unmanaged or badly managed protest.

The issue is that maintaining public order inherently involves violence, and both Babbitt and Good (and their supporters) thought that they were somehow exempt from facing violence when they were protesting (ironic, since they probably think of the respective administrations as closer to Communist/Nazi than I do).

That explains only the Somali daycare fraud, though. George Floyd wasn't Somali, and Renee Good wasn't trying to save an illegal Somali immigrant. And I don't see Somalis as having some Svengali-like ability to warp the entire culture; Good, after all, had only been in Minnesota for a couple of months, hardly enough time for them to work their hypnotic magic.

Minneapolis has been stagnant economically compared to more "woke" cities. In 2000, Minneapolis had a GDP per capita significantly above the US average; now it's basically average. Maybe it's vibes: when you're relatively treading water, you have worse outcomes on a range of measures?

I had ChatGPT pull data (change in GDP per capita; data sources are the BEA and the Fed):

| City (metro area used) | GDP per capita (2001, current $) | Latest year used | GDP per capita (latest year, current $) | % change (2001 → latest) |
|---|---|---|---|---|
| Minneapolis (Minneapolis–St. Paul–Bloomington, MN-WI MSA) | $46,924 | 2023 | $94,214 | +101% |
| San Francisco (San Francisco–Oakland–Hayward, CA MSA) | $57,487 | 2022 | $159,777 | +178% |
| Portland (Portland–Vancouver–Hillsboro, OR-WA MSA) | $39,601 | 2023 | $86,805 | +119% |
| Seattle (Seattle–Tacoma–Bellevue, WA MSA) | $51,397 | 2023 | $138,947 | +170% |
| Los Angeles (Los Angeles–Long Beach–Anaheim, CA MSA) | $41,367 | 2023 | $100,522 | +143% |
| Boston (Boston–Cambridge–Newton, MA-NH MSA) | $52,592 | 2023 | $122,902 | +134% |
| New York City (New York–Newark–Jersey City, NY-NJ-PA MSA) | $50,967 | 2022 | $110,691 | +117% |
| Chicago (Chicago–Naperville–Elgin, IL-IN-WI MSA) | $43,525 | 2022 | $89,514 | +106% |
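As a sanity check on the % change column, here's a minimal sketch that recomputes it from the 2001 and latest-year values in the table (same figures the comment quotes from ChatGPT/BEA/Fed, so treat them as illustrative rather than authoritative):

```python
# GDP per capita figures (current $) from the table above: (2001, latest).
gdp = {
    "Minneapolis": (46_924, 94_214),
    "San Francisco": (57_487, 159_777),
    "Portland": (39_601, 86_805),
    "Seattle": (51_397, 138_947),
    "Los Angeles": (41_367, 100_522),
    "Boston": (52_592, 122_902),
    "New York City": (50_967, 110_691),
    "Chicago": (43_525, 89_514),
}

# Percent change for each metro, sorted from smallest to largest gain.
growth = sorted(
    (round(100 * (end - start) / start), city)
    for city, (start, end) in gdp.items()
)
for pct, city in growth:
    print(f"{city}: +{pct}%")
```

Minneapolis comes out with the smallest gain (+101%) and San Francisco the largest (+178%), matching the table.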

Supports my hypothesis, but it doesn't really ring true to me.

I think perhaps it's the white people in Minnesota. When I think of the prototypical white person there, I think of some middle manager for Fortune 500 #352. Competent and well-meaning, but not quite a go-getter. And that ends up reflected in the political culture, in that it's not especially responsive to changing circumstances. California, by contrast, has the same issues, but it encountered them earlier, and its system evolved to be resistant to shocks from them. A billion dollars in corruption and fraud, you say? Race riots? We found ways for the system to manage those decades ago.

Minnesota is naive; its system assumes good intent and isn't able to deflect or absorb actors with bad intent.

Every measurement has costs. What if we could go from 98% to 99% accurate by having kids grind for 12 hours a day for grades, every day, while doubling public education spending? That would prevent millions more from being misclassified.

Manuel Noriega was, what, two weeks? New record set? Impressive indeed.

How to make CSAM is widely known, and plenty of places don't cooperate usefully with the USA in stopping it. Despite that, the USA does manage to broadly limit how much it proliferates.

I'm not saying that it's a good idea, and I'm not saying that open weight models could be completely eliminated. I am saying that they could be quite effectively suppressed, as there are plenty of tools that the government can use to enforce a ban, imperfectly but substantially.

The goal wouldn't be to make it so literally no one in the USA could run an open weights model; it would be to add friction points to make it more trouble than it's worth, except for the most dedicated people. You wouldn't need any kind of global agreement, just a national focus and working with large tech companies to limit it. DNS blocks, removing them from Google search results, etc. A relatively small amount of effort can prevent the bulk of casual users from having access to them.

That's just if you get the domestic consensus to look at open weights models as something comparable to copyright violation. If instead the public started seeing them the same as CSAM, you could go a whole lot further: still theoretically accessible, but very rare.

You could make it pretty broadly inaccessible: ban all open-weight models; require any image generation to have strict safeguards and reporting of attempts to authorities; enforce severe criminal penalties. Your existing model would be pretty much untouchable, but it couldn't easily be shared, and a decade from now most copies of it would have been lost to end users. You could even require manufacturers to include firmware on new hardware that bails on unapproved workloads, but that seems like it'd be overkill.

Not saying that this is what I'd like, but it seems doable.

I would have felt very differently: I would have cared much less, quite honestly. "Oh, someone's a weirdo, anyone whose opinion of me changes because of it isn't worth caring about." And I'm not sure that making the AI-generated nude clearly a fake joke (giving her purple skin or whatever) would change anyone's opinions. I think the crux of the matter is that it's a sexual image, and we cordon off sexuality as requiring unique, almost spiritual protections around it.

So, back in high school, someone made a fake photo of me and posted it in a classroom. It wasn't a nude, but it was political, depicting me as Stalin, as I was an outspoken socialist. I was outraged ("the photoshop is not even accurate! I'm a TROTSKYIST!"), and it definitely hurt my feelings and hurt me socially. Pretty clear case of bullying, but, in retrospect, it was pretty hilarious and a useful learning experience. Should that kid have been punished?

I don't think so, and I suspect you don't either (though I'm curious if my suspicion is right). Which shifts the question to, what is the difference between a nonsexual representation and a sexual one? I think, to many people who don't see harm, harm categorically isn't something that can be done with an image or words--sticks and stones can break my bones etc. If people start physically attacking someone, or destroying their property, in response, there is harm, but the harm originates from the physical act, not the instigating image. The introduction of a sexual element doesn't change this. (I'm speaking here in terms of conceptual framework, not legal definitions.)

That doesn't mean that the school shouldn't do anything about the boys--schools can and should regulate behavior above and beyond the minimal standard of harm. But the idea that actual physical violence should be punished less than images and words is weird to me, especially when school administrators had no actual evidence of the images.

Stranger Things was disappointing.

The first season was great, and it was all about the settings and vibes. After that, they didn't know what to do with it: sequels demanded they simultaneously up the stakes and explain the universe. S2 went with a kind of eldritch Lovecraftian approach, which was exciting to me because it's a genre that's nearly impossible to do well (explanations are self-undermining), and that season gave a reasonable go at it. But the task of following through proved too much for the writers, so we got creature features and supernatural slashers instead.

The weak story thus forced the focus onto interpersonal relationships that turned into soap opera, with an ever-expanding cast (with outrageous plot armor) to pander to more market segments with fan service. By season 5, it was impossibly unwieldy.

Will's coming out was entirely unnecessary, but it's important not to treat it as some departure from an otherwise good season. Every scene involved some long-winded heart-to-heart with unearned development. Somehow there's no tension at all: the world is ending, but you wouldn't know it from how the characters act. The final journey to Vecna's lair (which is supposedly on a timer, as it's literally actively traversing a wormhole to destroy our own world) becomes a calm stroll (through a brightly lit, demogorgon-less set) where two guys just talk about their shared crush on a girl and find out that actually we're not so different after all!

So, does it matter that Vecna and the Mindflayer are weaker than the L1 demogorgon in season 1? Nope, because they're entirely secondary to the real goal: shoveling slop and 80s nostalgia to a bunch of Millennial Netflix watchers who want soap opera but want the imprimatur of prestige television.

Injection is for the benefit of the public, not the criminal. At some point we got squeamish about visibly physical punishment, and injection sweeps all of that under the rug, making execution a bloodless, bureaucratic affair.

More cynically, the main thing Trump actually offers is breaches of decorum, which angry voters interpret as the best thing available. Not through any bad faith on his part, but through his short attention span and lack of actual follow-through. The US will be in approximately the same cultural and political position in 2028 as it is today, just as it was in the same position in 2020 as in 2016.

The establishment is happy to use him as a useful patsy, and the hysterical shrieking about him being neo-Hitler confuses people into thinking he is a more substantial threat to the establishment than he is. So he's a vessel to channel discontent and anger into dissolution.

Local change seems critical, but the issue is that 99% of the attention of politically-interested people seems to be at the federal/culture war level, now. A debate about whether a road should be converted into a park becomes a debate of whether the pro-road people are crypto-Trump supporters. City supervisors spend hours debating whether they should pass a resolution supporting the Palestinians. Questions about how to best educate students during COVID get supplanted by questions about naming schools in a progressive way. And these abstract/cultural signifier questions are what people actually vote on even for local elections, instead of focusing on the concrete issues at hand.

The Puritans also engaged in missionary efforts, e.g. the praying towns and Algonquian Bible translation of John Eliot. That seems more like trying to bring Native Americans into the fold than genociding them.

It's useful for governments to be able to name and identify people, for tracking, taxation, etc. The name a person goes by is useful beyond a unique numeric identifier: it allows for easier resolution of the individual a government wants to interact with, as knowledge of the name-individual link is dispersed in the community. Better to just accept a wide latitude of names than to refuse and increase the number of hard-to-track people.

Interestingly, early 20th century Chinese linguists also invented a particular written third person pronoun to refer to a god or deity (祂), again pronounced the same.

Other pronoun tidbits: it was strictly taboo to refer to the Emperor with 他. Using that pronoun, or any pronoun, for him would result in "cancellation."

Looking at the allele frequencies of rs53576:

| Population | A allele freq | G allele freq |
|---|---|---|
| AFR (African) | 0.19 | 0.81 |
| AMR (Admixed American) | 0.36 | 0.64 |
| EAS (East Asian) | 0.65 | 0.35 |
| EUR (European) | 0.35 | 0.65 |
| SAS (South Asian) | 0.45 | 0.55 |
| Global | 0.39 | 0.61 |

It's true that Europeans have a relatively high frequency of G, but it looks like Africans have them beat.
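That comparison can be checked mechanically. A small sketch over the per-population rows of the table (the global aggregate row is left out; the figures are the ones quoted above):

```python
# rs53576 allele frequencies (A, G) per population, from the table above.
freqs = {
    "AFR": (0.19, 0.81),
    "AMR": (0.36, 0.64),
    "EAS": (0.65, 0.35),
    "EUR": (0.35, 0.65),
    "SAS": (0.45, 0.55),
}

# Sanity check: a biallelic SNP's frequencies should sum to 1 per population.
assert all(abs(a + g - 1.0) < 1e-9 for a, g in freqs.values())

# Rank populations by G allele frequency, highest first.
by_g = sorted(freqs, key=lambda pop: freqs[pop][1], reverse=True)
print(by_g)
```

AFR ranks first by G frequency, ahead of EUR, which is the "Africans have them beat" observation.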

Maybe the aggregation here is obscuring subgroup heterogeneity, and Somalis don't share the well-known cosmopolitan universalism of Bantu cattle herders.

> social welfare is too indiscriminate towards those it helps

On the contrary, it's too discriminate. Lots of people are in genuine need of help, but the government allocates funds on the basis of proximity to officials and how useful the recipients are to politicians, forming a toxic positive feedback loop. This money would have done more social good if someone drove through the streets throwing bags of it off the back of a truck.

> But if AI/AGI/ASI is a big deal, then America enjoys a decisive advantage.

AI technological know-how diffuses much faster than AI-driven technology, though. Let's say China is a year behind the US in AI research and engineering when the US reaches AGI. How long does it take the US to integrate it wholesale throughout its economy, replacing pretty much all labor? China will have its own frictions, but plausibly China can cut through physical, infrastructure, legal, and cultural constraints faster than the US. It's not clear which effect would dominate, but it's not preordained that the US would win.

Even a true singularity, if possible, doesn't seem to change that. At some point the US may well have an ASI that has solved all the fundamental physical, engineering, and mathematical issues of the universe while still requiring human doctors, teachers, drivers, soldiers etc. to perform actual labor, while China at the same time is stuck with a year-behind AI that nevertheless has still replaced human labor in all relevant real world domains.

We need to think of how the world would operate when major nations are capable of industrial autarky, because modulo some Butlerian Jihad we will have to deal with it anyway.

Any theories, here? Does every country decide to just sit back, possibly import raw commodities and energy that aren't otherwise attainable, and live in blissful abundance?

Per context window (though for projects I have a template). Psychologically, the process of re-prompting after failure is intolerable to me; probably I'm overdoing it, but it makes the interaction more pleasant to me.

I agree about the creativity it enables: every one or two weeks, my wife asks me for some browser extension or script for a narrow use case, and it's incredibly gratifying to be able to send her a solution in ten or fifteen minutes.

+1.

People often claim that tests are an ideal use case for AI, but that's not my experience at all. Mine is more the opposite: it will write plausible or even correct code for a bug or feature, but then write really bad tests.

I still use AI for them, but I usually have to explicitly describe, in sometimes painstaking detail, what needs to be tested. I think that still saves time, but I'd guesstimate that, on average, using AI (prompting, re-prompting, etc.) takes half the time of just coding it up myself.

My experience: I would estimate around 80% of the code I submit today is LLM generated. It's very useful for reducing the time I spend actually writing code, as a SWE. But 1) that is a minority of the time I actually spend at work and 2) it's eliminating the part of the job that's most calming and pleasurable and even meditative for me.

I have quite good results with a workflow where I painstakingly describe what I want done. I'll spend a lot of time understanding what needs to be done, then 15-30 minutes describing it in detail and supplying the necessary context. It's not quite literal NL-to-code--the LLM doesn't need to be told line by line what to do--but I do not give it space to make architectural or design decisions. It can then more or less one-shot the change, though not consistently enough that I can skip reviewing the code before sending it out for review. And when it comes to testing, the models are surprisingly bad: when I do change code they've written, it's typically to add new tests or delete irrelevant ones (though admittedly by telling the agent that it's a retard and needs to write a specific different test).

So my velocity is increased. But, at least for me, it means less time spent in the most pleasurable part of the job and more time spent on requirements gathering, navigating bureaucracy, and updating spreadsheets for leadership. I fucking hate it. Even though it's probably good for my employer, I have to shed a tear for the death of coding/implementation as an important job skill.

I can imagine LLMs, in the next two years or so, supplanting design/architectural decisions. That makes the situation worse: I'll not be a software engineer, but an engineering manager supervising LLM agents. That's a deep loss to me, and I'm happy that my target retirement date is in 5 years or so.