roystgnr

0 followers   follows 0 users   joined 2022 September 06 02:00:55 UTC

Verified Email

User ID: 787

No bio...

There will be questions people will definitely be wanting answers to, and they will get them in due time.

Wow, that sounds ominous.

Though, whatever it is, don't underestimate the ability of future generations to just move on.

My knowledge of my grandfather grew from "He died when my dad was a kid" to "murdered in his bed during military service in a tumultuous far-off land" to "in an unsolved crime story full of sex and coverups and intrigue", ending up with an email conversation with a historian who "thought some things about the official reports didn't add up" ... and that was 20 years ago, with no closure, but it's pretty much water under the bridge to me and all my cousins. You'd think that, after I later inherited a priceless artifact (not super valuable, I just don't have the documentation that I think would be necessary if I ever wanted to legally resell it) that was once my grandfather's, the story should have picked up from there, but nope, no visits from master cat burglars or Russian agents or anything. I didn't even think to check it for secret compartments.

Damn, I've never actually typed that all out before. Now I kind of wish I had some recorded phone calls to listen through.

the failure of the kibbutzim.

"Only once in history did democratic socialists manage to create socialism. That was the kibbutz. And after they had experienced it, they chose democratically to abolish it." - Joshua Muravchik, "The Mystery of the Kibbutz", as quoted in this fascinating GrokInFullness blog post

It feels like "failure" is too strong a word to use for that, though. Even if it didn't work well enough economically, it was at least a counterexample to the old "you can vote your way into socialism but you have to shoot your way out" joke.

not my professors as they are completely unwilling to talk about not academia

All of them??

I've definitely known many professors who reserve their highest respect for tenure-track professor jobs, but they'd all placed PhD students at government labs and in industry and been proud to do so. The long-term rate of academic PhDs creating new academic PhDs has to average to the population growth rate; this implies that either you're sending most of your grad students outside of academia deliberately or you're simply at the top of a pyramid scheme.
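The replacement-rate arithmetic is easy to sketch. A toy steady-state check, with made-up but plausible numbers (students per career and growth rate are my assumptions, not anything from the data):

```python
# Toy steady-state model: what fraction of a professor's PhD students
# can themselves become professors, if faculty headcount grows slowly?
students_per_career = 15        # PhDs graduated over a ~30-year career (assumed)
faculty_growth = 1.01 ** 30     # ~1%/yr growth in faculty slots over that career

# Each retiring professor frees one slot, and growth adds a fraction more,
# so roughly `faculty_growth` slots open per professor per career.
share_staying_in_academia = faculty_growth / students_per_career
print(f"{share_staying_in_academia:.0%}")  # ~9%
```

Under those assumptions roughly one student in eleven can stay on the tenure track; the rest must go to industry, government labs, or elsewhere, regardless of anyone's preferences.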

Even the ones who had zero intention of leaving academia themselves were proud of their networks outside academia. The obvious motive is that having former students and colleagues in industry gives you a constant source of blurbs to make research grant proposals sound more impressive; perhaps less obvious is that that increases their own BATNA when negotiating with their university or moving to another. One of my favorite Asimov quotes, about his conflicts as a chemistry prof and non-fiction writer versus his administration at BU:

In the course of my fight with the school, I couldn't help but notice that I became a pariah. [...] Once, however, a fellow faculty member, making sure we were unobserved, said to me, "Isaac, the faculty is proud of you for your courage in fighting the administration for academic freedom."

I said, "There's no courage involved in it. Don't you know my definition of academic freedom?"

"No. What's your definition of academic freedom?"

I said, "Independent income."

There's typically an August and a September test date that can get your scores released in time for even early applications.

It's not uncommon to take an earlier SAT, though, late junior year, to make sure you can retake it if you blow it the first time.

American cities were insanely dangerous and crime ridden in a way that even the worst parts of now can't hope to emulate

While it's true that violent crime danger in most places in the US dropped again in the 90s, the "worst parts of now" are still pretty awful. NYC's homicide rate in 1980 was 12.7/100k/year, nearly triple what it is today, but that's still barely more than half of the newly "low" rate I was just congratulating Baltimore on, and it's a quarter of the rate in a couple remaining hot spots like (the city of, not the metro of) St. Louis.

No real objections to the rest of your paragraph, though. IMHO the extinction risk of AI is worse than the near-extinction risk of nuclear WW3 was, but it's also a much more subtle and speculative risk. I learned as a (GenX) child that there were thousands of nuclear weapons ready to vaporize everyone I loved, 30 or 40 minutes after someone pressed the wrong button, and I'd say that was still sufficiently rough.

give in to populist demands and start reforming parts of the economy that are currently set up for rent extraction at the behest of shareholders

So ... which is it? Populist demands are easily converted (by both sides of the aisle!) into protectionist policies that set up parts of the economy for rent extraction. "You can only build more housing here if it's economically 'inclusionary' enough" gets predictably turned by reality into "you can't build more housing here" and drives up the price of the grandfathered (often literally!) housing stock. People want to "drive housing prices up for people who own their homes" while also making housing prices affordable for people who don't, but that just doesn't compute.

principal-agent conflicts of interests in healthcare

are another example. The ACA caps insurance company profit as a percent of premiums, a policy at least populist enough for Obama to brag about it ... and a policy that inadvertently sets up a huge conflict of interest when insurers are trying to figure out what they should pay out.

Ironically, this sort of "cost-plus contract" malincentive was also fixed in part by Obama, in the context of NASA procurement, when he supported and extended the Commercial Resupply Services contracts and then went beyond them with the Commercial Crew program, in both cases paying for purchases where the seller actually could make more profit by producing results more cheaply. For now we still have to burn $4B a pop for SLS when we want to send humans outside Low Earth Orbit, but we can replace it with a $180M Falcon Heavy launch for things like Europa Clipper.

I just want to make sure it has qualia and isn't a Chinese room.

"In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children." - Nick Bostrom

I'd also add some preferences regarding population and personality and such, but "do our successors have any intrinsic value or not" does seem to be the first and most important criterion to have!

However, I'm confused by the use of the phrase "make sure" here. Unless you're expecting to be uploaded, and you're confident that the idea of a "p-zombie" is incoherent (which I'm guessing you aren't, given the Chinese room reference), what observations could give you any sense of surety here? Today's LLMs can pass Turing tests, which used to be our "fine, they're sentient now" criterion, but their lack of "medium-term" memory and the fact that they can still "slip" in ways that make them seem non-sentient make us think in hindsight that our criterion was just inadequate, and yet we haven't really found anything to replace it. If tomorrow's LLMs never slip, does that mean they've become sentient, or just that they've become better at faking it?

Even "overestimated" probably overstates things, in that it suggests that he got the magnitude wrong but still got the direction of the effect correct. I suspect it's more likely that stop-AI bombings will have roughly the same effect on AI risk that anarchist groups' bombings and murders a century ago had on government overreach.

Taking inflation into account, that's only a little more than I was getting paid by my university just to work on my PhD. From an independent company who'd have you working on problems with more immediate benefit to them than "it makes the University stats look better" and/or "we could conscript him to teach if a prof gets sick or quits suddenly", it doesn't sound competitive at all until you consider the equity ... and equity in a startup is like a lottery ticket: even if the game isn't crooked, your ticket might make you fabulously wealthy but it's more likely to be worthless.

On the other hand, the job market does seem to be kind of awful right now. It might not be crazy to take something here to avoid resume gaps and build more experience while job hunting elsewhere, and if you're finishing up your PhD at the same time then maybe that's enough to prevent the typical "what was your last salary" question from making subsequent employers lowball you too.

On a side note that probably belongs in Culture War - the Baltimore homicide rate is now the lowest it's been in nearly 50 years, after dropping more than 60% in 3 years! Wow! That still leaves a crazy high rate (my advice after my daughter's Johns Hopkins application and some crime map study: "you're not likely to get shot unless you go a mile south, or east, or southwest ... north looks nice ... I can see why they're medical specialists ...") but there's now like 500 fewer dead people than the 2015-2022 rate would have predicted, and that's pretty great.

there has basically never been a mathematician producing valuable maths older than that.

Yitang Zhang didn't prove that the lim inf of the prime gap was 2 (which would have verified a 150 year old conjecture), but he was the first to prove it was finite, at age 58. Listing and Möbius were in their 50s when they formalized the idea of non-orientable surfaces (which I would consider to be literally "producing" math rather than just "solving" it). The first version of the Weierstrass Approximation Theorem (which led to whole fields of the most economically valuable results in mathematics, in my biased applied-math+engineering opinion) was proven when Weierstrass was 70.
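For anyone who hasn't run into it: the theorem Weierstrass proved at 70 says that polynomials are uniformly dense in the continuous functions on a closed interval, i.e.

```latex
\forall f \in C[a,b],\ \forall \varepsilon > 0,\ \exists p \in \mathbb{R}[x]
\ :\ \sup_{x \in [a,b]} \lvert f(x) - p(x) \rvert < \varepsilon
```

which is the bedrock under polynomial interpolation, finite elements, and most of numerical approximation theory.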

But, damn, are they the exceptions who prove the rule, so long as we leave that "basically" qualifier intact? Kolmogorov was in his 50s when he solved Hilbert's thirteenth problem, but that was in joint work with a 19 year old student. Euler, Gauss, and Cauchy were doing great work in their old age, but arguably only after doing greater work as younger men. Searching for mathematical discoveries by importance and then looking up age (rather than searching specifically for discoveries made at older ages), the bulk do seem to be between 25 and 45.

I wonder if the trend is moving older (because things that were groundbreaking discoveries 300 years ago are basic undergrad background today and you need to learn much more to get to the cutting edge) or younger (because subsidized institutional math research gets more output but at the cost of making older mathematicians spend all their time teaching and mentoring and writing proposals and hiring and so on, while their grad students and postdocs are the ones who can actually focus on the work).

Well I never said that

That was supposed to be the generic you in context; sorry for the misleading ambiguity.

He does want what is best for Ankh-Morpork, but what is best for Ankh-Morpork is Him. Common failure state of Tyrants in general.

Yeah, but the trouble is that the other failure state is "when the tyrant is gone, people fighting over the power vacuum tear the place apart". Monarchy (with one well-defined rule of succession or another) was actually a valuable social technology at one point, and it's one that Ankh-Morpork doesn't currently have access to!

It's entirely possible that, even if Vetinari decides that Carrot (or someone else, considering Carrot's objections) would be a better leader, he still doesn't see any way he can name a successor and retire or die without that successor being under more threat than he was and the city needing a successor to the successor in short order.

It's a shame we never got to see how Pratchett would have had things all turn out. I like the "Vetinari tries to leave von Lipwig in charge, and Moist weasels out of it by hastily inventing democracy" fan theory, personally. That's not a story, though; you could describe most of the Discworld plots that plainly and unimpressively. The stories didn't get great until you got down into the details.

Vetinari is possibly good in a utilitarian sense, just not in most virtue or deontological senses. The vibe I get is a cynical "it's okay for me to be bad enough to prevent my being replaced with something even worse". He does seem to be grooming people like Vimes and von Lipwig to actually make things better, but even his attempts to delegate strike me as a mix of pessimism ranging from "I could retire before I'm killed if things go well enough" to "I could need scapegoats if things go poorly enough".

I don't think it's sexist or racist to believe that men are more likely to be violent

Replace "violent" with "at the highest end of STEM aptitude" and you just got fired from the presidency of Harvard. These are somewhat overloaded words but the revealed descriptivist definition seems to include any belief in inherent differences that have clear moral valence, at least if the direction of some of those perceived differences can be interpreted as "punching down" the progressive stack.

or that Muslims are more likely to be violent.

Well, this one's straightforwardly not racism because "Muslim" isn't a race, but it is classic "Islamophobia".

It's kind of weird that we don't have a general "-ism" word to refer to religious prejudice or religious intolerance, but I guess religion is the point at which the "everybody's the same regardless" theory breaks down so badly that nobody wants to say religious prejudice is always unwarranted? You may believe Islam in particular isn't inherently more violent (it tried to be more resistant to the "evolution" that @Rosencrantz2 points out is common to religions, yet in practice Muslims' attitudes toward violence do vary greatly from century to century and place to place). But are you going to extend that charity to every religion ever? That's a good way to find yourself visiting the People's Temple for a free glass of Flavor Aid.

That all makes sense, thank you.

I still feel like the list of sanctions on Pakistan is damning the idea with faint praise, though. The US cut off military and non-humanitarian foreign aid (not even a trade embargo!) once Bush I could no longer swear that Pakistan didn't have nukes, then 8 years later they definitely did have nukes and we also cut off financial lending/underwriting, then a few years after that it was inconvenient for the one country who was actually pushing, and now we're all fine? Becoming a nuclear state isn't completely costless, but Pakistan still doesn't seem to have been punished as much for nukes as Afghanistan was for box cutters.

next-token prediction really isn't the kind of intelligence that we wanted to develop, but it's what we discovered first.

(Cries in Yudkowsky)

perhaps at some point we'll discover a different framework for AI that better matches our own sapience at lower cost.

I think this is what makes "FOOM" still something of a risk. What are the odds that we really discovered the most computationally efficient implementation of intelligence on the very first try and step one really was "just download the internet and try to compress it"? When we solved problems like magnetohydrodynamics simulation, we had some much more clever initial ideas, yet we still managed to improve them another order of magnitude (just software; another OOM in hardware) in each of the next few decades. There's still a fundamental limit to how efficient any particular algorithm can be, but it's not out of the question that, once we have a ton of artificial researchers that don't need to be handheld on every short task, we'll get a similar 1000x sort of speedup, much sooner.

better matches our own sapience

If we just match our own sapience, then the hallucination problem really can't be solved, rather than just mitigated. Humans still do that shit all the time. I once missed a problem on a Differential Equations quiz because I evaluated "1 × 2" as "3" in an intermediate step.

Right now I think coding models are at their most powerful when being used as a force multiplier for human experts.

This matches my experience. Today Codex generated a PR for us that fixed a bug, but it missed three more instances of the same bug, broke some functionality with the fix, and didn't have a hint of the work we'd need to do to reenable that functionality for existing users while migrating them to a newer implementation with wider compatibility.

BUT: all that extra work won't take nearly as long as it would have taken the human Codex user to find the initial bug. Still a big win.

At this rate, though, how long will it be until we don't need the human user? Centaur chess lasted maybe a decade, being generous, but at this point last year AI had only basic "much better search engine" utility for me, and at this point two years ago it was downright counterproductive to try to sort out real answers from hallucinations. Where will we be in another five years?

You don't complete the sentence "The answer is" with "oh wait never mind I don't know".

No, but somehow these days they're tuning their final models to get to "I don't know" anyways. Maybe they're not just glorified autocomplete? 10 months ago was the first time I got an LLM to admit it couldn't answer a question of mine (although it did still make helpful suggestions); not only did the other models back then give me wrong answers, IIRC one of them went on to gaslight me about it rather than admit the mistake. (two years ago this gaslighting would have been the rule rather than the exception) IMHO that "I don't know" was the exact point at which AI started to have positive utility for me. Sometimes an AI still isn't helpful, but it's at least often worth throwing a problem at now, not a waste of time.

a bunch of informal pressure to ensure this doesn't happen to the extent possible

This is a good point. The trouble is that world leaders act like they're big Causal Decision Theory fans, and once a state has nukes it's hard to go back in time to make that not have happened, so whatcha gonna do? We try to keep ICBM tech from leaking to Pakistan, but we hardly turned them or India into pariah states for having the warheads. Maybe Iran would get worse treatment because they signed on to the Non-Proliferation Treaty and would be violating it whereas non-signatories weren't?

Enough intel is public that we know Iran had a bunch of nearly bomb grade enriched uranium, but that they just stopped at that point and made no further effort to weaponize.

Ignorant question: how confident are we of that? It looks like Iran fired two missiles at Diego Garcia, at more than double the range of anything we publicly knew they had (and if we knew privately that they were violating ICBM restrictions, that would have been a great casus belli to bring up to Europeans uninterested in joining this war, so I'm betting we didn't), in which case they've at least been managing to keep some aspects of their weapons development programs secret.

FWIW, these days you can usually ask about the seemingly-retarded choices and get either an explanation or a correction-plus-apology for each. I use AI to self-study math, and I still catch it making errors these days, but no more often than it catches my errors, and pinning down which case is which usually only takes a little extra back-and-forth. The reasoning models these days are much better than when some of them would just double down and try to gaslight you about their errors ... Whoa, according to my logs that was only a year ago. Hell of a roller coaster we're on...

Why were those guys even carrying around an extra kidney they didn't need, anyway?

"Density" definitely becomes a quality to look for in a game rather than one to avoid. I remember reviews criticizing some RPG for only having 20 hours of content after 30 became the de facto standard, and I agreed, but these days? You tell me I'm not going to get closure for 20 or 30 hours? If it's not one of the best games of all time, count me out; if it is, I swear I'll try to get around to it in 5 or 10 years. I'd love a list of modern games that instead follow in the footsteps of Portal 1 (3-4 hours) or at worst Portal 2 (8-9?) and likewise manage to pack a full satisfying game into that length of time.

edifying in the same way a good nonfiction book is.

Well, the good nonfiction book is typically denser.

I've been a fan of space flight my whole life, nearly studied ASE before starting off as a MechE instead, but I never understood why "porkchop plots" had their weird double-lobed shape until the first time I found myself having to cope with interplanetary plane change maneuvers in Kerbal Space Program. So, yeah, on the one hand, actual advanced practical (well, to JPL engineers anyway) knowledge ... but on the other hand, Steam tells me I've put 320 hours into KSP over the last 10 or 15 years, and I'll bet I could have gotten more knowledge out of a good orbital mechanics textbook in a quarter of the time.

The only cost to an autocracy getting nuclear warheads is that, if you don't stay personally in charge of them, your successors can be as tyrannical as they want and nobody will come save you from them. This is more than counterbalanced by the benefit that, if you do stay in charge of them, nobody will come try to "save" anyone from you. North Korea won't be getting the Venezuela or Iran treatments any time soon.

Getting highly-enriched uranium without continuing on to turn it into warheads, on the other hand, just pisses everybody off without giving you any leverage, and the next thing you know your successors are in charge anyway. Even if you have a weaker bomb program and give it up before the airstrikes escalate, moving far enough in that direction may already have crossed the "sodomized to death by a bayonet eight years later while the world chuckles" point of no return. This is just not a place where you stop your nuke program because your political calculations are going well; it's a place where you stop because your engineering calculations aren't going well enough. A successful test explosion is a "Get Out Of Jail Free" card; a test fizzle is a "Kill Me Now Before It's Too Late" request.

in a replacing doctors scenario you'd need to be getting it right 100% of the time with no second check

Which doctor manages that?

Yeah, I know, that sounds like an insult/joke, but estimates of iatrogenic death rates in US hospitals are at minimum 20k/year out of 700k, which means that even in high-stakes scenarios doctors and nurses are only at 97% for "getting it right enough not to kill someone"; fully "getting it right" would be a much higher bar and lower success rate. All the AI has to do to replace skilled workers is get more reliable than they are.
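The 97% figure is just the low-end estimate divided out (the input numbers are the estimates cited above, not fresh data):

```python
# Back-of-envelope: implied "didn't kill the patient" reliability
# from the iatrogenic-death estimates cited above.
iatrogenic_deaths = 20_000   # low-end estimate of iatrogenic deaths per year
hospital_deaths = 700_000    # total US hospital deaths per year

reliability = 1 - iatrogenic_deaths / hospital_deaths
print(f"{reliability:.1%}")  # ~97.1%
```

Higher estimates of iatrogenic deaths (some run to 100k+/year) would push that reliability figure down well below 90%.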

You're surely right that AI in medicine isn't as good a replacement for human workers as AI in fields where checking results is easier than producing them ... but it's perhaps similar to AI for self-driving cars: stringent requirements and potentially-lethal consequences, but the AI still doesn't have to be perfect to be an improvement, it just has to be better than the typical human competition.

I think the median 0day the NSA exploits is one they found or bought and not one which they made some US company insert on purpose.

You're probably correct, but don't forget about the (probably small, but not null) class of exploits that they simply trick US companies into inserting. The NSA has a wide range of strategies. They paid RSA to use their exploitable Dual_EC_DRBG, for instance, but apparently that was mostly to buy enough credibility to get it called "the standard" and adopted freely by other crypto companies too.

Even their work with DES was a mix of white-hat (they knew about a vulnerability and pushed for changes that they secretly knew would eliminate it) and black-hat (they pushed to drop the standard key size from 64 to 48 bit, then settled for 56, because they knew they had the compute to brute-force those) security, and the only "made some US company insert on purpose" there was legislative, for a brief period in the late 90s when companies were only allowed to export encryption software with 56-bit or shorter keys.