TheAntipopulist

Formerly Ben___Garrison

0 followers   follows 2 users   joined 2022 September 05 02:32:36 UTC

No bio...

User ID: 373

Jensen speaking here is mostly just CEO boilerplate. Obviously they'll say "oh yeah, we're in for the long haul", but that's not much in the way of evidence. I'll grant that founders have a bit more of a tendency towards pie-in-the-sky goals like 20 year plans and maybe geopolitical stuff, but Huang is still subject to the whims of shareholders. He doesn't even have extra insulation like Zuckerberg with a screwy voting shares system. And Elon makes plenty of boneheaded short-term decisions.

If the only way you can think of this is myopic mercantilism

Get bent already.

But inferior men can only interpret a superior man's vision in terms of profit.

Can you stop it with this nonsense please?

self-serving imperial propaganda

You're not doing anyone a favor by being corrupt.

Is this how you do all your argumentation?

I think you don't get how intoxicating the sense of supremacy is.

Americans have an ideological stake in being Number One.

This is out of date if it was ever true at all. Maybe you could say this about a broad subset of the American Right when the Neocon movement was at its peak circa 2002 or so. But the Left has never really subscribed to that at all, and the modern Right is increasingly dominated by its own brand of oikophobes as a backlash against wokeness.

This is still denialism of the erosion of fundamentals, I think. Classic stabbed-in-the-back-by-Jews [of Asia] doctrine. Huang founded Nvidia over 30 years ago, I don't believe he's a petty merchant optimizing for quarterly reports.

Classic stabbed-in-the-back-by-Jews [of Asia]

Good grief x2.

The goal of the CEO of any American company is implicitly if not explicitly the maximization of shareholder value. Selling chips to China is just how Jensen Huang can achieve that better. The idea that Huang is doing it as some grand geopolitical play (and where his company's bottom line is a secondary concern) is a bit hard to take seriously. Like, he might be doing that, but you'd want to have a decent amount of evidence to convince people that it's not just the cynical moneygrubbing play.

No, we do not need allusions to Nazi conspiracies of the Jews to say "the CEO is probably just trying to make more profit".

Here's another article to add to your arsenal that broadly echoes what you're saying: China is making trade impossible

You could just... get the new credentials? Plus, this would likely cut down on the impetus for nearly every shop to do Leetcode-style interviews, so you'd just be exchanging one set of nonsense for another.

Yep, I agree with all of this. Software engineers really should do what other engineering fields have done and set up that rent-seeking licensing cartel. It's bad for society overall, but most other fields do something like that, so why not us?

I'm sure it's waxed and waned, but SWEs have been complaining about H1Bs since at least the Dotcom boom in the 90s.

You should wait to post this until it actually happens. China might get nearly equivalent chips in bulk by mid 2026, but it's a tough problem and things could easily go wrong. Doing a victory lap based on industry rumors is premature. This reminds me of the perennial Wunderwaffe posts we get from pro-Russian accounts on here, where some new missile or drone is just totally going to swamp Ukraine and the whole slow grind will turn into a blitzkrieg. And then... that doesn't happen.

It seems to me that my read on the situation from back then, both the big picture and its implications for compute strategy, is now shared by both the USG and the CPC. The former is trying to regain its position and revenue in the Chinese GPU market and slow down Huawei/Cambricon/Kunlun/etc. ecosystem development by flooding the zone with mature Nvidia chips that will be adopted by all frontier players (eg DeepSeek again – they have a deep bench of Nvidia-specific talent and aren't willing to switch to half-baked Ascend CANN).

Alternative explanation: Jensen Huang won the game of "be the last person to talk to Trump", since he knows Trump is a waffling buffoon and Huang just wants to maximize Nvidia's stock, US security concerns be damned. Then on the Chinese side, the CCP doesn't really care about this CUDA vs CANN stuff nearly as much as it cares about its industrial policy of "make EVERYTHING in China", and a wave of Nvidia chips could disrupt that beyond concerns about ecosystems.

For now the loss of the indisputable Main Character status is being processed traumatically, with anger, denial

Oh good heavens. No, you don't have to be a traumatized, angry denialist to understand that maybe it's a bad thing for the US to give an extremely bottlenecked resource to its main geopolitical rival.

Now they find out their precious college degree-gated industries aren't safe

The H1B program has been going on for 35 years, so I don't know what the "now" is referring to.

And why are you lumping together all "white collar" jobs, as if software engineers agree with the nonsense coming out of HR?

Ramaswamy is doing some motte-and-bailey nonsense here, pointing out a few flaws in American culture, but then using that as a non sequitur to justify his ridiculous immigration views. The simple fact is that the H1B system is used to undercut American wages. While the program ostensibly only permits "foreign experts", companies game the system by getting diploma-mill bachelor's degrees from India counted as valid credentials, then paying those hires garbage salaries. An easy solution would be to require anyone hired on an H1B visa to be paid a high relative wage. Basically everyone agrees this would fix the problem, but nobody makes the change because they actually want to use H1Bs as a cynical vehicle for mass migration.

Ah, it's interesting Perplexity actually had an advantage there. This is the only time someone has been able to give me an actual reason anyone would use Perplexity over any other LLM.

I remember getting pretty good citations (actual papers) from LLMs in early 2024, but you had to use the "deep think" mode (or whatever it was called) that took like 15 minutes to run. Now that's unnecessary and, like you say, ChatGPT is a lot more interesting out of the box. You still have to check things to be sure, but 95% of the answers it gives aren't hallucinations.

I presume the AI slop merchants are repackaging that sort of story over and over because it gets a lot of views, and I presume it gets a lot of views because a lot of tech workers can empathize with it. It's a particularly infuriating mix of complete obliviousness + pigheadedness that's ripe for parody.

I'll spend a lot of time understanding what needs to be done, and then 15-30 minutes describing, in detail, what needs to be done, and supplying the necessary context.

Yeah, this is the way to get the best results. But man, 15-30 minutes? Is that per context window, or per major project? If that's per context window then you're doing even more than I do.

And yeah, I find it somewhat sad that software engineering seems to eventually be going the way of blacksmithing, but as a younger dev I'm excited about how much more I can create on my own terms now. I always wanted to create video games as a hobby, and it's so much more viable with AI -- partially due to faster coding, but honestly more due to stuff like Nano Banana Pro.

Surprised to see how dopey people still are, how can someone be a CTO and not know the difference between the models under the hood of Copilot? Would've thought a CTO would know better.

Yeah, it's painful to watch what the executives at our company do a lot of the time.

I fired up Claude Code with Opus 4.5 and got it to build a predator-prey species simulation with an inbuilt procedural world generator and nice features like A* search for pathfinding - and it one-shot it, producing in about 5 minutes something which I know took me several weeks to build a decade ago when I was teaching myself some basic programming, and which I think would take most seasoned hobbyists several hours. And it did it in minutes.

I mean, Twitter users have always been saying Opus/Sonnet is sooooooo good since like Sonnet 3.0 back in early 2024. I know the capabilities are advancing steadily, but early 2025 felt like much more of a step change. Sonnet 3.7 almost certainly could have handled his A* search problem, just with a few more reprompts on average than Sonnet/Opus 4.5. And again, do somewhat fewer reprompts, or not having to break the issue up quite as much, really change that much?
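For context on what "his A* search problem" involves: the core of grid pathfinding is a compact, self-contained algorithm, which is exactly the kind of thing LLMs have been one-shotting for a while. A minimal sketch of my own (not the actual sim code from the post above):

```python
# Minimal A* on a 4-connected 2D grid. Walkable cells are True.
# Illustrative only -- not the simulation from the quoted post.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on this grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), start)]  # (f-score, cell)
    came_from = {}
    g = {start: 0}  # cheapest known cost from start to each cell

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:  # rebuild the path by walking back to start
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue  # off the map
            if not grid[nxt[0]][nxt[1]]:
                continue  # blocked cell
            tentative = g[current] + 1
            if tentative < g.get(nxt, float("inf")):
                came_from[nxt] = current
                g[nxt] = tentative
                heapq.heappush(open_heap, (tentative + h(nxt), nxt))
    return None  # goal unreachable

grid = [[True, True, False],
        [False, True, True],
        [True, True, True]]
print(astar(grid, (0, 0), (2, 2)))  # one shortest path, 5 cells long
```

Thirty-odd lines with no tricky dependencies, so "could it write A*?" stopped being a meaningful bar a couple model generations ago; the real question is how much hand-holding the surrounding project needs.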

write a serviceable database server

An LLM could absolutely do 30-40% of this, assuming it's being directed by a senior programmer and has said programmer on hand to do the other 60-70% of it. We might be getting into a semantics issue on this, as I'm assuming that a "senior" programmer would still be doing a decent amount of coding. Perhaps in some orgs that "senior" title means they're only doing architecting and code reviews, in which case that 30-40% might not be accurate.

"what kind of database we actually want to use here and what is more practical given limited resources we have?"

LLMs are also fairly decent at answering design questions like this, assuming they have the right context. I might not always go with an LLM's #1 answer, but its top 3 will usually include at least 2 decent options.

Last frustrating exercise was when I tried to have it explain to me how to use two certain APIs together, and it wrote a plausible code and configs, except it didn't work.

For most of this paragraph, were you 1) using a frontier LLM on its "thinking" mode (or equivalent), and 2) did you give the LLM enough context for what a correct API call should look like? Not just something like "I'm using version 2 of the API", but actually uploading the documentation for version 2 as context. I want to emphasize that context management is critically important. Beyond that, it sounds like you just hit a doom-loop, which still happens sometimes, but there are solutions: usually just telling the LLM to break the problem into smaller chunks and to add extra verification (e.g. print statements) so it can tell where the error is actually coming from.
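To make that concrete, here's the shape of the verification scaffolding I'd ask the LLM to generate. The API names are hypothetical placeholders, since I don't know which two APIs you were actually gluing together:

```python
# Sketch of "add prints between every step" debugging. The functions
# call_api_a / call_api_b are hypothetical stand-ins for whatever two
# APIs are being chained together.
import json

def call_api_a(payload):
    # Placeholder: pretend the first API hands back an auth token.
    return {"token": "abc123"}

def call_api_b(token, payload):
    # Placeholder: pretend the second API consumes that token.
    return {"status": "ok"}

payload = {"query": "example"}

# Step 1: verify the first call on its own before chaining anything.
resp_a = call_api_a(payload)
print("API A raw response:", json.dumps(resp_a, indent=2))

# Step 2: verify the exact value being handed to the second call.
token = resp_a["token"]
print("Token passed to API B:", repr(token))

# Step 3: only now exercise the second call.
resp_b = call_api_b(token, payload)
print("API B raw response:", json.dumps(resp_b, indent=2))
```

When the combined setup silently fails, prints like these pin down which step actually broke, and the LLM stops guessing at the wrong layer.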

The main issue that I see with AI is that it is exceedingly difficult to maintain a well structured project over time with AI. I want it to use the same coding standard throughout the project. I don't want magic numbers in my code. I only want each thing to be defined once in the project. Each block of code it generates may be well written but the code base will spaghettify faster with AI. Unless the context window becomes the size of my codebase or a senior devs knowledge of it this is inevitable.

Most of this seems like it would be solved (or at least mitigated) by managing context correctly and having a prompt library. I don't know the particulars of your codebase so there's a chance it's crazily spread out, but if you just tell the AI that something has already been defined, any frontier LLM will be pretty good at respecting that. If there's a formatting style you're particular about, just add it to one of your prompts when you start a new conversation.
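By "prompt library" I don't mean anything elaborate, by the way. A minimal sketch, with illustrative wording and Python as the glue (the utils/ path is made up):

```python
# A bare-bones "prompt library": a dict of reusable preambles to paste
# at the start of a new conversation. The wording is illustrative, not
# taken from any particular tool.
PROMPTS = {
    "style": (
        "Follow the existing coding standard: no magic numbers, "
        "constants in UPPER_SNAKE_CASE, and define each thing exactly once."
    ),
    "context": (
        "Assume the helpers in utils/ already exist. Reuse them instead "
        "of redefining anything, and ask before adding a new dependency."
    ),
}

def preamble(*keys):
    """Stitch the selected prompt fragments into one preamble string."""
    return "\n\n".join(PROMPTS[k] for k in keys)

print(preamble("style", "context"))
```

The point is just that the anti-spaghetti instructions get written once and pasted every time, instead of being re-improvised per conversation.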

It is clear that we can't take the human out of the loop.

I definitely agree with this for at least the near future (<=5 years).

Beyond what I've listed here, I've also used AI for data analytics in R/Python. It's really good at disentangling the ggplot2 package, for instance, which is something I had always wanted to make better use of pre-LLM, but the syntax was rough. It's also good at generating code to clean data and do all the other stuff related to data science.
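For anyone who hasn't touched the grammar-of-graphics style, this is the layering I mean. A toy example with made-up data, written in plotnine (the Python port of ggplot2) to keep these snippets in one language:

```python
# Toy example of ggplot2-style layering via plotnine. The dataframe is
# invented purely for illustration.
import pandas as pd
from plotnine import ggplot, aes, geom_point, geom_smooth, labs

df = pd.DataFrame({
    "dose":     [1, 2, 3, 4, 5, 6, 7, 8],
    "response": [2.1, 3.9, 6.2, 7.8, 10.1, 11.7, 14.2, 15.9],
    "group":    ["a", "b"] * 4,
})

plot = (
    ggplot(df, aes(x="dose", y="response", color="group"))
    + geom_point()              # layer 1: raw observations
    + geom_smooth(method="lm")  # layer 2: per-group linear fit
    + labs(x="Dose", y="Response", title="Layered grammar, in brief")
)
plot.save("example.png")  # or just display `plot` in a notebook
```

Each + adds one layer, and remembering which geom or aesthetic goes where is exactly the part the LLM untangles for you.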

I've also used it a bit for some hobbyist game development in C#. I don't know much C# myself, and LLMs are just a massive help when it comes to getting started and learning fast. They also help prevent the tech debt that comes from the novice solutions I'd otherwise be prone to.

At this point it's better to ask "what standalone programming tasks can't LLMs help with", and the answer is very little. They're less of a speedup for very experienced developers who have been working on the same huge codebase for 20+ years, but even in that scenario they can grind out boilerplate if you know how to prompt them.

Thanks for the kind words. It's cool that you helped design the ICPC; I remember reading about the 11/12 score earlier, and yeah, it's indeed impressive that a machine can do that. I just wish more of that would translate over to the real world. Like Dwarkesh said, AI's abilities on evals and benchmarks are growing at the rate that short-timelines people predict, while its abilities in the real world are still growing at the rate that long-timelines people predict.

I feel like we don't really disagree on context. If it needs to be broken up between a "short context" and a "long context" to make it work, then yeah, that'd be good. It sorta works like that already behind the scenes with the compression algorithms the LLM designers have; it just doesn't work super great. I too was hoping some solution would be developed in 2025, and was slightly disappointed when it wasn't. Hopefully we get one in 2026 as you say -- if we don't, it could portend a pessimistic outlook for short timelines, which would be a shame.

AI will do some things that humans used to have a monopoly on, like companionship, but other sectors will remain the realm of humans for decades. Human doctors, for instance, will still be wanted even if AI is proven better in every possible way, simply because "that's how we've always done it". It will take many years to work through stuff like that.

Oh look, it's another batch of absolutely nothing. Still no evidence of any conspiracies involving Epstein trafficking young girls to other men. Yet every new revelation is treated like it confirms the narrative.

Well, I guess there was one big revelation: Bill Clinton. Not that he actually did anything bad, but that he appears in the photos at all. This lets MAGA do something it's always interested in: giving daddy Trump a pass by saying "whatabout the Left?" Instead of looking at the evidence and deciding this whole Epstein business belongs in the political trash bin, MAGA can now continue being conspiratorial about its outgroup. Democrats are "in a panic". The Epstein files are overall "just a Clinton photo album".

It created ample unemployment among industries where the machines were just flat out better than a human could be.

Yes, but those people then just found different jobs, and society became more efficient overall. Losing your job temporarily sucks but creative destruction is part of living in a vibrant society.

The whole premise with AGI is that it can in theory be better at everything that a human could do.

AGI will never be better than humans at simply being human, which will count for a lot to some people and to some fields.

Strong agree that AI will not cause mass unemployment. If the industrial revolution didn't create widespread unemployment while pushing 80%+ of the population from agriculture to manufacturing/services, then it's safe to assume basically nothing ever will stop a society from having to do work of some sort, even if it's just silly stuff like zero-sum status games.

Also agree that AI will be "mid" relative to the FOOM-doomer and singularity expectations that some have. I'm a bit more bearish on the productivity gains than you are. There will certainly be gains to some extent, but a lot of society's blockers are socially enforced (housing restrictions, lawyerly reviews, etc.), and those are political problems that AI won't be able to solve by itself.

From what little I've seen, he's a conspiratorial slop-merchant peddling some combination of common sense implied to be dark truth, along with obvious nonsense presented in a confident cadence. I can understand why people get sucked in by the common-sense stuff, since seeing it repackaged as a "dark truth" can be fun for some people, but accepting the rest of his arguments says bad things about your epistemic hygiene. I'm a bit more familiar with Whatifalthist, and he fits this description to a T.

the narrative that algorithms have a left wing bias and that dissident voices are difficult to find.

You're in a very right wing ecosystem if this is the only narrative you've heard about algorithms. Leftists have been complaining about "radicalization pipelines" for a decade+ now, and it formed one of the key arguments they made for cancel culture.

You must be working with very strange/niche languages then. I've had no trouble getting them to understand SQR and a couple other extremely old languages like FAME.