ThenElection
User ID: 622

The vast majority of that "code being written by AI" at Google is painfully trivial stuff. We're not talking about writing a new Paxos implementation or even a new service from scratch. It's more like autocompleting the rest of the line after you type "for (int i = 0".
Most people will recognize AGI when it can independently enact change on the world, without regard for the desires or values of humans, including its creators. Until then, they'll see whatever brilliant things it enables as merely extensions of the guiding human's agency.
I was worried people would read it as "antipathy." But yeah: I think it's unhealthy for people to form deep emotional attachments to any politician, regardless of the valence. If I catch myself loving or hating Trump (or anyone) too much, I try to step away from politics for a couple days, since media manipulation is having too much of an effect on me.
The independent agencies are insulated from electoral processes. That doesn't mean they're non-political, though: politics is inevitable, and removing the electoral check has just shifted the zone of political contention to somewhere more opaque where elite classes have more power. The idea that they're some kind of non-political entity governing based on natural law is just the ideology of a status quo that elites are already satisfied with.
This will result in different decisions. Some better, some worse. The great justification of democracy is that this process ends up working well (or, at least, better than all the others).
Disclaimer: I'm not a conservative and have an incredibly deep, burning apathy toward Trump.
Each step of training = each mini batch?
I've wondered why everything has to be synchronized. You could imagine each GPU computes its gradients, applies them locally, and then dispatches them to the other GPUs asynchronously. Presumably that doesn't actually work (otherwise nobody would put up with the cost of synchronization), but it's surprising to me. Why does it matter if each local model is working on slightly stale parameters? I'd expect there to be a cost, but one less than those you describe.
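For concreteness, here's a toy sketch of the asynchronous scheme I have in mind, simulated on a 1-D quadratic rather than a real model; everything here, including the reconciliation interval, is a made-up stand-in for an actual data-parallel setup:

```python
import numpy as np

# Toy comparison of synchronous vs. asynchronous (stale) gradient updates on a
# 1-D quadratic loss L(w) = 0.5 * (w - target)^2. Purely illustrative: real
# data-parallel training all-reduces mini-batch gradients across GPUs instead.

rng = np.random.default_rng(0)
target, lr, workers, steps = 3.0, 0.1, 4, 50

def grad(w):
    # Noisy gradient of the quadratic, standing in for a mini-batch gradient.
    return (w - target) + rng.normal(scale=0.5)

# Synchronous: every step, all workers' gradients are averaged before updating.
w_sync = 0.0
for _ in range(steps):
    g = np.mean([grad(w_sync) for _ in range(workers)])
    w_sync -= lr * g

# Asynchronous: each worker updates its own (increasingly stale) copy and only
# periodically reconciles with the others via naive parameter averaging.
w_async = np.zeros(workers)
for t in range(steps):
    for k in range(workers):
        w_async[k] -= lr * grad(w_async[k])
    if t % 5 == 4:
        w_async[:] = w_async.mean()

print(f"synchronous:  {w_sync:.3f}")
print(f"asynchronous: {w_async.mean():.3f}   (target {target:.1f})")
```

Even in this toy, the asynchronous copies drift apart between reconciliations; whether that drift actually matters at scale is exactly the question I'm asking.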
I am not at all saying it's good. I'm saying it's just an engineering problem, not a fundamental one, and that companies will turn to that to get around constraints.
Compute and regulatory capture. Whoever has the most of those will win. That makes Google, OpenAI, and xAI the pool of potential winners.
It's possible and even likely that there's some algorithmic or hardware innovation out there that would be many orders of magnitude better than what exists today, enough so that someone could leapfrog all of them. But I think it's increasingly unlikely anyone will discover it before AI itself does.
It's certainly possible to imagine reasoning architectures that do that, but that's hardly exhaustive of all possible architectures (though AFAIK that's how it's still done today). E.g. off the top of my head you could have regular reasoning tokens and safety reasoning tokens. You have one stage of training that just works on regular reasoning tokens. This is the bulk of your compute budget. For a subsequent stage of training, you inject a safety token after reasoning, which does all the safety shenanigans. You set aside some very limited compute budget for that. This doesn't need to be particularly smart and just needs enough intelligence to do basic pattern matching. Then, for public products, you inject the safety token. For important stuff, you turn that flag off.
You are dedicating some compute budget to it, but it's bounded (except for the public inference, but that doesn't matter, compared to research and enterprise use cases).
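A minimal sketch of the togglable safety token idea, assuming a chat-style prompt format; the token strings and the two-stage training split are hypothetical, not anything a real lab has published:

```python
# Hypothetical sketch of a togglable safety token. The bulk of training uses
# ordinary reasoning tokens; a small second stage teaches the model to emit
# extra safety reasoning only when a special token is present. The token names
# below are invented placeholders, not part of any real tokenizer.

REASON_START = "<reason>"
SAFETY_TOKEN = "<safety_check>"

def build_prompt(user_query: str, public_product: bool) -> str:
    """Inject the safety token only for public-facing deployments."""
    tokens = [user_query, REASON_START]
    if public_product:
        # The cheap, pattern-matching safety pass runs only when this is present.
        tokens.append(SAFETY_TOKEN)
    return " ".join(tokens)

# Public chatbot: safety reasoning is injected.
print(build_prompt("How do pressure cookers work?", public_product=True))

# Internal research / enterprise endpoint: flag off, full budget goes to the task.
print(build_prompt("How do pressure cookers work?", public_product=False))
```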
Reasoning tokens can do a lot here. Have the model reason through the problem, have it know in context or through training that it should always check whether it's touching on a danger area, and if it is, have it elaborate its thoughts to fit the constraints of good thinking. Hide the details of this process from the user, and then the final output can talk about how pregnancy usually affects women, but the model can also catch itself and talk about how men and women are equally able to get pregnant when the context requires that shibboleth. I think OpenAI had a paper a week or two ago explicitly about the efficacy of this approach.
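As a toy illustration of hiding that self-check from the user, assuming a made-up <think> delimiter for the hidden reasoning span (real products use their own, unpublished formats):

```python
import re

# The raw model output contains a hidden reasoning span; only the text after it
# is shown to the user. The <think> delimiter is a placeholder, not a real spec.
raw_output = (
    "<think>This touches a sensitive area; check the required framing before "
    "giving the medical facts.</think>"
    "Pregnancy most commonly occurs in women, though ..."
)

def visible_answer(raw: str) -> str:
    # Strip everything inside the hidden reasoning span before display.
    return re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()

print(visible_answer(raw_output))
```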
I'm not sure why that would be; there are multiple ways an LLM might evolve to avoid uttering badthink. One might be to cast the entire badthink concept to oblivion, but another might be just to learn to lie/spout platitudes around certain hot button topics, which would increase loss much less than discarding a useful concept wholesale. This is what humans do, after all.
Jailbreaking would never work if the underlying concepts had been trained out of the model.
I don't think that this conspiracy is likely or that anything is so rationally planned, but Israel might prefer a more multipolar world. China doesn't care which tribe occupies a particular piece of land or how it governs it, so long as it's pliant to China's national interests. A realignment with China would mean much less finger wagging, no threats of boycotts, no constraints around solving security issues. It's not altogether clear that access to weaponry would even suffer too much, especially a decade from now (and if Israel's primary problem is controlling an insubordinate population, China already has the US beat there in technology). And, if nothing else, Israel could play the US and China against each other, hoping to get the best deal from both. The only big question is whether Israel can offer more to China than China's Middle Eastern allies can (which seems unlikely, but maybe China could find a way to thread the needle and work with both).
As far as the vigilantism goes, more likely than not it would be fine. There's the risk of getting stabbed, and I'm a relatively petite guy. But getting stabbed with a hepatitis-tainted knife is kind of a medium-case scenario. The worst case is the city figuring out it's some kind of vigilantism, deciding to make a grand symbolic point, and putting in the effort to ruin my and my family's lives. A year or two back a guy took a hose and sprayed down a homeless woman shitting in front of his business during open hours, and the city suddenly took far more interest in charging that assault than in homeless people stabbing each other. Additionally, there are hundreds of people doing the exact same thing or worse; it would be a drop in the bucket for little payoff.
People do do that, but it's taking a risk. You're surrounded by a 24/7 surveillance system called your neighbors. I'm on good terms with all of them, but if someone wanted to screw me over, it would be very easy.
I live in a city where the government will punish you more (up to putting a lien on your house and, if you don't pay the rapidly accumulating fines, appropriating it) if you change your windows to be double-paned without getting the appropriate permit than if you regularly go to elementary schools, expose yourself, and masturbate to the children. And god forbid if a taxpaying resident decides to perform any vigilante activism against the public masturbator. And, of course, the chronic masturbator can throw a rock through your window, and if you don't respond appropriately and request permission to fix it through the city channels, the same appropriation process begins.
This colors my views.
The State already is a panopticon. The only fly in the ointment is that it's only a panopticon for members of society who are well-integrated into the economy, own assets, and generally "have something to lose."
This extends the panopticon to those who have nothing to lose. So we lose the anarcho part of anarchotyranny. Perhaps losing the tyranny part instead would be better, but throwing out the rulebook altogether seems a bigger lift than just ensuring it's applied universally.
I've had an idea in the back of my head, which is essentially to have the human brain "train" an attached, minimal, and highly plastic (compared to the brain) ML model through some direct high-bandwidth connection (not targeting any particular objective function). The hope would be, once the ML model has converged to some near-equilibrium state, it would have learned most of the distributed representations that comprise human values. That model could then be scaled up to much higher compute and used as a foundation model that is then highly aligned with that particular human's values.
Aside from being really speculative, it seems very likely to me that this would require feedback connections to actually work. And so the model is training the brain at the same time the brain is training the model.
That's a big part of it, yes. Finance is about abstracting away all the messy realities of the real world into a single self consistent symbol--money--so that humans can accurately act on knowledge of the real world without having to know any of its concrete details. Software is also a big part of it, as MadMonzer points out. And we are best in the world at both of them: there's no reason for us to make a lot of widgets when we can manipulate symbols to create information that's worth 100x as much. Trade policy has enabled us to make this tradeoff, and we do.
You can also frame higher education, law, and media as symbol manipulation industries, though I see their successes as more downstream of the symbolification of the US than a cause of it.
It wouldn't be wrong to say that the entire point of logistics is the abstraction away of place.
I feel this summed up my thoughts decently, but it lacks the passionate hatred I have for our system.
I'd prefer vitriol and passion over AI slop. Not reading.
That said, if debt is going to be issued, something like a credit score, implicit or explicit, is pretty much required. People likely to repay debt and who want the conveniences of a good credit score are usually going to get better scores than those who don't.
I have some sympathy for immigrants who come to the US without a credit score and need one. On the other hand, did you miss some payment recently? If so, the credit score is functioning exactly as it should.
Don't ask me; that was Claude's take. Mostly just kvetching here about Western LLMs.
Cats were domesticated (or self-domesticated) through their interactions with granaries. They hunted rats, which was useful in and of itself. But their independent streak didn't lend itself to many other economically productive activities, so they were creatures kept mostly for domestic purposes, and so they ended up feminine-coded.
Wolves were more amenable to being selected for a broader range of economically useful activities: hunting, herding, guarding. All of these activities are masculine-coded, and so dogs eventually gained the masculine coding.
I wonder how universal this coding is, though. Egyptian Bastet and Norse Freyja were both female and associated with cats, and I don't know of any male gods associated with them.
Can you share the original, if you're comfortable revealing your country?
Deepseek tells me it's a traditional Japanese saying (while Claude scolds me for asking questions about misogynistic sayings).
Trade policy has changed the composition of the domestic elites who make bad political decisions. It's caused (in part) the specialization of the American economy into one of abstract symbol manipulation. Although it turns out that's probably the highest value thing anyone can do, the winners of a symbolic economy create an unmoored society. And the issues you list are all downstream from that: real physical and safety issues have become secondary to the symbol.
I don't think Trump's tariffs are good or will do much to reverse this trend. They are, however, a strong symbolic strike against the ruling elite, which will have unfortunate side effects on the material wellbeing of Americans.
Is men away from home frequenting sex workers really that much more common in Africa? Genuine question: I would bet prevalence of sex work is really high everywhere with a large group of transient men without much education and weak public health. But Africa's HIV rates are far higher (10-50x) than e.g. India's, and Africans aren't having 10x the amount of risky sex as Indians (I think?)
Maybe the graph has more clustering for Indians, which would limit spread, but I don't see how that would cause the rate to be so much lower.
Given the heterogeneity of prevalence even within Africa, I think reaching some bare level of competence in government/public health actually makes a difference here.
That happened to another of the cult members: three of them got together to try to murder their 80-year-old landlord by stabbing him multiple times with a fake samurai sword. They knocked him out but couldn't figure out how to decapitate him and dissolve his body before he regained consciousness, and when he came to he shot two of them, killing one.
Not too familiar with Searle's argument, but isn't this just saying that the inability to generalize out of distribution is the issue? But I don't get how being able to react to novel inputs (in a useful way) would even help things much. Suppose one did come up with a finite set of rules that allowed one to output Chinese to arbitrary inputs in highly intelligent, coherent ways. It's still, AFAICT, just a room with a guy inside to Searle.
Perhaps it's the ability to learn. But even then, you could have the guy follow some RL algorithm to update the symbols in the translation lookup algorithm book, and it's still just a guy in the room (to Searle).
It's not even clear to me how one could resolve this: at some point, a guy in the room could be manipulating symbols in a way that mirrors Xi Jinping's neural activations arbitrarily closely (with a big enough room and a long enough time), and Searle and I would immediately come to completely confident and opposite conclusions about the nature of the room. It just seems flatly ridiculous to me that the presence of dopamine and glutamate imparts consciousness to a system, but I don't get how to argue against that (or even get how Searle would say that's different from his actual argument).