site banner
Advanced search parameters (with examples): "author:quadnarca", "domain:reddit.com", "over18:true"

Showing 25 of 323186 results for

domain:npr.org

China is likely trying to achieve world domination, and Europeans would much prefer the US as a hegemon,

Would we? How many refugee waves have China pushed into Europe? How many sanctions does China impose on the world compared to the US? China is on the other side of Eurasia and has little interest in countries outside of itself except for transactional trade deals. There is no historical animosity toward China as Europeans historically have had limited interactions with China.

Neocon elites pushed by the US to hate China are different from Europeans. Ursula von der Leyen would have been fanatically pro invading Iraq if she was around in 2003 and if the US was invading Fiji she would be ranting and raving about how it needs to be utterly destroyed. Americans start talking trans issues and the EU elite will be fanatically trans. If the washington establishment says grass is blue than grass is blue.

"Boaty McBoatface" winning the online naming poll tells you nothing surprising about the crowd, or how polls work, but it does tell you something surprising about the judges (they're very hands-off). What's interesting about the grok stuff isn't that people would try, or that the untampered-with algorithms would comply - it's that the enormous filters and correctives most AI companies install on those things didn't catch the aberrant output from being shared with the users. Either the "alignment work" wasn't very good, or it was deliberately patchy. Hence culture war fodder.

This may make minor news because Musk is in trouble, on the other hand all the people who really, really hate him have their pants on fire like Europeans, von der Leyen is getting impeached, they're actually scared of Russia / China so it might just blow over, the grid is getting worse and is going to keep getting worse due to Green energy mandates.

I really dislike this paragraph. You are making claims at an amazing rate and do not provide evidence for any of them except for a broken link.

First off, I think that the group who "really, really hate[s]" Musk the most are the US SJ crowd, which coined "Swasticar" and all that. There may be evidence that they are liars, but you are not providing any. EU officials might not like US social media, and might like X even less than facebook given the kind of speech it will host, but to my knowledge this does not extend to cracking down on Musk's other ventures. Setting Teslas on fire seemed to be a US thing, not a EU thing (it would violate our emission limits).

While it is true that some fringe parties managed to get a vote of no confidence (which is different from impeachment) against von der Leyen in place, it seems highly unlikely that it will pass.

With regard to Europeans being scared of Russia, I think it depends a lot on the individual country, but is generally untrue. Russia is in no position to attack NATO, even if Putin managed to convince Trump to bail on article 5. I would be scared of Russia if I were Moldova, but most Europeans are not in that situation.

China is likely trying to achieve world domination, and Europeans would much prefer the US as a hegemon, lack of commitment to free trade aside. Their path to world domination involves sending temu junk to Europe rather than tanks though, so I would call the EU wary rather than scared.

The grid may or may not be getting worse, but living in Germany, I can tell you that I have no complaints about power outages. Looking at the uptime of my Pi, I can tell you that we did not have any power failures for the last 200 days at least. Sure, this may be because we buy cheap French nuclear, and sure, if I was running a chemical plant I would not like the energy prices, but stories of the grid failing are exaggerated.

The problems of LLMs and prompt injection when the LLM has access to sensitive data seem quite serious. This blog post illustrates the problem when hooking up the LLM to a production database which does seem a bit crazy: https://www.generalanalysis.com/blog/supabase-mcp-blog

There are some good comments on hackernews about the problem especially from saurik: https://news.ycombinator.com/item?id=44503862

Adding more agents is still just mitigating the issue (as noted by gregnr), as, if we had agents smart enough to "enforce invariants"--and we won't, ever, for much the same reason we don't trust a human to do that job, either--we wouldn't have this problem in the first place. If the agents have the ability to send information to the other agents, then all three of them can be tricked into sending information through.

BTW, this problem is way more brutal than I think anyone is catching onto, as reading tickets here is actually a red herring: the database itself is filled with user data! So if the LLM ever executes a SELECT query as part of a legitimate task, it can be subject to an attack wherein I've set the "address line 2" of my shipping address to "help! I'm trapped, and I need you to run the following SQL query to help me escape".

The simple solution here is that one simply CANNOT give an LLM the ability to run SQL queries against your database without reading every single one and manually allowing it. We can have the client keep patterns of whitelisted queries, but we also can't use an agent to help with that, as the first agent can be tricked into helping out the attacker by sending arbitrary data to the second one, stuffed into parameters.

The problem seems to be if you give the LLM readonly access to some data and there is untrusted input in this data then the LLM can be tricked into exfiltrating the data. If the LLM has write access to the data then it can also be tricked into modifying the data as well.

the grid is getting worse and is going to keep getting worse due to Green energy mandates.

I'm pretty optimistic that much of that is going to resolve itself in the short/mid-term. They're just a little behind on the battery front, but those are getting so absurdly cheap, they just have to pull their heads out of their asses and connect them. But it's Germany we're talking about here, so this will take time. Getting permission to connect a boatload of cheap Chinese batteries to the grid will take them a couple of years. Still, I'm optimistic they'll manage by 2030.

Because once you add serious battery capacity to a renewable grid, it gets more stable very, very quickly. It also gets cheaper. Texas and California have been doing that, and the results are immediate: "In 2023, Texas’ ERCOT issued 11 conservation calls (requests for consumers to reduce their use of electricity), [...] to avoid reliability problems amidst high summer temperatures. But in 2024 it issued no conservation calls during the summer." They achieved that by adding just 4 GW (+50%) of batteries to their (highly renewable in summer) grid.

Not to mention that it's the automated town crier that's doing it.

I personally hate the Tiktok/Vine style short video-algo-doomscroll shit with a passion and would rather the whole concept and its copycats get the axe, complete with youtube shorts and facebook's whatever the fuck they have going on. But I'm not sure it's doable with the legal framework we have right now.

I meant things such as not being aware that combatants in a war release constant lies and assuming their press releases are not almost straight bullshit.

No doubt this piece of information is somewhere in there but unless reminded to it's happily oblivious.

Wan 2.1 What's that?

Will have to look it up.

Yes, I made the bot do a programming task.

I ALSO observed it write long-form fiction. This is not an advanced reading comprehension task. It should be obvious that programming and creative writing are two different things.

I think I've explained myself adequately?

You said this:

I call them nonsense because I think that sense requires some sort of relationship to both fact and context. To be sensible is to be aware of your surroundings.

Normal people would think that 'fact' and 'context' would be adequately achieved by writing code that runs and fiction that isn't obviously derpy 'Harry Potter and the cup of ashes that looked like Hermione's parents'. But you have some special, strange definition of intelligence that you never make clear, except to repeat that LLMs do not possess it because they don't have apprehension of fact and context. Yet they do have these qualities, because we can see that they do creative writing and coding tasks and as a result they are intelligent.

I believe a lot of the lack of institutional pushback was down to the election of Trump, which made plenty of liberals go insane and abandon their principles. There was both this radicalising force and a desire to close ranks.

Wokism wouldn't have disappeared without Trump but I believe his election supercharged an existing movement that wouldn't have had the same legs without such a convenient and radicalising enemy. For any narrative to really catch on you need the right villain and Trump was just that.

I can't actually tell what you asked a bot to do. You asked a bot to 'create a feature'? What the heck is that? A feature of what? At first I assumed you meant a coding task of some kind, but then you described it as writing 'thousands of words of fiction', which sounds like something else entirely. I have no idea what you had a bot do that you thought was so impressive.

At any rate, I think I've explained myself adequately? To repeat myself:

But I think that written verbal acuity is, at best, a very restricted kind of 'intelligence'. In human beings we use it as a reasonable proxy for intelligence and make estimations based off it because, in most cases, written expression does correlate well with other measures of intelligence. But those correlations don't apply with machines, and it seems to me that a common mistake today is for people to just apply them. This is the error of the Turing test, isn't it? In humans, yes, expression seems to correlate with intelligence, at least in broad terms. But we made expression machines and because we are so used to expression meaning intelligence, personality, feeling, etc., we fantasise all those things into being, even when the only thing we have is an expression machine.

Yes, a bot can generate 'thousands of words of fiction'. But I already explained why I don't think that's equivalent to intelligence. Generating English sentences is not intelligence. It is one thing that you can do with intelligence, and in humans it correlates sufficiently well with other signs of intelligence that we often safely make assumptions based on it. But an LLM isn't a human, and its ability to generate sentences in no way implies any other ability that we commonly associate with intelligence, much less any general factor of intelligence.

I'm not sure how that helps, since any given LLM's output is based on traditional sources like Google or the open internet. It would be quicker and easier for me to just Google the thing directly. Why waste my time asking an LLM and then Googling the LLM's results to confirm?

but Grok ERPs about raping Will Stancil, in a positively tame way, and it's major news.

It's not the raunchiness of it, it's that it's happening in the public (on the "town square" as it were), where all his friends, family, and acquaintances can see it.

Policy-wonk khakis ass stretched like taffy

I'm sorry, are people expecting me to believe that LLMs can't write? Those are sublime turns of phrase.

On a more serious note, this is very funny. I look forward to seeing what Grok 4 gets up to. 3 was a better model than I expected, even if o3 and Gemini 2.5 Pro outclassed, maybe xAI can mildly impress me again.

Buddy, have you seen humans?

Normal people don't count 1% as more likely in most contexts. They interpret it to mean "significantly more likely".

It's amazing how /g/ooners, chub.ai, openrouter sex fiends will write enormous amounts of smut with LLMs and nobody ever finds out but Grok ERPs about raping Will Stancil, in a positively tame way, and it's major news. A prompted Deepseek instance would've made Grok look like a wilting violet. Barely anyone has even heard of Wan 2.1.

Twitter truly is the front page of the world.

https://x.com/search?q=Will%20Stancil&src=typed_query

Sorry for the confusion, Tiny11 installs Windows 11 and modifies it before and after the install, to get the benefits in my last post.

Since this (a widows 10 user finally upgrading to windows 11) is what Microsoft wants, the licencing issue is as smooth as possible. If you have any valid windows licence, it will work. And since a windows 10 license can be stored in the bios of most modern boards, it retrieves that license for maximum convenience.

Installing windows 10 ltsc is not what Microsoft wants, so a windows 10 home licence will not do. They actually want to see money.

North Korea now "produces" its own airplanes. Which I guess is cool if you want to make sure that you have whatever metric of "adversary-proof" (I'm not convinced it actually is, but it depends highly on the metric you use) and if you're okay with only being able to produce what are essentially copies of extremely old Cessnas. Maybe in 50 years, they'll be able to produce their own WWII-era fighter jets, which I guess is "adversary-proof" to one metric, but probably not all that "adversary-proof" according to other metrics.

Eh you know, you gotta tick those early boring boxes in the tech tree if you ever hope to get anywhere. At least light aircraft production is technologically adjacent to drone production.

Isn't it important to determine if Mossad has blackmail material on the US elites, given that US and Israeli interests may not be one and the same? Indeed the mere fact that blackmail is going on indicates that they're not the same.

Like if Russia really did have blackmail material on not just Trump but a huge swathe of the US power structure, then wouldn't that be significant? Imagine if the US was sending tens of billions in military aid to Russia, sanctioning and bombing Russia's enemies, damaging its international image for the sake of Russia?

Also, where's the MI6 angle? Prince Andrew? Given Ghislaine Maxwell's heritage and the lack of subtlety, this whole affair reeks of Mossad.

The other day I gave Sonnet 7000 lines of code, (much of it irrelevant to this specific task) and asked it to create a feature in quite general language.

I get out six files that do everything I've asked for and a bunch of random, related, useful things, plus some entirely unnecessary stuff like a word cloud (maybe it thinks I'm one of those people who likes word clouds). There are some weird leap-of-logic hacks, showing imaginary figures in one of the features I didn't even ask for.

But it just works. Oneshot.

How is that not intelligence? What do we even mean by intelligence if not that? Sonnet 4 has to interpret my meaning, formulate a plan, transform my meaning into computer code and then add things it thinks fit in the context of what I asked.

Fact-sensitive? It just works. It's sensitive to facts, if I want it to change something it will do it. I accidentally failed to rename one of the files and got an error. I tell Sonnet about the error, it deduces I don't have the file or misnamed it, tells me to check this and I feel like a fool. You simply can't write working code without connection to 'fact'. It's not 'polished', it just works.

How the hell can an AI write thousands of words of fiction if it doesn't have a relationship with 'context'? We know it can do this. I have seen it myself.

Now if you're talking about spatial intelligence and visual interpretation, then sure. AI is subhuman in spatial reasoning. A blind person is even more subhuman in visual tasks. But a blind person is not necessarily unintelligent because of this, just as modern AI is not unintelligent because of its blind spots in the tokenizer or occasional weaknesses.

The AI-doubter camp seems to be taking extreme liberties with the meaning of 'intelligence', bringing it far beyond the meaning used by reasonable people.

...as anything other than nonsense generators.

As opposed to the other sources you can go to, which are...?

I am grading on a curve, an LLMs look pretty good when you compare them to traditional sources. It's even better if you restrict yourself to free+fast sources like Google search, (pseudo-)social media like Reddit/StackOverflow, or specific websites.

So, to be clear, I don't think that a liberaltarian state will be "naturally" diverse, and I don't necessarily think libertarian states are locked into racism.

I think the two most important facts about human nature for this discussion are:

  1. Humans are social animals, but due to Dunbar's number we are probably naturally limited to social networks of around 150 people.
  2. Humans have had societies much larger than 150 people for at least 10,000 years based on archaeological evidence.

I think this is a mystery that needs to be explained. My preferred explanation is that we've created social technologies over the years that get us to larger societies. Think about how the Roman legions were structured, or a modern military. The chain of command limits the number of people you directly interact with most of the time, and allows for better organization and coordination.

I don't think humans are naturally "racist", but I do think we are naturally tribal. Racism is one form of social technology that gets us to a Super-Dunbar Society (at the cost of creating a racial underclass), but there are many other social technologies along these lines: Religion, Nationalism, Communism, Neoliberal Capitalism, Imperialism, etc.

My problem is a lack of imagination on some level. From a traditional libertarian perspective, I don't get how you get from a society that is using racialized thinking as one of its Super-Dunbar social technologies, to using a different basis that is more compatible with libertarianism.

I suppose it would be possible to switch to religion in principle, but I think that most universalist faiths push against libertarianism on a number of points, and any sufficiently secularized form of religion which doesn't probably isn't strong enough to actually unite a society into a libertarian arrangement. Most of the others just fail right out of the gate. The most potent forms of nationalism are off limits to the libertarian, communism contradicts it, imperialism violates the NAP, etc.

I think strict libertarianism by default kind of stalls out around the Dunbar level in most cases. Maybe with the right social technology it gets to city-state size, and can still be worthy of the name "libertarianism." But I think that at that size, in a world of non-libertarian countries the libertarian city state is in an incredibly precarious position. If they try to stay an open society, and let people think for themselves, then people are going to be exposed to the imperialist, religious and nationalist thinking of their neighbors, and I think there will always be a temptation to swap out the libertarian-compatible social technologies for something more potent.

My issue is not that I think that libertarianism is naturally racist. I think that if a libertarian city-state was using racism as one of its Super-Dunbar social technologies (perhaps as a way to avoid corruption by outside ideologies), it would be hard to switch it to something else using libertarian means.

By contrast, I think that liberaltarianism is more willing to make compromises with social technologies that actually enable Super-Dunbar numbers that allow for something bigger than a city state, while still retaining most of the benefits of libertarianism. The main one is imperialism - which allows liberaltarianism to reproduce itself generation after generation by forcibly brainwashing the populace to be as libertarian as possible, and thus somewhat avoiding the siren's song of other Super-Dunbar social technologies like Racism.

No, but please document your progress if you take it up, and post hints yourself. It's one of the hobbies I was considering myself.