
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user  
joined 2022 September 06 20:44:12 UTC

User ID: 884

User ID: 884

I present to you: nobody.

... I see a lot of you arguing that The_Nybbler believes that giving an inch here is a bad idea because they think that a tiny regulation will directly kill innovation, while The_Nybbler is arguing that there's no particular reason for the regulators who introduced this legislation to stop at only implementing useful regulations that pass cost-benefit analysis, and that the other industries we see do seem to have vastly overreaching regulators, and so a naive cost-benefit analysis on a marginal regulation which does not factor in the likely-much-larger second-order effects is useless (though @The_Nybbler do correct me if I'm wrong about this, and you think introducing regulation would be bad even if the first-order effects of regulation were positive and there was some actually-credible way of ensuring that the scope of the regulation was strictly limited).

Honestly I think both of you could stand to focus a bit more on explaining your own positions and less on arguing against what you believe the other means, because as it stands it looks to me like a bunch of statements about what the other person believes, like "you argue that the first-order effects of the most defensible part of this regulation are bad, but you can't support that" / "well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case.

Quoting the examples:

Example 1: The root keys involved in authorization and access to licensed radio frequencies (e.g. LTE-m cellular access) are stored in a UICC.

Ok, fair enough, I can see why you would want to prevent users from accessing these particular secrets on the device they own (because, in a sense, they don't own this particular bit). Though I contend that the main "security" benefit of these is fear of being legally slapped around under CFAA.

Example 2: A remote controlled door-lock using a Trusted Execution Environment (TEE) to store and access the sensitive security parameters.

Seems kinda pointless. If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house. And if a remote attacker has the ability to read the flash storage because they have gained the ability to execute arbitrary code, they can presumably just directly send the outputs which unlock the door without mucking about with the secrets at all.

Example 3: A wireless thermostat stores the credentials for the wireless network in a tamper protected microcontroller rather than in external flash storage.

What's the threat model we're mitigating here, such that the benefit of mitigating that threat is worth the monetary and complexity cost of requiring an extra component on e.g. every single adjustable-color light bulb sold?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

On examination, I misread, and you are correct about what the document says.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money." That seems worse, not better. What's the threat model this is supposed to be defending against? Is this a good way of defending against this threat model?

I think it's good old issue #594 back from the dead.

I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' bankruptcy a slippery slope, but we know that many end up not.

Nobody is arguing that "the moment any regulation is in place, it is inevitable that we will slide all the way down the slippery slope of increasing regulation and all innovation in that industry will die". The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope. That chance may be worth it, if the step is small and the benefit of the regulation is large, but in the case of the entirety of ETSI EN 303 645 (not just section 5.1 in isolation), I don't think that's the case, and I certainly don't think it's a slam-dunk that it's worth the cost.

Section 5.1, "You are not allowed to use a default password on a network interface as the sole means of authorization for the administrative functions of an IoT device", if well-implemented, is probably such a high-benefit low-risk regulation.

Section 5.4.1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

They double down on the idea that manufacturers can make something robust to physical access in section 5.4.2, "where a hard-coded unique per device identity is used in a device for security purposes, it shall be implemented in such a way that it resists tampering by means such as physical, electrical or software."

And then there's perplexing stuff like 5.6.4, "where a debug interface is physically accessible, it shall be disabled in software". Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network, and instead have to expose them only over the USB-C port? I would guess not, but I'd also guess that if I were in the UK, the legal team at my company would be very unhappy if I just went with my guess without consulting them.

And that's really the crux of the issue: introducing regulation like this means that companies now have to choose between exposing themselves to legal risk, making dumb development decisions based on the most conservative possible interpretation of the law, or involving the legal department far more frequently in development decisions.

Excellent post!

In general I think we should be suspicious of any public program that tries to hide its costs, or launder those costs onto private actors.

Another example of this is AML/KYC regulations, which basically require banks to serve as a branch of law enforcement at their own expense. From the excellent Bits About Money post on the topic:

Money laundering is, effectively, a process crime. We criminalized it not because of the direct harms, but because it tends to make other interdiction of criminal activity more difficult.

Money laundering covers anything which obscures the link between another crime and the proceeds of that crime. This is intentionally extremely vague and expansive. The victim is, take your pick, either the state or the financial institutions the state has deputized to detect it. [...]

Much like KYC, AML policies are recursive stochastic management of crime. The state deputizes financial institutions to, in effect, change the physics of money. In particular, it wants them to situationally repudiate the fungibility of money. [...] They are required to have policies and procedures which will tend to, statistically, interdict some money laundering and (similar to how we discussed for KYC) trigger additional crimes when accessing the financial system. Particularly in U.S. practice, one sub-goal of this is maximizing the amount of assets which will be tainted by money laundering and then subject to forfeiture proceedings. [...]

And so every financial institution of any size has a Compliance department. One of their functions is having a technological system which will sift through the constant stream of transactions they produce and periodically fire “alerts.” Those alerts go to an analyst for review.

This implies floors upon floors of people who read tweet-length descriptions of financial transactions and, for some very small percentage, click a Big Red Button and begin documenting the heck out of everything. This might sound like a dystopian parody, and it is important to say specifically that this is not merely standard practice but is functionally mandatory. [...]

I think the thing that cryptocurrency enthusiasts are rightest about, which is broadly underappreciated, is that the financial system has been deputized to act as law enforcement. [...] Is this tradeoff worth it? I wish that society and policymakers more closely scrutinized the actual results obtained by AML policies. Plausibly we get sufficient value out of AML to have people attend mandatory diversity training at 1 PM, Banking the Underbanked seminar at 2 PM, and then AML training at 3 PM, while experiencing very little cognitive dissonance. But if that case can be made, then let it be made. I find the opposing case, that AML consumes vast resources and inconveniences legitimate users far out of proportion to positive impact on the legitimate interest of society in interdicting crime, to be very persuasive.

Sorry for the giant quote-post. I don't have much to say about this topic beyond "the above article is fully consistent with what little experience I have of AML/KYC from my work". I just think that article is very very good and also quite relevant to the topic at hand.

My solution would be to simply make vendors liable for damages caused by security flaws of their devices, up to say 10 times the sticker price.

I suspect 1x the sticker price would be more than sufficient if it happened reliably.
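The "1x is enough if it's reliable" intuition is just expected-cost arithmetic. Here's a minimal sketch; every number below is a made-up assumption for illustration, not data about any real product:

```python
# Hypothetical numbers for a cheap IoT gadget; illustrative only.
sticker_price = 30.00   # assumed retail price of the device
profit_margin = 0.10    # assumed per-unit profit margin
flaw_rate = 0.05        # assumed fraction of units whose security flaw
                        # ends up causing a compensable claim

profit_per_unit = sticker_price * profit_margin        # ~$3.00

# Expected liability per unit sold, if claims are paid reliably:
liability_1x = flaw_rate * sticker_price * 1           # ~$1.50
liability_10x = flaw_rate * sticker_price * 10         # ~$15.00

print(profit_per_unit, liability_1x, liability_10x)
```

Under these assumed numbers, reliable 1x liability already eats half the per-unit profit, which is plenty to change behavior; the reliability of enforcement does more work than the multiplier.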

Ctrl+0 will get you back to the 100% zoom setting.

I like such a feature, and several programs that I use on Linux have it

Such as the famously-modern program vim.

Yes, because the baseline for "randomly guessing" is 1/5612 ("twitter user @fluffyporcupine matches this specific one of the 5612 facebook users"), not 1/2 ("twitter user @fluffyporcupine is/is not the same user as facebook user Nancy Prickles").
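The baseline arithmetic, as a quick sketch (taking the 5612-user pool size from the discussion as given):

```python
# Chance level depends on how you frame the matching task.
n_facebook_users = 5612  # candidate pool size from the discussion above

# Framing 1: pick which one of the 5612 Facebook accounts matches a
# given Twitter account -- a 1-in-N guess.
p_match = 1 / n_facebook_users

# Framing 2: answer "same user, yes or no?" for a single proposed
# pair -- a coin flip.
p_binary = 1 / 2

print(p_match)   # ~0.000178
print(p_binary)  # 0.5
```

So a classifier that "beats 50%" on yes/no pairs is making a far weaker claim than one that beats 1/5612 on full matching.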

Doesn't scare me for personal reasons -- I'm trivially identifiable, you don't need to resort to fancy ML techniques. But if you're actually trying to remain anonymous, and post under both your real name and a pseudonym, then perhaps it's worth paying attention to (e.g. spinning up separate throwaway accounts for anything you want to say that is likely to actually lead to significant damage to your real-world identity and doing the "translate to and from a foreign language to get changed phrasing without changed meaning" thing).

$100k for the machinery seems plausible to me -- you can see details of their proposed setup here (relevant sections are "Carbon capture", "Electrolyzer", and "Chemical reactor", the rest of that post is fluff). "Low maintenance" remains to be seen, but there's no reason in principle that it couldn't be.

But again, the viability of the entire project rests on the idea that in some places the marginal cost of power will be close to zero or even negative a substantial fraction of the time, and yet those places are accessible enough to construct one of these plants. If that's not the way the future pans out, this project winds up not being so viable.

So the answer to

So how is this thing supposed to be competitive with natural gas in any reasonable place?

is "It's not. But not every place is reasonable."

Why can't it be real? The Haber-Bosch process is at least as impactful an "air + energy + water -> bulk useful material" process, and it's real and cost-effective.

Anyone who comes up with some process that

  1. Has low infrastructure costs
  2. Produces some industrially valuable product
  3. Spins up and down quickly, and tolerates long idle periods (i.e. starts producing the product as soon as you feed it power, stops when you stop feeding it power, and doesn't have issues if it doesn't start again for a long time)

has a license to print money when power costs dip to zero or below. Which they already do from time to time, and if solar power continues to be deployed more and more, that situation will happen more often.

Terraform's "power -> methane" thing certainly isn't efficient, compared to other forms of grid energy storage, but what it is is scalable. Basically it seems to be a bet on "power prices will be zero / negative some fraction of the time in some locations", which seems likely to happen if solar keeps being deployed at the current rate, or if any country anywhere in the world gets serious about fission power.
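The spin-up-when-power-is-cheap logic sketches out like this. Every number here is a hypothetical assumption for illustration; it is not Terraform's actual economics:

```python
def hourly_margin(power_price, product_value=30.0, mwh_consumed=1.0):
    """Margin from one hour of operation at a given spot power price ($/MWh).

    product_value: assumed revenue from the methane produced per MWh of
    electricity consumed -- a made-up figure that bakes in conversion
    losses and gas prices.
    """
    return (product_value - power_price) * mwh_consumed

def should_run(power_price):
    # The "spins up and down quickly" property is what makes this
    # dispatch rule viable: run only when the margin is positive.
    return hourly_margin(power_price) > 0

print(should_run(-20.0))  # True: paid to consume power, plus product revenue
print(should_run(0.0))    # True: free power, pure product revenue
print(should_run(80.0))   # False: idle and wait for cheap hours
```

The inefficiency the parent mentions only shrinks `product_value`; as long as it stays positive, zero-or-negative power prices guarantee a positive margin.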

What is a hyperdunbarist? Googling the term literally shows only this comment.

Also useful if your car has bluetooth but it's janky.

I drove a Corolla until it started giving me trouble (around 300,000 km), followed by a Prius until that started giving me trouble (around 400,000 km); both were IMO quite good cars. I think you should be able to get a lightly used one that is <10 years old within your budget in Scotland, and that should have all the creature comforts you want.

That said, for bluetooth specifically, for $20, you can get a thing which plugs into the cigarette lighter of a car and does bluetooth pairing and then broadcasts to a radio frequency (choose a dead channel), which you can then tune your car radio to. In my experience they work well enough that you never think about them once you've done the initial 2 minutes of setup - your phone just automatically pairs when you get in the car, and the car speakers play what your phone is playing.

I think the guideline should be "the topic keeps coming up over and over again in the threads for separate weeks, and the conversation in the new week tends to reference the conversation in older weeks". Covid, when it was a thing, absolutely qualified as that. Russia/Ukraine and Israel/Palestine were somewhat less of this, since each week's thread tended to be about current events more than about continuing to hash out an ongoing disagreement. Trans stuff, I think, qualifies for this, as it does seem to be the same people having the same discussion over and over. Can't think of too many other examples.

Don't pin it and I think it's fine. The people who want to have that discussion can subscribe to the thread. A second such containment thread for rationalist inner-circle social drama would also be nice. Maybe a third for trans stuff.

I think "topics that tend to suck all the air out of the room when they get brought up go to their own containment thread, anyone who cares to discuss that topic can subscribe to the thread, containment threads only get pinned if there's at least a quarter as much past-activity in them as in the typical CW thread" would probably be an improvement.

TBH if someone is put off by the fact that holocaust denial stuff gets put in a dedicated thread rather than banned I think they would probably be put off by the speech norms here anyway, best that they discover that early. I personally find the discussion tiresome and poorly argued, but I don't think there's a low-time-investment way to moderate on that basis, at least not yet. Maybe check back in 3 years and LLMs will be at that point, but for the time being.

All that said, I am not a mod, nor am I volunteering to spend the amount of time it would take to be a mod, so ultimately the decision should be made by the people who are putting in that time and effort.

The topic of this thread isn't "the evidence we have about the history of World War II", it's "internal discussion and navel gazing about what norms we want to have in this community", which is a topic of endless interest on this site. A similar thing happens on any thread that mentions Aella.

The constant debates between the Napoleon deniers and their opponents are sucking all the air out of the room. What do you do?

Containment thread? It worked pretty well for covid, when covid stuff was sucking all the air out of the room.

I challenge the premise "somewhat optimized"; we are currently living in a dysgenic age.

The optimization happened in the ancestral environment, not over the last couple hundred years. The current environment is probably mildly dysgenic, but the effect is going to be tiny because that environment just hasn't been around for very long.

Alternatively, we could just skip detection on which alleles have low IQ and just eliminate very rare alleles, which are much more likely to be deleterious (e.g. replace allele with frequency below given threshold with its most similar allele with frequency above threshold) without studying any IQ.

I expect this would help a bit, just would be surprised if the effect size was actually anywhere near +1SD.

In your hypothetical bet, how would the result "IQ as intended, but the baby's brain is too large for it to be delivered naturally" count?

If the baby is healthy otherwise, that counts just fine.

Congratulations!

In terms of why I'm not so active it's mostly the "had a kid 2 months ago" thing, not anything to do with Motte quality.

This line of argument reminds me of the "to get people to ride public transit, you don’t have to fix the issues with public transit, you just have to make the experience of traveling by car much much worse" argument I see sometimes.

I think the relationship between game theory and morality is more like the one between physics and engineering. You can't look at physics alone to decide what you want to build, but if you try to do novel engineering without understanding the underlying physics you're going to have a bad time. Likewise, game theory doesn't tell you what is moral and immoral, but if you try to make some galaxy-brained moral framework, and you don't pay attention to how your moral framework plays out when multiple people are involved, you're also going to have a bad time.

Though in both cases, if you stick to common-sense stuff that's worked out in the past in situations like yours, you'll probably do just fine.

Morality has nothing to do with game theory

I disagree pretty strongly with that -- I think that "Bob is a moral person" and "people who are affected by Bob's actions generally would have been worse off if Bob's actions didn't affect them" are, if not quite synonymous, at least rhyming. The golden rule works pretty alright in simple cases without resorting to game theory, but I think game theory can definitely help in terms of setting up incentives such that people are not punished for doing the moral thing / incentivized to do the immoral thing, and that properly setting up such incentives is itself a moral good.

If you have a bunch of physical resources you could use to build infrastructure which will provide a moderate amount of value per year over the coming decades, or in goods which will provide a large amount of value now but no further value in the future, that gives you the options of "invest in the future" vs "consume now". If the default action is "invest in the future", and you make the decision to consume now instead, I think that reasonably counts as "borrowing against the future".

On the object level of this thread, it's debatable whether allowing more immigration is borrowing against the future or investing in the future, and it probably depends to some extent on how generous you expect future entitlements to be, but "is our current policy borrowing against the future" is a real and meaningful question.