faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user   joined 2022 September 06 20:44:12 UTC

No bio...

User ID: 884

Then, moreover, they know that there have been many high-profile instances of products shipping, having an interface exposed that is trivially-attackable, and when it's attacked, the manufacturers ignore it and just say some bullshit about how it was supposed to just be for the manufacturer for debugging purposes, so they're not responsible and not going to do anything about it.

Was "lol we didn't mean to leave that exposed" a get-out-of-liability-free card by UK laws before this guidance came out? If so, I can see why you'd want this. If not, I'd say the issue probably wasn't "not enough rules" but rather "not enough enforcement of existing rules" and I don't expect "add more rules" to be very useful in such a case, and I especially don't expect that to be true of rules that look like "you are legally required to use your best judgement".

It's a bullshit thing by bad entity manufacturers who don't care.

I agree, but I don't think it's possible to legally compel companies to thoughtfully consider the best interests of their users.

Honestly, I probably would have not done as good of a job if I had tried to put this set of ideas together from scratch myself.

Neither would I. My point wasn't "the legislators are bad at their job", it was "it's actually really really hard to write good rules, and frequently having bad explicit rules is worse than having no explicit rules beyond 'you are liable for harm you cause through your negligence'".

So in your interpretation, 5.6-4 could be replaced by "list the communication interfaces your product has. For each of them, either ensure the interface is disabled or state that the interface is intentionally enabled because a nonzero number of your customers want it to be enabled in a nonzero number of situations".

I think that would be fine, if so, but I don't understand why provisions 5.6-3 and 5.6-4 would be phrased the way they are if that were the case.

I think it largely depends on which forest we're talking about. If you're through-hiking the John Muir Trail, you would obviously much rather encounter another person (who is probably a hiker) than a bear (which definitely has a much much lower than 25% chance of attacking you, but still isn't something you want to encounter).

But if you're bushwhacking through the Emerald Triangle, and you hear a rustling in the bushes, you are probably relieved to find out that it's a bear.

Again, that interpretation is nice if correct. Can you point to anything in the document which supports the interpretation that saying "We have assessed that leaving this debug interface provides user benefit because the debug interface allows the user to debug" would actually be sufficient justification?

My mental model is "it's probably fine if you know people at the regulatory agency, and probably fine if you don't attract any regulatory scrutiny, and likely not to be fine if the regulator hates your guts and wants to make an example of you, or if the regulator's golf buddy is an executive at your competitor". If your legal team approves it, I expect it to be on the basis of "the regulator has not historically gone after anyone who put anything even vaguely plausible down in one of these, so just put down something vaguely plausible and we'll be fine unless the regulator has it out for us specifically". But if anything goes as long as it's a vaguely plausible answer at something resembling the question on the form, and as long as it's not a blatant lie about your product where you provably know that you're lying, I don't expect that to help very much with IoT security.

And yes, I get that "the regulator won't look at you unless something goes wrong, and if something does go wrong they'll look through your practices until they find something they don't like" is how most things work. But I think that's a bad thing, and the relative rarity of that sort of thing in tech is why tech is one of the few remaining productive and relatively-pleasant-to-work-in industries. You obviously do sometimes need regulation, but in a lot of cases, probably including this one, the rules that are already on the books would be sufficient if they were consistently enforced. In practice they are rarely enforced, and the conclusion people come to is "the current regulations aren't working and so we need to add more regulations" rather than "we should try being more consistent at sticking to the rules that are already on the books". So you end up with even more vague regulations that most companies make token attempts to cover their asses on but otherwise ignore, and you end up in a state where full compliance with the rules is impractical but also generally not expected, until someone pisses off a regulator, at which point their behavior becomes retroactively unacceptable.

Edit: As a concrete example of the broader thing I'm pointing at, HIPAA is an extremely strict standard, and yet in practice hospital systems are often laughably insecure. Adding even more requirements on top of HIPAA would not help.

That's a nice legal theory you have there.

Let's say you're an engineer at one such company, and you want to expose a UART serial interface to allow the device you're selling to be debuggable and modifiable for the subset of end-users who know what they're doing. You say "this is part of the consumer-facing functionality". The regulator comes back and says "ok, where's the documentation for that consumer-facing functionality" and you say "we're not allowed to share that due to NDAs, but honest, this completely undocumented interface is part of the intended consumer-facing functionality".

How do you expect that to go over with the regulator? Before that, how do you expect the conversation with the legal department at your company to go when you tell them that's your plan for what to tell the regulator if they ask?

Searching "debug interface", I see three places:

The first is on page 10, in section 3.1 (Definition of terms, symbols and abbreviations: Terms)

debug interface: physical interface used by the manufacturer to communicate with the device during development or to perform triage of issues with the device and that is not used as part of the consumer-facing functionality

EXAMPLE: Test points, UART, SWD, JTAG.

The second is on page 20, in section 5.6 (Cyber security provisions for consumer IoT: Minimize exposed attack surfaces)

Provision 5.6-4 Where a debug interface is physically accessible, it shall be disabled in software.

EXAMPLE 5: A UART serial interface is disabled through the bootloader software on the device. No logon prompt and no interactive menu is available due to this disabling.

The third is on page 32, in Table B.1: Implementation of provisions for consumer IoT security, where, at the bottom of the table, there is a "conditions" section, and "13) a debug interface is physically accessible" is the 13th such condition:

Provision 5.6-4 M C (13)

For reference:

M C: the provision is a mandatory requirement and conditional

NOTE: Where the conditional notation is used, this is conditional on the text of the provision. The conditions are provided at the bottom of the table with references provided for the relevant provisions to help with clarity.

So, to my read, the provision is mandatory, conditional on the product having a debug interface at all.

"But maybe they just meant that debug interfaces can't unintentionally be left exposed, and it should be left to the company to decide whether the benefits of leaving a debug interface open are worthwhile", you might ask. But we have an example of what it looks like when ETSI wants to say "the company should not accidentally leave this open", and it looks like

Provision 5.6-3 Device hardware should not unnecessarily expose physical interfaces to attack.

Physical interfaces can be used by an attacker to compromise firmware or memory on a device. "Unnecessarily" refers to the manufacturer's assessment of the benefits of an open interface, used for user functionality or for debugging purposes.

Provision 5.6-4 has a conspicuous absence of the word "unnecessarily" or any mention of things like the manufacturer's assessment of the benefits of an open interface.

So coming back to

They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user.

Can you state where exactly in the document it states this, such that someone developing a product could point it out to the legal team at their company?

Another part is that merely having a regulatory framework transforms your industry from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented" [...] They didn't get into the field to sit in meetings where you discuss whether the SSDD paragraph 2.0.2.50 is properly related to the SDD paragraph 3.1.2, the ICD paragraph 4.1.2.5, and the STP paragraph 6.6.6, which lines of code implement SDD paragraph 3.1.2, and to make sure the SIP properly specifies the update procedures

Is this the bitter voice of experience of someone who has worked on software for the financial industry?

and drives the sort of people who do innovative work out of the field

In my experience, companies that operate in compliance-heavy industries that also have hard technical challenges frequently are able to retain talented developers who hate that kind of thing, either by outsourcing to Compliance-As-A-Service companies (Stripe, Avalara, Workday, DocuSign, etc) or by paying somewhat larger amounts of money to developers who are willing to do boring line-of-business stuff (hi). Though at some point most of your work becomes compliance, so if you don't have enough easily compartmentalized difficult technical problems the "offload the compliance crap" strategy stops working. I know some brilliant people work at Waymo, which has a quite high compliance burden but also some incredibly crunchy technical problems. On the flip side, I can't imagine that e.g. ADP employs many of our generation's most brilliant programmers.

I present to you: nobody.

... I see a lot of you arguing that The_Nybbler believes that giving an inch here is a bad idea because they think that a tiny regulation will directly kill innovation, while The_Nybbler is arguing that there's no particular reason for the regulators who introduced this legislation to stop at only implementing useful regulations that pass cost-benefit analysis, and that the other industries we see do seem to have vastly overreaching regulators, and so a naive cost-benefit analysis on a marginal regulation which does not factor in the likely-much-larger second-order effects is useless (though @The_Nybbler do correct me if I'm wrong about this, and you think introducing regulation would be bad even if the first-order effects of regulation were positive and there was some actually-credible way of ensuring that the scope of the regulation was strictly limited).

Honestly I think both of you could stand to focus a bit more on explaining your own positions and less on arguing against what you believe the other means, because as it stands it looks to me like a bunch of statements about what the other person believes, like "you argue that the first-order effects of the most defensible part of this regulation are bad, but you can't support that" / "well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case.

Quoting the examples:

Example 1: The root keys involved in authorization and access to licensed radio frequencies (e.g. LTE-m cellular access) are stored in a UICC.

Ok, fair enough, I can see why you would want to prevent users from accessing these particular secrets on the device they own (because, in a sense, they don't own this particular bit). Though I contend that the main "security" benefit of these is fear of being legally slapped around under CFAA.

Example 2: A remote controlled door-lock using a Trusted Execution Environment (TEE) to store and access the sensitive security parameters.

Seems kinda pointless. If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house. And if a remote attacker has the ability to read the flash storage because they have gained the ability to execute arbitrary code, they can presumably just directly send the outputs which unlock the door without mucking about with the secrets at all.

Example 3: A wireless thermostat stores the credentials for the wireless network in a tamper protected microcontroller rather than in external flash storage.

What's the threat model we're mitigating here, such that the benefit of mitigating that threat is worth the monetary and complexity cost of requiring an extra component on e.g. every single adjustable-color light bulb sold?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

On examination, I misread, and you are correct about what the document says.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money." That seems worse, not better. What's the threat model this is supposed to be defending against? Is this a good way of defending against this threat model?

I think it's good old issue #594 back from the dead.

I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' bankruptcy a slippery slope, but we know that many end up not.

Nobody is arguing that "the moment any regulation is in place, it is inevitable that we will slide all the way down the slippery slope of increasing regulation and all innovation in that industry will die". The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope. That chance may be worth it, if the step is small and the benefit of the regulation is large, but in the case of the entirety of ETSI EN 303 645 (not just section 5.1 in isolation), I don't think that's the case, and I certainly don't think it's a slam-dunk that it's worth the cost.

Section 5.1, "You are not allowed to use a default password on an network interface as the sole means of authorization for the administrative functions of an IoT device", if well-implemented, is probably such a high-benefit low-risk regulation.

Section 5.4.1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

They double down on the idea that manufacturers can make something robust to physical access in section 5.4.2, "where a hard-coded unique per device identity is used in a device for security purposes, it shall be implemented in such a way that it resists tampering by means such as physical, electrical or software."

And then there's perplexing stuff like 5.6-4, "where a debug interface is physically accessible, it shall be disabled in software." Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port? I would guess not, but I'd also guess that if I was in the UK the legal team at my company would be very unhappy if I just went with my guess without consulting them.

And that's really the crux of the issue: introducing regulation like this means that companies now have to choose between exposing themselves to legal risk, making dumb development decisions based on the most conservative possible interpretation of the law, or involving the legal department way more frequently in development decisions.

Excellent post!

In general I think we should be suspicious of any public program that tries to hide its costs, or launder those costs onto private actors.

Another example of this is AML/KYC regulations, which basically require banks to serve as a branch of law enforcement at their own expense. From the excellent Bits About Money post on the topic:

Money laundering is, effectively, a process crime. We criminalized it not because of the direct harms, but because it tends to make other interdiction of criminal activity more difficult.

Money laundering covers anything which obscures the link between another crime and the proceeds of that crime. This is intentionally extremely vague and expansive. The victim is, take your pick, either the state or the financial institutions the state has deputized to detect it. [...]

Much like KYC, AML policies are recursive stochastic management of crime. The state deputizes financial institutions to, in effect, change the physics of money. In particular, it wants them to situationally repudiate the fungibility of money. [...] They are required to have policies and procedures which will tend to, statistically, interdict some money laundering and (similar to how we discussed for KYC) trigger additional crimes when accessing the financial system. Particularly in U.S. practice, one sub-goal of this is maximizing the amount of assets which will be tainted by money laundering and then subject to forfeiture proceedings. [...]

And so every financial institution of any size has a Compliance department. One of their functions is having a technological system which will sift through the constant stream of transactions they produce and periodically fire “alerts.” Those alerts go to an analyst for review.

This implies floors upon floors of people who read tweet-length descriptions of financial transactions and, for some very small percentage, click a Big Red Button and begin documenting the heck out of everything. This might sound like a dystopian parody, and it is important to say specifically that this is not merely standard practice but is functionally mandatory. [...]

I think the thing that cryptocurrency enthusiasts are rightest about, which is broadly underappreciated, is that the financial system has been deputized to act as law enforcement. [...] Is this tradeoff worth it? I wish that society and policymakers more closely scrutinized the actual results obtained by AML policies. Plausibly we get sufficient value out of AML to have people attend mandatory diversity training at 1 PM, Banking the Underbanked seminar at 2 PM, and then AML training at 3 PM, while experiencing very little cognitive dissonance. But if that case can be made, then let it be made. I find the opposing case, that AML consumes vast resources and inconveniences legitimate users far out of proportion to positive impact on the legitimate interest of society in interdicting crime, to be very persuasive.

Sorry for the giant quote-post. I don't have much to say about this topic beyond "the above article is fully consistent with what little experience I have of AML/KYC from my work". I just think that article is very very good and also quite relevant to the topic at hand.

My solution would be to simply make vendors liable for damages caused by security flaws of their devices, up to say 10 times the sticker price.

I suspect 1x the sticker price would be more than sufficient if it happened reliably.

Ctrl+0 will get you back to the 100% zoom setting.

I like such feature and several programs that I use on Linux have it

Such as the famously-modern program vim.

Yes, because the baseline for "randomly guessing" is 1/5612 ("twitter user @fluffyporcupine matches this specific one of the 5612 facebook users"), not 1/2 ("twitter user @fluffyporcupine is/is not the same user as facebook user Nancy Prickles").
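If it helps to make the base rate concrete, here's a quick back-of-the-envelope sketch. The 5612 is the figure quoted above; the rest is just arithmetic.

```python
# Back-of-the-envelope: what "randomly guessing" buys you when matching one
# Twitter account against a pool of 5612 Facebook users (the figure quoted above).
candidates = 5612

random_match_rate = 1 / candidates   # ~0.018%
coin_flip_rate = 1 / 2               # the wrong baseline for this task

print(f"random guessing among {candidates} candidates: {random_match_rate:.4%}")
print(f"coin-flip baseline (same/not-same framing):    {coin_flip_rate:.0%}")
# So a model that correctly matches even a few percent of accounts is doing
# hundreds of times better than chance, even though "a few percent" sounds low.
```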

Doesn't scare me for personal reasons -- I'm trivially identifiable, you don't need to resort to fancy ML techniques. But if you're actually trying to remain anonymous, and post under both your real name and a pseudonym, then perhaps it's worth paying attention to (e.g. spinning up separate throwaway accounts for anything you want to say that is likely to actually lead to significant damage to your real-world identity and doing the "translate to and from a foreign language to get changed phrasing without changed meaning" thing).

$100k for the machinery seems plausible to me -- you can see details of their proposed setup here (relevant sections are "Carbon capture", "Electrolyzer", and "Chemical reactor", the rest of that post is fluff). "Low maintenance" remains to be seen, but there's no reason in principle that it couldn't be.

But again, the viability of the entire project rests on the idea that in some places the marginal cost of power will be close to zero or even negative a substantial fraction of the time, and yet those places are accessible enough to construct one of these plants. If that's not the way the future pans out, this project winds up not being so viable.

So the answer to

So how is this thing supposed to be competitive with natural gas in any reasonable place ?

Is "It's not. But not every place is reasonable".

Why can't it be real? The Haber-Bosch process is at least as impactful of an "air + energy + water -> bulk useful material" process, and it's real and cost-effective.

Anyone who comes up with some process that

  1. Has low infrastructure costs
  2. Produces some industrially valuable product
  3. Spins up and down quickly, and tolerates long idle periods (i.e. starts producing the product as soon as you feed it power, stops when you stop feeding it power, and doesn't have issues if it doesn't start again for a long time)

has a license to print money when power costs dip to zero or below. Which they already do from time to time, and if solar power continues to be deployed more and more, that situation will happen more often.
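A toy sketch of that dispatch logic, with every number a hypothetical placeholder, looks something like this:

```python
# Toy dispatch sketch: run the plant only in hours where grid power is free or
# negatively priced. All numbers here are hypothetical placeholders.
hourly_price_usd_per_mwh = [35.0, 12.0, 0.0, -8.0, -3.0, 0.0, 22.0, 41.0]

plant_draw_mw = 1.0                   # hypothetical plant size
product_value_usd_per_mwh_in = 20.0   # hypothetical value of output per MWh consumed

profit = 0.0
for price in hourly_price_usd_per_mwh:
    if price <= 0.0:
        # Power is free (or you're paid to take it), so run the plant.
        profit += plant_draw_mw * (product_value_usd_per_mwh_in - price)
    # Otherwise sit idle, which only works if the process tolerates idling
    # and spins back up quickly (point 3 above).

print(f"profit over the sample day: ${profit:.2f}")
```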

Terraform's "power -> methane" thing certainly isn't efficient, compared to other forms of grid energy storage, but what it is is scalable. Basically it seems to be a bet on "power prices will be zero / negative some fraction of the time in some locations", which seems likely to happen if solar keeps being deployed at the current rate, or if any country anywhere in the world gets serious about fission power.

What is a hyperdunbarist? Googling the term literally shows only this comment.

Also useful if your car has bluetooth but it's janky.

I drove a Corolla until it started giving me trouble (around 300,000 km), followed by a Prius until that started giving me trouble (around 400,000 km); both were IMO quite good cars. I think you should be able to get a lightly used one that is <10 years old within your budget in Scotland, and that should have all the creature comforts you want.

That said, for bluetooth specifically, for $20, you can get a thing which plugs into the cigarette lighter of a car and does bluetooth pairing and then broadcasts to a radio frequency (choose a dead channel), which you can then tune your car radio to. In my experience they work well enough that you never think about them once you've done the initial 2 minutes of setup - your phone just automatically pairs when you get in the car, and the car speakers play what your phone is playing.

I think the guideline should be "the topic keeps coming up over and over again in the threads for separate weeks, and the conversation in the new week tends to reference the conversation in older weeks". Covid, when it was a thing, absolutely qualified as that. Russia/Ukraine and Israel/Palestine were somewhat less of this, since each week's thread tended to be about current events more than about continuing to hash out an ongoing disagreement. Trans stuff, I think, qualifies for this, as it does seem to be the same people having the same discussion over and over. Can't think of too many other examples.

Don't pin it and I think it's fine. The people who want to have that discussion can subscribe to the thread. A second such containment thread for rationalist inner-circle social drama would also be nice. Maybe a third for trans stuff.

I think "topics that tend to suck all the air out of the room when they get brought up go to their own containment thread, anyone who cares to discuss that topic can subscribe to the thread, containment threads only get pinned if there's at least a quarter as much past-activity in them as in the typical CW thread" would probably be an improvement.

TBH if someone is put off by the fact that holocaust denial stuff gets put in a dedicated thread rather than banned, I think they would probably be put off by the speech norms here anyway; best that they discover that early. I personally find the discussion tiresome and poorly argued, but I don't think there's a low-time-investment way to moderate on that basis, at least not yet. Maybe check back in 3 years and LLMs will be at that point, but not for the time being.

All that said, I am not a mod, nor am I volunteering to spend the amount of time it would take to be a mod, so ultimately the decision should be made by the people who are putting in that time and effort.

The topic of this thread isn't "the evidence we have about the history of World War II", it's "internal discussion and navel gazing about what norms we want to have in this community", which is a topic of endless interest on this site. A similar thing happens on any thread that mentions Aella.

The constant debates between the Napoleon deniers and their opponents are sucking all the air out of the room. What do you do?

Containment thread? It worked pretty well for covid, when covid stuff was sucking all the air out of the room.