
Culture War Roundup for the week of May 6, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house.

[and similar examples given as reasons to discount 5.4-1]

Sure, if there are other threats, folks should mitigate those, too. You seem to be under the impression that if this document doesn't spell out precisely how to make every aspect of a device perfectly secure from all threats, then it's completely useless. That's nonsensical. It would be silly to try to include requirements for physically securing door locks in this type of document. It's focused on the cyber part, and on at least doing the bare-bones basics: put at least some roadblocks in front of script kiddies. Full "real" security is obviously harder than the basics, and I can't imagine it would be easy or even plausible to regulate our way to that. So we sort of have to say, "At least do the basics," to hopefully cut out some of the worst behavior, and then we still have to hope that the unregulated part of the market even tries to deal with the other aspects.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money."

Not at all. They made no such normative statements. They're saying that IF the manufacturer includes a debug interface that they intend to not be a user interface, then they should shut it off. They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user. But if they're going to do that, they probably need to at least think about the fact that it's accessible, rather than just "forget" to turn it off.

Searching "debug interface", I see three places:

The first is on page 10, in section 3.1 (Definition of terms, symbols and abbreviations: Terms)

debug interface: physical interface used by the manufacturer to communicate with the device during development or to perform triage of issues with the device and that is not used as part of the consumer-facing functionality

EXAMPLE: Test points, UART, SWD, JTAG.

The second is on page 20, in section 5.6 (Cyber security provisions for consumer IoT: Minimize exposed attack surfaces)

Provision 5.6-4 Where a debug interface is physically accessible, it shall be disabled in software.

EXAMPLE 5: A UART serial interface is disabled through the bootloader software on the device. No logon prompt and no interactive menu is available due to this disabling.

The third is on page 32, in Table B.1: Implementation of provisions for consumer IoT security, where, at the bottom of the table, there is a "conditions" section, and "13) a debug interface is physically accessible" is the 13th such condition:

Provision 5.6-4 M C (13)

For reference:

M C: the provision is a mandatory requirement and conditional

NOTE: Where the conditional notation is used, this is conditional on the text of the provision. The conditions are provided at the bottom of the table with references provided for the relevant provisions to help with clarity.

So, to my read, the provision is mandatory, conditional on the product having a debug interface at all.
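On that reading, the "M C (13)" notation reduces to a simple conditional check. A minimal sketch of that logic (the function names and structure are mine for illustration; the standard defines no such API):

```python
# Illustrative model of Table B.1's "mandatory conditional" notation for
# provision 5.6-4. Condition (13): "a debug interface is physically accessible".

def provision_5_6_4_applies(debug_interface_physically_accessible: bool) -> bool:
    """The provision is mandatory, but only when condition (13) holds."""
    return debug_interface_physically_accessible

def complies_with_5_6_4(debug_interface_physically_accessible: bool,
                        disabled_in_software: bool) -> bool:
    # No physically accessible debug interface: the provision imposes nothing.
    if not provision_5_6_4_applies(debug_interface_physically_accessible):
        return True
    # Otherwise: "it shall be disabled in software".
    return disabled_in_software
```

The only escape hatch, on this reading, is for the condition itself to be false.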

"But maybe they just meant that debug interfaces can't unintentionally be left exposed, and it should be left to the company to decide whether the benefits of leaving a debug interface open are worthwhile", you might object. But we have an example of what it looks like when ETSI wants to say "the company should not accidentally leave this open", and it looks like

Provision 5.6-3 Device hardware should not unnecessarily expose physical interfaces to attack.

Physical interfaces can be used by an attacker to compromise firmware or memory on a device. "Unnecessarily" refers to the manufacturer's assessment of the benefits of an open interface, used for user functionality or for debugging purposes.

Provision 5.6-4 has a conspicuous absence of the word "unnecessarily" or any mention of things like the manufacturer's assessment of the benefits of an open interface.

So coming back to

They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user.

Can you state where exactly in the document it states this, such that someone developing a product could point it out to the legal team at their company?

I mean, just read them again, more slowly. 5.6-3 says that the company needs to at least think about leaving physical interfaces open. They can choose to do so, so long as they assess that there are benefits to the user. But their choice here is to either consider it consumer-facing or manufacturer-only. It is their choice, but they have to pick one, so they can't pretend like, "Oh, that's supposed to be manufacturer-only, so we don't have to worry about securing it," while also forgetting to turn it off before they ship.

Then, suppose you have a physical interface, is it a "debug interface" or not a "debug interface"? From the definition, it is only a "debug interface" if the manufacturer has determined that it is not part of the consumer-facing functionality. So, if they choose to make it accessible to the user (as per above, making a conscious choice about the matter), it is not a "debug interface", and 5.6-4 simply does not apply, because the device does not have a "debug interface". But if they choose to say that it's manufacturer-only, then it's a "debug interface", and they have to turn it off before they ship.
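The two-step argument above can be sketched as code. This is a hypothetical illustration of the reasoning, not anything the standard itself specifies; the interface model and field names are my own:

```python
# Sketch of the classification argument: per the section 3.1 definition, a
# physical interface is a "debug interface" only if it is NOT part of the
# consumer-facing functionality, and 5.6-4 constrains only debug interfaces.

from dataclasses import dataclass

@dataclass
class PhysicalInterface:
    name: str
    consumer_facing: bool       # the manufacturer's choice under 5.6-3
    disabled_in_software: bool

def is_debug_interface(iface: PhysicalInterface) -> bool:
    # Manufacturer-only by the manufacturer's own designation.
    return not iface.consumer_facing

def satisfies_5_6_4(iface: PhysicalInterface) -> bool:
    # A consumer-facing interface is not a "debug interface",
    # so 5.6-4 simply does not apply to it.
    if not is_debug_interface(iface):
        return True
    # Manufacturer-only interfaces must be disabled before shipping.
    return iface.disabled_in_software
```

Under this model, a UART deliberately left usable by customers passes 5.6-4 trivially, while the same UART designated manufacturer-only and left enabled fails it.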

It's actually very well put together. I've seen many a regulation that is light-years more confusing.

That's a nice legal theory you have there.

Let's say you're an engineer at one such company, and you want to expose a UART serial interface to allow the device you're selling to be debuggable and modifiable for the subset of end-users who know what they're doing. You say "this is part of the consumer-facing functionality". The regulator comes back and says "ok, where's the documentation for that consumer-facing functionality" and you say "we're not allowed to share that due to NDAs, but honestly, this completely undocumented interface is part of the intended consumer-facing functionality".

How do you expect that to go over with the regulator? Before that, how do you expect the conversation with the legal department at your company to go when you tell them that's your plan for what to tell the regulator if they ask?

I don't see any documentation requirement for user interfaces. But it would seem that they are required to put on the table that they are intending for it to be a user interface. "We have assessed that this interface provides user benefits, e.g., debugging." Simple.

Again, that interpretation is nice if correct. Can you point to anything in the document which supports the interpretation that saying "We have assessed that leaving this debug interface provides user benefit because the debug interface allows the user to debug" would actually be sufficient justification?

My mental model is "it's probably fine if you know people at the regulatory agency, and probably fine if you don't attract any regulatory scrutiny, and likely not to be fine if the regulator hates your guts and wants to make an example of you, or if the regulator's golf buddy is an executive at your competitor". If your legal team approves it, I expect it to be on the basis of "the regulator has not historically gone after anyone who put anything even vaguely plausible down in one of these, so just put down something vaguely plausible and we'll be fine unless the regulator has it out for us specifically". But if anything goes as long as it's a vaguely plausible answer at something resembling the question on the form, and as long as it's not a blatant lie about your product where you provably know that you're lying, I don't expect that to help very much with IoT security.

And yes, I get that "the regulator won't look at you unless something goes wrong, and if something does go wrong they'll look through your practices until they find something they don't like" is how most things work. But I think that's a bad thing, and the relative rarity of that sort of thing in tech is why tech is one of the few remaining productive and relatively-pleasant-to-work-in industries. You obviously do sometimes need regulation. But in a lot of cases, probably including this one, the rules already on the books would be sufficient if they were consistently enforced. In fact they are rarely enforced, and the conclusion people come to is "the current regulations aren't working and so we need to add more regulations" rather than "we should try being more consistent at sticking to the rules that are already on the books". So you end up with even more vague regulations that most companies make token attempts to cover their asses on but otherwise ignore, and you land in a state where full compliance with the rules is impractical but also generally not expected, until someone pisses off a regulator, at which point their behavior becomes retroactively unacceptable.

Edit: As a concrete example of the broader thing I'm pointing at, HIPAA is an extremely strict standard, and yet in practice hospital systems are often laughably insecure. Adding even more requirements on top of HIPAA would not help.

Can you point to anything in the document which supports the interpretation that saying "We have assessed that leaving this debug interface provides user benefit because the debug interface allows the user to debug" would actually be sufficient justification?

That quote would not be correct. You have not "assessed that leaving a debug interface..." because it is not a debug interface, by the definition of "debug interface" given in the document. You have assessed that leaving that physical interface provides user benefit, because the physical interface allows the user to debug. You have determined that the physical interface in question is "part of the consumer-facing functionality". That is right there in the definition, in the document. You have made that assessment directly following the instruction given in 5.6-3, in the document.

So in your interpretation, 5.6-4 could be replaced by "list the communication interfaces your product has. For each of them, either ensure the interface is disabled or state that the interface is intentionally enabled because a nonzero number of your customers want it to be enabled in a nonzero number of situations".

I think that would be fine, if so, but I don't understand why provisions 5.6-3 and 5.6-4 would be phrased the way they are if that were the case.

I don't see what's awkward at all about the phrasing. They want people to reduce attack surface, so they say that they should not expose interfaces thoughtlessly (3). Obviously, they can thoughtfully choose to expose an interface, for precisely the reasons you present, which is why they give plenty of latitude for manufacturers to assess that they would like to expose it. It seems like manufacturers have wide latitude, too. There doesn't seem to be any real boundary on their assessment; they just have to be like, "Yeah, we thought about it."

Then, moreover, they know that there have been many high-profile instances of products shipping with a trivially attackable interface exposed, and when it gets attacked, the manufacturer ignores it and just says some bullshit about how it was only ever meant to be a manufacturer-only debugging interface, so they're not responsible and not going to do anything about it. It's a bullshit move by bad-actor manufacturers who don't care. So, while they're well aware that manufacturers sometimes want a manufacturer-only interface for perfectly fine reasons, all (4) is doing is saying, "Yeah, if that's what you're going for here, don't be an idiot; disable it before you ship it."

This is a very natural way of doing things. Honestly, I probably would not have done as good a job if I had tried to put this set of ideas together from scratch myself.
