This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Nobody is arguing that "the moment any regulation is in place, it is inevitable that we will slide all the way down the slippery slope of increasing regulation and all innovation in that industry will die". The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope. That chance may be worth it, if the step is small and the benefit of the regulation is large, but in the case of the entirety of ETSI EN 303 645 (not just section 5.1 in isolation), I don't think that's the case, and I certainly don't think it's a slam-dunk that it's worth the cost.
Section 5.1, "You are not allowed to use a default password on a network interface as the sole means of authorization for the administrative functions of an IoT device", if well-implemented, is probably such a high-benefit low-risk regulation.
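For concreteness, compliance with that one is roughly a one-function change at provisioning time. A minimal sketch in Python, with purely illustrative names; a real factory line would write the credential into the device and onto the label rather than printing it:

```python
import secrets
import string

def provision_admin_credential(serial_number: str, length: int = 16) -> dict:
    """Generate a unique random admin password for a single device.

    The whole point of the default-password provision is just that no
    two units ship sharing the same credential.
    """
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(length))
    # Illustrative only: in practice this record goes into the device's
    # secure storage and onto the unit's label or packaging.
    return {"serial": serial_number, "admin_password": password}

if __name__ == "__main__":
    print(provision_admin_credential("SN-000123"))
```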
Section 5.4.1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).
They double down on the idea that manufacturers can make something robust to physical access in section 5.4.2, "where a hard-coded unique per device identity is used in a device for security purposes, it shall be implemented in such a way that it resists tampering by means such as physical, electrical or software."
And then there's perplexing stuff like 5.6-4, "where a debug interface is physically accessible, it shall be disabled in software." Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port? I would guess not, but I'd also guess that if I were in the UK, the legal team at my company would be very unhappy if I just went with my guess without consulting them.
And that's really the crux of the issue: introducing regulation like this means that companies now have to choose between exposing themselves to legal risk, making dumb development decisions based on the most conservative possible interpretation of the law, or involving the legal department far more frequently in development decisions.
I present to you: nobody.
This is a vastly better argument, but one that wouldn't allow us to then simply reject any continued discussion just because we've 'declared' slippery slope and observed that we've taken an epsilon-sized step onto it. For example, one might ask about the underlying reason why it increases the chance that we will slide down it. The answer could take many forms, which may be more or less convincing as to whether it does, indeed, increase the chance. See here for some examples, and feel free to click through for any specific sub-topics.
IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case. To be honest, this is a clear sign that you've drunk the tech press kool aid and are pretty out in whacko land from where most serious tech experts are on this issue. Like, they literally tell you what standards are acceptable; it doesn't make any sense to concoct an argument for why it's AKSHUALLY impossible to satisfy the requirement.
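For concreteness, here's a minimal sketch of the sort of thing "stored securely" is generally taken to mean in practice: encrypt the secret before it ever touches flash, and keep the key somewhere harder to reach than the same flash partition. Everything here is illustrative; it assumes the third-party `cryptography` package, and `get_device_key()` is a stand-in for whatever key store (secure element, fused key, TrustZone) the hardware actually has:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def get_device_key() -> bytes:
    # Stand-in: on real hardware this key would come from a secure element
    # or be derived from a fused per-device secret, not from os.urandom().
    return os.urandom(32)

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt a sensitive parameter before writing it to persistent storage."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(key: bytes, blob: bytes) -> bytes:
    """Decrypt a sealed parameter read back from persistent storage."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = get_device_key()
    blob = seal(key, b"wifi_psk=hunter2")
    assert unseal(key, blob) == b"wifi_psk=hunter2"
```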
H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.
... I see a lot of you arguing that The_Nybbler believes that giving an inch here is a bad idea because they think a tiny regulation will directly kill innovation, while The_Nybbler is arguing that there's no particular reason for the regulators who introduced this legislation to stop at only implementing useful regulations that pass cost-benefit analysis, that the other regulated industries we can observe do seem to have vastly overreaching regulators, and that a naive cost-benefit analysis of a marginal regulation which does not factor in the likely-much-larger second-order effects is therefore useless (though @The_Nybbler, do correct me if I'm wrong about this and you think introducing regulation would be bad even if the first-order effects were positive and there was some actually-credible way of ensuring that the scope of the regulation stayed strictly limited).
Honestly I think both of you could stand to focus a bit more on explaining your own positions and less on arguing against what you believe the other means, because as it stands it looks to me like a bunch of statements about what the other person believes, like "you argue that the first-order effects of the most defensible part of this regulation are bad, but you can't support that" / "well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".
Quoting the examples:
Ok, fair enough, I can see why you would want to prevent users from accessing these particular secrets on the device they own (because, in a sense, they don't own this particular bit). Though I contend that the main "security" benefit of these is fear of being legally slapped around under CFAA.
Seems kinda pointless. If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house. And if a remote attacker has the ability to read the flash storage because they have gained the ability to execute arbitrary code, they can presumably just directly send the outputs which unlock the door without mucking about with the secrets at all.
What's the threat model we're mitigating here, such that the benefit of mitigating that threat is worth the monetary and complexity cost of requiring an extra component on e.g. every single adjustable-color light bulb sold?
On examination, I misread, and you are correct about what the document says.
That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money." That seems worse, not better. What's the threat model this is supposed to be defending against? Is this a good way of defending against this threat model?
Sure, if there are other threats, folks should mitigate those, too. You seem to be under the impression that if this document doesn't spell out precisely every detail for how to make every aspect of a device perfectly secure from all threats, then it's completely useless. That's nonsensical. It would be silly to try to include in this type of document requirements for physically securing door locks. This is just focused on the cyber part, and it's just focused on at least doing the bare-bones basics. Put at least some roadblocks in front of script kiddies. Full "real" security is obviously harder than just the basics, and I can't imagine it would be easy or really even plausible to regulate our way to that. So, we sort of have to say, "At least do the basics," to hopefully cut out some of the worst behavior, and then we still have to hope that the unregulated part of the market even tries to deal with the other aspects.
Not at all. They made no such normative statements. They're saying that IF the manufacturer includes a debug interface that they intend to not be a user interface, then they should shut it off. They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user. But if they're going to do that, they probably need to at least think about the fact that it's accessible, rather than just "forget" to turn it off.
Searching "debug interface", I see three places:
The first is on page 10, in section 3.1 (Definition of terms, symbols and abbreviations: Terms)
The second is on page 20, in section 5.6 (Cyber security provisions for consumer IoT: Minimize exposed attack surfaces)
The third is on page 32, in Table B.1: Implementation of provisions for consumer IoT security, where, at the bottom of the table, there is a "conditions" section, and "13) a debug interface is physically accessible" is the 13th such condition:
For reference
So, to my read, the provision is mandatory, conditional on the product having a debug interface at all.
"But maybe they just meant that debug interfaces can't unintentionally be left exposed, and it should be left to the company to decide whether the benefits of leaving a debug interface open are worthwhile", you might ask. But we have an example of what it looks like when ETSI wants to say "the company should not accidentally leave this open", and it looks like
Provision 5.6-4 has a conspicuous absence of the word "unnecessarily" or any mention of things like the manufacturer's assessment of the benefits of an open interface.
So coming back to
Can you state where exactly in the document it states this, such that someone developing a product could point it out to the legal team at their company?
I mean, just read them again, more slowly. 5.6-3 says that the company needs to at least think about leaving physical interfaces open. They can choose to do so, so long as they assess that there are benefits to the user. But their choice here is to either consider it consumer-facing or manufacturer-only. It is their choice, but they have to pick one, so they can't pretend like, "Oh, that's supposed to be manufacturer-only, so we don't have to worry about securing it," while also forgetting to turn it off before they ship.
Then, suppose you have a physical interface, is it a "debug interface" or not a "debug interface"? From the definition, it is only a "debug interface" if the manufacturer has determined that it is not part of the consumer-facing functionality. So, if they choose to make it accessible to the user (as per above, making a conscious choice about the matter), it is not a "debug interface", and 5.6-4 simply does not apply, because the device does not have a "debug interface". But if they choose to say that it's manufacturer-only, then it's a "debug interface", and they have to turn it off before they ship.
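If it helps, here is that same reading as a toy pre-ship check a build pipeline might run. All names and fields are hypothetical; the only point is the decision structure: an interface the manufacturer declares manufacturer-only counts as a "debug interface" and has to be off in the shipped image, while one declared consumer-facing falls outside 5.6-4 entirely (and is instead covered by the 5.6-3 assessment).

```python
from dataclasses import dataclass

@dataclass
class PhysicalInterface:
    name: str
    consumer_facing: bool   # the manufacturer's declared intent
    enabled_in_image: bool  # state in the firmware image being shipped

def preship_violations(interfaces: list[PhysicalInterface]) -> list[str]:
    """Flag manufacturer-only (i.e. 'debug') interfaces left enabled."""
    return [
        f"{iface.name}: manufacturer-only debug interface left enabled"
        for iface in interfaces
        if not iface.consumer_facing and iface.enabled_in_image
    ]

if __name__ == "__main__":
    image = [
        PhysicalInterface("UART0 console", consumer_facing=False, enabled_in_image=True),
        PhysicalInterface("USB-C log port", consumer_facing=True, enabled_in_image=True),
    ]
    for problem in preship_violations(image):
        print("FAIL:", problem)
```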
It's actually very well put together. I've seen many a regulation that is light-years more confusing.
That's a nice legal theory you have there.
Let's say you're an engineer at one such company, and you want to expose a UART serial interface to allow the device you're selling to be debuggable and modifiable for the subset of end-users who know what they're doing. You say "this is part of the consumer-facing functionality". The regulator comes back and says "ok, where's the documentation for that consumer-facing functionality" and you say "we're not allowed to share that due to NDAs, but honest this completely undocumented interface is part of the intended consumer-facing functionality".
How do you expect that to go over with the regulator? Before that, how do you expect the conversation with the legal department at your company to go when you tell them that's your plan for what to tell the regulator if they ask?
I don't see any documentation requirement for user interfaces. But it would seem that they are required to put on the table that they are intending for it to be a user interface. "We have assessed that this interface provides user benefits, e.g., debugging." Simple.
Again, that interpretation is nice if correct. Can you point to anything in the document which supports the interpretation that saying "We have assessed that leaving this debug interface provides user benefit because the debug interface allows the user to debug" would actually be sufficient justification?
My mental model is "it's probably fine if you know people at the regulatory agency, and probably fine if you don't attract any regulatory scrutiny, and likely not to be fine if the regulator hates your guts and wants to make an example of you, or if the regulator's golf buddy is an executive at your competitor". If your legal team approves it, I expect it to be on the basis of "the regulator has not historically gone after anyone who put anything even vaguely plausible down in one of these, so just put down something vaguely plausible and we'll be fine unless the regulator has it out for us specifically". But if anything goes as long as it's a vaguely plausible answer at something resembling the question on the form, and as long as it's not a blatant lie about your product where you provably know that you're lying, I don't expect that to help very much with IoT security.
And yes, I get that "the regulator won't look at you unless something goes wrong, and if something does go wrong they'll look through your practices until they find something they don't like" is how most things work. But I think that's a bad thing, and the relative rarity of that sort of thing in tech is why tech is one of the few remaining productive and relatively-pleasant-to-work-in industries. You obviously do sometimes need regulation, but in a lot of cases, probably including this one, the rules that are already on the books would be sufficient if they were consistently enforced. In practice they are rarely enforced, and the conclusion people come to is "the current regulations aren't working and so we need to add more regulations" rather than "we should try being more consistent at sticking to the rules that are already on the books". So you end up with even more vague regulations that most companies make token attempts to cover their asses on but otherwise ignore, and you end up in a state where full compliance with the rules is impractical but also generally not expected, until someone pisses off a regulator, at which point their behavior becomes retroactively unacceptable.
Edit: As a concrete example of the broader thing I'm pointing at, HIPAA is an extremely strict standard, and yet in practice hospital systems are often laughably insecure. Adding even more requirements on top of HIPAA would not help.
In support of this interpretation:
https://www.themotte.org/post/995/culture-war-roundup-for-the-week/210060?context=8#context (whole thing)
https://www.themotte.org/post/995/culture-war-roundup-for-the-week/209894?context=8#context ("Maybe their little subculture will change.")
https://www.themotte.org/post/995/culture-war-roundup-for-the-week/209881?context=8#context ("coloring inside the lines")
Not once in there did I say anything about it becoming an over-regulated morass. You can change your culture enough to do the trivial fucking basics without becoming an over-regulated morass.
If your idea is to change the culture of tinkerers, then I must withdraw what I said about you, and conclude you're not interested in reasonable regulations at all, but rather are getting off on imposing your views on others / are seething that so many people have managed to escape you for so long.
Fair enough. If there is literally no way to change the culture to something that doesn't have trivially-hackable default passwords on billions of devices with anything other than unreasonable regulations, if this is honestly the dichotomy that you think exists in the world, then I guess I have to throw my lot in with the unreasonable regulations folks. But if you can come up with any plausible way to change the culture enough so that we don't have a spigot of trivially-hackable devices with default passwords on them, and your method is anything other than 'unreasonable regulation', I will jump to your side immediately. Nybbler has already committed to the claim that this is a complete impossibility, that the only options are "a culture that churns out trivially-hackable devices with default passwords" and "unreasonable regulations". Do you embrace this position, that those are the only two options?
The approach I outlined earlier, which you called reasonable, was to regulate mass produced end-user consumer goods, and let people who build stuff on their own, or otherwise are reasonably expected to know what they're getting into, have a large degree of freedom. There wasn't a word there about changing anyone's culture, in fact the whole approach is designed to let everyone keep their culture the way they like it.
I don't think it does, but I think the things you are saying here strongly imply that trivially hackable default passwords are just an excuse for you to destroy a culture you hate.
Nybbler would declare that this is, in fact, changing the culture of people who mass produce end-user consumer goods. That this is the only way, that we have to change their culture. If that is required, I am willing to do it. If you think that we can regulate them so that they don't churn out billions of trivially-hackable devices, without changing their culture, I'm fine with that. But they keep telling me that we can't do that! That we have to change their culture! That that's the only option!
Not at all. I love the 'tinkerer' culture. I love the innovation culture. I love the building new stuff culture. I love coding and coming up with interesting new shit, though my day job is more on the new math side and I'm having less time for coding lately. The culture that I dislike is the "we can keep pumping out trivially-hackable shit because it might be slightly boring to take the basic steps everyone knows and nobody's going to do anything about it" culture.
Once you've changed your culture from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented", you've already done a vast amount of damage. Even if the regulations themselves aren't too onerous.
Can you propose a way that we could change the culture to "do the trivial basic shit so we don't have billions of abhorrently bad products permeating all of our networks"? I'm wide open to ideas you have for how to do this without making anyone check any boxes, but it seems a little unlikely that they won't have to somehow come up with a culture that at least considers having a box for "is this thing not trivially insecure against a handful of the most basic mistakes that everyone has known about for years?"
No, you're asking for an impossibility. You can't have one culture which is both open to new ideas and dedicated to checking boxes.
Ok, I think we've made progress. It is literally impossible for the culture to bother taking the most basic steps to make the billions of devices on our networks not trivially hackable without causing some folks like you to shut down and stop being open to new ideas. These are the stakes as you have presented them. The only question that seems to be available to the general public is what they value.
Different folks can have different values and different answers. I personally fall on the side that I think you're actually just delivering a bullshit threat that is based on a lie. That it's akin to a child swearing that they're going to hold their breath until they die unless you let them have more cookies. No better than saying, "Nice IoT development space you have there; would be a shame if something happened to it." No better than a monopolist swearing that if you let others enter the market, shoddy products will kill everyone. That it's a false dichotomy, propped up just so you can avoid even the smallest iota of boredom, purely in your own self-interest.
But whatever. My opinion on that isn't important. You've laid down the stakes. Society made its choice years ago when California gave the first checkbox. The deed is already done. The only thing left is for us to watch what happens. Does innovation actually die? Do people actually just stop thinking about new ideas, because they might have to not put a default password on their new idea? I guess we'll find out.
The inevitable increase of regulation once a regulatory framework is in place is part of it. Another part is that merely having a regulatory framework transforms your industry from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented". Once the principals know they're going to be put out of business or go to jail for not following the regs, or for not having the docs showing they followed the regs, the whole development process gets bureaucratized to produce those docs. This both directly makes development much slower and more tedious, and drives the sort of people who do innovative work out of the field (because they didn't get into the field to sit in meetings where you discuss whether the regulatory requirement referenced in SSDD paragraph 2.0.2.50 is properly related to SDD paragraph 3.1.2, ICD paragraph 4.1.2.5, and STP paragraph 6.6.6, which lines of code implement SDD paragraph 3.1.2, and whether the SIP properly specifies the update procedures).
Is this the bitter voice of experience of someone who has worked on software for the financial industry?
In my experience, companies that operate in compliance-heavy industries that also have hard technical challenges frequently are able to retain talented developers who hate that kind of thing, either by outsourcing to Compliance-As-A-Service companies (Stripe, Avalara, Workday, DocuSign, etc) or by paying somewhat larger amounts of money to developers who are willing to do boring line-of-business stuff (hi). Though at some point most of your work becomes compliance, so if you don't have enough easily compartmentalized difficult technical problems the "offload the compliance crap" strategy stops working. I know some brilliant people work at Waymo, which has a quite high compliance burden but also some incredibly crunchy technical problems. On the flip side, I can't imagine that e.g. ADP employs many of our generation's most brilliant programmers.
Not financial, but the meetings and the acronyms (though not the specific paragraph numbers) are real.
This works when the regulations target parts of the product that can be isolated from the technical challenges, but not (as in e.g. aircraft) when they can't. But I can understand the bitter envy towards software people of someone who is in a field where a good year means finding that you can tweak the radius of the trailing edge of the winglet by 1mm and save an average of a pound of fuel in an Atlantic crossing and only have to go through an abbreviated aerodynamic design review.