
Culture War Roundup for the week of May 6, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


you haven't produced any reasoning as to why regulation isn't a slippery slope while I can point to the development of essentially any technology since 1940 to affirm it.

I don't actually see how your argument here is supposed to function. Can you spell it out for me?

You seem ready to argue elsewhere in this thread that the very idea of the slope being slippery is ridiculous and unfounded

Nope; literally never did that. Please don't waste our time strawmanning me.

what is your positive theory of the interaction of regulation and innovation, does it have any limiting principle and how does it maintain the innovation cycle and competition in the face of the interests that inevitably act on it?

I think there is often a general sense of a regulation-innovation tradeoff. It happens in different ways in different places, and it's often area-specific, many times in ways that you might not expect. It's a really tough problem, so I'm generally in favor of fewer regulations, especially when they're not reasonably well-tied to a specific, serious problem. I think that a lot of the time, you can maintain the innovation cycle and competition by being careful and hopefully as light-touch as possible with regulation. Some examples would be that if (and this is a big if, because I would actually disagree with the ends) you want to reduce carbon emissions from power plants or noxious emissions from tailpipes, it's better to do things like set output targets and let the innovation cycle and competition figure out how to solve the problem rather than mandate specific technological solutions that must be adopted for the rest of time, no questions asked. Of course, this is an easy example, and many situations can pose more difficult problems; I'm probably not going to have the answer to them all off the top of my head.

This requirement seems mostly focused on some of the most egregious practices, and it appears that they at least try to leave open the possibility that people can come to the table with innovative solutions to accomplish the "aspirational text" (as gattsuru put it), even if it wasn't a solution that they specifically identified. It may be that we have some other big breakthroughs in the field of network security that make some of these line items look ridiculous in hindsight, which is why I would also say that, across regulatory regimes, the effort of hunting for precisely those items that have been deprecated, so they can be promptly chopped, is grossly under-resourced. I lament that this is not done well enough, and it's likely one of the major contributors to the general sense of a regulation-innovation tradeoff.

I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' bankruptcy a slippery slope, but we know that many end up not.

Sure.

Let's look at it from the point of view of Rogers' popular diffusion model.

According to this theory, innovation as a social phenomenon is essentially the communication of randomly appearing new ideas or practices, and the successful diffusion and adoption of those ideas relies on the availability of communication channels between five increasingly risk-averse segments of the population.

So, to be a successfully diffused innovation, an idea has to be adopted successively by:

  1. financially liquid risk takers
  2. educated opinion leaders
  3. average people
  4. skeptics
  5. traditionalists

And the criteria that people use to select what they do and do not adopt are:

  1. Compatibility (does it fit with existing norms?)
  2. Trialability (can I try it before I invest?)
  3. Relative Advantage (how much better is it than alternatives?)
  4. Observability (are the benefits noticeable?)
  5. Simplicity (is it easy to grasp?)
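
The adoption dynamics implied by these segments are often formalized with the closely related Bass diffusion model; here is a minimal numerical sketch (the parameter values p and q are illustrative defaults from the diffusion literature, not anything from this post):

```python
def bass_adoption(p=0.03, q=0.38, m=1.0, steps=100):
    """Cumulative adoption fraction over time under the Bass model.

    p: coefficient of innovation (external influence, the risk takers)
    q: coefficient of imitation (internal influence, the later segments)
    m: market potential (here normalized to 1.0)
    """
    n = 0.0
    series = []
    for _ in range(steps):
        # New adopters this period: spontaneous adoption plus imitation,
        # scaled by the remaining pool of non-adopters.
        n += (p + q * n / m) * (m - n)
        series.append(n)
    return series

curve = bass_adoption()
```

The curve traces the familiar S-shape: slow uptake while only the risk takers adopt, rapid growth as the majority segments imitate, then saturation among the traditionalists.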

Now, having established a model, the question is: how do regulation and the establishment of norms affect these factors? What do good and bad regulation look like in our model, and how likely is each?

Bad regulation is highly normative and costly. It lowers potential compatibility by adding requirements that potential new practices and ideas may not follow. Good regulation makes the existing landscape accessible for new entrants and interoperable with potentially novel uses.

Bad regulation requires high upfront investments that make adoption a risk. Good regulation provides for trials and experiments.

Bad regulation equalizes outcomes so that relative advantage is marginal. Good regulation lets winners win big.

Bad regulation makes the benefits of novel approaches impossible to demonstrate. Good regulation allows them to be obvious.

Bad regulation prevents people from obtaining the skills and knowledge to implement new approaches. Good regulation provides them those skills and knowledge.

Now that we have a good idea of what a good and a bad policy look like, let's make a detour through another area of sociology to figure out what types of regulation institutions and states tend to develop over time.

In Michels' famous study of the eponymous Political Parties, he develops a general sociological theory of institutions that is nowadays mostly known under the name of the "iron law of oligarchy".

This theory provides that any democratic institution (a fortiori any institution) always tends towards oligarchy, because the "tactical and technical necessities" of power condition the playing field so that the people who end up leading any institution are always those most organized and most motivated by power, since that is what makes them uniquely fit to this selection environment.

In short, as Leach summarizes the theory: "Bureaucracy happens. If bureaucracy happens, power rises. Power corrupts."

Trivially, the people who write legislation are participants in an organization, therefore participants in a destined oligarchy and ultimately moved, at the asymptote, by the increase of their power and control and the centralization thereof. This was later described as well by Weber, Galbraith and other sociologists.

As I am sure you understand, this is not the crucible for good regulation in the sense that we are disposed to here.

A centralized bureaucracy needs the world to be understandable to it (see Scott's Seeing Like a State), and it needs it to change slowly enough that the control mechanisms can keep up. This therefore leads to ever increasing lists of demands that require high upfront investments, are specifically designed to prevent powerful new entrants, offer rigid frameworks where it is hard to show the benefits of ideas that upset the established order and ultimately where the availability of financially liquid risk takers is deliberately minimized because the "technokings" are a rival castle.

To give more power, in any quantity, to any human institution is therefore, by necessity, to put oneself on the road to prevent innovation. Because power is threatened by innovation. And as Schmitt rightfully observes, power can stand no rivals.

You may notice the paradox present here, in that innovation itself is such an act. And every rebel is always grooming himself into a new master.

Machiavelli and his followers, myself included, understand that freedom, a fortiori innovation, can only really be found in the cracks of this endless struggle. And thus, I cherish any haven that is beyond the control of power and will defend it, not out of the misplaced sense of self important artistry that you seem to identify, but because it is the only respite that we have against the violence of our condition. And I said as much, in not so long winded a fashion, in my original answer.

Those are plenty interesting general characteristics, and I don't object to much. Now, perhaps attention can be turned toward applying these ideas to the regulation at hand. I spoke to some reasons why I think this document leans much more toward the "good regulation" side than the "bad regulation" side. It seems plenty open to all sorts of innovative products in the IoT space and has very little that seems likely to affect the development of new features and products, except perhaps some edge cases. Any thoughts?

Well my thoughts are that yet again you are deliberately refusing to address the slippery slope and thus that this conversation has come to an end.

What part would you like me to address? You spoke of "good regulation" and "bad regulation" first, so it would make sense to start with that discussion and then see how it flows into other pieces. I'm really not sure what you're demanding of me.

Read the whole post please.

You asked for a more detailed argument and explanation of why the slope is slippery, I gave you one. If you're going to refuse to address the whole argument and revert back to discussing the specifics of the regulation like you seem to enjoy instead of addressing the tendency, it's pointless for me to produce anything because you're never going to discuss why people disagree with you.

Which as has been pointed out to you numerous times now, has nothing to do with the specifics of this particular regulation.

I read the whole post. You started off talking about good regulation and bad regulation, then got to some considerations of slippery slope dynamics. Is the former part just irrelevant? I was going by Grice's maxims in assuming that it did, in fact, have relevance. It seems like it is relevant to the slippery slope dynamics. If it's not relevant, please let me know. I'd especially like to know why you don't actually think it's relevant. Do you think that it actually doesn't matter whether something is a 'good regulation' or a 'bad regulation', and that the only thing that matters is the slippery slope part, where you think that all regulations end up in the bad category? If so, it would have been nice if you had said something along these lines. I was just trying to go through your considerations in a systematic fashion.

What I think, which I have now stated numerous times, including in my first response, is that the specifics of any particular regulation and how good it is aren't relevant, what is relevant is the tendency of organizations that are allowed and legitimated in producing such regulation.

However reasonable a given rule is, it is of no consequence if it enables unreasonable rules to be made and if unreasonable rule makers necessarily outcompete reasonable rule makers.

Let us grant, for the sake of argument, that this regulation is a good one. It has no effect on the validity of the argument or on my support for or opposition to it on the grounds I have stated. Evaluating it alone by these criteria is therefore pointless.

Ok, so "read the whole post" means "ignore the first two thirds and just read the last third". Got it. Violates Grice's maxims and makes you a bad communication entity, but got it.

However reasonable a given rule is, it is of no consequence if it enables unreasonable rules to be made and if unreasonable rule makers necessarily outcompete reasonable rule makers.

Ok, so flesh out those "ifs". In your long post, you spoke about power rising up to take power. In this context, it sounds like what you're worried about cashes out in terms of regulatory capture/crony capitalism: that powerful corporate interests will rise to impose unreasonable regulations to form barriers to entry. This is a totally plausible thing to happen, and perhaps we could consider some reasoning for when this is likely to happen/not happen, and how damaging it is likely to be in a particular domain. Plausibly, this has something to do with the regulations in question or the regulatory bodies in question, or something. Or is this truly just a long-form way of restating, "Once we've crossed epsilon, the worst conclusion is inevitable"?


I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' bankruptcy a slippery slope, but we know that many end up not.

Nobody is arguing that "the moment any regulation is in place, it is inevitable that we will slide all the way down the slippery slope of increasing regulation and all innovation in that industry will die". The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope. That chance may be worth it, if the step is small and the benefit of the regulation is large, but in the case of the entirety of ETSI EN 303 645 (not just section 5.1 in isolation), I don't think that's the case, and I certainly don't think it's a slam-dunk that it's worth the cost.

Section 5.1, "You are not allowed to use a default password on a network interface as the sole means of authorization for the administrative functions of an IoT device", if well-implemented, is probably such a high-benefit, low-risk regulation.
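
For concreteness, one way a manufacturer might satisfy a no-universal-default-password provision is to generate an independent random credential per device at factory-provisioning time; a minimal sketch (the device-ID scheme and 16-character password policy are made up for illustration, not taken from the standard):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_credential(device_id: str, length: int = 16) -> dict:
    """Generate an independent random initial password for one device."""
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return {"device_id": device_id, "initial_password": password}

# Each unit that rolls off the line gets its own credential, so a leaked
# firmware image or one compromised device reveals nothing about the rest.
batch = [provision_credential(f"bulb-{i:04d}") for i in range(3)]
```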

Section 5.4.1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

They double down on the idea that manufacturers can make something robust to physical access in section 5.4.2, "where a hard-coded unique per device identity is used in a device for security purposes, it shall be implemented in such a way that it resists tampering by means such as physical, electrical or software."

And then there's perplexing stuff like 5.6.4: "where a debug interface is physically accessible, it shall be disabled in software." Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port? I would guess not, but I'd also guess that if I were in the UK, the legal team at my company would be very unhappy if I just went with my guess without consulting them.

And that's really the crux of the issue: introducing regulation like this means that companies now have to make a choice between exposing themselves to legal risks, making dumb development decisions based on the most conservative possible interpretation of the law, or involving the legal department way more frequently in development decisions.

Nobody is arguing

I present to you: nobody.

The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope.

This is a vastly better argument, but one that wouldn't allow us to then simply reject any continued discussion, just because we've 'declared' slippery slope and observed that we're epsilon on it. For example, one might ask about the underlying reason for why it increases the chance that we will slide down it? The answer could take many forms, which may be more or less convincing for whether it does, indeed, increase the chance. See here for some examples, and feel free to click through for any specific sub-topics.

Section 5.4.1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case. To be honest, this is a clear sign that you've drunk the tech press kool aid and are pretty out in whacko land from where most serious tech experts are on this issue. Like, they literally tell you what standards are acceptable; it doesn't make any sense to concoct an argument for why it's AKSHUALLY impossible to satisfy the requirement.

And then there's perplexing stuff like 5.6.4: "where a debug interface is physically accessible, it shall be disabled in software." Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

I present to you: nobody.

... I see a lot of you arguing that The_Nybbler believes that giving an inch here is a bad idea because they think that a tiny regulation will directly kill innovation, while The_Nybbler is arguing that there's no particular reason for the regulators who introduced this legislation to stop at only implementing useful regulations that pass cost-benefit analysis, and that the other industries we see do seem to have vastly overreaching regulators, and so a naive cost-benefit analysis on a marginal regulation which does not factor in the likely-much-larger second-order effects is useless (though @The_Nybbler do correct me if I'm wrong about this, and you think introducing regulation would be bad even if the first-order effects of regulation were positive and there was some actually-credible way of ensuring that the scope of the regulation was strictly limited).

Honestly I think both of you could stand to focus a bit more on explaining your own positions and less on arguing against what you believe the other means, because as it stands it looks to me like a bunch of statements about what the other person believes, like "you argue that the first-order effects of the most defensible part of this regulation are bad, but you can't support that" / "well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case.

Quoting the examples:

Example 1: The root keys involved in authorization and access to licensed radio frequencies (e.g. LTE-m cellular access) are stored in a UICC.

Ok, fair enough, I can see why you would want to prevent users from accessing these particular secrets on the device they own (because, in a sense, they don't own this particular bit). Though I contend that the main "security" benefit of these is fear of being legally slapped around under CFAA.

Example 2: A remote controlled door-lock using a Trusted Execution Environment (TEE) to store and access the sensitive security parameters.

Seems kinda pointless. If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house. And if a remote attacker has the ability to read the flash storage because they have gained the ability to execute arbitrary code, they can presumably just directly send the outputs which unlock the door without mucking about with the secrets at all.

Example 3: A wireless thermostat stores the credentials for the wireless network in a tamper protected microcontroller rather than in external flash storage.

What's the threat model we're mitigating here, such that the benefit of mitigating that threat is worth the monetary and complexity cost of requiring an extra component on e.g. every single adjustable-color light bulb sold?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

On examination, I misread, and you are correct about what the document says.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money." That seems worse, not better. What's the threat model this is supposed to be defending against? Is this a good way of defending against this threat model?

If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house.

[and similar examples given as reasons to discount 5.4-1]

Sure, if there are other threats, folks should mitigate those, too. You seem to be under the impression that if this document doesn't spell out precisely every detail for how to make every aspect of a device perfectly secure from all threats, then it's completely useless. That's nonsensical. It would be silly to try to include in this type of document requirements for physically securing door locks. This is just focused on the cyber part, and it's just focused on at least doing the bones-simple basics. Put at least some roadblocks in front of script kiddies. Full "real" security is obviously harder than just the basics, and I can't imagine it would be easy or really even plausible to regulate our way to that. So, we sort of have to say, "At least do the basics," to hopefully cut out some of the worst behavior, and then we still have to hope that the unregulated part of the market even tries to deal with the other aspects.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money."

Not at all. They made no such normative statements. They're saying that IF the manufacturer includes a debug interface that they intend to not be a user interface, then they should shut it off. They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user. But if they're going to do that, they probably need to at least think about the fact that it's accessible, rather than just "forget" to turn it off.

Searching "debug interface", I see three places:

The first is on page 10, in section 3.1(Definition of terms, symbols and abbreviations: Terms)

debug interface: physical interface used by the manufacturer to communicate with the device during development or to perform triage of issues with the device and that is not used as part of the consumer-facing functionality

EXAMPLE: Test points, UART, SWD, JTAG.

The second is on page 20, in section 5.6 (Cyber security provisions for consumer IoT: Minimize exposed attack surfaces)

Provision 5.6-4 Where a debug interface is physically accessible, it shall be disabled in software.

EXAMPLE 5: A UART serial interface is disabled through the bootloader software on the device. No logon prompt and no interactive menu is available due to this disabling.

The third is on page 32, in Table B.1: Implementation of provisions for consumer IoT security, where, at the bottom of the table, there is a "conditions" section, and "13) a debug interface is physically accessible" is the 13th such condition:

Provision 5.6-4 M C (13)

For reference

M C: the provision is a mandatory requirement and conditional

NOTE: Where the conditional notation is used, this is conditional on the text of the provision. The conditions are provided at the bottom of the table with references provided for the relevant provisions to help with clarity.

So, to my read, the provision is mandatory, conditional on the product having a debug interface at all.

"But maybe they just meant that debug interfaces can't unintentionally be left exposed, and it should be left to the company to decide whether the benefits of leaving a debug interface open are worthwhile", you might ask. But we have an example of what it looks like when ETSI wants to say "the company should not accidentally leave this open", and it looks like

Provision 5.6-3 Device hardware should not unnecessarily expose physical interfaces to attack.

Physical interfaces can be used by an attacker to compromise firmware or memory on a device. "Unnecessarily" refers to the manufacturer's assessment of the benefits of an open interface, used for user functionality or for debugging purposes.

Provision 5.6-4 has a conspicuous absence of the word "unnecessarily" or any mention of things like the manufacturer's assessment of the benefits of an open interface.

So coming back to

They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user.

Can you state where exactly in the document it states this, such that someone developing a product could point it out to the legal team at their company?

I mean, just read them again, more slowly. 5.6-3 says that the company needs to at least think about leaving physical interfaces open. They can choose to do so, so long as they assess that there are benefits to the user. But their choice here is to either consider it consumer-facing or manufacturer-only. It is their choice, but they have to pick one, so they can't pretend like, "Oh, that's supposed to be manufacturer-only, so we don't have to worry about securing it," while also forgetting to turn it off before they ship.

Then, suppose you have a physical interface: is it a "debug interface" or not? From the definition, it is only a "debug interface" if the manufacturer has determined that it is not part of the consumer-facing functionality. So, if they choose to make it accessible to the user (as per above, making a conscious choice about the matter), it is not a "debug interface", and 5.6-4 simply does not apply, because the device does not have a "debug interface". But if they choose to say that it's manufacturer-only, then it is a "debug interface", and they have to turn it off before they ship.
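
That reading can be sketched as a tiny decision function (this is my paraphrase of the argument above, not text from the standard):

```python
def is_debug_interface(consumer_facing: bool) -> bool:
    """Per the quoted definition, a physical interface is a 'debug
    interface' only if it is not part of consumer-facing functionality."""
    return not consumer_facing

def satisfies_5_6_4(physically_accessible: bool,
                    consumer_facing: bool,
                    disabled_in_software: bool) -> bool:
    """Provision 5.6-4 binds only where a debug interface is physically
    accessible; a declared consumer-facing port falls outside its scope."""
    if physically_accessible and is_debug_interface(consumer_facing):
        return disabled_in_software
    return True

# A USB-C port declared consumer-facing may stay enabled; a
# manufacturer-only UART must be shut off in the shipped firmware.
```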

It's actually very well put together. I've seen many a regulation that is light-years more confusing.

That's a nice legal theory you have there.

Let's say you're an engineer at one such company, and you want to expose a UART serial interface to allow the device you're selling to be debuggable and modifiable for the subset of end-users who know what they're doing. You say "this is part of the consumer-facing functionality". The regulator comes back and says "ok, where's the documentation for that consumer-facing functionality" and you say "we're not allowed to share that due to NDAs, but honest, this completely undocumented interface is part of the intended consumer-facing functionality".

How do you expect that to go over with the regulator? Before that, how do you expect the conversation with the legal department at your company to go when you tell them that's your plan for what to tell the regulator if they ask?

I don't see any documentation requirement for user interfaces. But it would seem that they are required to put on the table that they are intending for it to be a user interface. "We have assessed that this interface provides user benefits, e.g., debugging." Simple.


"well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

In support of this interpretation:

https://www.themotte.org/post/995/culture-war-roundup-for-the-week/210060?context=8#context (whole thing)

https://www.themotte.org/post/995/culture-war-roundup-for-the-week/209894?context=8#context ("Maybe their little subculture will change.")

https://www.themotte.org/post/995/culture-war-roundup-for-the-week/209881?context=8#context ("coloring inside the lines")

Not once in there did I say anything about it becoming an over-regulated morass. You can change your culture enough to do the trivial fucking basics without becoming an over-regulated morass.

If your idea is to change the culture of tinkerers, then I must withdraw what I said about you, and conclude you're not interested in reasonable regulations at all, but rather are getting off on imposing your views on others / are seething that so many people have managed to escape you for so long.

Fair enough. If there is literally no way to change the culture to something that doesn't have trivially-hackable default passwords on billions of devices with anything other than unreasonable regulations, if this is honestly the dichotomy that you think exists in the world, then I guess I have to throw my lot in with the unreasonable regulations folks. But if you can come up with any plausible way to change the culture enough so that we don't have a spigot of trivially-hackable devices with default passwords on them, and your method is anything other than 'unreasonable regulation', I will jump to your side immediately. Nybbler has already committed to the claim that this is a complete impossibility, that the only options are "a culture that churns out trivially-hackable devices with default passwords" and "unreasonable regulations". Do you embrace this position, that those are the only two options?

If there is literally no way to change the culture to something that doesn't have trivially-hackable default passwords on billions of devices

The approach I outlined earlier, which you called reasonable, was to regulate mass-produced end-user consumer goods, and let people who build stuff on their own, or otherwise are reasonably expected to know what they're getting into, have a large degree of freedom. There wasn't a word there about changing anyone's culture; in fact, the whole approach is designed to let everyone keep their culture the way they like it.

if this is honestly the dichotomy that you think exists in the world

I don't think it does, but I think the things you are saying here strongly imply that trivially hackable default passwords are just an excuse for you to destroy a culture you hate.


Once you've changed your culture from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented", you've already done a vast amount of damage. Even if the regulations themselves aren't too onerous.

Can you propose a way that we could change the culture to "do the trivial basic shit so we don't have billions of abhorrently bad products permeating all of our networks"? I'm wide open to ideas you have for how to do this without making anyone check any boxes, but it seems a little unlikely that they won't have to somehow come up with a culture that at least considers having a box for "is this thing not trivially insecure against a handful of the most basic mistakes that everyone has known about for years?"
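For concreteness, here is the kind of "trivial basic" fix being argued over: instead of shipping every unit with the same default password, a manufacturer can derive a unique credential per device at provisioning time and print it on the label. This is only an illustrative sketch; the function name, secret, and derivation scheme are my own invention, not anything proposed in the thread:

```python
import hmac
import hashlib

def provision_password(factory_secret: bytes, serial: str, length: int = 12) -> str:
    """Derive a unique per-device password from the device serial.

    Printed on the device label at manufacture time, so there is no
    shared default credential for an attacker to harvest. The factory
    secret never leaves the provisioning line.
    """
    digest = hmac.new(factory_secret, serial.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

# Hypothetical secret; in practice this would live in the provisioning
# line's key store, not in source code.
secret = b"example-factory-secret"

# Two devices from the same batch get unrelated passwords.
pw_a = provision_password(secret, "SN-000001")
pw_b = provision_password(secret, "SN-000002")
assert pw_a != pw_b
```

The point is that the marginal cost is one extra step on the provisioning line, which is why "no universal default passwords" is usually cited as the floor for reasonable regulation rather than the slippery slope.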

No, you're asking for an impossibility. You can't have one culture which is both open to new ideas and dedicated to checking boxes.


The inevitable increase of regulation once a regulatory framework is in place is part of it. Another part is that merely having a regulatory framework transforms your industry from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented". Once the principals know they're going to be put out of business or go to jail for not following the regs or not having the docs for following the regs, the whole development process is going to get bureaucratized to produce those docs. This both directly makes development much slower and more tedious, and drives the sort of people who do innovative work out of the field (because they didn't get into the field to sit in meetings where you discuss whether the regulatory requirement referenced in SSDD paragraph 2.0.2.50 is properly related to SDD paragraph 3.1.2, ICD paragraph 4.1.2.5, and STP paragraph 6.6.6, which lines of code implement SDD paragraph 3.1.2, and whether the SIP properly specifies the update procedures).

Another part is that merely having a regulatory framework transforms your industry from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented" [...] They didn't get into the field to sit in meetings where you discuss whether the SSDD paragraph 2.0.2.50 is properly related to the SDD paragraph 3.1.2, the ICD paragraph 4.1.2.5, and the STP paragraph 6.6.6, which lines of code implement SDD paragraph 3.1.2, and to make sure the SIP properly specifies the update procedures

Is this the bitter voice of experience of someone who has worked on software for the financial industry?

and drives the sort of people who do innovative work out of the field

In my experience, companies that operate in compliance-heavy industries that also have hard technical challenges frequently are able to retain talented developers who hate that kind of thing, either by outsourcing to Compliance-As-A-Service companies (Stripe, Avalara, Workday, DocuSign, etc) or by paying somewhat larger amounts of money to developers who are willing to do boring line-of-business stuff (hi). Though at some point most of your work becomes compliance, so if you don't have enough easily compartmentalized difficult technical problems the "offload the compliance crap" strategy stops working. I know some brilliant people work at Waymo, which has a quite high compliance burden but also some incredibly crunchy technical problems. On the flip side, I can't imagine that e.g. ADP employs many of our generation's most brilliant programmers.

Is this the bitter voice of experience of someone who has worked on software for the financial industry?

Not financial, but the meetings and the acronyms (though not the specific paragraph numbers) are real.

In my experience, companies that operate in compliance-heavy industries that also have hard technical challenges frequently are able to retain talented developers who hate that kind of thing, either by outsourcing to Compliance-As-A-Service companies (Stripe, Avalara, Workday, DocuSign, etc) or by paying somewhat larger amounts of money to developers who are willing to do boring line-of-business stuff (hi).

This works when the regulations target parts of the product that can be isolated from the technical challenges, but not when they can't (as in e.g. aircraft). But I can understand the bitter envy of software people felt by someone in a field where a good year means finding that you can tweak the radius of the trailing edge of the winglet by 1mm, save an average of a pound of fuel per Atlantic crossing, and only have to go through an abbreviated aerodynamic design review.