
Culture War Roundup for the week of May 6, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I disagree with your assessment of what "being capable of operating" entails, as we have gone over already.

We discussed shale fracking. Now SpaceX, Ozempic, the tons of examples of financial innovation Matt Levine gives, self-driving cars (where the hold-up is the tech, not the regulation). The list goes on and on. I see nothing in your comment that comes anywhere near justifying the claim that we can simply declare this 'gone over already'. If anything, you just dropped it, because your position didn't go anywhere.

Let's make sure we're on the same page here, so that we are at least confident that we're both actually ready to engage the slippery slope question honestly, without leaving room for a retreat in this direction. Are other industries capable of operating with some amount of regulation? Not, "Is there a general sense of a regulation-innovation tradeoff?" We agree that there is. I mean the straightforward statement that many other industries are capable of operating with some amount of regulation. Are you going to stick with the position that this is an outlandish Bailey? Or is it simply a true fact about the world, and we can shift the discussion toward slippery slopes?

Now SpaceX

Careful. You might be using it as an example of the disasters lack of regulation will bring, before you know it.

I think that would be a clear case of malicious regulation, which is an entirely different class of problem. That is to say, if we were discussing something like laws about business records fraud or campaign finance, we'd talk generally about how it generates friction in business processes or has some potential to chill some amount of speech around the edges, and that would be a totally valid discussion with real tradeoffs. But I think it would be an entirely different conversation than talking specifically about Trump being maliciously prosecuted in NY; that has about jack-all to do with real tradeoffs in the space of business records fraud law or campaign finance law; it is purely about malicious actors reaching for literally any tool they can find to hit someone over the head with.

I think that would be a clear case of malicious regulation

I might end up having to do a walk of shame around here, and self-flagellate about how I mistreated Elon, but I think that SpaceX is going to be seen as an example of "move fast and break things" being applied where it doesn't belong.

I guess "lack of regulation" isn't the right term, because there's been some bizarre political decisions in the process.

Another way to engage with it might be, "if SpaceX fails in the future, where do we expect 'overregulation' to rank on the scale of expected causes?"

I would rank it pretty high.

I don't know if I agree.

I might end up eating my words, but there's a decent chance that after a series of underwhelming Starship launches, New Glenn ends up going straight to Mars on its first attempt. They both operate in the same regulatory environment, so if that happens I'd say it's down to how each company is run.

Let's take a look at those survivors then.

Shale fracking

Illegal in Europe at large.

SpaceX

Currently being sued for falling afoul of the contradictions between ITAR and the CRA.

Ozempic

Took three years to change the label of a drug that would never have been approved if they had to label it from scratch.

Finance

Most financial innovation is currently happening outside of regulation.

Self-driving cars

Technically very hard indeed, but I'm willing to bet they'll also become very hard legally once they start inevitably running people over.

On the whole, it seems hard to argue that these innovations are examples of regulation being compatible with, let alone fostering, innovation. Rather, they seem to exist despite it.

I'm willing to have the charity to shake on "there is a general sense of a regulation-innovation tradeoff". This is true: the more regulation, the less innovation as a general rule, with some exceptions.

As for the second part of the argument, you haven't produced any reasoning as to why regulation isn't a slippery slope while I can point to the development of essentially any technology since 1940 to affirm it. From the dishwasher to the machine gun.

You seem ready to argue elsewhere in this thread that the very idea of the slope being slippery is ridiculous and unfounded, and here you're dodging. I think that is bad faith, and that you've done nothing but project objections to your antagonism onto those who criticize it here. That isn't just unconvincing rhetoric, it's a waste of our time.

So instead let's actually do something productive and establish your position definitively: what is your positive theory of the interaction between regulation and innovation, does it have any limiting principle, and how does it maintain the innovation cycle and competition in the face of the interests that inevitably act on it?

you haven't produced any reasoning as to why regulation isn't a slippery slope while I can point to the development of essentially any technology since 1940 to affirm it.

I don't actually see how your argument here is supposed to function. Can you spell it out for me?

You seem ready to argue elsewhere in this thread that the very idea of the slope being slippery is ridiculous and unfounded

Nope; literally never did that. Please don't waste our time strawmanning me.

what is your positive theory of the interaction between regulation and innovation, does it have any limiting principle, and how does it maintain the innovation cycle and competition in the face of the interests that inevitably act on it?

I think there is often a general sense of a regulation-innovation tradeoff. It happens in different ways in different places, and it's often area-specific, many times in ways that you might not expect. It's a really tough problem, so I'm generally in favor of fewer regulations, especially when they're not well tied to a specific, serious problem. I think that a lot of the time, you can maintain the innovation cycle and competition by being careful and as light-touch as possible with regulation. For example, if (and this is a big if, because I would actually disagree with the ends) you want to reduce carbon emissions from power plants or noxious emissions from tailpipes, it's better to set output targets and let the innovation cycle and competition figure out how to solve the problem than to mandate specific technological solutions that must be adopted for the rest of time, no questions asked. Of course, this is an easy example, and many situations pose more difficult problems; I'm probably not going to have the answer to them all off the top of my head.

This requirement seems mostly focused on some of the most egregious practices, and it appears that they at least try to leave open the possibility that people can come to the table with innovative solutions to accomplish the "aspirational text" (as gattsuru put it), even if it wasn't a solution that they specifically identified. It may be that we have some other big breakthroughs in the field of network security that make some of these line items look ridiculous in hindsight, which is why I would also say that, across regulatory regimes, the effort to hunt down items that have become obsolete so they can be promptly chopped is grossly under-resourced. I lament that this is not done well enough, and it's likely one of the major contributors to the general sense of a regulation-innovation tradeoff.

I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' a slippery slope, but we know that many slopes end up not being slippery.

Sure.

Let's look at it from the point of view of Rogers' popular diffusion model.

According to this theory, innovation as a social phenomenon is essentially the communication of randomly appearing new ideas or practices, and the successful diffusion and adoption of those ideas rely on the availability of communication channels between five increasingly risk-averse segments of the population.

So to be a successfully diffused innovation, an idea has to be adopted successively by:

  1. financially liquid risk takers
  2. educated opinion leaders
  3. average people
  4. skeptics
  5. traditionalists

And the criteria that people use to select what they do and do not adopt are:

  1. Compatibility (does it fit with existing norms?)
  2. Trialability (can I try it before I invest?)
  3. Relative Advantage (how much better is it than alternatives?)
  4. Observability (are the benefits noticeable?)
  5. Simplicity (is it easy to grasp?)
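
To make the model concrete, here is a toy sketch of it in code. Everything here is invented for illustration (the thresholds, the weights, the idea of collapsing the five criteria into one mean score); it is just the shape of the argument, not a calibrated model:

```python
# Toy sketch of Rogers-style diffusion: an innovation must clear each
# adopter segment in order, and later segments are more risk-averse.
# All numbers are invented for illustration.

SEGMENTS = [
    ("innovators", 0.2),       # financially liquid risk takers
    ("early adopters", 0.35),  # educated opinion leaders
    ("early majority", 0.5),   # average people
    ("late majority", 0.65),   # skeptics
    ("laggards", 0.8),         # traditionalists
]

CRITERIA = ["compatibility", "trialability", "relative_advantage",
            "observability", "simplicity"]

def diffusion_reach(scores: dict) -> list:
    """Return the segments an innovation diffuses through.

    `scores` maps each criterion to a 0..1 rating; the innovation
    reaches a segment only if its mean score clears that segment's
    risk-aversion threshold (and every earlier segment was reached).
    """
    mean = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    reached = []
    for name, threshold in SEGMENTS:
        if mean < threshold:
            break
        reached.append(name)
    return reached

# An idea that is easy to try and fits existing norms goes deep...
print(diffusion_reach({"compatibility": 0.9, "trialability": 0.8,
                       "relative_advantage": 0.7, "observability": 0.8,
                       "simplicity": 0.7}))
# ...while regulation that raises upfront costs (lower trialability,
# lower compatibility) strands it with the early adopters.
print(diffusion_reach({"compatibility": 0.2, "trialability": 0.1,
                       "relative_advantage": 0.7, "observability": 0.8,
                       "simplicity": 0.6}))
```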

Now that we have a model, the question is: how do regulation and the establishment of norms affect these factors? What do good and bad regulation look like in this model, and how likely is each?

Bad regulation is highly normative and costly. It lowers potential compatibility by adding requirements that potential new practices and ideas may not follow. Good regulation makes the existing landscape accessible for new entrants and interoperable with potentially novel uses.

Bad regulation requires high upfront investments that make adoption a risk. Good regulation provides for trials and experiments.

Bad regulation equalizes outcomes so that relative advantage is marginal. Good regulation lets winners win big.

Bad regulation makes the benefits of novel approaches impossible to demonstrate. Good regulation allows them to be obvious.

Bad regulation prevents people from obtaining the skills and knowledge to implement new approaches. Good regulation provides them those skills and knowledge.

Now that we have a good idea of what a good and a bad policy look like, let's make a detour through another area of sociology to figure out what types of regulation institutions and states tend to develop over time.

In Michels' famous study of the eponymous Political Parties, he develops a general sociological theory of institutions that is nowadays mostly known under the name of the "iron law of oligarchy".

This theory provides that any democratic institution (a fortiori, any institution) always tends toward oligarchy, because the "tactical and technical necessities" of power condition the playing field so that the people who end up leading any institution are always those most organized and most motivated by power, which makes them uniquely fit for this selection environment.

In short, as Leach summarizes the theory: "Bureaucracy happens. If bureaucracy happens, power rises. Power corrupts."

Trivially, the people who write legislation are participants in an organization, therefore participants in a destined oligarchy and ultimately moved, at the asymptote, by the increase of their power and control and the centralization thereof. This was later described as well by Weber, Galbraith and other sociologists.

As I am sure you understand, this is not the crucible for good regulation in the sense we laid out above.

A centralized bureaucracy needs the world to be understandable to it (see Scott's Seeing Like a State), and it needs the world to change slowly enough that the control mechanisms can keep up. This leads to ever-increasing lists of demands that require high upfront investments, are specifically designed to prevent powerful new entrants, offer rigid frameworks where it is hard to show the benefits of ideas that upset the established order, and ultimately minimize the availability of financially liquid risk takers, because the "technokings" are a rival castle.

To give more power, in any quantity, to any human institution is therefore, by necessity, to put oneself on the road to preventing innovation. Because power is threatened by innovation. And as Schmitt rightly observes, power can stand no rivals.

You may notice the paradox present here, in that innovation itself is such an act. And every rebel is always grooming himself into a new master.

Machiavelli and his followers, myself included, understand that freedom, a fortiori innovation, can only really be found in the cracks of this endless struggle. And thus I cherish any haven that is beyond the control of power, and will defend it, not out of the misplaced sense of self-important artistry that you seem to identify, but because it is the only respite we have against the violence of our condition. And I said as much, in not so long-winded a fashion, in my original answer.

Those are plenty interesting general characteristics, and I don't object to much. Now, perhaps attention can be turned toward applying these ideas to the regulation at hand. I spoke to some reasons why I think this document leans much more toward the "good regulation" side than the "bad regulation" side. It seems plenty open to all sorts of innovative products in the IoT space and has very little that seems likely to affect the development of new features and products, except perhaps some edge cases. Any thoughts?

Well my thoughts are that yet again you are deliberately refusing to address the slippery slope and thus that this conversation has come to an end.

What part would you like me to address? You spoke of "good regulation" and "bad regulation" first, so it would make sense to start with that discussion and then see how it flows into other pieces. I'm really not sure what you're demanding of me.

Read the whole post please.

You asked for a more detailed argument and explanation of why the slope is slippery, I gave you one. If you're going to refuse to address the whole argument and revert back to discussing the specifics of the regulation like you seem to enjoy instead of addressing the tendency, it's pointless for me to produce anything because you're never going to discuss why people disagree with you.

Which, as has been pointed out to you numerous times now, has nothing to do with the specifics of this particular regulation.

I read the whole post. You started off talking about good regulation and bad regulation, then got to some considerations of slippery slope dynamics. Is the former part just irrelevant? I was going by Grice's maxims in assuming that it did, in fact, have relevance. It certainly seems relevant to the slippery slope dynamics. If it's not relevant, please let me know; I'd especially like to know why you think it isn't. Do you think that it actually doesn't matter whether something is a 'good regulation' or a 'bad regulation', and that the only thing that matters is the slippery slope part, where you think that all regulations end up in the bad category? If so, it would have been nice if you had said something along those lines. I was just trying to go through your considerations in a systematic fashion.


I reject the concept that as soon as epsilon regulation of an industry is put into place, it necessarily and logically follows that there is a slippery slope that results in innovation dying. I think you need at least some argument further. It's easy to just 'declare' a slippery slope, but we know that many slopes end up not being slippery.

Nobody is arguing that "the moment any regulation is in place, it is inevitable that we will slide all the way down the slippery slope of increasing regulation and all innovation in that industry will die". The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope. That chance may be worth it, if the step is small and the benefit of the regulation is large, but in the case of the entirety of ETSI EN 303 645 (not just section 5.1 in isolation), I don't think that's the case, and I certainly don't think it's a slam-dunk that it's worth the cost.

Section 5.1, "You are not allowed to use a default password on a network interface as the sole means of authorization for the administrative functions of an IoT device", if well implemented, is probably such a high-benefit, low-risk regulation.
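
To be concrete about how cheap compliance with this one is, here's a minimal sketch of the check (the hashes, names, and the idea of a baked-in default list are illustrative assumptions of mine, not the standard's mandated mechanism):

```python
# Sketch of a 5.1-style gate: refuse to expose network-facing admin
# functions while the password is still a factory default.

import hashlib

# sha256 digests of the factory defaults baked into this firmware image
FACTORY_DEFAULT_HASHES = {
    hashlib.sha256(b"admin").hexdigest(),
    hashlib.sha256(b"12345678").hexdigest(),
}

def admin_interface_allowed(current_password: str) -> bool:
    """One check, at the one place admin functions go on the network."""
    digest = hashlib.sha256(current_password.encode()).hexdigest()
    return digest not in FACTORY_DEFAULT_HASHES

assert not admin_interface_allowed("admin")      # still default: refuse
assert admin_interface_allowed("correct horse")  # user-set: allow
```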

Section 5.4-1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

They double down on the idea that manufacturers can make something robust to physical access in section 5.4-2: "where a hard-coded unique per device identity is used in a device for security purposes, it shall be implemented in such a way that it resists tampering by means such as physical, electrical or software."

And then there's perplexing stuff like 5.6-4: "where a debug interface is physically accessible, it shall be disabled in software." Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port? I would guess not, but I'd also guess that if I were in the UK, the legal team at my company would be very unhappy if I just went with my guess without consulting them.

And that's really the crux of the issue: introducing regulation like this means that companies now have to choose between exposing themselves to legal risk, making dumb development decisions based on the most conservative possible interpretation of the law, or involving the legal department far more frequently in development decisions.

Nobody is arguing

I present to you: nobody.

The argument is, instead, that adding a regulation increases the chance that we will slide down that slippery slope.

This is a vastly better argument, but one that wouldn't allow us to simply reject any continued discussion just because we've 'declared' a slippery slope and observed that we're epsilon on it. For example, one might ask about the underlying reason why adding a regulation increases the chance that we will slide down the slope. The answer could take many forms, which may be more or less convincing as to whether it does, indeed, increase the chance. See here for some examples, and feel free to click through for any specific sub-topics.

Section 5.4-1, "sensitive security parameters in persistent storage shall be stored securely by the device," seems a bit more likely to be a costly provision, and IMO one that misunderstands how hardware security works (there is no such thing as robust security against an attacker with physical access).

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case. To be honest, this is a clear sign that you've drunk the tech press kool aid and are pretty out in whacko land from where most serious tech experts are on this issue. Like, they literally tell you what standards are acceptable; it doesn't make any sense to concoct an argument for why it's AKSHUALLY impossible to satisfy the requirement.
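
And to be concrete about what "stored securely" looks like in practice, here's the shape of a compliant design. The `SecureElement` class is a hypothetical stand-in for a UICC/TEE/secure-element API, and the XOR "cipher" is a placeholder marking where the hardware operation would happen; real hardware would use an authenticated cipher and never release the key:

```python
# Sketch of 5.4-1's intent: the long-term secret lives in tamper-
# resistant hardware; ordinary flash holds only wrapped material.

import os
from dataclasses import dataclass, field

@dataclass
class SecureElement:
    """Hypothetical stand-in for hardware that holds a key and never releases it."""
    _key: bytes = field(default_factory=lambda: os.urandom(32))

    def wrap(self, secret: bytes) -> bytes:
        # Placeholder for the hardware's authenticated encryption.
        return bytes(a ^ b for a, b in zip(secret, self._key))

    def unwrap(self, blob: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(blob, self._key))

# Non-compliant: credentials written to external flash in the clear.
flash_plaintext = b"ssid=home-iot;psk=hunter2"

# 5.4-1-style: flash holds only the wrapped blob, so dumping it yields
# nothing useful without the key parked in the secure element.
se = SecureElement()
flash_wrapped = se.wrap(b"ssid=home-iot;psk=hunter2")
assert se.unwrap(flash_wrapped) == b"ssid=home-iot;psk=hunter2"
```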

And then there's perplexing stuff like 5.6-4: "where a debug interface is physically accessible, it shall be disabled in software." Does this mean that if you sell a color-changing light bulb, and the bulb has a USB-C port, you're not allowed to expose logs across the network and instead have to expose them only over the USB-C port?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

I present to you: nobody.

... I see a lot of you arguing that The_Nybbler believes that giving an inch here is a bad idea because they think that a tiny regulation will directly kill innovation, while The_Nybbler is arguing that there's no particular reason for the regulators who introduced this legislation to stop at only implementing useful regulations that pass cost-benefit analysis, and that the other industries we see do seem to have vastly overreaching regulators, and so a naive cost-benefit analysis on a marginal regulation which does not factor in the likely-much-larger second-order effects is useless (though @The_Nybbler do correct me if I'm wrong about this, and you think introducing regulation would be bad even if the first-order effects of regulation were positive and there was some actually-credible way of ensuring that the scope of the regulation was strictly limited).

Honestly I think both of you could stand to focus a bit more on explaining your own positions and less on arguing against what you believe the other means, because as it stands it looks to me like a bunch of statements about what the other person believes, like "you argue that the first-order effects of the most defensible part of this regulation are bad, but you can't support that" / "well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

IMO, it shows that you misunderstand how these things work. They're not saying "secure against a nation state decapping your chip". They actually refer to ways that persistent storage can be generally regarded as secure, even if you can imagine an extreme case.

Quoting the examples:

Example 1: The root keys involved in authorization and access to licensed radio frequencies (e.g. LTE-m cellular access) are stored in a UICC.

Ok, fair enough. I can see why you would want to prevent users from accessing these particular secrets on the device they own (because, in a sense, they don't own this particular bit). Though I contend that the main "security" benefit of these is fear of being legally slapped around under the CFAA.

Example 2: A remote controlled door-lock using a Trusted Execution Environment (TEE) to store and access the sensitive security parameters.

Seems kinda pointless. If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house. And if a remote attacker has the ability to read the flash storage because they have gained the ability to execute arbitrary code, they can presumably just directly send the outputs which unlock the door without mucking about with the secrets at all.

Example 3: A wireless thermostat stores the credentials for the wireless network in a tamper protected microcontroller rather than in external flash storage.

What's the threat model we're mitigating here, such that the benefit of mitigating that threat is worth the monetary and complexity cost of requiring an extra component on e.g. every single adjustable-color light bulb sold?

H-what? What are you even talking about? This doesn't even make any sense. The standard problem here is that lots of devices have debug interfaces that are supposed to only be used by the manufacturer (you would know this if you read the definitions section), yet many products are getting shipped in a state where anyone can just plug in and do whatever they want to the device. This is just saying to not be a retard and shut it off if it's not meant to be used by the user.

On examination, I misread, and you are correct about what the document says.

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money." That seems worse, not better. What's the threat model this is supposed to be defending against? Is this a good way of defending against this threat model?

If an attacker can read the flash storage on your door lock, presumably that means they've already managed to detach the door lock from your door, and can just enter your house.

[and similar examples given as reasons to discount 5.4-1]

Sure, if there are other threats, folks should mitigate those, too. You seem to be under the impression that if this document doesn't spell out precisely every detail for how to make every aspect of a device perfectly secure from all threats, then it's completely useless. That's nonsensical. It would be silly to try to include in this type of document requirements for physically securing door locks. This is just focused on the cyber part, and it's just focused on at least doing the bones-simple basics. Put at least some roadblocks in front of script kiddies. Full "real" security is obviously harder than just the basics, and I can't imagine it would be easy or really even plausible to regulate our way to that. So, we sort of have to say, "At least do the basics," to hopefully cut out some of the worst behavior, and then we still have to hope that the unregulated part of the market even tries to deal with the other aspects.
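
For a sense of what "script kiddie" means here, the attack that plaintext storage invites is a few lines of scripting over a flash dump. The dump contents below are invented for illustration; real dumps come off cheap SPI flash readers:

```python
# Grep a raw flash dump for credential-shaped strings, i.e. roughly
# what `strings` piped through `grep` would show an attacker.

import re

flash_dump = (
    b"\x00\xff\x13boot=ok\x00ssid=home-iot\x00psk=hunter2\x00"
    b"\xde\xad\xbe\xefapi_key=EXAMPLE-ONLY\x00"
)

for run in re.findall(rb"[ -~]{4,}", flash_dump):  # printable runs
    if run.startswith((b"ssid=", b"psk=", b"api_key=")):
        print(run.decode())
```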

That said, the correct reading then seems to be "users should not be able to debug, diagnose problems with, or repair their own devices which they have physical access to, and which they bought with their own money."

Not at all. They made no such normative statements. They're saying that IF the manufacturer includes a debug interface that they intend to not be a user interface, then they should shut it off. They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user. But if they're going to do that, they probably need to at least think about the fact that it's accessible, rather than just "forget" to turn it off.
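
As a concrete illustration of what that shutting-off looks like on, say, a Linux-based device: it's a boot-time decision, not a hardware change. The provisioning-flag path here is invented; the systemd unit name is the conventional one for a serial console:

```python
# Sketch of a 5.6-4-style boot step: if the UART is designated
# manufacturer-only, don't bring up a console on it in shipped units.

import pathlib
import subprocess

MANUFACTURING_FLAG = pathlib.Path("/factory/debug-enabled")  # invented path

def configure_uart_console() -> None:
    if MANUFACTURING_FLAG.exists():
        return  # in-factory triage: leave the serial console up
    # Shipped configuration: no logon prompt, no interactive menu on
    # the physically accessible UART.
    subprocess.run(
        ["systemctl", "disable", "--now", "serial-getty@ttyS0.service"],
        check=False,
    )

configure_uart_console()
```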

Searching "debug interface", I see three places:

The first is on page 10, in section 3.1 (Definition of terms, symbols and abbreviations: Terms)

debug interface: physical interface used by the manufacturer to communicate with the device during development or to perform triage of issues with the device and that is not used as part of the consumer-facing functionality

EXAMPLE: Test points, UART, SWD, JTAG.

The second is on page 20, in section 5.6 (Cyber security provisions for consumer IoT: Minimize exposed attack surfaces)

Provision 5.6-4 Where a debug interface is physically accessible, it shall be disabled in software.

EXAMPLE 5: A UART serial interface is disabled through the bootloader software on the device. No logon prompt and no interactive menu is available due to this disabling.

The third is on page 32, in Table B.1: Implementation of provisions for consumer IoT security, where, at the bottom of the table, there is a "conditions" section, and "13) a debug interface is physically accessible" is the 13th such condition:

Provision 5.6-4 M C (13)

For reference

M C the provision is a mandatory requirement and conditional

NOTE: Where the conditional notation is used, this is conditional on the text of the provision. The conditions are provided at the bottom of the table with references provided for the relevant provisions to help with clarity.

So, to my read, the provision is mandatory, conditional on the product having a debug interface at all.

"But maybe they just meant that debug interfaces can't unintentionally be left exposed, and it should be left to the company to decide whether the benefits of leaving a debug interface open are worthwhile", you might ask. But we have an example of what it looks like when ETSI wants to say "the company should not accidentally leave this open", and it looks like

Provision 5.6-3 Device hardware should not unnecessarily expose physical interfaces to attack.

Physical interfaces can be used by an attacker to compromise firmware or memory on a device. "Unnecessarily" refers to the manufacturer's assessment of the benefits of an open interface, used for user functionality or for debugging purposes.

Provision 5.6-4 has a conspicuous absence of the word "unnecessarily" or any mention of things like the manufacturer's assessment of the benefits of an open interface.

So coming back to

They're still completely free and clear to have any interfaces for debugging or anything else that are meant to be usable by the user.

Can you state where exactly in the document it states this, such that someone developing a product could point it out to the legal team at their company?

I mean, just read them again, more slowly. 5.6-3 says that the company needs to at least think about leaving physical interfaces open. They can choose to do so, so long as they assess that there are benefits to the user. But their choice here is to either consider it consumer-facing or manufacturer-only. It is their choice, but they have to pick one, so they can't pretend like, "Oh, that's supposed to be manufacturer-only, so we don't have to worry about securing it," while also forgetting to turn it off before they ship.

Then suppose you have a physical interface: is it a "debug interface" or not? From the definition, it is only a "debug interface" if the manufacturer has determined that it is not part of the consumer-facing functionality. So, if they choose to make it accessible to the user (as per above, making a conscious choice about the matter), it is not a "debug interface", and 5.6-4 simply does not apply, because the device does not have a "debug interface". But if they choose to say that it's manufacturer-only, then it is a "debug interface", and they have to turn it off before they ship.
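
The decision procedure is simple enough to write down. A sketch, with the classification collapsed to a single flag (the class and field names are mine, not the standard's):

```python
# The 5.6-3 / 5.6-4 logic as described above: the manufacturer must
# classify each physical interface, and only the manufacturer-only
# choice triggers the mandatory software disable.

from dataclasses import dataclass

@dataclass
class PhysicalInterface:
    name: str
    consumer_facing: bool  # the manufacturer's explicit classification
    assessed_benefit: str  # 5.6-3: why it is open at all, if it is

def provision_5_6_4_applies(iface: PhysicalInterface) -> bool:
    # Per the definitions section, an interface is a "debug interface"
    # only if it is NOT part of consumer-facing functionality.
    return not iface.consumer_facing

usb_c = PhysicalInterface("USB-C", consumer_facing=True,
                          assessed_benefit="user log export")
uart = PhysicalInterface("UART test pads", consumer_facing=False,
                         assessed_benefit="factory triage only")

assert not provision_5_6_4_applies(usb_c)  # user interface: 5.6-4 N/A
assert provision_5_6_4_applies(uart)       # debug interface: disable it
```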

It's actually very well put together. I've seen many a regulation that is light-years more confusing.


"well you want to turn software into an over-regulated morass similar to what aerospace / pharma / construction have become".

In support of this interpretation:

https://www.themotte.org/post/995/culture-war-roundup-for-the-week/210060?context=8#context (whole thing)

https://www.themotte.org/post/995/culture-war-roundup-for-the-week/209894?context=8#context ("Maybe their little subculture will change.")

https://www.themotte.org/post/995/culture-war-roundup-for-the-week/209881?context=8#context ("coloring inside the lines")

Not once in there did I say anything about it becoming an over-regulated morass. You can change your culture enough to do the trivial fucking basics without becoming an over-regulated morass.

If your idea is to change the culture of tinkerers, then I must withdraw what I said about you, and conclude you're not interested in reasonable regulations at all, but rather are getting off on imposing your views on others / are seething that so many people have managed to escape you for so long.


Once you've changed your culture from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented", you've already done a vast amount of damage. Even if the regulations themselves aren't too onerous.


The inevitable increase of regulation once a regulatory framework is in place is part of it. Another part is that merely having a regulatory framework transforms your industry from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented". Once the principals know they're going to be put out of business or go to jail for not following the regs or not having the docs for following the regs, the whole development process is going to get bureaucratized to produce those docs. This both directly makes development much slower and more tedious, and drives the sort of people who do innovative work out of the field (because they didn't get into the field to sit in meetings where you discuss whether the regulatory requirement referenced in SSDD paragraph 2.0.2.50 is properly related to the SDD paragraph 3.1.2, the ICD paragraph 4.1.2.5, and the STP paragraph 6.6.6, which lines of code implement SDD paragraph 3.1.2, and to make sure the SIP properly specifies the update procedures).

Another part is that merely having a regulatory framework transforms your industry from "building cool stuff" to "checking regulatory boxes and making sure all the regulation-following is documented" [...] They didn't get into the field to sit in meetings where you discuss whether the SSDD paragraph 2.0.2.50 is properly related to the SDD paragraph 3.1.2, the ICD paragraph 4.1.2.5, and the STP paragraph 6.6.6, which lines of code implement SDD paragraph 3.1.2, and to make sure the SIP properly specifies the update procedures

Is this the bitter voice of experience of someone who has worked on software for the financial industry?

and drives the sort of people who do innovative work out of the field

In my experience, companies that operate in compliance-heavy industries that also have hard technical challenges frequently are able to retain talented developers who hate that kind of thing, either by outsourcing to Compliance-As-A-Service companies (Stripe, Avalara, Workday, DocuSign, etc) or by paying somewhat larger amounts of money to developers who are willing to do boring line-of-business stuff (hi). Though at some point most of your work becomes compliance, so if you don't have enough easily compartmentalized difficult technical problems the "offload the compliance crap" strategy stops working. I know some brilliant people work at Waymo, which has a quite high compliance burden but also some incredibly crunchy technical problems. On the flip side, I can't imagine that e.g. ADP employs many of our generation's most brilliant programmers.

Is this the bitter voice of experience of someone who has worked on software for the financial industry?

Not financial, but the meetings and the acronyms (though not the specific paragraph numbers) are real.

In my experience, companies that operate in compliance-heavy industries that also have hard technical challenges frequently are able to retain talented developers who hate that kind of thing, either by outsourcing to Compliance-As-A-Service companies (Stripe, Avalara, Workday, DocuSign, etc) or by paying somewhat larger amounts of money to developers who are willing to do boring line-of-business stuff (hi).

This works when the regulations target parts of the product that can be isolated from the technical challenges, but not (as in e.g. aircraft) when they can't. But I can understand the bitter envy towards software people from someone in a field where a good year means finding that you can tweak the radius of the trailing edge of the winglet by 1 mm, save an average of a pound of fuel on an Atlantic crossing, and only have to go through an abbreviated aerodynamic design review.