There's been much written about emergent or incidental or evolutionary architectures. All of these descriptors are, of course, euphemisms for "it's a mess, man."
As far as I can tell, less has been written about how these architectures come about. Clearly the key predictor of un-architected systems[1] is simply the classic programmer tendency of writing code before thinking about what you're actually doing. But are there tendencies, implementation patterns, team practices, or other factors that can accelerate the spread of these patterns through a codebase? I like to think there are.
An Example
At a previous company, our core problem was that every customer we dealt with was highly unique. You can think of this in terms of physical description. If your business needed to know the number, location, size, color, shape, height, etcetera of every mole on a person's body, you would immediately need a bunch of other information to contextualize that data. For example, do all people have two arms? On average, likely not, seeing as it's much more common for a person to have 0 arms than 3+.
This is an incredibly contrived and potentially nonsensical example, but the key point is that we found ourselves in a situation where our key need was for an individual engineer to be able to get a new type of data point into the database as quickly as possible. This was the key determinant of our collective ability to ship new features at a certain phase of the company. So we built a system that minimized database migrations, minimized the work needed to update the schema, and made the storage descriptors for a datapoint as flexible as possible.
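To make "as flexible as possible" slightly more concrete, here is a purely hypothetical sketch of the general shape such a system can take. None of these names or types come from the actual system; the point is only that a new kind of data is a new attribute string, not a new table or a migration.

```rust
// Hypothetical sketch (not the real design): every datapoint is a generic
// attribute/value record, so adding a brand-new kind of data needs no schema
// migration, just a new attribute name.
use std::collections::HashMap;

#[derive(Debug)]
enum Value {
    Number(f64),
    Text(String),
    Flag(bool),
}

#[derive(Debug)]
struct Datapoint {
    subject_id: u64,                  // the person being described
    attribute: String,                // e.g. "mole.count"; invented by whoever needs it
    value: Value,
    context: HashMap<String, String>, // free-form qualifiers, e.g. which arm
}

fn main() {
    // An engineer can invent "freckle.density" today and ship it today; that
    // is the individual velocity described above, and also the opening
    // through which under-specified assumptions spread.
    let dp = Datapoint {
        subject_id: 1,
        attribute: "freckle.density".to_string(),
        value: Value::Number(0.4),
        context: HashMap::from([("arm".to_string(), "left".to_string())]),
    };
    println!("{:?}", dp);
}
```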
And it worked swimmingly.
Until it started bogging us down.
Our Problem
The problem is that when you optimize for an individual engineer's velocity, you necessarily are not optimizing for collective velocity. When we had 20-40 engineers, with each engineer owning a relatively complex feature, but still just a feature, this worked fine. When we got to the point where we had teams of 3-8 engineers building and maintaining a single feature, this became a problem.
The shorthand I used to describe this problem was that our architecture made bad assumptions viral[2].
If the Moles Engineer and the Freckles Engineer had each individually architected their own schema for Arms, eventually other people would start to depend on those implementations. The Elbows team might build off the Moles Engineer's Arm, and the Nails team might build off the Freckles Engineer's version. It's not so much that they depended on a bad implementation per se; it's how quickly they could do it, how easily transitive dependencies could stack up, and how many of these cross-purposed, just-shy-of-correct assumptions arose.
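To illustrate with invented types (this is not real code from that codebase), the two models of an Arm might look something like this, with each downstream team quietly inheriting a different assumption:

```rust
// Two "just-shy-of-correct" models of the same concept, each shaped for the
// feature its author cared about.
mod moles {
    // Assumption baked in here: every person has exactly a left and a right arm.
    pub enum Side { Left, Right }
    pub struct Arm { pub side: Side, pub mole_count: u32 }
}

mod freckles {
    // Different assumption: arms are just an indexed list, with no notion of side.
    pub struct Arm { pub index: u8, pub freckle_density: f64 }
}

// The Elbows team transitively inherits the "exactly two arms" assumption...
fn describe_elbow(arm: &moles::Arm) -> String {
    match arm.side {
        moles::Side::Left => format!("left elbow, {} moles nearby", arm.mole_count),
        moles::Side::Right => format!("right elbow, {} moles nearby", arm.mole_count),
    }
}

// ...while the Nails team inherits the "arms are an anonymous list" assumption.
fn describe_nails(arm: &freckles::Arm) -> String {
    format!("nails on arm #{}, freckle density {}", arm.index, arm.freckle_density)
}

fn main() {
    let a = moles::Arm { side: moles::Side::Left, mole_count: 3 };
    let b = freckles::Arm { index: 2, freckle_density: 0.4 };
    println!("{}", describe_elbow(&a));
    println!("{}", describe_nails(&b));
}
```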
This architecture, which for the most part was critical to getting the business into an operable state, was designed around a tool that induced a high degree of virality. Keep in mind, some of this virality was critically useful, until it wasn't. This sort of "good at the time" design is some people's strict definition of an "antipattern", but personally I think that's too negative.
More Generally
I was reminded of this over the last few weeks as I've been dipping my toe into unsafe Rust code. Unsafe Rust code, like async Javascript, defaults to a high degree of virality.
In Javascript, where I have vastly more experience, every call to await must be in an async function, which means calling functions will likely then have to call await, meaning those functions now must be async. Of course, I know that async is just syntactic sugar, but that syntactic sugar can have an impact when someone unwittingly converts a function to async without thinking through all the call sites. I am under the impression that async Rust has a similar sort of behavior.
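Since it's Rust that got me thinking about this again, here's a minimal sketch of that propagation in async Rust, with function names invented for illustration. As far as I can tell, the shape is the same as the Javascript case: one await at the bottom of a call chain pulls async all the way up, until something at the very top finally drives the future.

```rust
// Pretend this is a leaf I/O operation that is inherently async.
async fn fetch_row() -> u64 {
    42
}

// To `.await` fetch_row, this function must itself be `async`...
async fn load_feature() -> u64 {
    fetch_row().await + 1
}

// ...and so must its caller, and its caller's caller, all the way up the stack.
async fn handle_request() -> u64 {
    load_feature().await
}

fn main() {
    // Constructing the future is ordinary synchronous code; actually driving it
    // to completion needs an executor (tokio, futures::executor::block_on, etc.),
    // which is the one place where the chain of `async` finally stops.
    let _pending = handle_request();
}
```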
With unsafe Rust, there's an interesting catch though. You can choose not to declare an overall function as unsafe, even if you use unsafe blocks inside of it. In some ways this would be like wrapping an async Javascript function in a Promise and waiting for then(), but the key difference is that adding a then() changes the behavior of a calling function, whereas unsafe code that isn't called out as such remains unsafe. I think of this ability as a sort of barrier, or moat, allowing you to encapsulate complexity.
I got into a conversation with a colleague about how this might be a bad surprise for developers who inadvertently depend on a library that is internally unsafe but was developed by someone who doesn't have a great handle on writing unsafe code[3]. This capability in some ways creates an illusion of safety. We talked about the alternate reality though, one where there was no way to encapsulate unsafe code and pull the wool over developers' eyes. Where every system, by virtue of needing something like the hostname, would have to declare itself globally unsafe. Net/net, I would argue this would be a worse world. Just because one line is unsafe doesn't mean the next one is. This ability to create moats or cell walls around this sort of highly viral implementation is, if not the backbone of the language, then at least a few vertebrae.
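For the hostname case specifically, here's roughly what that moat can look like. This is a hedged, POSIX-only sketch that assumes the libc crate as a dependency; the single unsafe FFI call is fenced off inside an ordinary safe function, so nothing upstream ever has to mention unsafe.

```rust
use std::ffi::CStr;

/// Safe wrapper around the unsafe gethostname(2) call; callers see a normal
/// function returning Option<String>.
fn hostname() -> Option<String> {
    let mut buf = [0u8; 256];
    // SAFETY: buf is valid for writes of buf.len() bytes for the duration of
    // the call; any non-zero return code is treated as failure.
    let rc = unsafe { libc::gethostname(buf.as_mut_ptr().cast(), buf.len()) };
    if rc != 0 {
        return None;
    }
    // Defensive: guarantee a nul terminator even if the name was truncated.
    *buf.last_mut().unwrap() = 0;
    let name = CStr::from_bytes_until_nul(&buf).ok()?;
    Some(name.to_string_lossy().into_owned())
}

fn main() {
    // An ordinary safe call: the unsafety never leaks into this signature.
    println!("{:?}", hostname());
}
```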
So?
What's the point here?
I am not sure. I just wanted to share a way I frame interactions with our tools. Other things can be viral. Types (and type systems) can be prone to virality. Certain libraries that can be used ad hoc but have too-amenable, too-open internal formats eventually seem to pop up everywhere like weeds.
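As a small, invented example of the type-system flavor of this: once one struct gains a lifetime (or generic) parameter, everything that contains it tends to grow one too.

```rust
// The leaf type borrows data, so it needs a lifetime parameter.
struct RawRecord<'a> {
    bytes: &'a [u8],
}

// This struct didn't "want" a lifetime, but it now needs one...
struct Parsed<'a> {
    raw: RawRecord<'a>,
    checksum: u32,
}

// ...and so does anything that holds a Parsed.
struct Batch<'a> {
    records: Vec<Parsed<'a>>,
}

fn main() {
    let data = vec![1u8, 2, 3];
    let batch = Batch {
        records: vec![Parsed { raw: RawRecord { bytes: &data[..] }, checksum: 6 }],
    };
    println!("{} record(s)", batch.records.len());
}
```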
I am curious whether anyone has a better, more concrete descriptor or way of identifying this phenomenon.