We have found that by changing our software/system architecture we have also inadvertently changed our organisation structure.
- Inverse Conway Law or just Roy's Law ;-)
Before, we had four cross-functional teams working on a single application. Everyone felt responsible for it, worked overtime to fix bugs, etc., and we had good communication between the teams.
But after we switched to microservices, each team became responsible for just a part of the system: their microservice(s). Whenever we had an outage, one team was left to fix it; the others just went home. They stopped talking to each other because they no longer shared any code or issues... they stopped having lunch together. Some things got much worse in the organisation, all sparked by a 'simple' architectural change: moving to microservices.
That reminds me of a place I used to work at, where initially we had DBAs embedded in the teams. Then that was changed, the DBAs were all grouped together, and all hell broke loose. They were always having meetings, sending out emails dictating this and that, and had very little direct communication with the teams they were supposed to be supporting.
I ended up leaving during the peak of all of this, but in an exit interview a director asked me about the problems it was causing.
> They stopped talking to each other because they didn't share any code, no issues... they stopped having lunch together
Was that accompanied by any growth in company size? I've found that this happens when a group grows past about 15 people even if the structure doesn't change.
This outcome could be considered a feature of microservices: by abstracting the functionality into more tightly-contained units, failures are more isolated.
Sounds like the organization needs to do other things to keep people from getting siloed, though that gets increasingly difficult at scale. Well-defined SLAs (along with monitoring and reporting of those SLAs) are also necessary so that microservice failures can be understood in the right context.
I have seen new systems/software implemented just to isolate or remove parts of an organization. Worse, I have seen it done when the existing system/software was just fine.
> They stopped talking to each other because they didn't share any code, no issues... they stopped having lunch together, some things got way worse in the organisation, all sparked by a 'simple' architectural change, moving to microservices.
Moving to microservices is anything but a simple change. Your experience is one example of why microservices are not automatically a good idea. Normally, the advice is that they might be a good fit if you have isolated teams to begin with, and for different reasons.
> “Before we had four cross functional teams, working on a single application, everyone felt responsible for it, worked overtime to fix bugs etc, we had good communication between the teams.”
This actually sounds very dysfunctional, but dressed up with the kind of positive PR spin that product/executive management wants, the kind that appeals to anyone who believes “cross-functional” is anything more than a buzzword.
Would love to know what the engineers thought about working in that environment (which sounds like a monolithic, too-many-cooks, zero specialization situation likely negatively affecting career growth & routine skill building).
Hyrum's law is highly relevant to anyone who makes software libraries.
"With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."
I.e. any internal implementation details that leak as behavior of the API become part of the API. Cf. Microsoft's famous "bug for bug" API compatibility through versions of Windows.
They might become part of the API in a superficial sense, but if you broadcast clearly that undocumented behaviors are subject to change, then users can decide whether they want to accept that risk, and they won't have a valid complaint if behavior not covered by the contractual API changes or surprises them.
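To make Hyrum's Law concrete, here's a minimal Python sketch (the library and the caller are hypothetical, not from any real project). The documented contract only says which tags come back, not in what order, but the incidental sorted order leaks out and a caller quietly depends on it:

```python
# Hypothetical library function. The docstring (the "contract") promises
# which tags are returned, but deliberately leaves the order unspecified.
def find_tags(text):
    """Return the #tags found in `text`. Order is not part of the contract."""
    tags = {word for word in text.split() if word.startswith("#")}
    return sorted(tags)  # incidental detail: v1 just happens to sort

# A caller observes v1's behavior and starts relying on it:
def first_tag_alphabetically(text):
    return find_tags(text)[0]  # silently assumes sorted output

print(first_tag_alphabetically("#zebra says hi to #apple"))  # prints "#apple"
# If v2 switches to, say, insertion order, this caller breaks even though
# the documented contract never changed. The leaked ordering *is* the API now.
```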
It's very common for Conway's law to be regarded as some kind of warning, as if it's something to be "defended" against. It's not. Conway's law is the most basic and important principle to creating software at scale. A better way of stating Conway's law is that if you want to design a large, sophisticated system, the first step is to design the organization that will implement the system.
Organizations that are too isolated will tend to create monoliths.
Organizations that are too connected and too flat will tend to create sprawling spaghetti systems. These two cases are not mutually exclusive. You can have sprawling spaghetti monoliths. This is also one of the dangers to having one team work on several microservices; those microservices will tend to intermingle in inappropriately complex ways. Boundaries are critical to system health, and boundaries can be tuned by organizing people. Don't worry about Conway's law, leverage it.
A personal rule of thumb I derived from the ninety-ninety rule is this: "Before starting a project, ask yourself if you would still do it if you knew it would cost twice as much and take twice as long as you expect. Because it probably will."
Murphy's Law has electrical engineering roots, and I have a fun anecdote.[0] My wife is in electromechanical engineering and I'm in computer science, so we would work on projects together since we make a good team. In college I was working with my wife on one of her projects using force transducers. The damn things kept breaking at the worst times, so we kept calling it Murphy's Law. After a while we looked it up: it turns out Murphy was working with transducers when he coined the phrase.[1] So I have this little back-pocket anecdote about the time I got to use Murphy's Law in its original context, which I can bring out at times just like this.
Everyone always conveniently forgets Price's Law (derived from Lotka's Law). It states that half of the work is done by the square root of the number of employees.
Interestingly, Price's law seems to indicate 10x developers exist because if you have 100 employees, then 10 of them do half of all the work.
This idea is particularly critical when it comes to things like layoffs. If those top performers get scared and leave, or are let go for whatever reason, the business disproportionately suffers. Some economists believe that this kind of brain drain has been a primary cause in the death spiral of some large companies.
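The arithmetic behind the square-root formulation quoted above is easy to sketch in Python (purely illustrative; Price's law is an empirical observation, not an exact formula):

```python
import math

def price_share(n_employees):
    """Fraction of staff who, per Price's law, produce half the output."""
    return math.sqrt(n_employees) / n_employees

for n in (100, 1_000, 10_000):
    core = round(math.sqrt(n))
    print(f"{n:>6} staff: ~{core} people do half the work ({price_share(n):.1%})")
# The productive core grows as sqrt(n), so its *share* of headcount shrinks
# as the company grows: 10% at 100 staff, but only 1% at 10,000.
```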
I have always liked Postel's law (and Jon -- what a great human being he was), but I no longer like it as much as I used to.
The reason it's a really great idea is that it says you should engineer in favor of resilience, which is an important form of robustness. And at the same time, "strict in what you send" means "don't cause trouble for others."
However there are cases where "fail early" is more likely to be the right thing. Here are a few:
1 - Backward compatibility can bite you in the leg. For example, USB Type C (which I love!) can support very high transfer rates but when it can't it will silently fall back. So you could have a 40 Gbps drive connected to a 40 Gbps port on a computer via a cable that only supports USB 2 speeds. It will "work" but maybe not as intended. Is this good, or should it have failed to work (or alerted the user to make a choice) so that the user can go find a better cable?
2 - DWIM ("do what I mean") is inherently unstable. For users that might not be bad (they can see the result and retry), or it might be terrible ("crap, I didn't mean to destroy the whole filesystem").
I see these problems all the time in our own code base, where someone generates some invalidly-formatted traffic which is interpreted one way by their code and a different way by someone else's. Our system is written in at least four languages. We'd be better off being more strict, but some of the languages (Python, JavaScript) are liberal in both what they accept and what they generate.
This aphorism/law was written for the network back when we wrote all the protocol handlers by hand. Now we have so many structured tools and layered protocols this is much less necessary.
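A toy Python sketch of the cross-language problem described above (the services and message format are hypothetical): two parsers are each "liberal" about a stray comma but disagree about what it means, while a strict parser refuses to guess:

```python
# Two services both "liberally" accept an amount containing a comma,
# but disagree on its meaning, so the same bytes decode to two values.

def parse_amount_service_a(text):
    # Service A treats "," as a thousands separator: "1,500" -> 1500.0
    return float(text.replace(",", ""))

def parse_amount_service_b(text):
    # Service B treats "," as a decimal comma: "1,500" -> 1.5
    return float(text.replace(",", "."))

def parse_amount_strict(text):
    # A strict parser fails early on ambiguous input instead of guessing.
    if "," in text:
        raise ValueError(f"ambiguous amount {text!r}: commas not allowed")
    return float(text)

msg = "1,500"
print(parse_amount_service_a(msg))  # prints 1500.0
print(parse_amount_service_b(msg))  # prints 1.5 -- same bytes, other meaning
```

Being strict turns a silent three-orders-of-magnitude disagreement into an immediate, debuggable error at the boundary.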
Being liberal in what you accept has turned out to be a security problem. This is especially so when this maxim is observed in a widely-deployed piece of software, as its permissiveness tends to become the de-facto standard.
I feel like Fonzie's Law (better known as Cunningham's Law) would be a worthwhile inclusion: "The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."
There is an entire poster of funny 'laws of computing' that was created in 1980 by Kenneth Grooms. It's pretty amazing how many of these are completely relevant 40 years later...
It's hard to find the original piece of art, but my uncle had this hanging in his office for a long time, and now it's hanging in mine.
I transcribed it in a gist so I had access to them for copy/paste.
You can have quick-and-dirty for initial release, but it's rarely practical from a maintenance perspective.
A related rule: Design software for maintenance, not initial roll-out, because maintenance is where most of the cost will likely be.
An exception may be a start-up where being first to market is of utmost importance.
Other rules:
Don't repeat yourself: factor out redundancy. However, redundancy is usually better than the wrong abstraction, which often happens because the future is harder to predict than most realize.
And Yagni: You Ain't Gonna Need It: Don't add features you don't yet need. However, make the design with an eye on likely needs. For example, if there's an 80% probability of a need for Feature X, make your code "friendly" to X if it's not much change versus no preparation. Maybe there's a more succinct way to say this.
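A minimal Python sketch of the "wrong abstraction" trade-off mentioned above (all names are made up for illustration): a shared helper accretes flags as call sites diverge, while two "redundant" functions stay independently simple:

```python
# The "DRY at all costs" version: one helper grows a flag per caller,
# so every call site must understand every flag.
def format_name_wrong(first, last, *, initials=False, last_first=False,
                      upper=False):
    if initials:
        name = f"{first[0]}. {last[0]}."
    elif last_first:
        name = f"{last}, {first}"
    else:
        name = f"{first} {last}"
    return name.upper() if upper else name

# The "redundant" alternative: two small functions that can evolve
# independently. A little duplication, but no tangle of flags.
def badge_name(first, last):
    return f"{first} {last}".upper()

def roster_name(first, last):
    return f"{last}, {first}"

print(badge_name("Ada", "Lovelace"))   # prints ADA LOVELACE
print(roster_name("Ada", "Lovelace"))  # prints Lovelace, Ada
```

When the badge and roster requirements later diverge (they usually do), the two plain functions change locally; the flag-laden helper forces every caller to be re-checked.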
1. All software can be simplified.
2. All software has bugs.
Therefore, all software can ultimately be simplified down to a single line that doesn't work.
From what I know, the "*s'" thing works mostly for plural nouns. For singular, it only applies to classical & religious names ending with "s" ("Jesus'", "Archimedes'" etc).
I am not a native English speaker, so I may be completely off. Feel free to rage :)
I like Wiggins' Law (found in [My Heroku Values](https://gist.github.com/adamwiggins/5687294)): if it's hard, cut scope. I'm working on a compiler for my new language, and sometimes I get caught up in the sheer amount of work involved in implementing a new language: I have to write a typechecker, a code generator, a runtime (including GC), a stdlib, etc. But instead of just getting overwhelmed, I'm trying to cut scope and focus on getting a small part working. Even if the code is terrible, even if it's limited in functionality, I just need to get something working.
Hyrum's Law: http://www.hyrumslaw.com
Some collections of laws of software development:
[1] http://www.globalnerdy.com/2007/07/18/laws-of-software-devel...
[2] https://exceptionnotfound.net/fundamental-laws-of-software-d... (not so good but with solid discussions from HN https://news.ycombinator.com/item?id=11574715 )
And more:
[3] https://embeddedartistry.com/blog/2018/8/13/timeless-laws-so... (a whole book titled Timeless Laws of Software Development)
[4] https://www.red-gate.com/simple-talk/opinion/opinion-pieces/...
[5] https://www.netobjectives.com/blogs/some-laws-software-devel...
[6] http://www.methodsandtools.com/archive/softwarelaws.php
[0] I think it is fun. Your mileage may vary.
[1] https://en.wikipedia.org/wiki/Murphy%27s_law
Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can. (Zawinski's Law)
Any social media company will expand until it behaves like a bank; receiving deposits and making loans to customers (not necessarily users).
"Any application that can be written in JavaScript, will eventually be written in JavaScript."
https://gist.github.com/sorahn/905f67acf00d6f2aa69e74a39de65...
(Those pictures were from an ebay auction before I got the actual piece)
An IETF draft arguing that Postel's robustness principle is harmful: https://tools.ietf.org/html/draft-thomson-postel-was-wrong-0...
> Redundancy is bad, but dependencies are worse.
https://yosefk.com/blog/redundancy-vs-dependencies-which-is-...
> Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.
https://stackoverflow.com/questions/876089/who-wrote-this-pr...