top | item 30708458

The Only Unbreakable Law [video]

115 points | ivank | 4 years ago | youtube.com

98 comments

[+] jchw|4 years ago|reply
I'm fairly conflicted by this, because it's genuinely insightful, but it's also probably overselling itself.

They lean quite hard on the idea that abstraction is inherently bad, and I feel this is a poor choice of words, and perhaps a mistake. Abstractions have costs, in many forms, especially when the abstraction is a poor one.

However… they don’t seem to differentiate between good and bad abstractions. They seem to regard all abstraction as simply unnecessary, only used because our brains cannot deal with the entire problem at once.

I think you could make this argument to some degree, but it breaks down when you start to see where abstraction is worth the cost. As an example, let's say I'm writing a service that needs a key-value store. If I make a simple abstraction for it, with well-defined properties for exactly how it should behave, how data consistency should work, etc., and then implement multiple backends, this is a good abstraction. The reason is that software doesn't have fixed requirements. Some users may be running a small instance of something on their desktop or a NAS or what have you, whereas others may be running on gigantic clusters and would benefit from clustered key-value stores, which are much more difficult to set up for valid, unchangeable reasons, even if we were to get rid of the abstraction and fully integrate a distributed key-value store right into our program.

Also, requirements change over time. Clang could've implemented everything with no abstractions, but when Clang was created it targeted older and fewer versions of programming languages. The abstractions have costs, particularly when they are bad; but having no abstractions would have cost far more, IMO. Extending and reusing software that has little abstraction is very difficult, because there are very few reliable boundaries you can work off of. Adding a new operator in Clang is probably still hard, but I'm sure it would be harder still if all of the abstraction had been folded down instead. You need some kind of abstraction if you want cheap extensibility.

So my conclusion is basically: abstraction is not bad. Libraries are not bad. Engines are not bad. They simply have costs that are not accounted for properly, and may cost more than the value they provide in many cases. Intuitively, we know this; it's basically the knee-jerk reaction software engineers get in a build-vs-buy discussion. You feel the jolt. The library has an amazing feature list, but something tells you it won't be so easy. That's the hidden cost right there.

[+] xelxebar|4 years ago|reply
> They seem to regard all abstraction as simply unnecessary...

I would be surprised if the video author agreed with this statement. My understanding of his point is something more general and almost trivially true, i.e. a given set of abstractions can't solve problems the same way as a different set of abstractions. The strong version is a Venn diagram:

         ┌──────────────────────────────┐
         │                              │
         │ Implementations expressible  │
         │ by abstraction set A         │
         │                              │
         │       ┌──────────────────────┼───────┐
         │       │                      │       │
         │       │ Implementations      │       │
         │       │ expressible by both  │       │
         │       │                      │       │
         └───────┼──────────────────────┘       │
                 │ Implementations expressible  │
                 │ by abstraction set B         │
                 │                              │
                 └──────────────────────────────┘
In particular, if you are still in the "this problem isn't completely, rigorously nailed down" phase, then building abstraction-hierarchy A implicitly means that you cannot explore some of the solutions available to abstraction-hierarchy B.

Said another way, abstractions cut down the space of possible solutions.

Cutting down the space definitely has large upsides. If you're trying to build a nuclear plant, we probably want to weed out the cake-baking solutions. For especially large and complex problems with horrendously large solution spaces, abstractions function as a way of compartmentalizing some of that solution space into manageable sub-problems. However, maybe there is a better set of sub-problems?

After a year hammering on some software development project, you probably have a much clearer idea of the problem's nuances than when you first started. Wouldn't it be great if we could perform low-cost rewrites? If you can hold the entire source code in your head and cogitate about it, that's probably even possible. Could any one human hold all of Firefox in their head?

One point from the video stands out to me: abstractions might just be a necessary evil. They are effective tools for helping humans cogitate about complex problems, but that help comes at the cost of limiting our ability to cogitate about potential solutions.

Anyway, I am reminded of the surprising solutions found by genetic algorithms and their ilk.

[+] chii|4 years ago|reply
> You need some kind of abstraction if you want cheap extensibility.

So does the need for cheap extensibility come first, in which case you build up the abstraction to enable it? Or does the abstraction get built up first, which gives you cheap extensibility, and then users start needing it afterwards?

What if clang didn't end up being as popular as it did, and the effort it took for the "cheap" extensibility was never needed as no one actually extended it?

[+] parksy|4 years ago|reply
It is thought-provoking at the very least. I read his views on abstraction from the perspective of the product itself. Imagine there is some perfect solution for the ultimate technology. It's a perfect whole: every operation has meaning and purpose and is as optimised as it could possibly be, the hardware and software blend seamlessly with no unnecessary redundancy, and so on. Maybe such a thing is so advanced we'd barely recognise it, woven into the very fabric of our biology and culture, who knows.

Our human limitations and abstractions limit our ability to approach this goal. So it's not that abstraction is bad, since it's useful for actually getting stuff done; it's more that we should always keep in mind that abstraction itself is not the goal. It's just a tool we have to use to get there, because of human limits and, I would also say, natural limits (like computational efficiency, etc.).

Obviously such a thing of beauty is a hypothetical extreme. I can see this idea being useful on a smaller scale: in organising a team, designing a product, or designing a piece of software, we should pay a lot more attention to where we draw boxes around our design space.

But I am kind of with you when it comes to the broader-sweeping implications. How far does this concept go, and what power do individuals have without some social techno-revolution to make any broader architectural changes to basically anything? I have a mobile phone that communicates via radio with centralised systems. These centralised systems exist to govern access and enforce payment for services, because that's how society itself is structured. Perhaps there's key value stores at various layers of this architecture. There is a store (again because money) where I can choose my own apps (because of human desire for freedom of choice) or view ads (money), all human constructs. (It's all becoming quite philosophical, another human trait.)

But is that the best possible solution for the problem of networked personal computer devices? Is mobile networked computing even an optimal solution for biological lifeforms, is it a technological evolutionary dead end or a precursor to some broader construct we're yet to discover, and if so then what do those superior structures look like from a design and organisational perspective? Perhaps on some alien world they have figured it out, and maybe their solution doesn't include radio communication or key value stores, or maybe it's something we wouldn't even recognise as technology, who knows.

What I take away from it is a call to think about why we're designing things the way that we do. Is there a way we can draw the design space differently and organise teams to consolidate or reimagine the problems they're solving in order to rule out truly unnecessary abstraction and keep only that which is necessary?

I can't imagine how it might look, but I am sure that if someone figured out a more optimal way of organising hardware and software, one that served the infinitely variable needs of humanity securely and efficiently, with less abstraction and the same or greater flexibility, and that would continue to do so in perpetuity, that person would become rather rich (unless that solution made money obsolete, and here we go again).

[+] beebmam|4 years ago|reply
A law that doesn't make predictions isn't a law. A law that is not falsifiable isn't a law. It is an unscientific belief.

It's truly incredible to me that people, like the person in this video, can speak with such confidence about claims like this one from the video: "if we look at an org chart for an organization, and we look at the structure of the products that it produces, we would expect them to basically just be collapses of each other [i.e. a homomorphism]". Also known as https://en.wikipedia.org/wiki/Conway%27s_law

A sincere person in search of truth asks questions like the following when they encounter a claim:

- can we think of circumstances where this law is not true?

- can we test this claim to show that it is true?

- can a test be devised which would falsify this claim?

- if this claim is true, what are the mechanisms of action for the claim?

- if this claim is true, are there any contradictions that would arise with other things we know are true?

Using the same metaphor used in this video, if the currently recognized law of gravitation (General Relativity) made predictions which were different than what is observed, then that law is wrong. And a scientist would adjust their law to reality and be more than willing to point out the gaps of explanation in our law of gravitation (which they do).

If we're serious about Computer Science (and Software Engineering) being a field in pursuit of truth, we should be as rigorous and critical as other fields of science and engineering when it comes to making claims.

[+] jchw|4 years ago|reply
The more I think about this comment, the more I feel it is missing the point. They do expound a fair bit on the term “law”, but they even admit both that the title is a bit flippant (and not the one they originally wanted to go with) and that their formulation of the concept does not yet meet the criteria necessary to consider it a “law”; they only believe there is probably an underlying law.

Like many have said in the past, computer science is not really “science” or even engineering. And if that’s true, then software architecture really isn’t science. It’s closer to a soft science if anything. There may not really be a meaningful definition of “law” and that degree of rigor may not be very easy to accomplish. After all, there’s hardly anything objective about it.

However, that doesn’t mean that observations about it are not interesting. I certainly think Conway’s law is interesting despite that it may not meet the criteria to be called a “law” in a harder science.

[+] 1970-01-01|4 years ago|reply
He spent a few minutes explaining how this isn't a scientific law. I think you missed it. Go 6 minutes in.
[+] squeegmeister|4 years ago|reply
Doesn't Conway's law make predictions and isn't it falsifiable? You pointed it out yourself "if we look at an org chart for an organization, and we look at the structure of the products that it produces, we would expect them to basically just be collapses of each other [i.e. a homomorphism]"

You could validate these claims by looking at org charts across various companies, comparing them with those companies' software architectures, and coming up with some measure of how closely they resemble each other.
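One toy way to make that measurable (the metric, names, and data below are my own illustration, not from the video or the comment): count what fraction of cross-team module dependencies are mirrored by a communication link between the owning teams.

```python
def conway_alignment(team_links, module_deps, owner):
    """Fraction of cross-team module dependencies that are mirrored by
    a communication link between the owning teams (1.0 = perfect mirror)."""
    links = {frozenset(pair) for pair in team_links}  # undirected team links
    cross = [(a, b) for a, b in module_deps if owner[a] != owner[b]]
    if not cross:
        return 1.0  # no cross-team dependencies to violate the law
    mirrored = sum(frozenset((owner[a], owner[b])) in links for a, b in cross)
    return mirrored / len(cross)

# Toy example: three teams, four modules.
team_links = [("frontend", "api"), ("api", "storage")]
owner = {"ui": "frontend", "gateway": "api", "auth": "api", "db": "storage"}
module_deps = [("ui", "gateway"), ("gateway", "auth"),
               ("gateway", "db"), ("ui", "db")]

# 2 of the 3 cross-team dependencies mirror a team link (~0.67);
# the ui -> db dependency has no matching frontend <-> storage link.
print(conway_alignment(team_links, module_deps, owner))
```

A serious study would need a less naive metric (graph similarity, module granularity, indirect communication), but even this sketch shows the claim is testable in principle.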

[+] ukj|4 years ago|reply
Please spare us from truth-seeking and leave that to the philosophers.

In science/engineering we care about instrumentalism, not truth.

All models are wrong. Some are useful.

[+] brown|4 years ago|reply
Nice video. Conway's 1968 paper is a good find.

The conclusion is slightly defeatist, but ultimately correct. At time 49:23, Casey says "But we have to do them right now, because we haven't figured out how to do it better."

[excessive modularity] is the worst form of [software development] – except for all the others that have been tried.

[+] andrelaszlo|4 years ago|reply
Perhaps it's a desperate attempt to defer the problem of what the organization should look like. The more nodes you have in your graph, the easier it will be to collapse it into something that maps to the organization you want?
[+] 0xbadc0de5|4 years ago|reply
Having watched Casey's Handmade Hero series since day-1, I've always found him to be highly skilled and insightful. While not a game developer myself, learning from his approach to first-principles software development and code optimization has paid dividends in my day to day work nonetheless.
[+] Ygg2|4 years ago|reply
I've found him passionate and smart. But rarely right.

Like his bashing of the SOLID principles. It read like a man arguing against hammers and suggesting drills instead (which is fine if you need to drill a hole, but bad advice if you want to hammer a nail). Like, yeah, SOLID is overused and overstated, but the principles were invented to stop a certain set of problems.

[+] unyttigfjelltol|4 years ago|reply
The law holds for human orgs for a reason beyond "communication". No business unit that is asked to contribute to a project would allow its contribution to take a form other than a discrete, quantifiable module. The 5-person team in the lecture came up with a 5-pass compiler because none of the 5 people was willing to have an unquantifiable contribution, indistinguishable from not having shown up to work in the first place.

The case with software classes and components is different. None of those inanimate objects cares whether its contribution is measurable, so the law does not apply as strongly to them.

[+] phtrivier|4 years ago|reply
Even more depressing than "The 20 million lines problem", because the gist of the talk is that programming with more than one person is doomed to fail, and programming with fewer than two is even worse.
[+] mellosouls|4 years ago|reply
An hour-long video with comments turned off? A summary would be useful...
[+] seanhunter|4 years ago|reply
I would say just read Conway's actual paper, "How Do Committees Invent?".

This is one of those talks where I'm convinced the person is being paid by the second. By the time we got to the third iteration of "before I tell you the thing I'm going to talk about, let me (define what a law is / critique Harvard Business Review / give an irrelevant sidebar about the language of technical papers and the fact that 'Datamation' still exists)", I had totally lost any and all interest.

[+] worewood|4 years ago|reply
Yeah... I can understand this block on political videos or other sensitive topics, but on this? Well, if you can't take the criticism, perhaps don't post it on YouTube.
[+] clarkdale|4 years ago|reply
I'm curious how to use Conway's Law to be more effective. I can learn from Brooks's and Amdahl's laws to improve software, but how do I apply Conway's?

I think the closest thing is Bezos's API mandate. This is an attempt to flatten communications across a vast organization, with the upfront cost that each team must build and maintain an API.

[+] kirykl|4 years ago|reply
lede is buried at the bottom of challenger deep on this one
[+] FpUser|4 years ago|reply
I think the basic ideas and architectures stay more or less the same for decades. Wrapping them up in some fancy terminology and calling them new does not mean "breaking the law". It is like RPC vs CORBA vs DCOM vs 10,000 other come-and-go standards, which are essentially the same thing.
[+] axiosgunnar|4 years ago|reply
tldw?
[+] smegsicle|4 years ago|reply
conway's law: how it arises, why it's unavoidable, relationship w/ brooks' and amdahl's laws, how it explains complexity compounding over time in the form of integrating with past organization structure.
[+] 0xbadc0de5|4 years ago|reply
You care about this if:

- Software architecture matters to you

- Software performance matters to you

- Team and organizational performance matters to you

[+] teddyh|4 years ago|reply
It’s Conway's law.
[+] holyyikes|4 years ago|reply

[deleted]

[+] Laremere|4 years ago|reply
I think it's fair to have opinions, but you are overstating your case here -

> You could have had an entire career in the video game industry in the amount of time he's managed to produce some pile of crap that doesn't even have a complete gameplay loop.

Including taking time to explain concepts, Q+A, and design choices made to educate that are then iterated upon (eg, doing 2d graphics from scratch then moving to 3d), he has accumulated approximately 28 weeks of full time work.[1] That's a very sad "career" in video games.

[1] Math: Using playlist https://www.youtube.com/playlist?list=PLnuhp3Xd9PYTt6svyQPyR... with a calculation tool https://ytplaylist-len.herokuapp.com/ gives an average of 1 hour 40 minutes per video. The tool has a limit of 500 videos, so multiplying the average length by the true playlist length, then dividing by 40 hours per week, gives 27.83 weeks: https://www.wolframalpha.com/input?i=%281+hour+40+minutes%29...
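Spelling out the arithmetic from [1] (the episode count here is back-derived from the stated 27.83 weeks, since the counting tool capped out at 500 videos, so treat it as an estimate):

```python
# Back-of-envelope check using only the numbers stated above.
avg_video_hours = 1 + 40 / 60   # 1 h 40 min average per video
total_weeks = 27.83             # stated result
hours_per_week = 40             # full-time week

total_hours = total_weeks * hours_per_week        # ~1113 hours of video
implied_videos = total_hours / avg_video_hours    # ~668 episodes implied
print(round(total_hours), round(implied_videos))
```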

[+] HexDecOctBin|4 years ago|reply
> You can safely ignore this guy.

Thanks for telling us this. We might have thought for ourselves based on the content, but now we don't need to.

[+] dS0rrow|4 years ago|reply
do you mind sharing why you consider handmade hero a con?
[+] ladberg|4 years ago|reply
Casey's name should be in the title! I think a lot of HNers respect him.
[+] russellbeattie|4 years ago|reply
Really?? I have no idea who he is, and this video didn't impress me at all.
[+] aaaaaaaaaaab|4 years ago|reply
Conway's "law" is such a cop-out excuse for shipping shitty software... There's no such law of nature that says you must ship shitty software. Enterprises ship shitty software because the average tenure of their developers is 2 years, and they have no incentive to improve things beyond what's necessary to pay the bills.
[+] Ygg2|4 years ago|reply
I mean, at the end of the day it boils down to a few things.

- Entropy

- Capitalism

- Greed

To not be vague: users' greed for features, and much less for performance, causes an increase in complexity. A complex system is by definition more chaotic and harder to optimize.

And Capitalism rewards doing just a bit better than the competition, i.e. optimizing your time on the quickest things that give users the most satisfaction.