apantel | 1 year ago
The way to arrive at the optimal system is to continually optimize all individual components as the system develops. You have to walk the razor’s edge between “premature optimization is the root of all evil” and not optimizing anything until it’s too late and a bunch of bad design decisions have been baked in.
If you want to write the fastest program, profile its performance from the start and try different ways of doing things at each step of the process. There’s usually a way of doing things that maximizes simplicity and performance at the same time because maximum performance == doing as little work as possible to complete a task. Chances are that ‘littlest amount of work’ will be elegant.
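One way to sketch the "profile from the start, try different ways at each step" approach in Python (the task and function names here are illustrative, not from the comment): time two candidate implementations of the same job and keep the one that does the least work, which is often also the simplest.

```python
# Minimal sketch of "profile early, compare candidates": two ways to compute
# the same thing, timed side by side. Task and names are made up for
# illustration.
import timeit

def sum_squares_loop(n):
    # Straightforward version: does n multiplications and additions.
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_formula(n):
    # "Littlest amount of work" version, using the closed form
    # 0^2 + 1^2 + ... + (n-1)^2 = (n-1) * n * (2n-1) / 6.
    return (n - 1) * n * (2 * n - 1) // 6 if n > 0 else 0

# Check the candidates agree before comparing their speed.
assert sum_squares_loop(1000) == sum_squares_formula(1000)

loop_t = timeit.timeit(lambda: sum_squares_loop(1000), number=1000)
formula_t = timeit.timeit(lambda: sum_squares_formula(1000), number=1000)
print(f"loop: {loop_t:.4f}s  formula: {formula_t:.4f}s")
```

Here the fastest version is also the most elegant, which is the comment's point: maximum performance tends to coincide with doing the minimum necessary work.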
PaulKeeble | 1 year ago
Personally I prefer to recognise those components ahead of time, think about performance, and run experiments so that I start from an architecture with a better chance of meeting its performance goals from the outset. So I tend to agree with the article that it's much more effective to optimise the architecture than the individual components, but it's also much harder to do once the thing is already built and working. The same is likely true for a lot of organisations: once the organisational culture has been set, it's hard to fix.
drewcoo | 1 year ago
I wonder if that's because speed is easy to measure. It's certainly not the only thing that can be optimized.
llamaimperative | 1 year ago
The key insight from Deming’s work is that at any given moment there is only and exactly ONE thing that should be optimized. Once you optimize that, there will be a new SINGLE thing that is now slowing down the entire system, and must be optimized.
The goal of an engineer of a system (or manager of an org) is to develop a method to repeatedly identify and improve each new bottleneck as it appears, and to ignore other components even if they’re apparently not performing as well as they could.
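The "repeatedly identify and improve each new bottleneck" method described above can be sketched as a toy loop (this is my illustration, not Deming's own formulation; the pipeline stages and costs are invented): in a pipeline, throughput is set by the single slowest stage, so each round you find that one stage and improve only it.

```python
# Toy sketch of single-bottleneck optimization: measure every stage, but only
# ever improve the one currently limiting the system. Stage names and costs
# are made up for illustration.

stage_cost = {"parse": 5.0, "transform": 12.0, "store": 3.0}  # ms per item

def bottleneck(costs):
    # The ONE stage currently limiting the whole system.
    return max(costs, key=costs.get)

def optimize(costs, rounds, speedup=0.5):
    # Each round, improve only the current bottleneck; leave the rest alone,
    # even though they are also not as fast as they could be.
    for _ in range(rounds):
        costs[bottleneck(costs)] *= speedup
    return costs

print(bottleneck(stage_cost))   # 'transform' (12.0 ms) limits throughput
optimize(stage_cost, rounds=2)  # 12.0 -> 6.0 -> 3.0
print(bottleneck(stage_cost))   # now 'parse' (5.0 ms) is the new bottleneck
```

Note that after the bottleneck is improved enough, a different stage becomes the new single constraint, which is exactly the loop the comment describes.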
Jensson | 1 year ago
In order to have a chance of getting anywhere close to fast, you need each component to already be very fast, and then you can build a fast system on top of those. When you work with slow components you won't use the right architecture for the system; instead you start working around the slowness of each component and you end up with a very slow mess.
Example: A function call is slow because it calls a thousand different things, so you speed it up by putting a cache in front. Great! But now this cache slows down the system, when instead you could have sped up the function call's execution time by optimizing/trimming down those thousand things it calls.
Adding that cache moved you further away from a fast system than you were before, not closer. One cache is negligible, but imagine every function creating a new cache: that would be very slow and basically impossible to fix without rewriting the whole thing.
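The cache-versus-trimming tradeoff described above can be made concrete with a small sketch (the workload and numbers are mine, invented for illustration): a cache only helps on repeated inputs and adds lookup overhead and state, while removing redundant work speeds up every call.

```python
# Sketch of the tradeoff: caching a slow call vs. trimming the work it does.
# The workload is a stand-in for "calls a thousand different things".
from functools import lru_cache

def slow_call(x):
    # Does 1000 multiplications and additions on every call.
    return sum(i * x for i in range(1000))

@lru_cache(maxsize=None)
def cached_call(x):
    # Fast only when x repeats; the cache itself is extra state and overhead
    # that every call now pays for.
    return slow_call(x)

def trimmed_call(x):
    # Same result with the redundant work removed:
    # sum(i * x for i in range(1000)) == x * sum(range(1000)),
    # and sum(range(1000)) == 499500 can be precomputed.
    return x * 499500

# All three agree; only one of them is fast on every call with no extra state.
assert slow_call(7) == cached_call(7) == trimmed_call(7)
```

The trimmed version is faster than the cache on cold inputs, uses no memory, and composes cleanly, which is the comment's argument for optimizing the work itself rather than layering caches over it.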
RaftPeople | 1 year ago
Are you sure this is what Deming thought?
A complex system has many different things that can be locally optimized independently. Here are two examples:
1-DC picking optimization
2-Store inventory placement to align with demand to both increase sales and decrease costs of liquidating old inventory
Both of these have a very significant impact on costs or revenue or both, and are largely independent: the process of picking in the DC is independent of the specific selection and inventory levels of SKUs in the stores.
Why would you not do both at the same time to get the benefits as quickly as possible?