top | item 41215897

apantel | 1 year ago

I just want to make a comment about optimizing applications even though the article is about optimizing organizations:

The way to arrive at the optimal system is to continually optimize all individual components as the system develops. You have to walk the razor’s edge between “premature optimization is the root of all evil” and not optimizing anything until it’s too late and a bunch of bad design decisions have been baked in.

If you want to write the fastest program, profile its performance from the start and try different ways of doing things at each step of the process. There’s usually a way of doing things that maximizes simplicity and performance at the same time because maximum performance == doing as little work as possible to complete a task. Chances are that ‘littlest amount of work’ will be elegant.
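A minimal sketch of that measure-as-you-go habit in Python (the task and function names here are my own illustration): time two candidate implementations of the same step side by side, and notice that the one doing the least work per item is often both simpler and faster.

```python
import timeit

# Candidate 1: explicit loop accumulating a total.
def squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Candidate 2: the same task with the work dispatched to the builtin sum().
def squares_builtin(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Profile from the start: compare alternatives at each step, not at the end.
    for fn in (squares_loop, squares_builtin):
        t = timeit.timeit(lambda: fn(10_000), number=200)
        print(f"{fn.__name__}: {t:.4f}s")
```

Swapping in alternatives behind a stable function signature like this keeps the experiment cheap to run at every stage of development.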

PaulKeeble | 1 year ago

Usually "optimisation" is something that happens once a slow part of the system has been identified. You gather data, expose the problem with a profiler, fix a few things that seem to be holding it up, and then find the next stage of bottlenecks. You keep doing this until you meet your performance goal, or until you can't improve it any further and determine that a completely different approach is required.
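As a sketch of that loop in Python (the workload here is invented), `cProfile` ranks functions by cumulative time, which points straight at the current bottleneck; after fixing it, re-profiling reveals the next one:

```python
import cProfile
import io
import pstats

def slow_part():
    # Deliberately heavy stand-in for the slow part of the system.
    return sum(i * i for i in range(50_000))

def fast_part():
    return len("hello")

def workload():
    for _ in range(20):
        slow_part()
    fast_part()

# Profile the workload, then rank by cumulative time to find the bottleneck.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())  # slow_part dominates the top of the listing
```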

Personally I prefer to recognise those components ahead of time, think about performance, and run experiments so that I start from an architecture with a better chance of meeting its performance goals from the outset. So I tend to agree with the article that it's much more effective to optimise the architecture than the individual components, but it's also much harder to do that once the thing is built and working, and the same is likely true for a lot of organisations. Once organisational culture has been set, it's hard to fix.

drewcoo | 1 year ago

> Usually "optimisation" is something that happens once a slow part of the system has been identified

I wonder if that's because speed is easy to measure. It's certainly not the only thing that can be optimized.

ZoomerCretin | 1 year ago

Finding and fixing the one and only source of poor performance is a minority of optimizations. The more common case is a lot of suboptimal code creating poor performance.

llamaimperative | 1 year ago

Ehhh this isn’t quite the right takeaway, or at least it’s contrary to Deming’s approach.

The key insight from Deming’s work is that at any given moment there is only and exactly ONE thing that should be optimized. Once you optimize that, there will be a new SINGLE thing that is now slowing down the entire system, and must be optimized.

The goal of an engineer of a system (or manager of an org) is to develop a method to repeatedly identify and improve each new bottleneck as it appears, and to ignore other components even if they’re apparently not performing as well as they could.
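A toy sketch of that discipline (stage names, capacities, and the improvement step are all hypothetical): model the system as stages, find the single slowest one, improve only it, and repeat.

```python
# Hypothetical pipeline stages with units-per-hour capacities.
stages = {"intake": 120, "review": 40, "build": 90, "ship": 60}

def bottleneck(stages):
    # System throughput is capped by the slowest stage.
    return min(stages, key=stages.get)

# Repeatedly improve only the current bottleneck; ignore every other stage,
# even ones that look like they could be doing better.
for _ in range(3):
    worst = bottleneck(stages)
    print(f"throughput limited by {worst} at {stages[worst]}/hr")
    stages[worst] = int(stages[worst] * 1.5)  # assumed improvement step
```

Note how the bottleneck moves: once `review` is improved enough, `ship` becomes the new single constraint, which is exactly the dynamic the comment describes.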

Jensson | 1 year ago

That is what everyone currently does, and we know the results: it only takes you from horribly slow to very slow.

In order to have a chance of getting anywhere close to fast, you need each component to already be very fast, and then you can build a fast system on top of those. When you work with slow components you won't use the right architecture for the system; instead you start working around the slowness of each component, and you end up with a very slow mess.

Example: a function call is slow because it calls a thousand different things, so you speed it up by putting a cache in front of it. Great! But now the cache itself slows down the system, when you could instead have sped up the function call's execution time by optimizing or trimming down those thousand things it calls.

Adding that cache moved you further away from a fast system, not closer. One cache is negligible, but imagine every function creating its own cache; that would be very slow and basically impossible to fix without rewriting the whole thing.
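To make the contrast concrete, here is a sketch in Python (all names are invented): the cached version hides the cost without removing it, since a cache miss still does all the work, while the trimmed version removes the work itself, in this case via the closed-form sum of squares.

```python
from functools import lru_cache

def expensive(i):
    # Stand-in for one of the "thousand different things" the call does.
    return i * i

# Workaround path: hide the slowness behind a cache. A miss still pays full price,
# and every such cache adds memory and lookup overhead of its own.
@lru_cache(maxsize=None)
def summarize_cached(n):
    return sum(expensive(i) for i in range(n))

# Fix-the-cause path: trim the work itself. The sum of squares below n has a
# closed form, n(n-1)(2n-1)/6, so no per-item work remains at all.
def summarize_trimmed(n):
    return n * (n - 1) * (2 * n - 1) // 6
```

The trimmed version needs no cache, no invalidation, and no per-call bookkeeping, which is the "littlest amount of work" elegance the thread opened with.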

RaftPeople | 1 year ago

> The key insight from Deming’s work is that at any given moment there is only and exactly ONE thing that should be optimized. Once you optimize that, there will be a new SINGLE thing that is now slowing down the entire system, and must be optimized.

Are you sure this is what Deming thought?

A complex system has many different things that can be locally optimized independently; here's an example:

1-DC picking optimization

2-Store inventory placement to align with demand to both increase sales and decrease costs of liquidating old inventory

Both of these have a very significant impact on costs or revenue or both, and are largely independent. The process of picking in the DC is independent of the specific selection and inventory levels of SKUs in the stores.

Why would you not do both at the same time to get the benefits as quickly as possible?