top | item 39012953

OscarDC | 2 years ago

It may be that the overall performance, including that logic, is satisfactory.

Or it may be that the conditions to reach that logic are too rare to make the time to measure and optimize worthwhile.

In any case, I understand it as: if you're not bothered about measuring the performance of something, it may be that you don't need to optimize it.

WJW | 2 years ago

> It may be that the overall performance, including that logic, is satisfactory.

In which case you have already measured it, even if not very precisely.

> Or it may be that the conditions to reach that logic are too rare to make the time to measure and optimize worthwhile.

This too cannot be determined without measuring how often it happens.

OscarDC | 2 years ago

From what I understood of your reply, everything is a measurement, and I fail to see how anything could not be measured. Or perhaps I wasn't clear about what I meant initially?

> In which case you have already measured it, even if not very precisely.

If you're writing a CLI tool, for example, and it responds quickly enough not to be a bother, you won't want to measure a sub-part of its logic. You may be loosely "measuring" the time the whole command took with your eyes and thoughts, but you're not measuring in any way the time taken by that specific piece of logic within the whole thing.
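To make the distinction concrete, here is a minimal sketch (all function names and flags are hypothetical, not from the thread): timing the whole command is a single stopwatch around the entry point, while measuring a sub-part means deliberately instrumenting that one function.

```python
import time

def parse_flags(argv):
    # Hypothetical sub-part of the tool's logic: trivial flag parsing.
    return {arg.lstrip("-") for arg in argv if arg.startswith("-")}

def run_command(argv):
    # The whole command: parse flags, then produce some output.
    flags = parse_flags(argv)
    return sorted(flags)

# "Measuring with your eyes": one stopwatch around the entire command.
start = time.perf_counter()
result = run_command(["--verbose", "--dry-run"])
whole_time = time.perf_counter() - start

# Measuring the sub-part requires extra, deliberate instrumentation.
start = time.perf_counter()
parse_flags(["--verbose", "--dry-run"])
sub_time = time.perf_counter() - start

print(result)          # the command's output
print(whole_time >= 0, sub_time >= 0)
```

The point of the sketch: the first timing comes "for free" just by running the tool, while the second only exists if you decide that sub-part is worth singling out.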

> This too cannot be determined without measuring how often it happens.

If that sub-part of the CLI tool's logic sits behind a specific combination of flags and conditions that you haven't yet seen a use case for, or sometimes not even a possibility of occurring, you may also skip measuring it, without needing to "measure how often it happens". For example, you may want to skip optimizations in what you believe will be rarely encountered error-handling code, even if you never actually measured it. And even if it does happen more often than you thought, low performance may still be acceptable in some unexpected code path (e.g. after a typo in the CLI tool's flags).