ohwellhere | 1 year ago

My algorithms class taught me to think of it not as "describing performance" in an absolute sense, but as "describing how performance changes as the size of the input data increases".

It is not necessarily true that an O(1) algorithm will outperform an O(n^2) alternative on a particular set of data. But it is true that the O(1) algorithm will eventually outperform the O(n^2) alternative once the input grows large enough.
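A minimal sketch of that crossover, with invented cost constants (the fixed_cost and unit_cost values are assumptions for illustration only): model an O(1) algorithm with a large fixed cost against an O(n^2) algorithm with a tiny per-element cost, and see where the asymptotically better one takes over.

    # Toy cost model; the constants are made up purely for illustration.
    def t_constant(n, fixed_cost=1_000_000):
        # O(1): the cost does not depend on the input size at all
        return fixed_cost

    def t_quadratic(n, unit_cost=1.0):
        # O(n^2): the cost grows with the square of the input size
        return unit_cost * n * n

    # Below the crossover the "worse" complexity class is actually faster;
    # past it, the O(1) algorithm wins and keeps winning.
    for n in (10, 100, 1_000, 10_000, 100_000):
        faster = "O(1)" if t_constant(n) < t_quadratic(n) else "O(n^2)"
        print(f"n={n:>7}: faster under this model: {faster}")

With these particular constants the crossover sits at n = 1,000; different constants move the crossover, but they never remove it, which is exactly what the asymptotic claim promises.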


tialaramex | 1 year ago

This sometimes doesn't work out in practice because the scaling involved runs into a limitation your big-O model didn't account for. Typical examples are the size of the machine registers, physical RAM, addressable storage, or transmission speeds.

If your O(1) algorithm takes an hour for any input, and the competition is O(n), it may seem like there must be cases where you're better. Then you realise n is the size in bytes of some data in RAM, and your competitors can do 4GB per second. You won't be competitive until we're talking about 15TB of data, and then you remember you only have 64GB of RAM.
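For what it's worth, the 15TB figure checks out as a back-of-the-envelope calculation using only the numbers already in the comment (a fixed one-hour cost versus a 4GB/s linear scan):

    O1_SECONDS = 3600      # the O(1) algorithm always takes one hour
    LINEAR_RATE = 4e9      # bytes per second for the O(n) competitor

    # The O(n) algorithm needs n / LINEAR_RATE seconds, so it only loses
    # once n / LINEAR_RATE > O1_SECONDS, i.e. n > LINEAR_RATE * O1_SECONDS.
    crossover_bytes = LINEAR_RATE * O1_SECONDS
    print(crossover_bytes / 1e12, "TB")   # 14.4 TB, roughly the 15TB above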

Big-O complexities are not useless, but they're a poor summary alone, about as useful as knowing the mean price of items at a supermarket. I guess this supermarket is cheaper? Or it offers more small items? Or it has no high-end items? Or something?

chipdart | 1 year ago

> (...) but as "describing how performance changes as the size of the input data increases".

Yes, that's the definition of asymptotic computational complexity. That's the whole point of these comparisons. It's pointless to compare algorithms when the input size is in the single-digit range.

Kubuxu | 1 year ago

You could have an O(N^2) algorithm outperform an O(N) one at a scale of 10,000 (or whatever scale you want to imagine). Big-O notation compares only asymptotic behaviour, and sometimes the constant factors and lower-order terms are overwhelming at practical input sizes.
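A concrete, entirely made-up pair of cost functions showing how that can happen (the per_item constant is an assumption chosen so the crossover lands past 10,000): an O(N) algorithm with a heavy per-item cost against an O(N^2) algorithm whose constant factor is negligible.

    def t_linear(n, per_item=20_000):
        # O(N), but every item is expensive to process
        return per_item * n

    def t_quadratic(n):
        # O(N^2), with a tiny constant factor
        return n * n

    for n in (1_000, 10_000, 20_000, 100_000):
        faster = "O(N)" if t_linear(n) < t_quadratic(n) else "O(N^2)"
        print(f"N={n:>7}: faster under this model: {faster}")
    # At N = 10,000 the O(N^2) algorithm is still ahead; the asymptotically
    # better O(N) algorithm only pulls away once N passes 20,000.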