tetrazine|2 years ago
It’s obvious from CS201 that this phrase is generally true and I have no pedantic objection to its inclusion in the article. However, I’m curious if this is strictly true or if we can find a (nontrivial) counterexample. Are there any algorithms that run faster as input grows? My first intuition is some kind of probabilistic question solved to a given correctness bound.
Edit: it is trivial to construct an algorithm with this property. My precise question is whether any meaningful problems have optimal algorithms with this property.
daveFNbuck|2 years ago
It's actually impossible to construct an algorithm whose runtime decreases as the input size grows. You can do this for finitely many input sizes, but we're talking about asymptotics here. With discrete run-times and a lower bound of 1 operation, the run-time has to stop decreasing at some point and stay at the same constant value for all but finitely many exceptions. That makes it a constant-time algorithm.
A constant run-time is a counter-example to that caption though, as the problem doesn't take longer as the input grows. An example would be checking if a number is divisible by 2. You only need to check 1 bit, so it doesn't take any longer as you add more bits that the algorithm doesn't touch.
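To make that concrete, here's a minimal sketch in Python (the binary-string representation is just one choice for illustration): the function inspects only the final bit, so the work done doesn't grow with the input's length.

```python
def is_even(bits: str) -> bool:
    """Check divisibility by 2 by inspecting only the lowest-order bit.

    `bits` is a binary string, most significant bit first; the runtime
    is constant no matter how many bits the input has.
    """
    return bits[-1] == "0"

print(is_even("10110"))  # 22 -> True
print(is_even("111"))    # 7  -> False
```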
quickthrower2|2 years ago
What if we devise a data structure that becomes more efficient as it grows to do a certain (perhaps useless - but that is OK!) operation.
For example we have a data structure at N=0 with a million disconnected nodes and one of them being the target node.
As N increases, add a node with a pointer to the target node. The algorithm to find the target node is a random search through nodes until it finds one with a pointer, then halts. Each probe succeeds with probability N/(N+K), so the expected number of probes is (N+K)/N = 1 + K/N, where K = 1,000,000. Average time decreases toward a single probe as N grows.
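A quick simulation of the structure above (using a much smaller K than a million, purely to keep it fast): each probe picks a node uniformly at random and halts on a pointer node, which happens with probability N/(N+K), so the expected probe count is (N+K)/N = 1 + K/N.

```python
import random

def average_probes(K: int, N: int, trials: int = 5_000) -> float:
    """Simulate: K disconnected nodes plus N nodes that each hold a
    pointer to the target.  Each probe succeeds with probability
    N / (N + K), so the expected probe count is (N + K) / N = 1 + K/N."""
    total = 0
    for _ in range(trials):
        probes = 1
        # Keep probing until we hit one of the N pointer nodes.
        while random.random() >= N / (N + K):
            probes += 1
        total += probes
    return total / trials

# Average work shrinks as N grows.
print(average_probes(K=1_000, N=100))     # around 1 + 1000/100  = 11
print(average_probes(K=1_000, N=10_000))  # around 1 + 1000/10000 = 1.1
```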
Dylan16807|2 years ago
But that's an extremely limited and misleading analysis, so you should not do that. If the time taken goes from a quadrillion to 7 as the problem size grows, don't call it constant.
Nevermark|2 years ago
But the solution would still need to be checked, and further candidates generated, until a solution is found.
Average time would be minimized by generating random vectors each time.
But that would make the worst case unbounded (effectively infinite), since after any number of steps there remains a vanishingly small but nonzero chance that a solution has not yet been found.
Some kind of simple search would have a far tighter worst-case bound, but in practice require more computation.
paulddraper|2 years ago
Actually, it is impossible to construct an algorithm with this property, at least under any usual computational model.
Every computational model has discrete time units; therefore the amount of time taken cannot be vanishingly small.
(This is assuming the usual notion of complexity, where only the asymptotic behavior matters. It doesn't matter if it gets increasingly faster over the first million input sizes...it only matters what the behavior is as n -> infinity.)
A program cannot be increasingly faster forever.
PartiallyTyped|2 years ago
Assume you use a linked list for collisions; then for some growth rates your cost of access is not constant, and is at most O(n).
The point being that a universal hash function has a 1/m probability of collision per pair of keys. If your growth rate is bad, then m << n, which turns the expected lookup time from constant into O(n), because m grows more slowly than n.
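A tiny sketch of that failure mode (the `ChainedTable` class is made up for illustration): hold the bucket count m fixed while n grows, and lookups degrade into chain scans of expected length n/m.

```python
class ChainedTable:
    """Hash table with chaining whose bucket count m never grows —
    the "bad growth rate" case, so chains average n/m entries."""

    def __init__(self, m: int):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def insert(self, key, value):
        self.buckets[hash(key) % self.m].append((key, value))

    def lookup(self, key):
        # Work here is proportional to the chain length, roughly n/m.
        for k, v in self.buckets[hash(key) % self.m]:
            if k == key:
                return v
        return None

t = ChainedTable(m=4)       # m stays fixed at 4
for i in range(1000):       # n = 1000, so chains average 250 entries
    t.insert(i, i * i)
print(t.lookup(999))        # 998001, found only after scanning a long chain
```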
Filligree|2 years ago
In that case, as the size of the corpus grows it should get easier to find ones in the right range of similarity.
nullc|2 years ago
You could imagine an algorithm that takes some astronomical time for size 1, less for 2, etc., but for any finite time at size 1 there will be some size N where the time spent reading the input finally dominates the computation.
Nevermark|2 years ago
Adding the constraint of only needing to classify a particular form of prime will always result in an algorithm of equal or lesser complexity order.