> It is a reluctant algorithm based on the principle of multiply and surrender. It was published in 1986 by Andrei Broder and Jorge Stolfi in their paper Pessimal Algorithms and Simplexity Analysis.
> The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.
> The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter. Therefore it's safe to assume that it's already optimally sorted in some way that transcends our naïve mortal understanding of "ascending order". Any attempt to change that order to conform to our own preconceptions would actually make it less sorted.
Quantum bogosort is probably my personal favorite. Randomly permute the input using a quantum source of entropy. If the input is not sorted, destroy the universe. The only universe remaining is one where the input was already sorted. Runs in O(n) time.
Unfortunately, randomly permuting the input takes O(n log n) steps (you need at least this much time just to read in sufficiently many random bits). Perhaps a clever parallel architecture could reduce the runtime?
Bogosort is always a fun one, because I think it's one of the better introductions to NP-complete problems.
If you imagine that there's no "easy" algorithm to return a sorted list (but sorted lists do exist), then the only methodology possible is to try all combinations. And you do this either systematically, or randomly.
Random NP-complete algorithms, such as WalkSAT, are quite good in practice. Systematic NP-complete algorithms, such as DPLL-SAT, are more "obvious" in how they work, and are sufficient for smaller sizes.
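For tiny inputs, both strategies can be sketched in a few lines of Python (a sketch of my own, not anyone's reference code): bogosort for the random flavor, a brute-force scan over permutations for the systematic one.

```python
import itertools
import random

def is_sorted(a):
    return all(a[k] <= a[k + 1] for k in range(len(a) - 1))

def bogosort(a):
    # random strategy: keep shuffling until a sorted permutation turns up
    while not is_sorted(a):
        random.shuffle(a)
    return a

def permutation_sort(a):
    # systematic strategy: enumerate permutations in a fixed order
    # until one of them is sorted
    for p in itertools.permutations(a):
        if is_sorted(p):
            return list(p)
```

Both are O(n!)-ish in expectation, which is exactly why they only make sense as an illustration of "try all combinations".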
I recently found a fun linear-time systolic sorting algorithm. I'd describe the serial implementation as unconventionally obvious. It's like insertion sort, without all the bother of swapping.
def index_sort(a):
    n = len(a)
    output = [None] * n
    for i in range(n):  # can run in parallel, damn the GIL
        index = 0
        for j in range(n):
            if a[j] < a[i]:
                index += 1
            elif a[j] == a[i] and j < i:
                index += 1
        output[index] = a[i]
    return output
import ctypes

def mutation_sort(a):
    for i, v in enumerate(a):
        # overwrite the int object's value in place
        ctypes.c_int.from_address(id(v) + 24).value = i  # for x86-64, adjust the offset for your platform
    return a

def is_sorted(a):
    prev = None
    for v in a:
        if prev is not None and prev > v:
            return False
        prev = v
    return True

is_sorted(mutation_sort([3000, 1000, 7000, 8000, 2000]))  # returns True
Usage sort: about sorting a sequence of numbered books while you pick books out of the sequence when you need them and insert them back afterwards (moving the books in between). What is the best strategy for where to put the books back? For a discussion see: https://www.iwriteiam.nl/Dpuzzle.html#usagesort
It's a Go channel based quicksort. It sorts a channel of ints using O(n) go routines! Interestingly it doesn't need to know how many ints are in the channel at the start.
Not a practical sort method but fun to see the Go concurrency primitives in use.
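The gist itself is Go, but the idea can be roughly re-created in Python (my own sketch, using threads and queues in place of goroutines and channels, with None as an end-of-stream sentinel):

```python
import queue
import threading

def quicksort_chan(inp, out):
    # Each call sorts one "channel": take the first value as pivot, spawn two
    # child sorters, stream the remaining values to them by comparison with
    # the pivot, then forward sorted-smaller, the pivot, and sorted-larger.
    # The None sentinel means no element count is needed in advance.
    pivot = inp.get()
    if pivot is None:
        out.put(None)
        return
    lo_in, lo_out = queue.Queue(), queue.Queue()
    hi_in, hi_out = queue.Queue(), queue.Queue()
    threading.Thread(target=quicksort_chan, args=(lo_in, lo_out)).start()
    threading.Thread(target=quicksort_chan, args=(hi_in, hi_out)).start()
    while (v := inp.get()) is not None:
        (lo_in if v < pivot else hi_in).put(v)
    lo_in.put(None)
    hi_in.put(None)
    while (v := lo_out.get()) is not None:
        out.put(v)
    out.put(pivot)
    while (v := hi_out.get()) is not None:
        out.put(v)
    out.put(None)

def sort_stream(values):
    inp, out = queue.Queue(), queue.Queue()
    threading.Thread(target=quicksort_chan, args=(inp, out)).start()
    for v in values:
        inp.put(v)
    inp.put(None)
    result = []
    while (v := out.get()) is not None:
        result.append(v)
    return result
```

Every pivot gets its own sorter thread, so this spawns O(n) threads, mirroring the O(n) goroutines in the Go version.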
I know this article is meant as a joke, but the first variant of bogosort, which is actually randomsort, will be the fastest algorithm once we start using quantum computing.
Take an array, allocate "n" == "array length" qubits for it, then implement a parallel version of randomsort (no while loop, while is serialization!) and you have the fastest sorting algorithm.
Slightly off-topic, the article was fine except for one nitpick...
I wonder what the author, "hilariously" making one sorting function print "<val> sent to gulag" and calling it "stalin-sort", would think of "hitler-sort" which prints "<val> burned in oven"?
Please don't normalize mass-murderers, even for memes.
It's an obvious joke, the point of which is that the Stalin Sort is destructive and doesn't actually get you what you want, and you'd never want to use it.
It is if you ignore how the underlying OS implements multitasking, and sleep() in particular. The kernel-side implementation usually involves some kind of priority queue of sleeping threads. So taken as a whole, sleep-sort is just a convoluted implementation of heap-sort that outsources the actual sorting to the OS.
It's perversely linear with respect to the largest element in the array, if that's greater than the runtime of the underlying heapsort. Good luck with negative entries.
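For reference, the sleep-sort being discussed looks something like this (a Python sketch of the folklore algorithm, not anyone's production code):

```python
import threading
import time

def sleep_sort(a):
    # each value gets a thread that sleeps in proportion to the value, then
    # records it; the kernel's timer queue ends up doing the actual ordering
    # (assumes small non-negative numbers; values too close together can
    # come out misordered)
    result = []
    threads = [threading.Thread(target=lambda v=v: (time.sleep(0.1 * v),
                                                    result.append(v)))
               for v in a]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```

The 0.1-second scale factor is the fudge the thread above is arguing about: shrink it too far and the timer resolution, not the values, decides the output order.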
Response to dead comment:
> Couldn't this be solved by scaling everything by the largest entry. Of course that requires first a pass to find the largest entry but that's not as expensive?
One presumes that we don't have infinite precision timers, so you should only scale it down so the smallest gap between elements is a few clock cycles (unless you're satisfied with breaking the fiction that sleepsort is not heapsort). This seems like a mildly interesting question. My intuition is that finding the smallest abs(a[i]-a[j]) with i != j should be just as hard as sorting the list on a classical computer, but it seems like a quantum computer might find the smallest gap faster.
nayuki | 4 years ago:
https://www.mipmip.org/tidbits/pasa.pdf
Sohcahtoa82 | 4 years ago:
https://www.dangermouse.net/esoteric/intelligentdesignsort.h...
kjeetgill | 4 years ago:
Given a random shuffle of a list, what are the chances that another shuffle would be more sorted? There's a life lesson in here, I know it...
forinti | 4 years ago:
Start with p=0 and add 2^n for each input number n. Then subtract the largest remaining power of 2 successively to get your integers in order.
Con: p will be very big.
Pro: you don't need ifs!
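A sketch of the scheme in Python (my own translation; it assumes non-negative, duplicate-free integers):

```python
def bitmask_sort(a):
    # build one big integer p with bit n set for each input number n
    p = 0
    for n in a:
        p += 2 ** n
    # peel off the largest remaining power of 2 each round
    out = []
    while p:
        high = p.bit_length() - 1  # index of the highest set bit
        out.append(high)
        p -= 2 ** high
    out.reverse()  # peeling largest-first yields descending order
    return out
```

True to the "pro", there is not a single if in there; p really does get big, since it needs 2^max(a) bits.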
Snild | 4 years ago:
Another con: you'd better hope your input contains no duplicates. :)
Snild | 4 years ago:
But... there are two nested `range(n)` for loops. Typo?
nickcw | 4 years ago:
https://gist.github.com/ncw/5419af0e255d2fb62b98
danpalmer | 4 years ago:
Really easy: you see it, then you say it, then it’s sorted.
https://www.btp.police.uk/police-forces/british-transport-po... (this is a famous national campaign on the British railways over the past few years)
r0f1 | 4 years ago:
https://news.ycombinator.com/item?id=28758106
NotAnOtter | 4 years ago:
O(n) sort achieved?
throw3849 | 4 years ago:
Stay at your index!!!
DeathArrow | 4 years ago:
for i = 1 to n do
    for j = 1 to n do
        if A[i] < A[j] then
            swap A[i] and A[j]
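With an outer loop over i, this is the surprising exchange sort sometimes dubbed "I can't believe it can sort"; a Python sketch:

```python
def cant_believe_sort(a):
    # both loops run over the full range, and the comparison looks backwards,
    # yet the result comes out in ascending order
    n = len(a)
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
    return a
```

After the first pass the maximum sits at a[0]; each later pass then behaves like an insertion step, which is why the "wrong-way" comparison still sorts ascending.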
unnouinceput | 4 years ago:
PUSH A[j]; PUSH A[i]; POP A[j]; POP A[i];