I'm not sure what persistent means in this context, but what is meant by "fast" immutable array here is that the immutable array supports operations like insert and delete (returning new immutable objects) in near-constant time.
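To make the "returning new immutable objects" part concrete, here is a deliberately naive sketch in Python (my own illustration, not Elm's implementation): updating by copying the whole array preserves the old version, but costs O(n) per update, which is exactly what the fast structures discussed below avoid.

```python
def naive_set(arr, index, value):
    """Return a new tuple with arr[index] replaced; arr itself is unchanged.
    This is persistent but slow: the full O(n) copy happens on every update."""
    return arr[:index] + (value,) + arr[index + 1:]

original = (1, 2, 3, 4)
updated = naive_set(original, 2, 99)
print(original)  # (1, 2, 3, 4) -- the old version survives
print(updated)   # (1, 2, 99, 4)
```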
Fast(er) persistent arrays are easier than purely functional ones. See Baker's Shallow Binding Makes Functional Arrays Fast (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.3...). The problem with impure persistence is that it doesn't help concurrent accesses at all.
The fact that Elm announces constant array access will not surprise anybody, as this is an inherent property of arrays. But then seeing that they have to use radix trees for arrays and announcing it as constant is highly misleading. These guys should switch over to politics.

And this only holds for immutable arrays! So even without push, pop, shift, unshift they have to use trees for arrays.
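As a rough sketch of the structure in question (Python, not Elm's actual code; names and depths are mine): a 32-way radix trie indexes with 5 bits per level, and a persistent update copies only the handful of nodes on the root-to-leaf path, sharing everything else with the old version.

```python
BITS = 5            # 32-way branching: 5 index bits consumed per level
WIDTH = 1 << BITS   # 32

def trie_get(root, index, depth):
    """Follow one child pointer per level: ceil(log32(n)) hops in total."""
    node = root
    for level in range(depth - 1, -1, -1):
        node = node[(index >> (level * BITS)) & (WIDTH - 1)]
    return node

def trie_set(node, index, value, depth):
    """Path copying: rebuild only the nodes on the root-to-leaf path;
    every untouched subtree is shared with the old version."""
    if depth == 0:
        return value
    slot = (index >> ((depth - 1) * BITS)) & (WIDTH - 1)
    new_node = list(node)  # copy one 32-slot node, not the whole array
    new_node[slot] = trie_set(node[slot], index, value, depth - 1)
    return new_node

# A depth-2 trie holding 1024 elements: 32 leaf nodes of 32 values each.
old_root = [list(range(i * WIDTH, (i + 1) * WIDTH)) for i in range(WIDTH)]
new_root = trie_set(old_root, 1000, -1, 2)
print(trie_get(old_root, 1000, 2))  # 1000 -- old version untouched
print(trie_get(new_root, 1000, 2))  # -1
```

The update above copied two 32-slot nodes instead of 1024 elements; the other 31 leaves are physically shared between the two versions.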
Fully populating a 128-bit memory pool would take enough energy to boil the oceans [1]. With a branching factor of 32, addressing all of it requires a trie only 26 levels deep.
So the absolute worst case scenario - even if we get science fiction computing - is a small, constant number of indirections.
The whole purpose of this type of analysis is to get an intuitive approximation of cost as input size grows. The graphs of log32(n) and 1 will be essentially indistinguishable, and most programmers have a much clearer understanding of constant time than of O(log32(n)).
It is not misleading, it is a simplification that is absolutely valid to make.
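The arithmetic behind those depth claims is easy to check; this is just my own sketch of the calculation, using exact integer math:

```python
def depth32(n):
    """Levels needed in a 32-way trie holding n elements (5 bits per level)."""
    bits = max(1, (n - 1).bit_length())  # bits needed for the largest index
    return -(-bits // 5)                  # ceiling division

print(depth32(2 ** 128))  # 26 -- the "boil the oceans" upper bound
print(depth32(10 ** 9))   # 6  -- a billion elements
```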
Saying "constant time in practice" is notably different from saying "constant time".
I can always sort a 32-bit int array in worst-case linear time using counting sort, which in theory is better than the worst-case runtime of quicksort, O(n²). Is it better in practice? Of course not, the constant factor is just too high, both for time and memory.
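For reference, a minimal counting sort sketch in Python (a small key range is used here for demonstration): the running time is O(n + k) where k is the key-range size, and for full 32-bit ints k = 2**32, so the counts array alone would need billions of entries, which is the impractical constant being described.

```python
def counting_sort(values, key_range):
    """Sort non-negative ints below key_range in O(n + key_range) time.
    For 32-bit keys, key_range = 2**32 dominates both time and memory."""
    counts = [0] * key_range
    for v in values:
        counts[v] += 1
    out = []
    for key, count in enumerate(counts):
        out.extend([key] * count)
    return out

print(counting_sort([3, 1, 4, 1, 5], 10))  # [1, 1, 3, 4, 5]
```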
Much as I dislike it, not everyone uses 'array' to mean a collection with constant-time indexing. Lots of people use it to mean an indexable collection.
The 'constant' vs. 'almost constant' has been covered by other comments.
Given the current rate of progress, in what year would you predict that web browser applications will use arrays large enough that this distinction makes a practical difference, and not just an academic one?
slice/push/get/append can stand in for push/pop/shift/unshift.
The immutability of the arrays is considered a feature, and JavaScript arrays are O(n) for the same operations.
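The O(n) cost of a front insertion on a plain mutable array is easy to see by writing it out by hand; this Python sketch mimics what JavaScript's unshift has to do under the hood (my own illustration):

```python
def unshift(arr, value):
    """Insert value at the front of a mutable array: every existing
    element must move one slot to the right, so the cost is O(n)."""
    arr.append(None)                  # grow by one slot
    for i in range(len(arr) - 1, 0, -1):
        arr[i] = arr[i - 1]           # shift each element right
    arr[0] = value
    return len(arr)                   # JS unshift returns the new length

xs = [2, 3, 4]
print(unshift(xs, 1), xs)  # 4 [1, 2, 3, 4]
```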
tehabe | 12 years ago
I miss those email clients. They were much nicer than most of the email software which is considered modern today.
malandrew | 12 years ago
https://news.ycombinator.com/item?id=7689823
sitkack | 12 years ago
ahh, found it. http://swannodette.github.io/mori/
[1] https://blogs.oracle.com/bonwick/entry/128_bit_storage_are_y...