Some context: A few years ago, there was a paper from Google (https://dl.acm.org/doi/10.1145/3183713.3196909) that made learned data structures popular for a while. They started from the idea that indexes such as B-trees approximate an increasing function with one-sided error. By using that perspective and allowing two-sided error, they were able to make the index very small (and consequently quite fast).
Many data structure researchers got interested in the idea and developed a number of improvements. The PGM-index is one of those. Its main idea is to use piecewise linear approximations (that can be built in a single quick pass over the data) instead of the machine learning black box the Google paper was using.
Having only skimmed the work, read the following as a somewhat educated guess.
I think one could see this as similar to repeated applications of interpolation search. If you are looking for x in a sorted array of n numbers between a and b, then index (x - a) / (b - a) * (n - 1) would be a good guess, assuming the numbers are uniformly distributed.
But as one cannot assume a uniform distribution in general, one does this repeatedly. The first interpolation leads to better interpolation coefficients for the relevant subrange, which may lead to even better interpolation coefficients for an even smaller subrange, until one eventually finds what one was looking for.
If there is no structure in the data that can be exploited, this degenerates into a more or less ordinary tree, as one can certainly fit a line through two points. But if at some level a larger range of the data can be well approximated by the interpolation function, then it can save space and search time, because one set of interpolation coefficients and a single interpolation gets you close to all the values in that range.
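A minimal sketch of that repeated-interpolation idea (illustrative code only, not taken from the PGM-index; the names and types are mine):

```cpp
#include <cstddef>
#include <vector>

// One interpolation step: guess where x should sit in sorted[lo..hi],
// assuming the keys in that subrange are roughly uniformly distributed.
std::size_t interpolate(const std::vector<long long>& sorted,
                        std::size_t lo, std::size_t hi, long long x) {
    long long a = sorted[lo], b = sorted[hi];
    if (a == b) return lo;
    double frac = static_cast<double>(x - a) / static_cast<double>(b - a);
    return lo + static_cast<std::size_t>(frac * static_cast<double>(hi - lo));
}

// Repeated interpolation: each guess narrows the subrange, which in turn
// gives better interpolation coefficients for the next guess.
std::ptrdiff_t interpolation_search(const std::vector<long long>& sorted, long long x) {
    if (sorted.empty()) return -1;
    std::size_t lo = 0, hi = sorted.size() - 1;
    while (lo <= hi && x >= sorted[lo] && x <= sorted[hi]) {
        std::size_t pos = interpolate(sorted, lo, hi, x);
        if (sorted[pos] == x) return static_cast<std::ptrdiff_t>(pos);
        if (sorted[pos] < x) {
            lo = pos + 1;
        } else {
            if (pos == 0) break;
            hi = pos - 1;
        }
    }
    return -1;  // not found
}
```

The PGM-index does the fitting up front (one pass over the sorted keys, with a bounded error) rather than re-interpolating at query time, but the "one set of coefficients covers a whole range" intuition is the same.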
An interesting case in which Google trolled people into action by publishing a purely hypothetical paper for which there's no evidence they intended to put it into actual practice.
This is a major practical advance from the succinct data structure community.
This community has produced so many brilliant results in the past years. But, they work in the shadows. Since the rise of interest in neural network methods, I've often described their work as "machine learning where epsilon goes to 0." It's not sexy, but it is extremely useful.
For instance, Ferragina previously helped to develop the FM-index that enabled the sequence alignment algorithms used for the primary analysis of short genomic reads (100-250bp). These tools were simply transformative, because they reduced the amount of memory required to write genome mappers by orders of magnitude, allowing the construction of full-text indexes of the genome on what was then (~2009) commodity hardware.
I don't get it. I've implemented B-trees. The majority of the space used by a B-tree is the data itself. Each N-ary leaf of the tree is basically a vector of data with maybe some bookkeeping at the ends. The leaves are more than half of the tree.
Sure, you can compress the data. But that depends on the data; completely random data can't be compressed, other data can be. But a point-blank 83x space claim seems bizarre - or it's comparing to a very inefficient implementation of a B-tree.
Edit: It seems the 83x claim is a product of the HN submission. I could not find it on the page. But the page itself should say something like "a compressed index that allows full-speed look-ups" (akin to succinct data structures), and then it would make sense.
So, from a quick read, I think there are a few things at play here that allow for "compression of random data".
One, and probably the biggest one, _this isn't lossless compression_. As other commenters mentioned, this aggregates groups of points into line segments and stores their slopes (allowing for a pre-specified error of up to epsilon).
Two, while the sample input data is randomly generated, it then needs to be sorted before it can be used here. This completely changes the distributional qualities (see: order statistics sampled from a uniform distribution [0]). Just as a toy example, suppose this was a million randomly-generated binary digits. Sure, you could store the million digits in sorted order, or you could just use run-length encoding and say "I have 499,968 zeroes and 500,032 ones" [1].
[0] https://en.wikipedia.org/wiki/Order_statistic#Order_statisti...
[1] I know, this is a dense sampling on the input space. But that's the sort of intuition that allows you to compress sorted data better than you'd be able to compress the unsorted data. The provided C++ code uses a sparse sampling.
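A toy sketch of that intuition (illustrative only): once the randomly generated bits are sorted, two counts are enough to reconstruct the whole sequence exactly.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Run-length encode a *sorted* 0/1 sequence: after sorting, every zero comes
// before every one, so the pair (number of zeroes, number of ones) is lossless.
std::pair<std::uint64_t, std::uint64_t>
encode_sorted_bits(const std::vector<int>& sorted_bits) {
    std::uint64_t zeroes = 0;
    for (int b : sorted_bits) zeroes += (b == 0);
    return {zeroes, sorted_bits.size() - zeroes};
}
```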
The index does not store the data at all, it stores slopes.
You start by sorting the data, build a piecewise linear interpolation, and store each piece as a triplet (key, slope, intercept), with key being the smallest key covered by that piece.
I find it quite clever to be honest.
I am not sure how it works on inserts and deletes; I didn't read the whole paper.
More digestible info on the slides.
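A rough sketch of how a lookup over those (key, slope, intercept) triplets could work (illustrative only, not the actual PGM-index code; `epsilon` is the maximum prediction error the segments were built with, and at least one segment is assumed to exist):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Segment {
    long long key;     // smallest key covered by this piece
    double slope;
    double intercept;
};

// Predict the position of x in the sorted `data`, then fix up the prediction
// by searching only the small window the model is allowed to miss by.
std::size_t pgm_like_lookup(const std::vector<long long>& data,
                            const std::vector<Segment>& segments,
                            long long x, std::size_t epsilon) {
    // Find the last segment whose starting key is <= x.
    auto it = std::upper_bound(segments.begin(), segments.end(), x,
                               [](long long v, const Segment& s) { return v < s.key; });
    const Segment& s = (it == segments.begin()) ? segments.front() : *std::prev(it);

    // Linear model: predicted position, clamped to the array bounds.
    double guess = s.slope * static_cast<double>(x - s.key) + s.intercept;
    std::size_t pos = static_cast<std::size_t>(std::max(0.0, guess));
    std::size_t hi = std::min(pos + epsilon + 1, data.size());
    std::size_t lo = pos > epsilon ? pos - epsilon : 0;
    lo = std::min(lo, hi);  // keep the window inside the array

    // Inside the +/- epsilon window, fall back to ordinary binary search.
    return static_cast<std::size_t>(
        std::lower_bound(data.begin() + lo, data.begin() + hi, x) - data.begin());
}
```

The index only has to store the segments (plus something small to find the right segment quickly), which is where the space savings over a node-per-page B-tree come from.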
If there's even a rough order to the underlying data, I'll buy their claim. On ordered data, a Postgres block-range index (BRIN) is often several orders of magnitude smaller than a B-tree index.
If the data is random, I suspect you're right and the PGM-index is no better than a B-tree index. But most data does have an order and would probably see similar gains.
They do it by not storing keys in the index. A B-tree has copies of all the keys in the tree, and (normally) also in the data. Here, they just have slopes that get you close to the right actual element.
Given that a normal B-tree can't recover the original insertion order, the sorted data must be more compressible than random data. And a representation that could express a wrong order has bit patterns that encode invalid sequences, so it must be less space-efficient than one that uses those patterns to mean something valid.
The paper says 83x, and even makes stronger claims that are fairly unqualified:
In short, the experimental achievements of the PGM-index are: (i) better space occupancy than the FITing-tree by up to 75% and than the CSS-tree by a factor 83×, with the same or better query time; (ii) uniform improvement of the performance of RMI in terms of query time and space occupancy, and 15× faster construction, while requiring no hyperparameter tuning; (iii) better query and update time than a B+-tree by up to 71% in various dynamic workloads while reducing its space occupancy by four orders of magnitude (from gigabytes to few megabytes).
I wonder why it was just four orders of magnitude, though. Why not six or twelve?
They stop at a page size of 1024 bytes - that indicates they tested an in-memory situation. And, what is worse, their compression-ratio advantage almost halves when the block size is doubled. So, what about a B-tree with blocks of 16K or even 256K?
Also, what about log-structured merge trees, where bigger levels can use bigger pages and, quite importantly, these bigger levels can be constructed using a (partial) data scan? These bigger levels can (and should) be immutable, which enables simple byte slicing of keys and RLE compression.
So, where's a comparison with more or less contemporary data structures and algorithms? Why beat a half-century-old data structure using settings of said data structure that favor your approach?
My former colleague once said "give your baseline some love and it will surprise you". I see no love for B-trees in the PGM work.
The experiment you are referring to is done in main memory with an optimised in-memory B+tree implementation. We didn't plot the performance for larger page sizes because on our machine they performed poorly, as you can already see from the configuration with 1024-byte pages. So we're not favouring our approach at all.
Note also that next-gen memories have smaller and smaller access granularities. For example, Intel's Optane DC Persistent Memory accesses blocks of 256 bytes, while Intel's Optane DC SSDs access blocks of 4 KB. I guess that data structures with blocks of 16K-256K are disproportionate in these cases.
About LSM-trees, nothing prevents you from using a PGM-index (which you can construct during the compaction of levels, thus without scanning the data twice) to speed up the search on a long immutable level. Or from using a PGM-index on data which is organised into RLE-compressed disk pages ;)
Hello everyone. I'm Giorgio, the co-author of the PGM-index paper together with Paolo Ferragina.
First of all, I'd like to thank @hbrundage for sharing our work here and also all those interested in it. I'll do my best to answer any doubts in this thread.
Also, I'd like to mention two other related papers:
- "Why are learned indexes so effective?" presented at ICML 20, and co-authored with Paolo Ferragina and Fabrizio Lillo.
TL;DR: In the VLDB 20 paper, we proved a (rather pessimistic) statement that "the PGM-index has the same worst-case query and space bounds of B-trees". Here, we show that actually, under some general assumptions on the input data, the PGM-index improves the space bounds of B-trees from O(n/B) to O(n/B^2) with high probability, where B is the disk page size.
- "A 'learned' approach to quicken and compress rank/select dictionaries" presented at ALENEX 21, and co-authored with Antonio Boffa and Paolo Ferragina.
TL;DR: You can use piecewise linear approximations to compress not only the index but the data too! We present a compressed bitvector/container supporting efficient rank and select queries, which is competitive with several well-established implementations of succinct data structures.
I enjoyed skimming the paper (will read it properly after work), but it was not immediately obvious to me how to implement this for strings.
I'm no expert in this area, so this might have an obvious answer.
I mean, I guess you could treat characters as base 2^32 or something like that and convert a string to a real that way, but often strings have a non-trivial sort order.
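One simple trick (my own illustration, not from the paper): map a fixed-length prefix of each string to an integer so that integer order matches byte-wise lexicographic order, index the integers, and resolve ties among strings sharing a prefix with a final comparison. Locale-aware collations would need a key-transformation step first.

```cpp
#include <cstdint>
#include <string>

// Pack the first 8 bytes of a string into a 64-bit integer whose numeric
// order matches the byte-wise lexicographic order of those prefixes.
// Strings sharing the same 8-byte prefix collide, so a lookup still ends
// with a comparison among the few strings mapped to the same key.
std::uint64_t prefix_key(const std::string& s) {
    std::uint64_t key = 0;
    for (std::size_t i = 0; i < 8; ++i) {
        key <<= 8;
        if (i < s.size())
            key |= static_cast<unsigned char>(s[i]);
    }
    return key;
}
```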
Can the PGM-index, and linear approximation models in general, be applied to clustered indexes, where actual data of variable size is stored in the index along with the keys?
It seems they are only talking about compressing the index (keys), not the values.
Also, the slides seem to imply the keys need to be set in sorted order? That way their memory locations will be in increasing order too. That’s quite an important limitation; it means the index is read-only in practice once populated. Though it may still be useful in some cases.
Did I misunderstand?
In the paper we focused on indexing keys, as the compression of keys and values is an orthogonal problem. For example, you can compress disk pages containing keys and values with a method of your choice, and then use the PGM-index to efficiently locate the page containing the query key.
As for insertions and deletions (in non-sorted order), they are discussed in Section 3 "Dynamic PGM-index" and evaluated experimentally in Section 7.3. The implementation is available at https://github.com/gvinciguerra/PGM-index/blob/master/includ... and the documentation at https://pgm.di.unipi.it/docs/cpp-reference/#classpgm_1_1_dyn....
Of course it raises the question: if the keys are sorted, what do we need an index for? A simple halving method (binary search) would trivially do it then, with B-tree-like performance and an infinitely better index size (zero).
Maybe they have made an improvement here, trading some space for even better lookup times? In that case, the 83x space improvement over B-tree indexes is certainly possible - given that an infinite improvement is possible too.
I only watched the video and was disappointed by https://youtu.be/gCKJ29RaggU?t=408 , where they compare against tiny B*tree page sizes that nothing uses any more - 4k, 16k and 64k are way more common.
Hi @jabberwcky! The plot refers to a B+tree implementation optimised for main memory (https://panthema.net/2007/stx-btree/). We didn't show the performance for larger/smaller page sizes because on our machine they performed poorly. Indeed, you can see from the figure that the fastest B+tree configuration had its page size set to 512 bytes. The one using 1024-byte pages is already much slower; that's why we didn't clutter the plot with page sizes larger than 1k ;)
I assumed that a bigger page size would incur worse query performance. You can already see the trend in the figure. So the index-size comparison is based on the B+-tree that has query performance similar to the proposed learned index.
How would one (very roughly) approximate what this index does in terms of big-O notation for time and space? Is it the same as a b-tree in time but with linearly less space?
The paper submitted to VLDB [1] has a table (Table 1) which lists the time complexity of the PGM-index and compares it with a sorted array, a B-tree, and another type of data-aware/learned index, the FITing-tree.
[1] http://www.vldb.org/pvldb/vol13/p1162-ferragina.pdf
The worst-case bounds are discussed in *Section 2.2* and *Theorem 1*. Essentially, we have the following bounds:
Query: O(log_c(m) log_2(ε/B)) I/Os
Space of the index: O(m)
where:
n = number of input keys
B = disk page size
ε = user-given maximum error of the piecewise linear approximation (determines how many keys you need to search at each level)
m = number of segments in the piecewise linear approximation
c = fan-out of the data structure (unlike in standard B-trees, it is not fixed and can be large)
Intuitively, the query complexity comes from the fact that the PGM-index has O(log_c(m)) levels, and at each level you do a binary search that costs O(log_2(ε/B)) I/Os.
Note that m and c depend on the "linearity" of the given input data. For example, if the input data can be approximated by a few segments, i.e. if m=O(1), and you choose ε=Θ(B), then the PGM-index takes O(1) space and answers queries in O(1) I/Os!
In general, you can remove the dependence on m and c if you can prove a lower bound on the length of a segment (i.e. the number of keys it "covers"), irrespective of the input data. We proved that the length of a single segment is at least 2ε (thus c≥2ε), or equivalently, that the number of segments m is upper bounded by n/(2ε) [Lemma 2, the proof is very straightforward].
Again, if you choose ε=Θ(B), then you have the following (rather pessimistic) worst-case bounds:
Query: O(log_B(n)) I/Os
Space of the index: O(n/B)
Basically, these bounds tell you that the PGM-index is *never* worse in time and in space complexity than a B-tree!
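Spelling out how those worst-case bounds follow from the general ones above (this only combines the bounds already stated, nothing new):

```latex
% With \varepsilon = \Theta(B), use c \ge 2\varepsilon and m \le n/(2\varepsilon):
\begin{align*}
\text{levels}         &= O(\log_c m) \le O\big(\log_{2\varepsilon} \tfrac{n}{2\varepsilon}\big) = O(\log_B n), \\
\text{cost per level} &= O\big(\log_2 \tfrac{\varepsilon}{B}\big) = O(1), \\
\text{query}          &= O(\log_B n) \ \text{I/Os}, \qquad
\text{space}          = O(m) = O\big(\tfrac{n}{2\varepsilon}\big) = O\big(\tfrac{n}{B}\big).
\end{align*}
```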
---
However, in our experiments, the performance of the PGM-index was better than what the above bounds show, and this motivated us to study what happens when you make some (general) assumptions on the input data. The results of this study are in the ICML20 paper "Why are learned indexes so effective?" (http://pages.di.unipi.it/vinciguerra/publication/learned-ind...).
We found that, if you assume that the gaps between input sorted keys are taken from a distribution with finite mean and variance, then you can prove (Corollary 2 of the ICML20 paper) that the space of the PGM-index is actually O(n/B^2) whp (versus Θ(n/B) of classic B-trees).
Note that the result applies to *any* distribution, as long as the mean and variance of the RVs modelling the gaps are finite. Indeed, we specialised our main result to some well-known distributions, such as Uniform, Lognormal, Pareto, Exponential, and Gamma (Corollary 1 of the paper).
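To get a feel for the gap between O(n/B) and O(n/B^2), here is a rough back-of-the-envelope illustration with made-up but plausible numbers (mine, not from the papers; the constants hidden by the big-O are ignored):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t n = 1'000'000'000;  // one billion 8-byte keys (hypothetical)
    const std::uint64_t B = 512;            // keys per 4 KB disk page

    // Lowest level of a classic B-tree index: roughly one entry per data page.
    const std::uint64_t btree_entries = n / B;        // ~1.95 million
    // First level of the PGM-index under the ICML20 assumptions: ~n/B^2 segments whp.
    const std::uint64_t pgm_segments  = n / (B * B);  // ~3.8 thousand

    std::cout << "B-tree lowest-level entries: " << btree_entries << '\n'
              << "PGM first-level segments:    " << pgm_segments  << '\n';
}
```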
I've already thought about the idea of using statistics to optimize access time, so I guess this is a viable implementation that does it correctly.
That's pretty amazing... I can somehow imagine this tech landing on every modern computer, allowing users to search for anything that is on their machine.
Many devs are probably familiar with perfect hashes, as the gperf tool seems omnipresent on Linux machines. Is this a related concept? The learning part makes me suspect so, but the slopes and interpolation part makes me doubt it.
This is interesting. Could this be adapted to store 2D data, like how a quadtree is a 2D range tree? (If you link me to a paper / pseudocode for that, I could implement it.) I imagine it would be useful in GIS, gaming, etc.
Hi @crazypython and thank you! Yep, I just added an implementation of the multidimensional PGM-index in the main repo. If you want to improve it, you are more than welcome. Drop me an email if you have some ideas. Thanks again!
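I don't know how the multidimensional PGM-index in the repo works internally, but one common way to reuse a one-dimensional ordered index for 2D points is to sort them by a space-filling curve such as the Morton (Z-order) code; a rough sketch of that mapping (illustrative only, not necessarily what the repo does):

```cpp
#include <cstdint>

// Interleave the bits of x and y into a 64-bit Morton (Z-order) code.
// Points that are close in 2D tend to be close in this 1D order, so the codes
// can be fed to any 1D ordered index. Range queries need a post-filter,
// because a 2D rectangle maps to several (not one) intervals of codes.
std::uint64_t morton2d(std::uint32_t x, std::uint32_t y) {
    auto spread = [](std::uint64_t v) {
        v &= 0xffffffffULL;
        v = (v | (v << 16)) & 0x0000ffff0000ffffULL;
        v = (v | (v << 8))  & 0x00ff00ff00ff00ffULL;
        v = (v | (v << 4))  & 0x0f0f0f0f0f0f0f0fULL;
        v = (v | (v << 2))  & 0x3333333333333333ULL;
        v = (v | (v << 1))  & 0x5555555555555555ULL;
        return v;
    };
    return spread(x) | (spread(y) << 1);
}
```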
They propose a solution for dynamic PGM indexes in the paper (section 3) and benchmark it (section 6). A summary is that, in their benchmark, their index is faster by 13%-71% in most cases, but can be slower (1%-15.2%) in a few cases.
I agree the example would be more eye-catching without that sort.
In the full paper they quote a rather interesting method [1] that allows you to insert values in amortized O(log(n)) time (deletes are apparently handled with tombstones, presumably rebuilding the whole thing when a sufficiently large proportion is deleted).
A very abridged explanation of how they handle inserts: you split the collection into a list of collections where position k contains either nothing or a collection of size 2^k. When you want to add a new value, you find the first empty spot and fill it by building a set out of your new value together with the collections of all the preceding spots (because the sizes are all sequential powers of two, this fits exactly). Provided that merging the collections takes linear time, this takes amortized O(log(n)) time per inserted item.
Of course once you have this you can use it for any learned index that can be learned in linear time.
[1]: M. H. Overmars. The Design of Dynamic Data Structures, volume 156 of Lecture Notes in Computer Science. Springer, 1983.
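A toy version of the scheme described above (illustrative only; it uses a plain sorted std::vector per level, whereas the Dynamic PGM-index builds a learned index over each level instead):

```cpp
#include <algorithm>
#include <vector>

// Logarithmic method: level k holds either nothing or a sorted run of 2^k keys.
// An insert merges the new key with all non-empty prefix levels (linear-time
// merges), giving amortized O(log n) work per insert.
class LogStructuredSet {
    std::vector<std::vector<long long>> levels_;  // levels_[k].size() is 0 or 2^k
public:
    void insert(long long key) {
        std::vector<long long> carry{key};  // a run of size 2^0
        std::size_t k = 0;
        while (k < levels_.size() && !levels_[k].empty()) {
            std::vector<long long> merged(carry.size() + levels_[k].size());
            std::merge(carry.begin(), carry.end(),
                       levels_[k].begin(), levels_[k].end(), merged.begin());
            levels_[k].clear();
            carry = std::move(merged);      // now a run of size 2^(k+1)
            ++k;
        }
        if (k == levels_.size()) levels_.emplace_back();
        levels_[k] = std::move(carry);      // exactly 2^k elements land here
    }

    bool contains(long long key) const {
        for (const auto& level : levels_)
            if (!level.empty() && std::binary_search(level.begin(), level.end(), key))
                return true;
        return false;
    }
};
```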
Databases. If you want to be able to quickly do a lot of useful operations on large amounts of data, B-trees and their variants (B+ trees) are the way to go. Using a B-tree, you can find an entry, sort and do range queries by key, and inserts and deletes are fast.
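For a feel of that interface (illustrative only; std::map is a red-black tree rather than a B-tree, but it exposes the same ordered-dictionary operations a database index offers):

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<int, std::string> idx;                    // keeps keys in sorted order
    idx[42] = "foo"; idx[7] = "bar"; idx[19] = "baz";  // inserts
    idx.erase(7);                                      // delete

    // Point lookup by key.
    if (auto it = idx.find(42); it != idx.end())
        std::cout << "42 -> " << it->second << '\n';

    // Range query over [10, 50): iterate keys in sorted order.
    for (auto it = idx.lower_bound(10); it != idx.end() && it->first < 50; ++it)
        std::cout << it->first << " -> " << it->second << '\n';
}
```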
This type of comment is pretty common, but never adds to the discussion. Some things are going to have similar names, and usually it just doesn’t matter.
Take Rust the game and Rust the programming language. How often do people confuse them? I’ve never seen it happen. Never mind the fact that rust is also a compound that forms when iron combines with oxygen.
In my book, it’s better to come up with a name that makes sense or is memorable as long as it’s not very confusing.
gvinciguerra|5 years ago
You may find these other papers of ours interesting:
- The ALENEX21 paper "A 'learned' approach to quicken and compress rank/select dictionaries" (http://pages.di.unipi.it/vinciguerra/publication/learned-ran..., https://github.com/gvinciguerra/la_vector), where we introduce a compressed bitvector supporting efficient rank and select queries, which is competitive with several well-established implementations of succinct data structures.
- The ICML20 paper "Why are learned indexes so effective?" (http://pages.di.unipi.it/vinciguerra/publication/learned-ind...) where we prove that, under some general assumptions on the input data, the space of the PGM-index is actually O(n/B^2) whp (versus Θ(n/B) of classic B-trees).
BenoitP|5 years ago
Are there current efforts to bring your research into mainstream RDBMSs (say, Postgres)?
The space improvements are so great that columns could just be indexed by default.
whyuselearning|5 years ago
http://databasearchitects.blogspot.com/2019/05/why-use-learn...
RMarcus|5 years ago
(Thomas Neumann, one of the authors of the blog post, is a co-author of the linked paper)
byteshift|5 years ago
All the code is available as open source: https://github.com/learnedsystems/SOSD
RMarcus|5 years ago
https://arxiv.org/pdf/2006.13282.pdf
byteshift|5 years ago
Code: https://github.com/learnedsystems/RadixSpline
gvinciguerra|5 years ago
Yep, the example of Figure 2 shows only a static PGM-index on a sorted array.
Insertions and deletions are discussed in Section 3 "Dynamic PGM-index" and evaluated experimentally in Section 7.3.
The Dynamic PGM-index is open-source too: you can find the implementation at https://github.com/gvinciguerra/PGM-index/blob/master/includ... and the documentation at https://pgm.di.unipi.it/docs/cpp-reference/#classpgm_1_1_dyn...