This is a really cool technique and warrants some investigation, but I can't let this go unaddressed:
> and the upper bound on the time taken to join all three tables will be the square of that
These kinds of first-principles assertions about what Postgres's (or other DBs') performance will be like sound helpful but usually aren't. The kinds of queries you issue can change everything. Indexing can change everything. Postgres's configuration can change everything. The actual size of the table can change everything. For example, if the table is small, Postgres will keep it in memory and your plans will have scary-looking but actually innocent sequential scans, which I think is actually what happened in his join table example.
Anyway, it's good to have a lot of tools in your toolbox, and this is an interesting tool with interesting uses. I just think it would be a grave error to take the performance ratios here as fixed.
This is why DBAs are a thing. Just sit back and enjoy the rediscovery of relational datastores in the blogosphere. The "thought leaders" are going to slowly figure out things like your post that were worked out properly in the 80s.
The lack of an index on the junction table definitely did have a major effect. By just doing the following:
CREATE INDEX ON movie_person (movie_id);
CREATE INDEX ON movie_person (person_id);
The junction query speeds up to around 2ms, making it comparable to or faster than the bloom filter query. But the trade-off is revealed when you see the total size of the movie_person table (including indexes):
Whereas, by my calculations, the total size added by the bloom filters and hashes on movie and person is just 2094kB in total.
I plan on adding an erratum to my article explaining my error, the time/memory trade-off, and ideas for further improvement or exploration, potentially including bloom-based GiST indexes and the opportunities for parallelization.
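To make the effect of those junction-table indexes concrete, here is a small sketch using SQLite as a stand-in for Postgres (the junction table name follows the article; the data and index name are made up for illustration):

```python
# Sketch: an index on the junction table's foreign key turns a full
# table scan into an index search. SQLite stands in for Postgres here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movie_person (movie_id INTEGER, person_id INTEGER)")
conn.executemany(
    "INSERT INTO movie_person VALUES (?, ?)",
    [(m, p) for m in range(100) for p in range(50)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans or uses an index.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

query = "SELECT movie_id FROM movie_person WHERE person_id = 7"
before = plan(query)   # full scan of the junction table

conn.execute("CREATE INDEX idx_mp_person ON movie_person (person_id)")
after = plan(query)    # now an index search

print("before:", before)
print("after: ", after)
```

The same before/after can be seen in Postgres with EXPLAIN, where the plan flips from a sequential scan to an index scan.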
Also look at GIN. Although the documentation claims it is typically larger than GiST, in certain cases (a small lexeme set, which is not true for full-text search in general) it can be smaller and faster. Mostly this holds over constrained symbolic domains (which describes a lot of computing, too).
We've recently replaced a GiST index with a GIN one and got a lot more than the 'rule of thumb' speedup. More like an order of magnitude. Or two. So much so that Tom Lane suggested we might have stumbled onto a bug in GiST, so we'll see.
Now that's the kind of content that I'd love to see more of here. It covers everything from basic tools like sed and awk to nice concepts I didn't know, like the BIT field in Postgres.
Any recommended book or set of articles for starting with Postgres?
I like the exploration of this method, but I would have liked to see the actual comparison of any false positives. Bad data can be acceptable in statistical analysis, but if you were showing someone a list of their ratings or the actors who were in the latest Kevin Bacon movie, false positives have a much stronger impact.
Is there any chance that the bloom could be used as a short-circuit filter but still follow-up with the m2m join to filter out the false positives? If the query optimizer can take advantage of that, then you could likely balance the size and cost of the bloom field.
This is exactly right. Bloom filters used in this way should generate candidates, not results, to reduce the total amount of work you do when it is expensive to confirm membership in the set. The best use cases check the opposite, e.g. Kevin Bacon was definitely not in this movie, since there are no false negatives.
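A minimal sketch of that candidates-then-recheck pattern (plain Python with a toy bloom filter; the filter size, hash count, and names are illustrative, not the article's):

```python
# Sketch of bloom-filter-as-prefilter: the filter yields candidates
# (may include false positives, never false negatives), and an exact
# membership check removes the false positives afterwards.
import hashlib

M, K = 256, 3  # filter bits and hash count -- illustrative values only

def positions(item, m=M, k=K):
    # Derive k bit positions from a cryptographic hash of the item.
    digest = hashlib.sha256(item.encode()).digest()
    return [int.from_bytes(digest[4*i:4*i+4], "big") % m for i in range(k)]

def add(filter_bits, item):
    for pos in positions(item):
        filter_bits |= 1 << pos
    return filter_bits

def might_contain(filter_bits, item):
    return all(filter_bits & (1 << pos) for pos in positions(item))

actual_cast = {"kevin_bacon", "john_lithgow", "lori_singer"}
movie_filter = 0
for person in actual_cast:
    movie_filter = add(movie_filter, person)

# Prefilter a larger pool, then recheck against the real set (in the
# article, the recheck would be the exact m2m join).
pool = actual_cast | {"person_%d" % i for i in range(1000)}
candidates = {p for p in pool if might_contain(movie_filter, p)}
confirmed = candidates & actual_cast

# No false negatives: every true member survives the prefilter.
print(actual_cast <= candidates, confirmed == actual_cast)
```

Anyone not in `candidates` is definitely not in the cast, which is the cheap short-circuit; the recheck pays the expensive exact cost only for the (hopefully few) candidates.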
Using a custom index implementation that uses bloom filters internally is probably going to work out better in the long run. It should be way more efficient than storing the data in a bit field, using app-layer code to generate the bloom filter values, then doing bitwise comparisons on-the-fly at query time.
The Postgres query planner can also recheck constraints automatically to recover from bloom filter false positive matches at query time.
FYI -- bloom filters are already used internally within the PostgreSQL intarray contrib module and the full-text search functionality. See: http://postgresql.1045698.n5.nabble.com/bloom-filter-indexes... http://code.google.com/p/postgres-learning/wiki/BloomFilter
I saw that too... I'm suspicious of the `SET person_filter = (SELECT bit_or(person.hash) ...);` line, personally. If the hash function is supposed to produce roughly random bits (I'm not familiar with murmur hash though, this may not be true), and you're simply OR-ing them together, you lose zero bits pretty quickly.
But I need to brush up on murmur hash, and bloom filters in general, so I may be entirely wrong. Maybe it's valid, and maybe that does represent a 0.5% error.
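That saturation concern is easy to simulate (plain Python; the 1632-bit width comes from the article, but the "dense uniform hash" behavior modeled here is the assumption being questioned):

```python
# Simulate OR-ing together dense, uniformly random hashes versus proper
# bloom-filter insertions that set only k bits per element. If the hash
# really were uniform random over all 1632 bits, the OR would saturate
# almost immediately; a bloom filter stays sparse much longer.
import random

random.seed(0)
M = 1632  # filter width in bits, from the article

def fill_fraction(bits):
    return bin(bits).count("1") / M

dense = 0
for _ in range(10):      # OR of 10 uniform random 1632-bit hashes
    dense |= random.getrandbits(M)

sparse = 0
for _ in range(30):      # 30 elements, k=10 bits each
    for _ in range(10):
        sparse |= 1 << random.randrange(M)

print("dense fill: %.3f" % fill_fraction(dense))
print("sparse fill: %.3f" % fill_fraction(sparse))
```

If the article's per-row hash sets only a few bits (as a bloom filter's hash functions should) rather than being uniformly random, the OR-ing is valid and the quoted error rate plausible.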
Regarding the space consumption, it seems like this is a tradeoff of storing two integers (plus the row overhead) per rating versus storing 335 bytes per movie/user.
For the join table, that's 2 integers * 575,281 ratings * 4 bytes = 4,602,248 bytes used in the join table.
With the filter, in each movie row, you need to store 1632 bits for the person_filter and 1048 bits for the hash, so 3,883 movies * (1632 bits + 1048 bits) = 1,300,805 bytes.
In each user row you need to store the same number of bits for the filter and hash, so 6,040 users * (1632 bits + 1048 bits) = 2,023,400 bytes.
Is my math here wrong? With this approach you save about 1.22MB, or about 27% over the join table approach (ignoring how much overhead there is for each row of the table and each page to store the table in).
Depending on the dataset it doesn't seem like the space savings would be worth the sacrifice in accuracy.
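For what it's worth, the arithmetic above checks out (a quick recomputation of the figures):

```python
# Recompute the space comparison from the comment above.
ratings, movies, users = 575281, 3883, 6040
filter_bits, hash_bits = 1632, 1048

join_table = ratings * 2 * 4                 # two 4-byte integers per rating
per_row = (filter_bits + hash_bits) // 8     # 335 bytes per movie/user row
bloom_total = (movies + users) * per_row

print(join_table)                # 4,602,248 bytes for the join table
print(bloom_total)               # 3,324,205 bytes for the filters
print(join_table - bloom_total)  # 1,278,043 bytes saved, about 27%
```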
I think the reason that the OP's join-based queries are slow is that there are no indexes over his junction table's foreign keys. Thus when he wants the movies associated with person 160, the database must examine the entire junction table: note the sequential scan on movie_person that accounts for 2.5 to 60 seconds. If there were an index on movie_person(person_id), this could be an index scan.
(EDITED TO ADD: I totally misread the timings in milliseconds for timings in seconds, so most of what I originally wrote is off by a factor of 1000. I'm leaving it here for entertainment value and because you might want to play with the data set in SQLite. But my point is still valid: a vanilla join is comparable in performance to the OP's bloom-filter method.)
I'm having a hard time believing that the straightforward join on a data set as small as the OP's sample is really going to take 65 seconds on PostgreSQL. Maybe that's what EXPLAIN predicts (with spotty stats, I'd wager), but EXPLAIN is not a reliable way to measure performance. For this data, I'd expect real queries to perform much better.
EDITED TO ADD: The OP's article shows the results for EXPLAIN ANALYZE, which ought to have performed the queries. So I'm not sure why the results are so slow.
Heck, even SQLite, when processing a superset of the data set on my 4-year-old computer, can do the OP's final query (and return additional ratings data) almost instantly:
$ time sqlite3 ratings.db '
select *
from
users
natural join ratings
natural join movies
where user_id = 160
' > /dev/null
real 0m0.006s
user 0m0.002s
sys 0m0.004s
If you want to try some queries yourself, here you go:
$ wget http://www.grouplens.org/system/files/ml-1m.zip
$ unzip ml-1m.zip
$ cd ml-1m
$ sqlite3 ratings.db <<EOF
CREATE TABLE movies (
movie_id INTEGER PRIMARY KEY NOT NULL
, title TEXT NOT NULL
, genres TEXT NOT NULL
);
CREATE TABLE users (
user_id INTEGER PRIMARY KEY NOT NULL
, gender TEXT NOT NULL
, age TEXT NOT NULL
, occupation TEXT NOT NULL
, zipcode TEXT NOT NULL
);
CREATE TABLE ratings (
user_id INTEGER REFERENCES users(user_id)
, movie_id INTEGER REFERENCES movies(movie_id)
, rating INTEGER NOT NULL
, timestamp INTEGER NOT NULL
, PRIMARY KEY (user_id, movie_id)
);
.separator ::
.import movies.dat movies
.import users.dat users
.import ratings.dat ratings
EOF
Fully enumerating the joined data takes under a second:
$ time sqlite3 ratings.db '
select count(*)
from
users
natural join ratings
natural join movies
'
1000209
real 0m0.953s
user 0m0.925s
sys 0m0.021s
And even going so far as to print the joined data takes under six seconds:
$ time sqlite3 ratings.db '
select *
from
users
natural join ratings
natural join movies
' > /dev/null
real 0m5.586s
user 0m5.497s
sys 0m0.059s
Sometimes, the old stuff works better than we give it credit for.
If he had an index on movie_person, that would be a ~ 1ms index lookup, not a table scan. You can't do that sort of database layout without putting indexes on the M2M mapping table.
(edit, oh, yeah, it's right there in the parent post....)
Skimmed through it, but it seems that there is still an improvement with the bloom filter with a factor of 2~3. Is that correct? (estimated 6(?)ms once corrected with indices, vs 2ms with the bloom)
This is what I was wondering. What's the performance difference compared to using a junction table with indexes and what are the other trade-offs of indexes vs bloom filters. I'm guessing there are cases when you wouldn't want to maintain the indexes. I'm not sure what they would be though.
No: "integer" on PostgreSQL is, in fact, a signed 32-bit number, but so is "serial". The difference between serial and integer is only that usage of the serial type as a column implicitly creates a sequence (which itself is a 64-bit counter, but is unrelated to the storage in the column), sets its owner to the column, and makes the column's value default to the next value of the serial. PostgreSQL also has the type "bigint", which is a signed (not unsigned) 64-bit number, and an equivalent "bigserial" type which you would need to use if you want a 64-bit version of serial.
Yes, and if I were to make one addition to the article it would be to add a section talking about the implications of that 0.5% error rate with some examples of uses where this would be appropriate and where it wouldn't be.
For getting started, the official PostgreSQL documentation is quite good: http://www.postgresql.org/docs/9.1/static/
For the bloom filter: (actual time=0.033..2.546 rows=430 loops=1)
And for the join: (actual time=7.440..64.843 rows=9 loops=1)
So the join returned 9 movies for person_id=160, while the bloom filter returned 430.
I understand it's a probabilistic model, but that's a pretty whopping difference in data. Have I missed something?
Because you can't index that bloom column, it seems you'd always be doing full table scans.
In fact it doesn't appear any indexes were used throughout this whole exercise, is that right?
A major problem is the lack of indexes on array elements, which then forces you to build an otherwise unneeded table.
That being said, it's definitely doable, to some extent.
I see the table `movies_people` uses (SIGNED) INTEGERs as its datatypes, but they reference an UNSIGNED BIGINT (SERIAL).
That false-positive rate makes it unusable for many use-cases, but it’s an interesting and good article nevertheless.