SELECT user_id FROM (SELECT DISTINCT user_id FROM user_actions) AS distinct_users;
You're absolutely right that both those queries will give the same result. I guess I was trying to motivate the basic problem of finding whether some user exists in a set of users, and `SELECT DISTINCT` is the SQL way of representing a set.
Don't get me wrong, I love Postgres and use it in pretty much all of my projects... but it's not well suited to this kind of application. Leave the relational data to the database and use something more efficient for this!
Semi-related, in the land of Postgres and probabilistic data structures: Redshift supports APPROXIMATE COUNT (e.g. `SELECT APPROXIMATE COUNT(DISTINCT user_id) FROM user_actions;`). Much, much faster than a raw COUNT, and their stated error is ±2%.
Our native implementations of all probabilistic data structures use MurmurHash3, so this isn't a problem. The dumbloom implementation is in no way a good Bloom filter, as the name suggests :)
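For anyone curious what the non-dumb version looks like, here's a minimal Bloom filter sketch in Python. It uses double hashing over a SHA-256 digest as a stand-in for MurmurHash3, and the sizes and names are illustrative, so treat it as the idea rather than anything resembling the actual PipelineDB implementation:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k bit positions per item via double hashing.
    SHA-256 stands in for MurmurHash3 here; parameters are illustrative."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item):
        # Derive two 64-bit base hashes, then k positions as h1 + i * h2.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # keep the step odd
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False positives are possible; false negatives never are.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Anything you `add` is always reported as present; absent items come back `False` with high probability (with 1024 bits and a handful of items the false-positive rate is tiny).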
The idea of using probabilistic data structures instead of counting every point of data (for things like customer analytics) is pretty significant -- getting caught in the weeds of managing every data point is error-prone and inefficient.
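To make that concrete, here's a tiny K-Minimum-Values (KMV) distinct-count estimator in Python. It isn't HyperLogLog (which Redshift and Redis use), but it shows the same trick: hash everything onto (0, 1) and infer cardinality from how tightly the smallest hashes are packed. Names and parameters are made up for illustration:

```python
import hashlib

def kmv_estimate(items, k=256):
    """Estimate the number of distinct items from the k-th smallest
    normalized hash value. A real sketch keeps only a bounded heap of
    k hashes; materializing the full set here is just for clarity."""
    hashes = {
        int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big") / 2.0**64
        for x in items
    }
    if len(hashes) < k:
        return len(hashes)  # fewer than k distinct values: count exactly
    kth_smallest = sorted(hashes)[k - 1]
    # For n uniform draws on (0, 1), the k-th smallest lands near k / (n + 1),
    # so n is roughly (k - 1) / kth_smallest.
    return round((k - 1) / kth_smallest)
```

With k = 256 the standard error is roughly 1/sqrt(k), about 6%, no matter how many items flow through, which is the whole appeal over counting every data point.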
usman-m, the PipelineDB approach seems really interesting. However, I'd like to understand how, in your opinion, it compares with processing the stream of data changes exposed over PostgreSQL's logical decoding interface (http://www.postgresql.org/docs/9.4/static/logicaldecoding.ht...). Thank you!
ahachete, I'm not sure if I totally understand your question.
Continuous views are consumers of streams. You can think of them as high-throughput, real-time materialized views. The source of data for a stream can be practically anything. Logical decoding, on the other hand, is a producer of streaming data: it's basically a human-readable replication log. So you could potentially stream the logically decoded log into PipelineDB and build some continuous views in front of it.
gleb | 10 years ago
usman-m | 10 years ago
Fixed the post, thanks!
teddyh | 10 years ago
mamikonyana | 10 years ago
usman-m | 10 years ago
pbnjay | 10 years ago
Redis comes with both bitmaps (see http://redis.io/commands/bitcount) and HyperLogLog counters (see http://redis.io/commands/pfcount), requires almost no setup, and has minimal overhead.
Zikes | 10 years ago
danneu | 10 years ago
"Just add another database!"
matsur | 10 years ago
http://docs.aws.amazon.com/redshift/latest/dg/r_COUNT.html
striking | 10 years ago
jordibunster | 10 years ago
usman-m | 10 years ago
zallarak | 10 years ago
ahachete | 10 years ago
usman-m | 10 years ago