
Redis, from the Ground Up

83 points | mjrusso | 15 years ago | blog.mjrusso.com

13 comments


jemfinch|15 years ago

"Lists, sets, etc. are more fundamental to computer scientists than relational database tables, columns, and rows."

That does not ring true to me. I'm curious what his data source for that claim is.

antirez|15 years ago

If I ever meet aliens, I'm sure lists, sets, hash tables, and trees will be in their CS books as well. But I bet their mainstream DB model may really be different from our relational one.

Edit: I reflected a bit more on the issue. It seems that our mainstream DB model is clearly a product of the kinds of applications computers were mainly used for when DB technology was developing: business application programs.

Imagine a DB technology emerging instead in completely different scenarios, like social applications where you need to update users' statuses chronologically, or a DB designed when most software had to deal with geo locations... as you can see, the DB model is much more of an ad-hoc affair.

A DB modeled after the fundamental data structures, like Redis, may not be the perfect fit for everything, but should eventually be able to model any kind of problem without too much effort, and with a very clear understanding of what will be needed in terms of storage and access time.
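For illustration, here's how the "chronological status update" scenario maps directly onto a list structure. This is a minimal sketch using a toy in-memory store with Redis-style command names (`lpush`/`lrange`); it is not the real Redis client API, and the key name is made up for the example.

```python
from collections import defaultdict

class ToyStore:
    """Toy in-memory store mimicking two Redis list commands."""

    def __init__(self):
        self.lists = defaultdict(list)

    def lpush(self, key, value):
        # Prepend, like Redis LPUSH: the newest item comes first.
        self.lists[key].insert(0, value)

    def lrange(self, key, start, stop):
        # Inclusive stop index, like Redis LRANGE.
        return self.lists[key][start:stop + 1]

store = ToyStore()
for status in ["joined", "posted a photo", "went offline"]:
    store.lpush("user:42:timeline", status)

# The two most recent updates, already in reverse-chronological order:
print(store.lrange("user:42:timeline", 0, 1))
```

The point of the sketch: with a list as the primitive, "show the latest N events" needs no index, no sort, and no query planner; the access pattern is the data structure.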

bhiggins|15 years ago

A table is just a set of tuples.

This is like saying 32 bits are more fundamental to computer science than the abstract idea of an integer.
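The "a table is just a set of tuples" point can be made concrete in a few lines: relational selection and projection fall out of ordinary set operations. The `users` table and its rows here are invented for the example.

```python
# A relational "table" modeled literally as a set of (id, name) tuples.
users = {
    (1, "ada"),
    (2, "grace"),
}

# Selection (rows with id > 1) and projection (keep only the name column)
# are just a set comprehension over the tuples.
names = {name for (uid, name) in users if uid > 1}
print(names)
```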

joe_the_user|15 years ago

"Redis' internal design typically trades off memory for speed. For some workloads, there can be an order of magnitude difference between the raw number of bytes handed off to Redis to store, and the amount of memory that Redis uses."

What are the circumstances that make this kind of tradeoff worthwhile?

A generic key-value store, say Kyoto Cabinet, is pretty fast, and you can configure its cache to be huge if you need to. Does reconstructing and using a list/set/hash take that much time?

Edit: Is the "order of magnitude" here greater or less than the extra space that keeping a B-tree index in memory would take? Is it doing something akin to that, or something completely different?
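One way to see where that kind of overhead comes from, sketched with Python's own objects rather than Redis internals: pointer-based in-memory structures (hash tables, linked lists) pay per-object headers, hash buckets, and pointers on top of the raw payload bytes. The numbers below describe CPython, not Redis, and are only meant to illustrate the shape of the tradeoff.

```python
import sys

# 1000 small strings: the raw payload is 4 bytes each, 4000 bytes total.
payload = [f"{i:04d}" for i in range(1000)]
raw_bytes = sum(len(s) for s in payload)

# Storing them in a hash table costs far more than the payload: each
# string carries an object header, and the dict adds buckets and pointers.
table = {s: s for s in payload}
structure_bytes = sys.getsizeof(table) + sum(sys.getsizeof(s) for s in payload)

print(raw_bytes, structure_bytes)
assert structure_bytes > 10 * raw_bytes  # roughly an order of magnitude
```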

antirez|15 years ago

The tradeoff is especially worthwhile because we export complex data structures (I wrote a great deal of articles about this; please check the latest at antirez.com), but this time I'll try a proof by contradiction: what you are saying here is that memcached could be replaced with TC, performance-wise, if you added an LRU expiry, and I don't think that's true.
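For reference, the "LRU expiry" being discussed can be sketched in a few lines. This toy cache illustrates the general technique (evict the least-recently-used key when full); it is not how memcached or Tokyo Cabinet actually implement it.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least-recently-used key when over capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a", so "b" becomes the eviction candidate
cache.set("c", 3)  # over capacity: evicts "b"
print(cache.get("b"), cache.get("a"))  # None 1
```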