I remember my interview at Spotify where we discussed how to implement a thumbnail display service in the most effective way. What we arrived at was something along the lines of this library.
I always like it when a company focuses on its real problems in job interviews and manages to avoid the brain-teaser trap. That way you can get a feel for the job you would be doing there, and see whether you really like both the work and the people.
Thanks :-) That is a go-to interview question for us, and we all do it slightly differently, taking it in different directions depending on the candidate's expertise or specialization.
It is closely aligned with our core service (distributing and streaming files) and is a great chance to talk with the interviewee and figure out where their strengths are.
I totally encourage other people to interview this way. It's what I've done at the past few companies I've been at, and it has worked excellently: just think of a problem you're working on or have worked on, and distill it into an interview problem.
If you're familiar with thumbnail serving, you know Facebook has Haystack for efficient image serving. It operates at a very low level. Sparkey, CDB, or BAM (mentioned in this post) are much less complex and can do similar work. I don't get why they went the Haystack route.
Another cool project is bam[1], a constant key/value server built on similar principles. A single input file (in this case, a TSV file instead of the SPL log file) and an index file. The cool thing about bam is that it uses the CMPH[2] library to generate a minimal perfect hash function over the keys in the input file before putting them in the index file.
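The single-data-file-plus-index layout that bam and Sparkey share is easy to sketch. This is a toy illustration of the idea only: an ordinary in-memory dict stands in for the minimal perfect hash index, and the file name is made up.

```python
import os
import tempfile

def build(data_path, records):
    """Write values to a single append-only data file, returning an
    index of key -> (offset, length) for later random reads."""
    index = {}
    with open(data_path, "wb") as f:
        for key, value in records:
            index[key] = (f.tell(), len(value))
            f.write(value)
    return index

def lookup(data_path, index, key):
    """One seek + one read per lookup, with no parsing of the data file."""
    offset, length = index[key]
    with open(data_path, "rb") as f:
        f.seek(offset)
        return f.read(length)

path = os.path.join(tempfile.gettempdir(), "bam_sketch.dat")
index = build(path, [(b"a", b"alpha"), (b"b", b"beta")])
print(lookup(path, index, b"b"))  # b'beta'
```

The appeal of a minimal perfect hash over this plain dict is that the index itself can be written to disk with no empty slots and no collision chains, so it stays small even for huge key sets.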
Wow, didn't expect this to make a mention on the front page of HN today. I never did convince Etsy to let me deploy bam in production, but it's so simple that it should be doable without much fuss. I mainly built it as a proof-of-concept to show that serving static data does not have to be difficult – and that loading large static data sets into a relational database is a truly wasteful, terrible approach. Are you actually using bam "in anger"?
There are a bunch of comments related to performance data; the code is available, so nothing is stopping anyone from making an unbiased comparison. :)
That said, I intend to publish some sort of performance comparison code and results. The downsides of me doing it: 1) I know the Sparkey code much better than I know LevelDB or any other solution, so the tuning parameters will probably be suboptimal for the others. 2) I will only focus on our specific use case (write large bulks, do lots of random reads), which may seem a bit unfair to the more general solutions.
Here are some preliminary performance benchmarks from my regular workstation (Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz, 8 GB RAM): http://pastebin.com/7buZVgdu
The Sparkey usage is fairly optimized, but I just randomly put something together for the LevelDB side, so consider the results extremely biased.
Where's the source code for your benchmark? How large are the records you're loading? How large are the keys? What's the insert order?
How does your test compare to http://symas.com/mdb/microbench/ ? If you're going to try to talk about numbers, talk about them in a meaningful context. Right now you're just handwaving.
I was surprised to see LevelDB (https://code.google.com/p/leveldb/) was missing from the list of storage solutions you tried, because it seems optimal for your use-case. Were you aware of it?
I'm not sure about the optimal use-case match. Sparkey is for "mostly static" datasets, where the on-disk structures are generated by a batch process and pushed to servers that provide read-only access to the data for consumers.
LevelDB, on the other hand, supports concurrent writes and provides features to handle data consistency and cheap gradual reindexing.
The problem is that so many projects exist for so many things. It is hard to find the matching project, which is why many people reinvent the wheel.
I wish more projects would follow this kind of README format, at least somewhat. So many new things pop up on HN with very little information about the whats and whys that would make me care:
What problem are you solving?
If existing solutions existed what hurdles did you face with them and how did you overcome them with your custom solution?
How do you compare from a performance standpoint? (Granted, they still need to do this, but at least put in a section about it.)
Indeed, from the README: "We used to rely a lot on CDB (which is a really great piece of software). It performed blazingly quick and produces compact files. We only stopped using it when our data started growing close to the 4 GB limit."
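The 4 GB ceiling falls directly out of CDB's file format, which stores record positions as unsigned 32-bit integers. A two-line sanity check:

```python
# CDB stores offsets as unsigned 32-bit integers, so no record can
# start beyond byte 2**32 - 1: this is where the 4 GB limit comes from.
max_offset = 2**32 - 1
assert max_offset + 1 == 4 * 2**30  # exactly 4 GiB
print(max_offset)  # 4294967295
```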
Similar also to DiscoDB, which does support compression and uses perfect hashing for constant-time lookup with minimal disk access.
Not only that but it provides lightning-fast conjunctive normal form queries, a.k.a logical combinations of primitive keys. Plus it has Python / Erlang bindings.
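A conjunctive-normal-form query is just an AND of OR-clauses over the posting sets behind each key. This toy evaluator uses plain Python sets to show the idea; it is not DiscoDB's actual API, and the example data is made up:

```python
def evaluate_cnf(postings, cnf):
    """Evaluate a CNF query. `cnf` is a list of clauses; each clause is
    a list of keys OR'ed together, and the clauses are AND'ed.
    `postings` maps each key to the set of document ids it matches."""
    result = None
    for clause in cnf:
        # Union within a clause (OR).
        clause_docs = set()
        for key in clause:
            clause_docs |= postings.get(key, set())
        # Intersection across clauses (AND).
        result = clause_docs if result is None else result & clause_docs
    return result or set()

postings = {"red": {1, 2}, "blue": {2, 3}, "round": {2, 4}}
# (red OR blue) AND round
print(evaluate_cnf(postings, [["red", "blue"], ["round"]]))  # {2}
```

With a perfect-hash index, each posting set is found in constant time, so the query cost is dominated by the set operations themselves.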
I'm baffled by the choice of using the GNU autotools chain just to include a Doxygen target in the Makefile. The whole thing is essentially straight-up C with just one library dependency.
The command-line argument processing is also quite haphazardly done; it's not as if using getopt or whatever poses compatibility issues. Is writing and packaging with a Makefile that difficult?
We used to have a "simple" Makefile, but once we started to support multiple development environments (OS X, various Linux flavors) it got more and more complex. Autotools actually brings a lot of functionality as part of the package, and is what people expect. I'm no fan, but it works for our use case.
They wrote a database to solve an operational need. From experience I can tell you that's an endeavour you should strive to spend as little time on as possible.
I think it's a miracle they produced something they feel comfortable sharing with the world. If you write a database in house, and the tool chain and the argument processing are the only things done haphazardly, then hats off to you :)
This looks interesting. Can it be used for deduplication, by keeping the hash table on disk, for a large amount of data?
I wrote something like this to heavily optimize disk seeks, by returning an 8-byte reference and keeping a hashtable in memory. A mostly append-only record store that allows mutations of the same key, rounding blob sizes up to powers of 2. Written to optimize the storage layer for Membase.
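That design can be sketched roughly as follows. The names and layout here are illustrative, not lightkv's actual code: values live in one append-mostly buffer, the in-memory hashtable holds only small fixed-size references, and blob slots are rounded up to powers of two so a reclaimed slot can be reused by any blob in the same size class.

```python
def round_up_pow2(n):
    """Round a blob size up to the next power of two."""
    p = 1
    while p < n:
        p *= 2
    return p

class AppendStore:
    """Toy append-mostly store: values go into one buffer; lookups go
    through an in-memory hashtable of (offset, length) references,
    standing in for the 8-byte on-disk reference."""
    def __init__(self):
        self.buf = bytearray()
        self.index = {}  # key -> (offset, length)

    def put(self, key, value):
        slot = round_up_pow2(max(len(value), 1))
        offset = len(self.buf)
        # Pad the slot to its power-of-two size class.
        self.buf += value + b"\x00" * (slot - len(value))
        self.index[key] = (offset, len(value))  # overwrite = mutation

    def get(self, key):
        offset, length = self.index[key]
        return bytes(self.buf[offset:offset + length])

s = AppendStore()
s.put(b"k", b"hello")
s.put(b"k", b"world!")  # same key mutated; the old slot becomes garbage
print(s.get(b"k"))  # b'world!'
```

A real implementation would also track freed slots per size class and reuse them, which is exactly what the power-of-two rounding makes cheap.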
I have now created a very simple benchmark suite to give you some rough performance numbers, and updated the README to include some sample numbers for one specific machine.
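For anyone wanting to reproduce numbers against their own store, the workload described above (one big bulk write, then lots of random reads) fits in a few lines. This harness is a generic sketch with a plain dict as the stand-in store, not Sparkey's actual benchmark suite:

```python
import random
import time

def bench(store, n_records=100_000, n_reads=100_000):
    """Rough micro-benchmark for the Sparkey-style workload:
    one large bulk write, then lots of random reads."""
    keys = [("key%09d" % i).encode() for i in range(n_records)]

    t0 = time.perf_counter()
    for i, k in enumerate(keys):
        store[k] = b"value%d" % i
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(n_reads):
        _ = store[random.choice(keys)]
    read_s = time.perf_counter() - t0

    return n_records / write_s, n_reads / read_s

writes_per_s, reads_per_s = bench({})
print("bulk writes/s: %.0f  random reads/s: %.0f" % (writes_per_s, reads_per_s))
```

As the thread notes, absolute numbers from a harness like this are only meaningful with record sizes, key sizes, insert order, and hardware stated alongside them.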
I'm struggling to find something it does that a webserver pointed at the filesystem (with hash IDs for file names) doesn't. I'm wondering if that's all it is, with a bit of logic to write the files in the correct structure.
From the description: Sparkey is an extremely simple persistent key-value store. You could think of it as a read-only hashtable on disk and you wouldn't be far off.
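Mechanically, a "read-only hashtable on disk" can be as simple as fixed-size slots probed with seeks. The sketch below uses linear probing over tiny fixed-width keys and values; it is not Sparkey's actual format (which stores a log file plus a separate index), and the file name and sizes are made up:

```python
import os
import struct
import tempfile
import zlib

SLOT = struct.Struct("<B16s8s")  # used flag, 16-byte key, 8-byte value
N_SLOTS = 64

def write_table(path, items):
    """Lay the table out as fixed-size slots so readers can jump
    straight to a slot by number, with no parsing pass."""
    empty = b"\x00" * SLOT.size
    slots = [empty] * N_SLOTS
    for key, value in items:
        i = zlib.crc32(key) % N_SLOTS   # deterministic across processes
        while slots[i] != empty:        # collision: linear probing
            i = (i + 1) % N_SLOTS
        slots[i] = SLOT.pack(1, key, value)
    with open(path, "wb") as f:
        f.writelines(slots)

def read_value(path, key):
    """Each probe is one seek plus one fixed-size read."""
    with open(path, "rb") as f:
        i = zlib.crc32(key) % N_SLOTS
        while True:
            f.seek(i * SLOT.size)
            used, k, v = SLOT.unpack(f.read(SLOT.size))
            if not used:
                return None             # empty slot: key is absent
            if k.rstrip(b"\x00") == key:
                return v.rstrip(b"\x00")
            i = (i + 1) % N_SLOTS

path = os.path.join(tempfile.gettempdir(), "sparkey_sketch.tbl")
write_table(path, [(b"sparkey", b"thumb02"), (b"haystack", b"thumb07")])
print(read_value(path, b"sparkey"))  # b'thumb02'
```

This is also the intuition behind the webserver-plus-filesystem comparison above: both amount to hashing a key to a location and doing a couple of disk reads, so the interesting differences are in file counts, compression, and batch rebuild workflow.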
[1]: https://github.com/StefanKarpinski/bam [2]: http://cmph.sourceforge.net/
A quick glance over LevelDB's features gives me the impression that its bulk-write performance would not be sufficient.
http://discodb.rtfd.org
https://github.com/jflatow/discodb
https://github.com/t3rm1n4l/lightkv