josefcullhed | 4 years ago
It takes us a couple of days to build the index but we have been coding this for about 1 year.
All the indexes are on disk.
kreeben | 4 years ago
Love it. Makes for a cheaper infrastructure, since SSD is cheaper than RAM.
>> It takes us a couple of days to build the index
It's hard for me to see how that could be done much faster unless you find a way to parallelize the process, which in itself is a terrifyingly hard problem.
I haven't read your code yet, obviously, but could you give us a hint as to what kind of data structure you use for indexing? According to you, what kind of data structure allows for the fastest indexing and how do you represent it on disk so that you can read your on-disk index in a forward-only mode or "as fast as possible"?
josefcullhed | 4 years ago
>> It's hard for me to see how that could be done much faster unless you find a way to parallelize the process
We actually parallelize the process. We do it by partitioning the URLs across three different servers and indexing each partition separately. Then we just run each search on all three servers and merge the result URLs.
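The partition-and-merge scheme described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the shard count, the hash-based URL assignment, and all function names here are assumptions.

```python
import hashlib

NUM_SHARDS = 3  # the comment mentions three servers

def _shard_of(url: str) -> int:
    # Assign each URL to a shard, e.g. by hashing the URL (assumed scheme).
    digest = hashlib.blake2b(url.encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % NUM_SHARDS

def build_shards(pages):
    # pages: iterable of (url, text). Each shard builds its own small
    # word -> set-of-urls index over its subset of the URLs.
    shards = [{} for _ in range(NUM_SHARDS)]
    for url, text in pages:
        index = shards[_shard_of(url)]
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return shards

def search(shards, query: str):
    # Fan the query out to every shard, intersect postings within a shard
    # (all query words must match), then merge the per-shard results.
    words = query.lower().split()
    results = set()
    for index in shards:
        postings = [index.get(w, set()) for w in words]
        if postings:
            results |= set.intersection(*postings)
    return results
```

Because each URL lives on exactly one shard, the shards can be built fully in parallel, and the merge step at query time is just a union of disjoint result sets.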
>> I haven't read your code yet, obviously, but could you give us a hint as to what kind of data structure you use for indexing?
It is not very complicated; we use hashes a lot to simplify things. The index is basically a really large hash table mapping word_hash -> [list of url hashes]. If you search for "The lazy fox", we just take the intersection of the three lists of url hashes to get all the URLs that contain all three words. This is the basic idea implemented right now, but we will of course try to improve it.
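The word_hash -> [url hashes] structure and the intersection step can be sketched in a few lines. This is an in-memory illustration only (the real index lives on disk), and the hash function and class names are assumptions:

```python
import hashlib

def h(s: str) -> int:
    # Stable 64-bit hash of a string (the actual hash used may differ).
    return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

class InvertedIndex:
    def __init__(self):
        # word_hash -> set of url hashes (the posting "list" from the comment)
        self.index = {}

    def add(self, url: str, text: str):
        url_hash = h(url)
        for word in text.lower().split():
            self.index.setdefault(h(word), set()).add(url_hash)

    def search(self, query: str):
        # Hash each query word, look up its postings, and intersect them:
        # only URL hashes present in every posting set survive.
        postings = [self.index.get(h(w), set()) for w in query.lower().split()]
        if not postings:
            return set()
        return set.intersection(*postings)
```

A query like "The lazy fox" thus becomes three hash-table lookups followed by a three-way set intersection, which is why storing the postings sorted on disk (so they can be intersected in a single forward pass) fits this design well.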
details are here: https://github.com/alexandria-org/alexandria/blob/main/src/i...