wander_homer | 2 years ago
Not at all, I'm just incredibly curious how you'd solve the issue of creating an index of a filesystem as fast as Everything does. I've thought and read a lot about it over the last couple of years and haven't found any solution at all, nor any other software that achieves something like that on Linux systems.
> For ideal reproducibility, let me know which forum(s) you initially got traction on. I'll try to mirror your marketing as closely as possible.
One post on the Arch Linux forum and one on the r/linux sub on Reddit. From there I got enough users to collect more than $100 in donations. Nowadays it's obviously more.
> I'd also like to know how you went about benchmarking performance against existing stuff for your project;
Everything has an extensive debug mode with detailed performance information about pretty much everything it's doing. That's how I know exactly how long it took to create the index, perform a specific search, update the index with x file creations, deletions or metadata changes etc.
> for comparison against `Everything` I was thinking that the metric to beat is delta between file creation/removal time and the time that the file shows up in the results set (or index).
That's not particularly interesting, because it's quite straightforward to achieve similar performance.
The crucial metric is how long it initially takes to create the index, and then to update it when the application starts (i.e. finding all changes to the filesystem which happened while the application wasn't running). That's where Everything excels, and it's the problem for which I and others haven't found a solution on non-Windows systems (without making significant changes to the kernel, of course). The best, and pretty much only, solution I'm aware of is the brute-force method of walking the filesystem and calling stat on every entry, which is obviously much slower.
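For concreteness, that brute-force baseline looks roughly like this in Python (an illustrative sketch; the choice of attributes and the error handling are my assumptions, not how Everything or any real indexer stores things):

```python
import os

def build_index(root):
    """Walk `root` recursively and stat every file: the slow baseline."""
    index = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # file vanished mid-walk or is unreadable; skip it
            index[path] = (st.st_size, st.st_mtime)
    return index
```

On a volume with millions of files this walk can take minutes, whereas reading NTFS's MFT takes seconds; that gap is exactly what's being discussed.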
lelanthran | 2 years ago
That's what I meant by " delta between file creation/removal time and the time that the file shows up in the results set (or index)."
Basically, how fast can we update the index?
> That's where Everything excels and to which I and others haven't found a solution on non-Windows systems (without making significant changes to the kernel of course).
I've got a couple of out-there ideas which may or may not pan out, one of which was, indeed, a kernel module.
Another idea is to deploy the indexer as a daemon with the applications all using IPC to query and update it. This will give the query applications a significant advantage on startup compared to Everything.
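A minimal sketch of that daemon idea, assuming a Unix domain socket and an invented one-query-per-connection protocol (the protocol, names, and matching rule are placeholders, not a real design):

```python
import socket

def serve(index, sock):
    """Daemon side: answer one client by matching a substring
    query against the paths held in the in-memory index."""
    conn, _ = sock.accept()
    with conn:
        query = conn.recv(4096).decode()
        hits = [p for p in index if query in p]
        conn.sendall("\n".join(hits).encode())

def query(addr, text):
    """Client side: connect to the daemon's socket, send one query,
    and return the matching paths as a list."""
    with socket.socket(socket.AF_UNIX) as c:
        c.connect(addr)
        c.sendall(text.encode())
        return c.recv(65536).decode().splitlines()
```

The client never touches the index itself, so its startup cost is just a socket connect; the hard part (the daemon building its index without a full walk) is unchanged.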
As for updating the index timeously, I've got a few ideas there as well. Walking the filesystem starting at `/` for each update would mean index updates only happen once a day or so (hence my expressing the metric as a delta), so I feel that's no good.
I'll do an implementation and try to message you (if you want to check it out) because code talks louder than words :-)
wander_homer | 2 years ago
The two core issues are:
1) How do you quickly get a list of all files and their attributes from the filesystem, without recursively visiting all directories? The kernel has no such functionality and neither do most filesystems (except NTFS with its MFT, which is how Everything solves it).
2) How do you know which files have been modified on a filesystem since it was last mounted on the system, or since your monitoring daemon/application last ran? This information also needs to be stored persistently on the filesystem (like the USN journal, which Everything uses) if you want to avoid slow recursive traversals.
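Without a USN-journal equivalent, the only portable fallback for issue 2) is to diff the current state against a snapshot persisted when the daemon last shut down, which still requires the slow walk. A sketch, with the snapshot format being an illustrative assumption:

```python
import os

def snapshot(root):
    """Record (mtime, size) per file; a daemon would persist this at shutdown."""
    snap = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # vanished mid-walk or unreadable
            snap[path] = (st.st_mtime, st.st_size)
    return snap

def diff(old, new):
    """Compare two snapshots to find what changed while we weren't running."""
    created = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    modified = [p for p in new if p in old and old[p] != new[p]]
    return created, deleted, modified
```

This walk-and-diff is exactly what a filesystem-level change journal would let you skip: the journal records the deltas as they happen, so startup only has to replay them.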
> I've got a couple of out-there ideas which may or may not pan out, one of which was, indeed, a kernel module.
Well, the problem is, my kernel isn't the only kernel that changes the filesystems I'm using. A kernel module therefore only works if your system is the only one modifying the data you're working with, or if most other systems use the same kernel module, which isn't realistic.
> Another idea is to deploy the indexer as a daemon with the applications all using IPC to query and update it. This will give the query applications a significant advantage on startup compared to Everything.
Everything uses a daemon as well, and that doesn't solve this issue: the daemon still has to get the list of files/folders and their attributes out of a filesystem without walking it. How else would the daemon know which files belong to the volume that was mounted only moments ago?
> As for updating the index timeously, I've got a few ideas there as well. Walking the filesystem starting at `/` for each update will result in only performing index updates once a day or so (hence, the reason I expressed the metric as a delta) so I feel that that is no good.
Walking the filesystem shouldn't be done at all, because it's just too slow.
> I'll do an implementation and try to message you (if you want to check it out) because code talks louder than words :-)
Of course, I'd appreciate that.