wander_homer | 2 years ago
Please enlighten us how that would work.
> TBH, if I thought I could make even $100 in donations from this, I'd start it tomorrow, but absolutely no one misses ultra-fast searching when they don't have it.
You can easily make $100 in donations with this. I did it with this very piece of software while it was still less performant and less powerful, before any official release, just by mentioning it on one or two forums.
If the software delivers what you're describing, I guarantee that it will bring in more than $100 per month in donations.
lelanthran | 2 years ago
My point was that the incentive to produce something like `Everything` on Linux just isn't aligned with what the target market wants or needs. I think that what you have produced satisfies what the target market wants.
> You can easily make $100 in donations with this.
Honestly, I'm still very skeptical that even a $100 target is reachable. I also have to admit that I've looked at things in the past, thought "No one could possibly want that at that price point", and been horribly wrong.
I feel like I should test the claim of how many people want an `Everything` equivalent on Linux: I'll build it, package it with an MVP GUI, and mention it on a few forums in addition to posting a Show HN here.
For ideal reproducibility, let me know which forum(s) you initially got traction on. I'll try to mirror your marketing as closely as possible.
I'd also like to know how you benchmarked performance against existing tools for your project; for a comparison against `Everything` I was thinking the metric to beat is the delta between file creation/removal time and the time the file shows up in the result set (or index).
Like the other responder here, I also think that once something is in the index, retrieval time should be almost instant, so there's not much point in benchmarking "How long does it take to update results after every keypress" once that metric falls below 100ms or so.
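The latency metric described above could be measured with a small harness like the sketch below. The `check_indexed` callback is a hypothetical stand-in: in a real benchmark it would query the search tool's index or result set rather than a plain directory listing.

```python
import os
import tempfile
import time

def index_latency(check_indexed, path, timeout=5.0, poll=0.001):
    """Create `path`, then poll `check_indexed(path)` until it reports
    the file, returning the delay in seconds (the metric to beat)."""
    open(path, "w").close()
    created = time.monotonic()
    deadline = created + timeout
    while time.monotonic() < deadline:
        if check_indexed(path):
            return time.monotonic() - created
        time.sleep(poll)
    raise TimeoutError(f"{path} never showed up in the index")

# Stand-in "index": a plain directory listing. A real harness would
# swap in a query against the actual tool under test.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "probe.txt")
delay = index_latency(lambda p: os.path.basename(p) in os.listdir(tmp), target)
print(f"file visible after {delay * 1000:.2f} ms")
```

The polling interval bounds the measurement resolution, so for sub-millisecond claims you'd want an event-driven check instead of a poll loop.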
wander_homer | 2 years ago
Not at all. I'm just incredibly curious how you'd solve the problem of building a filesystem index as fast as Everything does, because I've thought and read a lot about it over the last couple of years and haven't found any solution, nor any other software that achieves something comparable on Linux.
> For ideal reproducibility, let me know which forum(s) you initially got traction on. I'll try to mirror your marketing as closely as possible.
One post on the Arch Linux forum and one on the r/linux subreddit. From there I got enough users to pass $100 in donations. Nowadays it's obviously more.
> I'd also like to know how you went about benchmarking performance against existing stuff for your project;
Everything has an extensive debug mode with detailed performance information about pretty much everything it's doing. That's how I know exactly how long it took to create the index, perform a specific search, update the index with x file creations, deletions or metadata changes etc.
> for comparison against `Everything` I was thinking that the metric to beat is delta between file creation/removal time and the time that the file shows up in the results set (or index).
That's not particularly interesting, because it's quite straightforward to achieve similar performance.
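The straightforward part refers to watching live changes: on Linux the kernel pushes filesystem events to a running process via inotify. A minimal, Linux-only sketch using ctypes (event constants and the 16-byte event header are from `inotify(7)`):

```python
import ctypes
import os
import struct
import tempfile

# inotify mask bits from <sys/inotify.h>
IN_CREATE, IN_DELETE = 0x100, 0x200

libc = ctypes.CDLL("libc.so.6", use_errno=True)

watched = tempfile.mkdtemp()
fd = libc.inotify_init()
assert fd >= 0
wd = libc.inotify_add_watch(fd, watched.encode(), IN_CREATE | IN_DELETE)
assert wd >= 0

# Any change under the watched directory now queues an event.
open(os.path.join(watched, "hello.txt"), "w").close()

buf = os.read(fd, 4096)  # blocks until at least one event is queued
# struct inotify_event: int wd; uint32 mask, cookie, len; char name[]
wd_, mask, cookie, namelen = struct.unpack_from("iIII", buf)
name = buf[16:16 + namelen].rstrip(b"\0").decode()
print(f"event mask={mask:#x} name={name}")
os.close(fd)
```

This only works while the watcher is running, which is exactly why the cold-start problem below is the hard part.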
The crucial metric is how long it takes to create the index initially, and then to update it on application start (i.e. to find all changes to the filesystem that happened while the application wasn't running). That's where Everything excels, and it's the part for which I and others haven't found a solution on non-Windows systems (short of making significant changes to the kernel, of course). The best, and pretty much only, approach I'm aware of is the brute-force method of walking the filesystem and calling stat, which is obviously much slower.
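For reference, the brute-force baseline being described is a full tree walk with a stat call per entry, along these lines (the demo runs on a small throwaway tree; pointing `root` at `/` or `$HOME` shows why this is painful on a real filesystem, whereas Everything sidesteps it on NTFS by reading the volume's file metadata directly):

```python
import os
import tempfile
import time

def build_index(root):
    """Brute-force baseline: walk the whole tree and lstat every entry,
    recording (size, mtime) per path. Cost scales with total entry count."""
    index = {}
    for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # raced with a deletion or lacking permission
            index[path] = (st.st_size, st.st_mtime_ns)
    return index

# Demo on a tiny temporary tree.
root = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(root, f"file{i}.txt"), "w").close()

start = time.monotonic()
idx = build_index(root)
print(f"indexed {len(idx)} entries in {(time.monotonic() - start) * 1000:.1f} ms")
```

Detecting offline changes then means diffing a fresh walk against the stored index, which is exactly the cost Everything avoids.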