kmschaal | 2 years ago
Distance definitions such as the Levenshtein and Damerau-Levenshtein distances provide a solid basis for discussions on accuracy. However, they are costly to compute and hence not widely adopted in fuzzy search libraries.
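To make the cost concrete: the usual dynamic-programming formulation is quadratic in the string lengths, which is what makes it expensive when run against every candidate in a large index. Here is a minimal sketch of the optimal-string-alignment variant (Levenshtein plus adjacent transpositions, a restricted form of Damerau-Levenshtein); the function name is mine, not from any particular library:

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal string alignment distance: edit distance with insertions,
    deletions, substitutions, and adjacent transpositions.
    O(len(a) * len(b)) time and space -- the cost referred to above."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                # adjacent transposition (the Damerau extension)
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[len(a)][len(b)]
```

For example, `osa_distance("abcd", "acbd")` is 1 (one transposition), where plain Levenshtein would count 2.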
I started by using the well-known filter equation for the Levenshtein distance and computed a quality score with a lightweight formula. Then, I realized that the filter equation can be extended to the Damerau-Levenshtein distance by sorting the characters of the 3-grams.
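My reading of the idea is the classic q-gram count filter: each edit operation destroys at most q = 3 of a string's trigrams, so strings within edit distance k must still share a minimum number of trigrams, and candidates below that threshold can be rejected without ever computing the full distance. Sorting the characters inside each trigram makes a window insensitive to an adjacent swap, which is what extends the filter toward transpositions. A sketch under those assumptions (the `$` padding convention and helper names are mine, and the exact threshold constant varies across formulations):

```python
from collections import Counter

def sorted_trigrams(s: str, pad: str = "$") -> Counter:
    # Pad so boundary characters appear in three windows each (a common
    # convention; the "$" padding character is an assumption here).
    s = pad * 2 + s + pad * 2
    # Sorting each window's characters makes it order-insensitive, which
    # absorbs an adjacent transposition that falls inside the window.
    return Counter("".join(sorted(s[i:i + 3])) for i in range(len(s) - 2))

def passes_trigram_filter(query: str, candidate: str, k: int) -> bool:
    """Cheap pre-filter: strings within edit distance k must share at
    least max(#trigrams) - 3*k trigrams, since each edit destroys at
    most 3 windows. Survivors still need an exact distance check."""
    q, c = sorted_trigrams(query), sorted_trigrams(candidate)
    shared = sum((q & c).values())  # multiset intersection size
    needed = max(sum(q.values()), sum(c.values())) - 3 * k
    return shared >= needed
```

For instance, "hello" vs. "hlelo" (one transposition) passes with k = 1 thanks to the sorted windows, whereas the unsorted trigram sets of the two strings differ far more.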
In my tests, this implementation worked well. Please let me know how it works for you if you test it.
LoganDark | 2 years ago
I'd say a search is accurate if it finds what most closely matches the query, for some definition of "matches". A search is useful if most people can find what they are looking for on the first try.
That is, a search being accurate doesn't necessarily translate to usefulness if people don't (or can't) know how to write those accurate queries.
I'd imagine this is why fuzzy searches exist. Fuzzier queries allow for a larger spectrum of possible matches, which means a larger set of queries can turn up those results someone is looking for. Queries do not have to be as precise, and writing useful queries is easier.
But to me, that fuzziness seems diametrically opposed to accuracy. Usefulness is a much more intuitive measure, because the query does not have to be perfectly accurate in order to find the right result.
Alternatively, you could focus on the quality of ranking of the returned matches: how often the correct result is near the top (and how near) when the user finds what they are looking for. Ideally you want this as high as possible.
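One standard way to quantify "how near the top" is mean reciprocal rank: each query scores 1/rank of the correct result, and the scores are averaged. A minimal sketch (the function and parameter names are mine):

```python
def mean_reciprocal_rank(result_lists, correct_ids):
    """Average of 1/rank of the correct result per query (0 if the
    correct result is missing). 1.0 means it was always ranked first."""
    total = 0.0
    for results, correct in zip(result_lists, correct_ids):
        if correct in results:
            total += 1.0 / (results.index(correct) + 1)
    return total / len(result_lists)
```

So a search that puts the right answer first half the time and second the other half scores 0.75, while one that always buries it at rank 10 scores 0.1.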
kmschaal | 2 years ago
So, in the end, I believe it's worthwhile to try different implementations and share our subjective experiences.