ralegh|7 months ago
This is fine assuming the popular request types don’t change, but if both new versions of the matching are sufficiently fast, I’d arguably prefer Ken’s in the long term: the other could become slow again if the distribution of request types shifts.
andrepd|7 months ago
Lines of thinking like that are part of the reason most modern software is so sloooow :)
vlovich123|7 months ago
A more robust strategy would at least be to check whether the rule is the same as the previous one (or to keep a small hash table), so that the system is self-healing.
Ken’s solution is at least robust, and for that reason I’d prefer it: it’s just as fast, but it doesn’t have weird tail latencies where requests outside your cached distribution are slower than the ones inside it.
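The self-healing idea above can be sketched roughly like this (a hypothetical one-entry cache in front of a rule scan; all names here are illustrative, not from any real codebase):

```python
def make_matcher(rules):
    """Match a request against a list of rule predicates, remembering
    the last rule that matched so the common case stays fast."""
    last_hit = [None]  # one-entry cache: the rule that matched last time

    def match(request):
        # Fast path: re-check the previously matching rule.
        rule = last_hit[0]
        if rule is not None and rule(request):
            return rule
        # Slow path: full scan, then refresh the cache.
        for rule in rules:
            if rule(request):
                last_hit[0] = rule
                return rule
        return None

    return match

# If the request distribution shifts, the cache simply starts holding
# a different rule -- no tuning or redeploy needed.
is_image = lambda req: req.endswith(".png")
is_api   = lambda req: req.startswith("/api/")
match = make_matcher([is_image, is_api])

match("/api/users")  # slow path; caches is_api
match("/api/posts")  # fast path: cached rule still matches
```

A small hash table keyed on the request type would generalize this to several hot rules at once, at the cost of an eviction policy.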