Berniek | 2 years ago
There are 2 ways to handle it.
1 Stop the thundering herd (make all the clients do something different). That may make things worse. Congestion in networks is usually exponential: you can't fulfil a request so you repeat it, it still can't be fulfilled, so it repeats again. You can add a random delay at the client end, but that is just the US government's answer to the debt problem; it kicks the can down the road in the short term, but it will almost certainly come back and bite you. Mathematically it is very easy for this scenario to become catastrophically exponential once a threshold is reached.
2 Stop the congestion (make the handling process faster or add processes)
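The random-delay idea in option 1 is usually done as exponential backoff with jitter, so retries spread out rather than re-synchronising into another herd. A minimal sketch of that pattern (the function names and parameters here are illustrative, not from any particular library):

```python
import random
import time

def backoff_delay(attempt, base=0.1, cap=30.0):
    """'Full jitter' backoff: wait a random time in
    [0, min(cap, base * 2**attempt)] before retry number `attempt`.
    The randomness de-synchronises clients; the cap bounds the wait."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

def fetch_with_retry(request_fn, max_attempts=5):
    # request_fn is a hypothetical callable that raises on failure.
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(backoff_delay(attempt))
```

As the comment says, this only postpones the problem: every client still eventually retries, so it helps with transient blips but not with a sustained inability to serve the request.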
The system already has a cache to handle this, but if the item is not in the cache it doesn't help. There needs to be an extra request cache exclusively for congestion scenarios. The existing cache request path is already doing some processing, so extend it to route "thundering herd" requests to this second cache, which does a bit more processing as well: as each new request is routed to it, it checks whether this requestor is already in the cache and removes or overwrites the old entry. It should never contain more than one entry per client.
When no more additions are being made to this congestion cache (or the rate has slowed significantly), the requests can be forwarded and processed via the original cache system.
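The congestion cache described above could be sketched roughly like this (my own sketch of the idea, assuming an in-memory dict keyed by client and a fixed quiet period as the "rate has slowed" signal):

```python
import time

class CongestionCache:
    """Holds at most one pending request per client during a
    thundering-herd event, then flushes them to the normal cache
    path once no new entries have arrived for `quiet_period` seconds."""

    def __init__(self, quiet_period=0.5):
        self.pending = {}        # client_id -> latest request
        self.last_update = 0.0   # monotonic time of the last add
        self.quiet_period = quiet_period

    def add(self, client_id, request):
        # A repeat request from the same client overwrites the old
        # entry, so the cache never grows past one entry per client.
        self.pending[client_id] = request
        self.last_update = time.monotonic()

    def ready_to_flush(self):
        # Quiet for long enough: the herd has stopped (or slowed).
        return bool(self.pending) and (
            time.monotonic() - self.last_update >= self.quiet_period)

    def flush(self, process):
        # Forward the deduplicated requests to the original cache system.
        for client_id, request in self.pending.items():
            process(client_id, request)
        self.pending.clear()
```

The key property is the overwrite in `add`: however many times a client retries, it costs one dictionary slot, so the cache size is bounded by the number of clients rather than by the retry rate.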
Under this configuration, the congestion does not become exponential and should only delay the thundering herd requests. All other requests will be handled as per normal.
Once the original cache has the information there is no need for any thundering herd requests to be routed to the congestion cache.
Some clients will encounter delays, but not all of them, and only on the "thundering herd" request path.