smacker | 5 months ago
I have witnessed many incidents where the DB degraded considerably. However, thanks to the cache in Redis/Memcached, a large share of requests could still be served with only a minimal increase in latency. If I were serving the cache from the same DB instance, I suspect any problem with the DB would degrade the cache as well.
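To make the failure mode concrete, here is a minimal cache-aside sketch where a warm cache entry keeps serving reads (even stale ones) while the DB times out. The names (`FlakyDb`, `CacheAside`, `DbTimeout`) and the TTL value are illustrative stand-ins, not any particular library's API:

```python
import time

class DbTimeout(Exception):
    pass

class FlakyDb:
    """Stand-in for a degraded primary DB: raises timeouts when unhealthy."""
    def __init__(self):
        self.healthy = True
        self.rows = {"user:1": "alice"}

    def get(self, key):
        if not self.healthy:
            raise DbTimeout(key)
        return self.rows.get(key)

class CacheAside:
    """Read path that prefers the cache and falls back to stale
    cached data when the DB is down, rather than failing the request."""
    def __init__(self, db, ttl=30.0):
        self.db = db
        self.ttl = ttl
        self.store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and now < entry[1]:
            return entry[0]          # fresh hit: no DB round-trip at all
        try:
            value = self.db.get(key)
        except DbTimeout:
            if entry:                # DB degraded: serve the stale entry
                return entry[0]
            raise                    # cold miss and DB down: nothing to serve
        self.store[key] = (value, now + self.ttl)
        return value
```

The point of the sketch is the `except DbTimeout` branch: because the cache lives in a separate process (or on separate hardware), a DB outage only costs you cold misses, not the whole read path.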
motorest | 5 months ago
This is the very first time I have heard anyone suggest that storing data in Postgres is a concern in terms of reliability, and I doubt you are the only person in the whole world with access to critical insight into the matter.
Is it possible that your prior beliefs are unsound and unsubstantiated?
> I have witnessed many incidents when DB was considerably degrading.
This vague anecdote is meaningless. Do you actually have any concrete scenario in mind? Because anyone can make any system "considerably degrading", even Redis, if they make enough mistakes.
baobun | 5 months ago
Besides, having the cache on separate hardware can reduce the impact of spikes on the DB, which also factors into reliability.
Having more headroom for memory and CPU can mean you never reach the load at which it turns into service degradation on shared hardware.
Obviously a purpose-built tool can perform better for a specific use case than the Swiss army knife. Which is not to diss the latter.
didntcheck | 5 months ago
You seem to be reading "reliability" as "durability", when I believe the parent post meant "availability" in this context
> Do you actually have any concrete scenario in mind? Because anyone can make any system "considerably degrading", even Redis
And even Postgres. It can also happen due to seemingly random events like unusual load or network issues. What do you find outlandish about the scenario of a database server being unavailable/degraded and the cache service not being?
abtinf | 5 months ago
This is a class of error a human is extremely unlikely to make.