hmm, Redis Labs is setting up a cluster of 40 Redis processes on the same instance. It would be extremely difficult for anyone else to do that with Redis OSS.
"For the last 15 years, Redis has been the primary technology for developers looking to provide a real-time experience to their users. Over this period, the amount of data the average application uses has increased dramatically, as has the available hardware to serve that data. Readily available cloud instances today have at least 10X the CPUs and 100X more memory than their equivalent counterparts had 15 years ago. However, the single-threaded design of Redis has not evolved to meet modern data demands nor to take full advantage of modern hardware."
That's not what they are saying is wrong with Redis. Is Redis really 'antique tech'? Arguably, concurrent processing with a scale-up-only approach is a poor fit for "modern hardware".
So yes, you are correct: Redis from GitHub requires knowledge and (your) code to make n instances work together (whether on the same node or not). But the claim that this holds for "anyone else [but Redis Labs]" is questionable.
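The "(your) code" part above is roughly this: with n independent OSS Redis instances, routing keys to the right instance is the application's job. A minimal sketch, with plain dicts standing in for the n separate Redis processes (the key names and shard count are made up for illustration):

```python
import hashlib

# Toy client-side sharding: n independent single-threaded stores,
# with the application responsible for routing each key.
N_SHARDS = 4
shards = [dict() for _ in range(N_SHARDS)]  # stand-ins for n Redis processes

def shard_for(key: str) -> int:
    # Stable hash -> shard index. Real deployments often use consistent
    # hashing, or Redis Cluster's fixed hash-slot scheme, instead.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % N_SHARDS

def set_key(key: str, value: str) -> None:
    shards[shard_for(key)][key] = value

def get_key(key: str):
    return shards[shard_for(key)].get(key)

set_key("user:42", "alice")
print(get_key("user:42"))  # -> alice
```

This is all a single-key workload needs; the wrinkles start with operations that touch keys on different shards, as the next paragraph gets at.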
From a certain architectural camp, the pin-to-core, process-in-parallel approach is optimal for [scaling on] "modern hardware". Salvatore can correct me on this, but I don't recall that being a consideration in the early days; it turned out to be a good choice regardless. Some of the Redis APIs, however, require dataset ensemble participation (any kind of total-order semantics over the partitioned set), which is what is "difficult" to do effectively.
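A sketch of why the total-order part is the hard bit (the shard count and key names here are illustrative assumptions): a single-key operation is serialized for free by the owning shard's one thread, but a multi-key operation whose keys hash to different shards has no single serialization point without extra coordination.

```python
import hashlib

N_SHARDS = 4

def shard_for(key: str) -> int:
    # Same style of stable hash routing as client-side sharding.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % N_SHARDS

def owning_shards(keys) -> set:
    # Which shard processes a multi-key operation would have to touch.
    return {shard_for(k) for k in keys}

txn = ["account:alice", "account:bob"]
if len(owning_shards(txn)) == 1:
    # One shard owns every key: its single thread orders the op trivially.
    print("single shard: ordered for free")
else:
    # Keys span shards: you need locking/2PC-style coordination, or you
    # co-locate the keys up front (e.g. Redis Cluster's {hash tags}).
    print("cross-shard: needs coordination or key co-location")
```

Which branch fires depends on where the keys happen to hash; the point is that the second branch is exactly the "ensemble participation" that is expensive to do well.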
So basically any startup that can do that should theoretically be able to squeeze more performance from their SaaS infrastructure than by running a Dragonfly-type architecture. The bonus, as pointed out by Redis Labs, being that lots of parallel k/v processes can bust out of the max-jumbo box should you ever need that to happen (for 'reliability', for example).
I think using the word "misleading" is itself "misleading".
Dragonfly hides complexity. Docker hid the complexity of managing cgroups and deploying applications. S3 hid the complexity of writing to separate disks. But you do not call S3 or MinIO misleading because they store stuff similarly to how a disk stores files. Dragonfly hides the complexity of managing a bunch of processes on the same instance, and the outcome is a cheaper production stack. What do you think has higher effective memory capacity on a c6gn.16xlarge: a single process using all the memory, or 40 processes that you need to provision independently?
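A back-of-the-envelope sketch of that last question; every number below is a made-up assumption for illustration, not a benchmark. Forty independently provisioned shards pay for headroom twice: each process keeps its own margin (e.g. copy-on-write growth during a BGSAVE fork), and each must be sized for the hottest shard, since keys never hash perfectly evenly.

```python
TOTAL_GIB = 128.0   # roughly a c6gn.16xlarge-class box (assumed)
COW_MARGIN = 0.25   # assumed per-process fork/copy-on-write headroom
SKEW = 1.15         # assumed hottest-shard size vs a perfect 1/40 split

# One big process: everything minus a single shared margin.
usable_single = TOTAL_GIB * (1 - COW_MARGIN)

# 40 shards: each keeps its own margin AND a skew allowance, so the
# aggregate usable capacity shrinks by the skew factor.
usable_sharded = TOTAL_GIB * (1 - COW_MARGIN) / SKEW

print(f"single process: ~{usable_single:.0f} GiB of data")   # ~96 GiB
print(f"40 shards:      ~{usable_sharded:.0f} GiB of data")  # ~83 GiB
```

Under these assumed margins the single process wins on effective capacity; the gap widens as key distribution gets more skewed, which is the provisioning headache the 40-process setup pushes onto you.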
eternalban|2 years ago
intev|2 years ago
romange|2 years ago