mcaravey | 1 year ago

I would consider the documented performance targets [0] for a standard Azure Blob account to be very good. We're talking 60 Gbps in/120 Gbps out, with 20,000 requests per second as the default request rate.

From what I can tell, S3's request rate is about 9,000 requests per second [1], split between reads and writes, for a single partition. From my perspective it really depends on what you're trying to build, but I don't see the performance of Azure Storage being an issue in any way for a typical application.

Partitioning will also depend heavily on what kind of application you're building, but the documentation does point out that load balancing will kick in once it starts to see a lot of traffic on a partition [2]. Since you have to use partitioning for S3 in order to get better performance, I don't really see how that's a point against Azure.
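To make the partitioning point concrete: a common way to avoid hot partitions (or hot prefixes, on S3) is to prepend a short, stable hash to each object key so traffic spreads evenly regardless of naming patterns. A minimal Python sketch — `partitioned_key` is a hypothetical helper name, not anything from either SDK:

```python
import hashlib

def partitioned_key(key: str, prefix_len: int = 4) -> str:
    """Prepend a short, deterministic hash prefix to a storage key.

    Keys that would otherwise cluster under one prefix (e.g. date-based
    names like logs/2024/...) get spread across many prefixes/partitions.
    The same input always yields the same output, so lookups still work.
    """
    h = hashlib.md5(key.encode("utf-8")).hexdigest()[:prefix_len]
    return f"{h}/{key}"

# Date-based keys now land under effectively random prefixes:
print(partitioned_key("logs/2024/05/23/event.json"))
```

The trade-off is that listing objects in date order gets harder, so this is only worth doing for workloads that are request-rate bound rather than listing-heavy.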

As for SDKs [3], I have no idea how good the support is, but they all have commits within the last day.

[0] https://learn.microsoft.com/en-us/azure/storage/common/scala...

[1] https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimi...

[2] https://learn.microsoft.com/en-us/azure/storage/blobs/storag...

[3] https://azure.microsoft.com/en-us/downloads/

kondro|1 year ago

That’s 3,500 PUT and 5,500 GET requests per second per PREFIX.

S3 does a bad job of describing what a prefix is, but for most[1] practical intents you can consider the entire key of an object the prefix.

[1] The actual behaviour treats prefixes like partitions, but it's completely automated, and as long as you don't expect an instant scale-up to very large request rates, S3's performance is basically unlimited. There are no per-account hard or soft limits that need increasing or that cap scalability.
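A back-of-the-envelope consequence of those per-prefix figures: aggregate throughput scales roughly linearly with the number of active prefixes, since each prefix gets its own 3,500 PUT / 5,500 GET budget. A quick Python sketch (constants are the per-prefix rates quoted above; the function name is illustrative):

```python
# Per-prefix S3 request-rate limits, as documented by AWS.
PUT_PER_PREFIX = 3_500
GET_PER_PREFIX = 5_500

def aggregate_limits(num_prefixes: int) -> tuple[int, int]:
    """Rough aggregate (PUT/s, GET/s) ceiling across N active prefixes."""
    return num_prefixes * PUT_PER_PREFIX, num_prefixes * GET_PER_PREFIX

# e.g. spreading keys over 8 prefixes:
print(aggregate_limits(8))  # -> (28000, 44000)
```

This is an upper bound, not a guarantee: as noted above, S3's automatic partition splitting takes time to ramp up, so a brand-new prefix won't sustain these rates instantly.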