Yes, the number of operations per second is too small for serving files directly, e.g. for hosting. (We didn't run into this issue since we proxy/cache all files via dedicated hardware.)
We use a cache too, but we will still hit this limit because our cache is small compared with the bucket size, and as you said, we need to make HEAD requests.
tmikaeld|1 year ago
Though, it's still too small if a HEAD request counts as an operation, since we need to check whether files have been updated.
jaigupta|1 year ago
I do not know whether this is a hard limit or whether some kind of burst capacity is available for spikes. A constant per-bucket limit that ignores bucket size makes no sense; at the very least, they could charge for operations as well.
Maybe they just can't scale quickly enough, or maybe they consider object storage to be only for backup/archiving purposes.