So pricing is 1c/GB-month, compared to S3 IA at 1.25c/GB-month. A decent saving, but not massive. No archive or deep-archive options though; I wonder if/when those will come.
What sort of negotiated rates can you get from AWS for bandwidth, I wonder? At the moment, that seems like the only real benefit of CF.
Backblaze is unprofitable and publicly traded, a combination which can't last forever. They raised B2 prices 20% last year, and I wouldn't be surprised to see more increases if they continue to burn through cash.
I can't get over how storage is still so expensive in 2024. The lowest you can get from any of these companies is probably $5 per TB per month, while a new HDD costs maybe $25 per TB today. Where does the money go? Into C-suite car payments?
We're onboarding to Cloudflare Magic WAN and want to use them for logging, which they push to "S3-compatible" buckets... on Google or Amazon.
I was pretty surprised at the lack of dogfooding. I wondered if it's an oversight, something sitting on somebody's Gantt chart, or just not something R2 can handle for some reason.
Yeah, the integration and production readiness of their non-core offerings is not perfect. I'm dealing with R2 and another service, and you can tell they feel more like specifically integrated features than fully modular services you can use however you want. For example, Workers have optional R2 bindings, but you can't use those in a fetch() call; you have to use the S3-compatible endpoint instead.
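From outside a Worker, that S3-compatible endpoint is just SigV4 over HTTPS. A stdlib-only sketch that presigns a GET against it (the account ID, bucket, and credentials below are placeholders; R2 uses the literal region name "auto"):

```python
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def presign_get(account_id, bucket, key, access_key, secret_key,
                expires=3600, now=None):
    """Build a SigV4 query-presigned GET URL for an R2 bucket."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{account_id}.r2.cloudflarestorage.com"
    scope = f"{datestamp}/auto/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                  for k, v in sorted(params.items()))
    # Canonical request: method, path, query, headers, signed headers, payload.
    canonical = "\n".join([
        "GET", f"/{bucket}/{quote(key)}", qs,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key: date -> region -> service -> "aws4_request".
    sig_key = f"AWS4{secret_key}".encode()
    for part in (datestamp, "auto", "s3", "aws4_request"):
        sig_key = hmac.new(sig_key, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(sig_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{bucket}/{quote(key)}?{qs}&X-Amz-Signature={signature}"
```

The annoyance is exactly that this dance (or an S3 SDK doing it for you) is required even from code running next door to the bucket, because the binding only exists inside the Worker runtime.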
AWS has its own issues, but the push to have everything talking over API did wonders for the ability to use them as you want.
Magnetic disks, at least, are IOPS-constrained: lower-IOPS loads conceivably allow higher density, or packing different load patterns onto the same devices. Say an 8 TB / 100 IOPS disk reserves 90 IOPS for a 1 TB database service; that's 87% of the disk's capacity sitting free, but only 10 IOPS to serve it with. Adding what is effectively an IOPS tax to discourage frequent reads is one way to make a mixture like this work (or, another way to think of it, subtracting an IOPS discount).
Obviously the example above is contrived, but the same principle applies to a pool of 1000 disks as it would to 1. You don't escape this issue with regular hot storage either: there is still a (((IOPS * replication count) / average traffic) / max latency)-type problem lurking, which necessitates either limiting density or increasing redundancy according to the expected IO rate. This is one reason some S3 alternatives with weaker latency bounds (not naming names; they're great, but it's just not the same service) can often be made substantially cheaper, and why at least one of S3's storage classes may be implemented entirely as an accounting trick, with no data movement or hardware changes at all.
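A minimal sketch of the numbers above (the 8 TB / 100 IOPS disk and the 90-IOPS reservation are the comment's contrived figures, not real hardware specs):

```python
# Contrived mixed-workload disk from the example above.
disk_tb, disk_iops = 8, 100       # one 8 TB disk rated ~100 IOPS
db_tb, db_iops = 1, 90            # 1 TB database slice reserves most IOPS

free_tb = disk_tb - db_tb         # 7 TB of capacity left over
free_fraction = free_tb / disk_tb
spare_iops = disk_iops - db_iops  # IOPS left to serve that free space

print(f"{free_fraction:.1%} of capacity free")  # 87.5% (rounded to 87% above)
print(f"{spare_iops} IOPS to serve it")         # 10
```

Hence the squeeze: filling the remaining 7 TB is only viable for data that's rarely read, which is exactly what a per-GB retrieval fee selects for.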
Yes, because in a well-designed setup, files that are frequently accessed would be restored to the standard tier. Ideally you'd only pay the data-processing fee once, when files transition from infrequently accessed to frequently accessed. There's a breakeven point at an access rate of once every two months.
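The two-month breakeven falls out of the prices quoted elsewhere in the thread. A rough sketch, assuming R2 Standard at $15/TB-month, Infrequent Access at $10/TB-month (the 1c/GB figure above), and a retrieval fee of $10/TB (an assumption; retrieval is charged per GB but the exact rate isn't quoted here):

```python
standard = 15.0    # $/TB-month, R2 Standard (assumed)
infrequent = 10.0  # $/TB-month, R2 Infrequent Access (1c/GB from the thread)
retrieval = 10.0   # $/TB charged per retrieval (assumed rate)

monthly_saving = standard - infrequent        # $5/TB-month saved by IA
breakeven_months = retrieval / monthly_saving

print(breakeven_months)  # 2.0: read more often than once every two
                         # months and the retrieval fees erase the saving
```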
Maybe the cold-to-hot migration "tax" is partially to prevent abuse?
> "Data retrieval is charged per GB when data in the Infrequent Access storage class is retrieved and is what allows us to provide storage at a lower price. It reflects the additional computational resources required to fetch data from underlying storage optimized for less frequent access."
I like the "automatic storage classes" idea as well.
> "…you can define an object lifecycle policy to move data to Infrequent Access after a period of time goes by and you no longer need to access your data as often. In the future, we plan to automatically optimize storage classes for data so you can avoid manually creating rules and better adapt to changing data access patterns."
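Through the S3-compatible API, such a rule might look like the standard S3 lifecycle-configuration shape below (a sketch; the rule ID, prefix, and 30-day threshold are made up, and whether R2 accepts this exact body and storage-class name is an assumption):

```json
{
  "Rules": [
    {
      "ID": "cool-down-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}
```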
A fairly unrelated point, but it's so strange how companies that underpin a lot of the internet struggle in the stock market. While we all wish we had sold our tech stocks in 2021, Cloudflare still hasn't recovered.
Cloudflare has a very dysfunctional sales pipeline. Their free, premium, and self-serve offerings might underpin the internet, but the highly profitable offerings gated behind their sales teams are not reaching many of the clients they should be selling to.
Magic Transit (bring your own ASN), classic website DDoS protection (above the Business $200 tier, which has low, undisclosed data limits in regions like New Zealand), and their ilk all require interacting with a sales rep, and unless you're paying five figures a month, they are disinterested.
There is a whole market out there, between $300 and $2,000 a month, that Cloudflare could tap without building new infrastructure, but it is actively being ignored.
I believe Cloudflare (and many other companies like it) has never produced operating income. They are growing, obviously important, and potentially very profitable in the future, but when discount rates are much higher and you add in some uncertainty, one could argue they don't look as hot as they used to.
It is bizarre. All the old-guard, foundations-of-society companies that the world relies on for modernity have stocks that barely budge but pay out decent dividends. Maybe tech stocks that have grown to such a position should consider paying dividends instead of chasing exponential stock-price growth while still clearly doing a lot of productive things. I expect shareholder boards prefer the chance of exponential wealth over steady returns and prevent this mindset from emerging.
Guzba|1 year ago
As an example I investigated, to put a custom domain in front of a B2 bucket they suggest using Cloudflare and CNAME-ing a bucket subdomain (eg f000.backblazeb2.com) https://www.backblaze.com/docs/cloud-storage-deliver-public-...
Well if f000.backblazeb2.com is used for any other people's buckets too, which appears to be the case, I guess I am now able to serve other people's files from my domain? This seems terrible.
iscoelho|1 year ago
There are other ways to compete.
tills13|1 year ago
So... something isn't right here. Maybe a Mechanical Turk setup, where a live human is fetching the object using Windows Explorer behind the scenes?
dannyw|1 year ago
The differences stack up for, say, a 1 GB video that goes viral and triggers terabytes in egress. You pay for the 1 GB, not the terabytes.
It’s also an optional tier.