Three feedback points for B2. In case anyone from BB is reading:
- (mentioned this before) invoices need improvement. No accountant will accept them as they are now. Please take a look at e.g. DigitalOcean for how to do this right. I've mentioned this at least five times to support, but nobody seems to care.
- upload speed is really bad from Europe (might be purely due to distance). Tried uploading one TB (@1.25MB/s), did around 25GB per day. Switched to a Hetzner Storage box, doing consistently 75GB per day right now.
- deleting your bucket (with data in it) requires installing a command-line tool, which is really inconvenient. It took me over 30 minutes and still threw some weird Java exception afterwards when clicking on "delete bucket". Why not put a simple "delete" button in the web UI?
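For context on that last point: there's no way around emptying the bucket first, since every file version has to be deleted before the bucket itself can go. A rough Python sketch of what the tool has to do against B2's native v1 API (endpoint and field names are from memory of the docs, so treat them as assumptions; the IDs are placeholders):

    import requests

    ACCOUNT_ID = "your-account-id"            # placeholder
    APPLICATION_KEY = "your-application-key"  # placeholder
    BUCKET_ID = "bucket-to-delete"            # placeholder

    # Authorize: returns the per-account API base URL and an auth token.
    auth = requests.get(
        "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
        auth=(ACCOUNT_ID, APPLICATION_KEY),
    ).json()
    api_url = auth["apiUrl"]
    headers = {"Authorization": auth["authorizationToken"]}

    # Delete every file version in the bucket, page by page.
    start_name, start_id = None, None
    while True:
        body = {"bucketId": BUCKET_ID}
        if start_name:
            body["startFileName"], body["startFileId"] = start_name, start_id
        page = requests.post(api_url + "/b2api/v1/b2_list_file_versions",
                             headers=headers, json=body).json()
        for f in page["files"]:
            requests.post(api_url + "/b2api/v1/b2_delete_file_version",
                          headers=headers,
                          json={"fileName": f["fileName"], "fileId": f["fileId"]})
        start_name, start_id = page.get("nextFileName"), page.get("nextFileId")
        if not start_name:
            break

    # Only an empty bucket can be deleted.
    requests.post(api_url + "/b2api/v1/b2_delete_bucket",
                  headers=headers,
                  json={"accountId": ACCOUNT_ID, "bucketId": BUCKET_ID})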
You should see the invoices from LastPass for "Enterprise"... it's hilarious. By default they don't send anything, the invoice link from their enterprise dashboard has been broken for months (support knows), and they give you some other weird login to work around it. It loads a poor HTML page with tables and a button to switch what kind of tax you want the invoice to show (VAT, GST, etc.).
It's so bad.
Side note: One thing that terrifies me about B2 is the ability to make a bucket public when it's private and intended to stay that way. Would be nice to have some kind of lock on it to say "never public", or at least make it harder to change than just flipping a setting on the same page as everything else.
Per-bucket keys would be nice too.
Btw, what is missing in the invoices?
For us to be able to move from S3 we would also need to be able to create multiple API keys where we can set which buckets they have access to.
> - upload speed is really bad from Europe (might be purely due to distance). Tried uploading one TB (@1.25MB/s), did around 25GB per day
That's really useful to know. I am looking to switch from Amazon Cloud Drive for my cloud backups (now that their pricing scheme has changed) and one criterion was much faster uploads. Sounds like B2 won't be it then, shame.
Deleting a bucket in an object store is surprisingly non-trivial. See Amazon’s UI - it works fine for small object counts but is very buggy for larger buckets.
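Right, and with S3 specifically, if versioning was ever enabled you have to enumerate every object version and delete marker page by page and remove them in batches of at most 1,000 before delete_bucket will succeed, which is roughly why a synchronous web UI struggles once a bucket gets large. A minimal boto3 sketch (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-bucket"  # placeholder

    # Every object version and delete marker has to be listed (1,000 per page)
    # and removed in batches of at most 1,000 before the bucket can be deleted.
    paginator = s3.get_paginator("list_object_versions")
    for page in paginator.paginate(Bucket=bucket):
        targets = [{"Key": v["Key"], "VersionId": v["VersionId"]}
                   for v in page.get("Versions", []) + page.get("DeleteMarkers", [])]
        for i in range(0, len(targets), 1000):
            s3.delete_objects(Bucket=bucket,
                              Delete={"Objects": targets[i:i + 1000]})

    s3.delete_bucket(Bucket=bucket)  # fails with BucketNotEmpty otherwise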
Do you have any info on what temperature your drives normally run at?
Would love to see some performance stats to see if read/write IOPS, MB/s, etc. change over time on a per-model basis.
Other than complete failure, do you collect any other early-warning failure data, such as ECC changes over the lifetime of a drive? Would be a fun big-data exercise for an intern to work on.
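Backblaze do publish the raw SMART data alongside these reports, and they've written before about watching a handful of attributes as early-warning signals: 5, 187, 188, 197 and 198, if I remember right. A quick sketch of pulling those for a local drive with smartmontools, assuming the usual smartctl -A table output:

    import subprocess

    # SMART attributes often used as early-warning signals: reallocated sectors,
    # reported uncorrectable errors, command timeouts, pending sectors, and
    # offline-uncorrectable sectors.
    WATCHED = {"5", "187", "188", "197", "198"}

    def watched_attributes(device="/dev/sda"):
        """Return {attribute_id: raw_value} for the attributes above."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        values = {}
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows start with the numeric ID; RAW_VALUE is the last column.
            if fields and fields[0] in WATCHED:
                values[fields[0]] = fields[-1]
        return values

    print(watched_attributes())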
I love these reports! I'm always in the market for new storage solutions and find these statistics very useful in determining which brands and drive sizes provide the best reliability.
One statistic I'd really like to see included is the disk usage of each drive as a percentage of its total capacity. Would there be any correlation between that and the failure rate?
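It would be an interesting cut of the data, though as far as I know the published Backblaze data doesn't include how full each drive is, so you'd need a dataset like the hypothetical one below (the file and column names are made up) to test it:

    import pandas as pd

    # Hypothetical input: one row per drive, with its average fill level and
    # whether it failed during the period (neither column exists in the
    # published Backblaze data, as far as I know).
    df = pd.read_csv("drives.csv")  # columns: serial, pct_used, failed (0/1)

    # Failure rate per fill-level bucket.
    df["usage_bucket"] = pd.cut(df["pct_used"], bins=[0, 25, 50, 75, 90, 100])
    print(df.groupby("usage_bucket")["failed"].agg(["count", "mean"]))

    # Crude overall check: correlation between fill level and failure.
    print(df["pct_used"].corr(df["failed"]))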
When a 12TB HDD fails, it's the equivalent of four 3TB HDDs failing at once.
Backblaze, from what I gather, mostly just leave dead drives in place for a while.
They have to maintain more racks to do that (how much does it cost them to have their drive arrays running at 90%?) but that drops the amortized cost of replacing the drives.
But for me it might be cheaper to have one big drive that fails twice as often, because the odds of a drive being dead on any given Tuesday and the costs of addressing that might be low enough to offset the price differential.
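A back-of-the-envelope version of that trade-off, with made-up prices and a made-up hands-on cost per dead drive (none of these are Backblaze figures); the point is just that a drive failing twice as often can still come out ahead on total cost:

    # All numbers below are made up, purely to illustrate the trade-off.
    AFR_SMALL = 0.015        # annualized failure rate of a 3TB drive
    AFR_BIG = 0.030          # "one big drive that fails twice as often"
    PRICE_SMALL, PRICE_BIG = 80.0, 280.0  # hypothetical drive prices (USD)
    HANDLING = 25.0          # hypothetical cost of dealing with one dead drive

    def yearly_cost(n_drives, afr, price, capex_years=5):
        expected_failures = n_drives * afr
        return (n_drives * price / capex_years             # amortized purchase
                + expected_failures * (price + HANDLING))  # replacements + labor

    # Same 12TB of capacity either way: four 3TB drives vs. one 12TB drive.
    print("four small drives:", yearly_cost(4, AFR_SMALL, PRICE_SMALL))
    print("one big drive:    ", yearly_cost(1, AFR_BIG, PRICE_BIG))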
Yev, would you ever consider opening the software and hardware design of the vault?
It seems that you could lead this and give the industry a standard approach, with vendors producing compatible hardware and others able to maintain their own vaults. Maybe this is a business advantage you don't want to lose right now; in that case I'm sorry to have asked :). If not, then I think you might benefit from lower prices if the hardware is readily available rather than a Backblaze custom production run.
There are many places that can't (for technical reasons) or won't (for political reasons) offload their storage to S3 and friends, and need it all locally most of the time; they would happily use a vault or two.
As for getting the servers, there's an entire company that popped up from one of our early storage pod producers, Protocase. You can buy servers (very reasonably priced, btw) based on our designs here -> http://www.45drives.com/.
I have a somewhat off-topic question: are the new 10/12 TB HDDs filled with helium? The last time I checked the Western Digital and Seagate sites I couldn't find any mention of it.
I see Seagate is continuing their track record of Being Shit, as usual. Yet again they're the only brand where a couple models are exceeding the normal failure rate by a factor of 50-100x.
Am I misreading the data (Cumulative 2013->2017) or something? The Seagate 4TB drives are horrific, yes, but they look to have completely solved their issues above 4TB.
e.g.
6TB - BB have 4 times as many Seagate drives (and nearly 4x the logged time) as WDC, but more WDC drives have failed (4.6% for WDC versus 1.2% for Seagate).
8TB - Annualised failure rate is lower than HGST's (1.1% & 1.2% versus 1.7%). Nearly 25,000 Seagate drives and 150-odd failures.
10TB - 0 failures over 13,720 drive-days so far.
12TB - 0 failures over 500 drive-days so far.
Maybe I'm misreading something, but it looks from their data that the major Seagate issues have been put behind them now?
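As I understand Backblaze's methodology, the annualized failure rate is simply failures scaled by cumulative drive-days, which is why the zeros on the newest models don't mean much yet; a quick illustration (the 8TB drive-day figure is my rough guess, not a number from the report):

    def annualized_failure_rate(failures, drive_days):
        """Failures per year of cumulative drive operation, as a percentage."""
        return 100.0 * failures * 365.0 / drive_days

    # Roughly the 8TB Seagate line above: ~150 failures over ~4.6M drive-days.
    print(annualized_failure_rate(150, 4_600_000))  # ~1.2%

    # The 10TB line: 0 failures over 13,720 drive-days is 0%, but the sample is
    # tiny; a single failure would already put it near 2.7%.
    print(annualized_failure_rate(1, 13_720))       # ~2.7%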
I even had RMA'd ones die within a week.
Those 10TB Seagates have to be pretty new still. I'm curious about them over time. It would be nice if they turn out like the old pre-2006 HDDs that hardly ever failed.
Anecdata, but my work's just passed the 1 year anniversary of running 12 of those and they're all still in perfect health. I'm very impressed by Seagate's enterprise drives.
Upvoted, but I'd like to suggest a title change, since this is just the Q3 report. The next report will be covering all of 2017.
>Our next drive stats post will be in January, when we’ll review the data for Q4 and all of 2017, and we’ll update our lifetime stats for all of the drives we have ever used. In addition, we’ll get our first real look at the 12 TB drives.