In the same vein, a South African carrier pigeon beat a popular ADSL connection in 2009. It carried a 4GB memory stick 60 miles in about an hour, and it took another hour to upload the data to their system. In the same amount of time, the ADSL connection had completed only 4% of the transfer.
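A rough back-of-envelope (Python) using only the figures quoted above -- 4GB stick, roughly two hours door-to-upload, ADSL at 4% in the same window; the actual line speed wasn't reported, so treat this as an estimate:

    # Effective throughput of the pigeon vs. the ADSL line, from the quoted figures.
    payload_bits = 4 * 8 * 10**9                    # 4 GB memory stick
    window_s = 2 * 3600                             # ~1 h flight + ~1 h upload
    pigeon_bps = payload_bits / window_s
    adsl_bps = (0.04 * payload_bits) / window_s     # only 4% transferred in that window
    print(f"pigeon ~{pigeon_bps / 1e6:.1f} Mbit/s, ADSL ~{adsl_bps / 1e6:.2f} Mbit/s")
    # -> pigeon ~4.4 Mbit/s, ADSL ~0.18 Mbit/s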
Never underestimate the street value either. I wonder if they have an armed escort for this truck -- the hardware must cost on the order of $10-20 million, and the data itself could be worth many multiples of that. Could make a great heist movie.
As other commenters noted, it's fascinating that no matter how far networking technology advances, we'll always have a variation of "sneakernet"[1] to bypass the limitations of the network. The sneakernet just evolves from floppies to 45-foot shipping containers.
If humans later colonize Mars and want the full 50-terabyte copy of Wikipedia in the biosphere, it's faster to send some hard drives as a rocket payload on a 6-month journey than to try to transfer it via the 32kbps uplink[2], which would take ~500 years.
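A quick sanity check on the transfer-time claim (idealized: a steady 32kbps link, no protocol overhead or outages); it lands at roughly four centuries, the same order of magnitude as the ~500-year figure:

    # 50 TB of Wikipedia over a 32 kbps Mars uplink, idealized.
    bits = 50 * 10**12 * 8          # 50 TB payload
    seconds = bits / 32_000         # 32 kbps
    years = seconds / (365 * 24 * 3600)
    print(f"~{years:.0f} years")    # -> ~396 years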
A screenshot of Tanenbaum's "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway" argument with the context around it from his book.
I just realized how much of a limited resource the Mars -> Earth internet connection is going to be in the future. It'll need to be limited to only absolutely necessary communications for a long time.
Assuming we get enough people on Mars, they'll have their own internet over there.
I feel unusual when I see articles like this. I deploy "workloads" that require instances, auto scaling, Multi-AZ, etc. It makes my projects feel minuscule compared to the scale of companies that actually use something like this! I wonder how many companies will actually use this in any given year.
I imagine surprisingly many. I have operations that are not remotely on that scale: 8 employees with total data on the order of tens of terabytes. I found that to be a surprisingly heavy density of data per employee. A 1,000-employee company with the same density is at petabyte scale.
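The scaling math, with an illustrative total (the comment only says "tens of terabytes", so the 40 TB here is an assumption):

    # Data-per-employee density scaled up; 40 TB stands in for "tens of terabytes".
    per_employee_tb = 40 / 8
    big_company_pb = per_employee_tb * 1000 / 1000
    print(f"{per_employee_tb:.0f} TB/employee -> ~{big_company_pb:.0f} PB at 1,000 employees")
    # -> 5 TB/employee -> ~5 PB at 1,000 employees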
I wonder why this makes sense. Isn't it more useful to get a few hundred Snowballs and ship them via FedEx? You can transfer in parallel and should hit the same speed as with Snowmobile. It's at the DC the next day, and the data will get into S3 faster than by truck. Also, the economies of scale seem unlikely to ever pay off for Snowmobile; they're more likely to for Snowball.
At the same time, logistics (including insurance and security) is handled by companies that are very good at it. FedEx, DHL and the like offer physical security services for goods if you need them, in addition to encryption.
I think it's mostly a PR move. They will probably find a few clients who somehow make use of one truck, but I don't think it's more efficient than Snowballs.
Installing, powering and cabling "a few hundred" of anything in a datacenter is a big deal. You probably don't have room. You may not have power. You have to deal with hundreds of boxes, cardboard isn't allowed on the datacenter floor (ideally), and just mucking around on the loading dock wrangling stupid stuff like shipping labels is going to suck up a ton of time.
[I'm a C++ dev who likes to help design and build datacenters. It's fun.]
"One Snowmobile can transport up to one hundred petabytes of data in a single trip, the equivalent of using about 1,250 AWS Snowball devices." -- https://aws.amazon.com/snowmobile/
You'd have to find a thousand 1GbE ports in your data center (unless Amazon would ship an expensive switch along with the Snowballs) -- that's about two server cages' (10 racks × 42U) worth. You'd also have to find a lot of power -- while a Snowmobile can bring its own generator.
I doubt there will be a lot of demand for Snowmobiles, though.
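The 1,250 figure is straightforward division (80 TB of usable capacity per Snowball at the time -- an assumption here, taken from the era's spec), and it's what drives the port and power problem above:

    # Snowballs needed to match one 100 PB Snowmobile.
    snowmobile_tb = 100 * 1000
    snowball_tb = 80
    print(f"{snowmobile_tb / snowball_tb:.0f} Snowball devices")   # -> 1250
    # Every device being loaded wants its own network drop and power feed,
    # so even shipping them in waves of a few hundred means hundreds of
    # live 1GbE ports, cables and outlets on the floor at once.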
> However, customers with exabyte-scale on-premises storage look at the 80 TB, do the math, and realize that an all-out data migration would still require lots of devices and some headache-inducing logistics.
> Isn't it more useful to get a few hundred snowballs and ship them via Fedex?
Corporate types, spending loads of company money, are more interested in convenience and turnkey solutions than in rigs that cost less but give them more to think about, or come with other issues.
If somehow a technology is developed to allow local storage cost to drop by a factor of 10, don't you think S3 would make use of the same technology to stay competitive?
Cloud storage is a commodity these days. The market saves you in this case -- if one cloud provider didn't use the 10x technology and pass along the 10x savings to the customer, another company would do it and steal all of their customers.
Your standalone 4TB Western Digital hard drive isn't going to give you 99.999999999% durability and replicate your data across multiple datacenters. $0.03/GB is still insanely cheap IMO.
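For scale, taking the quoted $0.03/GB at face value as a monthly rate (roughly S3 standard pricing at the time; both the rate and the per-month reading are assumptions):

    # What $0.03/GB-month looks like at different scales.
    rate = 0.03
    for label, gb in [("1 TB", 1_000), ("1 PB", 1_000_000), ("100 PB", 100_000_000)]:
        print(f"{label}: ${gb * rate:,.0f}/month")
    # -> 1 TB: $30/month, 1 PB: $30,000/month, 100 PB: $3,000,000/month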
I suspect that Amazon will not only be dropping prices, but will be adding new ways to store data with variable pricing (e.g. new tiers as well as reduced redundancy). Not to mention that if you have 100PB in AWS you might get preferential pricing? Google recently increased their spread and now have 4 tiers. There is also an interesting article about Glacier storage and pricing here https://storagemojo.com/2014/04/25/amazons-glacier-secret-bd....
Consider the cost to store 100PB on site with redundant disks, geographically distributed, including property leases, power, security, and staffing. $700k might be considerably cheaper.
I'm at re:invent and they have a "making of" video next to a demo unit. They only show the physical construction of the power distribution and the raised floor. (Nothing about the racks or what's in them.)
It also appears that you never have access to the inside where the racks are. You can only access the last ~4 feet for power and data connections.
This really helps me, personally, get a more concrete sense of 'data'. It's not about files or records; data has a 'volume'. A few PB fills up a shipping container.
Next time my client asks me how "much" a PB is, I can just say "about a shipping container's worth".
350 kW seems a bit high for something that should effectively be an append-only file system. I would have expected 95% of the trailer to be in standby at any point in time.
100PB of storage and a total network capacity of 1 Tb/s across multiple 40 Gb/s links means pretty serious hardware, before even considering the security and video surveillance systems.
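A rough plausibility check on the 350 kW figure; every input below is a guess (drive size, per-drive draw, overhead for servers, switches and cooling), not a Snowmobile spec:

    # Does 350 kW pass the smell test for ~100 PB of spinning disk?
    drives = 100 * 1000 / 8               # assume 8 TB drives
    disk_kw = drives * 8 / 1000           # assume ~8 W per drive
    overhead = 2.5                        # servers, networking, cooling, margin (guess)
    print(f"{drives:.0f} drives, ~{disk_kw:.0f} kW of disks, ~{disk_kw * overhead:.0f} kW total")
    # -> 12500 drives, ~100 kW of disks, ~250 kW total, so a 350 kW peak isn't crazy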
Now this is real persistent container storage. ;) And persistent it will be, because even at 1Tb/s it will take you over a week to load the data onto it. The bandwidth while in transit is phenomenal, but before that it will be sitting at your DC's loading dock for quite a while.
So, suppose we could move that same 100 PB in a standard 24" cube FedEx box, collecting the data in less than a week and using only two 110V power connections. Would that be interesting? Oh, and it takes less than a single rack of gear.
> Each Snowmobile includes a network cable connected to a high-speed switch capable of supporting 1 Tb/second of data transfer spread across multiple 40 Gb/second connections. Assuming that your existing network can transfer data at that rate, you can fill a Snowmobile in about 10 days.
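The "about 10 days" checks out if the full 1 Tb/s can actually be sustained end to end (the quote's own caveat):

    # Time to fill 100 PB at a sustained 1 Tb/s, ignoring protocol overhead.
    bits = 100 * 10**15 * 8
    days = bits / 1e12 / 86400
    print(f"~{days:.1f} days")   # -> ~9.3 days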
http://news.bbc.co.uk/2/hi/africa/8248056.stm
It was an obvious publicity stunt meant to draw attention to slow connection speeds, but it does illustrate the issue well.
[1] https://en.wikipedia.org/wiki/Sneakernet
[2] http://mars.nasa.gov/msl/mission/communicationwithearth/data...
http://imgur.com/a/qi4BP
http://www.space.com/34824-nasa-x-ray-tech-deep-space-commun...
https://news.ycombinator.com/item?id=13063277
Then what happens in 5 years if local storage costs drop by a factor of 10, but S3 prices don't?
Big risk, no?
PSA: consume the full content before you comment on the content.
Around this time last year Backblaze had 200PB of customer backups. They described storing it on 54,675 hard drives across 1,215 Storage Pods.
So imagine 600 storage pods or half of BackBlaze's entire operation, for just one customer. Insane.
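Scaling their published numbers down to a single Snowmobile load (assuming the same drive sizes and pod density):

    # Backblaze's published ratios applied to one 100 PB Snowmobile.
    bb_pb, bb_drives, bb_pods = 200, 54_675, 1_215
    scale = 100 / bb_pb
    print(f"~{bb_drives * scale:.0f} drives in ~{bb_pods * scale:.0f} pods")
    # -> ~27338 drives in ~608 pods, i.e. the "600 storage pods" above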
I wish they gave more details as to what hardware was in there - are there any pictures of what the trailer looks like on the inside?
Quite a dramatic illustration of the increase in data usage.
If one extrapolates, the next stop will be a train, and then a container ship full of hard drives.
If they fill the thing in 30 days, that averages out to roughly 40 GB/s, way faster than 10GbE or Fibre Channel.