> I still don’t understand this industry’s obsession with predefined fixed limits on unrelated resources.
1. It keeps their billing simpler. Otherwise they would have to charge different rates for different resources (or make the difference up elsewhere), which is relatively confusing and increases support costs.
2. Much easier to forecast resources. If you know that you can fit X instances of type Y on a box, or W instances of type Z, it's easier to understand when/where you will need more hardware.
It's not perfect, I agree, but if an ad-hoc VPS product was profitable I'm sure we'd have seen it by now.
This makes sense when you keep in mind how the older clouds work (essentially VPS providers 101).
You have a server with some disks, some ram and some cpus. You aggregate the disks together, then split them to form the individual disks for the virtual machines. You then use kvm/xen to provide isolation as well as to split the ram/cpu between the virtual machines.
So to answer your question: Storage/ram/cpu is sold in lock step because otherwise there would be resources sitting on servers that are unable to be sold. Bandwidth isn't constrained like that because bandwidth isn't a thing tied to a machine.
There are some providers out there that don't lock ram/disk together. This is mostly because they use a distributed storage pool rather than local disks. This is significantly more complex and is a 'fairly' new addition to the scene (~2010?).
This is also why certain providers still charge you for RAM even when your machine is turned off, and why backups/migrations/plan upgrades can be a bit of a pain in the neck at times.
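To make the lock-step argument concrete, here's a rough back-of-the-envelope sketch in Python. The host and plan sizes are made up for illustration, not any particular provider's hardware: a plan shaped to divide the host evenly sells out the whole box, while letting customers buy RAM without matching disk strands disk that can no longer be sold.

    # Made-up numbers for illustration only.
    host = {"cores": 32, "ram_gb": 128, "disk_gb": 2000}

    # A fixed plan shaped so that 16 of them fill the host exactly.
    plan = {"cores": 2, "ram_gb": 8, "disk_gb": 125}
    fits = min(host[k] // plan[k] for k in host)
    leftover = {k: host[k] - fits * plan[k] for k in host}
    print(fits, leftover)   # 16 {'cores': 0, 'ram_gb': 0, 'disk_gb': 0}

    # Let customers buy RAM without disk and the box fills up on RAM,
    # stranding disk (and CPU) that is bolted to this machine and unsellable.
    ram_heavy = {"cores": 2, "ram_gb": 16, "disk_gb": 20}
    fits = min(host[k] // ram_heavy[k] for k in host)
    leftover = {k: host[k] - fits * ram_heavy[k] for k in host}
    print(fits, leftover)   # 8 {'cores': 16, 'ram_gb': 0, 'disk_gb': 1840}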
"Unfortunately given how physical resources are segmented if we gave users the ability to arbitrarily select CPU, RAM, or HDD independent of each other they would actually end up paying more for this 'custom' plan than using one of the pre-defined plans.
As I'm sure you are well aware, the resources are not equal and are not priced equally; it's cheaper to get more disk than to get more RAM, which is why we've done our best to cut it up into units and provide the best cost savings to our customers."
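For what it's worth, the claim in that quote is easy to sanity-check with a toy model. The per-unit rates and the 25% "custom plan" premium below are invented purely for illustration (they are not DigitalOcean's numbers); only the shape of the argument matters.

    # Invented per-unit rates -- not any provider's actual pricing.
    PER_GB_RAM = 5.00    # $/month
    PER_CORE   = 3.00
    PER_GB_SSD = 0.10

    def custom_price(ram_gb, cores, ssd_gb, premium=1.25):
        # Assume an arbitrary-shape premium standing in for the packing and
        # billing overhead the quote alludes to.
        base = ram_gb * PER_GB_RAM + cores * PER_CORE + ssd_gb * PER_GB_SSD
        return base * premium

    # A typical fixed plan of that era: 2 GB RAM / 2 cores / 40 GB SSD for $20.
    print(custom_price(2, 2, 40))   # 25.0 -- the "custom" shape comes out dearer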
Maybe it is easier to keep the maximum number of virtual machines running on your hardware when you only offer fixed sizes? You can plan your physical hardware so that there is always a certain number of VMs on one host and all CPU/memory/disk is allocated to them.
With more flexible allocation of resources the pool would start to fragment. Without local disk the defragmentation process would be fairly easy, as you could just restart the VMs on another host, but local disk makes this more difficult (or more annoying for the customer).
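A tiny first-fit simulation shows the fragmentation effect (numbers again made up): uniform fixed plans pack hosts exactly, while arbitrarily shaped VMs leave capacity stranded in one dimension that blocks the other from being sold.

    import random
    random.seed(1)

    HOST = {"ram_gb": 128, "disk_gb": 2000}

    def place(vms):
        """First-fit placement; returns the hosts used, with remaining capacity."""
        hosts = []
        for vm in vms:
            for h in hosts:
                if all(h[k] >= vm[k] for k in vm):
                    for k in vm:
                        h[k] -= vm[k]
                    break
            else:
                hosts.append({k: HOST[k] - vm[k] for k in HOST})
        return hosts

    fixed = [{"ram_gb": 8, "disk_gb": 125}] * 64          # 16 per host -> 4 full hosts
    mixed = [{"ram_gb": random.choice([4, 16, 32]),
              "disk_gb": random.choice([50, 400, 900])} for _ in range(64)]

    for name, vms in (("fixed", fixed), ("mixed", mixed)):
        hosts = place(vms)
        stranded = {k: sum(h[k] for h in hosts) for k in HOST}
        print(name, len(hosts), "hosts, stranded capacity:", stranded)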
I think it's more useful to be able to model a more real-world deployment, with storage costs etc. all built in, like you can do with PlanForCloud.
That gives you a monthly/yearly final number, which is more useful for comparing against other 'cloud' providers, or for making the point that often it's cheaper to use standard VM or dedicated server providers.
This tool is great, but only if you're looking solely at EC2, and I think that's a mistake these days.
Here's a different service I made that includes other regions, continuously updated spot instance prices, and a few other nice features: http://ec2pricing.iconara.info/
AWS has a full-blown calculator for every service they provide, not just EC2. It gives you monthly pricing, upfront costs [for reserved instances], etc.
These costs don't include the cost of provisioned IOPS, right? Without provisioned IOPS, the I/O performance is going to range from "very low" to "low", not "low" to "very high".
Am I missing something or does this not include options for the reserved instances? Because _that_ is the part of the EC2 pricing that is most confusing to me.
I just started using AWS yesterday and was pretty annoyed by the really counter-intuitive AWS website. I've used other IaaS providers before, but AWS's information is all over the place.
Having an option to show the effective monthly rate (incl. the amortized up front cost) for light/medium/heavy reserved instances would be great ... I always end up creating a spreadsheet for those.
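For reference, the spreadsheet math is just amortizing the upfront fee over the term and adding the running hourly cost. A quick sketch with illustrative numbers (not actual AWS prices):

    HOURS_PER_MONTH = 730   # roughly 24 * 365 / 12

    def effective_monthly(upfront, hourly, term_years):
        """Amortized upfront fee plus the running hourly cost, per month."""
        return upfront / (term_years * 12) + hourly * HOURS_PER_MONTH

    # Illustrative numbers: on-demand vs. a 1-year heavy-utilization reservation.
    print(effective_monthly(upfront=0,   hourly=0.240, term_years=1))   # 175.2 (on-demand)
    print(effective_monthly(upfront=676, hourly=0.054, term_years=1))   # ~95.75 (reserved)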
For applications that don't need guaranteed uptime or SSDs, you can save a lot of money with EC2 spot instances. I can get an m1.large instance and 80GB of EBS storage for about $27/month; comparable specs would cost $80 from DigitalOcean.
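That figure checks out roughly if you plug in the spot and EBS prices of the time (both approximate and historical):

    HOURS_PER_MONTH = 730

    spot_hourly = 0.026        # rough m1.large spot price at the time
    ebs_gb_month = 0.10        # standard EBS storage price back then
    monthly = spot_hourly * HOURS_PER_MONTH + 80 * ebs_gb_month
    print(round(monthly, 2))   # 26.98 -- in line with the ~$27/month quoted above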
10x in cost isn't necessarily the most important factor to a business.
From an individual's perspective, that's hard to understand. To a business, it really isn't that difficult to justify the cost.
One of my criticisms of HN commenters is the inability to empathize from a company's POV. Just because it doesn't make sense to you doesn't mean it doesn't make sense!
Companies (especially big ones) have different priorities than individuals. I may think it's stupid to 10x infrastructure costs by using AWS. A company may say "that's 1% of my budget, and it keeps my development running smoothly and developers happy with the familiarity and flexibility. I make that 10x cost back in 30 minutes, every day. Not worth optimizing."
Having just checked out the companies you mentioned, they are cheaper for small-to-midsize hosting solutions.
For running large memory- and bandwidth-hungry servers they just can't deliver. On Amazon you can get 256GB of RAM with dedicated 10Gbps clustered networking. None of the options you listed can go above 1Gbps, and it won't be dedicated (you'll get a 1Gbps port onto a shared network, and you'll be at the mercy of the traffic conditions inside their data center).
Amazon also has a huge amount of cloudy solutions which is not to be sneezed at.
stephenr | 12 years ago
Just because I want lots of RAM, why do I necessarily need lots of disk and/or lots of transfer?
Or vice versa, why do I need to pay for lots of transfer and RAM to get lots of disk?
I get that AWS has separate billing for data, but they still tie CPU, RAM and Disk space together, as do most “traditional” VPS hosts.
And even more confusing to me is why anyone with any sense would pay for these things?
rschmitty | 12 years ago
http://digitalocean.uservoice.com/forums/136585-digital-ocea...
"Unfortunately given how physical resources are segmented if we gave users the ability to arbitrarily select CPU, RAM, or HDD independent of each other they would actually end up paying more for this 'custom' plan than using one of the pre-defined plans.
As I'm sure you are a well aware the resources are not equal and are not priced equally, it's cheaper to get more disk, than to get more RAM, which is why we've done our best to cut it up into units and provide the best cost savings to our customers."
philfreo | 12 years ago
Just want better disk performance? Use instance storage or EBS with provisioned IOPS.
Just want a lot of memory? Pick an M2 or CR1 memory-optimized instance type and pay for more RAM without adding CPU.
Just want more CPU power? Stick with the same amount of RAM as the m1.large but add 5 times the CPU power with the c1.xlarge.
More disk space? On EBS you just pay per GB.
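A minimal boto3 sketch of mixing those knobs (the AMI ID, zone, and sizes are placeholders, and boto3 post-dates this thread, but the idea is the same): pick the instance type for the CPU/RAM you need, then buy disk separately as a provisioned-IOPS EBS volume.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # CPU-heavy? Choose the instance type for compute and keep RAM modest.
    resp = ec2.run_instances(
        ImageId="ami-12345678",     # placeholder AMI
        InstanceType="c1.xlarge",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Disk is bought separately: size and IOPS independent of the instance type.
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",   # must match the instance's zone
        Size=100,                        # GB
        VolumeType="io1",
        Iops=1000,
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id,
                      Device="/dev/sdf")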
iconara | 12 years ago
It's also available as an API: http://ec2pricing.herokuapp.com/api/v1/eu-west-1/
coolrhymes | 12 years ago
http://calculator.s3.amazonaws.com/calc5.html
bvancea | 12 years ago
Thanks a lot, really helpful for me at least!
immad | 12 years ago
AWS in general makes pricing opaque and hard to reason about; more simple tools like this would be useful.
ye | 12 years ago
EC2 is ridiculously expensive, considering the prices at Linode, Hetzner, DigitalOcean, OVH, LeaseWeb and 100tb.