The value proposition of these instance types seems to be entirely focused on CPU burst performance, with no local storage, no EBS optimization (though there are provisioned IOPS), and only moderate network performance that is shared.
Relatively poor disk performance is somewhat expected. I'm not sure how fair it is to compare it to instance volumes on other platforms, given the significantly reduced flexibility those bring with them.
With I/O being the top bottleneck in modern databases and web applications, a lot of what people call "unpredictable performance" or "virtualization overhead" has mostly to do with sharing (and thin provisioning) storage devices between multiple servers/nodes.
I think there would be a huge market for a performance-oriented VPS provider who could provide each node with its own, dedicated hard drives/SSDs. All major virtualisation tech (KVM/Xen) already supports raw disk mode.
Obviously, space and SATA ports inside servers are an expensive commodity with off-the-shelf hardware, so this project would require at least some custom hardware to offer competitive pricing. I think the tiny mPCIE SSDs sometimes found in laptops would be a good area to explore.
EC2 is generally very expensive for CPU. RAM and storage are okay but CPU is crazy.
Anyone know what recommends EC2 over Digital Ocean, Vultr, Linode, etc.? Are they more reliable? Enterprise features? Network bandwidth? Cause right now they look hugely overpriced.
I've hosted on Digital Ocean and Vultr for some time and my uptime is great on both. I run constant ping testing and I do see little glitches from time to time between data centers, but that could be network weather on the global backbone. (I have a geo-distributed architecture so there's stuff running at five different locations.)
Any thoughts on how this compares to simply spinning up a standard instance for a few hours then turning it off when you don't need it?
I run a service that needs to run about 72 hours' worth of processing each day, and it all needs to happen during a 3-hour window. That's a natural fit for spinning up a couple dozen instances, then killing them when they finish.
I'd love to see a comparison of what would happen if I kept the same amount of compute power on standby 24/7 using this new instance type.
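Here's a rough sketch of that comparison. The prices are the ones quoted elsewhere in this thread (mid-2014 figures) and the ~10% t2.micro baseline comes from the announcement, so treat all of them as assumptions rather than current rates:

```python
# Rough cost comparison of the two strategies above. Prices are the ones
# quoted in this thread (mid-2014) and are assumptions, not current rates.
ON_DEMAND_C3_XLARGE = 0.21   # $/hour, 4 vCPUs (figure quoted below in the thread)
T2_MICRO_MONTHLY = 9.50      # $/month, 1 burstable vCPU

work_vcpu_hours_per_day = 72   # "72 hours' worth of processing each day"
window_hours = 3               # the 3-hour window

# Strategy 1: enough c3.xlarge instances to finish inside the window.
instances_needed = work_vcpu_hours_per_day / (4 * window_hours)    # 6.0
batch_cost_per_month = instances_needed * window_hours * ON_DEMAND_C3_XLARGE * 30

# Strategy 2: t2.micros on standby 24/7. Naively, 3 always-on vCPUs cover
# 72 vCPU-hours/day -- but a t2.micro's baseline is only ~10% of a core,
# so actually sustaining full load needs ~10x as many instances.
naive_standby = work_vcpu_hours_per_day / 24                        # 3.0
realistic_standby_cost = (naive_standby / 0.10) * T2_MICRO_MONTHLY  # ~$285/mo
```

On these assumptions the batch approach comes out at roughly $113/month versus roughly $285/month for an always-on t2 fleet that can actually sustain the load, so for a concentrated daily job the spin-up-and-kill strategy still wins.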
It seems like this fits two needs, for smaller companies and/or people just getting started with EC2.
1. Laziness. Which I don't necessarily mean in a pejorative sense. Maybe someone just doesn't have time, yet, to learn/configure/maintain spinning up an instance for limited times.
2. Single instance. To spin up an instance, you need another computer. If you want that "manager" computer to be an instance at EC2, too, now you need two instances. With this approach, you can set up just one instance and get much of the same economic benefit.
EDIT: Also...
3. Predictable cost. If your manual spun-up instance turns out to need to run for 4 hours instead of 2, you get a bigger bill. With the t2 instances, you'll get a slower compute (if you run out of "credits") but not a bigger bill.
Again, this probably appeals most to small/new customers?
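Point 3 can be sketched as a toy model (the prices are placeholders taken from figures elsewhere in this thread):

```python
# Toy model of the "predictable cost" point: the on-demand bill scales
# with hours used, while the t2 bill is flat and an overrun costs you
# speed instead of money. Prices are assumed placeholders.
def on_demand_bill(hours, rate=0.21):
    return hours * rate          # every extra hour costs more

def t2_bill(hours, monthly=9.50):
    return monthly               # flat regardless of hours used
```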
Well a c3.xlarge is $0.21 per hour (their "compute optimized current generation cpu" line).
24 instances * $0.21 = $5.04 per hour
You could burn those for almost two hours to match the lowest $9.50 per month cost of what they're talking about in the blog.
The c3 approach would give you 96 vCPUs during that time. The t2.micro, for $9.36 or whatever per month, gives you one vCPU. I'd have to strongly favor spinning up 24 to 48 instances of the c3.xlarge and clocking the job in one to three hours if possible.
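Spelling out that fleet math (rates as quoted above; c3.xlarge has 4 vCPUs):

```python
RATE = 0.21        # $/hr for a c3.xlarge, as quoted above
fleet = 24

fleet_hourly = fleet * RATE        # $5.04/hr for all 24 instances
fleet_vcpus = fleet * 4            # 4 vCPUs each -> 96 vCPUs total
two_hour_burn = 2 * fleet_hourly   # ~$10.08, vs ~$9.50/mo for one t2
```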
If you are running a service that needs to be up all the time this is ideal. For other scenarios, what you describe is probably appropriate.
With burstable instances, you accumulate 6 CPU credits every hour, so you can run at 100% load for an hour once every 10 hours on a t2.small (once every 13.3 hours for t2.medium; once every 15 hours for t2.micro).
It would be nice to have a credit window greater than 24 hours though.
EDIT: ColinCera pointed out the math is incorrect. Updated and removed erroneous conclusion.
A t2.medium starts with enough credits for about 15 minutes of 2 core CPU saturation and accumulates 12 minutes/hour thereafter. In non-burst mode it is about 5X slower. For this type of workload you'd likely be better off with a c3 or m3. t2 is a better fit for long term usage with periodic spikes (20% or less of total operational time).
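Replaying those t2.medium figures (one CPU credit is one vCPU at 100% for one minute; the per-hour and initial numbers are the ones stated in the comment above, not official values):

```python
# t2.medium credit math, using the figures from the comment above.
cores = 2
earn_per_hour = 12 * cores    # "12 minutes/hour" of 2-core burst = 24 credits/hr
initial = 15 * cores          # "~15 minutes of 2 core CPU saturation" = 30 credits

def full_burst_minutes(idle_hours):
    """Minutes of full 2-core saturation available after idling."""
    return (initial + idle_hours * earn_per_hour) / cores

# Long-term sustainable duty cycle at full 2-core load:
sustainable_duty = earn_per_hour / (60 * cores)   # 0.2 -> the "20%" guideline
```

The 0.2 duty cycle is where the "20% or less of total operational time" rule of thumb comes from.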
Super interesting. If I did the math right, a 3-year heavy reserved t2.micro instance comes out to $4.48/mo, which is cost-competitive with Digital Ocean. The proof will come in the benchmarks, but this may become my preferred hosting solution.
It's $77 for 1 year reserved if I'm reading it correctly. That's $6.44 per month for an instance with double the RAM of the DO $5 instance. The specs look like the size that DO is charging $10 for currently. For a 3 year reserved instance it's $4.48 a month for double the size of the DO $5 instance. There's also a free tier, so the first year is free to try it out.
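Amortizing the figures quoted above (this assumes the $77 is all-upfront; real reserved pricing may add an hourly charge, which would explain the small gap between $77/12 and the $6.44 quoted):

```python
# Amortizing the reserved figures from the comment above.
one_year_upfront = 77.0
per_month_1yr = one_year_upfront / 12    # ~$6.42/mo
per_month_3yr = 4.48                     # quoted 3-year figure
three_year_outlay = per_month_3yr * 36   # ~$161.28 over the full term
```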
DO was competitive with EC2 on price but not on features (and certainly not on security), now with the price advantage gone...
The issue is that this offering is complex to understand as opposed to DO which is incredibly simple to understand. It is actually pretty funny how hard it is to understand this offering from AWS, it takes many paragraphs of reading to figure it out.
I would replace 'super interesting' with 'Super complex'. It's an example of how you can make the price of a $10-40/mo server complex to the extent that you need to read the blog post numerous times before you understand the construct.
And even then, one still needs to factor in the 'other' costs like I/O or IOPs, disk (persistent/EBS), IPs, internet and inter-region data transfer… before you understand the real cost.
And then you need to compare to other instance types (which soon will cover the full alphabet -- c, cg, cr, g, h, i, m, r, t… ) and then other providers.
You still have several unresolved issues:
1. Are your assumptions on usage (CPU, I/O, internet, etc.) correct? Will they change?
2. How do I compare performance across providers for a given VM specification?
3. Can I get support when I need it?
And I am sure there are others.
It certainly means there is room for other players who just make it simple, whether they are infrastructure folk (like DO/Linode etc.) or platform plays that make the pricing understandable by the audience they are trying to target (like Heroku/Ninefold).
> The T2 instances use Hardware Virtualization (HVM) in order to get the best possible performance from the underlying CPU and you will need to use an HVM AMI.
I've always used paravirtual AMIs, as I understood that gets the best performance for a Linux box.
Given that I try to use the same self-baked base AMIs for various purposes (and instance sizes), I would either have to mix and match or switch everything to HVM. However, I have no clue what the practical consequences of that would be. Can anybody enlighten me?
HVM gives the best performance because you can take advantage of certain hardware features through the hypervisor. It's basically more direct access to the hardware, which makes it faster as you don't have as much hypervisor overhead. Amazon's "enhanced" networking and SSDs need HVM to get a good chunk of performance.
Yes, you'd have to build new AMIs with HVM. It'd be easiest if you had some kind of configuration management so you didn't need as many AMIs baked. When I build machines, I use a script to handle the creation and mounting of any extra volumes on a machine that I consider "nonstandard". I have only 2 custom AMIs: one for PV and the other for HVM. You'll need at least both, because certain instances (t1.micro and m1.small come to mind) can only use PV.
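A minimal sketch of that "one PV AMI, one HVM AMI" setup. The AMI IDs here are hypothetical placeholders, and the PV-only list only covers the older families named in the comment (t1, m1); t2 requires HVM:

```python
# Pick the right base image for an instance type, per the two-AMI
# approach described above. IDs are placeholders, not real AMIs.
PV_AMI = "ami-pv-base"     # hypothetical ID for the paravirtual base image
HVM_AMI = "ami-hvm-base"   # hypothetical ID for the HVM base image

PV_ONLY_FAMILIES = {"t1", "m1"}

def pick_ami(instance_type):
    family = instance_type.split(".")[0]
    return PV_AMI if family in PV_ONLY_FAMILIES else HVM_AMI
```

So a launch script keys off the instance family and everything else (packages, volumes) stays in configuration management rather than baked into many AMIs.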
HVM vs PV is confusing because as Xen improves, the performance characteristics of the two modes change. Brendan Gregg covered the differences in quite some detail in a recent blog post [1]. Basically, if you are running a new enough kernel on the guest OS, you will get better performance from HVM.
This looks like it's a reaction to (and effective solution for) the problem with t1 instances that made them largely useless (or a gamble at best) due to sharing a CPU with instances that run at full load all the time.
Any recommendations for software builds? I usually go with c3.4xlarge for building Android platforms but wondering if there are alternatives out there.
That was my understanding too. At $9.50/mo, a server with 96GB of RAM would bring in $912/mo.
A quick click around Dell's site finds that a mid-range 1U rackmount server (R320) with that much RAM costs $3,135.
So a back-of-the-envelope calculation makes it seem workable, especially for high-RAM low-CPU configurations, which is what this is.
There are other tricks that they might be employing, such as swapping out part of RAM to SSDs behind the scenes, as well as compressing RAM contents. On low-load servers like these, typical usage would imply that RAM would be mostly static.
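The back-of-the-envelope math on the numbers above (hardware cost only; power, networking, and staff are ignored here):

```python
# A 96 GB box sliced into 96 x 1 GB t2.micros at $9.50/mo each,
# against the ~$3,135 Dell R320 quoted above.
instances = 96
revenue_per_month = instances * 9.50               # $912/mo, as stated above
server_cost = 3135.0
payback_months = server_cost / revenue_per_month   # ~3.4 months
```

Even with generous allowances for the ignored costs, a hardware payback of a few months leaves plenty of margin, which supports the "workable" conclusion.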
[1]: http://www.brendangregg.com/blog/2014-05-07/what-color-is-yo...
The thrashing will increase gradually until user experience is unpleasant.