$0.02/hr == approximately $14/mo if you leave it on.
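The back-of-the-envelope math behind that estimate (a quick sketch; assumes a 30-day month and ignores bandwidth and storage charges):

```python
HOURS_PER_MONTH = 24 * 30  # ~720 billable hours in a 30-day month

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Cost of leaving a single instance running all month."""
    return hourly_rate * hours

print(round(monthly_cost(0.02), 2))  # micro on-demand: 14.4
```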
Is this really $6/mo cheaper than a linode 512? That might be nice for personal projects.
I'm trying to teach myself some things that really need more than my local dev machine (puppet, backup strategies/more resilient code, learning cassandra). I've been running a bunch of VMs on my laptop, but my dev machine is a weakling and can hardly handle it.
It almost seems like I could just spin up a dozen of these little instances for $2.88 per waking day and teach myself under substantially more "real" conditions on the cheap. That's something I'd love to have as an option on linode, given that teaching myself is a large part of what I use it for.
Is there any reason that wouldn't work? Is this too complicated in practice?
One thing to note is that even EC2's "small" instance pales CPU-wise in comparison to even the low-end Linodes. When playing with EC2, it blew me away how ridiculously slow they are until you get to the higher tiers (oddly, the memory speed also seemed very poor; I had to wonder if even the memory was networked somehow).
I haven't seen anyone mention the cost of IO requests associated with EBS. Quoted from http://aws.amazon.com/ebs/ :
As an example, a medium sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/GB-month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second x $0.10 per million I/O).
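The quoted example works out as follows (a sketch of the arithmetic only; the $0.10/GB-month and $0.10-per-million-I/O rates are taken from the quote above):

```python
size_gb = 100
iops = 100
seconds_per_month = 30 * 24 * 3600           # 2,592,000 (~2.6 million)

storage_cost = size_gb * 0.10                # $0.10 per GB-month
requests = seconds_per_month * iops          # ~259 million I/O requests
request_cost = requests / 1e6 * 0.10         # $0.10 per million I/O

print(round(storage_cost, 2), round(request_cost, 2))  # 10.0 25.92
```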
If you use a 1-year reserved instance, it'll cost you about $115/year, which is substantially cheaper than the lowest-priced Linode. I'll be switching http://mbusreloaded.com/ after its Linode subscription runs out, as there's absolutely no way I use more than 10 GB of bandwidth per year.
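The ~$115 figure is consistent with the reserved micro pricing as I understand it (assumed here: a $54 one-time fee for the 1-year term plus roughly $0.007/hr for Linux; a sketch, not official numbers):

```python
upfront = 54.00    # one-time fee for the 1-year reserved term (assumed)
hourly = 0.007     # reserved hourly rate for a Linux micro (assumed)

annual = upfront + hourly * 24 * 365
print(round(annual, 2))  # 115.32
```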
Yup, you could. This is a fantastic move by Amazon; it's the one thing that's been missing from my experimentation world. Linode is fantastic for the servers I run permanently there, and I have no plans to remove them, but the commitment to an instance for a certain period of time is what keeps me from using it to work on deployment scenarios and whatnot. Amazon is fantastic for this (and has different constraints, of course).
This just brought AWS down into the realm of competition with the likes of Linode and others, and we can now figure out how to spin up a dozen nodes and work with all the innards of AWS without worrying about getting accidentally hosed by forgotten instances... works for me.
None that I can see. I'm going to buy one reserved instance, even though I have 7 hardware servers. I need to get better with the platform, and at $54/year the price is right.
One thing that is useful: if you like to keep several instances configured and 'ready to go' for your learning experiments, don't terminate them when you're done; just stop them. You don't pay for stopped instances (though you do pay 24 cents a day per unattached Elastic IP address).
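The 24-cents figure follows from the hourly charge for an Elastic IP that isn't attached to a running instance (assumed here to be $0.01/hr; a sketch):

```python
eip_idle_hourly = 0.01           # assumed $/hr for an unattached Elastic IP

per_day = eip_idle_hourly * 24   # charged only while the address sits unused
per_month = per_day * 30

print(round(per_day, 2), round(per_month, 2))  # 0.24 7.2
```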
Aside: How far off are we from renting our PCs in the cloud, and just having a local terminal? I know it's an old Failed Dream (mainframe-terminal, client-server, set-top box, etc.), but maybe we're getting closer...
It seems a bit ridiculous, because you still need a bit of local power for display and fast reactions, and current iPhones/netbooks could do with more power. But desktop PCs have been fast enough in that respect for a while. An advantage of the cloud is that as RAM and cycle prices drop, you get the benefit (more of them, or cheaper) without the hassle of physically upgrading. And bursty usage is available too, e.g. when compiling.
There's solid economics here: it's a sort of timesharing idea. Instead of cycles being wasted while you type, someone else uses them to compile. Even more compelling globally: someone else uses them while you sleep. The same argument works for sharing your desktop's own cycles, p2p, but a centralized cloud has admin advantages and other economies of scale.
I regularly use a cheap netbook as a terminal to an m1.large instance for development work. My scripts use spot instances to keep the price low--typically around $0.13/hr. Not for everyone, but it's saved me maybe $500 over buying a fast laptop. Unlike a laptop I can easily let others log into the system, leave it on for long compute jobs when I'm out of town, and don't have to worry as much about losing important data.
The "failed dream" has been successful, but never for long. As disk bandwidth and network bandwidth increase irregularly and leapfrog one another, it goes in and out of fashion.
If you take a look at the sad state of this country's broadband infrastructure, we are a long way off. I live in a "tech" city, I have two internet connections (Time Warner and ClearWire) patched together on a high-end router, and I still wait for things I shouldn't have to.
My pictures are all stored "in the cloud" (on facebook, or flickr, or photobucket). "My" music doesn't even really exist anymore, it is playlists on grooveshark, or stations on pandora.
Documents? Google documents.
I do all of my web development in Vim, running on VPSs.
The only stuff that doesn't happen "in the cloud" is specialized, media-creation type things, an activity that I would be surprised if more than 5% of the population participates in.
That CPU performance is consistent with what they give as the burst performance: up to 2 EC2 Compute Units, which should be 2x as much as the small instances' 1 ECU. Would be interesting to know how often/long you can burst, and what the non-burst baseline is, though.
Aside: I looked into AWS a couple weeks ago, to play with a simple webapp idea, but the myriad choices, acronyms and signups confused me, and there seemed to be no free options for getting started and gaining initial traction. It seems focused on sophisticated enterprise users (nothing wrong with that). So I went with Google's App Engine, which was much simpler, and has been great. These micro-instances seem the same.
If you can make your app fit within the significant limitations of App Engine, then it is a great service. I've used it for several projects from basic CMS to AJAX chat.
That said, sooner or later you'll want to do something that seems like it should be possible on App Engine (e.g., image transformations with BufferedImage) and you'll hit a brick wall.
That's when I turn to a generic Ubuntu image running on EC2. It's not free, but with spot pricing it's awfully cheap. I expect spot pricing for this newest micro size to stabilize at around $10 a month.
Maybe not as huge a deal for Linux instances, but this is HUGE for Windows users. There's nothing comparable elsewhere. The cheapest Rackspace Cloud instance is $0.08/hour for 1GB. There is no faster, cheaper way to spin up a Windows server than AWS now.
This might just make me replace the Slicehost instance I use for Mercurial and a build server. Elastic IP + EBS + micro instance makes a pretty nice low-end machine.
It always bothered me that for a development server you are basically overpaying for bandwidth. Who cares that I have 450 GB of bandwidth when I use maybe 30 GB per month?
It seems there is no local storage included in the price.
I did not try, but that probably means complicated setup, which is a pity: while the price could appeal to people launching side projects at minimal cost, like me, a side project also means that not much time can be devoted to sysadminery.
Go to the AWS console and launch one. It's pretty much as simple as it can be. They automatically use EBS so they're persistent. No special configuration required.
Don't forget that static IP is extra. All bandwidth is extra. Memory is at a fixed limit (some VPS will let you burst above your assigned memory). No software like Plesk to help configure the box. Those things can add up.
Edit 1: I mistakenly said that static IP was extra when it's not while in use.
If you book a reserved instance, the price for Linux gets as low as $0.01/hr ($54/yr).
It's a bit premature, but for spot instances, Windows ones are currently around $0.0135/hr (Linux history is not yet available). As with other instance types, it looks like spot instances get you the usual ~60% off the on-demand price.
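A quick check of that discount estimate (the $0.0135 Windows spot price and the $0.02/$0.03 on-demand rates are the ones quoted in this thread; a sketch):

```python
on_demand = {"linux": 0.02, "windows": 0.03}   # micro on-demand $/hr
windows_spot = 0.0135

discount = 1 - windows_spot / on_demand["windows"]
print(round(discount, 2))        # 0.55 -- in the ballpark of the usual ~60% off

# If Linux spot settles at a similar discount:
linux_spot_est = on_demand["linux"] * (1 - discount)
print(round(linux_spot_est, 4))  # 0.009
```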
So how often do EC2 instances go down? Is it at the hardware failure rate, or more often? Can I use this as a VPS replacement, and not have to worry about monitoring and fast restoration (of non-important projects)?
I think you'd need a pretty large sample set before you came to any reasonable conclusion.
I've been running ~10 nodes for AdGrok for the last 3 months, and we've already had one node fail (in that it wouldn't respond to any ec2 cli command to shut down or even terminate).
That hardware failure rate is about what I'd expect if it was our own colo and our own machines. Stuff always breaks.
It's difficult to say. I've worked at two companies that have used EC2 for various purposes. One had an instance that was used for dev work get "corrupted" on three separate occasions (I couldn't get the specifics, but there were definitely I/O issues with the instance storage), and the other has been using the same production box for more than a year.
The bottom line is that EC2 installations need to be designed as semi-permanent. My preferred strategy is similar to how Google talks about their hardware (when they do), that any one server can go down at any time, but the overall setup is resilient to failure.
We've been running hundreds of instances on EC2 for a couple years, and have never seen one just "go down." However, we will get notifications of "degraded instances." When an instance is degraded, you have some window (generally a couple days) to move the services running on that instance to another one. Even at the aforementioned scale, this happens maybe once every three to four months.
Can you use this as a VPS replacement? Probably. My guess is that your uptime will be no worse than some VPS provider. However, if you're storing information on the ephemeral storage, the onus is on you to get it to the new instance. I imagine that isn't generally the case on a VPS.
You may be able to mitigate this by using EBS (required in the case of these micro instances), but I've only used EBS a handful of times, and am no expert on the subject. If I understand their layout for these micro instances, it would simply be a matter of spinning up a new instance and spinning down the degraded node.
I've been using the high-cpu medium instances (c1.medium) for our rails nodes, just to avoid the slothy m1.small CPU. It seems like these are tailor-made for running either haproxy or your web tier!
I notice that, in a sense, AWS proves that the Total Cost of Ownership of Windows infrastructures is higher than the TCO of Linux infrastructures.
Amazon charges more for Windows instances across their entire offering. A Windows micro instance costs 50% more than a Linux micro instance ($.03/hr vs $.02/hr). This likely reflects Amazon's statistical studies on their EC2 datacenters that a Windows stack (OS + apps) uses on average more resources than a Linux stack, therefore more power costs, cooling costs, etc.
AWS is great, but you can't compare the AWS instances with other VPS or Dedicated offerings.
This is the only instance that is missing 64-bit.
Even Micro instances offer 64-bit.
https://spreadsheets.google.com/ccc?key=0AtNTMtkGNKnfdGJoajF...
$10/mo for 1 year reserved is pretty amazing.
Edit 2: Storage is also extra.
I'm interested to see what changes this makes to the Heroku offering. It seems to be a perfect fit for their product.
EDIT: $54 upfront and then $0.01/hr.