Lightsail instances can be used to "proxy" data from other AWS resources (e.g. EC2 instances or S3 buckets). Each Lightsail instance has a certain amount of data transfer included in its price (the $3.50 instance has 1TB, $5 has 2TB, $10 has 3TB, $20 has 4TB, $40 has 5TB). The best value (dollar per transferred data) is the $10 instance, which gives you 3TB of traffic.
Using the data provided by the post:
3TB worth of traffic from an EC2 would cost $276.48 (us-east-1).
3TB worth of traffic from an S3 bucket would cost $69.
Note: one downside of using Lightsail instances is that both ingress and egress traffic counts as "traffic".
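The arithmetic behind those figures can be sketched as follows (assuming the standard $0.09/GB EC2-to-internet rate for the first 10 TB, which matches the $276.48 figure quoted above):

```python
# Cost of pushing 3 TB to the internet: EC2 list-price egress vs. a
# $10 Lightsail plan that includes 3 TB of transfer (us-east-1).
GB_PER_TB = 1024

tb = 3
ec2_rate = 0.09                               # $/GB, first-10-TB tier
ec2_cost = tb * GB_PER_TB * ec2_rate          # 3072 GB * $0.09 = $276.48

lightsail_plan = 10.00                        # includes 3 TB of transfer
effective_rate = lightsail_plan / (tb * GB_PER_TB)

print(f"EC2 egress:          ${ec2_cost:.2f}")
print(f"Lightsail effective: ${effective_rate:.4f}/GB vs ${ec2_rate}/GB")
```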
> 51.3. You may not use Amazon Lightsail in a manner intended to avoid incurring data fees from other Services (e.g., proxying network traffic from Services to the public internet or other destinations or excessive data processing through load balancing or content delivery network (CDN) Services as described in the technical documentation), and if you do, we may throttle or suspend your data services or suspend your account.
You can download 1TB of data for free from AWS each month, as CloudFront has a free tier [1] with 1TB monthly egress included. Point it to S3 or whatever HTTP server you want and voila.
GCP patched a similar loophole [1] in 2023 presumably because some of their customers were abusing it. I'd expect AWS to do the same if this becomes widespread enough.
Unlikely. The "loophole" GCP patched was that you can use GCS to transfer data between regions on the same continent for free. This is already non-free on AWS. What OP mentioned is that transferring data between availability zones *in the same region* also costs $0.02 per GB and can be worked around.
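A rough sketch of that workaround's economics (the $0.01/GB-each-way cross-AZ rate, the S3 request prices, and the chunk size below are assumed us-east-1 list prices and an illustrative choice; S3 transfer within a region is free, so only the requests cost anything):

```python
# Moving 1 TB between two AZs in the same region: direct transfer vs.
# bouncing it through an S3 bucket in chunks.
GB = 1024

direct = GB * (0.01 + 0.01)                # $0.01/GB out + $0.01/GB in

chunk_mb = 100                             # assumed upload chunk size
chunks = GB * 1024 // chunk_mb             # ~10,485 requests each way
put_price, get_price = 0.005, 0.0004       # $ per 1,000 requests
via_s3 = chunks * (put_price + get_price) / 1000

print(f"direct cross-AZ:        ${direct:.2f}")
print(f"via S3 (requests only): ${via_s3:.2f}")
```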
This doesn't feel like a loophole though; it feels like they have optimized S3 and intend your EC2 instances to use S3 as storage. But maybe not as a transfer window; that is, they expect you to put your data there and leave it.
There are tons of these tricks you can use to cut costs and get resources for free. It's smart, but not reliable. It's the same type of hacking that leads to crypto mining on github actions via OSS repos.
Treat this as an interesting hacking exercise, but do not deploy a solution like this to production (or at least get your account manager's blessing first), lest you risk waking up to a terminated AWS account.
I have used this and other techniques for years and never gotten shut down. Passing through S3 is also generally more efficient for distributing data to multiple destinations than running some sync process.
S3 storage costs are charged per GB-month, so 1 TB × $0.023 per GB / 730 hrs per month ≈ $0.03 if the data was left in the bucket for an hour.[1]
However, it sounds like it was deleted almost right away. In that case the charge might be $0.03 / 60 if the data was around for a minute. Normally I would expect AWS to round this up to $0.01.
The TimedByteStorage value from the cost and usage report would be the ultimate determinant here.
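As a sanity check of that arithmetic (the rate here is the assumed us-east-1 S3 Standard list price):

```python
# S3 Standard storage is billed per GB-month; prorating one month down
# to an hour shows why transient storage is nearly free.
gb = 1024                       # 1 TB
rate = 0.023                    # $ per GB-month, S3 Standard
hours_per_month = 730

per_hour = gb * rate / hours_per_month
print(f"1 TB for one hour:   ${per_hour:.4f}")
print(f"1 TB for one minute: ${per_hour / 60:.6f}")
```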
This feels like the tech equivalent of tax avoidance.
If too many people do this - AWS will "just close the loophole".
There's not one AWS - there are probably dozens if not hundreds of AWS - each with their own KPIs. One wants to reduce your spend - but not tell you how to really reduce your spend.
If you make something complex enough (AWS) - it will be impossible for customers to optimize in any one factor - as everything is complected together.
This isn't a loophole. This is by design. AWS wants you to use specific services in specific ways, so they make it really cheap to do so. Using an endpoint for S3 is one of the ways they want you to use the S3 service.
Another example is using CloudFront. AWS wants you to use CloudFront, so they make CloudFront cheaper than other types of data egress.
An alternative to sophisticated cloud cost minimization systems is… don’t use the cloud. Host it yourself. Or use Cloudflare, which has zero egress fees. Or just rent cloud servers from one of the many much cheaper VPS hosting services, and don’t use all the expensive and complex cloud services designed to lock you in and drain cash from you at 9 or 12 or 17 cents per gigabyte.
Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
If you're at the point you're doing sophisticated cloud cost analysis you are doing the cloud right, because that is completely impossible anywhere else.
I swear the people who say go on-premise have no idea how much someone who will not treat their datacenter like a home lab costs in salary. Even Apple iCloud is in AWS and GCP because of how economical it is; if you think you have to go back on-prem, you suck at the cloud, or you just don't give a shit about reliability (start pricing up DDoS protection costs at anything higher than 10G and tell me the cloud is more expensive).
We spend $100k+ on AWS bandwidth and it's still cheaper than our DIA circuits, because we don't have to pay network engineers to manage 3 different AZs.
> Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
Which would mean that you lose part of the reason to use the cloud in the first place... A lot of orgs move to cloud-based hosting because it enables them to go way further in FinOps / cost control (amongst many other things).
This can make a lot of sense depending on your infra; if you have some fluctuation in your needs (storage, compute, etc.), a cloud-based solution can be a great fit.
At the end of the day, it is just a tool. I worked in places where I SSH'd into the prod bare-metal server to update our software, manage the firewall, check the storage... all of that manually. And I worked in places where we were using a cloud provider for most of our hosting needs. I also handled the transition from one to the other.
All I can say is: It's a tool. "The cloud" is no better or worse than a bare-metal server or a VPS. It really depends on your use-case. You should just do your due diligence and evaluate the reason why one would fit you more than the other, and reevaluate it from time to time depending on the changes in your environment.
>Seriously, if you’re at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.
The blog post's solution is relatively simple to put in place; if you're already locked in to AWS, dropping it will cost quite a lot, and this might be a great middle ground in some cases.
If you're actually using the features of cloud - i.e. managed services - then this involves building out an IT department with skills that many companies just don't have. And the cost of that will easily negate or exceed any savings, in many cases.
That's a big reason that the choice of cloud became such a no brainer for companies.
I don't think "host yourself" in this instance would've helped. I think AWS in this instance is operating at a loss. The author found a loophole in AWS pricing, and that's why it's so cheap. Doing it on their own would've been more expensive.
Now, as to why AWS prices things the way it does... we can only guess, but likely to promote one service over another.
Those VPS hosting services are a solution for a startup, but not to run your internet banking, airline company, or public SaaS product. Plus, their infinite egress bandwidth and data transfer claims are only true as long as you don't... use infinite egress bandwidth and data transfer.
The tipping point for self-hosting would be the point at which paying for full-time, 24/7 on-call sysadmins (salary + benefits) is less than your cloud hosting bill.
That's just not gonna happen for a lot of services.
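The tipping point described above can be made concrete with a back-of-the-envelope sketch; every number below is a made-up assumption for illustration, not a quoted price:

```python
# Illustrative break-even for self-hosting: a sustainable 24/7 on-call
# rotation needs several people, so the cloud bill has to be large
# before paying for that team wins. All figures are assumptions.
loaded_cost_per_admin = 180_000   # $/yr, salary + benefits + overhead
rotation_size = 4                 # minimum humane 24/7 on-call rotation
colo_hw_network = 120_000         # $/yr, amortized racks and hardware

floor = rotation_size * loaded_cost_per_admin + colo_hw_network
print(f"self-hosting floor: ${floor:,}/yr (~${floor / 12:,.0f}/mo)")
```

Under those assumptions, self-hosting only starts to pay off somewhere north of a $70k/month cloud bill, which supports the point that it's just not going to happen for a lot of services.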
If you're a heavy bandwidth user it's worth looking at Leaseweb, PhoenixNAP, Hetzner, OVH, and others who have ridiculously cheaper bandwidth pricing.
I remember a bizarre situation where the AWS sales guys wouldn't budge on bandwidth pricing even though the company wouldn't be viable at the standard prices.
Another trick is to use ECR. You can transfer 5TB out to the internet each month for free. The container image must be public, but you can encrypt the contents. Useful when storing media archives in Glacier.
I don't understand how AWS can keep ripping people off with these absurd data transfer fees, when there is Cloudflare R2 just right over there offering a 100 times better deal.
Data has "gravity" -- as in, it holds you down to where your data is, and you have to spend money to move it just like you have to spend money to escape gravity.
When all my VMs and containers are hosted in AWS, and S3 has rock-solid support no matter what language, framework, or setup I use, it becomes really tough to ask the team to use another vendor for object storage. If something goes wrong with R2 (data loss, slow transfer, etc.) I will get blamed (or at least asked for help). If S3 loses data or performs slowly in some case, people will just figure we're somehow using it wrong. And they will figure out how to make it better. Nobody gets blamed. And to be honest, data transfer fees are negligible if your business is creating any sort of value. You don't need to optimise them.
we just built a new feature for our pretty bandwidth-heavy SaaS on R2. Works pretty damn well, with massive savings indeed. We just use the AWS SDK (Node.js) pointed at the R2 endpoint.
I trust Cloudflare far less than AWS. Once my data is in AWS, all applications in the same region as the data can use it without paying anything in transfer costs.
Also, the prices he quotes are list prices; if you are a customer and you pre-purchase your bandwidth under an agreement, it gets _significantly_ less expensive.
R2 is still pretty new. I don't know how well it works in practice in terms of performance and availability. And of course durability, which is difficult if not impossible to judge. S3 has a much longer history and track record, so it has the advantage here. And if all your stuff is inside AWS already there are advantages to keeping the data closer. Depending on how the data is used, egress might also not always be such a major cost.
But yes, the moment you actually produce significant amounts of egress traffic it gets absurdly expensive. And I would expect competitors like R2 to gain ground if they can provide reasonably competitive reliability and performance.
> it’s almost as if S3 doesn’t charge you anything for transient storage? This is very unlike AWS, and I’m not sure how to explain this. I suspected that maybe the S3 free tier was hiding away costs, but - again, shockingly - my S3 storage free tier was totally unaffected by the experiment, none of it was consumed (as opposed to the requests free tier, which was 100% consumed).
It’s also possible their billing system can’t detect transient storage usage. Request billing would work differently from how billed storage is tracked. It depends on how billing is implemented, but that would be my guess. That may change in the future.
Maybe some sampling mechanism comes along and takes a snapshot once per hour.
Suppose you store the data there for 6 minutes. Then there's a 90% probability that the sampler misses it entirely and you pay $0. But there's a 10% probability that the sampler does catch it. Then you pay for a whole hour even though you used a fraction of that.
Over many events, it averages out close to actual usage[1]. In 9 out of 10 cases, you pay 0X actual usage. In 1 out of 10 cases, you pay 10X actual usage. (But you can't complain because you did agree to 1-hour increments.)
---
[1] Assuming no correlation between your timing and the sampler's timing. If you can evade the sampler by guessing when it runs and carefully timing your access, then you can save a few pennies at the risk of a ban.
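The expectation argument in the footnote can be checked with a quick simulation (the hourly sampler and 6-minute lifetime are the hypothetical numbers from above, not a documented AWS mechanism):

```python
# Simulate an hourly billing snapshot against objects that live for
# 6 minutes at a uniformly random offset within the hour: ~10% get
# caught and billed a full hour, so the average billed time converges
# to roughly the 6 minutes of actual usage.
import random

random.seed(42)
TRIALS = 100_000
LIFETIME_MIN = 6

caught = sum(random.uniform(0, 60) < LIFETIME_MIN for _ in range(TRIALS))
p_caught = caught / TRIALS
avg_billed_min = p_caught * 60

print(f"P(caught) = {p_caught:.3f}, avg billed = {avg_billed_min:.2f} min")
```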
This is clever. And as I understand it, one of the tricks WarpStream (https://www.warpstream.com) use to reduce the costs of operating a Kafka cluster.
This may be arguably nitpicking, but the following statement from TFA isn’t exactly the case:
> Moreover, uploading to S3 - in any storage class - is also free!
Depending upon the storage class, the number of API calls your software makes, and the capacity used, you may incur charges. This is very easy to do inadvertently when uploading large volumes of archival data directly to the S3 Glacier tiers. You absolutely will pay if you end up using millions of API calls to upload tens of millions of objects comprising tens of terabytes or more.
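An order-of-magnitude example of that warning (the $0.03-per-1,000-PUT rate is an assumed list price for direct uploads to a Glacier tier, and the object count is illustrative):

```python
# "Free" uploads aren't free at scale: request charges for pushing
# tens of millions of small objects straight into a Glacier tier.
objects = 20_000_000
put_per_1k = 0.03               # $ per 1,000 PUT requests (assumed)

request_cost = objects / 1_000 * put_per_1k
print(f"PUT requests alone: ${request_cost:,.2f}")
```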
Thanks for the feedback! I don't think it's nitpicking, you're right that it's misleadingly phrased - in fact, the only S3 costs I observed weren't storage at all, but rather the API calls.
I’ve been deploying 3xAZs in 3xRegions for a while now (years). The backing store being regional s3 buckets (keeping data in the local compliance region) and DDB with replication (opaque indexing and global data) and Lambda or Sfn for compute. So far data transfer hasn’t risen to the level of even tertiary concern (at scale). Perhaps because I don’t have video, docker or “AI” in play?
I've not seen any evidence that multi-AZ is more resilient. There's no history of an entire AZ going down that doesn't affect the entire region, at least that I can find on the internet within 15 minutes of googling.
In case you ever decide to return to AWS, its Cost Explorer is far from perfect, but it can show you where your expenses are coming from, especially if your costs are pennies. At the last re:Invent they even released daily granularity when grouping by resources (https://aws.amazon.com/blogs/aws-cloud-financial-management/...).
Offtopic but related: Has anyone noticed transient AWS routing issues as of late?
I’ve noticed on three or four occasions in the last three months that I got served a completely different SSL certificate than the domain I was visiting, of a domain that often could not be reached publicly, probably pointing to some organization's internal OTA environment. On every occasion, both the URL I wanted to visit and the site I was actually served were hosted in AWS. Then, less than a minute later, the issue was resolved.
I first thought it must be on my side, my DNS server malfunctioning or something, but the served sites could not be accessed publicly anyway, and I had the issue on two separate networks with two separate clients and separate DNS servers. I’ve had it with polar.co's internal environment, Bank of Ireland (iirc), multiple times with download.mozilla.org, and on a few other occasions.
I contacted AWS on Twitter about it, but just got some generic pointless response that I should open an incident; but I’m just some random user, not an AWS client. Somehow I could not get this across to the AWS support on the other side of Twitter.
I'm not sure how this could be removed - the fundamentals behind it are basic building blocks of S3.
Maybe raising the cost of transient storage? E.g., if you had to pay for a minimum of a day's storage; but even if that were the case, this would still be cost-effective, and at any rate it seems very unnatural for AWS to charge at such granularity.
+ I would guess that S3 is orders of magnitude more profitable for AWS than cross-AZ charges, so I'm not sure they'd consider it a loss-leader.
For those suggesting VPSs instead of cloud based solutions, how do you deal with high availability? Even for a small business you may need it to stay up at all times. With a VPS this is harder to accomplish.
Do you setup the same infrastructure in two or more VPS instances and then load balance? (say, [1]). Feels a bit of an ... artisanal solution, compared to using something like AWS ECS.
Someone in the thread said that if you're 'at the point that you’re doing sophisticated analysis of cloud costs, consider dropping the cloud.'
We've built https://nodeshift.com/ with the idea that the cloud should be affordable by default, without any additional optimization: you focus on your app with no concerns about costs or anything else.
Cost analysis has helped me build great infrastructure on AWS. The costs are communicating to you what is and is not efficient for AWS to do on your behalf. By analyzing the costs and working to reduce them, you also incidentally increase efficiency and, in some cases such as this one, workload durability.
I reduced them to 0 by not using AWS. This simple trick lets you install and configure dedicated servers that work just fine. Most of your auto scaling needs can be solved using a CDN. But by the time you reach such needs you'd have hired competent engineers to properly architect things - it will be cheaper than using amazon anyway.
I've been looking for a place to store files for backup. Already keeping a local copy on NAS, but I want another one to be remote. Would you guys recommend S3? Wouldn't be using any other services.
I use S3 with the DEEP_ARCHIVE storage class for disaster recovery. Costs go up if you have many thousands of files, so careful there. Hopefully I will never need to access the objects, and it's the cheapest I could find.
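For a sense of scale (the rates below are assumed us-east-1 list prices for S3 Glacier Deep Archive, and the backup size and file count are made up for illustration):

```python
# Deep Archive economics: storage is nearly free, but per-object
# request charges add up when a backup is many small files.
storage_rate = 0.00099          # $ per GB-month (assumed)
put_per_1k = 0.05               # $ per 1,000 PUT requests (assumed)

tb_stored = 2
files = 200_000

monthly = tb_stored * 1024 * storage_rate
upload_requests = files / 1_000 * put_per_1k

print(f"storage:  ${monthly:.2f}/month for {tb_stored} TB")
print(f"uploads:  ${upload_requests:.2f} one-time for {files:,} files")
```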
There's going to be a huge market for consultants to unwind people's cloud stacks and go back to simpler on-prem/colo (or Heroku-like) deployments in the coming years.
[1] It used to be 50GB per month for the first 12 months. It was changed to 1TB free forever shortly after Cloudflare posted https://blog.cloudflare.com/aws-egregious-egress
andruby|2 years ago
Nitpick: $5 for 2TB is better than $10 for 3TB.
[1]: https://cloud.google.com/storage/pricing-announce#network
[1] https://handbook.vantage.sh/aws/services/s3-pricing/
jakozaur|2 years ago
1. Ask for discounts if you are a big AWS customer (e.g., spending $1M+/year). At some point, the discounts for inter-AZ transfers were huge.
2. Put things in one AZ. Running your DB in the "b" zone and your only server in "a" is even worse than just standardizing on one zone.
3. When using multiple AZs, do load-aware AZ balancing.
throwaway167|2 years ago
There must be use cases for this, but I lack imagination. Cost? But not cost?
This whole "cloud bad" is just childish.
api|2 years ago
Cloud bandwidth pricing is… well the best analogies are very inappropriate for this site.
lijok|2 years ago
So what do I do if I'm at the point where I'm doing sophisticated analysis of on-prem costs? Do I move to the cloud?
barkingcat|2 years ago
then load the data into the datacentre and then just pay for the last sync / delta.
danielklnstein|2 years ago
I updated the phrasing.
playingalong|2 years ago
If all services, then things like whole or most of a single AZ being borked happens fairly often.
xbar|2 years ago
I'm back to managing my own systems. So much cheaper and less chance of nonlinear bills.
1: https://www.hetzner.com/cloud/load-balancer