
A $1k AWS mistake

339 points | thecodemonkey | 3 months ago | geocod.io

265 comments


Havoc|3 months ago

These sorts of things show up about once a day across the three big cloud subreddits, often with larger amounts.

And it’s always the same - clouds refuse to provide anything more than alerts (that are delayed) and your only option is prayer and begging for mercy.

Followed by people claiming with absolute certainty that it’s literally technically impossible to provide hard-capped accounts to tinkerers, despite accounts like that already existing (some Azure accounts are hard-capped by amount, but ofc that’s not loudly advertised).

Waterluvian|3 months ago

This might be stating the obvious, but I think that the lack of half-decent cost controls is not intentionally malicious. There is no mustache-twirling villain who has a great idea on how to !@#$ people out of their money. I think it's the interplay between incompetence and having absolutely no incentive to do anything about it (which is still a form of malice).

I've used AWS for about 10 years and am by no means an expert, but I've seen all kinds of ugly cracks and discontinuities in design and operation among the services. AWS has felt like a handful of very good ideas, designed, built, and maintained by completely separate teams, littered with a whole ton of "I need my promotion to VP" bad ideas that build on top of the good ones in increasingly hacky ways.

And in any sufficiently large tech organization, there won't be anyone at a level of power who can rattle cages about a problem like this who will want to be the one to actually do it. No "VP of Such and Such" will spend their political capital stressing how critical it is that they fix the thing that will make a whole bunch of KPIs go in the wrong direction. They're probably spending it on shipping another hacked-together service with Web2.0-- er. IOT-- er. Blockchai-- er. Crypto-- er. AI before promotion season.

cristiangraz|3 months ago

AWS just released flat-rate pricing plans with no overages yesterday. You opt into a $0, $15, or $200/mo plan and at the end of the month your bill is still $0, $15, or $200.

It solves the problem of unexpected requests or data transfer increasing your bill across several services.

https://aws.amazon.com/blogs/networking-and-content-delivery...

cobolcomesback|3 months ago

AWS just yesterday launched flat rate pricing for their CDN (including a flat rate allowance for bandwidth and S3 storage), including a guaranteed $0 tier.

https://news.ycombinator.com/item?id=45975411

I agree that it’s likely very technically difficult to find the right balance between capping costs and not breaking things, but this shows that it’s definitely possible, and hopefully this signals that AWS is interested in doing this in other services too.

chatmasta|3 months ago

The sad part about these posts is not the cloud apologists that show up in the comments. It’s the inevitable conclusion that “we should have understood our billing agreements better” or “we should really research what we’re deploying next time.” Nobody ever thinks, maybe the problem is the cloud itself. Maybe if I’m a single person business or have a few employees and even fewer users, then I should pay for infrastructure with a predictable cost…

moduspol|3 months ago

AWS would much rather let you accidentally overspend and then forgive it when you complain than see stories about critical infrastructure getting shut off or failing in unexpected ways due to a miscommunication in billing.

nijave|3 months ago

I've always been under the impression billing is async and you really need it to be synchronous unless cost caps work as a soft limit.

You can usually transfer from S3 on a single instance as fast as the instance's NIC allows--100Gbps+.

You'd need a synchronous system that checks quotas before each request and for a lot of systems you'd also need request cancellation (imagine transferring a 5TiB file from S3 and your cap triggers at 100GiB--the server needs to be able to receive a billing violation alert in real time and cancel the request)

I imagine that for anything capped that's already provided to customers, AWS just estimates and eats the loss.

Obviously such a system is possible since IAM/STS mostly do this but I suspect it's a tradeoff providers are reluctant to make
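
A rough client-side sketch of the cancellation idea above (purely illustrative: the bucket, key, and 100 GiB cap are made up, and real enforcement would have to happen on the provider's side while the bytes are being served):

    import boto3

    # Illustrative only: a client-side byte cap on a single S3 download.
    CAP_BYTES = 100 * 1024**3  # hypothetical 100 GiB budget for this transfer

    s3 = boto3.client("s3")
    resp = s3.get_object(Bucket="example-bucket", Key="huge-5tib-object")

    transferred = 0
    with open("/tmp/output", "wb") as out:
        # StreamingBody.iter_chunks() yields the object incrementally, so the
        # transfer can be abandoned mid-flight instead of discovered on the bill.
        for chunk in resp["Body"].iter_chunks(chunk_size=8 * 1024 * 1024):
            transferred += len(chunk)
            if transferred > CAP_BYTES:
                resp["Body"].close()  # drop the connection; the rest is never fetched
                raise RuntimeError("transfer cap exceeded, aborting download")
            out.write(chunk)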

sofixa|3 months ago

It's not that it's technically impossible. The very simple problem is that there is no way of providing hard spend caps without giving you the opportunity to bring down your whole production environment when the cap is met. No cloud provider wants to give their customers that much rope to hang themselves with. You just know too many customers will do it wrong, or will forget to update the cap, or will not coordinate internally, and things will stop working and take forever to fix.

It's easier to waive cost overages than deal with any of that.

strogonoff|3 months ago

I think it’s disingenuous to claim that AWS only offers delayed alerts and half-decent cost controls. Granted, these features were not there in the beginning, but for years now AWS, in addition to the better-known stuff like strategic limits on auto scaling, has allowed subscribing to price-threshold triggers via SNS and performing automatic actions, which could be anything, including scaling down or stopping services completely if the cost skyrockets.
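
For illustration, a minimal sketch of that pattern, assuming a Lambda function subscribed to the billing-alarm SNS topic and a made-up "CostCap=expendable" tag marking instances that are safe to stop:

    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        """Triggered by a billing-alarm SNS topic: stop every running instance
        explicitly tagged as expendable (the tag name is an assumption)."""
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:CostCap", "Values": ["expendable"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]

        instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}

The wiring (CloudWatch billing alarm or Budgets notification, to SNS, to Lambda) is configured separately; the point is just that the reaction can be automated rather than waiting on a human to read the alert email.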

jrjeksjd8d|3 months ago

The problem with hard caps is that there's no way to retroactively fix "our site went down". As much as engineers are loath to actually reach out to a cloud provider, are there any anecdotes of AWS playing hardball and collecting a $10k debt for network traffic?

Conversely, the first time someone hits an edge case in billing limits and their site goes down, losing $10k worth of possible customer transactions, there's no way to unring that bell.

The second constituency are also, you know, the customers with real cloud budgets. I don't blame AWS for not building a feature that could (a) negatively impact real, paying customers (b) is primarily targeted at people who by definition don't want to pay a lot of money.

belter|3 months ago

These topics are not advanced... they are foundational scenarios covered in any entry-level AWS or third-party AWS Cloud training.

But over the last few years, people have convinced themselves that the cost of ignorance is low. Companies hand out unlimited self-paced learning portals, tick the “training provided” box, and quietly stop validating whether anyone actually learned anything.

I remember when you had to spend weeks in structured training before you were allowed to touch real systems. But starting around five or six years ago, something changed: Practitioners began deciding for themselves what they felt like learning. They dismantled standard instruction paths and, in doing so, never discovered their own unknown unknowns.

In the end, it created a generation of supposedly “trained” professionals who skipped the fundamentals and now can’t understand why their skills have giant gaps.

wulfstan|3 months ago

This happens so often that the S3 VPC endpoint should be set up by default when your VPC is created. AWS engineers on here - make this happen.

Also, consider using fck-nat (https://fck-nat.dev/v1.3.0/) instead of NAT gateways unless you have a compelling reason to do otherwise, because you will save on per-Gb traffic charges.

(Or, just run your own Debian nano instance that does the masquerading for you, which every old-school Linuxer should be able to do in their sleep.)
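
For reference, creating the gateway endpoint is only a few lines in any tool; a boto3 sketch with placeholder IDs. Gateway endpoints for S3 and DynamoDB carry no hourly or per-GB charge; they just add a prefix-list route to the route tables you pick:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder IDs; once the route is in place, S3 traffic from these route
    # tables stops flowing through the NAT Gateway.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )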

Spivak|3 months ago

The reason to not include the endpoint by default is because VPCs should be secure by default. Everything is denied, and unless you explicitly configure access to the Internet, it's unreachable. An attacker who manages to compromise a system in that VPC now has a means of data exfiltration in an otherwise air-gapped setup.

It's annoying because this is by far the more uncommon case for a VPC, but I think it's the right way to structure permissions and access in general. S3, the actual service, went the other way on this and has desperately been trying to reel it back for years.

cowsandmilk|3 months ago

S3 Gateway endpoints break cross-region S3 operations. Changing defaults will break customers.

belter|3 months ago

AWS is not going to enable S3 endpoints by default, and most of the thread is downvoting the correct explanations, thinking in terms of a small hobby VPC rather than the architectures AWS actually has to support.

Why it should not be done:

1. It mutates routing. Gateway Endpoints inject prefix-list routes into selected route tables. Many VPCs have dozens of RTs for segmentation, TGW attachments, inspection subnets, EKS-managed RTs, shared services, etc. Auto-editing them risks breaking zero-trust boundaries and traffic-inspection paths.

2. It breaks IAM / S3 policies. Enterprises commonly rely on aws:sourceVpce, aws:SourceIp, Private Access Points, SCP conditions, and restrictive bucket policies. Auto-creating a VPCE would silently bypass or invalidate these controls (a policy sketch follows after this list).

3. It bypasses security boundaries. A Gateway Endpoint forces S3 traffic to bypass NAT, firewalls, IDS/IPS, egress proxies, VPC Lattice policies, and other mandatory inspection layers. This is a hard violation for regulated workloads.

4. Many VPCs must not access S3 at all. Air-gapped, regulated, OEM, partner-isolated, and inspection-only VPCs intentionally block S3. Auto-adding an endpoint would break designed isolation.

5. Private DNS changes behavior. With Private DNS enabled, S3 hostname resolution is overridden to use the VPCE instead of the public S3 endpoint. This can break debugging assumptions, routing analysis, and certain cross-account access patterns.

6. AWS does not assume intent. The VPC model is intentionally minimal. AWS does not auto-create IGWs, NATs, Interface Endpoints, or egress paths. Defaults must never rewrite user security boundaries.
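
To make point 2 concrete, a hypothetical bucket policy (IDs invented) that pins access to one approved Gateway Endpoint; requests arriving via any other endpoint, including one created automatically, would be denied:

    import json

    # Hypothetical policy: only requests arriving through one approved Gateway
    # Endpoint are allowed. Policies like this pin access to a specific vpce-... ID,
    # so routing changes that introduce a new endpoint can silently flip requests
    # from allowed to denied.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnlessFromApprovedEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0aaaaaaaaaaaaaaaa"}
            },
        }],
    }
    print(json.dumps(policy, indent=2))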

coredog64|3 months ago

If you use the AWS console, it's a tick box to include this.

raw_anon_1111|3 months ago

If you are creating a VPC from the console that might be a reasonable default. But any serious implementation is going to be using IAC - like they were - and I would expect to spell out everything explicitly.

scotty79|3 months ago

> This happens so often that the S3 VPC endpoint should be setup by default when your VPC is created.

It's a free service after all.

patabyte|3 months ago

> which every old-school Linuxer should be able to do in their sleep.

Oof, this hit home, hah.

withinboredom|3 months ago

Or just run bare metal + garage and call it a day.

stef25|3 months ago

Made a similar mistake once. While just playing around to see what was possible, I uploaded some data to the AWS algo that recommends products to your users based on everyone's previous purchases.

I uploaded a small xls with uid and prodid columns and then kind of forgot about it.

A few months later I get a note from my bank saying my account is overdrawn. The account is only used for freelancing work, which I wasn't doing at the time, so I never checked it.

Looks like AWS was charging me over $1k/month while the algo continuously worked on that bit of data that was uploaded one time. They charged until there was no money left.

That was about $5k in weekend earnings gone. Several months' worth of salary at my main job. That was a lot of money for me.

Few times I've felt so horrible.

nine_k|3 months ago

I worked in a billing department, and learned to be healthily paranoid about such things. I want to regularly check what I'm billed for. I of course check all my bank accounts' balances at least once a day. All billing emails are marked important in my inbox, and I actually open them.

And of course I give every online service a separate virtual credit card (via privacy dot com, but your bank may issue them directly) with a spend limit set pretty close to the expected usage.

dabiged|3 months ago

I made the same mistake and blew $60k.

I have never understood why the S3 endpoint isn't deployed by default, except to catch people making this exact mistake.

philipwhiuk|3 months ago

Yeah imagine the conversation:

"I'd like to spend the next sprint on S3 endpoints by default"

"What will that cost"

"A bunch of unnecessary resources when it's not used"

"Will there be extra revenue?"

"Nah, in fact it'll reduce our revenue from people who meant to use it and forgot before"

"Let's circle back on this in a few years"

rikafurude21|3 months ago

That's a year's salary, but hey, think about how much more complicated your work would be if you had to learn to self-host your infra!

kidsil|3 months ago

Great write-up, thanks for sharing the numbers.

I get pulled into a fair number of "why did my AWS bill explode?" situations, and this exact pattern (NAT + S3 + "I thought same-region EC2→S3 was free") comes up more often than you’d expect.

The mental model that seems to stick is: S3 transfer pricing and "how you reach S3" pricing are two different things. You can be right that EC2→S3 is free and still pay a lot because all your traffic goes through a NAT Gateway.

The small checklist I give people:

1. If a private subnet talks a lot to S3 or DynamoDB, start by assuming you want a Gateway Endpoint, not the NAT, unless you have a strong security requirement that says otherwise.

2. Put NAT on its own Cost Explorer view / dashboard. If that line moves in a way you didn’t expect, treat it as a bug and go find the job or service that changed (a rough query sketch follows after this list).

3. Before you turn on a new sync or batch job that moves a lot of data, sketch (I tend to do this with Mermaid) "from where to where, through what, and who charges me for each leg?" It takes a few minutes and usually catches this kind of trap.
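
A rough sketch of item 2, pulling daily NAT Gateway cost out of Cost Explorer with boto3 (usage-type strings vary by region and account, so check Cost Explorer for the exact values):

    import boto3
    from datetime import date, timedelta

    ce = boto3.client("ce")
    end = date.today()
    start = end - timedelta(days=14)

    # Usage-type strings can carry a region prefix (e.g. "EUC1-NatGateway-Bytes"),
    # so adjust these to whatever your account actually reports.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        Filter={
            "Dimensions": {
                "Key": "USAGE_TYPE",
                "Values": ["NatGateway-Bytes", "NatGateway-Hours"],
            }
        },
    )

    for day in resp["ResultsByTime"]:
        cost = float(day["Total"]["UnblendedCost"]["Amount"])
        print(day["TimePeriod"]["Start"], f"${cost:.2f}")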

Cost Anomaly Detection doing its job here is also the underrated part of the story. A $1k lesson is painful, but finding it at $20k is much worse.

CjHuber|3 months ago

Does Amazon refund you for mistakes, or do you have to land on HN frontpage for that to happen?

Dunedan|3 months ago

Depends on various factors and of course the amount of money in question. I've had AWS approve a refund for a rather large sum a few years ago, but that took quite a bit of back and forth with them.

Crucial for the approval was that we had cost alerts already enabled before it happened and were able to show that this didn't help at all, because they triggered way too late. We also had to explain in detail what measures we implemented to ensure that such a situation doesn't happen again.

thecodemonkey|3 months ago

Hahaha. I'll update the post once I hear back from them. One could hope that they might consider an account credit.

throwawayffffas|3 months ago

I do not know. But in this case they probably should. They probably incurred no cost themselves.

A bunch of data went down the "wrong" pipe, but in reality most likely all the data never left their networks.

Aeolun|3 months ago

I presume it depends on your ability to pay for your mistakes. A $20/month client is probably not going to pony up $1000, a $3000/month client will not care as much.

nijave|3 months ago

I've gotten a few refunds from them before. Not always and usually they come with stipulations to mitigate the risk of the mistake happening again

viraptor|3 months ago

They do sometimes if you ask. Probably depends on each case though.

stef25|3 months ago

> Does Amazon refund you for mistakes

Hard no. Had to pay, I think, $100 for premium support to find that out.

merpkz|3 months ago

> AWS charges $0.09 per GB for data transfer out to the internet from most regions, which adds up fast when you're moving terabytes of data.

How does this actually work? So you upload your data to AWS S3 and then if you wish to get it back, you pay per GB of what you stored there?

0manrho|3 months ago

That is the business model and one of the figurative moats: easy to onboard, hard/expensive (relative to onboarding) to divest.

Though it's important to note that this specific case was a misconfiguration that's easy to make and easy to misunderstand: the data was not intended to leave AWS services (and thus should have been free), but due to using the NAT gateway, the data did leave the AWS nest and was charged per GB at a rate about an order of magnitude higher than just pulling everything straight out of S3/EC2 (generally speaking, YMMV depending on region, requests, total size, whether it's an expedited archival retrieval, etc.).

So this is an atypical case; it doesn't usually cost $1,000 to pull 20 TB out of AWS. Still, it's an easy mistake to make.
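
For a rough sense of scale, a back-of-the-envelope comparison assuming commonly cited us-east-1 list prices (rates vary by region and volume, so treat the numbers as illustrative only):

    # Assumed list prices: ~$0.045/GB NAT Gateway data processing,
    # $0.09/GB internet egress, $0 for same-region S3 via a Gateway Endpoint.
    gb = 20 * 1024                   # ~20 TiB moved between EC2 and S3

    via_nat_gateway = gb * 0.045     # what the misrouted setup pays
    via_gateway_endpoint = gb * 0.0  # what the intended setup pays
    to_the_internet = gb * 0.09      # for comparison: actually egressing it

    print(f"via NAT Gateway:        ${via_nat_gateway:,.0f}")      # roughly $900
    print(f"via Gateway Endpoint:   ${via_gateway_endpoint:,.0f}") # $0
    print(f"egress to the internet: ${to_the_internet:,.0f}")      # roughly $1,800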

pjc50|3 months ago

Nine cents per gigabyte feels like cellphone-plan level ripoff rather than a normal amount for an internet service.

And people wonder why Cloudflare is so popular, when a random DDoS can decide to start inflicting costs like that on you.

hexbin010|3 months ago

Yes uploading into AWS is free/cheap. You pay per GB of data downloaded, which is not cheap.

You can see why, from a sales perspective: AWS' customers generally charge their customers for data they download - so they are extracting a % off that. And moreover, it makes migrating away from AWS quite expensive in a lot of circumstances.

pavlov|3 months ago

Yes…?

Egress bandwidth costs money. Consumer cloud services bake it into a monthly price, and if you’re downloading too much, they throttle you. You can’t download unlimited terabytes from Google Drive. You’ll get a message that reads something like: “Quota exceeded, try again later.” — which also sucks if you happen to need your data from Drive.

AWS is not a consumer service so they make you think about the cost directly.

blitzar|3 months ago

Made in California.

We are programmed to receive. You can check out any time you like, but you can never leave

thefreeman|3 months ago

You put a CDN in front of it and heavily cache when serving to external customers

speedgoose|3 months ago

Yes. It’s not very subtle.

ilogik|3 months ago

The statement is about AWS in general, and yes, you pay for bandwidth.

krystalgamer|3 months ago

Ah, the good old VPC NAT Gateway.

I was lucky to have experienced all of the same mistakes for free (ex-Amazon employee). My manager just got an email saying the costs had gone through the roof and asked me to look into it.

Feel bad for anyone that actually needs to cough up money for these dark patterns.

mgaunard|3 months ago

Personally I don't even understand why NAT gateways are so prevalent. What you want most of the time is just an Internet gateway.

4gotunameagain|3 months ago

I'm still adamant about the fact that the "cloud" is a racket.

Sure, it decreases the time necessary to get something up running, but the promises of cheaper/easier to manage/more reliable have turned out to be false. Instead of paying x on sysadmin salaries, you pay 5x to mega corps and you lose ownership of all your data and infrastructure.

I think it's bad for the environment, bad for industry practices and bad for wealth accumulation & inequality.

lan321|3 months ago

I'd say it's a racket for enterprise, but it makes sense for small things. For example, a friend of mine, who's in a decent bit of debt and hence on the hunt for anything that can make some money, wanted to try making essentially a Replika clone for a local market, and being able to rent an H100 for $2 an hour was very nice. He could mess around a bit, confirm it was way more work than he thought, and move on to other ideas for like $10 :D

Assuming he'd gotten it working, he could have opened the service without directly going further into debt, with the caveat that if he'd messed up the pricing model and it took off, it could have annihilated his already-dead finances.

cobolcomesback|3 months ago

This wouldn’t have specifically helped in this situation (EC2 reading from S3), but on the general topic of preventing unexpected charges from AWS:

AWS just yesterday launched flat rate pricing for their CDN (including a flat rate allowance for bandwidth and S3 storage), including a guaranteed $0 tier. It’s just the CDN for now, but hopefully it gets expanded to other services as well.

https://news.ycombinator.com/item?id=45975411

viraptor|3 months ago

The service gateways are such a weird thing in AWS. There seems to be no reason not to use them and it's like they only exist as a trap for the unaware.

wiether|3 months ago

Reading all the posts about people who got bitten by some policies on AWS, I think they should create two modes:

- raw

- click-ops

Because, when you build your infra from scratch on AWS, you absolutely don't want the service gateways to exist by default. You want to have full control on everything, and that's how it works now. You don't want AWS to insert routes in your route tables on your behalf. Or worse, having hidden routes that are used by default.

But I fully understand that some people don't want to be bothered by those technicalities and want something that works and is optimized following the Well-Architected Framework pillars.

IIRC they already provide some CloudFormation Stacks that can do some of this for you, but it's still too technical and obscure.

Currently they probably rely on their partner network to help onboard new customers, but for small customers it doesn't make sense.

benmmurphy|3 months ago

The gateway endpoints are free (S3 + DynamoDB?), but the interface endpoints are charged, so that could be a reason why people don't use those. But there doesn't seem to be a good reason for not using the gateway endpoints. It also seems crazy that AWS charges you to connect to their own services without a public IP. I guess this would be less of an issue (in terms of requiring a public IP) if all AWS services were available over IPv6, because then you would not need NAT gateways to connect to AWS services when you don't have a public IPv4 address, and I assume you are not getting these special traffic charges when connecting to AWS services over a public IPv6 address.

ryanjshaw|3 months ago

As a bootstrapped dev, reading stories like these gives me so much anxiety. I just can’t bring myself to use AWS even despite its advantages.

throwawayffffas|3 months ago

Do not buy into the hype: AWS and all the other cloud providers are extremely overpriced.

If you don't have a specific need for a specific service they offer, stay away; it's a giant ripoff.

If you need generic stuff like VMs, data storage, etc., you are much better off using Hetzner, OVH, etc., and some standalone CDN if you need one.

thecodemonkey|3 months ago

We are also 100% customer-funded. AWS makes sense for us for the enterprise version of Geocodio where we are SOC2 audited and HIPAA-compliant.

We are primarily using Hetzner for the self-serve version of Geocodio and have been a very happy customer for decades.

themafia|3 months ago

The documentation is thick but it has a common theme and format to it. So once you get the hang of finding the "juicy bits" you can usually locate them anywhere. The docs do generally warn you of these cases, or have a whole "best practices" section which highlights them directly.

The key is, do not make decisions lightly in the cloud, just because something is easy to enable in the UI does not mean it's recommended. Sit down with the pricing page or calculator and /really/ think over your use case. Get used to thinking about your infrastructure in terms of batch jobs instead of real time and understand the implementation and import of techniques like "circuit breakers."

Once you get the hang of it it's actually very easy and somewhat liberating. It's really easy to test solutions out in a limited form and then completely tear them down. Personally I'm very happy that I put the effort in.

abigail95|3 months ago

What is a bootstrapped dev?

joshtbradley|3 months ago

I did this when I was ~22 messing with infra for the first time. A $300 bill in two days when I had $2000 in the bank really stung. I love AWS for many things, but I really wish they made the cost calculations transparent for beginners.

kevmo|3 months ago

I wonder why they don't...

abujazar|3 months ago

$1000 for 20 TB of data transfer sounds like fraud. You can get a VM instance with 20 TB included INTERNET traffic at Hetzner for €4.15.

fergie|3 months ago

Is it possible for hobbyists to set a hard cut off for spending? Like, "SHUT EVERYTHING DOWN IF COSTS EXCEED $50"

ndiddy|3 months ago

You can with some effort, but cloud providers don't provide real-time information on how much you're spending. Even if you use spending alerts to program a hard cut-off yourself, a mistake can still result in you being charged for 6+ hours of usage before the alert fires.

Raed667|3 months ago

My understanding from reading these kinds of threads is that there is no real way to enforce it and the provider makes no guarantees, as your usage can outpace the system that handles the accounting and shutoff.

mr_toad|3 months ago

Shut down everything? Including S3? There goes all your data.

conception|3 months ago

Yes, but you have to program it. And there's a little bit of slack in the accounting, so it might end up being $51 or something like that.
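
A minimal sketch of the "program it yourself" version: an AWS Budgets budget that notifies an SNS topic, which a shutdown Lambda (like the one sketched earlier in the thread) can then act on. Account ID, amount, and topic ARN are placeholders, and because billing data lags, this is a tripwire rather than a true hard cap:

    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="123456789012",
        Budget={
            "BudgetName": "hobby-hard-stop",
            "BudgetLimit": {"Amount": "50", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{
                "SubscriptionType": "SNS",
                "Address": "arn:aws:sns:us-east-1:123456789012:billing-alarm",
            }],
        }],
    )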

maciekkmrk|3 months ago

An entire blog post to say "read the docs and enable the VPC S3 endpoint".

It's all in the docs: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts....

>There is another type of VPC endpoint, Gateway, which creates a gateway endpoint to send traffic to Amazon S3 or DynamoDB. Gateway endpoints do not use AWS PrivateLink, unlike the other types of VPC endpoints. For more information, see Gateway endpoints.

Even the first page of VPC docs: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-ama...

>Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device.

The author of the blog writes:

> When you're using VPCs with a NAT Gateway (which most production AWS setups do), S3 transfers still go through the NAT Gateway by default.

Yes, you are using a virtual private network. Where is it supposed to go? It's like being surprised that data in your home network goes through a router.

jairuhme|3 months ago

> An entire blog article post to say "read the docs and enable VPC S3 endpoint".

I think it's okay if someone missed something in the docs and wanted to share from their experience. In fact, if you look at the S3 pricing page [0], under Data Transfer, VPC endpoints aren't mentioned at all. It simply says data transfer is free between AWS services in the same region. I think that much detail would be enough to reasonably assume you didn't have to set up anything additional to get it.

[0] https://aws.amazon.com/s3/pricing/

throwawayffffas|3 months ago

> The solution is to create a VPC Gateway Endpoint for S3. This is a special type of VPC endpoint that creates a direct route from your VPC to S3, bypassing the NAT Gateway entirely.

The solution is to move your processing infrastructure to Hetzner.

mooreds|3 months ago

Always always set up budget alarms.

Make sure they go to a list with multiple people on it. Make sure someone pays attention to that email list.

It's free and will save your bacon.

I've also had good luck asking for forgiveness. One time I scaled up some servers for an event and left them running for an extra week. I think the damage was in the 4 figures, so not horrendous, but not nothing.

An email to AWS support led to them forgiving a chunk of that bill. Doesn't hurt to ask.

torginus|3 months ago

> I've been using AWS since around 2007. Back then, EC2 storage was entirely ephemeral and stopping an instance meant losing all your data. The platform has come a long way since then.

Personally I miss ephemeral storage - having the knowledge that if you start the server from a known good state, going back to that state is just a reboot away. Way back when I was in college, a lot of our big-box servers worked like this.

You can replicate this on AWS with snapshots or formatting the EBS volume into 2 partitions and just clearing the ephemeral part on reboot, but I've found it surprisingly hard to get it working with OverlayFS

andrewstuart|3 months ago

Why are people still using AWS?

And then writing “I regret it” posts that end up on HN.

Why are people not getting the message to not use AWS?

There’s SO MANY other faster cheaper less complex more reliable options but people continue to use AWS. It makes no sense.

chistev|3 months ago

Examples?

dylan604|3 months ago

Had the exact same thing happen. Only we used a company vetted/recommended by AWS to set this up for us, as we have no AWS experts and we're all too busy doing actual startup things. So we staffed it out. Even the "professionals" get it wrong, and we racked up a huge expense as well. The staffing company shrugged its shoulders and just said sorry about your tab. We worked with AWS support to correct the situation, and cried to daddy AWS account manager for a negotiated rate.

fragmede|3 months ago

Just $1,000? Thems rookie numbers, keep it up, you'll get there (my wallet won't, ow).

thecodemonkey|3 months ago

Haha, yep we were lucky to catch this early! It could easily have gotten lost with everything else in the monthly AWS bill.

bravetraveler|3 months ago

Came here to say the same, take my vote

    - DevOops

lapcat|3 months ago

> AWS's networking can be deceptively complex. Even when you think you've done your research and confirmed the costs, there are layers of configuration that can dramatically change your bill.

Unexpected, large AWS charges have been happening for so long, and so egregiously, to so many people, including myself, that we must assume it's by design of Amazon.

blutoot|3 months ago

Regardless of the AWS tech in question (and yes, VPCE for non-compute services is a very common pattern in an enterprise setup using AWS, since VPC with NAT is a pretty fundamental requirement), I honestly believe this was the biggest miss from the author: “Always validate your assumptions. I thought "EC2 to S3 is free" was enough. I should have tested with a small amount of data and monitored the costs before scaling up to terabytes.” To me this is a symptom of DevOps/infra engineers being too much in love with infra automation without actually testing the full end-to-end flow.

harel|3 months ago

You probably saved me a future grand++. Thanks

thecodemonkey|3 months ago

That was truly my hope with this post! Glad to hear that

lowbloodsugar|3 months ago

I’m sure NAT gateways exist purely to keep uninformed security “experts” at companies happy. I worked at a Fortune 500 company but we were a dedicated group building a cloud product on AWS. Security people demanded a NAT gateway. Why? “Because you need address translation and a way to prevent incoming connections”. Ok. That’s what an Internet Gateway is. In the end we deployed a NAT gateway and just didn’t set up routes to it. Then just used security groups and public IPs.

tlaverdure|3 months ago

Abolish NAT Gateways. Lean on gateway endpoints, egress only internet gateways with IPv6, and security groups to batten down the hatches. All free.
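
A sketch of the IPv6 egress path mentioned above, with placeholder IDs: an egress-only internet gateway permits outbound IPv6 connections while blocking unsolicited inbound ones, with no per-GB processing fee:

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder VPC and route table IDs.
    eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
    eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

    # Route all outbound IPv6 traffic through the egress-only gateway.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationIpv6CidrBlock="::/0",
        EgressOnlyInternetGatewayId=eigw_id,
    )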

agwa|3 months ago

Now that AWS charges for public IPv4 addresses, is it still free if you need to access IPv4-only hosts?

auggierose|3 months ago

Are there any cloud providers that allow a hard cap on dollars spent per day/week/month? Should there not be a law that they have to?

citizenpaul|3 months ago

It's staggering to me that after all this time there are somehow still people in positions like this who are working without basic cost-monitoring alerts on cloud/SaaS services.

It really shows the Silicon Valley disconnect with the real world, where money matters.

Hikikomori|3 months ago

Saved >$120k/month by deploying some VPC endpoints and VPC peering (rather than TGW).

denvrede|3 months ago

VPC peering becomes ugly fast once your network architecture becomes more complex. Because transitive peering doesn't work, you end up building a mesh of networks.

siliconc0w|3 months ago

It used to be that you could whine to your account rep and they'd waive sudden accidental charges like this. Which we did regularly due to all the sharp edges. These days I gather it's a bit harder.

sprybear|3 months ago

I'm not telling anyone to stay away from AWS, but I've heard far too many similar stories to feel comfortable recommending it.

mgaunard|3 months ago

If you want to avoid any kind of traffic fees, simply don't allow routing outside of your VPC by default.

mikesickler|3 months ago

Got killed by AWS Macie. The default 5K cap is brutal

lloydatkinson|3 months ago

I can’t see this as anything but on purpose

knowitnone3|3 months ago

That's a loophole AWS needs to close

Fokamul|3 months ago

The lesson: Don't use AWS

bpiroman|3 months ago

So happy I don't use AWS

AmbroseBierce|3 months ago

Imagine a world where Amazon was forced to provide a publicly available report where they disclose how many clients have made this error - and similar ones - and how much money they have made from it. I know nothing like this will ever exist, but hey, it's free to dream.

whalesalad|3 months ago

Wait till you encounter the combo of gcloud parallel composite uploads + versioning + soft-delete + multi-region bucket - and you have 500TB of objects stored.

nrhrjrjrjtntbt|3 months ago

NAT gateway probably cheap as fuck for Bezos & co to run but nice little earner. The parking meter or exit ramp toll of cloud infra. Cheap beers in our bar but $1000 curb usage fee to pull up in your uber.

tecleandor|3 months ago

I think it's been calculated that data transfer is the highest-margin product in the whole AWS catalog by a huge difference. A 2021 calculation done by Cloudflare [0] estimated almost an 8,000% price markup in EU and US regions.

And I can see how, in very big accounts, small mistakes on your data source when you're doing data crunching, or wrong routing, can put thousands and thousands of dollars on your bill in less than an hour.

--

  0: https://blog.cloudflare.com/aws-egregious-egress/

ukoki|3 months ago

I don't think it's about profits, it's about incentivising using as many AWS products as possible. Consider it an 'anti-lock-in fee'.

belter|3 months ago

[deleted]

wiether|3 months ago

There's nothing to gain in punching down

They made a mistake and are sharing it for the whole world to see in order to help others avoid making it.

It's brave.

Unlike punching down.