
Amazon S3 will no longer charge for several HTTP error codes

289 points | axyjo | 1 year ago | aws.amazon.com

76 comments


lapcat|1 year ago

AWS is full of dark patterns. You can sign up for the so-called "free" tier and then too easily, unwittingly enable something that suddenly charges you hundreds of dollars before you know it (by getting a bill at the end of the month), even if you're not doing anything with the service except looking around. AWS doesn't give any warning to free tier members that a configuration change is going to cost you, and their terms are also very confusing. For example, PostgreSQL is advertised as free, but "Aurora PostgreSQL" is quite costly.

akira2501|1 year ago

> unwittingly enable something that suddenly charges you hundreds of dollars before you know it

The default is to have current and estimated monthly cost displayed on your root console as soon as you login. You will also get email alerts when you hit 50% and then 80% of your "free tier quota" in a month.
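The 50%/80% free-tier alert behavior described above can be sketched as a simple threshold check. This is an illustrative sketch, not AWS's implementation; only the two threshold percentages come from the comment, and the function name is invented:

```python
def free_tier_alerts(used: float, quota: float,
                     thresholds: tuple = (0.5, 0.8)) -> list:
    """Return which alert thresholds current usage has crossed.

    `used` and `quota` are in the same unit (e.g. requests or GB-hours).
    The default 50%/80% thresholds match the free-tier usage alerts
    mentioned above; everything else here is illustrative.
    """
    if quota <= 0:
        raise ValueError("quota must be positive")
    fraction = used / quota
    return [t for t in thresholds if fraction >= t]
```

Crossing 85% of quota, for instance, would trigger both the 50% and the 80% alert.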

> even if you're not doing anything with the service except looking around.

I'm not aware of any services which will cost you money unless you actively enable them and create an object within its class. Many services, such as S3, will attempt to force you into a "secured" configuration that avoids common traps by default.

> For example, PostgreSQL is advertised as free, but "Aurora PostgreSQL" is quite costly.

There's a rubric to the way AWS talks about its internal services that is somewhat impenetrable at first. It's not too hard to figure out, though, if you take the time to read through their rather large set of documentation. That's the real price you must pay to successfully use the "free tier."

Anyway, PostgreSQL is an open source project. Amazon RDS is a managed service that can run instances of it. Amazon Aurora is a different service that provides its own engine that is _compatible_ with MySQL and PostgreSQL.

To know why you'd use one or the other, the shibboleth is "FAQ," so search for "AWS Aurora FAQ" and carefully read the whole page before you enable the service.

_adamb|1 year ago

We have a slack channel, #aws-budget-alerts, where AWS sends a notification any time our forecasted spend reaches certain milestones or the actual spend reaches certain milestones.
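A channel bot like that mostly amounts to formatting the alert for Slack. A hypothetical sketch of that step, assuming the alert carries a budget name, a milestone type ("ACTUAL" or "FORECASTED", matching the two milestone kinds described above), and the relevant amounts; the field names and message layout are invented:

```python
def format_budget_alert(budget_name: str, kind: str, threshold_pct: float,
                        actual: float, limit: float) -> str:
    """Format a budget-milestone alert for posting to a Slack channel.

    `kind` distinguishes forecasted spend milestones from actual spend
    milestones, as in the comment above. All names are illustrative.
    """
    label = "forecasted" if kind == "FORECASTED" else "actual"
    return (f":warning: Budget *{budget_name}*: {label} spend "
            f"${actual:,.2f} has reached {threshold_pct:.0f}% "
            f"of the ${limit:,.2f} limit")
```

The formatted string would then be posted to the channel via an incoming webhook.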

It's a really easy app to set up!

ranger_danger|1 year ago

> AWS doesn't give any warning

It does if you ask it to. You can get billing alerts if current costs are projected to go over a threshold.
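"Projected to go over a threshold" can be approximated with a straight-line forecast. A minimal sketch under that assumption; real billing forecasts are more sophisticated than linear extrapolation:

```python
def projected_over_threshold(spend_to_date: float, day_of_month: int,
                             days_in_month: int, threshold: float) -> bool:
    """Linearly extrapolate month-to-date spend and compare to a threshold.

    Toy model of a projected-cost billing alert: assumes spend continues
    at the average daily rate observed so far this month.
    """
    if not 1 <= day_of_month <= days_in_month:
        raise ValueError("day_of_month out of range")
    projected = spend_to_date * days_in_month / day_of_month
    return projected > threshold
```

So $50 spent by day 10 of a 30-day month projects to $150, which would trip a $100 alert.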

alanfranz|1 year ago

Most cloud providers work this way somehow. Flexible, pay-as-you-go infra doesn’t cope well with fixed pricing.

Fixed price cloud offerings exist for some services, but can end up with an apparently larger sticker price.

jsheard|1 year ago

The system works! Just raise your concerns and they'll get around to it in [checks notes] 18 years

https://twitter.com/cperciva/status/1785402732976992417

CSMastermind|1 year ago

Ahh I see the problem. The steps to get it resolved were not to tell the team about it.

The steps were to raise a big enough fuss that it would undermine customer trust if the team didn't fix it.

treve|1 year ago

In the same timespan Microsoft released Windows 1 all the way up to XP

cbsmith|1 year ago

In fairness, the issue was attended to within weeks after it recently got attention.

chadhutchins10|1 year ago

We've done it. Now let's re-engineer our apps to use error codes for 200 responses and get free S3 usage.

surfingdino|1 year ago

I worked on a team with similar cost optimisation gurus... They abused HTTP code conventions and somehow managed to wedge in two REST frameworks into the Django app that at one point had 1m+ users...

hunter2_|1 year ago

If I understand TFA, you'd need to find a way to get S3 (which offers no server-side script execution, only basic file delivery) to emit an error code (403 specifically) alongside a response of useful data. Good luck...
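For context, the custom error document for an S3 static-website bucket is set in the bucket's website configuration. A sketch of that configuration with boto3 (bucket and file names are placeholders), which matters here because, per the announcement, requests that return a custom error document are still billed:

```python
# Website configuration for an S3 static site with a custom error page.
# Bucket and file names are placeholders; per the announcement, requests
# that S3 answers with this custom error document are still charged.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},  # served for errors like 403/404
}

# Applying it would look like this (needs credentials, so commented out):
# import boto3
# boto3.client("s3").put_bucket_website(
#     Bucket="example-bucket",
#     WebsiteConfiguration=website_configuration,
# )
```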

ceejayoz|1 year ago

> For buckets configured with website hosting, applicable request and other charges will still apply when S3 returns a custom error document or for custom redirects.

I was wondering about that one.

cratermoon|1 year ago

From the previous story, "S3 requests without a specified region default to us-east-1 and are redirected as needed. And the bucket’s owner pays extra for that redirected request."

So will Amazon continue to charge for the redirected 403?

dmw_ng|1 year ago

Can't imagine a change like this would be made without some analysis. I'd love an internal view into a decision like this; I wonder if they already have log data to compute the financial loss from the change, or if they have sampling instrumentation fancy enough to write and deploy custom reports like this quickly.

In any case, 2 weeks seems like an impressive turnaround for such a large service, unless they'd been internally preparing to acknowledge the problem for longer.

londons_explore|1 year ago

> 2 weeks seems like an impressive turnaround for such a large service

I assume they were lucky in that whatever system counts billable requests also has access to the response code, and therefore it's pretty easy to just say "if response == 403: return 0".

The fact that this is the case suggests they may do the work to fulfill the request before knowing the response code and doing billing, so there might be some loophole to get them to do lots of useful work for free...
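The guessed metering rule amounts to zeroing the billable count when the response carries one of the newly unbilled error codes. A toy sketch of that guess; the exact set of codes covered is in the announcement, and only 403 is named in this thread:

```python
# Toy model of the guessed metering rule: billable units for a request
# are zeroed when the response code is one of the newly unbilled errors.
# The set here is illustrative; only 403 is named in the thread above.
UNBILLED_STATUS_CODES = {403}

def billable_requests(status_code: int, count: int = 1) -> int:
    """Return how many requests to bill for, given the response code."""
    return 0 if status_code in UNBILLED_STATUS_CODES else count
```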

mike_d|1 year ago

> Can't imagine a change like this would be made without some analysis.. would love an internal view into a decision like this

Sure, here you go: There was some buzz and negative press, so it gets picked up by the social media managers, who forward it to executive escalations, who loop in legal. Legal realizes that what they are doing is borderline fraud and sends it to the VP that oversees billing as a P0. It then gets handed down to a senior director who is responsible for fixing it within a week. Comms gets looped in to soft-announce it.

At no point does anyone look at log data or give a shit about any instrumentation. It is a business decision to limit liability to a lawsuit or BCP investigation. As a publicly traded company it is also extremely risky for them to book revenue that comes from fraudulent billing.

pdimitar|1 year ago

Are you for real? Legitimately baffled by your comment.

How about the financial losses of customers that could be DDoS-ed into bankruptcy through no fault of their own? Keeping S3 bucket names secret is not always easy.

moi2388|1 year ago

There needs to be a law that says every user can set a spending limit on any service or subscription, and costs cannot surpass it until the user raises the budget. At the same time, there should be real-time cost analysis, a breakdown per service, and predicted costs per day.
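The hard spending cap proposed here reduces to an admission check before each billable operation. A toy model of that idea; a real provider would also have to handle in-flight usage and asynchronous metering, which this sketch ignores:

```python
def within_budget(spend_so_far: float, operation_cost: float,
                  cap: float) -> bool:
    """Admit a billable operation only if it keeps spend at or under the cap.

    Toy model of a user-set hard spending limit: no operation may push
    total spend past the cap until the user raises the budget.
    """
    return spend_so_far + operation_cost <= cap
```

With a $100 cap, a $5 operation is admitted at $90 of spend but rejected at $98.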

usr1106|1 year ago

A law in which country?

Well, GDPR showed that a rather global impact is possible.

If you offer an open service on the internet you need to be prepared that users and misusers will cause costs.

However, if you block it for public access, you as a customer are not offering a public service. It's the cloud provider offering a public service, so it seems like a basic legal principle that the cloud provider pays for misuse (attempts to access something that is not public). But of course big corporations are not known for fair contracts respecting the legitimate interests of the customer before legal action is on the horizon. I wonder what made AWS wake up here.

beeeeerp|1 year ago

Now please do this for NXDOMAIN on Route53. This can be a big problem with acquired domains.

mike_d|1 year ago

You should never actually use Route53 for your domains. Delegate a subdomain like cloud.yourcompany.net to R53 and use that.

ranger_danger|1 year ago

You're not screaming on twitter about it so it will never happen...

paulddraper|1 year ago

That's not analogous.

Joel_Mckay|1 year ago

Bezos loss-leader product-manager pushes hook deeper into worm.

I fail to see this as progress, YMMV =3