top | item 45979852

wulfstan|3 months ago

This happens so often that the S3 VPC endpoint should be set up by default when your VPC is created. AWS engineers on here - make this happen.

Also, consider using fck-nat (https://fck-nat.dev/v1.3.0/) instead of NAT gateways unless you have a compelling reason to do otherwise, because you will save on per-GB traffic charges.

(Or, just run your own Debian nano instance that does the masquerading for you, which every old-school Linuxer should be able to do in their sleep.)
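For the old-school route, the core of a DIY NAT instance really is just a couple of lines; a minimal sketch, assuming eth0 is the instance's public-facing interface and 10.0.0.0/16 is your VPC CIDR (both are placeholders for your actual values):

```shell
# Enable IPv4 forwarding so the instance will route packets at all.
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the private subnets out of the public interface.
iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
```

On AWS you additionally have to disable source/destination checking on the instance (e.g. `aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check`), or the hypervisor will drop the forwarded packets.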

Spivak|3 months ago

The reason to not include the endpoint by default is because VPCs should be secure by default. Everything is denied, and unless you explicitly configure access to the Internet, it's unreachable. An attacker who manages to compromise a system in that VPC now has a means of data exfiltration in an otherwise air-gapped setup.

It's annoying because this is by far the less common case for a VPC, but I think it's the right way to structure permissions and access in general. S3, the actual service, went the other way on this and has desperately been trying to reel it back for years.

wulfstan|3 months ago

Right, I can appreciate that argument - but then the right thing to do is to block S3 access from AWS VPCs until you have explicitly confirmed that you want to pay the big $$$$ to do so, or turn on the VPC endpoint.

A parallel to this is how SES handles permission to send emails. There are checks and hoops to jump through to ensure you can't send out spam. But somehow, letting DevOps folk shoot themselves in the foot (credit card) is ok.

What has been done is the monetary equivalent of "fail unsafe" => "succeed expensively"

SOLAR_FIELDS|3 months ago

There’s zero reason why AWS can’t pop up a warning if it detects this behavior, though. It should clearly explain the implications to the end user. I mean, EKS pops up all sorts of warning flags about cluster health; there’s really no reason they can’t do the same here.

unethical_ban|3 months ago

I don't get your argument. If an EC2 instance needs access to an S3 resource, doesn't it need that role? Or otherwise, couldn't there be some global S3 URL filter that automagically routes same-region traffic appropriately if it is permitted?

My point is that, architecturally, has there ever in the history of AWS been a case where a customer wants to pay for the transit of same-region traffic when a checkbox exists to say "do this for free"? Authorization and transit/path are separate concepts.

There has to be a better experience.

cowsandmilk|3 months ago

S3 Gateway endpoints break cross-region S3 operations. Changing defaults will break customers.

deanCommie|3 months ago

Changing defaults doesn't have to mean changing existing configurations. It can be the new default for newly created VPCs after a certain date, or for newly created accounts after a certain date.

And if there are any interoperability concerns, you offer an ability to opt-out with that (instead of opting in).

There is precedent for all of this at AWS.

belter|3 months ago

AWS is not going to enable S3 endpoints by default, and most of this thread is downvoting the correct explanations, thinking in terms of a small hobby VPC rather than the architectures AWS actually has to support.

Why it should not be done:

1. It mutates routing. Gateway Endpoints inject prefix-list routes into selected route tables. Many VPCs have dozens of RTs for segmentation, TGW attachments, inspection subnets, EKS-managed RTs, shared services, etc. Auto-editing them risks breaking zero-trust boundaries and traffic-inspection paths.

2. It breaks IAM / S3 policies. Enterprises commonly rely on aws:sourceVpce, aws:SourceIp, Private Access Points, SCP conditions, and restrictive bucket policies. Auto-creating a VPCE would silently bypass or invalidate these controls.

3. It bypasses security boundaries. A Gateway Endpoint forces S3 traffic to bypass NAT, firewalls, IDS/IPS, egress proxies, VPC Lattice policies, and other mandatory inspection layers. This is a hard violation for regulated workloads.

4. Many VPCs must not access S3 at all. Air-gapped, regulated, OEM, partner-isolated, and inspection-only VPCs intentionally block S3. Auto-adding an endpoint would break designed isolation.

5. Private DNS changes behavior. With Private DNS enabled, S3 hostname resolution is overridden to use the VPCE instead of the public S3 endpoint. This can break debugging assumptions, routing analysis, and certain cross-account access patterns.

6. AWS does not assume intent. The VPC model is intentionally minimal. AWS does not auto-create IGWs, NATs, Interface Endpoints, or egress paths. Defaults must never rewrite user security boundaries.
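Point 2 is worth making concrete: enterprises often pin bucket access to a known endpoint ID, so an endpoint AWS conjured up automatically would not be on the allow list, and traffic that used to match aws:SourceIp via NAT would stop matching once it flows through a VPCE. A sketch of that kind of policy (bucket name and endpoint ID are placeholders):

```shell
# Deny all access to the bucket unless the request arrives via one
# specific, explicitly provisioned gateway endpoint.
aws s3api put-bucket-policy --bucket example-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnlessKnownEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::example-bucket",
      "arn:aws:s3:::example-bucket/*"
    ],
    "Condition": {
      "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
    }
  }]
}'
```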

wulfstan|3 months ago

These are all good arguments. Then do the opposite and block S3 access from VPCs by default. That would violate none of those.

“We have no idea what your intent is, so we’ll default to routing AWS-AWS traffic expensively” is way, way worse than forcing users to be explicit about their intent.

Minimal is a laudable goal - but if a footgun is the result then you violate the principle of least surprise.

I rather suspect the problem with issues like this is that they mainly catch the less experienced, who aren’t an AWS priority because they aren’t where the Big Money is.

ElectricalUnion|3 months ago

> Auto-editing them risks breaking zero-trust boundaries and traffic-inspection paths.

How are you inspecting zero-trust traffic? Not at the gateway/VPC level, I hope, as naive DPI there will break zero-trust.

If it breaks closed as it should, then it is working as intended.

If it breaks open, guess it was just useless pretend-zero-trust security theatre then?

coredog64|3 months ago

If you use the AWS console, it's a tick box to include this.

MrDarcy|3 months ago

No professional engineer uses the AWS console to provision foundational resources like VPC networks.

raw_anon_1111|3 months ago

If you are creating a VPC from the console that might be a reasonable default. But any serious implementation is going to be using IaC - like they were - and I would expect to spell out everything explicitly.
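And spelling it out explicitly is cheap: whether in Terraform, CloudFormation, or plain CLI, the opt-in is a single resource. A CLI sketch with placeholder IDs (the com.amazonaws.us-east-1.s3 service name is the real format; the VPC and route-table IDs are made up):

```shell
# Create the free S3 gateway endpoint and inject the S3 prefix list
# into the listed route tables.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
```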

scotty79|3 months ago

> This happens so often that the S3 VPC endpoint should be setup by default when your VPC is created.

It's a free service after all.

patabyte|3 months ago

> which every old-school Linuxer should be able to do in their sleep.

Oof, this hit home, hah.

withinboredom|3 months ago

Or just run bare metal + garage and call it a day.

perching_aix|3 months ago

I personally prefer to just memorize the data and recite it really quickly on-demand.

Only half-joking. When something grossly underperforms, I do often legitimately just pull up calc.exe and compare the throughput to the number of employees we have × 8 kbit/min [0], see who would win. It is uniquely depressing yet entertaining to see this outperform some applications.

[0] spherical cow type back of the envelope estimate, don't take it too seriously; assumes a very fast 200 wpm speech, 5 bytes per word, and everyone being able to independently progress
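Under the footnote's assumptions, the per-person figure works out like this (the headcount here is made up for illustration):

```shell
# Spherical-cow recital bandwidth: 200 words/min of very fast speech,
# 5 bytes/word, 8 bits/byte.
WPM=200
BYTES_PER_WORD=5
PER_PERSON_BITS_PER_MIN=$((WPM * BYTES_PER_WORD * 8))   # 8000 bit/min = 8 kbit/min

# Hypothetical headcount, everyone reciting in parallel.
EMPLOYEES=500
TOTAL_KBIT_PER_MIN=$((PER_PERSON_BITS_PER_MIN * EMPLOYEES / 1000))
echo "${TOTAL_KBIT_PER_MIN} kbit/min aggregate"   # prints "4000 kbit/min aggregate"
```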

iso1631|3 months ago

Or colocate your bare metal in two or three data centres for resilience against environmental issues and single-supplier risk.