
jackpirate | 2 years ago

I don't see how that would have helped in this case. This was not a resource at a known location that was supposed to be only available to logged in users. This was a resource that the admins didn't know about available at an unknown url that was exposed to the public internet due to a configuration error. Are you going to write a test case for every possible url in your server to make sure it's not being exposed?

Something that could work is including a random hash as a hidden email address inside every client, and then regularly searching outbound traffic for that hash. But that would be rather expensive.
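
The search side of that idea is cheap, at least. A minimal sketch in Python (the canary value and log format are made up for illustration):

```python
import re

# Hypothetical canary value; in practice this would be a unique random
# hash seeded into every client build.
CANARY = "c0ffee42deadbeef"

def find_canary(lines, canary=CANARY):
    """Return the lines (e.g. from an outbound-traffic log) that contain the canary."""
    pattern = re.compile(re.escape(canary))
    return [line for line in lines if pattern.search(line)]
```

The expensive part is getting all outbound traffic somewhere you can grep it, not the grep itself.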


miki123211|2 years ago

Or a canary token[1] (which won't help you find out that the vulnerability exists, but will hopefully alert you when it actually gets exploited).

[1] https://canarytokens.org

toomuchtodo|2 years ago

n=1, head of security at a fintech. We perform automated scans of external-facing sensitive routes and pages after deploys, checking for PII, PAN, and SPI indicators, kicked off by GitHub Actions. We also use a WAF with two-person config change reviews (change management), which helps prevent unexpected routes or parts of web properties being made public due to continuous integration and deployment practices (balancing dev velocity with security/compliance concerns).
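
The core of such a scan is a set of indicator patterns run against response bodies. A rough sketch in Python (the patterns are illustrative; real PII/PAN detection needs much more care, e.g. Luhn checks and context rules to cut false positives):

```python
import re

# Illustrative indicator patterns only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "pan": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # rough card-number shape
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_body(body):
    """Return the names of the PII indicators found in a response body."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(body))
```

A CI job would fetch each sensitive route after deploy, run `scan_body` on the response, and fail the pipeline on any hit.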

Not within the resources of all orgs of course, but there is a lot of low hanging fruit through code alone that improves outcomes. Effective web security, data security, and data privacy are not trivial.

15457345234|2 years ago

> fintech

You keep your business logic and account handling code on github?

Not an accusation, genuinely asking.

delano|2 years ago

You don't need to check every one though. Or any. You create a known account with known content in it (similar to your hash idea) and monitor that.

Even if they never got around to automating it and were highly laissez-faire, manually checking that account with those test cases, say, once a month would have caught this within 30 days. That still sucks, but it's at least an order of magnitude less suck than the situation they're in now.
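
Automating it is also only a few lines. A sketch in Python (the URL and marker are hypothetical; the idea is to fetch the canary account's data without credentials and alert if its known content is publicly visible):

```python
import urllib.request
import urllib.error

def canary_exposed(url, marker, timeout=10):
    """Fetch `url` with no credentials and report whether the canary
    account's known marker string is publicly visible there."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return marker in resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, TimeoutError):
        return False
```

Run it from cron against the canary account's pages; any `True` means the known content leaked somewhere it shouldn't be.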

blincoln|2 years ago

If the screenshot in the article isn't edited, this was an HTTP service exposed to the internet on an unusual port (81). I'd propose the following test cases:

1) Are there any unexpected internet-facing services?

* Once per week (or per month, if there are thousands of internet-facing resources) use masscan or similar to quickly check for any open TCP ports on all internet-facing IPs/DNS names currently in use by the company.

* Check the list of open ports against a very short global allowlist of port numbers. In 2024, that list is probably just 80 and 443.

* Check each host/port combination against a per-host allowlist of more specific ports. e.g. the mail servers might allow 25, 465, 587, and 993.

* If a host/port combination doesn't match either allowlist, alert a human.

Edit: one could probably also implement this as a check when infrastructure is deployed, e.g. "if this container image/pod definition/whatever is internet-facing, check the list of forwarded ports against the allowlists". I've been out of the infrastructure world for too long to give a solid recommendation there, though.
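
The allowlist logic itself is simple either way. A minimal sketch in Python (hostnames and allowed ports are illustrative; masscan or similar would supply the discovered host/port pairs):

```python
# Global allowlist: ports acceptable on any internet-facing host.
GLOBAL_ALLOW = {80, 443}

# Per-host allowlist for hosts with legitimate extra services.
PER_HOST_ALLOW = {
    "mail.example.com": {25, 465, 587, 993},  # hypothetical mail server
}

def unexpected_services(discovered):
    """Return (host, port) pairs matching neither allowlist,
    i.e. the ones to escalate to a human."""
    return [
        (host, port)
        for host, port in discovered
        if port not in GLOBAL_ALLOW
        and port not in PER_HOST_ALLOW.get(host, set())
    ]
```

A port-81 HTTP service like the one in the article would land straight in that alert list.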

2) Every time an internet-facing resource is created or updated (e.g. a NAT or load-balancer entry from public IP to private IP is changed, a Route 53 entry is added or altered, etc.), automatically run a vulnerability scan using a tool that supports customizing the checks. Make sure the list of checks is curated to pre-filter any noise ("you have a robots.txt file!"). Alert a human if any of the checks come up positive.

OpenVAS, etc. should easily flag "directory listing enabled", which is almost never something you'd find intentionally set up on a server unless your organization is a super old-school Unix/Linux software developer/vendor.

Any decent commercial tool (and probably OpenVAS as well) should also have easily flagged content that disclosed email addresses, in this case.
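
Even without a full scanner, both of those particular checks are easy heuristics. A rough sketch in Python (the patterns key off common Apache/nginx autoindex titles and a loose email shape; a real tool does far more):

```python
import re

# Apache and nginx auto-generated index pages start with a title like "Index of /".
LISTING_RE = re.compile(r"<title>\s*Index of /", re.IGNORECASE)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def quick_findings(html):
    """Flag the two issues from this incident in a fetched page."""
    findings = []
    if LISTING_RE.search(html):
        findings.append("directory listing enabled")
    if EMAIL_RE.search(html):
        findings.append("email address disclosed")
    return findings
```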

3) Pay for a Shodan account. Set up a recurring job to check every week/month/whatever for your organization name, any public netblocks, etc. Generate a report of anything that was found during the current check that wasn't found during the previous check, and have a human review it. This one would take some more work, because there would need to be a mechanism for the human(s) to add filtering rules to weed out the inevitable false positives.
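
The diff step might look like this in Python (assuming Shodan-style result dicts, which carry `ip_str` and `port` fields; keying on those rather than the full banner keeps cosmetic banner changes from re-alerting):

```python
def new_findings(previous, current):
    """Return findings present in the current scan but absent from the
    previous one, keyed by (ip, port)."""
    seen = {(f["ip_str"], f["port"]) for f in previous}
    return [f for f in current if (f["ip_str"], f["port"]) not in seen]
```

The filtering rules for false positives would then be one more set subtraction against a human-maintained ignore list.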