icyfox | 2 months ago

I'm always a bit surprised how long it can take to triage and fix these pretty glaring security vulnerabilities. October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed. Sure, the actual bug ended up being (what I imagine to be) a <1hr fix plus the time for QA testing to make sure it didn't break anything.

Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.

Aurornis|2 months ago

In my experience, it comes down to project management and organizational structure problems.

Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.

When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.

Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.

Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that"

Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.

So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.

Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.

srrdev|2 months ago

Oh man, this is so true. In this sort of org, getting something fixed out-of-band takes a huge political effort (even for a critical issue like having your client database exposed to the world).

rvba|2 months ago

> Many security team people I've worked with were smart, but not software developers by trade.

A lot are people who cannot code at all and cannot administer systems - they just fill in tables and check boxes, maybe from some automated suite. They don't know what HTTP and HTTPS are, because they are just paper pushers, which is far from real security - more like security in name only.

And they took the job because it pays well.

tietjens|2 months ago

Great comment. Very true.

Barathkanna|2 months ago

A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.” Security fixes are often a one-hour patch wrapped in two weeks of internal routing, approvals, and “who even owns this code?” archaeology. Holiday schedules and spam filters don’t help, but organizational entropy is usually the real culprit.

Aurornis|2 months ago

> A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.”

At my past employers it was "The VP of such-and-such said we need to ship this feature as our top priority, no exceptions"

whstl|2 months ago

I once had a whole sector of a fintech go down because one DevOps person spent three months ignoring daily warning emails that an API key was about to expire and needed to be reset.

And of course nobody remembered the setup, and logging was only accessible to that same person, so figuring it out also took weeks.

ChrisMarshallNY|2 months ago

It could also be someone "practicing good time management."

They have a specific time of day when they check their email, they give only 30 minutes to it, and they work through emails from most recent down.

The email comes in two hours earlier, and by the time they check their email it's been buried under 50 spams and near-spams, each of which needs to be checked, so they run out of their 30 minutes before they get to it. The next day, by email-check time, another 400 spams have been thrown on top.

Think I'm kidding?

Many folks that have worked for large companies (or bureaucracies) have seen exactly this.

throwaway290|2 months ago

It's not about fixing it; it's about acknowledging that it exists.

ipdashc|2 months ago

security@ emails do get a lot of spam. It doesn't get talked about very much unless you're monitoring one yourself, but there's a fairly constant stream of people begging for bug bounty money for things like the Secure flag not being set on a cookie.

That said, in my experience this spam is still a few emails a day at most; I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday, like you said.
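For anyone who hasn't seen that class of report: a minimal sketch of what the missing-Secure-flag finding amounts to, using Python's standard http.cookies module (the cookie name and value are just illustrative). The fix is a line or two where the cookie is set.

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "abc123"

    # The kind of thing that gets reported over and over: the cookie would
    # also be sent over plain HTTP and is readable from JavaScript.
    print("before:", cookie["session"].OutputString())

    # The small change: restrict the cookie to HTTPS and hide it from scripts.
    cookie["session"]["secure"] = True
    cookie["session"]["httponly"] = True
    print("after: ", cookie["session"].OutputString())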

canopi|2 months ago

This.

There is so much spam from random people about meaningless issues in our docs. AI has made the problem worse. Separating the meaningful from the meaningless is a full-time job.

latchkey|2 months ago

My favorite one is the "We've identified a security hole in your website"... and I always respond quickly that my website is statically generated, with nothing dynamic, and deployed immutably on Cloudflare Pages. For some odd reason, I never hear back from them.

bfxbjuf|2 months ago

Well, we have 600 people in the global response center I work at, and the priority issue count is currently 26,000. That means it's serious enough that it's been assigned to someone. There are tens of thousands of unassigned issues because the triage teams are swamped. People don't realize that as systems get more complex, issues increase. They never decrease. And the chimp troupe's response has always been a Story - we can handle it.

londons_explore|2 months ago

The security@ inbox has so much junk these days, with someone reporting that if you paste alert('hacked') into devtools then it makes the website hacked!

I reckon only 1% of reports are valid.

LLMs can now make a plausible-looking exploit report ('there is a use-after-free bug in your server-side implementation of X library which allows shell access to your server if you time these two API calls correctly'), but the LLM has made the whole thing up. That can easily waste hours of an expert's time on a total falsehood.

I can completely see why some companies decide it'll be an office-hours-only task to go through all the reports every day.

tryauuum|2 months ago

My favorite was "we can trigger your website to initiate a connection to the server we control". They were running their own mail servers and were creating new accounts on our website. Of course someone needs to initiate a TCP connection to deliver an email message!

Of course, this could be a real vulnerability if it disclosed the real server IP behind Cloudflare. That was not the case; we were sending via an AWS email gateway.

gwbas1c|2 months ago

Not every organization prioritizes being able to ship a code change at the drop of a hat. That often requires organizational dedication to heavy automated testing and CI, which small companies often aren't set up for.

stavros|2 months ago

I can't believe that any company takes a month to ship something. Even if they don't have CI, surely they'd rather break the app (maybe even completely) than risk having all their legal documents exfiltrated.

Capricorn2481|2 months ago

> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed

I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.

giancarlostoro|2 months ago

I call that one of the worrisome outcomes of "Marketing Driven Development", where the business people don't let you do technical-debt "Stories" because you REALLY need to do work that justifies their existence in the project.

perlgeek|2 months ago

Another aspect to consider: when you reduce the permissions anything has (like the returned token here), you risk breaking something.

In a complex system it can be very hard to understand what will break, if anything. In a less complex system it can still be hard to understand when the person who knows the security model well isn't available.

jofzar|2 months ago

> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed

There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally, while also trying to figure out how fucked they are.

1 week is surprisingly not that slow.

bgbntty2|2 months ago

I'm a bit conflicted about what responsible disclosure should be, but in many cases it seems like these conditions hold:

1) the hack is straightforward to do;

2) it can do a lot of damage (get PII or other confidential info in most cases);

3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.

But, instead of insisting on immediately shutting down the affected service, we give companies weeks or months to fix the issue, during which no one is notified and it's business as usual.

I've submitted 3 very easy exploits to 3 different companies the past year and, thankfully, they fixed them in about a week every time. Yet, the exploits were trivial (as I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs because I had to have some tests done and wanted to see how secure my data was.
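A minimal sketch of that IDOR pattern and the ownership check that closes it (hypothetical names and a toy in-memory store, not any of those labs' actual code):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LabResult:
        result_id: int
        patient_id: int
        data: str

    # Toy "database": five records belonging to five different patients.
    DB = {i: LabResult(result_id=i, patient_id=1000 + i, data=f"result {i}") for i in range(1, 6)}

    def get_result_vulnerable(result_id: int) -> Optional[LabResult]:
        # No check on who is asking: walking result_id = 1..N leaks everything.
        return DB.get(result_id)

    def get_result_fixed(result_id: int, authenticated_patient_id: int) -> Optional[LabResult]:
        record = DB.get(result_id)
        # Only return the record if it belongs to the caller. Treating "not yours"
        # the same as "not found" also avoids confirming that the ID exists.
        if record is None or record.patient_id != authenticated_patient_id:
            return None
        return record

    if __name__ == "__main__":
        # The enumeration described above: just walk the ID space.
        leaked = [r for i in range(1, 6) if (r := get_result_vulnerable(i)) is not None]
        print(f"vulnerable lookup leaks {len(leaked)} records")
        print("fixed lookup for someone else's record:", get_result_fixed(3, authenticated_patient_id=9999))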

Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'll make the exploit public if they don't fix it ASAP. What happened was, again, in all 3 cases, the exploit was fixed within 1-2 days.

If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year - after a year.

And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result, or call them and write the results down yourself. Yes, it would be tedious, but more secure.

So I should've said from the beginning something like:

> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.

Now, would I make it public if they don't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it was some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I'd be the first one to find it and it wouldn't be found for a while by someone else afterwards. But ID enumerations? Come on.

So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?

I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.

As to the other comments talking about how spammed their security@ mail is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds of random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?

nl|2 months ago

Don't do this.

I understand you think you are doing the right thing, but be aware that by shutting down a medical communications service there's a non-trivial chance someone will die because of slower test results.

Your responsibility is responsible disclosure.

Their responsibility is how to handle it. Don't try to decide that for them.

ghostly_s|2 months ago

> I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.

What you're describing is likely a crime. The sad reality is most businesses don't view protection of customers' data as a sacred duty, but simply another of the innumerable risks to be managed in the course of doing business. If they can say "we were working on fixing it!" their asses are likely covered even if someone does leverage the exploit first—and worst-case, they'll just pay a fine and move on.