
Project Zero – Policy and Disclosure: 2025 Edition

102 points | esnard | 7 months ago | googleprojectzero.blogspot.com

39 comments


woodruffw|7 months ago

This policy change makes sense to me; I'm also sympathetic to the P0 team's struggle in getting vendors to take patching seriously.

At the same time, I think publicly sharing that some vulnerability was discovered can be valuable information to attackers, particularly in the context of disclosure on open source projects: it's been my experience that maintaining a completely hermetic embargo on an OSS component is extremely difficult, both because of the number of people involved and because fixing the vulnerability sometimes requires advance changes to other public components.

I'm not sure there's a great solution to this.

Shank|7 months ago

On the contrary: if Project Zero finds a 0-day in a product I know I use, and I know that product is internet-facing, I can immediately take action and firewall it off. It isn't always the case that they find things like this, but an early warning signal can be really beneficial.

For customers, it also gives them leverage to contact vendors and ask politely for news on the patch.

structural|7 months ago

There really isn't a great solution here. The notice that a vulnerability has been discovered puts even more pressure on the fix to be deployed as close to instantly as possible, throughout the entire supply chain.

Why is this? Especially for smaller or more stable open-source projects, the number of commits in a 90-day period that could be security-relevant is likely to be quite low, perhaps as low as single digits. So the specific commit that fixes the reported security issue is highly likely to be identified immediately, and now there's a race to develop and use an exploit.

As one example, a stable project that's been the target of significant security hardening and analysis is the libpng decoder. Over the past 3 months (May 1 - Jul 29), its main branch has seen 41 total commits. Of those, at least 25 were non-code changes, involving documentation updates, release engineering activities, and build system / cross-platform support. If Project Zero had announced a vulnerability in this project on May 1 with a disclosure embargo of today, there would be at most 16 commits to inspect over 3 months to find the bug. That's not a lot of work for a dedicated team.
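The triage an attacker would do here is mechanical. A minimal sketch, using a toy commit history (the hashes and paths are made up for illustration): filter a window of commits down to the ones that touch actual code, discarding docs, release engineering, and CI changes.

```python
from pathlib import PurePosixPath

# Extensions treated as security-relevant code. This is an assumption;
# real triage would also consider build scripts that affect codegen.
CODE_EXTS = {".c", ".h"}

def candidate_commits(commits):
    """Filter (sha, changed_paths) pairs down to those touching code.

    Mirrors the manual narrowing described above: documentation, release
    notes, and CI changes drop out, leaving a short list of commits to
    inspect for the security fix.
    """
    return [
        sha for sha, paths in commits
        if any(PurePosixPath(p).suffix in CODE_EXTS for p in paths)
    ]

# Toy history: only the middle commit touches decoder code.
history = [
    ("a1b2c3", ["README.md"]),
    ("d4e5f6", ["pngread.c", "png.h"]),
    ("778899", [".github/workflows/ci.yml"]),
]
print(candidate_commits(history))  # -> ['d4e5f6']
```

With 41 commits in the window and 25 of them non-code, this kind of filter leaves a dedicated team only a handful of diffs to read.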

So now, do we delay publishing security fixes to public repos and try and maintain private infrastructure and testing for all of this? And then try and get a release made, propagated to multiple layers of downstream vendors, have them make releases, etc... all within a day or two? That's pretty hard, just organizationally. No great answers here.

jms703|7 months ago

This seems like a good move. I do hope that slow-moving consumers of the software in question can start anticipating an upcoming release and construct remediation plans in advance, instead of doing that after the release.

tptacek|7 months ago

I love it; it's a big-company reformulation of the classic vulnerability researcher's "reporting transparency" process: post "Found a nasty vuln in XYZ: 6f0c848159d46104fba17e02906f52aef460ee17d1962f5ea05d2478600fce8a" (the SHA2 hash of a report artifact confirming the vuln).
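The commitment scheme described here is easy to sketch: publish only the hash of the report at announcement time, then release the report later so anyone can verify it matches. A minimal illustration (the report text and filename are hypothetical):

```python
import hashlib

def commitment(report: bytes) -> str:
    """Return the SHA-256 hex digest of a report artifact.

    Publishing only this digest proves the report existed at posting
    time without revealing its contents; releasing the report later
    lets anyone recompute the hash and verify the claim.
    """
    return hashlib.sha256(report).hexdigest()

report = b"XYZ 1.2.3: heap overflow in frobnicate(), PoC attached..."
print(f"Found a nasty vuln in XYZ: {commitment(report)}")
```

The digest reveals nothing about the bug, but once the report is published, a single recomputation ties the disclosure back to the original announcement.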

croemer|7 months ago

> This data will make it easier for researchers and the public to track how long it takes for a fix to travel from the initial report, all the way to a user's device (which is especially important if the fix never arrives!)

This paragraph is very confusing: what data is meant by "this data"? If they mean the announcement that "there's something", isn't the timeline of disclosure made public already under the current reporting policy, once everything has been opened up?

In other words, the date of the initial report is not new data. Sure, the delay before it becomes public is reduced, but the data itself isn't new, in contrast to what the paragraph suggests.

bgwalter|7 months ago

I find the stated goal of alerting downstream a bit odd. Most downstreams scan upstream web pages for releases and automatically open an issue after a new release.

Project Zero could also open a mailing list for trusted downstreams and publish the new findings there.

The real goal seems to be to increase pressure on upstream, which in our modern times ranks lowest on the open source ladder: Below distributors, corporations, security pundits (some of whom do not write software themselves and have never been upstream for anything) and demanding users.

eyalitki|7 months ago

Not sure what the measurable metric is here, or what will be considered a success in this trial period.

Propagating the fix downstream depends on the release cycles of all the downstream vendors. Giving them a heads-up will help with planning, but I doubt it will significantly impact the patching timeline.

It is far more likely that companies will get stressed that the public knows they have a vulnerability while they are still working to fix it. The pressure from these companies will probably shut this policy change down.

Also, will this policy apply to Google's own products as well?

zamadatix|7 months ago

The measure would probably be whether any of the reports led to downstreams either syncing prior to release via security-sharing channels they didn't already have established, or preparing to sync outside their normal schedule ahead of time, regardless of whether the magnitude of change is small or large. How companies would prefer the public hear about a vulnerability has always been the lowest concern in disclosures, so I don't expect this to bring anything new there.

Google's products represent 3/6 of the initial vulnerabilities following this new reporting policy on the linked reporting page.

runningmike|7 months ago

It is indeed a complex problem. But is Google now killing FOSS slowly? IMHO there is far too much emphasis on FOSS security and far too little on closed-source hardware, firmware, and software. Too much blame and pressure will not solve the complex problems stated in the blog.

some_furry|7 months ago

Shoring up the security of FOSS is not "killing FOSS slowly".

Closed source software doesn't get to benefit from the goodwill of the open source software community, which includes independent security researchers as well as orgs like P0.

I guess our disagreement can be distilled down to one question:

Why would an emphasis on closed source products help FOSS, and why would an emphasis on FOSS help closed source?

Because this seems backwards to me. Maybe it makes sense in public relations where vibes are more important than substance and nobody thinks for more than 100 milliseconds?

zamadatix|7 months ago

All 6/6 of the initial reports are for proprietary software, most of which seemed to be related to hardware offloads.

amiga386|7 months ago

If Google is adopting this, maybe rachelbythebay's vagueposting was ahead of the curve?

I jest; the vagueposting led to uninformed speculation, panic, reddit levels of baseless accusation, and harassment of the developers: https://news.ycombinator.com/item?id=43477057

I hope Google's experiment doesn't turn out the same.

diggan|7 months ago

> uninformed speculation, panic, reddit levels of baseless accusation, and harassment of the developers

To be fair, it seems like the only way of avoiding something like that is never saying anything publicly. The crowds of the internet eagerly jump into any drama, vague or not, and balloon it regardless.

fn-mote|7 months ago

> I jest; the vagueposting led to [...]

Resurrecting a 4-month-old issue that evaporated in a day or two seems like poor form to me.

Also, I believe most of the responsibility for the negative behavior should be assigned to those actually engaging in it, not to the initial post. I understand others reasonably disagree (notably about the accusations and harassment).

Tbh, it sounds like you might have been personally affected? At any rate, I certainly don't condone a mob mentality.

perching_aix|7 months ago

Speaking of, whatever came out of that? I don't see any related updates on that blog.