Hey folks. I'm the product manager for Git at GitHub. We're sorry for the breakage, we're reverting the change, and we'll communicate better about such changes in the future (including timelines).
I want to encourage you to think about locking in the current archive details, at least for archives that have already been served. Verifying that downloaded archives have the expected checksum is a critical best practice for software supply chain security. Training people to ignore checksum changes is training them to ignore attacks.
GitHub is a strong leader in other parts of supply chain security, and it can lead here too. Once GitHub has served an archive with a given checksum, it should guarantee that the archive has that checksum forever.
We updated our Git version which made this change for the reasons explained. At the time we didn't foresee the impact. We're quickly rolling back the change now, as it's clear we need to look at this more closely to see if we can make the changes in a less disruptive way. Thanks for letting us know.
We are seeing an npm install failure inside our docker builds pointing at a github URL with a SHA change. Is this possibly related?
#15 [dev-builder 4/7] RUN --mount=type=secret,id=npm,dst=/root/.npmrc npm ci
#0 4.743 npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
#0 8.119 npm WARN tarball tarball data for http2@https://github.com/node-apn/node-http2/archive/apn-2.1.4.tar.gz (sha512-ad4u4I88X9AcUgxCRW3RLnbh7xHWQ1f5HbrXa7gEy2x4Xgq+rq+auGx5I+nUDE2YYuqteGIlbxrwQXkIaYTfnQ==) seems to be corrupted. Trying again.
#0 8.164 npm ERR! code EINTEGRITY
#0 8.169 npm ERR! sha512-ad4u4I88X9AcUgxCRW3RLnbh7xHWQ1f5HbrXa7gEy2x4Xgq+rq+auGx5I+nUDE2YYuqteGIlbxrwQXkIaYTfnQ== integrity checksum failed when using sha512: wanted sha512-ad4u4I88X9AcUgxCRW3RLnbh7xHWQ1f5HbrXa7gEy2x4Xgq+rq+auGx5I+nUDE2YYuqteGIlbxrwQXkIaYTfnQ== but got sha512-GWBlkDNYgpkQElS+zGyIe1CN/XJxdEFuguLHOEGLZOIoDiH4cC9chggBwZsPK/Ls9nPikTzMuRDWfLzoGlKiRw==. (72989 bytes)
#0 8.176
#0 8.177 npm ERR! A complete log of this run can be found in:
#0 8.177 npm ERR! /root/.npm/_logs/2023-01-30T23_19_36_986Z-debug-0.log
#15 ERROR: process "/bin/sh -c npm ci" did not complete successfully: exit code: 1
This was working earlier today and the docker build/package.json haven't changed.
In my particular use-case, I'm using a set of local dev tools hosted as a homebrew tap.
The build looks up the github tar.gz release for each tag and commits the sha256sum of that file to the formula
What's odd is that all the _historical_ tags have broken release shasums. Does this mean the entire set of zip/tar.gz archives has been rebuilt? That could be a problem, as perhaps you cannot easily back out of this change...
Hyrum's Law strikes again. It kind of doesn't matter what you document. If you weren't randomizing your checksum previously [1], you can't just spring this on the community and blame it for the fallout. I'm more shocked that there's resistance from the GitHub team saying "but we documented this isn't stable". Default stance for the team should be rollback & reevaluate an alternate path forward when the scope is this wide (e.g. only generating the new tarballs for future commits going forward).
[1] Apparently googlesource did do this and just had people shift to using GitHub mirrors to avoid this problem.
But look at it from the other side. Users that don't read your documentation and expect your software to work like they imagined are just a huge pain in the ass.
This isn't even a case of "we didn't document this".
I know that the Bazel team reached out to GitHub in the past to get a confirmation that this behaviour could be relied on, and only after that was confirmed did they set that as recommendation across their ecosystem.
This is especially true of something like a git SHA, which is drilled into your head as THE stable hash of your code and git tree at a certain state. It should be expected that lots of tools use it as an identifier -- heck, I've done so myself to confirm which version of a piece of software is deployed on a particular machine, etc.
It's Microsoft. Just as the Apple of today is not the Apple of ten years ago, the GitHub of today is not the GitHub of ten years ago. It's literally different people.
The people who made the things you love have mostly moved on, and the brand is being run by different people with different values now.
There's a little bit of an argument that such things are a bait-and-switch, but such is the nature of a large and multigenerational corporation.
I didn't even know I should be depending on compression, file ordering, created-at file metadata, etc. being stable when pressing 'download repository as zip' (if I understand correctly what this is about, since the article doesn't really say). Perhaps it could be stable due to caching for a while after you first press it, but when it gets re-generated? I'm very surprised this was reproducible to begin with, given how much trouble other projects have with that.
For projects where I verify the download, gpg seems to be what all of them use (thinking of projects like etesync and restic here). Interesting that so many people relied on a zip being generated byte-for-byte identically every time.
I once had a small issue with a deployment at work because of ordering issues within a zip file. That order is important with Spring, since it determines which classes are initialized first.
There are lots of methods to solve this problem - I imagine this was just easiest at the time given it appeared to work. Bazel devs on the list are discussing the best approach going forward - a simple change is to upload a fixed copy as a release artifact.
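That "fixed copy" approach can be sketched as follows: build the archive once with the usual sources of nondeterminism pinned (member order, timestamps, owners, gzip mtime), then upload the resulting bytes as a release asset. The helper name is hypothetical, and bit-reproducibility across zlib versions is still not guaranteed, which is exactly why you upload the artifact once rather than regenerate it:

```python
import gzip, hashlib, io, tarfile

def deterministic_targz(files: dict[str, bytes]) -> bytes:
    """Pack files into a .tar.gz that is byte-identical on every run
    (for a given Python/zlib): sorted member order, zeroed timestamps
    and owners, and a fixed gzip header mtime."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):
            info = tarfile.TarInfo(name=name)
            info.size = len(files[name])
            info.mtime = 0           # no wall-clock leakage into the tar
            info.uid = info.gid = 0  # no local uid/gid leakage
            tar.addfile(info, io.BytesIO(files[name]))
    return gzip.compress(buf.getvalue(), mtime=0)  # fixed gzip mtime

# Two runs produce the same bytes, hence the same checksum.
a = deterministic_targz({"src/main.c": b"int main(void){return 0;}\n"})
b = deterministic_targz({"src/main.c": b"int main(void){return 0;}\n"})
assert hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest()
```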
> Hey folks. I'm the product manager for Git at GitHub. We're sorry for the breakage, we're reverting the change, and we'll communicate better about such changes in the future (including timelines).
I wonder what the monetary cost of this change was in lost productivity. We noticed this issue a bit before noon, tracked it down to GH, sent out company-wide comms notifying others of the problem, filed tickets with GH, had to modify numerous repos across multiple teams, and now it's 3pm and I'm here reading about it.
It's crazy how such a seemingly innocuous change could lead to such widespread loss in productivity across the globe.
Our conda-forge package builds broke. We had someone declare to us that tag downloads were never stable, just releases. This is the opposite of the previously known status quo - but it does go some way toward demonstrating how poorly the actual guarantees of this system were understood.
The change was upstream from git itself, and it was to use the builtin (zlib-based) compression code in git, rather than shelling out to gzip.
But would the gzip binary itself give reproducible results across versions of gzip (and zlib)? Intuition seems to suggest it wouldn't, at least not always. And if not, was the "strategy" just to never update gzip or zlib on GitHub's servers? That seems like a non-starter...
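The gzip format itself shows why byte-stability was never a safe assumption: the header embeds an mtime field, so identical input can compress to different bytes depending on metadata alone. Python's gzip module makes this easy to see:

```python
import gzip, hashlib

data = b"identical source tree contents"

# Same input, two different header mtimes -> different archive bytes...
a = gzip.compress(data, mtime=0)
b = gzip.compress(data, mtime=1600000000)
assert a != b

# ...and therefore different checksums,
assert hashlib.sha256(a).hexdigest() != hashlib.sha256(b).hexdigest()

# even though both decompress to the same content.
assert gzip.decompress(a) == gzip.decompress(b)
```

Changing compression level, gzip implementation, or zlib version can shift the bytes the same way, with the decompressed content unchanged.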
Does anyone have the motivation for why the git project wants to use their own implementation of gzip? Did this implementation already exist and was being used for something else?
I understand wanting fewer dependencies, but my gut reaction is that it's a bad move in the unsafe world of C to rewrite something that already has a far more audited, ubiquitous implementation.
“Their own” implementation is just zlib, already in use throughout git since the dawn of the project for other purposes like blob storage [1].
Depending on how you measure it, zlib might be considered significantly more ubiquitous than gzip itself. At any rate it’s certainly no less battle tested.
It was publicly known for many years that Github was breaking the consistency of its automatically generated git archives. Here is a bug filed on a project to stop relying on generated github archives (as opposed to stable git-archive(1) output):
At some point it was impossible to go a few weeks (or even days) without a github archive change (depending on which part of the "CDN" you hit), I guess they must have stabilized it at some point. Here is an old issue before GitHub had a community issue tracker:
I always anticipated something like this could happen and it bothered me enough to create my own workflow [1] to archive, hash, and attach it to each release automatically for my AUR package. I can see how most people wouldn't notice/bother with such a small detail though, so I am not at all surprised by the fallout this caused.
I can't fathom how no one internally at Microsoft-Github realized how widespread the breakage would be before rolling this out to all public users.
Surely, Microsoft-Github's own internal builds would have started failing as a result of this change? Or do they not canary releases internally at all?
Do they let Github generate the archives as one of the build rules instead of performing the archival and compression locally and uploading the result?
Lol... I was burned by this just about an hour ago. Cloned a repo, did a build of the project (which uses Bazel to fetch dependencies) and it reported errors due to mismatched checksums.
The fact that this is causing problems seems like a flaw in Bazel, imo. Nix, for example, calculates a hash of the contents of a tarball, rather than a hash of the tarball itself.
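A simplified illustration of that distinction (not Nix's actual NAR serialization): hash the extracted members in a normalized order, and the result survives recompression even though the archive bytes, and hence the archive hash, change:

```python
import gzip, hashlib, io, tarfile

def make_targz(mtime: int) -> bytes:
    """Build a tiny .tar.gz whose gzip header mtime differs per call,
    standing in for GitHub regenerating an archive with new settings."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="hello.txt")
        payload = b"hello world\n"
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return gzip.compress(buf.getvalue(), mtime=mtime)

def content_hash(targz: bytes) -> str:
    """Hash the extracted members (sorted by name), not the archive bytes."""
    h = hashlib.sha256()
    with tarfile.open(fileobj=io.BytesIO(targz), mode="r:gz") as tar:
        for member in sorted(tar.getmembers(), key=lambda m: m.name):
            h.update(member.name.encode())
            f = tar.extractfile(member)
            if f is not None:
                h.update(f.read())
    return h.hexdigest()

a, b = make_targz(0), make_targz(1700000000)
assert a != b                                           # archive bytes differ
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
assert content_hash(a) == content_hash(b)               # contents agree
```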
I remember a similar breakage happening before due to internal git changes, and thought it was common knowledge to upload your own signed tarballs for releases.
I wonder if this incident will encourage our industry to build more robust forms of artifact integrity verification, or if we will instead codify the status quo of "we guarantee repos to be archived deterministically." To me, the latter seems like a more troubling precedent.
We’ve regressed from the previous norm of open source projects providing stable source tarballs with fixed checksums, sometimes even with cryptographic signatures.
This is being driven in industry by the push by US FedGov (via NIST) to have supply chain verification after the recent hacks.
POTUS issued an EO and NIST has been following up, leading to the promotion of schemes such as SPDX: https://tools.spdx.org/app/about/
Where I work, we are also required to start documenting our supply chain as part of the (new, replacing PCI-DSS) PCI-SSF certification requirements, which require end-to-end verification of artifacts deployed within PCI scope.
So really, the arguments about CPU time etc are basically silly. The use of SHA hashes for artifacts that don't change will be a requirement for anyone building industrial software, or supplying to government, or in the money transacting business.
Files uploaded to GH Packages are not modified by GitHub.
Only the "Source Code (.zip)" and "Source Code (.tgz)" files that are part of releases and tags are affected, because git generates them on demand and git does not guarantee hash stability.
If you upload a package to GH Packages or upload a release asset to a GitHub release, those are never modified, and you can rely on those hashes.
Now I’m having a laugh at all those times someone tried to explain to me that vendoring dependencies doesn’t make sense, when you have package managers which verify checksums of the things downloaded from GitHub/wherever. A good laugh.
This is a false choice. First, "vendoring" is much more of a mess than this is; second, there's no reason to rely on these on-the-fly tarballs for anything when proper versioned software releases exist.
Github has pretty much a one-click (or one API call) workflow to create properly versioned and archived tarballs. Just because lots of people try to skirt proper version management doesn't mean you should commit the world into your repo.
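For reference, that one API call is presumably GitHub's create-release endpoint (`POST /repos/{owner}/{repo}/releases`); a sketch that only constructs the request, with placeholder owner/repo/tag:

```python
import json

API_ROOT = "https://api.github.com"

def build_create_release_request(owner: str, repo: str, tag: str) -> tuple[str, bytes]:
    """Construct the URL and JSON body for GitHub's create-release endpoint.

    The real call is an authenticated POST; stable release assets are then
    uploaded separately and are never regenerated by GitHub afterwards.
    """
    url = f"{API_ROOT}/repos/{owner}/{repo}/releases"
    body = json.dumps({"tag_name": tag, "name": tag}).encode()
    return url, body

url, body = build_create_release_request("someorg", "sometool", "v1.2.3")
```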
Did people not know this? Honest question. I ran into this a few times before this change, so I assumed it was widespread knowledge and mirrored everything.
How would anyone (outside of GH) have known this? The checksums had been stable for years, and this issue resulted from an internal update to the version of Git being used. It also was not publicized until this ex post facto blog post.
True, a small percentage will always be impacted by even the tiniest of changes. But this was not that: checksums all over the place started breaking, as lots of FOSS is hosted on GitHub and lots of infrastructure depends on checksums remaining the same, otherwise it errors out (correctly).
vtbassmatt|3 years ago
Also posted here: https://github.com/bazel-contrib/SIG-rules-authors/issues/11...
rfoo|3 years ago
GPG signs a hash of the message with the private key, and you verify that the signature matches the file hash.
Oh wait, what hash? :clown:
pxc|3 years ago
Did cache hits save you? Did cache misses break your builds?
nemetroid|3 years ago
https://public-inbox.org/git/1328fe72-1a27-b214-c226-d239099...
[1] https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
Aissen|3 years ago
https://bugzilla.tianocore.org/show_bug.cgi?id=3099
https://github.com/isaacs/github/issues/1483
I am glad this is getting more attention, maybe now github will finally have a stable endpoint for archives.
[1] https://github.com/elesiuta/picosnitch/blob/master/.github/w...
ilyt|3 years ago
"didn't read every commit in new version of git, realized after the fact"
rfoo|3 years ago
On the other hand this goes against the "verify before parse" principle so I have mixed feelings on Nix's approach.
1letterunixname|3 years ago
Tar/zipball archives on the same ref never have a stable hash.
Forever problem 1: No sha256/512/3 hashes of said tar/zipballs.
Forever problem 2: No metalinks for those.
Forever problem 3: Not IPv6. Some of our network is IPv6 only.
Forever problem 4: Hitting secondary rate limiting because I can browse fast.
pabs3|3 years ago
https://diffoscope.org/
You can try it online here:
https://try.diffoscope.org/
groestl|3 years ago
and relies on checksumming ephemeral artefacts for integrity.
philipwhiuk|3 years ago
Anyone remember the craziness when Homebrew had problems with using GitHub for the same thing?
yakubin|3 years ago
Keep it simple, just vendor your deps.