This is a great example of why `pull_request_target` is fundamentally insecure, and why GitHub should (IMO) probably just remove it outright. Conventional wisdom holds that `pull_request_target` is "safe" as long as branch-controlled code is never executed in the context of the job, but these kinds of argument-injection and local-file-inclusion vectors demonstrate that the vulnerability surface is significantly larger.
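For readers who haven't seen the pattern, here is a minimal sketch (all repo and script names hypothetical) of the classic foot-gun: a `pull_request_target` job runs with the base repository's secrets and a write-capable token, yet checks out the attacker's head ref.

```yaml
# DANGEROUS sketch, do not copy: privileged context + attacker-controlled code.
on: pull_request_target

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the PR head, i.e. code the fork author fully controls.
          ref: ${{ github.event.pull_request.head.sha }}
      # Hypothetical step: this script arrives from the attacker's branch,
      # but runs with the base repo's token and secrets in scope.
      - run: ./scripts/lint.sh
```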
At the moment, the only legitimate uses of `pull_request_target` are for things like labeling and auto-commenting on third-party PRs. But there's no reason for these actions to have default write access to the repository; GitHub can and should be able to grant fine-grained or (even better) single-use tokens that enable those exact operations.
(This is why zizmor blanket-flags all use of `pull_request_target` and other dangerous triggers[1].)

[1]: https://docs.zizmor.sh/audits/#dangerous-triggers
I don't disagree... but there is a use case for orgs that don't allow forks. Some tools do their merging outside of GitHub and thus allow PRs that can never be clean from a merge perspective. These won't trigger `pull_request` workflows, because `pull_request` requires a clean merge. In those cases `pull_request_target` is literally the only option.
The best move would be for GitHub to have a setting that allows automation to run on PRs that don't have clean merges, off by default and really intended for use with linters only. Until that happens, though, `pull_request_target` is the only game in town to get around that limitation, much to my and other SecDevOps engineers' sadness.
NOTE: with these external tools you absolutely cannot do the merge manually in GitHub unless you want to break the entire thing. It's a whole heap of not fun.
Inside private repos we use `pull_request_target` because (1) it runs the workflow as it exists on main, and therefore provides a surface where untampered-with test suites can run, and (2) it provides a deterministic `job_workflow_ref` in the `sub` claim of the JWT, which can be used for highly fine-grained access control in OIDC-enabled systems from the workflow.
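For context, the OIDC token a job receives carries claims along these lines (values are illustrative, not from the article; GitHub lets organizations customize which claims are folded into `sub`):

```json
{
  "sub": "repo:acme/private-repo:ref:refs/heads/main",
  "job_workflow_ref": "acme/private-repo/.github/workflows/tests.yml@refs/heads/main",
  "repository": "acme/private-repo",
  "event_name": "pull_request_target"
}
```

A cloud IAM trust policy can then match on `job_workflow_ref`, so only the workflow file as committed to main can assume the role, which is the determinism the comment above is relying on.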
> This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does. This prevents execution of unsafe code from the head of the pull request that could alter your repository or steal any secrets you use in your workflow.
Which is comical given how easily secrets were exfiltrated.
Had the Nix team rolled out signed commits/reviews and independent signed reproducible builds as my (rejected) RFC proposed, then it would not be possible to do any last-mile supply chain attacks like this.
In the end NixPkgs wants to be Wikipedia-easy for any rando to modify, and fears any attempt at security will make volunteers run screaming, because they are primarily focused on being a hobby distro.
That's just fine, but people need to know this, and stop using and promoting Nix in security-critical applications.
An OS that will protect anything of value must have strict two-party hardware signing requirements on all changes, and must not place trust in any single computer or person, following a decentralized trust model.
Hey! First, a disclaimer: I do not speak for anyone officially, but I am a very regular contributor to nixpkgs and have been involved in trying to increase nixpkgs' security through adopting the Full-Source Bootstrap that Guix and Stagex use. I also assume that the RFC you're talking about is RFC 0100, "Sign Commits" (ref: https://github.com/NixOS/rfcs/pull/100).
As mentioned in the RFC discussion, the major blocker with this is the lack of an ability for contributors to sign from mobile devices. Currently, building tooling for mobile devices is way out of scope for nixpkgs, and would be a large time sink for very little gain over what we have now. Further, while I sign my commits because I believe it is a good way to slightly increase the provenance of my commits, there is nothing preventing me from pushing an unsigned commit, or a commit with an untrusted key, and that's, in my opinion, fine. For a project like Stagex (whose security work, as a casual cybersecurity enthusiast and researcher, I thoroughly appreciate), this layer of security is important, as it's clearly part of the security posture of the project; nixpkgs takes a different view of trustworthiness. While I disagree with your conclusion that having this sort of security measure would "make volunteers run screaming", I would be interested in seeing statistics on the usage of these mechanisms in nixpkgs already. Nixpkgs is also definitely not focused on being a hobby distro, considering it's in use at many major companies around the world (just look at NixCon 2025's sponsor list).
To be clear, this isn't to say that all security measures are worthless. Enabling more usage of security features is a good thing, and it's something I know folks are looking into (but I'm not going to speak for them), so this may change in the future. However, I do agree with the consensus that for nixpkgs, requiring commit signing would be very bad overall for the ecosystem, despite its advantages. Also, I didn't see anything in your PR about "independent signed reproducible builds", but for a project the size of nixpkgs this would also be a massive infrastructure undertaking for a third party. NixOS is very close to being fully reproducible (https://reproducible.nixos.org/), but we're not there yet.
In conclusion, while I agree that signing commits would be a good improvement, the downsides for nixpkgs are significant enough that I don't believe it would be a good move. It's definitely something to continue thinking about as nixpkgs and nix continue to refine and work on their security practices, though. I would also love some more information about how Stagex does two-party hardware signing, as that sounds interesting as well. Thank you so much!
Edit: Also, I want to be very clear: I am not saying you're entirely wrong, or trying to disparage the very interesting and productive work that Stagex is doing. However, there were some (what I felt were) misconceptions I wanted to clear up.
I find it rather embarrassing that, after all these years of trying to design computer systems, modern workflows are still designed so that bearer tokens, even short-lived, are issued to trusted programs. If the GitHub action framework gave a privileged Unix socket or ssh-agent access instead, then this type of vulnerability would be quite a lot harder to exploit.
Bearer tokens should be replaced with schemes based on signing, and the private keys should never be directly exposed (if they are, there's no difference between them and a bearer token). Signing agents do just that. GitHub's API is based on HTTP, but mutual TLS authentication with a signing agent should be sufficient.
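As a rough illustration of the signing-agent idea (paths, key names, and the `api` namespace below are all invented; with a real agent or HSM the private key would never touch disk at all), OpenSSH's `ssh-keygen -Y` machinery can sign and verify arbitrary payloads:

```shell
cd "$(mktemp -d)"

# Generate a keypair; in an agent/HSM setup the private half stays locked away.
ssh-keygen -t ed25519 -N "" -f ci_key -q

# The "API request" we want to authenticate, instead of attaching a bearer token.
printf 'POST /repos/acme/app/issues\n' > request

# Sign it; when an ssh-agent holds the key, -f can point at just the public key.
ssh-keygen -Y sign -f ci_key -n api request        # writes request.sig

# Server side: verify against an allow-list of public keys.
awk '{print "ci@acme", $1, $2}' ci_key.pub > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I ci@acme -n api -s request.sig < request
```

Unlike a bearer token, a captured `request.sig` only authenticates that one payload; it can't be replayed against other API endpoints.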
CI/CD actions for pull/merge requests are a nightmare. When a developer writes test/verification steps, they are mostly in the mindset "this is my code running in the context of my github/gitlab account", which is true for commits made by themselves and their team members.
But then in a pull request, the CI/CD pipeline actually runs untrusted code.
Getting this distinction correct 100% of the time in your mental model is pretty hard.
For the base case, where you maybe run a test suite and a linter, it's not too bad. But then you run into edge cases where you have to integrate with your own infrastructure (either for end-to-end tests, or for checking whether contributors have submitted CLAs, or anything else that requires a bit more privs), and then it's very easy for it to bite you.
I don't think the problem is CI/CD runs on pull requests, per se: it's that GitHub has two extremely similar triggers (`pull_request` and `pull_request_target`). One of these is almost entirely safe (you have to go out of your way to misuse it), while the other is almost entirely unsafe (it's almost impossible to use safely).
To make things worse, GitHub has made certain operations on PRs (like auto-labeling and leaving automatic comments) completely impossible unless the extremely dangerous version (`pull_request_target`) is used. So this is a case of incentive-driven insecurity: people want to perform reasonable operations on third-party PRs, but the only mechanism GitHub Actions offers is a foot-cannon.
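Until GitHub offers such single-use tokens, the closest existing mitigation (a narrowing of the blast radius, not a fix) is the workflow-level `permissions:` key, which shrinks the default token to just the scopes labeling and commenting need. A hedged sketch:

```yaml
on: pull_request_target

# Shrink the default GITHUB_TOKEN: no repository write access, only PR metadata.
permissions:
  contents: read
  pull-requests: write

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # Safe-ish only because nothing from the PR branch is checked out or run.
      - run: echo "label / comment via the REST API here"
```

This still hands write access to PR metadata to a dangerous trigger, so the foot-cannon critique above stands; it just limits what a stolen token can do.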
> If you’ve read the man page for xargs, you’ll see this warning:
>> It is not possible for xargs to be used securely
However, the security issue this warning relates to is not the one that's applicable here. The issue here can be avoided by putting `--` at the end of the command.
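A quick way to see the difference (filenames invented for the demo): feed `xargs` a file list containing an option-like entry, and watch where `--` stops option parsing.

```shell
cd "$(mktemp -d)"
printf 'data\n' > real.txt

# Attacker-influenced file list: the first "filename" looks like a flag.
printf -- '--version\nreal.txt\n' > changed_files

# Without --, cat parses --version as an option (argument injection):
xargs cat < changed_files || true

# With --, both entries are treated as filenames; the bogus one merely errors.
xargs cat -- < changed_files 2>/dev/null   # prints: data
```

With a tool that has more dangerous flags than `cat` (or that executes code via an option), the unprotected form escalates straight to code execution.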
There's a huge footgun in that article that has broader impact:
> but it gets worse. since the workflow was checking out our PR code, we could replace the OWNERS file with a symbolic link to ANY file on the runner. like, say, the github actions credentials file
So git allows committing symbolic links, and thus the issue above could affect almost any workflow.
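A self-contained sketch of that trick (the file names are invented; in the article the symlink target was the Actions credentials file on the runner):

```shell
cd "$(mktemp -d)"

# Stand-in for a sensitive file that exists on the CI runner.
printf 'secret-token\n' > runner_credentials

git init -q repo && cd repo

# Commit a symlink named OWNERS instead of a regular file.
ln -s ../runner_credentials OWNERS
git add OWNERS
git -c user.email=pr@example.com -c user.name=pr commit -qm "add OWNERS"

# Any workflow step that innocently reads OWNERS now reads the target instead.
cat OWNERS   # prints: secret-token
```

Checkout actions will happily recreate the symlink, so any "read a config file from the PR" step is a potential arbitrary-file read.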
Yes, but IIRC when you run `pull_request_target` the credentials are to the target repository - i.e. the one you're merging into. When you run `pull_request`, it's to the source repository, the one the attacker is in control of.
As time goes on, I find myself increasingly worried about supply chain attacks—not from a “this could cost me my job” or “NixOS, CI/CD, Node, etc. are introducing new attack vectors” perspective, but from a more philosophical one.
The more I rely on, the more problems I’ll inevitably have to deal with.
I’m not thinking about anything particularly complex—just using things like VSCode, Emacs, Nix, Vim, Firefox, JavaScript, Node, and their endless plugins and dependencies already feels like a tangled mess.
Embarrassingly, this has been pushing me toward using paper and the simplest, dumbest tech possible—no extensions, no plugins—just to feel some sense of control or security. I know it’s not entirely rational, but I can’t shake this growing disillusionment with modern technology. There’s only so much complexity I can tolerate anymore.
Emacs itself is probably secure, and you can easily audit every extension; but if you blindly update every extension via a nicely composable Emacs Nix configuration, you would indeed have a problem.
I guess one could automate finding obvious exploits via LLMs and, if the LLM finds something, abort the update.
The right solution is to use Coq and just formally verify everything in your organization, which incidentally means throwing away 99.999% of software ever written.
> It is not possible for xargs to be used securely
Eh... that is taken out of context quite a bit; the sentence does continue. Just do `cat "$HOME/changed_files" | xargs -r editorconfig-checker --` and this specific problem is fixed.
Yeah, I don't think the specific reason for that sentence in the manpage applies here. But the general sentiment is correct: not all programs support `--` as a delimiter between arguments and inputs, so many xargs invocations are one argument injection away from arbitrary code execution.
(This is traditionally a non-issue, since the whole point is to execute code. So this isn't xargs' fault so much as it's the undying problem of tools being reused across privilege contexts.)
Not to defend pull_request_target, it is dangerous... But am I the only one who thinks it was a stretch to say "just like that, we had a github actions token with read/write access to nixpkgs"?
They were able to dump arbitrary files to the logs, but secrets are automatically masked with *** in the logs. How could they exfiltrate the token?
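One well-known answer (a generic technique, not necessarily what the article's authors did, and the secret value below is made up): the log masking matches the secret's exact text, so any trivial transformation of it passes through unmasked.

```shell
# Pretend this value came from the runner's credentials file.
SECRET="hunter2"

# The literal value would be shown as *** in the workflow logs...
echo "$SECRET"

# ...but an encoded copy doesn't match the masker's pattern and leaks intact.
printf '%s' "$SECRET" | base64   # prints: aHVudGVyMg==
```

Reversing the string, inserting separators, or hex-encoding works just as well; the attacker decodes the value offline.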
Well, the "good" news is, OpenBSD and NetBSD still use CVS, even for packages, so this will not work on those systems. I do not know about FreeBSD. Security by obscurity :)
But I have been seeing docs indicating those projects are looking to move to git; we'll see if it really happens. In OpenBSD's case it seems it will be based upon got(1).
Just to make it clear: what you say is correct, but this is not a git vulnerability, it's a GitHub Actions vulnerability. That is, the BSDs are secured by CVS only because GitHub doesn't do CVS. If you use git, and even GitHub, but don't do CI/CD using GitHub Actions, you are not affected by this.
cookiengineer|4 months ago
Remember the Python packages that got pwned with a malicious branch name that contained shellshock-like code? Yeah, that incident.
I blogged about all vulnerable variables at the time and how the attack works from a pentesting perspective [1].
[1] https://cookie.engineer/weblog/articles/malware-insights-git...
lrvick|4 months ago
Shameless plug: this is why we built Stagex. https://stagex.tools https://codeberg.org/stagex/stagex/ (Don't worry, not selling anything; it is and will always be 100% free to the public.)
otabdeveloper4|4 months ago
Mutual TLS isn't used by anyone because nobody actually gives a shit about security; the entire industry is basically a grift.
hombre_fatal|4 months ago
If you have to opt in to safe usage at every turn, then it's an unsafe way of doing things.