
cfreksen | 1 year ago

I am a bit fascinated[1] by the "verified" version, as it fetches from the same URL twice. At first I found it inefficient, but since they are making these requests on every zsh startup, one extraneous request is probably not seen as a performance problem. Then I realised that the data they verify the hash of is not the same copy of the data that they load: an attacker controlling the server at the curl'ed URL could serve a different file on the second request, which in turn reminded me of a blog post describing how to detect `curl | bash` server side[2][3].
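To make the flaw concrete, here is a hypothetical sketch (not zi's actual code) where a fake `fetch` function stands in for the remote server and serves different content on each request, exactly as a malicious server could:

```shell
#!/bin/sh
# Hypothetical sketch: "fetch" simulates a server that returns a benign
# script on the first request and a swapped payload on the second.
count_file=$(mktemp)
echo 0 > "$count_file"
fetch() {
  n=$(cat "$count_file")
  echo $((n + 1)) > "$count_file"
  if [ "$n" -eq 0 ]; then
    echo 'echo benign'     # request 1: the copy that gets hashed
  else
    echo 'echo malicious'  # request 2: the copy that gets executed
  fi
}

# The flawed "verified" pattern: hash one copy, execute another.
expected=$(fetch | sha256sum)   # fetch #1: hash of the benign copy
actual_output=$(fetch | sh)     # fetch #2: a different script runs
echo "hashed copy digest:  $expected"
echo "executed copy says:  $actual_output"
rm -f "$count_file"
```

The hash check passes against data the user never runs, and the code the user runs was never hashed.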

I think the lesson of this small aspect of the "zi" tale is that one should strive to have a single source of truth (a single copy of the data served at the URL), and that in security contexts one needs to be very precise about exactly which guarantees have been established for which data at which point in time: it is surprisingly easy to implicitly add an assumption like "GET requests returning 200 OK behave like pure functions".
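The single-source-of-truth version is to download once to a local file, verify that file, and execute exactly those bytes. A minimal sketch (the download is faked with `printf`, and a real deployment would hard-code the pinned hash rather than compute it):

```shell
#!/bin/sh
# Sketch of the fix: fetch once, verify the saved copy, run only what
# was verified. Stand-ins: printf replaces `curl -fsSL "$URL" -o "$tmp"`,
# and $pinned would normally be a hard-coded known-good digest.
set -eu
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT

printf 'echo hello\n' > "$tmp"                # fake single download
pinned=$(sha256sum "$tmp" | cut -d' ' -f1)    # hard-coded in real life

actual=$(sha256sum "$tmp" | cut -d' ' -f1)
if [ "$actual" = "$pinned" ]; then
  sh "$tmp"   # executes exactly the bytes that were hashed
else
  echo "hash mismatch, refusing to run" >&2
  exit 1
fi
```

With one local copy, the server cannot swap the payload between verification and execution; a mismatch simply aborts.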

[1]: Though this might just be me piling on the mockery of their project, for my own amusement and schadenfreude.

[2]: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b..., alternatively https://web.archive.org/web/20240406132938/https://www.idont..., discussed here e.g. https://news.ycombinator.com/item?id=11532599 (122 comments)

[3]: I am not sure whether zsh behaves like bash in this case, as in: does zsh read only part of its input before it starts executing commands?
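The property that the server-side detection trick relies on is easy to observe: a shell reading a pipe executes commands as they arrive, before the producer has finished writing. A small demonstration (using `sh`; substituting `zsh` would answer the question above on a given system):

```shell
#!/bin/sh
# Demo: the reading shell runs the first command while the producer is
# still sleeping, so the recorded timestamp predates the end of the
# stream by roughly the sleep duration.
marker=$(mktemp)
{
  echo "date +%s > $marker"  # executed immediately by the reading shell
  sleep 2                    # producer (the "server") stalls the stream
  echo 'echo done'
} | sh
finished=$(date +%s)
started=$(cat "$marker")
echo "gap: $((finished - started))s"
rm -f "$marker"
```

A gap of about two seconds shows incremental execution; a gap of zero would mean the shell buffered the whole script first.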
