item 21490151

Curl to shell isn't so bad

266 points | stargrave | 6 years ago | arp242.net

193 comments

[+] yoavm|6 years ago|reply
Not so bad compared to what? Yeah, compared to downloading a tar file from the website and running ./configure, make etc - right, it's probably quite a similar risk. But who does that?

Every decent Linux distro has a package manager that covers 99% of the software you want to install, and compared to an apt-get install, pacman -S, yum install and so on - running a script off some website is way more risky. My package manager verifies the checksum of every file it gets to make sure my mirror wasn't tampered with, and it works regardless of the state of some random software's website. If I have to choose between software that's packaged for my package manager and software I have to install with a script - I'll always choose the package manager. And we didn't even start to talk about updates - as if that isn't a security concern.

The reason we should discourage people from running scripts off the internet is that it would be much better if that software were just packaged properly.
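The mirror-tampering check described above can be reproduced by hand with sha256sum. A minimal sketch (the file names are hypothetical, and real package managers check against checksums shipped in signed repository metadata, not a file sitting on the same mirror):

```shell
# A file standing in for a downloaded package (hypothetical name).
printf 'package contents\n' > pkg.tar.gz

# The distro publishes this checksum out-of-band; the mirror only hosts the file.
sha256sum pkg.tar.gz > pkg.tar.gz.sha256

# Verification succeeds only if the bytes are untouched...
sha256sum -c pkg.tar.gz.sha256

# ...and fails loudly if the mirror tampered with the file.
printf 'tampered contents\n' > pkg.tar.gz
sha256sum -c pkg.tar.gz.sha256 || echo "mirror gave us a bad file"
```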

[+] hannob|6 years ago|reply
> Every decent Linux distro has a package manager that covers 99% of the software you want to install

I wish this were true, but plenty of experience with Linux usage tells me that not having something packaged is a very common occurrence.

Though of course this can be improved: more people should help work on their favorite Linux distro so that more software gets packaged. And upstreams should try harder to collaborate with distros, which unfortunately they rarely do.

[+] brianpgordon|6 years ago|reply
It seems like you're assuming that there's someone vetting these packages. For enterprise distros like Red Hat that's certainly true. Community package maintainers in, for example, the Debian project provide some safety as well. But there are plenty of package managers where that's just not the case. Homebrew, for instance, pulls the program down directly from upstream and installs it - exactly the same as downloading a tar file from the developer's website over HTTPS. Same with npm. Some package managers, like Maven and NuGet, rehost artifacts, but if the project owner is malicious or compromised that won't help - so the risk profile is again basically the same as downloading a tar file from the website.
[+] coldtea|6 years ago|reply
>Not so bad comparing to what?

Compared to downloading a binary, and ten other similar methods people use.

>Yeah, compared to downloading a tar file from the website and running ./configure, make etc - right, it's probably quite a similar risk. But who does that?

Millions of people?

And even more just download binaries off of websites...

[+] ben509|6 years ago|reply
There's still an issue with package managers requiring arbitrary shell commands to be run, often with sudo. From Docker's install instructions[1], there are steps like these:

    Install packages to allow apt to use a repository over HTTPS...
    Add Docker’s official GPG key...
    Use the following command to set up the stable repository...
Once you've done that, then it's just apt-get install.

> And we didn't even start to talk about updates - as if that isn't a security concern.

I think that's the worst part of it. Some software will nag you about updates, yarn and pipenv for example, but it's far more reliable to have one system that keeps everything up to date.

[1]: https://docs.docker.com/install/linux/docker-ce/ubuntu/
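For reference, the quoted steps boiled down to commands along these lines (a sketch from memory of Docker's Ubuntu instructions; the package set, key handling, and repository line may have changed since, so treat the document at [1] as authoritative):

```shell
# Allow apt to fetch over HTTPS.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg

# Adding the GPG key is itself a curl piped into a privileged command.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Register the stable repository.
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Only after all that does it become a plain package install.
sudo apt-get update && sudo apt-get install -y docker-ce
```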

[+] LIV2|6 years ago|reply
I disagree with some of this, e.g. pastejacking.

Plenty of software projects put more care and focus into their software than into their website. If you're running a vulnerable version of WordPress or whatever CMS, it'd be easy for someone to insert something malicious into the site without being noticed, whereas something that modified your code would show up in git, code reviews, etc.

[+] aasasd|6 years ago|reply
This is pretty much the only real answer to the article. However, if the software itself is distributed via the site, then the same caveat applies, since replacing the release itself is much more enticing. It comes down to either the software being published on GitHub, where a hijacked release might be noticed, and/or the files having signatures that you can somehow trust.
[+] mehrdadn|6 years ago|reply
How is this any different from just downloading a binary from their website and running it? Which people have been doing for ages?
[+] Carpetsmoker|6 years ago|reply
Pastejacking should be mitigated if you use zsh, as it never runs pasted commands automatically. From a quick test, it seems that recent(?) versions of bash also implement this feature and have it enabled by default. I don't know about fish or other shells.
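The bash/readline side of this is bracketed-paste mode; if your bash is older and doesn't enable it by default, a single readline setting turns it on (assuming bash >= 4.4 / readline 7.0):

```
# In ~/.inputrc: treat pasted text as one literal block, so a newline in the
# paste does not immediately execute the command.
set enable-bracketed-paste on
```

With this enabled, a pasted multi-line snippet sits highlighted on the command line until you press Enter yourself.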
[+] avaloneon|6 years ago|reply
I'm surprised that no one has yet mentioned that piping curl to bash can be detected by the server (previous discussion at https://news.ycombinator.com/item?id=17636032). This allows an attacker to serve different code when the script is being piped to bash rather than saved to disk.

IMHO, "curl to shell" is uniquely dangerous, since all the other installation vectors mentioned don't support the bait-and-switch.
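The mechanics behind the detection are easy to see locally: the shell executes a piped script as it streams in, so a server can observe whether its output is being consumed at "execution speed". A minimal sketch, with a subshell standing in for a slow server:

```shell
# sh runs the first command a full second before the second one even arrives.
# A real server can exploit this: stall mid-response, and the read pattern
# reveals whether the client is saving to disk or piping into a shell.
(
  echo 'echo first'
  sleep 1
  echo 'echo second'
) | sh
```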

[+] halter73|6 years ago|reply
To me, this seems like only a slightly more advanced version of sending malicious payloads just to curl user agents, not something uniquely dangerous.

If I were already using curl to predownload and audit the script, I'd probably just execute the copy I had already downloaded, which would be safe. Most of the people piping to bash directly do no auditing at all because they trust the source. If you're going to put a malicious payload in a script, you don't have to be that tricky about it.

Most people wouldn't know anything was up until someone else discovered the attack and started raising a fuss on social media. I don't think serving the malicious script only to people who pipe it to bash (or really, to anyone who downloads it slowly for any reason) would stop everyone from finding out. It would just make the malicious script more notable when found.

[+] valleyer|6 years ago|reply
You might be interested to read the section titled "User-Agent based attacks" of the linked article.
[+] tastroder|6 years ago|reply
An unmodified curl invocation doesn't require weird timing-based attacks; it sends an identifiable User-Agent header the server can use (and which the article already addresses).
[+] lawl|6 years ago|reply
> Not knowing what the script is going to do.

Yep, this is why I hate piping curl to sh. I much prefer how e.g. Go does it:

It tells you to just run

    tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
It's not that I don't trust the installer script not to install malware. I don't trust the installer script not to crap all over my system.
[+] lokedhs|6 years ago|reply
Try using Qubes OS. It will allow you to run such scripts without having to worry about your system being screwed up.

Note that Qubes has some drawbacks, the main one being that it doesn't support GPUs, so not everybody is in a position to use it.

[+] andreareina|6 years ago|reply
My experience is that software that installs via curl|bash tends to ignore my preferences as expressed via $PREFIX/DESTDIR, $XDG_{CACHE,CONFIG,DATA}_HOME, etc. It'll install who-knows-where and probably leave dotfiles all over my home directory.

Maybe curl|bash is functionally equivalent to git clone && ./configure && make && make install, but my bet is on the one providing a standard install flow being a better guest on my system.
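The convention being defended here is that DESTDIR and PREFIX compose, so the user controls where files land. A toy Makefile (entirely hypothetical) showing the mechanic:

```shell
# Write a minimal Makefile whose install target honours DESTDIR and PREFIX.
printf 'PREFIX ?= /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\ttouch $(DESTDIR)$(PREFIX)/bin/mytool\n' > Makefile

# Stage the install into ./stage with an /opt prefix instead of /usr/local.
make install DESTDIR="$PWD/stage" PREFIX=/opt

ls stage/opt/bin    # the file landed exactly where we asked
```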

[+] manojlds|6 years ago|reply
The curl|bash might actually clone a repo and build it. Your concern is a different one.
[+] lokedhs|6 years ago|reply
The points raised in the article are correct, and I'm much more concerned with people's willingness to run arbitrary software on their primary computers in general than with the specific case of piping to sh. I think piping to sh just emphasises how insecure the entire practice is, and arguing against that is analogous to closing your eyes to protect yourself from the attacking tiger.

The only system I've worked with that helps you truly deal with this is Qubes OS. Perhaps Fedora Silverblue will achieve this as well, once it comes out of beta.

[+] sjy|6 years ago|reply
Has running a curl-to-bash command found during normal user-initiated web browsing ever resulted in a malware infection? Even anecdotal evidence would be valuable at this point.
[+] m0xte|6 years ago|reply
A close anecdote. I posted one on IRC about 15 years ago, before this was even a thing, which called home by running curl against an endpoint I controlled. The sales pitch for the script was setting up vim properly, which it did do. I had 12 people run the script, and 80% of the downloads executed instantly. No one read it first or downloaded it and ran it later: the request count matched the callback count exactly, and there wasn't more than a couple of seconds between the request and the callback.

After the fact I realised I should have included whether the account was root or not in the callback!

Alas, all it takes is some trust and a sales pitch and someone will run it. I didn't think of the security consequences until after I had done it.

[+] shakna|6 years ago|reply
Not intentionally malware.

But the OPAM install [0] deleted my $PATH, and it took me a while to work out how to fix that one. They've since fixed what allowed the problem to happen [1]; well, it was a combination of that (normal shell problems) and a power outage that killed the installer just before completion (in which case the script may just think it completed).

But I'm sure similar catastrophic side-effects can occur in other install scripts out there.

[0] https://opam.ocaml.org/doc/Install.html

[1] https://github.com/ocaml/opam/issues/2165

[+] blablabla123|6 years ago|reply
It's also already more than a decade ago, but a friend of mine wanted to format a USB stick, and there was a tutorial online for how to do it. I'm not sure if he copy&pasted the command or typed it himself, but anybody who has used dd a lot knows what happened: he irreversibly damaged his root filesystem by pointing dd at the wrong disk. I think repairing damaged filesystems with the Norton suite or the like was never a thing on Linux... ;)

Long story short, even when typing well-intended shell commands, you can damage your system. (Even on Windows or macOS!) Directly piping curl into bash shows a lot of trust. It's amazing how well-intended the web is - must be at least 99.999999%.

That said, I cannot count how many times blindly following some tutorial or some shell-based installer made my carefully crafted *nix installation a bit worse. Nonetheless, most of the adware/spyware/malware I got came through commercial download websites, I think.

[+] Arnt|6 years ago|reply
Not as far as I know, but I have heard about people pasting the Wrong Thing into a root shell.

In a way it's a casting error. A type safety violation. You paste text into a privileged shell and coerce it to be sh, and when it goes wrong the sh input is rich in < and > characters.

Friends of mine have mentioned at least a) people accidentally pasting much more than the intended line into sh because they selected more than intended, and b) sites that silently modify the cut buffer to add some "pasted from … blah … like us on facebook" or somesuch - I forget the details. The person who wrote the page intended one line to be castable to sh; another person who worked on the site added the script that transformed the cut/paste without realising that.

[+] mceachen|6 years ago|reply
I've had a server become unbootable after applying a curl|sh (and I actually wget'ed it, skimmed the script, then executed it), and it errored out. I'd just taken a backup, so I didn't do much forensics before I restored.

I know, badware ≠ malware

[+] tannhaeuser|6 years ago|reply
Yeah, I asked this question on SO - how to responsibly publish a script - but got no response, and even earned a sarcastic "Tumbleweed" badge. My concern was that the script could easily be hosted elsewhere and we'd have multiple versions with potentially malicious modifications flying around. In the absence of alternatives, curl-bashing isn't so bad after all, because it promotes a canonical download location on a domain/site you control, even if I hated it initially as a long-term Unix user.
[+] eadmund|6 years ago|reply
> There is no fundamental difference between curl .. | sh versus cloning a repo and building it from source.

Not true: when you clone a repo with signed commits, you have forensic evidence that the repo signer provided the code you ran, while when you use curl you have … just the code itself.

That's not a lot, but it's not nothing.

[+] akerl_|6 years ago|reply
How many repos are there that actually sign commits, and of those, how many users are doing validation that the signer of their local checkout’s commits is actually the key they expected?

The line you’ve quoted doesn’t say that there’s no fundamental difference between curl | sh and cloning a repo with signed commits, and I think it’s a stretch to think signed commits have enough usage among devs / users to make them a viable option.

[+] kijin|6 years ago|reply
I hate install scripts, period. They feel so Windows-ish. Just distribute a .deb, .rpm, .snap, homebrew package, npm package, or whatever is the most appropriate for your software. All the scripting you need to do should be done inside of the regular package installation process, and even that should be kept to a minimum.

The only software that has any right to rely on an ad-hoc install script on a Unix-like system is the package manager itself. It's awful enough that I have to do apt update and npm update separately. Please don't add even more ways to pollute my system.

[+] Boulth|6 years ago|reply
The problem with deb, rpm, etc. is that you need to add instructions for every supported system one by one. Check out this page for reference: https://www.sublimemerge.com/docs/linux_repositories and compare it with a curl URL|sh that can detect the target system and delegate to the appropriate package manager. Much simpler.

The root cause of this, in my opinion, is the lack of a universal packaging format for Linux.

[+] dmitriid|6 years ago|reply
What’s the difference between a curl you blindly pipe into sh and a blind brew install/npm install command?
[+] paxy|6 years ago|reply
The average non-technical user is never going to open up the terminal and run commands. The well-educated technical user is going to be wary of untrusted sites and various forms of attacks (a category I assume the author of this post falls under).

IMO this is good advice for those that fall in the middle of these two categories, i.e. slightly technical people who run into problems and copy-paste solutions from Stack Overflow hoping that something will work.

> you’re not running some random shell script from a random author

This is exactly what is happening in the vast majority of these cases. These users would be wary if linked to an executable or installer, but "hey, just run this simple line of code" sounds like a very appealing solution.

[+] hising|6 years ago|reply
> copy-paste solutions from Stack Overflow

On the other hand, a solution on SO that contained a hidden attack would not gain upvotes, and so wouldn't be surfaced as an option for the person seeking advice there.

[+] jchw|6 years ago|reply
Agreed. If I don't trust the server, or don't have a secure connection to it, it's probably unwise to run any non-trivial code downloaded from it.

Verifying a hash that comes from the same server also doesn't make much sense. Verifying a PGP signature would be a compelling reason not to pipe to shell, and that's really about it.

[+] pacifika|6 years ago|reply
Just because the connection is secure doesn't mean the other end is controlled by a trusted entity.
[+] esotericn|6 years ago|reply
For the most part this is a problem with non-rolling-release distros.

There are very few instances in which I've had to even use an installer on Arch. For many of those cases, the AUR provides a package that verifies the hash of the downloaded file anyway.

I've constantly been frustrated when using Ubuntu because something basic, like 'vim' not being months out of date, requires a PPA.

The 'official' Rust installation method is a curl | sh. Or:

    $ pacman -Q rustup && rustup -V
    rustup 1.20.2-1
    rustup 1.20.2 (2019-10-16)
[+] titanomachy|6 years ago|reply
Rolling-release and fixed-version distros serve different purposes. A fixed-version OS has a set of software packages at specific versions which have been tested together, both by test suites and by the many users using the same version set. Security patches and bugfixes get patched in, but the packaged software doesn't undergo major changes. That's important if you're running a critical production system.

Rolling-release systems are awesome for personal machines where you can handle breaking updates or work around them. Usually I want the latest versions of everything when I'm doing exploratory stuff.

That said, modern software deployment is definitely moving away from "pick an LTS Linux distro and only change your application code", instead we mostly use containers now. A lot of production systems are probably still using the older technique though.

[+] slim|6 years ago|reply
The problem is mainly that the script is executed without leaving a trace. If you downloaded the script and then executed it, you would have something to inspect in case something went wrong.

It's too easy, and people with very scarce knowledge could develop a habit of doing this without asking questions, without even leaving a trace for a senior to inspect when a problem happens.

[+] akerl_|6 years ago|reply
If that's the concern, then the best option is surely `curl | tee $(mktemp) | sh`. If you download a script and then execute it, the script can modify its own contents on disk, so the copy you inspect afterwards may not be what actually ran.
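Expanded, that one-liner looks like the following; the printf here is a local stand-in for the curl download, so the whole thing is runnable without a network:

```shell
# Keep an audit copy of exactly the bytes the shell executed.
trace=$(mktemp)
printf 'echo installed\n' | tee "$trace" | sh

# The copy survives regardless of what the script did.
cat "$trace"
```

Unlike download-then-run, the file in $trace is written as the bytes flow through, so it records what actually reached sh.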
[+] M4v3R|6 years ago|reply
It does leave a trace - the command executed is stored in your shell's history file. So unless a malicious script deletes the history (and it could just as well delete the downloaded copy if you had checked it out from a repo), anyone can immediately see what was executed.
[+] forty|6 years ago|reply
> There is no fundamental difference between curl .. | sh versus cloning a repo and building it from source

I would say it depends. If the commits are signed by a key you know, it's probably better. Even if they aren't, cloning over SSH when you know the host key is also slightly better than downloading over HTTPS, where any (compromised) trusted CA can MITM your connection :) (you could argue that those two use cases are rare in practice, and I would agree with you ;))

[+] BadBadJellyBean|6 years ago|reply
I don't think SSH is more secure. You have to verify the server key to make it secure, and how do you do that? I googled a bit and didn't find an obvious page for the GitHub server keys. And even if I did, I would be back to an HTTPS MITM by a compromised CA.
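For what it's worth, the mechanic for checking a host key is comparing its fingerprint against one published out-of-band. A sketch of the tooling, with a locally generated throwaway key standing in for a server's host key (demo_key is hypothetical):

```shell
# Generate a throwaway key pair standing in for a server's host key.
ssh-keygen -t ed25519 -N '' -f demo_key -q

# Print its fingerprint; for a real server you'd compare this against the
# fingerprint the operator publishes (which, as noted, you still have to
# fetch over some channel you trust).
ssh-keygen -lf demo_key.pub
```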
[+] eavotan|6 years ago|reply
> Not knowing what the script is going to do.

This is more like: not knowing what to do when it doesn't work. And that is always the case until it works, which is just a local phenomenon - I can't expect things that work for me to work for others. So why not write expressive installation documentation with multiple steps instead of one-liners that either work or don't? There is just no in-between.

Take the installation instructions for Syncthing, for example:

    curl -s https://syncthing.net/release-key.txt | sudo apt-key add -

    echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
These two steps are hard to automate if you don't have an interactive shell.

The same goes for the saltstack bootstrap script: it doesn't work equally well on all platforms, which is not a reliable state of affairs. So in the end I'll stick with the normal way of installing things, which is very easy to automate.

[+] Niksko|6 years ago|reply
I ran into this recently at work. I wanted to write a script that you could curl into bash to quickly set up some common tools.

Firstly, I made sure that the script told you what it would do before doing it.

Secondly, my instructions are two lines: curl to a file, then run it through bash. A compromise, but if you mistrust the script, you can inspect it yourself before running it.
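That two-line compromise can be sketched like this, with a local file standing in for the download (the real first line would be a curl -o against your script's URL; the URL and file contents here are hypothetical):

```shell
# Stand-in for: curl -fsSL https://example.invalid/setup.sh -o setup.sh
printf 'echo "setting up tools"\n' > setup.sh

cat setup.sh    # the optional audit step: read what you are about to run
bash setup.sh   # then execute the exact bytes you inspected
```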

[+] e12e|6 years ago|reply
> Either way, it’s not a problem with just pipe-to-shell, it’s a problem with any code you retrieve without TLS.

Well, yes. But the typical alternative is a tarball and a GPG signature - both over an insecure transport, but verifiable (as with TLS and a CA).

Git will typically go over SSH or HTTPS - so, to a certain degree, over a secure channel.

[+] mkup|6 years ago|reply
If curl loses its connection to the source website while downloading the script, the partially downloaded script will be executed anyway. This is the main drawback of the curl-to-shell approach, and the original article misses it entirely.
[+] glic3rinu|6 years ago|reply
A common solution is to wrap all the code in a function. That way, nothing is executed until the last line, the one that calls the function, has arrived.

  main() {
     # all code goes here
  }
  main "$@"
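The failure mode this guards against can be simulated by truncating such a wrapped script mid-body; the shell hits end-of-input inside the function definition, reports a syntax error, and never runs the body:

```shell
# Feed sh only the first half of a function-wrapped installer, as if the
# connection had dropped. The command inside the body is never executed.
printf 'main() {\n  echo DANGEROUS\n' | sh
echo "exit status: $?"   # non-zero: sh rejected the truncated input
```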
[+] scbrg|6 years ago|reply
No. It's addressed in the second-to-last bullet, "Partial content".
[+] tonyedgecombe|6 years ago|reply
Is that really much of a problem? I can't remember the last time I had a download fail partway through, and downloads are usually much bigger than a bootstrapping script.