Why curl defaults to stdout

166 points | akerl_ | 11 years ago | daniel.haxx.se

96 comments

[+] jrochkind1|11 years ago|reply
I actually want printing to stdout more often than I want printing to file, it is more often what I need. I guess different people have different use cases.

I will admit that rather than learn the right command to have curl print to a file -- when I _do_ want to write to a file, I use wget (and appreciate its default progress bar; there's probably some way to make curl do that too, but I've never learned it either).

When I want to write to stdout, I reach for curl, which is most of the time. (Also for pretty much any bash script use, I use curl; even if I want to write to a file in a bash script, I just use `>` or look up the curl arg.)

It does seem odd that I use two different tools, with largely different and incompatible option flags -- rather than just learning the flags to make curl write to a file and/or to make wget write to stdout. I can't entirely explain it, but I know I'm not alone in using both, and choosing from the toolbox based on some of their default behaviors, even though with the right args they can probably both do all the same things. Heck, in the OP the curl author says they use wget too -- now I'm curious if it's for something that the author knows curl doesn't do, or just something the author knows wget will do more easily!

To me, they're like different tools focused on different use cases, and I usually have a feel for which is the right one for the job. Although it's kind of subtle, and some of my 'feel' may be just habit or superstition! But as an example, recently I needed to download a page and all its referenced assets (kind of like browsers will do with a GUI; something I only very rarely have needed to do), and I thought "I bet wget has a way to do this easily", and looked at the man page and it did, and I have no idea if curl can do that too but I reached for wget and was not disappointed.

[+] mikepurvis|11 years ago|reply
I think the biggest nuisance with this strategy is that neither tool is included by default on the machines I'm usually working with: wget is missing from my Mac, and curl is missing from my Ubuntu servers.

Both can be quickly rectified, but it's still a pretty big pain.

[+] ams6110|11 years ago|reply
Yes, the stdout default is great for working with/testing REST APIs on the command line, for example.
[+] shadytrees|11 years ago|reply
I myself have an alias to get wget-like behavior (still not as verbose, but great for downloading binaries): alias w='curl -#O'

I'm afraid I side with the left-handedness argument. Years of muscle memory make me want to type "w".

[+] nanoscopic|11 years ago|reply
"to have curl print to file -- when I _do_ want to write to a file"

You have it reversed. Wget will output to the screen/pipe if you output to the "file" --. Curl does not. "curl http://www.google.com/index.html -o --" does not output to the screen. It creates a file named "--".
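
A small aside on the naming quirk above: with wget, a single dash ("-O -") is the documented way to stream to stdout, whereas curl takes the token after -o literally, so "-o --" produces a file actually named "--". A local demo (no network, no curl needed) of why such names are awkward to work with:

```shell
# With wget one would stream to stdout like this (not run here):
#   wget -qO- https://example.com/
# Below: working with a file literally named "--", as curl would create.
: > ./--            # create a file named "--" via redirection
ls -- --            # "--" ends option parsing, so ls can list it
rm ./--             # the "./" prefix sidesteps option parsing entirely
```

The `./` prefix trick works with any tool, even ones that don't honor `--` as an end-of-options marker.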

[+] ainiriand|11 years ago|reply
And don't forget all the options for TLS certificates that are available with cURL!
[+] NickPollard|11 years ago|reply
I think his argument is valid, and thinking about curl as an analog to cat makes a lot of sense. Pipes are a powerful feature and it's good to support them so nicely.

However, just as curl (in standard usage) is an analog to cat, I feel that wget (in standard usage) is an analog to cp, and whilst I certainly can copy files by doing 'cat a > b', semantically cp makes more sense.

Most of the time, if I'm using curl or wget, I want to cp, not cat. I always get confused by curl, never being able to remember the command to just cp the file locally, so I tend to default to wget because it's easier to remember.

[+] a3n|11 years ago|reply
Ah, I was trying to figure out how to express my view of wget and curl as different tools, and you've done exactly that, thanks.

Yes, I think of wget more like cp, and I think of curl more like cat, and there are times when I want exactly curl as cat, as opposed to wget as cp.

Different tools, I like them both, and I use them differently. And, I only scratch the surface of capability for both tools.

[+] needusername|11 years ago|reply
Pipes (like much of UNIX) are stuck in ASCII. The response encoding has no relationship to your UNIX locale. The response would know the encoding but can't pass it on to the pipe, because the pipe is a concept from the '70s. In the end, all of this only works as long as everybody sticks to ASCII.
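
To illustrate the point: the charset travels in the HTTP Content-Type header, which the receiving end of a pipe never sees, so any conversion has to be done by hand. A minimal offline demo (the curl line is a stand-in for checking a live server's declared charset):

```shell
# The byte 0xE9 (octal 351) is "é" in ISO-8859-1; iconv converts it to
# the UTF-8 form a modern terminal expects.
printf 'caf\351\n' | iconv -f ISO-8859-1 -t UTF-8
# Against a live server, one might first inspect the declared charset:
#   curl -sI https://example.com/ | grep -i '^content-type'
```

The iconv invocation prints "café"; without the conversion, a UTF-8 terminal would render the raw Latin-1 byte as garbage.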
[+] Touche|11 years ago|reply
I find that weird. Why are you wanting to copy stuff from http all of the time? I only ever rarely do that because I want to examine something like an API response further for an extended period of time. Usually I just want to see the response once, or view the headers.
[+] viraptor|11 years ago|reply
I think he may be missing what people mean by "it's easier without an argument". It's not just "only one option" - what I see in reality quite often is: "curl http://...", screen is filled with garbage, ctrl-c, ctrl-c, ctrl-c, damn I'm on a remote host and ssh needs to catch up, ctrl-c, "cur...", actually terminal is broken and I'm writing garbage now, "reset", "wget http://...".

I'm not saying he should change it. But if he thinks it's about typing less... he doesn't seem to realise how his users behave.

[+] yason|11 years ago|reply
That's the reason I use wget, and only when necessary do I switch to curl. It's not that I don't know about that nasty behaviour (even though I sometimes do forget), but it usually goes like this:

    $ curl -o news.ycombinator.com
    curl: no URL specified!
    curl: try 'curl --help' or 'curl --manual' for more information
    $ curl -O news.ycombinator.com
    curl: Remote file name has no length!
    curl: try 'curl --help' or 'curl --manual' for more information
    $ curl -O foo news.ycombinator.com
    curl: Remote file name has no length!
    curl: try 'curl --help' or 'curl --manual' for more information
    <html>
    <head><title>301 Moved Permanently</title></head>
    <body bgcolor="white">
    <center><h1>301 Moved Permanently</h1></center>
    <hr><center>nginx</center>
    </body>
    </html>
    $ wget news.ycombinator.com
    --2014-11-17 14:27:18--  http://news.ycombinator.com/
    Resolving news.ycombinator.com (news.ycombinator.com)... 198.41.191.47, 198.41.190.47
    Connecting to news.ycombinator.com (news.ycombinator.com)|198.41.191.47|:80... connected.
    HTTP request sent, awaiting response... 301 Moved Permanently
    Location: https://news.ycombinator.com/ [following]
    --2014-11-17 14:27:19--  https://news.ycombinator.com/
    Connecting to news.ycombinator.com (news.ycombinator.com)|198.41.191.47|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [text/html]
    Saving to: ‘index.html’

    [ <=>                                                                                                                                  ] 22,353      --.-K/s   in 0.07s

    2014-11-17 14:27:19 (331 KB/s) - ‘index.html’ saved [22353]

With wget, I can just throw any URL at it and it'll probably do the right thing with the least amount of surprises. "Grab a file" is my use case 99.99% of the time; "print a file" is the remaining 0.01%.
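
For comparison, a curl command line that matches what the wget run above does: -L follows the 301 redirect, and -o supplies the local name that -O cannot derive from a URL with no path component. Wrapped here in a hypothetical helper so the flags stay in one place:

```shell
# fetch is a made-up wrapper, not a curl feature: -f fail on HTTP errors,
# -s silent, -S still show errors, -L follow redirects, -o output file.
fetch() { curl -fsSL -o "$2" "$1"; }
# usage: fetch https://news.ycombinator.com/ index.html
```
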
[+] skywhopper|11 years ago|reply
Which users? I only ever use curl to print stuff to stdout, but I use it for that a lot. When I want to download a file as lazily as possible, I use wget.

If you don't use the tools often enough to remember how they work, I don't think your needs are going to come up high on the developer's priority list.

[+] duaneb|11 years ago|reply
Most of the time I'm using curl, I want to see the output. Otherwise I start it with >output.file.
[+] kybernetyk|11 years ago|reply
>I'm not saying he should change it.

I'd say that it's too late for a change. Changing the default behaviour would break way too many existing scripts and cronjobs.

[+] shapeshed|11 years ago|reply
Do one thing and do it well.

IMHO cURL is the best tool for interacting with HTTP and wget is the best tool for downloading files.

[+] digi_owl|11 years ago|reply
Pretty much. I keep seeing curl being used as the "back end" of web browsers, fueling the likes of WebKit.

wget, on the other hand, ends up within shell scripts and similar (I have before me a distro where the package manager is made of shell scripts, coreutils and wget).

[+] nkozyra|11 years ago|reply
This is a good way to put it, especially since people tend to use them analogously.
[+] qwerta|11 years ago|reply
+1.

curl is like a Swiss Army knife and wget is a fixed-blade knife ;-)

[+] rachelbythebay|11 years ago|reply
This "-O" seemed dubious to me so I took a look. Turns out... yep, it's not as simple as that.

"curl -O foo" is not the same as "wget foo". wget will rename the incoming file to as to not overwrite something. curl will trash whatever might be there, and it's going to use the name supplied by the server. It might overwrite anything in your current working directory.

Try it and see.
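
A local sketch (no network) of wget's policy: probe for a free name with a numeric suffix instead of clobbering, which is roughly what wget's index.html, index.html.1, ... naming does. save_noclobber is a made-up helper, not part of either tool; curl -O, by contrast, simply writes to the remote name every time.

```shell
save_noclobber() {
  name=$1; n=1
  # keep appending .1, .2, ... until we find a name not already taken
  while [ -e "$name" ]; do name="$1.$n"; n=$((n + 1)); done
  printf 'payload\n' > "$name"
  echo "$name"
}
save_noclobber file.txt     # first run saves file.txt
save_noclobber file.txt     # second run saves file.txt.1
rm -f file.txt file.txt.1   # clean up the demo files
```

(Recent curl releases have grown an opt-in --no-clobber flag, but the default remains overwrite.)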

[+] bkirwi|11 years ago|reply
According to the manpage, the filename depends only on the supplied URL:

  Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.)
  The remote file name to use for saving is extracted from the given URL, nothing else.

wget is a hugely useful tool for making local copies of websites and similar things -- the no-clobber rule is useful there, and the built-in crawling and resource fetching is fantastic. OTOH, for most things, I actually like curl's 'dumb' behaviour; it seems to match up better with the rest of the UNIX ecosystem.
[+] userbinator|11 years ago|reply
I think of curl as a somewhat more intelligent version of netcat that doesn't require me to do the protocol communication manually, so outputting to stdout makes great sense.
[+] wyldfire|11 years ago|reply
It would be really nice if curl took the content-type and results from isatty(STDOUT_FILENO) into consideration when deciding whether to spew to stdout.
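
The isatty idea can be sketched in shell, where "[ -t 1 ]" is true exactly when stdout is a terminal. safecurl is a hypothetical wrapper, not a real curl feature:

```shell
safecurl() {
  if [ -t 1 ]; then
    # stdout is an interactive terminal: refuse to spew the body at it
    echo "stdout is a terminal; pipe or redirect the output" >&2
    return 1
  fi
  curl -sS "$@"   # stdout is a pipe or file: behave exactly like curl
}
# safecurl https://example.com/ | less    # fine: stdout is a pipe
```
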
[+] acqq|11 years ago|reply
Yes, I can't imagine there are actual scripts dumping binary data to the terminal, and it would help everybody whose terminal would otherwise experience what viraptor nicely describes:

""curl http://...", screen is filled with garbage, ctrl-c, ctrl-c, ctrl-c, damn I'm on a remote host and ssh needs to catch up, ctrl-c, "cur...", actually terminal is broken and I'm writing garbage now, "reset", "wget http://..."."

I admit it happened to me more than once.

[+] wtetzner|11 years ago|reply
The thing is, you might want to pipe the output to something else instead of saving it to a file. If you check that you're outputting to a terminal, then the terminal behavior is different than the pipe behavior, which might be confusing.

Maybe if it's a terminal print a warning with a simple explanation, or a Y/N prompt?

[+] davidmh|11 years ago|reply
HTTPie is a command line HTTP client, a user-friendly cURL replacement. http://httpie.org
[+] blacksmith_tb|11 years ago|reply
I find it very useful for debugging (and it's in the Ubuntu repos, and can be installed via homebrew on OSX).
[+] 0x0|11 years ago|reply
Chrome dev tools have a super useful "Copy as cURL" right-click menu option in the network panel. Makes it very easy to debug HTTP!
[+] icebraining|11 years ago|reply
Same with Firefox dev tools. I use it all the time.
[+] mobiplayer|11 years ago|reply
We all have some user bias, and in this case it is geared towards seeing cURL as a shell command to download files over HTTP/S.

Luckily, cURL is much more than that, and it is a great and powerful tool for people who work with HTTP. The fact that it writes to stdout makes things easier for people like me who are no gurus :) as it just works as I would expect.

When working with customers with dozens of different sites I like to be able to run a tiny script that leverages Curl to get me the HTTP status code from all the sites quickly. If you're migrating some networking bits this is really useful for a first quick check that everything is in place after the migration.

Also, working with HEAD instead of GET (-I) makes everything cleaner for troubleshooting purposes :)

My default set of flags is -LIkv (follow redirects, only headers, accept invalid cert, verbose output). I also use -H a lot to inject headers.
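
The tiny status-check script described above might look something like this. The curl flags are real (-s silent, -k accept invalid certs, -L follow redirects, -o /dev/null discard the body, -m 10 a ten-second timeout, -w print just the response code); check_sites is a made-up name and the URLs are stand-ins:

```shell
check_sites() {
  for url in "$@"; do
    code=$(curl -skL -o /dev/null -m 10 -w '%{http_code}' "$url")
    printf '%s %s\n' "$code" "$url"
  done
}
# check_sites https://site-a.example https://site-b.example
```

After a migration, a quick scan of the output for anything that isn't 200 (or an expected 301) tells you which sites still need attention.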

[+] eddieroger|11 years ago|reply
Having known both tools for a long time now, I never realized there was a rivalry between them - I just figured they're each used differently. cURL is everywhere, so it's a good default. I use it when I want to see all of the output of a request - headers, response raw, etc. It's my de facto API testing tool. And before I even read the article, I assumed the answer was "Everything is a pipe". It sucks to have to memorize the flags, but it's worthwhile when you're actually debugging the web.
[+] talles|11 years ago|reply
> people who argue that wget is easier to use because you can type it with your left hand only on a qwerty keyboard

Haha, I would never have realized that.

[+] bshimmin|11 years ago|reply
I've worked with multiple people who chose passwords based on whether they could be typed with only one hand. I guess there's a perverse sort of sense in it, if you're really that lazy.
[+] discardorama|11 years ago|reply
The "c" in "curl" stands for "cat". Any unix user knows what cat(1) does. Why the confusion?
[+] wtetzner|11 years ago|reply
I think the confusion is probably that people didn't realize that "c" stood for "cat" in "cURL".
[+] lsiebert|11 years ago|reply
I was recently playing with libcurl (the easiest way I know to interact with a REST API in C), and libcurl's default callback for writing data does this too. It takes a file handle, and if no handle is supplied, it defaults to stdout. It's actually really nice as a default... you can use different handles for the headers vs the data, or use a different callback altogether.

I really, really like libcurl's API (or at least the easy API; I didn't play around with the heavy-duty multi API for simultaneous stuff). It's very clean and simple.

[+] ams6110|11 years ago|reply
I use curl over wget in most cases, just because I learned it first I guess. I use it enough that I rarely make the mistake of not redirecting when I want the output in a file.

The one case where I will reach for wget first is making a static copy of a website. I need to do this sometimes for archival purposes, and though I always need to look up the specific wget options to do this properly, this use case seems to be one where wget is stronger than curl (especially converting links so they work properly in the downloaded copy).
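
A hedged sketch of the wget recipe usually cited for this archival use case; all flags are real wget options. --mirror enables recursion and timestamping, --convert-links rewrites URLs so the saved pages link to each other, --page-requisites pulls in CSS/JS/images, --adjust-extension adds .html where needed, and --no-parent keeps the crawl below the start directory. mirror_site is a made-up wrapper name and the URL is a stand-in:

```shell
mirror_site() {
  wget --mirror --convert-links --page-requisites \
       --adjust-extension --no-parent "$1"
}
# mirror_site https://example.com/docs/
```
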

[+] pbhjpbhj|11 years ago|reply
"cat url", huh, that makes sense.

Why not just alias it ("make a File from URL" -> furl?) if people want to use it with the -O flag set as default?

[+] zkhalique|11 years ago|reply
I find it pretty cool how authors of text-mode UNIX programs are still around. In fact the GNU culture has kind of grown up around that. And yet, to me text-mode stuff is just a part of a much larger distribution, not something to be distributed to so many systems. Oh, how times have changed.
[+] unclesaamm|11 years ago|reply
I am in the opposite camp, where I always try to redirect wget's output to a file. Then I end up with two files. Argh.
[+] geon|11 years ago|reply
> if you type the full commands by hand you’ll use about three keys less to write “wget” instead of “curl -O”

Unless you forgot what the option was since you don't use it multiple times a day.