So far I'm loving Caddy. Super simple to configure, and the automatic management of SSL certificates is magical. For all of that simplicity, though, they are still pretty resistant to properly packaging it up in the repositories. By far the longest part of getting it running is the stupid manual configuration of launch daemons, working directories, and permissions. For software that prides itself on dead-simple management, getting the basics into a package manager should not even be a discussion.
With that said, I 100% moved all of my sites from Nginx to Caddy once I saw that the config works out to around three lines per site.
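For a sense of scale, a complete per-site definition in a 0.x-era Caddyfile really is about that small; the domain and path here are made up, and HTTPS needs no extra lines:

```
example.com {
    root /var/www/example.com
    gzip
}
```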
Of the 5 announcements we made yesterday, one was the milestones for 0.11, 0.12, and 1.0 (their details will be published to GitHub soon). One of the main purposes of a stable 1.0 is so that we can find better ways to distribute Caddy for people who want to use package managers. I know holding out isn't a popular decision, but we really want to make sure we get it right, given Caddy's unique needs with regard to plugins.
>the basics of getting it into a package manager should not even be a discussion.
Last time I looked into it, their problem was that Caddy plugins must be configured at compile-time, because runtime plugins seem to be hard to do with Golang (besides RPC). Because there's a handful of Caddy plugins, they'd either have to provide a slim package with no plugins, or ship with all or only some blessed plugins. Obviously all approaches have drawbacks … they discussed the option of shipping their own package manager for Caddy plugins, too (shudder).
Are you sure it doesn't even depend on libc? Most Go programs do end up depending on libc by default; they have to be specially compiled to avoid the libc dep (IIRC, anything that depends on net or net/http or something of that sort).
The MitM detection is interesting. How long until these appliances start altering the HTTP request's User-Agent header to avoid detection (or adding logic such that the handshaking process mimics that of the browser), though?
To me, HPKP with preloading seems like a more reliable approach (and browsers shouldn't allow this to be overridden [1]).
[1] If this breaks corporate MitM attacks, great. This practice always struck me as incredibly invasive and I'd personally be concerned if an employer was doing this. I think some traffic data should be able to be logged and timestamped in case of abuse (e.g. SNI info, TCP connections, DNS lookups) but I don't see a need to intercept application data - and honestly, it seems like the vast majority of these appliances hinder security.
> How long until these appliances start altering the HTTP request's User-Agent header to avoid detection (or adding logic such that the handshaking process mimics that of the browser), though?
A TLS proxy that does this is meddling with the application layer and is thus broken (IMO). A TLS proxy that doesn't want to be detected shouldn't mess with the application layer.
The best thing a TLS proxy can do (other than be turned off) is preserve the characteristics of the original TLS connection exactly.
Indeed, Caddy's implementation is mainly for detecting TLS proxies that do a lousy job at it.
I'd also be concerned if an employer was doing it, unless there was a good reason - and I can think of a few of those. For example, companies that deal with sensitive health, financial or legal information. They may need assurances (or at least a paper trail) that's a lot stronger than "we don't MITM and we trust our employees to do the right thing".
Yep, I use the similar fabio (from eBay), which pulls my LE certs from hashicorp/vault; much better than leaving them on a filesystem. I can't imagine using a proxy/ingress/LB without automatic reactive routing these days. All I have to tell fabio is which interface to listen on.
Awesome product; I'm using it at work and it's incredibly fast. Configuration is easy and the documentation is clear. The systemd unit provided in the repo is insane: all the latest security and isolation features are in it, a great inspiration for writing good systemd units.
I would just like to bring up how much I admire the way they want to profit off Caddy[1][2]: sponsorships and focused development, followed by "remember, Caddy is open source". My only feedback would be to introduce a "$50/mo; my bank account is not big enough" tier for people who want to endorse their model/software.
Look at nginx, where new functionality is hidden behind a paywall. I don't want to deny them [nginx devs and sales people] their well-deserved money, but it pushes me away.
Thanks for the feedback! We are mindful of people who want to contribute funds but aren't able to because of the price. We get a lot of requests for stickers and T-shirts and even jackets/sweaters; so we might go that route for individuals being able to contribute.
Surprised there isn't more talk about the MITM detection here. Anyone got a TL;DR of how this is supposed to work? I'm going to read the full doc when I get home, but I'd be interested in hearing people's opinions on how accurate it is likely to be.
The authors of the original paper [1] identified that the set of client cipher suites advertised by each browser can be used to fingerprint and identify a browser.
Caddy records the cipher suites advertised by the client during the TLS handshake and then examines the client's User-Agent header. Using the fingerprinting techniques from the paper, Caddy determines whether the advertised user agent is consistent with the browser it inferred from the client cipher suites.
TLS interception proxies establish their own TLS connection to the server. Depending on what underlying TLS library the proxy uses, it also has its own unique fingerprint. When the TLS proxy forwards the user's request, Caddy detects the mismatch and flags it as a MITM.
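The mismatch check described above can be sketched as follows. The fingerprint table and User-Agent parsing here are fabricated stand-ins; Caddy's real implementation uses browser fingerprint data derived from the paper:

```python
# Sketch of the handshake-vs-User-Agent mismatch check. The cipher-suite
# fingerprints below are made up for illustration, not real browser data.

EXPECTED_SUITES = {
    # browser family -> cipher-suite IDs it is known to advertise (fabricated)
    "firefox": {0x1301, 0x1302, 0xC02B, 0xC02F},
    "chrome":  {0x1301, 0x1303, 0xC02B, 0xCCA9},
}

def browser_from_user_agent(ua):
    """Crude User-Agent classification, enough for this sketch."""
    ua = ua.lower()
    for name in EXPECTED_SUITES:
        if name in ua:
            return name
    return None

def looks_intercepted(ua, advertised_suites):
    """True when the ClientHello doesn't match the browser the UA claims."""
    browser = browser_from_user_agent(ua)
    if browser is None:
        return False  # unknown client: nothing to compare against
    # An interception proxy re-handshakes with its own TLS stack, which
    # typically advertises a different cipher-suite set than the browser.
    return set(advertised_suites) != EXPECTED_SUITES[browser]
```

A genuine browser handshake matches its table entry; a proxy's re-originated handshake generally does not, which is exactly the signal being exploited.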
Anyone with experience using this as a dynamic reverse proxy - I need to proxy certain requests to private container (ports) where the port isn't known until the container is booted, and containers come up and down as users require them.
Depending on what that private container is serving up, an API gateway might work better. Tyk is one that's written in Go. It has a REST API and hot reload, so it should be able to handle your use case of dynamically allocated ports.
You can use service-discovery tools like Consul or etcd. The basic idea is that on container/app boot, you register your app with the service-discovery service (you give it the IP and current port), and it stores information about all of your running apps.
After that, you can use the Consul-nginx integration (it will dynamically regenerate the nginx config on every update and restart it).
Tyk, mentioned here as well, also has Consul service integration and much more.
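The register-on-boot step can be sketched against Consul's standard agent HTTP API; the service name, address, and port below are placeholders:

```python
import json
import urllib.request

def registration_payload(name, address, port):
    """Body for Consul's PUT /v1/agent/service/register endpoint."""
    return {
        "Name": name,
        "ID": "%s-%d" % (name, port),  # unique per instance, since ports are dynamic
        "Address": address,
        "Port": port,
    }

def register_service(name, address, port, consul="http://127.0.0.1:8500"):
    """Tell the local Consul agent about this instance's dynamic port."""
    body = json.dumps(registration_payload(name, address, port)).encode()
    req = urllib.request.Request(
        consul + "/v1/agent/service/register",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

The proxy (or a consul-template process) then watches Consul for changes instead of relying on static upstream config.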
Nginx can proxy to a server specified in a variable. All you have to do is define that variable through Perl or Lua to get the port dynamically from somewhere, like a file.
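A minimal sketch of that nginx pattern, assuming the lua-nginx-module is available (the port-file path is made up):

```nginx
location / {
    # Look up the backend port at request time rather than config time.
    set_by_lua_block $backend_port {
        local f = io.open("/run/myapp/port", "r")  -- hypothetical port file
        if not f then return 8080 end              -- fallback default
        local port = f:read("*l")
        f:close()
        return port
    }
    proxy_pass http://127.0.0.1:$backend_port;
}
```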
Pricing page is pretty bad. Draws your eyes with primary colors and bold fonts to $5000/yr and $9900/yr. I had to swallow the sticker shock and look around the page to see the weird, faded side-bubble telling me it's free.
They mention that 'Default Timeouts' have been disabled, urging users to 'Act according to your threat model!'.
I'm not sure I understand. How is not having these timeouts a security threat? Someone could potentially open up enough HTTP connections to starve others from having the opportunity to do so?
This is true, but slowloris attacks don't require opening that many connections. We've seen one or two instances where buggy (or malicious?) clients were slowlorising Caddy instances, though I think we were too eager to enable timeouts by default.
Random question: is there a way to start Caddy as root so it can bind to port 80 (for example) then change the user so a non-root user can send a `USR1` signal to Caddy to get it to reload the configuration?
Any benefit in using this inside a docker container instead of nginx? No need for SSL or many of the other features I'm seeing listed here since it's all behind an Amazon ELB.
We are using Caddy as a simple reverse proxy in Docker environments. The configuration is a bit simpler than nginx and we love the tiny Docker images we can create (not sure how large an nginx-full installation is).
That being said, we did run into a few issues that forced us to go back to older Caddy versions, like broken websocket support or the timeout issue in 0.9.5. Also, sometimes the documentation is a bit lacking and unclear. DNS resolution seems to be flaky sometimes (we're using alpine-based containers and sometimes Caddy just won't resolve names of other containers, even though a curl inside the container can resolve the names just fine).
So if you've got a working nginx setup, I'd say stick with it. For new projects it's worth checking out Caddy. The issues we ran into occurred early in our development process; they didn't just suddenly happen in production, so once you've tested everything, Caddy just works.
It's a lot easier than nginx + letsencrypt + cron job: there is zero config to get Let's Encrypt up and running and certs renewing automatically.
It supports http2 out of the box and will keep improving as the go net/http library does, performance is good (certainly good enough for 99.9% of the static sites out there, I haven't specifically benchmarked against nginx but response times didn't change much). I'd recommend it as a proxy if you need one, particularly if you need one that handles your tls certs.
Compared to nginx it's just so much simpler and so far I haven't found myself missing a single feature. (In fact, caddy has active health checks, which nginx only supports for its extortionate enterprise fee)
A lot of the other responses have been focusing on how simple Caddy is to get started with - batteries included. We've been using Caddy for a bit over 2 years in production. At the time we had just moved our infrastructure from an assortment of self-written Python scripts to Mesos/Marathon, and we were looking for a solution that would reverse proxy to our app servers without having to rely on DNS. I initially thought about writing a plugin for nginx before I found Caddy; Caddy was a better solution for us because it was far easier to write a Mesos integration for it than to bolt one onto nginx.
I use Nginx in production, but caddy on my dev machine, where I might have dozens of test sites running at any time. Those sites all live on subdomains of a dedicated dev domain and I rarely create more than 2 or 3 per week so I can always just use https with certificates generated on the fly. In Nginx I would need to constantly re-run certbot to add subdomains manually.
I feel like the niche Caddy's trying to fill is for people who don't want to bother with that, or don't want to learn. If you already know how to set up nginx and LE (like you and I do), its only appeal is relatively minor: potentially saving a quarter or half hour.
I made the complete switch to Caddy just because the configuration is simplified and it has features like git webhooks built-in. Setting up BasicAuth on a directory is one line in the config.
And it's fast. No issues so far, been using Caddy for over a year now.
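For reference, the one-line BasicAuth setup in a 0.x Caddyfile looks like this (path and credentials are placeholders):

```
basicauth /secret admin hunter2
```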
See: https://forum.caddyserver.com/t/packaging-caddy/61
Features:
- Easy configuration with Caddyfile
- Automatic HTTPS via Let's Encrypt; Caddy obtains and manages all cryptographic assets for you
- HTTP/2 enabled by default (powered by Go standard library)
- Virtual hosting for hundreds of sites per server instance, including TLS SNI
- Experimental QUIC support for those who like speed
- TLS session ticket key rotation for more secure connections
- Brilliant extensibility so Caddy can be customized for your needs
- Runs anywhere with no external dependencies (not even libc)
Traefik can efficiently proxy HTTP traffic, but it cannot serve files and is not as easy to configure if you're not using Docker or similar.
Caddy can serve files or PHP, but is not as good at proxying HTTP traffic.
https://github.com/mholt/caddy/tree/master/dist/init/linux-s...
[1]: https://caddyserver.com/blog/options-for-businesses
[2]: https://caddyserver.com/pricing
This bit our app during testing.
[1] https://jhalderm.com/pub/papers/interception-ndss17.pdf
http://tyk.io
https://www.youtube.com/watch?v=ZyVA9tuif4s
The way he explains things makes me wish he did tutorials for programming languages.
Due to the nature of the plugin system, you'll have to build Caddy yourself if you add customizations like that.
Running as root: we advise against this. You can still listen on ports below 1024 using setcap, like so: sudo setcap cap_net_bind_service=+ep ./caddy
But seriously, for all "hobby" projects I've started recently, and even some production services now, I go straight to caddy.
My config consists of a one-line global config file, and 90% of the individual services use the same minimal config.
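The poster's actual snippets didn't survive the page formatting; purely as an illustration of the shape (names and ports invented), a one-line global Caddyfile plus a typical per-service block could look like:

```
# Caddyfile (global, one line)
import /etc/caddy/sites/*

# /etc/caddy/sites/app (a typical per-service file)
app.example.com {
    proxy / localhost:8080 {
        transparent
    }
}
```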