
Introducing OpenBSD's new httpd [pdf]

197 points | fcambus | 11 years ago | openbsd.org

154 comments

[+] xiaq|11 years ago|reply
So... Apache was removed from base on Mar 14 2014 in favor of nginx, and nginx on Aug 27 2014 in favor of OpenBSD httpd.

For sysadmins who closely follow the "recommended" way, having to migrate the configurations of the http server twice within half a year must have been a frustrating experience.

Also, I wonder what "removal from base" means exactly - can you still install them (the OpenBSD-patched versions) from the ports collection or something like that?

[+] 4ad|11 years ago|reply
Apache was removed from base in 2014, but nginx was added to base in 2011.

Sysadmins had ~2 years to migrate from Apache to nginx (and if they didn't want to migrate, they could continue to use Apache from ports).

For nginx, they only had 6 months to migrate (though again, they can still use nginx from ports).

So no, they didn't have to migrate the http configuration twice in half a year; it was more like twice in 4 years, or, considering that OpenBSD has only ever included 3 web servers in base, more like twice in 16 years.

[+] sarnowski|11 years ago|reply
That's correct. They are still available in the ports. If you were operating a complex setup, you probably already used nginx from ports before.

Not being in base means that it doesn't get the same security attention, as it's not officially part of OpenBSD anymore.

[+] hobarrera|11 years ago|reply
Those sysadmins could still use nginx from ports. That's what I did, and will continue to do until httpd gets SNI support. Nothing broke for anyone during this time.
[+] jacquesm|11 years ago|reply
I've been going through the code for the last half hour and I really hope this isn't representative of what the OpenBSD group considers to be defensive C programming.

Stack allocated buffers, questionable logic and a generally terrible style as well as a complete lack of comments.

Don't take my word for it, see for yourself:

https://github.com/reyk/httpd/blob/master/httpd/server.c

The "new" is a bit off too, the copyright runs 2006-2015.

[+] nly|11 years ago|reply
Don't forget yet another hand-written HTTP parser, and the complete lack of a test suite.

On a more flamebaity note, I don't know why you'd even want to write something like this in C. Writing a server for one of the most prevalent network protocols on the Internet in C, in 2015, just seems like masochism. This code reinvents so many wheels for the 1000th time in C code history, it's just tiresome to read. C++ would've reduced and simplified the code substantially and there are a growing number of other fine choices these days.

[+] jedisct1|11 years ago|reply
I don't see anything wrong with that code.

There is nothing wrong with using the stack for what it was designed for.

The function names are self-explanatory.

The "style" might not be yours, but it doesn't make it bad.

And as described in the slides, it was derived from relayd.

[+] cremno|11 years ago|reply
Can you link to specific examples? I'm currently not taking your word for it. The file has comments. Maybe too few and too terse for you, but that isn't what you've said. Yes, some buffers are stack-allocated, but what's wrong with that? No unsafe functions are used to write to them. And discussing style is a waste of time, unless it's really terrible, which it isn't here; moreover the style is well documented (as OpenBSD KNF).
[+] InfiniteRand|11 years ago|reply
This certainly is not how I would have written this code, but I will say it is very consistent in its implementation. That alone is extremely important and underrated, so kudos for that.
[+] feld|11 years ago|reply
The code is based on their relayd; that's why it has the 2006-2015 copyright.
[+] conductor|11 years ago|reply
What's wrong with stack allocated buffers? They are faster (no malloc/free) and the stack is large enough to hold a couple of KiB of data.
[+] hobarrera|11 years ago|reply
> The "new" is a bit off too, the copyright runs 2006-2015.

As the presentation says, it's based on relayd, so the code is not all new, a lot of it is reused.

[+] marc_omorain|11 years ago|reply
Is there a technical reason why you would implement HTTPS in a HTTP server? If you ran a separate process on port 443 to terminate SSL connections, and then proxy that request to a HTTP server running locally, there would be better separation of concerns.

For example, this setup would mean that a security flaw in the HTTP server that allowed a user to read memory would not be able to read any private keys used in the HTTPS server.

I guess some downsides would be some extra latency while the request is proxied, and some extra memory overhead for the second process.

I'm interested in anyones thoughts on this.

[+] throwaway2048|11 years ago|reply
OpenBSD has added support to LibreSSL for privilege-separated processes that hold SSL keys: any operation requiring the private keys, such as the creation of session keys or signing, is shuttled off via a small API to a separate process. This is somewhat analogous to what ssh-agent does for OpenSSH clients.

OpenBSD's TLS private-key-consuming daemons have moved to this model or are in the process of doing so. This helps mitigate the problem of access to process memory resulting in disclosed private keys, and also removes the requirement that the daemon's user-facing parts have access to the key files.

http://article.gmane.org/gmane.os.openbsd.cvs/139527/

[+] toast0|11 years ago|reply
At work, we run stud in a freebsd jail to handle SSL termination. It uses the haproxy proxy protocol (v1) to send the client IP to the http daemon.

Downsides include: three sockets per client connection (this gets problematic around 1M client connections); lack of information about the SSL negotiation in the http context; and stud doesn't have the graceful restart options that are typical of web servers.

On the plus side, stud is a lot less code than an http server, so it's easier to modify things if you need to. I added sha-1/sha-2 cert switching, for example. That would have been doable in an https server too, but with a lot more code to work around.

[+] Too|11 years ago|reply
I'm not into the specifics of https, but I've been writing a lot of gateways for other protocols, and it's never as easy as "just forward the message". Protocols sometimes have state that the proxy must be aware of, and sometimes the forwarding is conditional, which means the proxy must understand both protocols and be able to act on information in them. Say, for example, you want to block all requests to a specific resource: if your https server knows about this, you might be able to reject the request before decrypting all of it.
[+] zurn|11 years ago|reply
Proxying typically loses a lot of information, like the client address, client certificate info and so on. If you take care to carry over everything transparently, it would more accurately be called privilege separation (a la sshd).
[+] eeZi|11 years ago|reply
SNI, for example. If you're running multiple virtual hosts, the proxy would have to be aware of all of them. But yes, SSL termination is not uncommon, especially if you have a frontend/backend architecture.
[+] cssmoo|11 years ago|reply
We run exactly that setup. Apache on the front end proxying and doing SSL for a mish-mash of Java EE, .Net and native apache modules.

There's very little latency added, it allows centralised logging and TBH apache is a ton more reliable than anything else out there. Does about 2-3 million requests a day.

[+] SixSigma|11 years ago|reply
That's how Plan 9 does it. SSL is a wrapper around whatever connection.
[+] fafner|11 years ago|reply
At least they added Comic Neue. I'm a bit disappointed that a project like OpenBSD that is so vocal about free software is promoting a proprietary font like Comic Sans MS.
[+] hobarrera|11 years ago|reply
I love these tiny easter eggs that the OpenBSD project leaves around and that only Windows users will come across.

So much passive-aggressiveness, in such a fun way!

[+] detaro|11 years ago|reply
Supports TLS using LibreSSL, serves static files and FastCGI.

https://github.com/reyk/httpd/issues?q=label%3Afeaturitis+is...

There's a "featuritis" tag in the bugtracker for currently denied features. They're clearly aiming for as simple as possible while still being useful.
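For a sense of scale, a complete virtual host covering those features fits in a handful of lines. A sketch of a server block from memory of the httpd.conf(5) grammar of that era (directive names and paths are illustrative; verify against the man page):

```
server "www.example.com" {
	# plain listener; a TLS one would be "listen on * tls port 443"
	listen on * port 80

	# document root, relative to httpd's /var/www chroot
	root "/htdocs/www.example.com"

	# hand *.php requests to a FastCGI upstream
	location "*.php" {
		fastcgi socket "/run/php-fpm.sock"
	}
}
```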

[+] Gracana|11 years ago|reply
In one of the comments on the "featuritis" tagged issues:

> I add the label "featuritis" to remind us of extra features (eg. ldap) that we reject now but might want to reinspect later.

So it's not that httpd will never be extended beyond the basics, but these issues are simply out of scope right now. I like that approach.

[+] bluetech|11 years ago|reply
What is wrong with compressing static files?
[+] ezequiel-garzon|11 years ago|reply
If I may take this opportunity... Does anybody know what I'm supposed to put in /etc/ssl/server.crt for SSL encryption? I have concatenated all six possible permutations of my own certificate ssl.crt, the intermediate certificate sub.class1.server.ca.pem and the root certificate ca.pem, but this gives me the error The certificate is not trusted because no issuer chain was provided. (Error code: sec_error_unknown_issuer) (my Ubuntu Chrome gives me a green lock, though). Feel free to visit my blank site https://ezequiel-garzon.net

Thanks!

[+] cesarb|11 years ago|reply
In cases like this, turn to Qualys' SSL tester: https://www.ssllabs.com/ssltest/analyze.html?d=ezequiel-garz...

It shows your server as sending only one certificate, the one with "CN=ns.ezequiel-garzon.net". It's missing the next one in the chain, "CN=StartCom Class 1 Primary Intermediate Server CA". I don't know the configuration details for the server you're using, but many servers use a separate "chain" file for the intermediates; if that's the case, you should put the main certificate in one file and the "StartCom Class 1 Primary Intermediate Server CA" in the other file.

And why does it work in some browsers? Notice that Qualys listed the intermediate as "Extra download"; some browsers can download the intermediate certificate directly from the CA's web server. Some browsers cache the intermediate certificates they've seen, so if you've visited a properly-configured server with the same intermediate before, the browser will use the copy from its cache. But it's not recommended to depend on this; you should always include all intermediates.

[+] icebraining|11 years ago|reply
Works fine here. Where are you getting an error? Maybe that client doesn't have StartCom's root cert in its trust store.
[+] andor|11 years ago|reply
> FastCGI: The protocol provides the single and fast interface to serve dynamic content

That's a bad choice in my opinion. Without reverse proxy functionality httpd can't match the flexibility of nginx.

[+] skissane|11 years ago|reply
Why bother with FastCGI? Why not just use reverse proxy HTTP instead? Why introduce yet another protocol when HTTP works fine?

I didn't write this, but I agree with it - https://ef.gy/fastcgi-is-pointless

[+] jedisct1|11 years ago|reply
This is needed at least for PHP. Granted, PHP has a built-in web server, but it's really just for testing and not meant to be a replacement for fpm.

Maybe reverse proxying is coming up next, as well as HTTP2. But once again, the goal is not to have a full-featured server that can replace Nginx. Rather something small, simple and secure.

[+] knivets|11 years ago|reply
Why is it a bad choice? Why does it need reverse proxy functionality? Why not use some reverse proxy software (e.g. HAProxy) in order to follow the UNIX modular principle? I'm not trying to argue here, but rather trying to understand.
[+] hobarrera|11 years ago|reply
I believe relayd is meant to serve that purpose.
[+] jalfresi|11 years ago|reply
Does anyone know if the FastCGI implementation is complete, i.e. whether it supports FastCGI processes in all three roles: Responder, Authorizer and Filter? I've always wanted to use FastCGI more, but most implementations (in Apache and Nginx at least) only support some of those roles, or require workarounds using server-specific features (e.g. Apache filters rather than FastCGI filters).
[+] davidgerard|11 years ago|reply
I read the config file format and I fell in love.

I really hope this gets the portable treatment.

[+] jnazario|11 years ago|reply
[update - i read the back story elsewhere and the reason is less boneheaded than i had assumed. still, i think the community needs to focus on higher priority needs and gaps]

this is the sort of thing that makes me happy i'm no longer involved in the OpenBSD world. httpd & previously smtpd are two replacements that (in my opinion) have little additive value beyond existing, community-adopted solutions (e.g. nginx and postfix), diluting effort where it is needed.

does the world need a new httpd? maybe. but the world needs other replacement software to be done first because it'll have a greater impact.

for example, OpenBSD could invest time and effort in maturing static code analyzers to assist in code audits (especially of ports).

i suspect this new httpd was done less because it was needed and more because it could be done. that's the attitude i disagree with.

[+] joosters|11 years ago|reply
OpenBSD seems to have caught a bad case of the 'not invented here' sickness. If they didn't like where nginx was going, why not just fork it and have a working web server with a known codebase? The forks would diverge but they could still grab fixes from nginx whenever they wanted to.
[+] cturner|11 years ago|reply
What would be the elegant way to implement websockets on the new openbsd arrangement? Would it be to use relayd instead of httpd? Or is websocketd suitable for the openbsd base?
[+] floatboth|11 years ago|reply
Why do they even have an httpd in base? They like to say they're smaller and simpler than FreeBSD, but FreeBSD doesn't include a web server in base!
[+] dyoder|11 years ago|reply
Did OpenBSD just standardize on an HTTP server that they wrote in 2 weeks, that has no tests, and that doesn't fully implement the spec… and then brag about it?
[+] mdekkers|11 years ago|reply
any performance benchmarks in the wild?
[+] kymywho|11 years ago|reply
HTTPS authentication support for Subversion could be the killer feature.
[+] vacri|11 years ago|reply
Why use a name that's already in use as a general descriptor? At least the other httpds have names that can be used to differentiate them: http://en.wikipedia.org/wiki/Httpd