
Nginx vs Apache performance

65 points | dangoldin | 17 years ago | blog.webfaction.com

47 comments

[+] pinkbike|17 years ago|reply
Benchmarks that are not completely anecdotal are really hard to produce. For starters you need the following...

1. Don't run the client on the same server. If you do, you have no business trying to test for high concurrency. Isolate the variables.

2. Size of file you are serving. Are you close to saturating your connection between the client and server? Most of the time this is the case.

3. Concurrency is hard to test because most of the time the client is the problem in the test. Don't use ApacheBench for anything like this, as its behavior at high concurrency leaves much to be desired.

4. A lot of other details need to be compared to make a benchmark useful. Are you using keepalives on both, or on neither? Nginx workers/processes vs. Apache threads/clients. Are you comparing apples to apples? How's your TCP/IP backlog in a case like this? What kind of I/O model are you running on each? Are you using sendfile on both or only one?

Nginx is a great server, and probably a better choice for static files, but data like this is like saying, "the other day I saw some kind of Honda pass some kind of Nissan". There is no useful information to infer about either.
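To make the checklist above concrete, here is a sketch of what a more controlled run might look like. This assumes `wrk` and `ab` are installed on a dedicated client machine, and the hostnames and file are made up for illustration:

```
# Run from a separate client box, never from the server under test.
# Same file, same concurrency, same keepalive setting on both servers.

# nginx box
wrk -t4 -c200 -d30s --latency http://nginx-box/static/100k.bin

# apache box
wrk -t4 -c200 -d30s --latency http://apache-box/static/100k.bin

# If you must use ApacheBench, at least pin the variables explicitly:
#   -k        keepalives on (drop it to test without)
#   -c 200    concurrent connections
#   -n 50000  total requests
ab -k -c 200 -n 50000 http://nginx-box/static/100k.bin
```

Compare the latency distributions, not just requests/sec, and rerun with a file large enough that you know whether you're saturating the link.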

[+] brianm|17 years ago|reply
Nginx shines as a high-volume proxy, but as a straight-up web server or app server, if either apache or nginx is your bottleneck, you are probably doing something wrong.

At its heart nginx is a fork of apache 1.3 with the multi-processing ripped out in favor of an event loop (and all the copyright statements removed from headers, but hey, it's cool). The event loop, time and again, has been shown to truly shine for a high number of low activity connections. In comparison, a blocking IO model with threads or processes has been shown, time and again, to cut down latency on a per-request basis compared to an event loop. On a lightly loaded system the difference is indistinguishable. Under load, most event loops choose to slow down, most blocking models choose to shed load.

A few short years ago the benefits from using an event loop instead of blocking io were much more dramatic -- the level of parallelism achievable in hardware has gone way up (hey, look, erlang!) and is accelerating. Paul Tyma did some great experimentation with this a while back, http://is.gd/nJ6Z .
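The trade-off described above is easy to see in miniature. A hedged sketch (nothing to do with nginx's actual code, just the general pattern): a single-threaded event loop can park thousands of idle, low-activity connections almost for free, because each waiting "connection" costs a small heap object rather than a whole thread stack.

```python
import asyncio
import time

async def slow_client(i: int) -> int:
    # Simulate a low-activity connection: mostly waiting, barely any work.
    await asyncio.sleep(0.1)
    return i

async def main(n: int):
    start = time.monotonic()
    results = await asyncio.gather(*(slow_client(i) for i in range(n)))
    elapsed = time.monotonic() - start
    return len(results), elapsed

# 2000 "connections" that each wait 100ms finish in roughly 100ms of wall
# clock, because the event loop interleaves them on one thread. A blocking
# thread-per-connection model would need 2000 stacks to match that.
count, elapsed = asyncio.run(main(2000))
print(count, round(elapsed, 2))
```

The flip side, per the comment above, is that on a lightly loaded box the blocking model's straight-line code path can give lower per-request latency.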

[+] sandGorgon|17 years ago|reply
One of the things about nginx is the lack of an organised community - e.g. there is not even an official repository for nginx (http://marc.info/?l=nginx&m=122153991029203&w=2), just some mirrors from people who maintain a patch-based tree (http://mdounin.ru/hg/nginx-vendor-current). There is no bug tracker (!!), just a wiki page (http://wiki.nginx.org//NginxBugs), and, as someone mentioned (http://www.wikivs.com/wiki/Lighttpd_vs_nginx), very little activity on IRC.

It comes down to the original issue between Linus Torvalds, Ingo Molnar and Con Kolivas: do you have a clear roadmap of where the architecture is going, or just a very cool technology that has a lot of support and is no doubt popular?

I am in no way commenting on the technology behind nginx, but as an architect making a deployment decision that is going to be hell to change later, I would be very concerned.

[+] grandalf|17 years ago|reply
I think much of this is due to the language barrier, and also due to the ease of use (and ease of writing nginx modules)...
[+] TFrancis|17 years ago|reply
If your architecture is designed such that changing the web layer is hell, you probably should reconsider the architecture. Your other points stand. The channels of communication around nginx are not as clear and robust as Apache.
[+] jwilliams|17 years ago|reply
Memory is a particularly big deal if you're on a small slicehost/linode instance - a standard Apache setup without tweaking can take up half your RAM.
[+] patio11|17 years ago|reply
Yep. I lost my slice to thrashing twice before I discovered that a standard PHP forum under trivial load (6 simultaneous users plus Googlebot) can easily balloon under the default settings. Nginx has much better "works right out of the box" properties for folks who are not httpd.conf gurus.
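For anyone in the same spot, most of that ballooning comes from the prefork MPM defaults: each child is a full process carrying mod_php, and the default pool is sized for far more RAM than a small slice has. A hedged sketch of the kind of httpd.conf tuning involved (the numbers are illustrative, not gospel - size them to your RAM):

```
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           10
    MaxRequestsPerChild 500
</IfModule>
```

`MaxClients` caps the worst-case memory; `MaxRequestsPerChild` recycles children so PHP leaks can't accumulate.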
[+] snprbob86|17 years ago|reply
My understanding is that Nginx is the server of choice for static content and Lighttpd for dynamic content (particularly FastCGI). Is that still the latest and greatest advice?

I've found Lighttpd way easier to configure than Apache and am having it serve my static content simply because we don't need to worry about every little bit of performance just yet.

[+] pwk|17 years ago|reply
Depends on the app platform. In the rails world Phusion Passenger (aka mod_rails or mod_rack) in combination with Apache is making inroads for serving up dynamic content. Despite the bigger footprint and other downsides of Apache, I'm hearing more and more that stability and ease of configuration of Passenger are a win. I'm only running it on a low usage backend app for the moment, but it was definitely easy to set up.
[+] nickb|17 years ago|reply
There's no need for Lighttpd when you use Nginx. It can do everything that Lighttpd can do and more (and has no memory leaks).
[+] jedberg|17 years ago|reply
We just use an http load balancer (haproxy) and have the app servers talk http directly. No need for a web server, which makes things much more stable. We use nginx for static content though (haproxy points at nginx for the static content).
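A hedged sketch of that kind of haproxy layout (backend names and addresses are made up for illustration):

```
frontend www
    bind *:80
    # Anything under /static/ goes to nginx; everything else to the app tier.
    acl is_static path_beg /static/
    use_backend static_nginx if is_static
    default_backend app_servers

backend static_nginx
    server nginx1 10.0.0.10:8080

backend app_servers
    server app1 10.0.0.20:8000
    server app2 10.0.0.21:8000
```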
[+] mtalantikite|17 years ago|reply
All of the sites hosted at engineyard.com use Nginx (github is one of them, for example). Works great.
[+] tlrobinson|17 years ago|reply
How does Nginx compare to lighttpd?
[+] jedberg|17 years ago|reply
We have used both at reddit. Performance-wise they are comparable for us, but nginx was a lot easier to configure, and lighttpd had a nasty bug that made us switch away (for the life of me, though, I can't remember what the bug was).
[+] njharman|17 years ago|reply
I only (tried to) use lighttpd for a couple weeks. I got so frustrated with the crap config file, which I could never get to do what I wanted and whose docs did not match reality, that I gave up and tried the (at the time) new competitor. Never looked back.

Many internet years later, I believe the momentum has shifted to nginx (http://news.netcraft.com/archives/2009/01/16/january_2009_we...) and it has so much going for it - check out the modules and add-ons: http://wiki.nginx.org/NginxModules

But if you really care http://www.google.com/search?q=lighttpd vs nginx

[+] ken|17 years ago|reply
Both are pretty simple to configure and run. The last website I launched, I just needed something to stick in front of my process to do compression, and found out that lighttpd can only compress static files it's serving from disk. nginx can compress any input it's serving.
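For reference, a hedged sketch of what that looks like in nginx.conf - `gzip_proxied` is what lets nginx compress responses it is proxying, not just files on disk (the backend address is made up):

```
location / {
    proxy_pass http://127.0.0.1:8000;

    gzip on;
    gzip_proxied any;        # compress proxied responses, not just disk files
    gzip_types text/plain text/css application/json application/javascript;
    gzip_min_length 1024;    # skip tiny responses where gzip isn't worth it
}
```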
[+] stanley|17 years ago|reply
What is the optimal solution for PHP-based sites? Is Apache w/ mod_php faster than Nginx with FastCGI?
[+] handelaar|17 years ago|reply
Anecdotes are not data, but if you're in the market for an anecdote anyway...

A thousand times no. Nginx+php-fastcgi is screamingly fast by comparison, while allowing me to free up about 70% of the memory previously in use, and get huge gains from loading the PHP code into RAM with APC.

I look after one managed server which chucks out tens of millions of requests per day despite only having half a gig of RAM in it. Before, running apache2, it had a load average of about 6.0. Now? 0.2.
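For anyone wanting to try that combination, a minimal sketch of the nginx side, assuming a PHP FastCGI pool is already listening on port 9000 (the paths are illustrative):

```
server {
    listen 80;
    root /var/www/example;
    index index.php;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```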

[+] aliasaria|17 years ago|reply
For a site where Nginx doesn't make sense, has anyone used memory caching on Apache (to store static files in memory) with success? I am curious as to how this would perform in comparison.

e.g. modmemcachecache

http://code.google.com/p/modmemcachecache/

[+] ilaksh|17 years ago|reply
Does anyone (anything) package php-fpm (or whatever you are supposed to use) together with nginx?
[+] pinkbike|17 years ago|reply
apache w/mod_php has the best latency compared to any fastcgi setup. When it comes to high concurrency, latency and time-to-finish are your biggest issues. Slow clients are another killer (slow clients are users on a slow connection who take an order of magnitude longer, or more, to download the page data than it took to generate). If your application has fast (less than 20ms) page generation, your best bet is the following setup...

nginx or varnish as a reverse-proxy front end (depending on your load you can turn keepalives on here). This front end isolates your www/php/db from slow clients, making sure that your request gets processed fast and resources are released, and then a light proxy process handles the delivery of the data. On the back end use apache/mod_php with a limit of only 50-100 clients.
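A hedged sketch of the front half of that setup (addresses and timeouts are illustrative): nginx terminates and buffers the slow clients, so the apache worker behind it is freed the moment the page is generated.

```
server {
    listen 80;
    keepalive_timeout 15;   # keepalives are cheap here, on the proxy

    location / {
        proxy_pass http://127.0.0.1:8080;   # apache/mod_php behind it
        proxy_buffering on;  # take the whole response, drip it to the client
    }
}
```

On the apache side this pairs with a deliberately small pool, e.g. `MaxClients 75` in the prefork config, so the workers stay hot and memory stays bounded.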

[+] artificer|17 years ago|reply
Interesting. Another nice choice for serving static content is rumored to be thttpd. It lacks any kind of FastCGI support though (that's in the proprietary, premium version). Has anyone had any experience with thttpd versus nginx?
[+] furburger|17 years ago|reply
apache is perfectly capable of saturating the outbound connection on static content on any reasonable setup. you may save a little on memory with nginx but you aren't saving on speed (how can you deliver more content than the outbound connection can carry?). this is why the in-kernel http servers went nowhere. in any case most people use CDNs these days for static content.

note that by not using apache you give up a lot of security hardening, add-on modules, and mindshare that nginx does not have.

[+] sunkencity|17 years ago|reply
For a very tight virtual server config I can see the use for nginx, but for a normal server running just apache, memory is not going to be an issue. There are probably other limits that will affect performance first, such as the connection, just as you say.

For example, I tried running a server with apache + passenger on an ec2 node and bumped MaxClients up to 1024. I evened out at around 400 simultaneous connections. Maybe it was due to some mysql limit, or limits at the place I sent the load from, but I was only consuming around 50% cpu, so there seemed to be some other bottleneck.

[+] blasdel|17 years ago|reply
No, in-kernel http servers went nowhere because of sendfile(2)