Interlude: nginx + php

In a fit of "no, I don't want to do more programming right now" I got back to another interesting topic: alternatives to the Apache server. While I love it for its stability and ease of setup (I rarely have problems with that aspect of web development), it's still good to know the alternatives and to see what else is out there. I'll likely never play with IIS (unless MS decides it should be free and work on a free OS ... madness, I know), but luckily there are other very interesting alternatives, one of them being nginx.

The interesting thing about nginx is that it's so lightweight. In fact, on my server, I run nginx with PHP as a proxy for web and mail to a number of VPS setups. The individual VPSes that run webapps typically run Apache, and not one of them comes remotely close to the memory footprint of nginx + PHP. That's not much of an observation on its own, since the Apache installs haven't been trimmed for low resource use - then again, I haven't trimmed nginx either; it simply has a small resource footprint out of the box. Thus, for the average layperson sysadmin (a contradiction in terms, though not in practice) an nginx + PHP install can make rather good sense, especially if you're on a low-powered VPS.

So I decided to look a bit more into nginx and PHP. As mentioned I already have it running on my server but that was a fairly fast setup without too much thought - just consulting the various tutorials I could find while trying to sidestep any obvious issues. Hence, more study was needed.

Installing nginx

On Debian/Ubuntu you can just apt-get nginx

sudo apt-get install nginx

That will, however, install a slightly older version of nginx. To get the newest stable release, go to the nginx download page and grab the latest.

sudo -s
cd /opt
wget http://nginx.org/download/nginx-1.0.5.tar.gz && tar -zxvf nginx-1.0.5.tar.gz
rm nginx-1.0.5.tar.gz
cd nginx-1.0.5
./configure --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --with-http_ssl_module --with-mail --with-mail_ssl_module --with-pcre
make && make install

You'll need a number of packages installed to build this, but the configure script will complain and tell you which ones are missing if you don't know. Also note that the mail-related configure options above are there because I use nginx to proxy email access to the individual VPS machines I have set up - if you don't need that, you can leave those options out. Once you're through running the commands (apt-get or manual install), you should have nginx set up and ready to run (already running, in fact, if you used apt-get).

After this, you need to get PHP installed, and here things get trickier. There are a number of ways to go about it:

  • compile PHP yourself from source, or install php5-cgi using apt-get, then use a homegrown script to manage the PHP processes
  • as above, but use spawn-fcgi to manage PHP (depending on your OS version, you might need to install lighttpd, as spawn-fcgi only became a separate package fairly recently)
  • use PHP-FPM (manually compiled from source or installed via apt-get)
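For the spawn-fcgi route, the launch command looks something like the following - a sketch only, where the socket path, the www-data user/group, and the php5-cgi binary location are assumptions you'll want to adjust for your system:

```shell
# spawn 5 php5-cgi children listening on a unix socket
# (-s socket path, -u/-g user and group, -C child count, -f binary to run)
spawn-fcgi -s /tmp/php5-fpm.sock -u www-data -g www-data -C 5 -f /usr/bin/php5-cgi
```

You'd typically wrap this in an init script so the processes come back after a reboot.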

The configuration from this point on differs a bit, because you'll either be setting up your own script, configuring spawn-fcgi, or configuring PHP-FPM. The most relevant part for this post, however, is how nginx and PHP communicate - for the setup details, you can check out the Linode Library for scripts and walkthroughs; the nginx wiki also has good material. What it boils down to, in the end, is whether nginx talks to PHP over TCP or through Unix sockets. If you google it you'll find differing advice, but the majority seems to point in the direction of sockets. The reasoning is that sockets are faster than TCP - there's much less overhead. You'll also find warnings that some people have had problems with sockets, though - so it's not an easy choice, and essentially you need to test things out yourself.

Which I then did - or rather, I settled for sockets, then decided to do some benchmark testing, and then figured out that you really do need to test both before you choose. The configuration I went with looks like this:

# relevant nginx part
location / {
    root           html;
    fastcgi_pass   unix:/tmp/php5-fpm.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  /var/www/nginx-php/$fastcgi_script_name;
    include        fastcgi_params;
}

# relevant php-fpm part, from /etc/php5/fpm/pool.d/
listen = /tmp/php5-fpm.sock

Together, the two make nginx pass PHP requests to PHP-FPM through the socket. Now, since I was playing with nginx and PHP-FastCGI anyway, I thought it would be interesting to see how the setup handled a bunch of requests, so I fired up ab (ApacheBench, which ships with Apache) and started flooding nginx. First I went with something like:

ab -kc 100 -n 1000 http://nginx-php:8000/

which worked fine (if you're not familiar with ab's options, the line above fires 100 concurrent keepalive requests at the server at nginx-php, port 8000, until 1000 requests have been sent). I had put a .php script containing nothing but a single echo at the end of that URL, just to check the throughput. Then, just for kicks, I upped the concurrency to 200 - and suddenly I started seeing errors. Checking nginx's error log, I saw the following:

connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

I then added -v 3 to the ab command line, to get some more detailed info about what nginx outputs when that happens. Turns out you get something like this:

WARNING: Response code not 2xx (502)
LOG: header received:
HTTP/1.1 502 Bad Gateway

Essentially, what happens is that nginx tries to open a connection to PHP-fastcgi, fails for one reason or another, then responds to the user with a 502 Bad Gateway.

Thinking that maybe it was a limitation in the PHP setup, I tried changing the FastCGI settings (how many PHP processes to keep around, max processes, etc.), but no change - I kept running into the 502. The limit on my local setup seems to be around 150-160 concurrent requests - more than that and errors crop up. However, when I switched to a TCP-based setup the errors were gone. Testing a bit more, I upped the concurrency to 400 connections - still no errors. At triple that concurrency I started to see problems, but not before.
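For reference, the TCP-based variant only changes two lines of the configuration shown earlier - a sketch, assuming the conventional 127.0.0.1:9000 address that most tutorials use:

```
# nginx side, replacing the unix: line
fastcgi_pass   127.0.0.1:9000;

# php-fpm side, in the pool config
listen = 127.0.0.1:9000
```

Everything else (fastcgi_index, SCRIPT_FILENAME, fastcgi_params) stays the same.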

However, instead of calling this a win for TCP, there's the other aspect: speed. Running a test with 100 concurrent connections for a total of 50,000 requests, sockets gave roughly a 15% speed increase over TCP. CPU consumption, on the other hand, spiked higher with sockets than with TCP (reasonably explained by the bigger throughput). Memory consumption didn't spike much for either setup, though sockets used a tad more.
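Getting a number like that is just a matter of comparing the "Requests per second" figures ab reports for the two runs. A quick sketch - the figures below are made up for illustration, not my actual results:

```shell
# hypothetical requests/sec from the two ab runs (illustrative numbers only)
sock_rps=4600
tcp_rps=4000

# relative difference of sockets over TCP, as a percentage
awk -v s="$sock_rps" -v t="$tcp_rps" \
    'BEGIN { printf "sockets are %.0f%% faster than TCP\n", (s - t) / t * 100 }'
# prints: sockets are 15% faster than TCP
```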


You need to figure out what your likely scenario is when setting up your server. That should be more or less self-explanatory - but it's something you rediscover time and again as you play around with hardware and software.

While I have seen some possible solutions for ramping up the concurrency of nginx + PHP over sockets, the main thing is to ask yourself: will I ever actually hit much more than 100 concurrent requests that need to be forwarded to PHP? If not, you're safe using sockets. If you expect much more, you should probably look into how to tune nginx with PHP - or consider other options that might scale more easily.
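As for those possible solutions: the "Resource temporarily unavailable" error typically means the socket's accept backlog filled up, so the usual knobs are php-fpm's listen backlog and the kernel's connection backlog cap. A sketch of what that tuning looks like - the values here are illustrative, not recommendations:

```
# php-fpm pool config (e.g. a file under /etc/php5/fpm/pool.d/)
# how many pending connections the socket will queue before refusing
listen.backlog = 1024

# kernel side: sysctl cap on accept backlogs (e.g. in /etc/sysctl.conf)
net.core.somaxconn = 1024
```

Whether that actually moves the 150-160 limit on a given box is something you'd have to benchmark, same as above.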