Different design goals. Apache is meant to be robust, extensible and portable.
* Robust: that's reflected in its internal API that makes it near impossible to leak resources.
* Extensible: witness the gazillion modules out there.
* Portable: compiles and runs on very exotic or outdated systems. SCO, IRIX, Digital UNIX, VMS, the list goes on.
nginx and such were designed from the ground up with performance in mind - and with success - but the trade-off is a lack of portability and an API that is much harder to program to.
Well gee. It's been a while since I've used Digital UNIX, Irix, SCO, or VMS.
Because they've become totally irrelevant.
Seems like a waste of resources when there are more pressing things to do than worry about the 5 people who (a) use VMS and (b) demand a bleeding-edge Apache.
No need to be so acerbic. They are providing a valuable service to you, for free. Apache is older, much more ubiquitous, supports more modules, has a more familiar setup and configuration system for many people, runs on more platforms, etc. Its long history is part of why it's so popular, but the legacy baggage that comes with it is also why it moves slower. Nginx is much newer, doesn't have the same legacy issues, and so could afford to focus on speed.
There's a reason that Apache is installed on just about every random web host you can find, and has a module for every language or environment you need to deal with, while Nginx is a bit more of a specialty web server, usually used for dedicated sites whose operators can spend the time to tune carefully for the highest performance. They both have their place.
It's great that Apache is still innovating, moving towards loadable MPMs as well as adding an evented MPM. But it's not a bad thing that they're moving slower than a new server like Nginx; there's room for more than one great free web server in the world.
Prefork and threaded blocking IO consume far more resources than event-driven non-blocking network IO. Process and thread stacks consume a significant amount of memory. This is why nginx ruins Apache at almost anything at scale (especially reverse proxying). Apache falls over while consuming enormous amounts of memory at the low millions of connections per day, while nginx can easily handle 10mm connections per day on a single-core machine with 256mb ram. It's ridiculous.
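The stack-memory point can be put in rough numbers. This is a back-of-envelope sketch, not a benchmark: the 8 MB stack is a common Linux default (`ulimit -s`), and the per-connection event-loop state is a guessed figure.

```python
# Back-of-envelope comparison of memory footprints (illustrative numbers).
# A blocking server needs one thread/process per in-flight connection, and
# each thread reserves a full stack; an event loop multiplexes the same
# connections over a handful of threads, keeping only small state structs.

THREAD_STACK_MB = 8            # common Linux default stack reservation
CONCURRENT_CONNECTIONS = 10_000

# Note: stacks are reserved address space; resident usage is lower,
# but still far above event-loop bookkeeping at this connection count.
threaded_mb = CONCURRENT_CONNECTIONS * THREAD_STACK_MB
print(f"threaded/prefork stacks: ~{threaded_mb / 1024:.0f} GB reserved")

EVENT_STATE_KB = 10            # rough per-connection state (buffers, fd, timers)
evented_mb = CONCURRENT_CONNECTIONS * EVENT_STATE_KB / 1024
print(f"event-driven state:      ~{evented_mb:.0f} MB")
```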
On the other hand, for long-lived connections, you're better off using blocking IO since it's more efficient. Most web traffic is not long-lived, though. See Paul's interesting article on NIO vs IO performance:
And don't forget about Slowloris. If you see your competition using Apache, you really don't need to worry. :)
Disclaimer: I have nothing personal against Apache and used it for six years or so. I've just moved on to better software. If it works for you, that's great.
I still use Apache because I trust it, and because of the ease with which I can configure a Python server (i.e. mod_wsgi), a Ruby server (Passenger), and a Perl server, all while making use of the same modules I've been using for years.
Also, a properly configured Varnish placed in front of Apache ruins Nginx at almost anything at scale. I've seen it.
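For reference, the Varnish-in-front-of-Apache setup amounts to very little configuration. A minimal sketch, assuming Varnish 3.x VCL syntax and Apache moved off port 80 to 8080 (both assumptions):

```vcl
# Minimal VCL pointing Varnish at an Apache backend on the same host.
# Apache is assumed to have been moved to :8080 so Varnish can own :80.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Strip cookies on static assets so Varnish can cache them.
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        unset req.http.Cookie;
    }
}
```

The win comes from Varnish absorbing the bulk of repeat traffic so Apache's heavyweight workers only see cache misses.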
Most web traffic is not long-lived, though.
But most web traffic is blocking. Going NIO requires caching, which is a huge penalty and a PITA; and doing it when you don't yet have actual users doesn't make sense.
Varnish isn't a webserver. You can put Varnish in front of nginx as well. :)
Agreed on the blocking.
On large sites, I've been doing a single HAProxy instance -> nginx instance on each webserver -> Unicorn app server on each webserver with really good results.
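The nginx tier in that stack is usually just a static-file server with a fallback proxy. A sketch of the per-webserver config, assuming the conventional Unicorn-behind-nginx layout (socket path and docroot are made up):

```nginx
# nginx on each webserver: HAProxy balances across these instances,
# and each proxies dynamic requests to a local Unicorn master
# listening on a Unix domain socket.
upstream app {
    server unix:/var/run/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    root /var/www/app/public;

    location / {
        # Serve static files directly; hand everything else to Unicorn.
        try_files $uri @unicorn;
    }

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://app;
    }
}
```

The Unix socket avoids TCP overhead for the local hop, and `fail_timeout=0` keeps nginx retrying the Unicorn master while it restarts workers.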
Yes, I believe so. I'm really excited that the event MPM is no longer experimental. I had tried it way back in the day, but mod_php could not be safely used with it as mod_php was (is?) not thread-safe. I wasn't aware of FCGI at the time.
0) People don't know how to use PHP with it: as a module or via FastCGI, and which MPM to pair with each.
1) People don't know how to configure the MPM settings to let Apache make full use of server resources (without under- or over-utilizing them).
2) Apache is a full, well-rounded web server application... It's not a specialty server (e.g., mostly static content) that can exclude this and that and provide only one feature to excel at. Other servers are stripped down compared to Apache.
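For point 1, "configuring the MPM" mostly means capping worker processes so they fit in RAM. A sketch using Apache 2.2 prefork directive names (in 2.4, MaxClients became MaxRequestWorkers); the sizing numbers are illustrative assumptions:

```apache
# Prefork MPM sizing sketch: cap processes so that
# (MaxClients x per-process memory) fits in available RAM.
# e.g. ~2 GB for Apache / ~40 MB per mod_php process => ~50 workers.
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    ServerLimit          50
    MaxClients           50
    MaxRequestsPerChild 4000   # recycle children to contain memory leaks
</IfModule>
KeepAliveTimeout 2             # long timeouts pin workers to idle clients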
* Not the AMA guy (nor an Apache hacker), but I've had a little-known WAMP distribution (called WampDeveloper, formerly Web.Developer Server Suite) since 2003, with over 250,000 unique-IP downloads between 2003-2006 (I stopped counting), and have worked on 1000s of issues for users and clients since the start.
The only reason I still run Apache is that a lot of PHP software requires mod_rewrite to create pretty URLs, and I haven't had the time to rewrite those rules in Lua so I can use Lighttpd.
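For what it's worth, the mod_rewrite rules in question are usually some variant of the standard front-controller pattern:

```apache
# The common "pretty URL" pattern that keeps PHP sites tied to
# mod_rewrite: route any request that isn't a real file or directory
# to index.php, preserving the query string.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php [QSA,L]
```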
But on the same server, using PHP-FPM with FastCGI on both lighttpd and Apache, lighttpd is serving almost 4x the requests Apache is, with less memory overhead. Switching the sites currently running on Lighttpd back to Apache, as I attempted to do not too long ago so I'd only have to maintain a single server, made Apache die a miserable fast death. It just could not keep up, I've had friends look over my config (datacenter techs, help people scale their stuff, porn mostly) and they said it looked fine. Apache was the limiting factor here.
I moved from prefork to threaded, and that helped a little, but not much. Apache was just using a lot of memory and overall did not provide the performance I wanted. I switched back to lighttpd; load on the server went down, and the websites were as responsive as before.
"made Apache die a miserable fast death. It just could not keep up, I've had friends look over my config (datacenter techs, help people scale their stuff, porn mostly) and they said it looked fine."
Apache crashing under load is indicative of seriously bad MPM settings: specifically the number of processes/threads allowed (plus some other settings such as the KeepAlive timeout, how PHP is being used, etc.).
With a properly configured MPM, anything coming in over the process/thread limit goes into a backlog queue, which is 511 entries long by default. Anything over that just gets dropped.
To reiterate: Apache does not crash under load (too many requests); it crashes due to too many processes/threads being used (an MPM setting), and sometimes due to leaking modules such as mod_python/mod_ruby.
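The 511 figure comes from Apache's ListenBacklog default. A sketch of the relevant knobs, here using worker MPM directive names (the specific numbers are illustrative):

```apache
# Worker MPM caps: once all threads are busy, pending connections sit in
# the listen backlog; raising the worker cap (within RAM limits) is what
# prevents the "crash under load", not the backlog itself.
<IfModule worker.c>
    ServerLimit         16
    ThreadsPerChild     25
    MaxClients         400    # ServerLimit x ThreadsPerChild
</IfModule>
ListenBacklog 511             # the default; further bounded by the OS
                              # (e.g. net.core.somaxconn on Linux)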