Optimizing Apache for large concurrent requests and static files
We were finally able to solve this and bring response times down from several seconds to several milliseconds, consistently, using Apache itself.
The problem wasn't Apache itself; it was the default MaxClients setting. After we raised MaxClients (and ServerLimit), Apache handled a large number of concurrent requests and delivered the static files with only a small increase in memory usage. (It took us a while to pin this down, because we had been stress-testing with ApacheBench, which didn't give us an accurate picture of what was going on.)
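For reference, on Apache 2.2 with the prefork MPM the relevant directives look roughly like this. The numbers are illustrative, not the values we used; size them to your RAM and measured per-process footprint:

```apache
<IfModule prefork.c>
    # ServerLimit must appear before MaxClients and be >= its value
    ServerLimit          512
    MaxClients           512
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       25
    # Recycle workers periodically to bound per-process memory growth
    MaxRequestsPerChild 4000
</IfModule>
```

Note that Apache reads ServerLimit only at startup, so a change here needs a full restart rather than a graceful reload.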
We're still using the prefork MPM, but might also look at worker and event MPM in the future. But we're quite happy with the prefork MPM performance for now.
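Beyond the MPM choice, a few directives are commonly tuned for serving many small static files (illustrative values, not necessarily what we used):

```apache
# A short keep-alive timeout frees workers quickly under heavy load
KeepAlive            On
KeepAliveTimeout     2
MaxKeepAliveRequests 500
# Let the kernel stream static files directly via sendfile(2)
EnableSendfile       On
# Avoid a reverse-DNS lookup for every logged request
HostnameLookups      Off
```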
This is a good reference article: MaxClients in apache. How to know the size of my process?
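The sizing rule of thumb from that article reduces to a quick calculation: divide the RAM you can spare for Apache by the average resident size of one worker process (measured with ps or top). A minimal sketch, with hypothetical numbers:

```python
# Rough MaxClients sizing: RAM available for Apache divided by the
# average resident size (RSS) of one prefork worker process.
def max_clients(available_ram_mb: float, per_process_mb: float) -> int:
    """Upper bound on simultaneous worker processes that fit in memory."""
    return int(available_ram_mb // per_process_mb)

# Example: ~2.8 GB free (per the free -m output in the question),
# with workers averaging a hypothetical 20 MB each.
print(max_clients(2800, 20))  # → 140
```

Leave headroom below this bound for the page cache; serving static files fast depends on the kernel being able to cache them.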
Suman
Web developer focusing on AngularJS, PHP, and WordPress, with some Big Data and Cloud Computing.
Updated on September 18, 2022

Comments
-
Suman over 1 year
OK, I know I'm asking a question that's been asked multiple times before (e.g. here - Serve millions of concurrent connections and static files?) but there appears to be no concrete solutions, so I would like to ask again; please be patient with me.
We could use nginx for this, but we're using Apache for many reasons (e.g. familiarity with Apache, keeping stack consistent, log formatting, etc).
We are trying to serve a large number of concurrent requests for static files using Apache. This should be simple and straightforward, especially since the static files are small images, but Apache doesn't seem to be handling this well.
More specifically, Apache seems to be falling over on an Amazon EC2 m1.medium box (4 GB RAM + 1 core with 2 hyper-threads) when it sees close to 100 concurrent requests. (The box itself appears to be handling more connections at this time; netstat -n | grep :80 | grep SYN | wc -l shows 250+ connections.)
The biggest issue is that requests for the static content sometimes take 5-10 seconds to get fulfilled, which is causing a bad user experience for our end users.
We are not RAM/memory constrained - running free -m shows the following:
             total   used   free  shared  buffers  cached
Mem:          3754   1374   2380       0      139     332
-/+ buffers/cache:    902   2851
Swap:            0      0      0
Can we optimize Apache further so that it can handle more simultaneous connections and serve the static content faster? Would more RAM or CPU help (even though both seem to be under-utilized)?
Or is there maybe some other entirely different problem that we are missing?
-
Grumpy over 11 years: add stats for top and iostat -x 10 2, as well as your Apache config. And honestly, consider switching to nginx; Apache is one of the worst servers for static content, and it's downright silly to try to serve a huge number of static files through it.
-
Ladadadada over 11 years: You're missing two things: 1. Which MPM are you using? 2. Why aren't you using S3?
-
Suman over 11 years: @Ladadadada: (1) We are using the Worker MPM. (2) We specifically need the Apache logging functionality to get real-time request logs.
-
Suman over 11 years: @Zoredache: we considered that possibility, but it would only be writing 100-500 lines per second... I just wrote a test script in PHP - gist.github.com/4455806 - to write to disk, which can clock around 200K lines per second.
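The linked gist is PHP; a rough analogue of the same disk-write throughput test (a sketch under the same idea, not a copy of the linked script) could look like:

```python
import os
import tempfile
import time

# Rough log-write throughput test: how many access-log-sized lines
# per second can we append to a file on this disk?
def lines_per_second(n_lines: int = 100_000) -> float:
    line = b'127.0.0.1 - - [date] "GET /img.png HTTP/1.1" 200 1234\n'
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(n_lines):
                f.write(line)
        elapsed = time.perf_counter() - start
    finally:
        os.remove(path)
    return n_lines / elapsed

print(f"{lines_per_second():.0f} lines/sec")
```

On most hardware this reports well above the 100-500 lines/sec the logging workload actually needs, which is the point of the test.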