MPM Prefork, too many apache2 processes?


Solution 1

I gave you the answer to this in the comments over on Server not responding to SSH and HTTP but ping works, but apparently you don't believe me. Really, it's true!

You need to size MaxClients / ServerLimit to your system. The "5-10 settings for Min/Max Servers" which you mention are basically irrelevant — that's just the number of extra servers hanging around not doing anything that Apache will retain.

In order to set MaxClients appropriately, look at the typical high-water mark for your httpd (or apache2) processes, and then divide your available memory by that. Best to drop down by a little bit to give the rest of the system room to breathe. Since you've got 4GB of RAM, and 185MB processes, that means your ServerLimit value should be 21 at most — probably 20 or 19.
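As a sketch of that arithmetic (the 4GB and 185MB figures are from this question; the headroom value is my own assumption):

```shell
# Rough MaxClients sizing: (RAM - headroom) / per-process high-water RSS.
total_mb=4096       # total RAM on the box in question
per_proc_mb=185     # observed high-water apache2 process size
headroom_mb=512     # assumption: room for the rest of the system
echo "MaxClients ~= $(( (total_mb - headroom_mb) / per_proc_mb ))"
```

Substitute your own measurements; the point is the division, not these exact numbers.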

Now, it may be that 190MB is atypical. You can set the ServerLimit higher, based on a different estimate of typical usage, but then you're basically gambling that you'll never have a spike. If it does happen, your system will be out of memory.

If you can find a way to constrain your per-worker memory usage, that's gonna be a win. I'm betting this is a case of PHP Ate My RAM. Can you code your app to live within a lower memory_limit? If you can't do that, you need a different model under which to run your PHP. If you can't do that, you need to buy more RAM.
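If PHP is indeed the culprit, the knob lives in php.ini (the 64M value below is purely illustrative; pick something your app can actually live within):

```ini
; Cap per-request PHP memory; after lowering it, watch the error log
; for "Allowed memory size ... exhausted" messages.
memory_limit = 64M
```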

Solution 2

Apache's prefork MPM self-manages servers. It will always start with StartServers daemons, and will never run fewer than MinSpareServers once it gets going. It will also eventually retire/kill off servers in excess of MaxSpareServers if they're idle long enough (I don't recall what "Long Enough" is in this context, nor if/how it can be modified).

ServerLimit sets the maximum number of apache daemons that can be running at any given time -- This is why in your situation you can have hundreds of sleeping apache processes (they got spawned to service a flood of requests and haven't been idle long enough to be killed by the mother process yet).


Personally I think 1250 is a pretty high value for ServerLimit/MaxClients -- 250 may be a more reasonable number (though this may result in the occasional 503/Server Busy error if you get a massive flood of requests: if that becomes a chronic issue you can increase the number or add more servers to handle the load).
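For comparison, a prefork block along those lines might look like this (directive names taken from the question's own config; the values are illustrative, not a recommendation for your specific hardware):

```apache
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    ServerLimit         250
    MaxClients          250
    MaxRequestsPerChild 1500
</IfModule>
```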

Relating this question to your previous one Re: an out-of-memory crash, definitely follow the guidance from the Apache Manual on this parameter:

Most important is that MaxClients be big enough to handle as many simultaneous
requests as you expect to receive, but small enough to assure that there is enough
physical RAM for all processes.

…and my personal axiom: It's better to give a client a 503 page than knock the server down. :)

Solution 3

Turn off KeepAlive and set MaxClients to 150. The most likely reason you have 260 processes just sitting there is that Apache is dutifully holding browser connections open because KeepAlive On is set in your Apache config file.

Solution 4

In my experience it is worth the effort to tune KeepAliveTimeout after properly setting other parameters regarding number of processes. I say tune which means you should change the parameter slightly and measure the server responsiveness. Among our sites, one performs best with KeepAliveTimeout=3 yet another with KeepAliveTimeout=1. None of these are happy with KeepAlive turned off. This additional tuning saves you from buying/allocating extra RAM too early.
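The directives involved, with one of the values mentioned above (illustrative; measure responsiveness before and after each change):

```apache
KeepAlive On
MaxKeepAliveRequests 80   # a commonly recommended value
KeepAliveTimeout 3        # worked best for one of our sites; another does best with 1
```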

Tuning is easy because the change is effective immediately after graceful restart:

sudo apache2ctl -k graceful

(I'm reviving an old thread because Google still deems it relevant... ;) )

Solution 5

Calculate how many servers you can have running within the constraints of your system's RAM by running this command:

$ ps -ylC apache2 | awk '{x += $8;y += 1} END {print "Apache Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/((y-1)*1024)}'

It will produce output like:

Apache Memory Usage (MB): 1608.76
Average Process Size (MB): 55.4745

Now stop apache and find out how much RAM you have available without it by using free:

               total       used       free     shared    buffers     cached
  Mem:       7629384    7415780     213604          0     333428    5341884
  -/+ buffers/cache:    1740468    5888916
  Swap:      7629380       7968    7621412

(above is in kilobytes. free -m would show you megabytes.)

Linux will fill available memory with buffers and cache, so adding free+buffers+cache (213604+333428+5341884) yields 5888916 Kbytes available.

5888916K available / 55474K per apache process ≈ 106 servers. But set MaxClients lower than that to leave some breathing room.
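Putting the two measurements together (numbers from the example output above):

```shell
available_kb=5888916   # free + buffers + cached, from the free output
per_proc_kb=55474      # average apache2 RSS from the ps/awk one-liner
echo "MaxClients upper bound: $(( available_kb / per_proc_kb ))"
```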


Author: dynamic

Updated on September 17, 2022

Comments

  • dynamic
    dynamic almost 2 years

    I have this settings:

    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        ServerLimit      1250
        MaxClients            1250
        MaxRequestsPerChild   1500
    </IfModule>
    

    Is it possible that with 5-10 settings for Min/Max Servers, when I do top, there are tons of apache2 processes??

    Shouldn't they be only between 5-10? Just look at the 260 processes sleeping O_O (d*mn apache)

    Click http://img200.imageshack.us/img200/3285/senzatitolo1iw.jpg

    Edit1:

    After 30min of uptime, here's a screenshot of top:

    Click: http://img816.imageshack.us/img816/1645/immagineov.png

    After 24 hours of uptime (top ordered by MEM usage)

    Thanks for any explanation

    (debian 6, lamp, 4gb ram)

  • voretaq7
    voretaq7 over 13 years
    Did you ever have a spike of N simultaneous requests before? And are we even sure that's what happened? Looking more closely at your top output I don't see that many apache processes (Restrict to user: www-data and see what you get -- 200 or so processes on an idle Linux box is not uncommon, I doubt they're all httpd :-)
  • dynamic
    dynamic over 13 years
    i don't think i had these spike before. Google is pwning my server requesting more than 20 pages/minute as i can see from the log (the top screenshot is taken right after an hard reboot)
  • dynamic
    dynamic over 13 years
    I took another top screenshot. See first post. (now you can see after 30min there are only apache2 processes)
  • voretaq7
    voretaq7 over 13 years
    ahh, now that's a problem - two actually. Crash-wise 1200 Apache processes is definitely too high a limit for your hardware (1200 * 11M(RSZ) = ~13Gigs: Way more than your physical RAM). Workload-wise you need to hunt down why you're getting so many requests: You may need to tweak your crawl rate in robots.txt, but check your apache logs to see what else is going on too...
  • voretaq7
    voretaq7 over 13 years
    If this is indeed a "PHP Ate my RAM" situation (and the eating is relatively confined -- staying within a child rather than hitting the shared pool) you may also get some benefit by setting an aggressive (low) MaxRequestsPerChild -- Older (fatter) daemons will be sacrificed to the RAM-freeing gods... Note that this is best viewed as a temporary solution because killing off and restarting apache daemons during high load periods can put a hurting on your server...
  • dynamic
    dynamic over 13 years
    Ok, i will set memory_limit at 16megabyte in my scripts, i wanna see one of them needs more than 16mb (i really doubt)
  • mattdm
    mattdm over 13 years
    @yes123: sounds like a good plan. Watch for php memory limit messages in the error log.
  • dynamic
    dynamic over 13 years
    currently usage is way less than 5MB per PHP script. As I said above, I doubt this is the problem. I think it's just an apache2 settings problem
  • mattdm
    mattdm over 13 years
    @yes123: are you sure you're measuring the PHP memory usage correctly? Note that memory_limit is a setting in the core php.ini file. If it's not PHP, something else out of the ordinary is causing your apache processes to consume a lot of RAM. The general advice remains: if you can't limit or constrain whatever that is, you still need to set your ServerLimit to match.
  • dynamic
    dynamic over 13 years
    what do you mean? I did ini_set('memory_limit','16M'); Now php scripts can't use more than 16mb. If 1 single apache2 process uses 190MB it's not due to serverlimit i guess neither to php at this point
  • kashani
    kashani over 13 years
    Are you also setting MaxKeepAliveRequests? MaxKeepAliveRequests 80 seems to be the recommended setting.
  • dynamic
    dynamic over 13 years
    yep 80..........
  • kashani
    kashani over 13 years
    Hmmm, if it's still that high with a few hundred connections sitting idle you might want to consider putting Varnish or Nginx on port 80 and doing reverse proxy to Apache processes. Let either of those servers do the tcp connections and just pass requests to Apache. I'd do a quick test though with KeepAlive off just to make sure that would actually decrease the number of idle Apaches. If it does, then the reverse proxy stuff is worth doing.
  • dynamic
    dynamic over 13 years
    @kashani: I actually lowered ServerLimit, MaxClients, and MaxRequestsPerChild a bit. Let's wait to see if this happens again, then I will disable keepalive
  • mattdm
    mattdm over 13 years
    @yes123 — can you humor me and set it globally? (And check your codebase for any code which raises the limit.) If that's still not working, can you try disabling PHP entirely temporarily and observing the size of the apache worker processes?
  • dynamic
    dynamic over 13 years
    @matt: you are going offtopic... My website can't work without PHP because it is made with... PHP. phpinfo() shows the correct limit of 16M so it's ok, plus I have error_reporting E_ALL and error logging. Plus this is the same website code as on my previous server with only 2GB of RAM, and apache2 didn't go to 200MB there.
  • mattdm
    mattdm over 13 years
    @yes123: you can call it offtopic, or you can call it "I'm trying to help you nail down your problem". If the phpinfo is showing that limit yet the processes are still huge, either PHP is showing you unhelpful information or something other than PHP is eating memory. Unless you have an idea of what that might be, narrowing down the problem is the only rational approach.
  • dynamic
    dynamic over 13 years
    Yes I think it's apache2 (or its settings) that's the problem, not my PHP scripts. As I said, I am logging the memory usage with memory_get_peak_usage() and there isn't one PHP script using more than 3MB or taking more than 0.09 secs to load. Anyway, yesterday after the server reboot the values are perfectly fine with 20% of RAM usage and 0% of swap usage. I guess now I have to wait some days to see if apache2 eats up all the memory again
  • mattdm
    mattdm over 13 years
    @yes123: are your apache2 processes now each smaller, or are you seeing them at size 185MB again?
  • dynamic
    dynamic over 13 years
    @matt: much much smaller, indeed the ram is full only for 20%
  • mattdm
    mattdm over 13 years
    @yes123: but what is the per-process size now?
  • dynamic
    dynamic over 13 years
    HM, i did TOP with order for %MEM usage here the screen apache2 Res is 13MB, but VIRT is 186MB... HMM (i added the scren to first post) thanks
  • mattdm
    mattdm over 13 years
    @yes123: that's okay. So, 4GB/13 gives you a server limit of roughly 300.
  • dynamic
    dynamic over 13 years
    @matt: so do you think that's the problem? But what happens if requests overcome that limit? I remember on the previous server with only 2GB I set up a ServerLimit of 2000 without any problem =/
  • mattdm
    mattdm over 13 years
    @yes123: you just never got the traffic to fire them all up. It's definitely the problem. If requests overcome that limit, people will have to wait. But that's a lot of requests — you say your longest php script takes 0.09 seconds to load, so worst case each one can handle 11 requests a second. Are you really likely to serve over 3000 requests a second? And if you are, can the CPU keep up with that? If you're hitting that problem, you need a bigger box.
  • dynamic
    dynamic over 13 years
    No of course I don't serve 3000 requests per second... but when the setting was at 256 I found in the error log "... consider rising MaxClients directive ..." But are you sure that 190MB of apache2 VIRT mem usage is fair?
  • mattdm
    mattdm over 13 years
    @yes123: don't worry about the virt usage — sorry for stressing you out about that yesterday; I was being a little overzealous and should have caught that. It's the RSS that is closest to meaningful and what you should size the maxservers/maxclients setting by.
  • mattdm
    mattdm over 13 years
    @yes123: if you're hitting the suggestion to raise MaxClients and you're already at the reasonable limit for your server size, you need more RAM.