Why would you ever set MaxKeepAliveRequests to anything but unlimited?

Solution 1

Basically, because Apache wasn't built for it. The problem is server memory usage. In many configurations, content generation happens in the same process as content delivery, so each process grows to the size of the largest thing it handles. Picture a process expanding to 64 MB because of a heavy PHP script, then that bloated process sitting around serving static files. Now multiply that by 100. Also, if there are memory leaks anywhere, processes will grow without limit.

The keepalive settings should be balanced against the type of content you serve and your traffic. Generally, a good configuration keeps MaxKeepAliveRequests high (100-500) and KeepAliveTimeout low (2-5 seconds), so that idle connections, and the processes holding them, are freed quickly.
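For illustration, a minimal httpd.conf sketch along those lines might look like the following (the exact numbers are placeholders to tune for your own content and traffic, not universal recommendations):

    # Keep-alive tuning sketch - values are illustrative only
    KeepAlive On
    # Allow plenty of requests per persistent connection...
    MaxKeepAliveRequests 200
    # ...but drop connections that go idle for more than a few seconds,
    # so bloated worker processes are not held hostage by quiet clients.
    KeepAliveTimeout 3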

Solution 2

I know this is an old question, but I have been doing some debugging, and it seems that (and this is not only true for Apache) MaxKeepAliveRequests works independently of KeepAliveTimeout.

Meaning, the timeout only counts against idle persistent connections (no reads or writes): as long as you keep issuing requests before the timeout expires, you can make a virtually unlimited number of requests over the same connection.

This might not be desirable for several reasons, including long-running TCP connections getting killed at seemingly arbitrary points. In any case, HTTP clients are not that fragile and handle a "low" MaxKeepAliveRequests setting pretty well: it is relatively easy in most programming languages to detect that a TCP connection has been closed by the server and to reconnect (see the sketch below). Additionally, most HTTP clients have limits of their own in place (e.g. browsers will close a keep-alive connection after 300 seconds or so).
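To illustrate the reconnect point, here is a rough sketch using Python's standard http.client module; the host, path, and the get() helper are hypothetical, and real clients such as browsers or connection-pooling libraries do this kind of retrying for you:

    import http.client

    HOST = "example.com"   # placeholder host
    conn = http.client.HTTPConnection(HOST, timeout=10)

    def get(path):
        """GET over a persistent connection, reopening it if the server
        closed it (for instance because MaxKeepAliveRequests was reached)."""
        global conn
        try:
            conn.request("GET", path)
            return conn.getresponse().read()
        except (http.client.RemoteDisconnected, ConnectionResetError, BrokenPipeError):
            # The server closed the keep-alive connection; reconnect and retry once.
            conn.close()
            conn = http.client.HTTPConnection(HOST, timeout=10)
            conn.request("GET", path)
            return conn.getresponse().read()

    # Sequential requests reuse one TCP connection until the server closes it.
    for _ in range(5):
        get("/")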

Solution 3

Partly, to keep a single user from hogging all the connection slots. Without a limit, one malicious or poorly-written client could take over every single connection available and hold onto them forever. This isn't a great mitigation for that, however, compared to something like a per-IP connection limit.

Mostly load balancing, but specifically with regard to maintenance. If you want to take a server offline, you stop sending it new connections but allow existing ones to finish for some amount of time. Putting a limit on the number of keep-alive requests means that existing users will eventually open a new connection and be moved to a different back-end server. A way to signal the server that it should stop accepting keep-alives altogether during the drain would be even better, but as far as I know no such feature exists.
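As a rough approximation of that drain behavior using only stock directives (a config change plus a graceful reload on the backend being drained, not the runtime signal wished for above; the load balancer still has to stop routing new connections there):

    # Drain sketch: stop granting keep-alive on this backend, so each
    # response ends the connection and the client's next request goes
    # back through the load balancer to a healthy server.
    KeepAlive Off
    # Alternatively, keep keep-alive enabled but end connections quickly:
    # MaxKeepAliveRequests 1
    # KeepAliveTimeout 1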

Solution 4

One reason is load balancing. Once a keep-alive (HTTP/1.1 persistent) connection is established, the load balancer won't move it to a new host until it closes. If one client makes a huge number of requests over a single connection, you might not get good balancing between servers.

Comments

  • Jonathon Reinhart
    Jonathon Reinhart almost 2 years

    Apache's KeepAliveTimeout exists to close a keep-alive connection if a new request is not issued within a given period of time. Provided the user does not close his browser/tab, this timeout (usually 5-15 seconds) is what eventually closes most keep-alive connections, and prevents server resources from being wasted by holding on to connections indefinitely.

    Now the MaxKeepAliveRequests directive puts a limit on the number of HTTP requests that a single TCP connection (left open due to KeepAlive) will serve. Setting this to 0 means an unlimited number of requests are allowed.

    Why would you ever set this to anything but "unlimited"? Provided a client is still actively making requests, what harm is there in letting them happen on the same keep-alive connection? Once the limit is reached, the requests still come in, just on a new connection.

    The way I see it, there is no point in ever limiting this. What am I missing?

  • Jonathon Reinhart
    Jonathon Reinhart over 7 years
But why would that matter? To me, it seems undesirable to ever spread a single user's connections out over multiple servers. Load balancing is there to handle a high number of users, not connections from a single user. In fact, if a single user is hammering a service, you'd rather it be confined to a single server (where they would effectively rate-limit themselves).
  • dtauzell
    dtauzell over 7 years
Good points. A few thoughts: 1. Anybody else on that server would be getting hammered as well. 2. For load balancers that work below the HTTP level, taking a server out of the pool doesn't close the existing HTTP connections, which makes it a bit harder to move people to a different server using only the load balancer. Reason 2 is how I got to this page, while searching to see what people had to say about this parameter.
  • dtauzell
    dtauzell over 7 years
A third reason: if your server/app gets into a bad state and starts erroring out, this pinning can make all of a client's requests fail until the situation is corrected, whereas if you limit how many requests a connection may serve, clients have a chance of being balanced onto a healthy server.
  • Manuel
    Manuel about 4 years
I have not come across a load balancer that works like this. A load balancer usually has a "stickiness" parameter that defines whether all requests of a client (determined by IP, for example) within the current session should be routed to the same upstream or spread among upstreams. Which option is useful depends on the app running on the upstreams.