
Chapter 14. Defensive Measures for Performance Enhancement

If you have already worked with mod_perl, you have probably noticed that it can be difficult to keep your mod_perl processes from using a lot of memory. The less memory you have, the fewer processes you can run and the worse your server will perform, especially under a heavy load. This chapter presents several common situations that can lead to unnecessary consumption of RAM, together with preventive measures.

14.1. Controlling Your Memory Usage

When you need to control the size of your httpd processes, use one of two modules, Apache::GTopLimit or Apache::SizeLimit, which kill Apache httpd processes when those processes grow too large or lose too big a chunk of their shared memory. The two modules differ in how they determine memory usage: Apache::GTopLimit relies on the libgtop library to perform this task, so if that library can be built on your platform you can use this module; Apache::SizeLimit includes different methods for different platforms, so you will have to check the module's manpage to find out which platforms are supported.

14.1.1. Defining the Minimum Shared Memory Size Threshold

As we have already discussed, when it is first created, an Apache child process usually has a large fraction of its memory shared with its parent. During the child process's life some of its data structures are modified and a part of its memory becomes unshared (pages become "dirty"), leading to an increase in memory consumption. You will remember that the MaxRequestsPerChild directive allows you to specify the number of requests a child process should serve before it is killed. One way to limit the memory consumption of a process is to kill it and let Apache replace it with a newly started process, which again will have most of its memory shared with the Apache parent. The new child process will then serve requests, and eventually the cycle will be repeated.

This is a fairly crude means of limiting unshared memory, and you will probably need to tune MaxRequestsPerChild, eventually finding an optimum value. If, as is likely, your service is undergoing constant changes, this is an inconvenient solution. You'll have to retune this number again and again to adapt to the ever-changing code base.
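
For example, a typical setting in httpd.conf might look like this (the value 500 is purely illustrative; the right number depends entirely on your code and traffic):

MaxRequestsPerChild 500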

What you really want is a guardian that watches the shared memory size and kills a process only when that size drops below a chosen limit. This way, processes will not be killed unnecessarily.

To set a shared memory lower limit of 4 MB using Apache::GTopLimit (the thresholds are specified in kilobytes, so 4 MB is 4096), add the following code to the startup.pl file:

use Apache::GTopLimit;
$Apache::GTopLimit::MIN_PROCESS_SHARED_SIZE = 4096;

and add this line to httpd.conf:

PerlFixupHandler Apache::GTopLimit

Don't forget to restart the server for the changes to take effect.

With these lines in place, as soon as a child process shares less than 4 MB of memory (the corollary being that it must be occupying a lot of memory with its unique, unshared pages), it will be killed after completing its current request, and a new child will take its place.

If you use Apache::SizeLimit, you can accomplish the same thing by adding this to startup.pl:

use Apache::SizeLimit;
$Apache::SizeLimit::MIN_SHARE_SIZE = 4096;

and this to httpd.conf:

PerlFixupHandler Apache::SizeLimit

If you want to set this limit for only some requests (presumably the ones you think are likely to cause memory to become unshared), you can register a post-processing check from within your handler code using the set_min_shared_size() method. For example:

use Apache::GTopLimit;
if ($need_to_limit) {
    # make sure that at least 4MB are shared
    Apache::GTopLimit->set_min_shared_size(4096);
}

or for Apache::SizeLimit:

use Apache::SizeLimit;
if ($need_to_limit) {
    # make sure that at least 4MB are shared
    Apache::SizeLimit->setmin(4096);
}
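
For instance, here is a minimal sketch of a content handler that applies such a per-request limit (the package name MyApp::BigReport is hypothetical, as is the assumption that this particular handler is the memory-hungry one):

package MyApp::BigReport;
use strict;
use Apache::Constants qw(OK);
use Apache::GTopLimit ();

sub handler {
    my $r = shift;
    # this request is known to unshare a lot of memory, so require
    # that at least 4 MB (4096 KB) remain shared once it completes
    Apache::GTopLimit->set_min_shared_size(4096);
    $r->send_http_header('text/plain');
    $r->print("report generated\n");
    return OK;
}
1;

With Apache::SizeLimit, the call inside the handler would be Apache::SizeLimit->setmin(4096) instead.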

Since accessing the process information adds a little overhead, you may want to check the process size only once every N requests. To do this, set the $Apache::GTopLimit::CHECK_EVERY_N_REQUESTS variable. For example, to test the size on every second request, put the following in your startup.pl file:

$Apache::GTopLimit::CHECK_EVERY_N_REQUESTS = 2;

or, for Apache::SizeLimit:

$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;

You can run the Apache::GTopLimit module in debug mode by setting:

PerlSetVar Apache::GTopLimit::DEBUG 1

in httpd.conf. It's important that this setting appears before the Apache::GTopLimit module is loaded.

When debug mode is turned on, the module logs the memory usage of the current process to the error_log file, and also reports when it detects that at least one of the thresholds has been crossed and the process is about to be killed.

Apache::SizeLimit controls the debug level via the $Apache::SizeLimit::DEBUG variable:

$Apache::SizeLimit::DEBUG = 1;

which can be modified any time, even after the module has been loaded.

14.1.1.1. Potential drawbacks of memory-sharing restrictions

In Chapter 11 we devised a formula to calculate the optimum value for the MaxClients directive when sharing is taking place. In the same section, we warned that it's very important that the system not be heavily engaged in swapping. Some systems do swap in and out every so often even if they have plenty of real memory available, and that's OK. The following discussion applies to conditions when there is hardly any free memory available.

If the system uses almost all of its real memory (including the cache), there is a danger of the parent process's memory pages being swapped out (i.e., written to a swap device). If this happens, the memory-usage reporting tools will report all those swapped-out pages as nonshared, even though in reality these pages are still shared on most OSs. Once those pages are swapped back in, the reported sharing returns to normal after a while. If a big chunk of the memory shared with child processes is swapped out, it is very likely that Apache::SizeLimit or Apache::GTopLimit will notice that the shared memory threshold has been crossed and will kill those processes as a result. If many of the parent process's pages are swapped out, and a newly created child process therefore starts out with its shared memory already below the limit, it will be killed immediately after serving a single request (assuming that the $CHECK_EVERY_N_REQUESTS variable is set to 1). This is a very bad situation that will eventually lead to a state where the system doesn't respond at all, because it is fully occupied with swapping.

This effect may be more or less severe depending on the memory manager's implementation, and it certainly varies from OS to OS and between kernel versions. Therefore, you should be aware of this potential problem and simply try to avoid situations where the system needs to swap at all, by adding more memory, reducing the number of child servers, or spreading the load across more machines (if reducing the number of child servers is not an option because of the request-rate demands).

14.1.3. Defining the Maximum Unshared Memory Size Threshold

Instead of setting the shared and total memory usage thresholds, you can set a single threshold on the amount of unshared memory, calculated by subtracting the shared memory size from the total memory size.
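
For instance, a child process with a total size of 10 MB, of which 7 MB is shared with the parent, has 3 MB of unshared memory; with a threshold of 6144 KB (6 MB) it would be left alone, whereas a child whose unshared memory had grown to 8 MB would be killed. (These numbers are purely illustrative.)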

Both modules allow you to set the thresholds in similar ways. With Apache::GTopLimit, you can set the unshared memory threshold server-wide with:

$Apache::GTopLimit::MAX_PROCESS_UNSHARED_SIZE = 6144;

and locally for a handler with:

Apache::GTopLimit->set_max_unshared_size(6144);

If you are using Apache::SizeLimit, the corresponding settings would be:

$Apache::SizeLimit::MAX_UNSHARED_SIZE = 6144;

and:

Apache::SizeLimit->setmax_unshared(6144);
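
Putting the pieces together, a minimal startup.pl sketch for the server-wide Apache::GTopLimit variant might look like this (the 6144 KB, i.e., 6 MB, threshold and the check frequency of 2 simply reuse the example values shown earlier); remember that the PerlFixupHandler Apache::GTopLimit line still needs to be present in httpd.conf for the checks to run:

use Apache::GTopLimit ();
# kill a child once more than 6 MB (6144 KB) of its memory is unshared
$Apache::GTopLimit::MAX_PROCESS_UNSHARED_SIZE = 6144;
# check the size only on every second request to reduce overhead
$Apache::GTopLimit::CHECK_EVERY_N_REQUESTS = 2;
1;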

