A little while ago, we ran into memory problems on mahara.org. It turned out to be due to the GD library having issues with large (as in height and width, not file size) images.

What we discovered is that the PHP memory limit (which is set to a fairly low value) only applies to memory allocated by PHP code itself, not to C libraries like GD that are called from PHP. It's not obvious which PHP functions are implemented as calls into external C libraries, and therefore fall outside the interpreter's control, but anything that sounds like it's wrapping some other library is probably not pure PHP and is worth looking at.

To put a cap on the memory usage of Apache, we set process limits for the main Apache process and all of its children using ulimit.

Unfortunately, the limit we really wanted to change (resident memory or "-m") isn't implemented in the Linux kernel. So what we settled on was to limit the total virtual memory that an Apache process (or sub-process) can consume using "ulimit -v".

On a Debian box, this can be done by adding this to the bottom of /etc/default/apache2:

ulimit -v 1048576

for a limit of 1GB of virtual memory.

You can verify that it works by first setting the limit to a very low value, then loading one of your PHP pages and watching it die with some kind of malloc error.
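The same effect can be reproduced outside Apache. Here is a minimal sketch (using a Python one-liner as a stand-in for a memory-hungry PHP page; the 512 MB and 1 GB figures are illustrative) that runs a large allocation under a tight "ulimit -v" in a subshell:

```shell
# Run a large allocation under a 512 MB virtual-memory cap.
# The subshell keeps the ulimit from affecting your login shell.
(
  ulimit -v 524288   # ulimit -v takes kilobytes: 524288 KB = 512 MB
  # Stand-in for a PHP page handling a huge image: try to allocate 1 GB.
  python3 -c 'bytearray(1024 * 1024 * 1024)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation failed"
)
```

With the cap in place the allocation should fail; remove the ulimit line and the same command normally succeeds.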

I'm curious to know what other people do to prevent runaway Apache processes.

Putting apache into its own cgroup and restricting CPU and memory for the whole cgroup.
That way the limit is applied not per process but to all apache and CGI processes together.

Comment by gebi
@gebi: do cgroups' limits apply across e.g. suPHP or other UID-changing/CGI-spawning children of the apache httpd process?
Comment by Marek


Actually the problem here is that Debian uses the system's libgd, which makes GD use the system allocator instead of PHP's. The bundled GD, on the other hand, relies on PHP's memory API, which allows it to work within the memory_limit setting.

That being said, it's still useful to do something outside PHP, as ImageMagick or any other library allocating memory may cause this problem as well.


Comment by Pierre


Instead of trying to tweak Apache, what you could do is use sbox-dtc, which is available in sid, and which I maintain. With it, you can execute any scripting language, be it PHP, Python, Ruby or Perl, set limits on the memory usage of your scripts, and, on top of that, execute them in a chroot. It works pretty well! :)

Comment by Anonymous
At the company I work for, in cloud services, we use mod_perl with Apache 1.3.37 as a backend (nginx as the front-end server) for the application. The mod_perl processes can grow depending on the client's usage pattern, so we set a limit on the number of requests an instance can serve before it is killed and a fresh one is spawned.
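For a stock Apache setup, the per-instance request cap this commenter describes is what Apache's MaxRequestsPerChild directive does (mod_perl setups often pair it with Apache::SizeLimit to also recycle children on memory growth). A hypothetical httpd.conf fragment, with 500 as an illustrative value:

```apache
# Recycle each child process after 500 requests so slow leaks can't accumulate.
MaxRequestsPerChild 500
```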
Comment by Anonymous
Additional to any memory limiting mechanism you put in place, you should set a limit on image resolutions. You don't want a 1 gigapixel image, do you?
Comment by Anonymous

Yes, sure: as soon as a process is in a cgroup, every child or process it starts also ends up in the same cgroup, with no race possible (enforced by the kernel).

Thus even a double fork, or any other child ever started by apache, is put into the same cgroup where you put your init script in the beginning,
e.g. with echo $$ > /sys/fs/cgroup/apache/tasks

On stop of apache you just
kill -9 all tasks listed in /sys/fs/cgroup/apache/tasks, and there will never be any dangling CGI or other processes.
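A sketch of that shutdown step, assuming the task list lives at the path named above (cgroup v1, requires root):

```shell
# Kill every process still tracked in the apache cgroup.
# Re-read the tasks file until it is empty, in case a kill races
# with a child that was forked moments earlier.
while [ -s /sys/fs/cgroup/apache/tasks ]; do
  while read -r pid; do
    kill -9 "$pid" 2>/dev/null
  done < /sys/fs/cgroup/apache/tasks
done
```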

You can easily restrict not only CPU and memory but also devices.

Hint: the cgroup-bin tools are IMHO completely broken (and that by design), but that's another story.
cgroups can also be used with simple mkdir/echo/rmdir/...
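A minimal sketch of that mkdir/echo workflow with the cgroup v1 memory controller (the paths and the 1 GB figure are illustrative; requires root and a mounted cgroupfs):

```shell
# Create a cgroup for Apache and cap its total memory at 1 GB.
mkdir -p /sys/fs/cgroup/memory/apache
echo $((1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes

# Put the current shell (e.g. the init script) into the cgroup;
# everything it starts, apache2 and all CGI children, inherits it.
echo $$ > /sys/fs/cgroup/memory/apache/tasks
/etc/init.d/apache2 start

# Teardown later is just as simple:
#   rmdir /sys/fs/cgroup/memory/apache   (once the tasks file is empty)
```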

seems my first answer got lost somehow...

Comment by gebi