Putting a limit on Apache and PHP memory usage - Feeding the Cloud (comments)
<a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>
https://feeding.cloud.geek.nz/posts/putting-limit-on-apache-and-php-memory/
Comment 1 by gebi, 2012-02-21:
<p>Put Apache into its own cgroup and restrict CPU and memory for the whole cgroup.<br />
That way the limit is not applied per process but to all Apache and CGI processes together.</p>
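A minimal sketch of the approach this comment describes, assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory; the group name "apache" and the 512M limit are illustrative, not from the comment (requires root):

```shell
# Create a cgroup and cap the memory of everything placed in it.
mkdir /sys/fs/cgroup/memory/apache
echo 512M > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes

# Move the current shell into the group, then start Apache from it:
# every child Apache ever forks (CGI scripts included) inherits the cgroup.
echo $$ > /sys/fs/cgroup/memory/apache/tasks
apachectl start
```

Since the limit applies to the group as a whole, no single runaway CGI process can exceed it by forking.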
Comment 2 by Marek, 2012-02-21:
<p>@gebi: do cgroup limits apply across e.g. suPHP or other UID-changing/CGI-spawning children of the Apache httpd process?</p>
Comment 3 by Pierre, 2012-02-21:
<p>hi!</p>
<p>Actually the problem here is that Debian builds PHP against the system's libgd, which makes GD use the system allocator instead of PHP's, whereas the bundled GD relies on PHP's memory API. That is what allows the bundled GD to work within the memory_limit setting.</p>
<p>That said, it is still useful to set a limit externally, since ImageMagick or any other library that allocates memory on its own can cause this problem as well.</p>
<p>Cheers!</p>
Comment 4 by Anonymous, 2012-02-23:
<p>Hi!</p>
<p>Instead of trying to tweak Apache, you could use sbox-dtc, which is available in sid and which I maintain. With it, you can execute any scripting language, be it PHP, Python, Ruby or Perl, set memory-usage limits for your scripts, and on top of that execute them in a chroot. It works pretty well! <img alt=":)" src="https://feeding.cloud.geek.nz/smileys/smile.png" /></p>
Comment 5 by Anonymous, 2012-02-25:
<p>At the company I work for, in cloud services, we use mod_perl with Apache 1.3.37 as a backend (nginx as the front-end server) for the application. The mod_perl processes can grow depending on the client's usage pattern, so we set a limit on the number of requests an instance can serve before it is killed and a fresh one is spawned.</p>
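The recycling scheme this comment describes maps onto a stock Apache directive; a minimal httpd.conf fragment (the value 500 is illustrative, not from the comment):

```
# Kill each child after serving this many requests, so any leaked
# memory is reclaimed when a fresh process is spawned.
MaxRequestsPerChild 500
```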
Comment 6 by Anonymous, 2012-02-28:
<p>In addition to any memory-limiting mechanism you put in place, you should set a limit on image resolutions. You don't want a 1-gigapixel image, do you?</p>
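One way to sketch such a resolution check, assuming ImageMagick's identify is available (it is mentioned elsewhere in this thread); the 25-megapixel ceiling and the function name are illustrative:

```shell
# Reject images over a pixel-count ceiling before decoding them fully.
MAX_PIXELS=25000000

too_large() {
    # too_large WIDTH HEIGHT -> exit status 0 if the image is over the limit
    [ $(($1 * $2)) -gt "$MAX_PIXELS" ]
}

# Usage with a real file (requires ImageMagick):
#   set -- $(identify -format '%w %h' upload.jpg)
#   too_large "$1" "$2" && { echo "image too large" >&2; exit 1; }
```

Checking dimensions from the header is cheap; decoding a gigapixel image is what blows the memory limit.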
Comment 7 by gebi, 2012-02-28:
<p>@Marek:<br />
Yes: as soon as a process is in a cgroup, every child or process it starts also ends up in the same cgroup, with no race possible (this is enforced by the kernel).</p>
<p>Thus even a double fork, or any other child ever started by Apache, lands in the same cgroup where you put your init script at the beginning,<br />
e.g. with echo $$ > /sys/fs/cgroup/apache/tasks</p>
<p>On stopping Apache you just<br />
kill -9 all tasks listed in /sys/fs/cgroup/apache/tasks, and there will <em>never</em> be any dangling CGI or other processes.</p>
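The stop step described above can be sketched as follows; the cgroup path matches the one used earlier in this comment and is assumed to exist (requires root):

```shell
# Kill every task still in the cgroup, then remove the (now empty) group.
while read -r pid; do
    kill -9 "$pid"
done < /sys/fs/cgroup/apache/tasks
rmdir /sys/fs/cgroup/apache
```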
<p>You can easily restrict not only CPU and memory but also devices.</p>
<p>Hint: the cgroup-bin package is IMHO <em>completely</em> broken (and by design, at that), but that's another story.<br />
cgroups can also be managed with simple mkdir/echo/rmdir/...</p>
<p>It seems my first answer got lost somehow...</p>