[Hcoop-discuss] CGI/PHP script security

ntk at hcoop.net ntk at hcoop.net
Sat Dec 17 08:34:47 EST 2005


>>>>>> On 2005-12-16 07:57 PST, Adam Chlipala writes:
>
>     Adam> Adam Chlipala wrote:
>     >> I'm curious: is anyone else serving/retrieving large files
>     >> through our Apache?  10 seconds already makes it pretty
>     >> trivial for a buggy infinite-looping script to take down
>     >> Apache.
>
> You can use something such as 'and' (auto nice daemon) which
> allows more complex criteria for killing (and nicing) processes,
> such as using x% CPU or x seconds of *CPU time* (rather than
> wallclock time).

As I understand it, and as discussed before, the problem is not a script
hogging CPU time; it's someone, maliciously or unintentionally, starting
a few dozen of these scripts (as simply as opening a link in many
windows, or iterating "wget"s) and using up Apache processes.  Since
there is a finite and relatively small upper limit on the number of
Apache processes, once they are all in use, every further web request is
denied until existing processes finish (in this case, up to 10 seconds
later).
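To make the numbers concrete (the worker-pool size below is an
illustrative assumption, not our actual setting), the request rate
needed to keep every Apache slot busy is just the pool size divided by
the per-request runtime cap:

```shell
# Back-of-the-envelope pool-saturation rate (Little's law:
# concurrency = arrival rate * service time).  Both values here are
# illustrative assumptions.
MAX_CLIENTS=150   # hypothetical Apache MaxClients setting
RUNTIME_CAP=10    # seconds a looping script is allowed to run

echo "$((MAX_CLIENTS / RUNTIME_CAP)) requests/second keep every slot busy"
```

Under those assumed numbers, a single machine iterating wget at a
modest 15 requests per second could hold the whole pool indefinitely.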

So nicing doesn't solve the problem here.  Any large runtime limit for
scripts makes it simple for someone to starve out Apache, and the larger
the limit, the more likely this is to happen accidentally.  (Increasing
the maximum number of Apache processes is not a solution either: beyond
some point the processes start swapping out, which effectively causes a
DoS as well.)
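For reference, the knobs involved look roughly like this in Apache
1.3-era configuration (the values are illustrative, not what we
actually run):

```apache
# Hard cap on simultaneous child processes; once all are busy,
# further connections queue up or are refused.
MaxClients 150

# Per-request CPU-time limit (soft, then hard) imposed on forked CGI
# children.  Note this bounds CPU seconds, not wall-clock time, so it
# does not stop a script that sleeps or blocks on I/O.
RLimitCPU 10 10
```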

> As for the server being slow/unresponsive, every time it's
> unresponsive for me, it turns out it's running the backup job.  Is
> DMA enabled?  I notice that hdparm is installed but
> /etc/hdparm.conf doesn't contain any stanzas to enable DMA.

This was discussed before as well and is a problem in need of solving.
The backup jobs are already niced, and I don't think that's doing the
trick, because the problem here is disk latency, not CPU.  Our backup
rsync isn't using much CPU, but it is taxing the disk bandwidth.  Apache
requests for pages that aren't cached in core are then probably starved
for a noticeable time while their reads wait to be scheduled among
writes to the backup partition.  I'm not sure how to solve this:
backups already take a long time, I don't know of a simple way to
throttle a process's disk access, and any throttling would make backups
take proportionately longer than they do now.  Still, I think we might
see big gains in responsiveness if we could somehow hold the backup's
disk usage to even 75 or 80% of capacity.
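One option worth testing is rsync's --bwlimit flag, which caps transfer
bandwidth in KB/s and so indirectly limits how hard the backup hits the
disk.  A sketch (the paths and the assumed ~40 MB/s sustained disk
throughput are hypothetical, not measurements of our machine):

```shell
# Throttle the backup rsync to ~75% of assumed disk throughput,
# leaving headroom for Apache's reads.  The command is printed rather
# than run, since the paths here are placeholders.
DISK_KBPS=40000                        # assumed sustained throughput, KB/s
LIMIT_KBPS=$((DISK_KBPS * 75 / 100))   # 30000 KB/s

echo nice -n 19 rsync -a --bwlimit="$LIMIT_KBPS" /home/ /backup/home/
```

On kernels with the CFQ I/O scheduler, per-process I/O priorities (the
ionice utility) might be another angle, though I haven't tried it here.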

-ntk





More information about the HCoop-Discuss mailing list