I'm setting CPU time limits for any scripts running on Apache, using
"RLimitCPU" for Perl and "php_value max_execution_time" for PHP. By default, RLimitCPU is not set and PHP's limit is 30 seconds.
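For reference, the two directives above look something like this in the Apache config (the 30-second values here are just placeholders, not a recommendation):

```apache
# Limit CPU time for processes launched by Apache (e.g. CGI/Perl scripts).
# Syntax: RLimitCPU <soft-limit> <hard-limit>, in CPU seconds.
RLimitCPU 30 30

# Limit PHP (mod_php) script execution time, in seconds.
php_value max_execution_time 30
```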
I currently set these values to 1 second to prevent nasty users from eating up valuable resources, e.g. by sending spam from their own scripts or running poorly written ones. All users are running their scripts fine within this 1-second limit, because most scripts take no longer than 0.1-1 second, but I'm afraid some clients may sometimes run scripts that take longer, like mailing their users, which I need to support as well. Can anybody suggest how much CPU time I should really allow?
If I set this value too HIGH, some users may hog the server's resources at any time. But if I set it too LOW, some clients who want to do specific things on the webserver may get stuck.
From a great deal of experience dealing with such limitations at various hosting providers, I would recommend an absolute minimum of around 15-20 seconds. Anything less and you might just as well say that your hosting plans cannot support CGI scripts any more complex than form-to-email.
The issue isn't necessarily how long the script runs for, but how long it sustains a high CPU percentage. A script may be doing disk I/O, or waiting for another daemon to respond (e.g. your SMTP/POP3 server), neither of which consumes CPU time.
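To illustrate the difference between CPU time and wall-clock time, here's a small hypothetical Python sketch (Unix only): under a 1-CPU-second limit, a process that sleeps for 2 wall-clock seconds survives, while a busy loop under the same limit gets killed by the kernel.

```python
import subprocess
import sys
import textwrap

# Child that sets a 1-second CPU limit, then sleeps for 2 wall-clock seconds.
# Sleeping consumes almost no CPU time, so the limit never triggers.
sleeper = textwrap.dedent("""
    import resource, time
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))
    time.sleep(2)          # wall-clock wait, ~0 CPU seconds
    print("sleep survived")
""")

# Child that sets the same limit but busy-loops, burning real CPU time;
# the kernel terminates it once 1 CPU-second has been consumed.
spinner = textwrap.dedent("""
    import resource
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))
    while True:
        pass
""")

r1 = subprocess.run([sys.executable, "-c", sleeper],
                    capture_output=True, text=True)
r2 = subprocess.run([sys.executable, "-c", spinner],
                    capture_output=True, text=True)
print(r1.stdout.strip())                      # the sleeper finishes normally
print("spinner killed:", r2.returncode != 0)  # the spinner does not
```

This is why RLimitCPU is less brutal than a wall-clock timeout: a script blocked on your mail server isn't accumulating CPU seconds.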
A simple example is a search script that searches someone's site. Depending on how fast your disks are, how much memory you have available, the actual speed of your processor, and of course how many pages need to be searched, a setting of anything less than, say, 10 seconds could make such scripts unusable.
I think that if you're having to restrict users this arbitrarily, then you most likely have a resource problem and need to expand the server.
I do think it's a good idea to have some kind of limit to kill off looping scripts, but it's your job as a system administrator to understand what is happening on your server and to know where the bottlenecks are. If your server is overloaded and you need to throttle CPU execution time, then it would seem that you need to upgrade your server or get a new one. You should be working with your customers if they have scripts that take a long time to execute, not just killing off their scripts because you want a shortcut solution.
Of course, you may be a "stack 'em high, sell 'em low" provider, but in that case, I think you should make it very clear that you do kill off scripts indiscriminately, and what the maximum execution time is, before customers sign up with you.
One last thought: do think carefully about just how indiscriminate this is. Killing off a process arbitrarily can very easily corrupt a production database, and if you do that without first working with your customer, they will leave you immediately for someone more reasonable and will happily sully your name.
Any user can upload a new script at any time, and if that script happens to overload the server, the machine will slow down. Other clients who feel this interruption will ask about it and blame us as the provider, and keep their bad impression. Although the server has no resource problem, a poorly written script can push the load from 0.xx up to 2.0-5.0.
The problem is, I as sysadmin must leave my routine jobs for many hours to find out whose script is hogging the server, and it takes a few days to contact and work with the script's owner, who has no idea what happened and doesn't know how to fix it until I DISCRIMINATELY place an .htaccess that limits CPU time to 1 second in his directory; then he knows what the problem is and fixes his script to fit the limit. This workaround may take even a knowledgeable sysadmin a week to complete.
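That per-directory workaround is only a couple of lines in the offending user's .htaccess, assuming AllowOverride permits these directives there (the 1-second values match the limit described above):

```apache
# Dropped into one user's directory to throttle only his scripts.
RLimitCPU 1 1
php_value max_execution_time 1
```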
What if I set the CPU time limit to a low value (to prevent accidental abuse) and let clients adjust it through their control panel?
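One way that idea could be sketched: the control panel accepts a requested limit but clamps it between a floor and a ceiling before writing the per-directory .htaccess lines. This is a hypothetical Python helper; the 30-second ceiling is an assumption, not a value from this thread.

```python
# Hypothetical control-panel helper: clamp a user-requested CPU limit
# and emit the matching per-directory .htaccess lines.
def htaccess_for_limit(requested_seconds: int, ceiling: int = 30) -> str:
    """Return .htaccess lines for a CPU limit clamped to [1, ceiling]."""
    limit = max(1, min(requested_seconds, ceiling))  # assumed bounds
    return (
        f"RLimitCPU {limit} {limit}\n"
        f"php_value max_execution_time {limit}\n"
    )

print(htaccess_for_limit(120))  # a greedy request gets capped at the ceiling
print(htaccess_for_limit(5))    # a reasonable request passes through
```

The point of the clamp is that clients can raise their own limit for legitimate long jobs (like mailing runs), but never above a server-wide cap you control.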