Not sure it will work with your particular OS/distro (you didn't post any server specs). From their project description:
PRM monitors the process table on a given system and matches process id's with set resource limits in the config file or per-process based rules. Process id's that match or exceed the set limits are logged and killed; includes e-mail alerts, kernel logging routine and more...
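The approach that description outlines — scan the process table, compare each process against configured limits, then log and kill offenders — can be sketched roughly like this (Linux-only, since it reads /proc; the limit value is hypothetical, and this is not PRM's actual code):

```python
# Rough sketch of PRM's approach: walk /proc, compare each process's
# accumulated CPU time against a configured limit, and report offenders.
import os

CPU_LIMIT_SECONDS = 60  # hypothetical limit, like a rule in PRM's config


def cpu_seconds(pid):
    """Total CPU time (user + system) a process has consumed, from /proc."""
    with open(f"/proc/{pid}/stat") as f:
        # The comm field can contain spaces, so split after its closing ")".
        fields = f.read().rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])  # in clock ticks
    return (utime + stime) / os.sysconf("SC_CLK_TCK")


def offenders(limit=CPU_LIMIT_SECONDS):
    over = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            if cpu_seconds(int(entry)) > limit:
                over.append(int(entry))  # PRM would log, alert, and kill here
        except (FileNotFoundError, ProcessLookupError):
            pass  # process exited while we were scanning
    return over


print(offenders())
```

The real tool adds per-process rules, e-mail alerts, and the actual kill; this only shows the monitoring loop at its core.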
MediaLayer, LLC - www.medialayer.com
Learn how we can make your website load faster, translating to better conversion rates for your business!
The pioneers of optimized web hosting, featuring LiteSpeed Web Server & SSD Storage - Celebrating 10 Years in Business
Not contradicting elix, but I believe doing this with TimeOut or PRM would be hard unless you get the help of a pro, as they wouldn't be that effective for what you are asking. Still, you can try them out.
And you should've posted more details.
Articles, Blog Posts, Content Development and Copy Writing
Skype me or please send a PM for contact information
Agreed that PRM (I posted that one) is probably *not* exactly what he needs; its form of limiting a process is to kill it once it goes above the threshold you set. But since it isn't readily apparent that Apache has anything built in (correct me if I'm wrong), and if he wants to use alternatives (entirely up to any admin to make an informed choice) rather than do nothing, I threw this up for his further investigation.
He could also (depending on Apache version) look into mod_bandwidth or mod_throttle (Apache 1.3.x), or bw_mod (Apache 2.x), etc. All of these are also probably not exactly what he wants.
Look at RLimitNPROC, RLimitMEM, and RLimitCPU. They are Apache configuration directives. You need to be running suExec to use them.
They set resource limits on customer CGI code; that way, a buggy Perl script won't spike your machine. I used them for a while until moving on to a custom-code "fix" that was more configurable.
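For reference, here's a hypothetical vhost sketch (the hostname and all the values are made up; each directive takes a soft value and an optional hard value):

```apache
<VirtualHost *:80>
    ServerName example.com
    # 30 CPU-seconds soft limit, 60 hard
    RLimitCPU 30 60
    # ~64 MB soft, ~128 MB hard (values are in bytes)
    RLimitMEM 67108864 134217728
    # at most 10 processes soft, 20 hard
    RLimitNPROC 10 20
</VirtualHost>
```

These limits apply to the processes Apache forks (the suExec'd CGI children), not to the httpd workers themselves.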
Note that RLimitMEM can be a PITA if you have FrontPage users. When you "recalculate links" or load big sites, the shtml.exe application keeps a lot of information in memory on the server. Resource limits can squash that process (I've seen it happen a lot).
Perhaps that'll help.
Edit: The configuration is in CPU time, not percentage. Each forked process can only use up X CPU-seconds before getting the stick.
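Under the hood this is the OS's setrlimit() mechanism: the parent sets the limit before exec'ing the CGI, and the kernel signals the process once it burns through its soft CPU allowance. A minimal Python sketch (the 30-second value is hypothetical, like "RLimitCPU 30"):

```python
# Sketch of the mechanism behind RLimitCPU: set a CPU-seconds limit on
# the current process via setrlimit(). Once the soft limit is exhausted,
# the kernel sends SIGXCPU; at the hard limit, SIGKILL.
import resource

# Keep the existing hard limit; only lower the soft limit to 30 seconds.
_, hard = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (30, hard))

print(resource.getrlimit(resource.RLIMIT_CPU)[0])  # prints 30
```

Apache does the equivalent in the forked child just before exec'ing the script, which is why the limit is per-process, not per-user or per-site.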
It won't work with mod_(pick one). The reason is that your httpd processes always run as the webserver user, so you'd effectively be placing the same single restriction on all of your customers at once.
If you want to use this, you need to be running your Perl apps as CGI scripts, and use suExec to ensure there is a UID change for each CGI call. There's a harder hit on the system for each CGI request; offset it with a bit more memory if it becomes a problem.
It won't work with mod_php, mod_python, mod_perl... the modules that bring scripting capabilities directly into the httpd address space. You don't lose that functionality, though, as you can still use those languages in a CGI environment.