We've seen an increase in malicious web bot activity targeting sites running WordPress and a few other popular CMSes. Normally this isn't a problem, but some sites get pounded hard enough that the bad-bot requests consume significant server resources.
For the larger and longer-running attacks we've been implementing various ad hoc countermeasures on the servers, such as detecting and blocking bad IP addresses, bad query patterns, and specific user-agent patterns.
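To make the detection side concrete, here is a minimal sketch of the kind of log analysis involved. The log path, the `wp-login.php` pattern, and the threshold are illustrative assumptions, and the field position assumes Apache's combined log format:

```shell
#!/bin/sh
# Hypothetical detection sketch: list client IPs that have requested
# wp-login.php at least THRESHOLD times in a combined-format access log.
# $1 (client IP) and $7 (request path) are the combined-log field positions.
THRESHOLD=${THRESHOLD:-100}

suspect_ips() {
  # $1 = path to the access log file
  awk -v t="$THRESHOLD" '$7 ~ /wp-login\.php/ { hits[$1]++ }
      END { for (ip in hits) if (hits[ip] >= t) print ip }' "$1"
}
```

The output can then be fed into whatever blocking mechanism you use (firewall rules, a deny list, etc.).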
I'm trying to take a step back and come up with a more comprehensive, long-term approach to malicious web requests. Input from anyone responsible for administering a large number of these sites, or high-traffic sites, would be appreciated. Specifically, I'm looking for information such as:
- Are there any reputable centralized databases of IPs that are malicious yet safe to block (similar to SpamCop/Spamhaus)?
- Are there any companies or organizations that publish malicious attack patterns with a very low chance of false positives? Has anyone used the Trustwave ModSecurity rules, Sourcefire/Snort VRT rules, or similar services for detecting malicious web traffic?
mod_security with a good ruleset is definitely the way to go. There are free rulesets available that are updated fairly often and are just as good as the ones you would pay for, if not better.
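For a sense of what these rulesets contain, here is a hypothetical standalone rule of the kind they ship; the rule id, status code, and `bad-agents.txt` file are placeholders, not part of any real ruleset:

```apache
# Hypothetical example: deny requests whose User-Agent matches a pattern
# listed in bad-agents.txt (one pattern per line).
SecRuleEngine On
SecRule REQUEST_HEADERS:User-Agent "@pmFromFile bad-agents.txt" \
    "id:900100,phase:1,deny,status:403,log,msg:'Blocked bad user-agent'"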
Anything beyond that would require an IPS of some sort, which means additional equipment.
Some web hosts actually include IPS protection, so if moving is an option you might want to consider it. It's possible you've outgrown your current host.
If you don't want to go either route, then iptables blocking is the way to go, tedious as it is.
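A sketch of how that blocking might be scripted, to take some of the tedium out of it. The blocklist file format, the `INPUT` chain, and the `DRY_RUN` switch are all assumptions for illustration:

```shell
#!/bin/sh
# Sketch: turn a file of bad IPs (one per line, '#' comments and blank
# lines ignored) into iptables DROP rules. With DRY_RUN=1 (the default
# here, as a safety measure) the commands are only printed, not run.
block_ips() {
  # $1 = path to the blocklist file
  while read -r ip; do
    case "$ip" in ''|'#'*) continue ;; esac   # skip blanks and comments
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "iptables -A INPUT -s $ip -j DROP"
    else
      iptables -A INPUT -s "$ip" -j DROP
    fi
  done < "$1"
}
```

For anything beyond a handful of addresses, an `ipset` set referenced by a single iptables rule scales much better than one rule per IP.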