We tried an offline-browsing program to download our site and see whether it would hold up. With 500 concurrent threads, the program managed to request 56 pages per second, and of course the server failed: MySQL ran out of available connections and the site went down. Mod_evasive didn't block any of it.
Here is a passage I found on another site about mod_evasive:
Mod_evasive does work relatively well for small to medium sized brute force or HTTP level DoS attacks. There is, however, an important limitation that mod_evasive has that you should be aware of. The mod_evasive module is not as good as it could be because it does not use shared memory in Apache to keep information about previous requests persistent. Instead, the information is kept with each child process or thread. Other Apache children that are then spawned know nothing about abuse against one of them. When a child serves the maximum number of requests and dies, the DoS information goes with it. So, what does this mean? This means that if an attacker sends their HTTP DoS requests and they do not use HTTP Keep-Alives, then Apache will spawn a new child process for every request and it will never trigger the mod_evasive thresholds. This is not good…
Is there any solution for this type of attack when Keep-Alive is disabled?
Isn't limiting the number of connections allowed to a single host a job for the firewall?
Even the best traditional firewalls have trouble when the connection table fills up. Traditional firewalls (hardware ones included, let alone software firewalls like iptables) are designed primarily for allow/deny rules; rate limiting, if they do it at all, is done in software, which leaves them prone to exactly the kind of attack ckissi described. Thanks for running that experiment, ckissi - very useful for all of us.
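To make the "rate limiting in software" point concrete, here is a hedged sketch of per-source limits using iptables' connlimit and hashlimit matches. The thresholds (25 concurrent connections, 30 new connections per minute) are arbitrary illustrations, not recommendations from this thread, and would need tuning for a real site:

```shell
# Cap concurrent TCP connections to port 80 from any single source IP
# (threshold of 25 is illustrative only)
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit \
    --connlimit-above 25 -j DROP

# Rate-limit NEW connections per source IP: drop above 30/minute,
# allowing an initial burst of 50 (values illustrative only)
iptables -A INPUT -p tcp --syn --dport 80 -m hashlimit \
    --hashlimit-name http --hashlimit-mode srcip \
    --hashlimit-above 30/minute --hashlimit-burst 50 -j DROP
```

Note that these rules still depend on the kernel's conntrack table, so they inherit the table-exhaustion problem discussed above; they help against a single flooding source, not a large distributed attack.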
Hardware-based DDoS mitigation systems such as IntruGuard handle a single source flooding the connection table, too many sources flooding the connection table, a single source making connections too fast (or too slow) to a server, a server holding too many connections, etc. These checks and mitigations are all done in hardware, in front of your firewall in the datacenter. That ensures your firewall (if you have one - I know most of us don't) or server does not get fried during such attacks and sees a clean pipe. Mod_evasive, iptables, and LiteSpeed need to be supplemented with hardware-based DDoS mitigation for such 'real-life' attacks.
Well, use a firewall with connection tracking like CSF, and then tune your TCP parameters. Since you're on Linux, here is a good sysctl configuration for managing a DDoS, as long as you are banning the offending IPs in a timely manner:
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
kernel.shmmax = 4294967295
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15
# Decrease the time default value for tcp_keepalive_time connection
net.ipv4.tcp_keepalive_time = 1800
# Turn off the tcp_window_scaling
net.ipv4.tcp_window_scaling = 0
# Turn off the tcp_sack
net.ipv4.tcp_sack = 0
# Turn off the tcp_timestamps
net.ipv4.tcp_timestamps = 0
# Enable TCP SYN Cookie Protection
net.ipv4.tcp_syncookies = 1
# increase TCP max buffer size
net.core.rmem_max = 33554432
net.ipv4.tcp_rmem = 4096 33554432 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_wmem = 4096 33554432 33554432
net.ipv4.tcp_mem = 8388608 16777216 33554432
net.core.optmem_max = 409600
net.core.rmem_default = 2097152
net.core.wmem_default = 2097152
# Increases the size of the socket queue (effectively, q0).
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 30
net.ipv4.netfilter.ip_conntrack_max = 1048576
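These settings take effect without a reboot. Assuming the block above was saved to /etc/sysctl.conf, a quick sketch of applying it (requires root):

```shell
# Load everything from /etc/sysctl.conf
sysctl -p

# Or set a single value on the fly to test it first
sysctl -w net.ipv4.tcp_syncookies=1

# Verify what the kernel actually picked up
sysctl net.ipv4.tcp_syncookies
```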
Which sysctl prefix applies depends on which conntrack module your firewall loads - it will be either ip_conntrack or nf_conntrack.
Check dmesg: if you see "out of socket memory", raise your mem values. If the conntrack table is filling up, run:
sysctl -a | grep conntrack
sysctl -a | grep conntrack_count
Find out which counter is filling up and raise the corresponding values in sysctl. With this config you should be able to handle quite a bit. It is sized for a machine with 2-3 GB of RAM, but I've doubled some of the same values on other machines.
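As a quick diagnostic along the lines above, the count and max can also be read directly from /proc. The exact paths differ by kernel generation (ip_conntrack on older kernels, nf_conntrack on newer ones), so this sketch just probes both:

```shell
# Print conntrack count/max from whichever module the kernel loaded;
# only readable paths are shown, so output varies by kernel and config
for f in /proc/sys/net/netfilter/nf_conntrack_count \
         /proc/sys/net/netfilter/nf_conntrack_max \
         /proc/sys/net/ipv4/netfilter/ip_conntrack_count \
         /proc/sys/net/ipv4/netfilter/ip_conntrack_max; do
    [ -r "$f" ] && echo "$f: $(cat "$f")"
done
```

If the count sits close to the max during an attack, that is the table to grow (e.g. the ip_conntrack_max line in the config above).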
Another thing that would help is ditching Apache for lighttpd or LiteSpeed.