Thread: Nginx And DDOS Protection ?
-
10-20-2014, 11:37 AM #1 Web Hosting Master
- Join Date
- Mar 2009
- Posts
- 3,700
Nginx And DDOS Protection ?
Do you have any ideas about how to use nginx for DDoS protection?
I found this article: http://syslint.com/syslint/nginx-and...#comment-13358
Do you have any thoughts about it?
I use nginxcp with the default values on my CloudLinux/cPanel server,
and a certain site got an HTTP attack and the server load went over 100.
I'm just wondering how to tune my server for better performance.
I saw an article saying that HTTP DDoS relates to the kernel's open-file limit?
Thanks
-
10-20-2014, 11:57 AM #2 Newbie
- Join Date
- Oct 2014
- Location
- Pakistan
- Posts
- 19
It depends on the volume of the DDoS attack. nginx is a good option for serving static content, but I don't consider it a DDoS solution in itself. It does work for an ordinary attack, since it serves pages from cache.
-
10-20-2014, 12:54 PM #3
I assume you are talking about layer 7 attacks? Not so much volumetric attacks, but slow HTTP attacks or similar?
There are different things you can do in nginx to help ease the pain on the backend servers.
As you know, HTTP attacks vary a lot, so you need to read up on nginx configuration options and tweak them to reach your goals; I consider nginx a rough diamond.
For example
reset_timedout_connection on;
That will help disconnect clients that have not responded in a timely manner, which is typical of a DDoS attack: some bots will simply open an HTTP connection and never complete the exchange, leaving the TCP connection to sit until the web server's default timeout, which is usually pretty long.
The following settings need to be tweaked carefully, and I personally recommend setting them only while under attack.
First, create a new zone, for example antiddos.
Then apply that zone to a specific server and rate-limit the requests per second to something that makes sense for your website. For example, the line below limits each client to 50 requests per second; anything above that gets rejected:
limit_req_zone $binary_remote_addr zone=antiddos:10m rate=50r/s;
To apply that zone to a server { } you add:
limit_req zone=antiddos burst=10 nodelay;
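Put together, a minimal sketch of how those two pieces fit (the server_name here is only a placeholder; the antiddos zone name and 50r/s rate are from the lines above):

```nginx
http {
    # Track clients by IP in a 10 MB shared zone, allowing 50 requests/second each
    limit_req_zone $binary_remote_addr zone=antiddos:10m rate=50r/s;

    server {
        listen 80;
        server_name example.com;  # placeholder

        # Reject requests above the rate immediately (no queueing beyond the burst)
        limit_req zone=antiddos burst=10 nodelay;
    }
}
```

By default requests over the limit get a 503; the limit_req_status directive (nginx 1.3.15+) lets you change that.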
Like I said, read a little more on limit_req_zone; there are other features that can be a great help against certain DDoS attacks. This is not an exact science, so do your homework and don't apply a pre-cooked recipe to your nginx: not all environments are equal, and not all websites behind it are equal.
All in all, nginx can be AWESOME at helping with certain DDoS attacks if you know where you need to go.
Cheers
RACKNATION: Costa Rica DDOS Protected Servers & Colocation - [ https://www.racknation.cr ]
-
10-20-2014, 01:26 PM #4 Web Hosting Evangelist
- Join Date
- Nov 2009
- Location
- Riga, Latvia
- Posts
- 473
nginx is used by many renowned DDoS mitigation services, and it can actually do wonders (especially when used together with Varnish). I'd recommend hiring a specialist to configure it, though.
SERVERIA.COM: top secret servers Fully managed confidential dedicated Linux & Windows servers.
SERVERADE.NET: server management PROs Request a quote for your server now!
SECRETGSM.COM: anonymous SIM cards Anonymous prepaid calling cards & more.
-
10-20-2014, 01:30 PM #5 Digital Marketing Strategist
- Join Date
- Dec 2011
- Location
- Germany
- Posts
- 1,180
If the attack has any specific patterns, such as unique user agents, referrers or URIs, you could block those by returning a 444 to them. Other than that, I'd also suggest taking the steps that @racknationcr suggested, although you should add connection limiting as well, not just request limiting.
Also, the NGINX fork Tengine has some advanced request limiting that can, for example, block hosts only if they send requests to the same URI too often. The "normal" limit_req module limits all requests, which can cause issues with pages that legitimately need more than 100 requests per page load (many images, external scripts and so on), so limiting requests per URI makes much more sense than limiting requests per host.
Another option would of course be to use an external anti-DDoS provider for the affected client to save you all this hassle, but it seems that's not what you're looking for. So go ahead, try the suggestions and let us know if they work for you. Log samples would also help.
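A minimal sketch of the connection limiting mentioned above (the zone name and the limit of 20 concurrent connections per IP are illustrative values, not from this thread):

```nginx
http {
    # Track concurrent connections per client IP in a 10 MB shared zone
    limit_conn_zone $binary_remote_addr zone=connlimit:10m;

    server {
        # Allow at most 20 simultaneous connections from any single IP
        limit_conn connlimit 20;
    }
}
```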
➤ Inbound Marketing & real SEO for web hosting providers
✎ Get in touch with me: co<at>infinitnet.de
-
10-20-2014, 09:51 PM #6 Web Hosting Master
- Join Date
- Mar 2009
- Posts
- 3,700
-
10-21-2014, 04:38 AM #7 Digital Marketing Strategist
- Join Date
- Dec 2011
- Location
- Germany
- Posts
- 1,180
-> http://bit.ly/122xz9S
And then just use a 444 status instead of the 403 status or whatever the examples suggest. The 444 status is an NGINX-specific status code that closes the connection instantly without sending any response, not even an error page.
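For example, a minimal sketch of that kind of pattern blocking (the user agent and URI patterns here are placeholders; use whatever shows up in your own logs):

```nginx
server {
    # Drop requests whose user agent matches an attack pattern (case-insensitive)
    if ($http_user_agent ~* (wordpress|masscan)) {
        return 444;  # close the connection, send nothing back
    }

    # Drop requests to a URI the attack is hammering
    location = /xmlrpc.php {
        return 444;
    }
}
```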
-
10-21-2014, 05:12 AM #8 Web Hosting Master
- Join Date
- Mar 2009
- Posts
- 3,700
With nginxcp's default config content:
user nobody;
# no need for more workers in the proxy mode
worker_processes auto;
error_log /var/log/nginx/error.log warn;
worker_rlimit_nofile 20480;
events {
worker_connections 5120; # increase for busier servers
use epoll; # you should use epoll here for Linux kernels 2.6.x
}
http {
server_name_in_redirect off;
server_names_hash_max_size 10240;
server_names_hash_bucket_size 1024;
include mime.types;
default_type application/octet-stream;
server_tokens off;
# remove/commentout disable_symlinks if_not_owner;if you get Permission denied error
# disable_symlinks if_not_owner;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 5;
gzip on;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
gzip_proxied any;
gzip_http_version 1.0;
gzip_min_length 1000;
gzip_comp_level 6;
gzip_buffers 16 8k;
# You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
gzip_types text/plain text/xml text/css application/x-javascript application/xml application/javascript application/xml+rss text/javascript application/atom+xml;
ignore_invalid_headers on;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
reset_timedout_connection on;
connection_pool_size 256;
client_header_buffer_size 256k;
large_client_header_buffers 4 256k;
client_max_body_size 200M;
client_body_buffer_size 128k;
request_pool_size 32k;
output_buffers 4 32k;
postpone_output 1460;
proxy_temp_path /tmp/nginx_proxy/;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:15m inactive=24h max_size=500m;
client_body_in_file_only on;
log_format bytes_log "$msec $bytes_sent .";
log_format custom_microcache '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" nocache:$no_cache';
include "/etc/nginx/vhosts/*";
}
I added some lines as follows; is it correct? Thanks
[same default config as above, with the following lines added:]
if ($http_user_agent ~ (wordpress) ) {
return 444;
}
-
10-21-2014, 05:20 AM #9 Digital Marketing Strategist
- Join Date
- Dec 2011
- Location
- Germany
- Posts
- 1,180
Yes, that's almost correct, although I'd put it in the vhost's "server {}" block; it should work like that as well. Also, the WordPress user agent contains "WordPress", not "wordpress". A case-insensitive "if" statement is fine, but your example is case-sensitive, so it wouldn't match "WordPress". Use one of these instead:
if ($http_user_agent ~* (wordpress) ) {
return 444;
}
if ($http_user_agent ~ (WordPress) ) {
return 444;
}
-
10-21-2014, 05:39 AM #10 Web Hosting Master
- Join Date
- Mar 2009
- Posts
- 3,700
-
10-21-2014, 05:48 AM #11 Digital Marketing Strategist
- Join Date
- Dec 2011
- Location
- Germany
- Posts
- 1,180
-
10-21-2014, 07:37 AM #12 Newbie
- Join Date
- May 2014
- Posts
- 6
The method above works wonders for WordPress pingback attacks, but for more generic protection on top of it I suggest you look at limit_conn_zone and limit_req_zone.
Also:
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
These timeouts are excessive and are making the effects of the attack worse; I suggest reducing them to 30 seconds (30s) or even less.
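For example (30s is the suggested value from above; tune it to your own traffic):

```nginx
http {
    # Cut slow clients off much sooner than the 3-minute defaults quoted above
    client_header_timeout 30s;
    client_body_timeout 30s;
    send_timeout 30s;
}
```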
-
10-21-2014, 08:07 AM #13 Web Hosting Master
- Join Date
- Mar 2009
- Posts
- 3,700
OK, I will look into how to set up a catch-all vhost.
But comparing a catch-all vhost against my way of editing nginxcp's default config directly,
will they differ much, for example in performance or stability?
I checked the Apache log, and I'm not getting the attack right now,
so I can't verify whether it works well yet.
Since we can use "/etc/init.d/httpd status" to check who is connected to Apache,
I wonder if nginx has a similar way to monitor connections?
Thanks
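(For what it's worth, nginx ships a stub_status module that gives a rough equivalent; a minimal sketch, where the location name and allowed IP are placeholders:)

```nginx
server {
    location = /nginx_status {
        stub_status on;    # reports active/reading/writing connection counters
        allow 127.0.0.1;   # only allow local queries
        deny all;
    }
}
```

Querying it locally, e.g. with curl http://127.0.0.1/nginx_status, prints the active connection count plus accepted/handled/request totals.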
-
10-21-2014, 11:36 AM #14 Web Hosting Master
- Join Date
- Mar 2009
- Posts
- 3,700
I tried adding the following within the http {} block, but it has no effect. I'm not sure what's wrong? Thanks
server {
if ($http_user_agent ~* (Mozilla) ) {
return 444;
}
}
-
10-22-2014, 01:20 PM #15 Junior Guru Wannabe
- Join Date
- Oct 2010
- Posts
- 58
Take a look at NAXSI for application-level protection
https://github.com/nbs-system/naxsi