Preface
It is well known that DDoS is so common that launching one is often regarded as an entry-level skill in hacker circles, and so destructive that a single campaign can knock a whole corner of the network offline.
DDoS attacks are distributed and target either bandwidth or services; in other words, they fall into layer-4 volumetric (traffic) attacks and layer-7 application attacks. For layer-4 attacks the defensive bottleneck is bandwidth; for layer-7 attacks it is mostly the throughput of the application architecture. Layer-7 application attacks can still be mitigated through configuration. For example, when Nginx is the front end, the http_limit_conn and http_limit_req modules are the main tools for defense.
What is a Distributed Denial of Service attack?
DDoS (Distributed Denial of Service) means a distributed denial-of-service attack: the attacker uses a large number of "broilers" (compromised machines) to send a flood of normal or abnormal requests at the target, exhausting the target host's resources or network resources so that the victim can no longer serve legitimate users. Typically, the attacker tries to saturate a system with so many connections that it can no longer accept new traffic, or becomes so slow as to be unusable.
In other words: Lao Zhang's restaurant (the attack target) can seat 100 customers at a time. Lao Wang next door (the attacker) hires 200 people (the "broilers") to walk in and occupy tables without ordering anything (abnormal requests). The restaurant fills up (resource exhaustion), genuine customers cannot get in, and the restaurant can no longer do business (the DDoS attack has succeeded). So the question is: what should Lao Zhang do?
Kick them out, of course!
Application Layer DDoS Attack Characteristics
Application layer (Layer 7/ HTTP) DDoS attacks are executed by software programs (bots) that can be customized to best exploit vulnerabilities in a particular system. For example, for systems that do not handle large numbers of concurrent connections well, opening a large number of connections and keeping them active by sending a small amount of traffic only periodically may exhaust the system's capacity for new connections. Other attacks can take the form of sending a large number of requests or very large requests. Since these attacks are executed by a bot program rather than an actual user, an attacker can easily open a large number of connections and send a large number of requests very quickly.
Characteristics of DDoS attacks that can be used to help mitigate these attacks include the following (this is not meant to be an exhaustive list):
- Traffic usually comes from a fixed set of IP addresses belonging to the machines used to perform the attack. As a result, each IP address is responsible for far more connections and requests than you would expect from a real user.
Note: Do not assume that this traffic pattern always indicates a DDoS attack. The use of forward proxies can also create it, because the forward proxy server's IP address is used as the client address for requests from all the real clients it serves. However, the number of connections and requests coming from a forward proxy is usually much lower than in a DDoS attack.
- Since the traffic is generated by bots and meant to overwhelm the server, the traffic rate is much higher than what a human user could generate.
- The User-Agent header is sometimes set to a non-standard value.
- The Referer header is sometimes set to a value that you can associate with an attack.
Defending Against DDoS Attacks with NGINX and NGINX Plus
NGINX and NGINX Plus have a number of features that, used in combination against the DDoS attack characteristics described above, can make them an important part of a DDoS mitigation solution. These features address DDoS attacks both by regulating incoming traffic and by controlling the traffic that is proxied to back-end servers.
Inherent protections of the NGINX event-driven architecture
NGINX is designed to be a "shock absorber" for your website or application. It has a non-blocking, event-driven architecture that can handle large numbers of requests without significantly increasing resource utilization.
New requests from the network do not interrupt NGINX's processing of ongoing requests, which means that NGINX can utilize the techniques described below to protect your site or application from attacks.
For more information on the underlying architecture, see Inside NGINX: How We Design for Performance and Scale.
Limiting the rate of requests
You can limit the rate at which NGINX and NGINX Plus receive incoming requests to values typical of real users. For example, you might decide that real users accessing the login page can make only one request every 2 seconds. You can configure NGINX and NGINX Plus to allow a single client IP address to attempt a login every 2 seconds (equivalent to 30 requests per minute):
limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

server {
    # ...
    location / {
        limit_req zone=one;
        # ...
    }
}
The limit_req_zone directive configures a shared memory zone named one to store the state of requests for the specified key, in this case the client IP address ($binary_remote_addr). The limit_req directive in the location / block references that shared memory zone.
For a detailed discussion of rate limiting, see Rate Limiting for NGINX and NGINX Plus on the blog.
Limit the number of connections
You can also limit the number of connections that can be opened by a single client IP address, again to a value appropriate for real users. For example, you can allow each client IP address to open no more than 10 connections to the /store/ area of your site:
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # ...
    location /store/ {
        limit_conn addr 10;
        # ...
    }
}
The limit_conn_zone directive configures a shared memory zone named addr to store requests for the specified key, in this case (as in the previous example) the client IP address, $binary_remote_addr. The limit_conn directive in the location /store/ block references that shared memory zone and sets a maximum of 10 connections from each client IP address.
Closing slow connections
You can close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible (thereby reducing the server's ability to accept new connections). Slowloris is an example of this kind of attack. The client_body_timeout directive controls how long NGINX waits between writes of the client body, and the client_header_timeout directive controls how long NGINX waits between writes of client headers. The default for both directives is 60 seconds. This example configures NGINX to wait no more than 5 seconds between writes from the client, for either headers or body:
server {
    client_body_timeout 5s;
    client_header_timeout 5s;
    # ...
}
Blacklisted IP addresses
If you can identify the client IP addresses being used for the attack, you can blacklist them with the deny directive so that NGINX and NGINX Plus do not accept their connections or requests. For example, if you determine that the attack is coming from the 123.123.123.0/28 address range (123.123.123.0 through 123.123.123.15):
location / {
    deny 123.123.123.0/28;
    # ...
}
Or, if you determine that the attack is coming from the client IP addresses 123.123.123.3, 123.123.123.5, and 123.123.123.7:
location / {
    deny 123.123.123.3;
    deny 123.123.123.5;
    deny 123.123.123.7;
    # ...
}
Whitelist IP addresses
If your site or application should be accessible only from one or more specific sets or ranges of client IP addresses, you can use the allow and deny directives together to permit access only from those addresses. For example, you can restrict access to addresses in a specific local network:
location / {
    allow 192.168.1.0/24;
    deny all;
    # ...
}
Here, the deny all directive blocks all client IP addresses that are not in the range specified by the allow directive.
Using caching to smooth out traffic spikes
You can configure NGINX and NGINX Plus to absorb much of the traffic spike caused by an attack by enabling caching and setting certain caching parameters to offload requests from back-end servers. Some useful settings are listed below, followed by a brief configuration sketch:
- The updating parameter to the proxy_cache_use_stale directive tells NGINX that when it needs to fetch an updated version of a stale cached object, it should send only a single request for the update, and continue serving the stale object to clients that request it while the update is being received from the back-end server. This significantly reduces the number of requests to the back-end servers when repeated requests for a file are part of the attack.
- The key defined by the proxy_cache_key directive typically consists of embedded variables (the default key, $scheme$proxy_host$request_uri, has three variables). If the value includes the $query_string variable, an attack that sends random query strings can cause excessive caching. We do not recommend including the $query_string variable in the key unless you have a specific reason to do so.
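As an illustration of these settings, here is a minimal caching sketch; the cache path, zone name (my_cache), sizes, and the upstream name (backend) are hypothetical values, not taken from the original article:

# Hypothetical cache store; path, zone name, and sizes are illustrative only.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;

server {
    # ...
    location / {
        proxy_cache my_cache;

        # The default proxy_cache_key is kept (no $query_string variable added),
        # so random query strings cannot flood the cache.

        # While one request fetches an updated version of a stale object, keep
        # serving the stale copy to other clients instead of passing them through.
        proxy_cache_use_stale updating error timeout;

        proxy_pass http://backend;   # "backend" is a hypothetical upstream group
    }
}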
Blocking requests
You can configure NGINX or NGINX Plus to block several kinds of requests:
- Requests to a specific URL that appears to be targeted
- Requests where the User-Agent header is set to a value that does not correspond to normal client traffic
- Requests that set the Referer header to a value that can be associated with an attack
- Requests in which other headers are set to values that can be associated with an attack
For example, if you determine that the target of a DDoS attack is the URL /, you can block all requests for that page:
location / {
    deny all;
}
Alternatively, if you find that the DDoS attack requests have a User-Agent header value of foo or bar, you can block those requests:
location / {
    if ($http_user_agent ~* foo|bar) {
        return 403;
    }
    # ...
}
The $http_name variable references a request header; in the example above, $http_user_agent references the User-Agent header. A similar approach can be used with other headers whose values can be used to identify an attack.
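For example, here is a brief sketch of the Referer case; the matched value attack-site\.example is purely a hypothetical placeholder for a value you have tied to the attack:

location / {
    # $http_referer references the Referer request header.
    if ($http_referer ~* "attack-site\.example") {
        return 403;
    }
    # ...
}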
Limit connections to back-end servers
An NGINX or NGINX Plus instance can typically handle many more concurrent connections than the load-balanced back-end servers. With NGINX Plus, you can limit the number of connections to each back-end server. For example, if you want NGINX Plus to establish no more than 200 connections with each of the two back-end servers in the website upstream group:
upstream website {
    server 192.168.100.1:80 max_conns=200;
    server 192.168.100.2:80 max_conns=200;
    queue 10 timeout=30s;
}
The max_conns parameter applied to each server specifies the maximum number of connections that NGINX Plus opens to it. The queue directive limits the number of requests queued when all servers in the upstream group have reached their connection limit, and the timeout parameter specifies how long a request can remain in the queue.
Handling range-based attacks
One method of attack is to send a Range header with a very large value, which can cause a buffer overflow. For a discussion of how NGINX and NGINX Plus can mitigate this type of attack, see Using NGINX and NGINX Plus to Protect Against CVE-2015-1635.
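The referenced post describes the actual mitigation in detail. Purely as an illustrative sketch (not the exact rule from that post), you could reject requests whose Range header lists an unusually large number of byte ranges:

location / {
    # $http_range references the Range request header. Rejecting requests with
    # five or more comma-separated ranges is a rough heuristic for this sketch.
    if ($http_range ~ "([^,]*,){5,}") {
        return 416;   # Range Not Satisfiable
    }
    # ...
}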
Handling high loads
DDoS attacks typically result in high traffic loads. For tips on tuning NGINX or NGINX Plus and operating systems that allow the system to handle higher loads, see Tuning NGINX for Performance.
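As a rough starting point only (the values below are assumptions; consult the referenced tuning guide for settings appropriate to your hardware and kernel limits), the usual knobs are the worker and connection limits:

# Illustrative values only; tune for your own hardware and kernel limits.
worker_processes auto;            # one worker process per CPU core
worker_rlimit_nofile 65535;       # raise the per-worker open-file limit

events {
    worker_connections 10240;     # connections each worker may hold open
    multi_accept on;              # accept multiple new connections at once
}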
Recognizing DDoS Attacks
So far we have focused on the ways NGINX and NGINX Plus can help mitigate the effects of a DDoS attack. But how can NGINX or NGINX Plus help you spot one? The NGINX Plus Status module provides detailed metrics about the traffic that is load-balanced to back-end servers, which you can use to spot unusual traffic patterns. NGINX Plus also comes with a Status dashboard web page that graphically depicts the current state of the NGINX Plus system. The same metrics are available through the API, which you can use to feed them into custom or third-party monitoring systems, perform historical trend analysis to discover abnormal patterns, and enable alerting.
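As a sketch of exposing those metrics (this assumes NGINX Plus; the exact directives depend on your NGINX Plus version, and older releases used the separate Status module rather than the api directive shown here):

server {
    listen 8080;

    # Read-only JSON metrics endpoint (NGINX Plus API module).
    location /api {
        api write=off;
    }

    # Dashboard page shipped with NGINX Plus.
    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}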
Reference
/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus
Summary
That is all for this article. I hope it offers some useful reference for your study or work. If you have any questions, feel free to leave a comment, and thank you for your support.