SoFunction
Updated on 2025-05-07

Nginx security protection and https deployment process

Core security configuration

Prepare and install nginx

Install support software

[root@localhost ~]# dnf install -y gcc make pcre-devel zlib-devel openssl-devel perl-ExtUtils-MakeMaker git wget tar

Create running users, groups, and log directories

[root@localhost ~]# useradd -M -s /sbin/nologin nginx
[root@localhost ~]# mkdir -p /var/log/nginx
[root@localhost ~]# chown -R nginx:nginx /var/log/nginx

Compile and install

[root@localhost ~]# tar zxf nginx-1.26.3.tar.gz
[root@localhost ~]# cd nginx-1.26.3
[root@localhost nginx-1.26.3]# ./configure \
--prefix=/usr/local/nginx \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream

[root@localhost nginx-1.26.3]# make && make install

Create a symbolic link for the main nginx program

[root@localhost nginx-1.26.3]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

Add nginx system service

[root@localhost ~]# vi /lib/systemd/system/nginx.service

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=network.target
[Service]
Type=forking
ExecStartPre=/usr/local/sbin/nginx -t
ExecStart=/usr/local/sbin/nginx
ExecReload=/usr/local/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
TimeoutStopSec=5
KillMode=process
PrivateTmp=true
User=root
Group=root
[Install]
WantedBy=multi-user.target

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl start nginx
[root@localhost ~]# systemctl enable nginx
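
To confirm that the service really came up, a quick check like the following can be used (ss comes from the iproute2 package, which openEuler normally ships by default):

[root@localhost ~]# systemctl status nginx --no-pager
[root@localhost ~]# ss -tlnp | grep nginx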

Hide version number

In production environments, the Nginx version number should be hidden so that the exact release is not disclosed and attackers cannot target known vulnerabilities in a specific version.

[root@localhost ~]# curl -I 192.168.10.101
HTTP/1.1 200 OK
Server: nginx/1.26.3

Modify the configuration file

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf

http {
    include mime.types;
    default_type application/octet-stream;
    server_tokens off;

    ......

[root@localhost ~]# nginx -t
[root@localhost ~]# nginx -s reload
[root@localhost ~]# curl -I 192.168.10.101

HTTP/1.1 200 OK
Server: nginx

Restricting dangerous request methods

Unsafe request methods are a potential security risk: TRACE is prone to XST (cross-site tracing) attacks, PUT/DELETE carry a file modification risk, and CONNECT can be abused for proxying. Match the request method with a regular expression and return 444 (close the connection without a response) for any method not on the whitelist.

Modify the configuration file

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf

server {
    if ($request_method !~ ^(GET|HEAD|POST)$) {
        return 444;
    }
}

Verify

[root@localhost ~]# curl -X PUT -I 192.168.10.101
curl: (52) Empty reply from server
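
TRACE and DELETE can be tested the same way; with the whitelist above they should produce the same empty reply, because curl's -X option simply overrides the request method:

[root@localhost ~]# curl -X TRACE -I 192.168.10.101
curl: (52) Empty reply from server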

Request limiting (CC attack defense)

A CC attack (Challenge Collapsar) is a common network attack that consumes server resources with a large volume of legitimate-looking or forged small requests, so that normal users can no longer access the website. Nginx offers several strategies and modules to mitigate CC attacks effectively.

CC attacks are also known as connection-number or request-rate attacks: they simulate a large number of user visits to exhaust server resources, making the site unusable for normal users. To defend against them, the modules that ship with Nginx can be used to limit the request rate and the number of concurrent connections (a sketch of the connection-limit variant follows the rate-limit example below).

Use nginx's limit_req module to limit request rate

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf

http {
    limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
    server {
        location / {
            root html;
            index index.html;
            limit_req zone=req_limit burst=20 nodelay;
        }
    }
}
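
The explanation above also mentions limiting concurrent connections; that is done with the limit_conn module rather than limit_req. A minimal sketch (the zone name conn_limit and the limit of 10 connections per client IP are assumptions to tune for your own site):

http {
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
    server {
        location / {
            # at most 10 simultaneous connections per client IP
            limit_conn conn_limit 10;
        }
    }
}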

Stress test verification

Install ab test tool

[root@localhost ~]# dnf install httpd-tools -y

[root@localhost ~]# ab -n 300 -c 30 http://192.168.10.101/

[root@localhost ~]# tail -300 /usr/local/nginx/logs/access.log | grep -c 503
279
  • -n 300: the total number of requests is 300, i.e. the simulated clients send 300 HTTP requests to the server.
  • -c 30: the concurrency level is 30, i.e. 30 requests are sent to the server in parallel.

Hotlink protection (anti-leech)

Hotlink protection is an important security setting designed to prevent unauthorized sites from embedding a website's (static) resources. Hotlinking not only infringes the copyright of content creators, it can also consume the original site's bandwidth and resources excessively, hurting access speed and the experience of normal users.

Modify the C:\Windows\System32\drivers\etc\hosts file on Windows and set the domain-name-to-IP mappings

192.168.10.101

192.168.10.102

Modify the hosts files of the two openEuler hosts and set the domain-name-to-IP mappings.

192.168.10.101

192.168.10.102

Put the image in the working directory of the source host

[root@localhost ~]# ls /usr/local/nginx/html
index.html

Edit the original homepage file

[root@localhost ~]# vi /usr/local/nginx/html/index.html

<html>
<body>
<h1>aaa It works!</h1>
<img src=""/>
</body>
</html>

Use browser access to verify

Edit the homepage file of the thief website

[root@localhost ~]# dnf -y install httpd
[root@localhost ~]# vi /var/www/html/index.html

<html>
<body>
<h1>bbb It works!</h1>
<img src=""/>
</body>
</html>

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl start httpd

Test access to the thief website (using a browser)

Configure Nginx hotlink protection

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf

location ~* \.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|webp|ico)$ {
    root html;
    valid_referers *.;
    if ($invalid_referer) {
        return 403;
    }
}

[root@localhost ~]# nginx -t
[root@localhost ~]# nginx -s reload

Test access to the thief website again (the hotlink now fails with 403)
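
For reference, a fuller whitelist usually also allows empty and proxy-stripped referers. A sketch assuming the source site's domain is aaa.com (a placeholder, replace it with your real domain): none permits direct visits with no Referer header, and blocked permits referers that proxies or firewalls have stripped.

location ~* \.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|webp|ico)$ {
    root html;
    valid_referers none blocked aaa.com *.aaa.com;
    if ($invalid_referer) {
        return 403;
    }
}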

Advanced protection

Dynamic blacklist

Dynamic blacklist is a security mechanism in Nginx that intercepts malicious requests in real time. It allows dynamic updates to block IP addresses or network segments without restarting the service.

Compared with static allow/deny directives, a dynamic blacklist is more flexible and efficient, and better suited to high-concurrency scenarios where attack sources change frequently (a static example is shown below for comparison).
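
For comparison, the static variant with allow/deny looks like this (the addresses are the same sample ranges used in the blacklist file below); every change means editing the configuration and reloading, which is why the geo-based blacklist is preferred at scale:

location / {
    deny 192.168.1.0/24;
    deny 192.168.10.102;
    allow all;
}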

Edit the blacklist configuration file

[root@localhost ~]# vi /usr/local/nginx/conf/

192.168.1.0/24 1;
192.168.10.102 1;

Edit the main configuration file

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf

http {
    geo $block_ip {
        default 0;
        include /usr/local/nginx/conf/;
    }

    server {
        if ($block_ip) {
            return 403;
        }
    }
}
[root@localhost ~]# nginx -t
[root@localhost ~]# nginx -s reload

Test access from a blocked IP

[root@localhost ~]# curl 192.168.10.101

<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>

Automatically add entries to the blacklist

#!/bin/bash
# Automatically block IPs that have made more than 100 requests
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | awk '{if ($1 > 100) print $2" 1;"}' > /usr/local/nginx/conf/

  • uniq -c: count consecutive duplicate lines and print the count at the start of each line
  • sort -nr: sort numerically in reverse (descending) order
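
To make the blocking fully automatic, the script can be run on a schedule and followed by a reload so new entries take effect. A sketch assuming the script was saved as /usr/local/bin/auto_blacklist.sh (a hypothetical path):

[root@localhost ~]# chmod +x /usr/local/bin/auto_blacklist.sh
[root@localhost ~]# crontab -e
*/5 * * * * /usr/local/bin/auto_blacklist.sh && /usr/local/sbin/nginx -s reload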

nginx https configuration

https concept

As we all know, HTTP (Hypertext Transfer Protocol) is the communication protocol between the client browser and the web server. HTTPS can be thought of as HTTP + SSL/TLS: an SSL/TLS layer is inserted between HTTP and TCP to encrypt and decrypt application-layer data.

As shown below:

  • http
  • ssl/tls
  • tcp
  • ip

Why HTTP is not secure

Because HTTP is transmitted in plain text, there are three main risks: eavesdropping, tampering, and impersonation.

  • Eavesdropping risk: an intermediary can read the communication content; since it is plain text, obtaining it is already a security problem.
  • Tampering risk: an intermediary can modify the message and then forward it to the other party, which is extremely dangerous.
  • Impersonation risk: for example, you think you are communicating with Taobao, but you are actually talking to a phishing website.

Four principles of secure communication

Secure communication needs to satisfy the following four principles: confidentiality, integrity, authentication, and non-repudiation.

  • Confidentiality: the data is encrypted, which removes the eavesdropping risk; even if an intermediary captures the traffic, it is ciphertext and the plain text cannot be recovered.
  • Integrity: the data is not tampered with in transit, nothing added or removed; if even a punctuation mark is changed along the way, the receiver can detect it and reject the message as invalid.
  • Authentication: the other party's real identity is confirmed, which solves the impersonation problem; users do not have to worry about visiting Taobao but actually talking to a phishing site.
  • Non-repudiation: the parties cannot deny that the exchange took place. For example, if Xiao Ming borrows 1,000 yuan from Xiao Hong without writing an IOU, or writes one but does not sign it, Xiao Hong may lose her money.

Brief description of the principle of https communication

Since HTTP is transmitted in plain text, the obvious fix is to encrypt the packets, and to encrypt them the two communicating parties must agree on keys. One option is symmetric encryption, in which both sides use the same key to encrypt and decrypt the packets.

Asymmetric encryption means the two parties use different keys: one is the public key, which can be published, and the other is the private key, which must be kept secret. Ciphertext encrypted with the public key can only be decrypted with the private key, and content signed with the private key can only be verified with the public key.
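
The asymmetric idea can be tried out directly with openssl. A small sketch (the file names are arbitrary) that generates a key pair, encrypts a message with the public key, and decrypts it with the private key:

[root@localhost ~]# openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem
[root@localhost ~]# openssl rsa -in priv.pem -pubout -out pub.pem
[root@localhost ~]# echo "hello" > msg.txt
[root@localhost ~]# openssl pkeyutl -encrypt -pubin -inkey pub.pem -in msg.txt -out msg.enc
[root@localhost ~]# openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc
hello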

Digital certificates solve the trust problem of public key distribution

The certificate is requested by the site administrator from a CA. The application submits data such as the domain name, organizational information, and the public key (together these form the Certificate Signing Request, or CSR), and the CA generates a certificate from this information.
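
The CSR itself is generated locally before anything is sent to the CA. A sketch with placeholder names (example.com.key, example.com.csr and the subject fields are only illustrative); the second command simply prints the subject back to check it:

[root@localhost ~]# openssl req -new -newkey rsa:2048 -nodes \
-keyout example.com.key -out example.com.csr \
-subj "/C=CN/ST=Beijing/L=Beijing/O=MyOrg/CN=example.com"
[root@localhost ~]# openssl req -in example.com.csr -noout -subject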

https summary

The server sends the CA-issued certificate to the client. After receiving it, the client verifies its signature with the public key of the CA certificates built into the system. Once the signature checks out, the certificate is proven trustworthy, and therefore the public key inside it can be trusted as well. This eliminates the risk of the public key being tampered with in transit.

nginx configuration https certificate

Because an SSL certificate normally has to be requested from a CA, this experiment uses a self-signed certificate (that is, a certificate you sign and issue yourself; of course, such a certificate is not publicly trusted).

Use openssl to generate certificates and private keys

Create a certificate storage directory

[root@localhost ~]# mkdir -p /etc/nginx/ssl

Generate a self-signed certificate

[root@localhost ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/nginx/ssl/ \
-out /etc/nginx/ssl/ \
-subj "/C=CN/ST=Beijing/L=Beijing/O=MyOrg/CN=localhost"

Ps:

CA-signed certificate:

Need to be issued by a trusted third-party certificate authority (CA).

The process is as follows:

  • 1. The user generates a private key and a CSR (certificate signing request).
  • 2. The CSR is submitted to a CA (such as Let's Encrypt, DigiCert, etc.).
  • 3. After the CA verifies the applicant's identity, it signs the certificate with the CA's private key to produce the final certificate.

Self-signed certificate:

  • The Issuer and Subject of the certificate are the same entity (i.e. itself).
  • No third-party CA is involved; tools such as OpenSSL can generate the private key and certificate directly.
  • The certificate is signed with your own private key, not a CA's private key.
  • Suitable for testing, internal environments, or scenarios where public trust is not required.
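
An easy way to see the difference is to print a certificate's Issuer and Subject: on a self-signed certificate (such as the one generated above, adjust the path to whatever file name you used) the two are identical, while a CA-signed certificate shows the CA as the issuer.

[root@localhost ~]# openssl x509 -in /etc/nginx/ssl/server.crt -noout -subject -issuer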

Enable https in nginx

Edit nginx configuration file

[root@localhost ~]# vi /usr/local/nginx/conf/nginx.conf
server {
    listen 443 ssl;                     # listen on the HTTPS port
    server_name localhost;              # domain name or IP
    # specify the certificate and private key paths
    ssl_certificate /etc/nginx/ssl/;
    ssl_certificate_key /etc/nginx/ssl/;
}

server {
    listen 80;
    server_name localhost;
    return 301 https://$host$request_uri;
}
[root@localhost ~]# nginx -t
[root@localhost ~]# nginx -s reload
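
After the reload, HTTPS can be verified from the command line; -k tells curl to accept the self-signed certificate, and the plain HTTP request should answer with the 301 redirect configured above:

[root@localhost ~]# curl -k -I https://192.168.10.101
[root@localhost ~]# curl -I http://192.168.10.101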

Summary

The above is based on my personal experience. I hope it gives you a useful reference, and I hope you will continue to support me.