nginx (engine x) is a lightweight, high-performance HTTP and reverse proxy server, as well as a generic proxy server (TCP/UDP/IMAP/POP3/SMTP), originally written by the Russian developer Igor Sysoev.
Basic Commands
```shell
nginx -t          # check the configuration file for syntax errors
nginx -s reload   # hot reload: re-read the configuration file without downtime
nginx -s stop     # fast shutdown
nginx -s quit     # graceful shutdown: wait for worker processes to finish
```
After installing and starting the nginx server, let's first look at nginx's default configuration, and then walk through the different usage scenarios one by one.
Default configuration
In the nginx installation directory, first make a backup copy of `nginx.conf`, then edit `nginx.conf`:
```nginx
# number of worker processes
worker_processes 1;

events {
    # number of connections per worker process
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log format
    log_format access '$remote_addr - $remote_user [$time_local] $host "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$clientip"';
    access_log /srv/log/nginx/access.log access;  # log output path

    gzip on;
    sendfile on;

    # keep-alive timeout: idle connections are closed after this many seconds
    keepalive_timeout 60;

    # virtual host
    server {
        listen 8080;
        server_name localhost;  # domain name used in the browser
        charset utf-8;
        access_log logs/access.log access;

        # route
        location / {
            root www;          # document root
            index index.html;  # entry file
        }
    }

    # include other configuration files
    include servers/*;
}
```
Build a website
In the `servers` directory (included above for other configuration files), add a configuration file for the new site.
Add an entry to your machine's hosts file: `127.0.0.1 xx_domian`
```nginx
# virtual host
server {
    listen 8080;
    server_name xx_domian;  # domain name used in the browser
    charset utf-8;
    access_log logs/xx_domian.access.log access;

    # route
    location / {
        root www;          # document root
        index index.html;  # entry file
    }
}
```
Run `nginx -s reload`; once it succeeds, visit xx_domian in the browser and you will see your page.
Set expiration time according to file type
```nginx
location ~ .*\.css$ {
    expires 1d;
    break;
}
location ~ .*\.js$ {
    expires 1d;
    break;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
    access_log off;
    expires 15d;  # cache for 15 days
    break;
}

# test an image's max-age header:
# curl -x127.0.0.1:80 /static/image/common/ -I
```
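A small aside: `expires` durations are delivered to browsers as a `Cache-Control: max-age` value in seconds (plus an `Expires` date), so the image rule above should produce `max-age=1296000`. A quick sanity check of the conversion:

```python
# Convert the "expires" durations above into the max-age seconds
# that the Cache-Control response header carries.
DAY = 24 * 60 * 60  # seconds in a day

css_js_max_age = 1 * DAY   # expires 1d
image_max_age = 15 * DAY   # expires 15d

print(css_js_max_age)  # 86400
print(image_max_age)   # 1296000
```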
Disable file caching
In a development environment the code changes constantly, and the browser cache forces you to hard-refresh to see the effect. In this situation we can disable browser caching to work more efficiently:
```nginx
location ~* \.(js|css|png|jpg|gif)$ {
    add_header Cache-Control no-store;
}
```
Hotlink protection
This prevents your files from being embedded or downloaded by other websites:
```nginx
location ~* \.(gif|jpg|png)$ {
    # only allow requests referred from 192.168.0.1
    valid_referers none blocked 192.168.0.1;
    if ($invalid_referer) {
        rewrite ^/ http://$host/;
    }
}
```
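As a rough illustration (not nginx's actual implementation), `valid_referers none blocked 192.168.0.1` accepts requests with no Referer header ("none"), with a Referer whose scheme was stripped by a proxy or firewall ("blocked"), or with a Referer from the allowed host; anything else sets `$invalid_referer`:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"192.168.0.1"}

def invalid_referer(referer):
    """Sketch of `valid_referers none blocked 192.168.0.1`."""
    if not referer:
        return False  # "none": a missing Referer is allowed
    if not referer.startswith(("http://", "https://")):
        return False  # "blocked": scheme stripped by a proxy/firewall
    return urlparse(referer).hostname not in ALLOWED_HOSTS

print(invalid_referer(None))                        # False -> serve the image
print(invalid_referer("http://192.168.0.1/page"))   # False -> serve the image
print(invalid_referer("http://evil.example/page"))  # True  -> rewrite to homepage
```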
Static file compression
```nginx
server {
    # enable gzip compression
    gzip on;
    # minimum HTTP protocol version required for gzip (HTTP/1.1, HTTP/1.0)
    gzip_http_version 1.1;
    # compression level: the higher the level, the longer compression takes (1-9)
    gzip_comp_level 4;
    # minimum response size to compress, taken from the page's Content-Length
    gzip_min_length 1000;
    # MIME types to compress (text/html is always compressed)
    gzip_types text/plain application/javascript text/css;
}
```
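The `gzip_comp_level` trade-off is easy to see with Python's own `gzip` module (illustrative; nginx uses the same underlying zlib algorithm but its own buffers):

```python
import gzip

# A sample "static file" payload: repetitive text compresses well.
payload = b"body { margin: 0; padding: 0; }\n" * 200

fast = gzip.compress(payload, compresslevel=1)  # least CPU, larger output
best = gzip.compress(payload, compresslevel=9)  # most CPU, smallest output

# Higher levels never produce larger output here, only cost more CPU time.
print(len(payload), len(fast), len(best))
assert len(best) <= len(fast) < len(payload)
```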
Run `nginx -s reload`; once it succeeds, test in the browser.
Custom error pages
```nginx
# serve an error page according to the status code
error_page 500 502 503 504 /50x.html;
location = /50x.html {
    root /source/error_page;
}
```
Run `nginx -s reload`; once it succeeds, test in the browser.
Cross-domain issues
Cross-domain definition
The same-origin policy restricts how a document or script loaded from one origin can interact with resources from another origin. It is an important security mechanism for isolating potentially malicious documents; cross-origin reads are usually not allowed.
Definition of same origin
Two pages have the same origin if their protocol, port (if specified), and host are all the same.
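That comparison can be sketched in Python (illustrative only; real browsers follow the full URL specification, including each scheme's default port):

```python
from urllib.parse import urlsplit

def origin(url):
    """An origin is the (scheme, host, port) triple of a URL."""
    parts = urlsplit(url)
    # Fall back to the scheme's default port when none is specified.
    port = parts.port or {"http": 80, "https": 443}[parts.scheme]
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    return origin(a) == origin(b)

print(same_origin("http://example.com/a", "http://example.com:80/b"))   # True: 80 is http's default
print(same_origin("http://example.com/a", "https://example.com/a"))     # False: scheme differs
print(same_origin("http://example.com/a", "http://api.example.com/a"))  # False: host differs
```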
How nginx solves cross-origin requests
For example:

- the front-end server's domain name is http://xx_domain
- the back-end service runs under a different origin

Now, when the front end sends a request to the back end, a cross-origin error is bound to occur. But all you need is an nginx server with `server_name` set to xx_domain: add the location blocks that intercept the cross-origin requests the front end makes, and proxy those requests on to the back end. See the following configuration:
```nginx
## reverse proxy configuration
server {
    listen 8080;
    server_name xx_domain;

    ## 1. when the user visits http://xx_domain, reverse-proxy to the back end
    location / {
        proxy_pass http://backend;  # set this to the back-end server address
        proxy_redirect off;
        proxy_set_header Host $host;              # pass the domain name
        proxy_set_header X-Real-IP $remote_addr;  # pass the client IP
        proxy_set_header X-Scheme $scheme;        # pass the protocol
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
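The `$proxy_add_x_forwarded_for` variable used above appends the direct client's address to any existing `X-Forwarded-For` header, or creates the header if it is absent. A sketch of that behavior (not nginx's actual code):

```python
def proxy_add_x_forwarded_for(existing, remote_addr):
    """Append the direct client's address to X-Forwarded-For."""
    if existing:
        return existing + ", " + remote_addr
    return remote_addr

# First hop: the header does not exist yet.
print(proxy_add_x_forwarded_for(None, "203.0.113.7"))        # 203.0.113.7
# Second hop: another proxy already recorded the original client.
print(proxy_add_x_forwarded_for("203.0.113.7", "10.0.0.2"))  # 203.0.113.7, 10.0.0.2
```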
This neatly sidesteps the browser's same-origin policy: the page's requests go to nginx under the same origin the page was served from, so they count as same-origin access, and the requests nginx then forwards to the back-end server are server-to-server, which never triggers the browser's same-origin policy.
A detailed, fully annotated nginx configuration reference:
```nginx
# user and group that run the nginx worker processes
user www www;

# number of worker processes; recommended: equal to the total number of CPU cores
worker_processes 8;

# global error log: path and level [debug | info | notice | warn | error | crit]
error_log /var/log/nginx/error.log info;

# pid file
pid /var/run/nginx.pid;

# maximum number of file descriptors one nginx process may open. In theory this
# is the system's open-file limit (ulimit -n) divided by the number of nginx
# processes, but nginx does not distribute requests evenly across processes,
# so keeping it equal to the value of ulimit -n is recommended.
worker_rlimit_nofile 65535;

# event model and connection limits
events {
    # event model: [kqueue | rtsig | epoll | /dev/poll | select | poll].
    # epoll is the high-performance network I/O model in Linux 2.6+ kernels;
    # on FreeBSD, use the kqueue model instead.
    use epoll;
    # maximum connections per process (total = worker_connections * worker_processes)
    worker_connections 65535;
}

# http server settings
http {
    include mime.types;                     # file-extension to MIME-type map
    default_type application/octet-stream;  # default MIME type
    #charset utf-8;                         # default encoding

    server_names_hash_bucket_size 128;      # hash table size for server names
    client_header_buffer_size 32k;          # request-header buffer
    large_client_header_buffers 4 64k;      # buffers for large request headers
    client_max_body_size 8m;                # upload size limit

    # directory listing: useful for download servers, off by default
    autoindex on;             # show directory listings
    autoindex_exact_size on;  # on (default): exact file sizes in bytes;
                              # off: approximate sizes in kB, MB or GB
    autoindex_localtime on;   # off (default): file times shown in GMT;
                              # on: file times shown in the server's local time

    # sendfile tells nginx whether to use the sendfile() syscall to output
    # files. Set it to on for ordinary applications; for download-heavy
    # workloads, set it to off to balance disk and network I/O and reduce
    # system load. Note: if images display incorrectly, change this to off.
    sendfile on;
    tcp_nopush on;          # reduce network packet overhead
    tcp_nodelay on;         # send small packets without delay
    keepalive_timeout 120;  # seconds an idle client connection stays open
                            # before the server closes it

    # FastCGI tuning: lowers resource usage and improves access speed;
    # the parameter names are self-explanatory
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    # gzip module settings
    gzip on;             # enable gzip output
    gzip_min_length 1k;  # minimum response size to compress, read from the
                         # Content-Length header. The default 0 compresses
                         # everything; below ~1k the output can end up larger
                         # than the input, so at least 1k is recommended.
    gzip_buffers 4 16k;  # four 16k buffers for the compressed stream; the
                         # default requests the same amount of memory as the
                         # original data
    gzip_http_version 1.1;  # protocol version to compress for (default 1.1;
                         # most browsers support gzip. Use 1.0 if a squid 2.5
                         # front end sits before nginx)
    gzip_comp_level 2;   # 1 = lowest ratio but fastest; 9 = highest ratio,
                         # most CPU and slowest to compress, but the smallest
                         # and fastest-to-transfer payload
    gzip_types text/plain application/x-javascript text/css application/xml;
                         # types to compress; text/html is always included
                         # (listing it again only produces a warning)
    gzip_vary on;        # lets front-end caches (e.g. squid) store both the
                         # gzipped and the plain variant of a page

    # needed when limiting the number of connections per IP
    #limit_zone crawler $binary_remote_addr 10m;

    ## upstream load balancing offers several scheduling algorithms
    ## (covered in detail in the load-balancing section below)

    # virtual host configuration
    server {
        listen 80;                # listening port
        server_name example.com;  # placeholder; multiple domains are separated by spaces
        # redirect HTTP to HTTPS
        rewrite ^(.*) https://$server_name$1 permanent;
    }

    server {
        listen 443 ssl;           # listen for HTTPS
        server_name example.com;  # placeholder
        # domain certificate and private key (file names are placeholders)
        ssl_certificate C:\WebServer\Certs\server.crt;
        ssl_certificate_key C:\WebServer\Certs\server.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers on;

        index index.html;
        root /data/www/;

        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        # intercept and forward /oauth/ to solve cross-origin checks
        location /oauth/ {
            proxy_pass https://localhost:13580/oauth/;
            proxy_set_header HOST $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # image cache lifetime
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }

        # JS and CSS cache lifetime
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }

        # log format
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
        # access log for this virtual host
        access_log /var/log/nginx/access.log access;

        # nginx status page. The StubStatus module reports nginx's working
        # state since its last startup; it is not a core module and must be
        # enabled explicitly when compiling nginx.
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
            # the htpasswd file can be generated with Apache's htpasswd tool
        }
    }
}
```

Load balancing with multiple nginx servers

- Load-balancing server: 192.168.0.4 (Nginx-Server)
- Server list: Web1: 192.168.0.5 (Nginx-Node1/Nginx-Web1); Web2: 192.168.0.7 (Nginx-Node2/Nginx-Web2)
- Goal: when a user accesses Nginx-Server on port 8888, nginx balances the requests across the Web1 and Web2 servers.

The load-balancing server's configuration is annotated below. Note that the `upstream` blocks all reuse the name `webhost` to show the alternatives; in a real configuration you would keep only one of them.

```nginx
events {
    use epoll;
    worker_connections 65535;
}

http {
    ## upstream load balancing: the scheduling algorithms

    # Algorithm 1: round robin (the default). Requests are assigned to the
    # backend servers one by one in order; a backend that goes down is
    # removed automatically, so user access is unaffected.
    upstream webhost {
        server 192.168.0.5:6666;
        server 192.168.0.7:6666;
    }

    # Algorithm 2: weight. Weights can be set according to each machine's
    # capacity; the higher the weight, the more requests the server receives.
    upstream webhost {
        server 192.168.0.5:6666 weight=2;
        server 192.168.0.7:6666 weight=3;
    }

    # Algorithm 3: ip_hash. Requests are distributed by a hash of the client
    # IP, so visitors from the same IP consistently reach the same backend,
    # which effectively solves session sharing for dynamic pages.
    upstream webhost {
        ip_hash;
        server 192.168.0.5:6666;
        server 192.168.0.7:6666;
    }

    # Algorithm 4: url_hash (requires a third-party module). Requests are
    # distributed by a hash of the URL, so each URL always reaches the same
    # backend, improving the hit rate of backend caches. nginx itself does
    # not support url_hash; install nginx's hash package to use it.
    upstream webhost {
        server 192.168.0.5:6666;
        server 192.168.0.7:6666;
        hash $request_uri;
    }

    # Algorithm 5: fair (requires a third-party module). Smarter than the
    # above: requests are distributed according to the backend's response
    # time, shortest first. nginx itself does not support fair; download
    # nginx's upstream_fair module to use it.

    # virtual host configuration (using algorithm 3: ip_hash)
    server {
        listen 80;
        server_name localhost;
        # reverse proxy for "/"
        location / {
            proxy_pass http://webhost;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            # lets the backend obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # the remaining reverse-proxy settings are optional
            proxy_set_header Host $host;
            client_max_body_size 10m;       # max single-file size a client may request
            client_body_buffer_size 128k;   # max bytes buffered for the request body
            proxy_connect_timeout 90;       # timeout connecting to the backend
            proxy_send_timeout 90;          # timeout sending data to the backend
            proxy_read_timeout 90;          # timeout waiting for the backend response
            proxy_buffer_size 4k;           # buffer for the backend response headers
            proxy_buffers 4 32k;            # response buffers; fine for pages under 32k
            proxy_busy_buffers_size 64k;    # buffer size under load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k; # responses larger than this go to temp files
        }
    }
}
```

The load-balancing demo proceeds as follows.

On 192.168.0.4 (Nginx-Server):

```shell
# create a folder for the configuration file
$ mkdir -p /opt/confs
$ vim /opt/confs/nginx.conf
```

Edit the content as follows:

```nginx
events {
    use epoll;
    worker_connections 65535;
}

http {
    upstream webhost {
        ip_hash;
        server 192.168.0.5:6666;
        server 192.168.0.7:6666;
    }

    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://webhost;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
    }
}
```

Save and exit, then start the load-balancing server 192.168.0.4 (Nginx-Server):

```shell
$ docker run -d -p 8888:80 --name nginx-server \
    -v /opt/confs/nginx.conf:/etc/nginx/nginx.conf --restart always nginx
```

On 192.168.0.5 (Nginx-Node1/Nginx-Web1):

```shell
# create a folder for the web page
$ mkdir -p /opt/html
$ vim /opt/html/index.html
```

Edit the content as follows:

```html
<div>
    <h1>The host is 192.168.0.5(Docker02) - Node 1!</h1>
</div>
```

Save and exit, then start 192.168.0.5 (Nginx-Node1/Nginx-Web1):

```shell
$ docker run -d -p 6666:80 --name nginx-node1 \
    -v /opt/html:/usr/share/nginx/html --restart always nginx
```

On 192.168.0.7 (Nginx-Node2/Nginx-Web2):

```shell
# create a folder for the web page
$ mkdir -p /opt/html
$ vim /opt/html/index.html
```

Edit the content as follows:

```html
<div>
    <h1>The host is 192.168.0.7(Docker03) - Node 2!</h1>
</div>
```

Save and exit, then start 192.168.0.7 (Nginx-Node2/Nginx-Web2):

```shell
$ docker run -d -p 6666:80 --name nginx-node2 \
    -v $(pwd)/html:/usr/share/nginx/html --restart always nginx
```
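The `ip_hash` scheduling used in this demo keys backend selection on the client address, so the same visitor keeps hitting the same node. As a rough sketch of the idea (nginx actually hashes the first octets of the IPv4 address; this example just uses a stable checksum to show the determinism):

```python
import zlib

BACKENDS = ["192.168.0.5:6666", "192.168.0.7:6666"]

def pick_backend(client_ip):
    """Map a client IP to a backend deterministically (the ip_hash idea)."""
    h = zlib.crc32(client_ip.encode())  # stable across runs, unlike hash()
    return BACKENDS[h % len(BACKENDS)]

# The same client always lands on the same node...
assert pick_backend("192.168.0.100") == pick_backend("192.168.0.100")
# ...while different clients may be spread across the nodes.
print({ip: pick_backend(ip) for ip in ("192.168.0.100", "192.168.0.101")})
```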
That concludes this walkthrough of nginx parameter configuration. For more on nginx configuration, see my earlier articles or continue with the related articles below. I hope you find it useful!