1. NGINX Architecture & Working Principle
1. Master/Worker process model
   - Master: responsible only for loading configuration and managing the workers (smooth reloads, signal handling).
   - Worker: event-driven; usually one per CPU core (or slightly more), each handling massive numbers of concurrent connections efficiently via epoll/kqueue (see the configuration sketch after this list).
2. Non-blocking I/O & asynchronous events
- Each Worker can manage tens of thousands of connections within one thread, greatly reducing context switching.
3. Modular design
   The HTTP, Stream, Mail and other subsystems are built from modules that can be compiled in on demand:
   - ngx_http_upstream_module (load balancing)
   - ngx_http_stub_status_module (status monitoring)
   - Third-party modules such as health-check modules, Lua scripting extensions, etc.
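As a quick illustration of the process and event model described above, here is a minimal top-level configuration sketch; the specific values (worker count, connection limit) are illustrative assumptions, not recommendations.

```nginx
# Minimal sketch of the Master/Worker and event model (values are illustrative)
worker_processes auto;            # one worker per CPU core by default

events {
    use epoll;                    # epoll on Linux, kqueue on BSD/macOS
    worker_connections 10240;     # max concurrent connections per worker
}

http {
    # HTTP subsystem: upstream, stub_status and other modules plug in here
}
```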
2. Windows platform deployment
Applicable scenarios: internal enterprise testing, or working alongside IIS (via ARR, AJP and similar protocols), where rapid deployment on Windows is convenient.
2.1 Download and decompression
- Visit the official nginx.org download page.
- Get the stable-version ZIP package, e.g. nginx-1.24.0.
- Unzip it to the C:\nginx\ directory.
2.2 Register as a Windows service (optional)
Use NSSM (Non-Sucking Service Manager):

```
# Assuming nssm.exe has been placed in C:\tools\nssm\
C:\tools\nssm\nssm install nginx "C:\nginx\nginx.exe"

# Start the service
net start nginx
```
2.3 Configuration file location
- Main configuration: C:\nginx\conf\nginx.conf
- Logs: C:\nginx\logs\ (access.log / error.log)
2.4 Common Commands
```
# Start
C:\nginx\nginx.exe

# Smooth reload (re-read configuration)
C:\nginx\nginx.exe -s reload

# Stop
C:\nginx\nginx.exe -s quit
```
Tip: on Windows, -s reload may not be as reliable as on Linux; falling back to a restart of the NSSM service can still achieve near-zero downtime.
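A minimal sketch of that fallback, assuming the service was registered under the name nginx with NSSM as in section 2.2:

```
# Attempt a graceful reload first
C:\nginx\nginx.exe -s reload

# If the reload misbehaves, restart the registered Windows service instead
C:\tools\nssm\nssm restart nginx

# Or with the built-in service commands
net stop nginx
net start nginx
```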
3. Linux platform deployment
Applicable scenarios: the first choice for production environments. This article uses Ubuntu as the example; CentOS/RHEL works much the same way.
3.1 Installation (package manager / source compilation)
3.1.1 Package manager installation

```bash
# Ubuntu / Debian
sudo apt update
sudo apt install -y nginx

# CentOS / RHEL
sudo yum install -y epel-release
sudo yum install -y nginx
```
3.1.2 Source compilation (custom modules)

```bash
# Install build dependencies
sudo apt install -y build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev \
    libssl-dev

# Download the source code and compile it
wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar zxvf nginx-1.24.0.tar.gz && cd nginx-1.24.0
./configure \
    --prefix=/usr/local/nginx \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_stub_status_module \
    --with-stream \
    --with-stream_ssl_module
make && sudo make install
```
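After either installation path it is worth verifying the binary and the compiled-in modules; the paths below assume the defaults used above.

```bash
# Package-manager install
nginx -v                               # show the version
nginx -V                               # show configure arguments / compiled-in modules

# Source build (prefix from the ./configure above)
/usr/local/nginx/sbin/nginx -V
sudo /usr/local/nginx/sbin/nginx -t    # syntax-check the configuration
```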
3.2 Service Management
Systemd
```bash
sudo systemctl enable nginx
sudo systemctl start nginx
sudo systemctl reload nginx
sudo systemctl status nginx
```
Configuration Directory
- /etc/nginx/nginx.conf: main configuration
- /etc/nginx/conf.d/*.conf: virtual host / load-balancing snippets (see the example snippet below)
- /usr/share/nginx/html/: default static content root
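As an illustration of the drop-in layout, a hypothetical /etc/nginx/conf.d/example.conf might look like this; the file name, server name and root are placeholders, not values from the original article.

```nginx
# /etc/nginx/conf.d/example.conf -- hypothetical virtual-host snippet
server {
    listen 80;
    server_name example.local;      # placeholder name

    root  /usr/share/nginx/html;    # default static root
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```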
4. Core load balancing configuration
The following examples all live inside the http { … } block.
4.1 Basic Round-Robin (polling)

```nginx
upstream backend {
    server srv1.example.com:80;
    server srv2.example.com:80;
    server srv3.example.com:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```

- Characteristics: round-robin is the default; new connections are distributed to the servers in turn, and the server entries need no extra directives.
4.2 Least-Connected (least_conn)

```nginx
upstream backend {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
```

- Scenario: when request durations vary widely or node performance differs, balancing dynamically by connection count is fairer.
4.3 IP-Hash (based on client IP)

```nginx
upstream backend {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
```

- Session persistence: the same client IP always hits the same backend, suitable for stateful applications (sessions, shopping carts, etc.).
4.4 Weighting

```nginx
upstream backend {
    server srv1.example.com weight=4;
    server srv2.example.com weight=1;
    server srv3.example.com weight=1;
}
```

- Meaning: out of every 6 requests, srv1 handles 4 and each of the other two handles 1. Weights can be combined with any scheduling algorithm (round-robin, least_conn, ip_hash).
4.5 Backup nodes & forced offline

```nginx
upstream backend {
    server srv1.example.com;
    server srv2.example.com backup;   # used only when the primary servers are unavailable
    server srv3.example.com down;     # permanently offline (manual maintenance)
}
```
5. Health checks and failure recovery
5.1 Passive health check (out of the box)
- max_fails: number of failed attempts (within fail_timeout) before the server is considered unavailable
- fail_timeout: both the window for counting failures and how long the server stays disabled

```nginx
upstream backend {
    server srv1.example.com max_fails=2 fail_timeout=15s;
    server srv2.example.com;
}
```
Process:
- If 2 requests to srv1 time out or are dropped within the window, srv1 is marked unhealthy;
- It is taken out of rotation for 15s;
- After 15s, the next new request is allowed to probe it, and it returns to rotation on success.
5.2 Active health checks (NGINX Plus / third-party modules)
- NGINX Plus: built-in health_check directive with a visualization/status API (see the sketch below).
- Open source: compile in nginx-upstream-check-module, or use Lua scripts to probe the backends over HTTP at intervals.
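As a rough illustration of the NGINX Plus variant, a hedged sketch follows; the directive and its parameters come from the commercial product, and the interval, thresholds and /healthz URI are illustrative assumptions.

```nginx
# NGINX Plus active health-check sketch (illustrative parameters)
upstream backend {
    zone backend 64k;                 # shared-memory zone required for health checks
    server srv1.example.com;
    server srv2.example.com;
}

server {
    location / {
        proxy_pass http://backend;
        health_check interval=5s fails=3 passes=2 uri=/healthz;
    }
}
```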
6. Performance Tuning
6.1 Connection and Buffer
```nginx
# events {} block
worker_connections 10240;      # maximum number of connections per worker

# http {} top level
keepalive_timeout  65s;        # client keep-alive timeout
keepalive_requests 100;        # maximum requests per keep-alive connection
```
- proxy_buffers: tune the buffers used for upstream responses (see the sketch below)
- proxy_busy_buffers_size: optimization for large-response scenarios
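A hedged example of what such buffer tuning might look like inside a proxied location; the sizes are illustrative, not recommendations.

```nginx
location / {
    proxy_pass http://backend;

    proxy_buffer_size       16k;    # buffer for the response headers
    proxy_buffers        8  16k;    # buffers for the response body
    proxy_busy_buffers_size 32k;    # cap on buffers busy sending to the client
}
```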
6.2 Timeout configuration
```nginx
server {
    proxy_connect_timeout 3s;    # timeout for establishing the TCP connection
    proxy_send_timeout    10s;   # timeout for sending the request to the backend
    proxy_read_timeout    30s;   # timeout for reading the backend response
}
```
6.3 Kernel & Network Tuning
- somaxconn: sysctl -w net.core.somaxconn=65535
- tcp_tw_reuse / tcp_fin_timeout: speed up recycling of TIME_WAIT sockets
- ulimit -n: raise the file-descriptor limit (see the sketch after this list)
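A hedged sketch of applying these settings; the exact values depend on your workload, and persistence would go through /etc/sysctl.conf (or /etc/sysctl.d/) and /etc/security/limits.conf or the systemd unit's LimitNOFILE.

```bash
# Apply kernel parameters at runtime (persist them in /etc/sysctl.conf or /etc/sysctl.d/)
sudo sysctl -w net.core.somaxconn=65535
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=30

# Raise the open-file limit for the current shell
ulimit -n 65535
```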
7. Monitoring & Logging
7.1 Stub Status
Enable it inside a server{} block:
```nginx
location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny  all;
}
```
Output example:
```
Active connections: 291
server accepts handled requests
 15394 15394 54123
Reading: 0 Writing: 1 Waiting: 290
```
7.2 Log format customization
```nginx
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                'rt=$request_time ua="$upstream_addr" us="$upstream_response_time"';

access_log /var/log/nginx/access.log main;
```
- rt ($request_time) and us ($upstream_response_time) can be used for latency analysis and backend performance monitoring (see the example below).
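For instance, an admittedly crude way to pull the largest request times out of such a log, assuming the format above:

```bash
# List the 10 largest rt= (request time) values recorded in access.log
grep -o 'rt=[0-9.]*' /var/log/nginx/access.log | sort -t= -k2 -rn | head -n 10
```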
7.3 Prometheus Integration
- Use nginx-prometheus-exporter to scrape the stub_status endpoint (see the sketch below);
- Combine with Grafana to build real-time dashboards.
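A minimal sketch of running the exporter against the stub_status endpoint from section 7.1; the flag name follows the nginxinc/nginx-prometheus-exporter project and may vary between releases, so treat it as an assumption.

```bash
# Scrape the local stub_status page and expose Prometheus metrics (default listen port 9113)
nginx-prometheus-exporter --nginx.scrape-uri=http://127.0.0.1/nginx_status
```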
8. Common troubleshooting
| Scenario | Symptom | Troubleshooting approach |
|---|---|---|
| Backend 502 Bad Gateway | 502 error pages | Check whether the backend service is running; look for "upstream prematurely closed" in the error log |
| 504 Gateway Timeout | Requests time out | Increase proxy_read_timeout; profile backend performance |
| Configuration reload fails | invalid number… | Syntax check with nginx -t; watch for missing semicolons and braces |
| Connections exhausted | 500 errors / requests stall | Raise worker_connections; monitor Active connections via stub_status |
9. Advanced features & extensions
- Dynamic DNS resolution: the resolve parameter on server entries plus a resolver directive
- gRPC & HTTP/2: http2 on; with grpc_pass grpc://backend;
- Caching: proxy_cache_path + proxy_cache (see the sketch below)
- Security: IP whitelists, WAF (ModSecurity), TLS offloading and hardware acceleration
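As an illustration of the caching item above, a hedged proxy_cache sketch; the cache path, zone name, sizes and validity period are illustrative assumptions.

```nginx
http {
    # Cache storage: 10 MB of keys in shared memory, up to 1 GB on disk
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;

    server {
        location / {
            proxy_pass http://backend;
            proxy_cache static_cache;
            proxy_cache_valid 200 302 10m;                    # cache successful responses for 10 minutes
            add_header X-Cache-Status $upstream_cache_status; # expose cache HIT/MISS for debugging
        }
    }
}
```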
Summary
This article systematically covers:
- NGINX architecture and event model
- Installation and service management on both Windows and Linux
- Core scheduling algorithms: round-robin, least connections, IP hash, weighting, backup/offline
- Health checks (passive & active)
- Performance and kernel tuning
- Monitoring, log collection and visualization
- Common troubleshooting and advanced functions
Through the practices above, you can quickly build a highly available, scalable HTTP load-balancing layer in your own development environment or production cluster. Going forward, it can be combined with a service mesh, WAF, security auditing and canary (grayscale) releases to achieve more advanced traffic control and operations automation.
The above reflects my personal experience; I hope it gives you a useful reference, and I appreciate your continued support.