1. Problem background: when concurrent connections hit performance bottlenecks
1.1 Case environment
- Server configuration:
  vCPU: 8 cores | Memory: 16 GB | Network bandwidth: 4 Gbps | PPS: 800,000
- Observed anomalies:
  - TIME_WAIT connections piling up (2,464 of them)
  - Lingering CLOSE_WAIT connections (4)
  - Occasional timeouts when establishing new connections
1.2 Initial parameter analysis
The original configuration, viewed via sysctl:

```
= 65535
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 131072
net.ipv4.ip_local_port_range = 1024 61999
```
Key defects: a small half-open (SYN) queue, a narrow ephemeral port range, and tight buffer limits.
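The "narrow port range" defect is quantifiable: every outbound connection to a given destination consumes one ephemeral port, and a port stuck in TIME_WAIT stays unavailable for up to a minute. A quick back-of-the-envelope sketch (the 1,000 conn/s rate is an assumed illustrative load, not a measurement from the case):

```shell
# Usable ephemeral ports in the original 1024-61999 range
low=1024; high=61999
echo "usable ports: $((high - low + 1))"
# At an assumed 1000 new outbound conns/s with a 60 s TIME_WAIT hold,
# demand approaches the whole range
echo "ports tied up at 1000 conn/s: $((1000 * 60))"
# prints:
# usable ports: 60976
# ports tied up at 1000 conn/s: 60000
```

With only ~1,000 ports of headroom, a small traffic spike is enough to exhaust the range, which matches the observed connection-establishment timeouts.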
2. Deep diagnostics: connection status and kernel parameters
2.1 Connection-state monitoring techniques
Real-time TCP state statistics:

```shell
watch -n 1 'netstat -ant | awk '\''/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'\'''
```

Example output:

```
ESTABLISHED 790
TIME_WAIT 2464
SYN_RECV 32   # half-open connections, keep an eye on these!
```

Half-open connection inspection:

```shell
# View SYN_RECV connection details
ss -ntp state syn-recv
# Watch for listen-queue overflows
netstat -s | grep -i 'listen drops'
```
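To see how the state-counting awk works, it can be exercised on canned netstat-style lines instead of a live system (the addresses below are made-up sample data; `sort` is added only so the summary order is stable):

```shell
# Feed sample netstat-style output through the same state-counting awk
printf 'tcp 0 0 10.0.0.1:80 10.0.0.2:5001 ESTABLISHED\ntcp 0 0 10.0.0.1:80 10.0.0.2:5002 TIME_WAIT\ntcp 0 0 10.0.0.1:80 10.0.0.2:5003 TIME_WAIT\n' |
  awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}' | sort
# prints:
# ESTABLISHED 1
# TIME_WAIT 2
```

The last field (`$NF`) of every `tcp` line is the connection state, so the script is simply a frequency count over states.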
2.2 Interpretation of key parameters
| Parameter | Effect | Default-value issue |
|---|---|---|
| tcp_max_syn_backlog | Half-open (SYN) queue length | 8192 (fills up under burst traffic) |
| somaxconn | Full-connection (accept) queue length | Must match the application's backlog parameter |
| tcp_tw_reuse | Quickly reuse TIME_WAIT ports | Off by default (leads to port exhaustion) |
| tcp_rmem / tcp_wmem | Read/write buffer sizes | Maximum is only 6 MB (limits throughput) |
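The somaxconn row deserves emphasis: the kernel silently caps whatever backlog the application passes to listen() at net.core.somaxconn, so raising only one of the two has no effect. A sketch of the clamping arithmetic (the values are illustrative):

```shell
# listen() backlog is silently clamped to net.core.somaxconn
somaxconn=8192      # kernel limit (example value)
app_backlog=65535   # what the application asked for
effective=$(( app_backlog < somaxconn ? app_backlog : somaxconn ))
echo "effective accept-queue length: $effective"
# prints: effective accept-queue length: 8192
```

This is why the Nginx example in section 5.2 must be changed together with the kernel parameter.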
3. Tuning plan: from parameters to practice
3.1 Connection Management Optimization
Resolve the TIME_WAIT pile-up:

```shell
echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/
echo "net.ipv4.tcp_max_tw_buckets = 262144" >> /etc/
echo "net.ipv4.ip_local_port_range = 1024 65000" >> /etc/
```
Shorten connection recycling time:

```shell
echo "net.ipv4.tcp_fin_timeout = 30" >> /etc/
```
3.2 Queue and Buffer Optimization
Expand the connection queues:

```shell
echo "net.ipv4.tcp_max_syn_backlog = 65535" >> /etc/
echo "net.core.somaxconn = 65535" >> /etc/
echo "net.core.netdev_max_backlog = 10000" >> /etc/
```
Adjust the memory buffers:

```shell
cat >> /etc/ <<EOF
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF
```
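The 16 MB maxima are not arbitrary: they roughly cover the bandwidth-delay product (BDP) of the case machine's 4 Gbps link. A sketch of the arithmetic, assuming a 30 ms round-trip time (the RTT is an assumed figure for illustration, not from the case):

```shell
# Bandwidth-delay product: bytes in flight needed to keep a 4 Gbps link busy
bits_per_sec=$((4 * 1000 * 1000 * 1000))
rtt_ms=30   # assumed round-trip time
bdp_bytes=$(( bits_per_sec / 8 * rtt_ms / 1000 ))
echo "BDP: $bdp_bytes bytes"
# prints: BDP: 15000000 bytes
```

A single flow at that RTT can need ~15 MB of in-flight data, so the previous 6 MB cap would throttle throughput while the 16777216-byte ceiling leaves headroom.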
3.3 Keepalive and timeout optimization
```shell
echo "net.ipv4.tcp_keepalive_time = 600" >> /etc/
echo "net.ipv4.tcp_keepalive_intvl = 30" >> /etc/
```
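With these values, a dead peer is detected after the idle timer expires plus tcp_keepalive_probes failed probes spaced tcp_keepalive_intvl apart (9 probes is the kernel default, since the snippet above does not change it):

```shell
# Worst-case dead-peer detection time with the values above
keepalive_time=600   # seconds of idle before probing starts
keepalive_intvl=30   # seconds between probes
keepalive_probes=9   # kernel default for tcp_keepalive_probes
echo "dead peer detected after: $(( keepalive_time + keepalive_intvl * keepalive_probes )) s"
# prints: dead peer detected after: 870 s
```

If faster cleanup of dead connections matters, tcp_keepalive_probes can be lowered as well; the trade-off is more spurious drops on flaky networks.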
4. Verification and monitoring
4.1 Real-time monitoring script
Connection status dashboard:

```shell
#!/bin/bash
while true; do
  clear
  date
  echo "---- TCP states ----"
  netstat -ant | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
  echo "---- Listen queues ----"
  ss -ltn | awk 'NR>1 {print "Listen queue: Recv-Q="$2", Send-Q="$3}'
  echo "---- Port usage ----"
  echo "Used ports: $(netstat -ant | grep -v LISTEN | awk '{print $4}' | cut -d: -f2 | sort -u | wc -l)/$((65000-1024))"
  sleep 5
done
```
Kernel alerting rules (Prometheus example)
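A minimal sketch of such a rule, assuming Prometheus scrapes node_exporter with its netstat collector enabled; the `node_netstat_TcpExt_ListenOverflows` metric name is an assumption based on common exporter defaults and may differ by version:

```yaml
groups:
  - name: tcp-tuning
    rules:
      - alert: AcceptQueueOverflow
        # Fires when the accept (full-connection) queue overflowed in the last 5 min
        expr: increase(node_netstat_TcpExt_ListenOverflows[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "TCP accept queue overflowing on {{ $labels.instance }}"
```

A companion rule on SYN-queue drops (e.g. a ListenDrops counter) would cover the half-open side as well.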
4.2 Stress testing suggestions
Use wrk to simulate high concurrency:

```shell
wrk -t16 -c10000 -d60s http://service:8080
```
Key indicators to monitor:
- Fluctuations in the SYN_RECV count
- Drop counters in the netstat -s output
- Memory usage (free -m)
5. Pitfall-avoidance guide
5.1 Common Mistakes
- Blindly enabling tcp_tw_recycle: it breaks connections from clients behind NAT (the parameter was removed in Linux 4.12).
- Oversized buffers causing OOM: tcp_mem must be sized to the machine's memory:

```shell
# Calculate a safe value (unit: pages, 1 page = 4 KB)
echo $(( $(free -m | awk '/Mem:/ {print $2}') * 1024 / 4 / 3 )) >> /proc/sys/net/ipv4/tcp_mem
```
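Plugging the 16 GB case machine into that formula gives a concrete ceiling (a sketch of the arithmetic only; `free -m` is replaced by the fixed value it would report on that machine):

```shell
# One third of 16 GB, expressed in 4 KB pages, per the formula above
mem_mb=16384                       # what `free -m` reports for 16 GB of RAM
pages=$(( mem_mb * 1024 / 4 / 3 ))
echo "tcp_mem ceiling: $pages pages ($(( pages * 4 / 1024 )) MB)"
# prints: tcp_mem ceiling: 1398101 pages (5461 MB)
```

Note that the heredoc in section 3.2 sets a tcp_mem maximum of 16777216 pages, well above this ceiling; on a 16 GB machine the safety formula suggests staying much lower.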
5.2 Parameter dependencies
- somaxconn must be matched by the application-layer backlog; for example, Nginx needs to be adjusted in step:

```nginx
listen 80 backlog=65535;
```
6. Summary
Through the tuning practice in this article, we achieved:
- TIME_WAIT connections reduced by 70%
- Maximum concurrent connections raised to 30,000+
- Network throughput increased 2x