1. View real-time logs (stream monitoring)
tail -f /var/log/nginx/access.log   # View the access log in real time
tail -f /var/log/nginx/error.log    # View the error log in real time
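The live stream can also be filtered to watch only specific events as they happen; a small sketch (assuming the default combined format, where the status code is space-delimited):
# Watch only 404 responses as they arrive (--line-buffered keeps output flowing through the pipe)
tail -f /var/log/nginx/access.log | grep --line-buffered ' 404 '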
2. Status code distribution
# Count occurrences of each status code
awk '{print $9}' access.log | sort | uniq -c | sort -nr
# Count 404 errors per URL
grep ' 404 ' access.log | awk '{print $7}' | sort | uniq -c | sort -nr
# List the IPs and timestamps of 500 errors
grep ' 500 ' access.log | awk '{print $1, $4}'
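Where percentages are more useful than raw counts, one awk pass can compute each code's share; a sketch assuming the status code is field $9 as above:
# Status code distribution as percentages
awk '{codes[$9]++} END {for (c in codes) printf "%s %d %.1f%%\n", c, codes[c], codes[c]*100/NR}' access.log | sort -k2 -nr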
3. Analyze popular URLs/resources
# Top 10 most requested URLs
awk '{print $7}' access.log | sort | uniq -c | sort -nr | head -10
# The 10 largest individual responses by bytes sent
awk '{print $7, $10}' access.log | sort -k2 -nr | head -10
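The second command lists individual large responses; to rank URLs by their total transferred bytes instead, a sketch:
# Total bytes served per URL, top 10, converted to MB
awk '{bytes[$7] += $10} END {for (u in bytes) printf "%.1f MB %s\n", bytes[u]/1048576, u}' access.log | sort -nr | head -10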
4. Analyze client IPs
# Count the number of distinct client IPs
awk '{print $1}' access.log | sort | uniq | wc -l
# Top 10 most active IPs
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head -10
# View the full access history of a specific IP
grep '192.168.1.100' access.log
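To flag unusually active clients (for example, candidates for rate limiting), a threshold can be applied in the same pass; a sketch with an illustrative cutoff of 1000 requests:
# IPs with more than 1000 requests (threshold is an example, tune it to your traffic)
awk '{c[$1]++} END {for (ip in c) if (c[ip] > 1000) print c[ip], ip}' access.log | sort -nr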
5. Analyze request times
# Extract the response time field (requires $request_time in the log format; see section 12)
awk '{print $NF}' access.log | sort -n | tail -10   # the 10 slowest requests
# Calculate the average response time
awk '{sum += $NF} END {print "Average:", sum/NR}' access.log
# Find requests that took longer than 1 second
awk '$NF > 1 {print $4, $7, $NF}' access.log
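Averages hide outliers, so a percentile is often more telling; a rough sketch of the 95th-percentile response time, again assuming $request_time is the last field:
# Approximate p95 response time
awk '{print $NF}' access.log | sort -n | awk '{t[NR]=$1} END {print "p95:", t[int(NR*0.95)]}'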
6. Analyze access by time period
# Count requests by hour
awk '{print substr($4, 14, 2)}' access.log | sort | uniq -c
# Count requests in a specific time window (e.g. 10:00-11:00)
grep ' \[01/Jan/2025:10:' access.log | wc -l
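The same substr trick works per day, since the date sits at a fixed offset inside $time_local; a sketch:
# Requests per day (the date occupies characters 2-12 of the [time_local field)
awk '{print substr($4, 2, 11)}' access.log | sort | uniq -c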
7. Analyze User-Agents
# Count requests per browser/crawler (the full User-Agent is the sixth quote-delimited field)
awk -F'"' '{print $6}' access.log | sort | uniq -c | sort -nr
# Find crawler requests (-E enables | alternation in the pattern)
grep -iE 'bot|spider' access.log
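Combining the two commands ranks the crawlers themselves; a sketch:
# Top 5 crawlers by request count
grep -iE 'bot|spider' access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -nr | head -5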
8. Analyze request methods
# Count how often each HTTP method is used ($6 is the opening quote plus the method)
awk '{print $6}' access.log | tr -d '"' | sort | uniq -c | sort -nr
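Methods outside the usual few are worth inspecting, since scanners often probe with OPTIONS, PUT, or WebDAV verbs; a sketch:
# Show requests whose method is not GET, POST, or HEAD
grep -vE '"(GET|POST|HEAD) ' access.log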
9. Analyze referer sources
# Count external referer sources (skipping empty "-" referers)
grep -v '"-"' access.log | awk '{print $11}' | sort | uniq -c | sort -nr
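To aggregate by referring site instead of full URL, cut the host out of the referer (the fourth quote-delimited field); a sketch:
# Top 10 referring domains
grep -v '"-"' access.log | awk -F'"' '{print $4}' | awk -F/ '{print $3}' | sort | uniq -c | sort -nr | head -10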
10. Combined analysis (examples)
# The 10 slowest requests and their URLs, grouped by URL
awk '{print $7, $NF}' access.log | sort -k2 -nr | head -10 | sort -k1
# Analyze the behavior of a specific IP (e.g. to check whether it is a scanner)
grep '192.168.1.100' access.log | awk '{print $7}' | sort | uniq -c
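Another useful combination cross-references an IP with its 404s, since many misses usually mean probing; a sketch:
# Which URLs did this IP request that returned 404? Many misses suggest scanning
grep '192.168.1.100' access.log | awk '$9 == 404 {print $7}' | sort | uniq -c | sort -nr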
11. Custom complex analysis with awk
# Count requests and bytes transferred per minute
awk '{split($4, a, ":"); minute = a[2]":"a[3]; count[minute]++; bytes[minute] += $10} END {for (m in count) print m, count[m], bytes[m]}' access.log | sort
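The same split pattern also finds the busiest minutes, which helps spot traffic spikes; a sketch:
# Top 5 busiest minutes by request count
awk '{split($4, a, ":"); m = a[2]":"a[3]; c[m]++} END {for (x in c) print c[x], x}' access.log | sort -nr | head -5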
12. Log format description
The commands above assume the Nginx log uses the default combined format:
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
If you use a custom format, adjust the field indexes in the commands accordingly.
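For instance, the response-time commands in section 5 need $request_time appended as the last field. A minimal sketch (the format name timed and the log path are illustrative):
log_format timed '$remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" $request_time';
access_log /var/log/nginx/access.log timed;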
13. Performance optimization suggestions
For very large log files, filter with grep/sed first and then analyze:
grep '2025:05:14' access.log | awk '{print $7}' | sort | uniq -c
Archive and compress old logs regularly to narrow the scope of each analysis.
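Compressed archives can still be searched in place with zcat/zgrep, with no need to unpack them first; a quick sketch assuming the usual logrotate naming:
# Status code distribution across all rotated, gzipped logs
zcat /var/log/nginx/access.log.*.gz | awk '{print $9}' | sort | uniq -c | sort -nr
# Search directly inside one compressed archive
zgrep ' 500 ' /var/log/nginx/access.log.2.gz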
Consider using ELK Stack or Graylog for real-time log analysis and visualization.
With these commands you can quickly locate performance bottlenecks, security threats, and user behavior patterns, making day-to-day operations and maintenance far more efficient.