PHP and Nginx High-Concurrency Tuning

Linux kernel level

Using CentOS 7.0 as an example:

```shell
# Allow a larger backlog of pending connections on listening sockets
echo 50000 > /proc/sys/net/core/somaxconn
# Fast recycling of TIME_WAIT connections
# (note: tcp_tw_recycle is known to misbehave for clients behind NAT,
#  and was removed entirely in Linux 4.12)
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
# Reuse TIME_WAIT connections
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
# Do not defend against SYN floods (disable SYN cookies)
echo 0 > /proc/sys/net/ipv4/tcp_syncookies
```
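Echoing into /proc takes effect immediately but does not survive a reboot. The persistent equivalent is to add the same keys to /etc/sysctl.conf and run `sysctl -p`; the key names mirror the /proc paths:

```conf
# /etc/sysctl.conf -- persistent form of the settings above
net.core.somaxconn = 50000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0
```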

Nginx tuning

worker_processes

Set worker_processes to 1-2x the number of CPU cores, typically 4 or 8; beyond 8 the gains are marginal.

Be careful here: opening too many worker processes increases CPU overhead and drives CPU usage up.

keepalive_timeout

Under high concurrency, set this to 0.

However, file uploads need a persistent connection, so keep this in mind during development and split that traffic off into its own service.

worker_connections

Sets the maximum number of connections each worker process may open. Set it as high as practical, e.g. 20480.

worker_rlimit_nofile

Increase this to a value greater than worker_processes * worker_connections. Effectively it raises the maximum open-file limit for the user the worker processes run as.
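Taken together, the Nginx directives above would look roughly like the sketch below. The concrete figures (8 workers, 204800 files) are sample values consistent with the ranges given in this article, not prescriptions:

```nginx
worker_processes  8;              # 1-2x the number of CPU cores
worker_rlimit_nofile  204800;     # > worker_processes * worker_connections

events {
    worker_connections  20480;    # per-worker connection cap
}

http {
    # high-concurrency setting; note it breaks long-lived uploads,
    # so keep upload endpoints on a separately configured vhost
    keepalive_timeout  0;
}
```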

php-fpm

emergency_restart*

```ini
; If 10 child processes die within 60 seconds, restart php-fpm --
; guards against interruptions caused by buggy PHP code
emergency_restart_threshold = 10
emergency_restart_interval = 60
```

process.max

The global cap on php-fpm processes. A php-fpm process uses roughly 15-40 MB of memory, so the right value has to be derived from your actual situation; I set it to 512 here.
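As a quick sanity check on that number, the sizing arithmetic can be sketched as below. The function name and the 1 GB reserve are illustrative assumptions, not from the article; measure your real per-worker RSS before committing to a value:

```python
# Rough php-fpm sizing: how many workers fit in the memory you can spare.
def max_fpm_workers(total_mb: int, reserved_mb: int, per_worker_mb: int) -> int:
    """Workers that fit after reserving memory for the OS and other services."""
    return (total_mb - reserved_mb) // per_worker_mb

# 4 GB box, reserve 1 GB for the OS/nginx, 15-40 MB per php-fpm worker:
print(max_fpm_workers(4096, 1024, 15))  # lightweight workers -> 204
print(max_fpm_workers(4096, 1024, 40))  # heavyweight workers -> 76
```

The result swings widely with the per-worker figure, which is exactly why the article says to derive the value from measurements rather than a formula.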

pm.max_children

The maximum number of child processes a pool may spawn; do not exceed process.max.

pm.max_requests

The maximum number of requests each child serves before it is respawned (which helps contain memory leaks); set to 2048.
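Pulled together, the php-fpm values above might look like this in the configuration. The `[www]` pool name is the stock default and the static pm mode is an assumption; the article does not specify either:

```ini
; php-fpm.conf (global section)
process.max = 512
emergency_restart_threshold = 10
emergency_restart_interval = 60

; pool configuration -- pool name and pm mode assumed
[www]
pm = static
pm.max_children = 512      ; must not exceed process.max
pm.max_requests = 2048     ; recycle each child after 2048 requests
```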

Disable the slow-request log

```ini
;request_slowlog_timeout = 0
;slowlog = var/log/slow.log
```

Results

Environment

Hardware

  • i5-3470 CPU
  • 4 GB RAM

Software

  • PHP 7.1.30
  • ThinkPHP 5.1.35
  • Nginx

Workload description

ab hits the ThinkPHP app's front page. TP runs with forced routing enabled and no route configured for the front page, so requests fall through to the miss route and get the miss message back without touching the DB. The miss response is as follows:

```json
{"code":-8,"msg":"api不存在"}
```

ab test results: 10,000 concurrent clients, 10 requests each, 100,000 requests total

```
D:\soft\phpstudy\PHPTutorial\Apache\bin>ab -c 10000 -n 100000 http://fs_server.test/
This is ApacheBench, Version 2.3 <$Revision: 1748469 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking fs_server.test (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:        nginx
Server Hostname:        fs_server.test
Server Port:            80

Document Path:          /
Document Length:        32 bytes

Concurrency Level:      10000
Time taken for tests:   492.928 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      19500000 bytes
HTML transferred:       3200000 bytes
Requests per second:    202.87 [#/sec] (mean)
Time per request:       49292.784 [ms] (mean)
Time per request:       4.929 [ms] (mean, across all concurrent requests)
Transfer rate:          38.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   6.6      2    1365
Processing: 18749 46094 8055.0  49145   52397
Waiting:    12231 45636 8504.8  48793   51627
Total:      18751 46096 8055.0  49147   52399

Percentage of the requests served within a certain time (ms)
  50%  49147
  66%  49279
  75%  49347
  80%  49386
  90%  49473
  95%  49572
  98%  49717
  99%  50313
 100%  52399 (longest request)
```

No requests were lost; the only issue is the long wall time. Even without touching the DB, each request still runs through the full TP framework stack, so overall this is a reasonable result.

There is an awkward problem here, though: once requests perform MySQL or Redis operations, connection exhaustion in those backing stores causes responses to be dropped, and Nginx returns 5XX errors directly. The preliminary plan is to raise their maximum connection limits; this is still untested.
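For reference, the connection ceilings in question are the standard max_connections (MySQL) and maxclients (Redis) settings. The values below are placeholders and, as noted above, untested:

```conf
# /etc/my.cnf
[mysqld]
max_connections = 2000

# redis.conf
maxclients 10000
```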

A few handy commands to check how many fpm processes are running, their total memory footprint, which processes are currently handling requests, and so on:

```shell
# Check whether the php-fpm worker processes are sufficient;
# if they are not, it is as if the tuning were never enabled.
# Count running worker processes:
ps -ef | grep 'php-fpm' | grep -v 'master' | grep -v 'grep' | wc -l
# Count workers currently in use, i.e. handling requests:
netstat -anp | grep 'php-fpm' | grep -v 'LISTEN' | grep -v 'php-fpm.conf' | wc -l
# Total memory footprint (RSS, in KB):
ps auxf | grep php | grep -v grep | grep -v master | awk '{sum+=$6} END {print sum}'
```
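Alternatively, php-fpm ships a built-in status page that reports the same numbers (active/idle/total processes, accepted connections, slow requests) in one place. Enabling it is a one-line pool setting; the endpoint must still be exposed through your fastcgi configuration and should be access-restricted:

```ini
; in the pool configuration (e.g. the [www] pool)
pm.status_path = /status
```

Then `curl http://127.0.0.1/status` returns the pool statistics (append `?json` or `?full` for machine-readable or per-process output).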