High-Concurrency Python: Combining Redis with a Service Interface
For the Xadserver interface to achieve high concurrency together with Redis, three quantities have to line up.
The three concepts to keep in mind up front are: the Xadserver concurrency level, the maximum number of connections Redis supports, and the maximum number of connections in the connection pool. (If you later need to change Redis's maximum connection count on the fly, without restarting Redis and disturbing the live service, you can set it in real time with config set maxclients 65535.)
First, let's check the maximum number of connections our Redis currently supports:
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2000"
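The same check and live adjustment can also be done from Python via redis-py's config_get/config_set methods, which is convenient inside test scripts. A minimal sketch, assuming the same local Redis instance used throughout this article:
# coding=utf-8
import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# CONFIG GET returns a dict, e.g. {'maxclients': '2000'}
print(r.config_get('maxclients'))

# Raise the limit at runtime, without restarting Redis.
# 65535 is only the example value from the text above; the effective limit
# is still bounded by the file descriptors available to the Redis process.
r.config_set('maxclients', 65535)
print(r.config_get('maxclients'))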
Experiment 1
To illustrate the relationship between the pool's maximum connection count and the concurrency level, we start with the following experiment.
# coding=utf-8
from gevent import monkey
monkey.patch_all()
import gevent
import redis
import time

# Connection pool capped at 10 connections
Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=10, db=2)
pr = redis.Redis(connection_pool=Pool, decode_responses=True)


def getFunc(key):
    """Fetch a key from Redis."""
    v = pr.get('__h5_campaign_info__122671')
    print(v)


def call_gevent(count):
    """Use gevent to simulate high concurrency."""
    begin_time = time.time()
    run_gevent_list = []
    num = 1
    for i in range(count):
        print('--------------%d--Test-------------' % i)
        mykey = 'test' + str(num)
        run_gevent_list.append(gevent.spawn(getFunc, mykey))
        num = num + 1
    gevent.joinall(run_gevent_list)
    end = time.time()
    print('Test concurrency: ' + str(count))
    print('Average time per request (s):', (end - begin_time) / count)
    print('Total elapsed time (s):', end - begin_time)


if __name__ == '__main__':
    # Number of concurrent requests
    test_count = 20  # Vary this to see the effect at different concurrency levels; I also tested with 7000, 10000 and 20000 (remember to raise Redis's max connections to 30000 and restart Redis first).
    while 1:
        call_gevent(count=test_count)
As the code shows, the pool's maximum connection count is set to 10 while the concurrency level is set to 20. Running the test at this concurrency fails with the following exception:
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 716, in gevent._greenlet.Greenlet.run
File "/Users/liquid/PycharmProjects/aliyun_sls/redis高并发.py", line 19, in getFunc
v = pr.get('__h5_campaign_info__122671')
File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/client.py", line 1207, in get
return self.execute_command('GET', name)
File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/client.py", line 752, in execute_command
connection = pool.get_connection(command_name, **options)
File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/connection.py", line 970, in get_connection
connection = self.make_connection()
File "/Users/liquid/PycharmProjects/aliyun_sls/venv/lib/python2.7/site-packages/redis/connection.py", line 986, in make_connection
raise ConnectionError("Too many connections")
ConnectionError: Too many connections
2019-04-16T06:55:17Z <Greenlet "Greenlet-2" at 0x102051050: getFunc('test13')> failed with ConnectionError
Checking Redis shows 11 connected clients; removing the one pre-existing connection leaves 10, so the greenlets beyond the pool size were never given a connection at all:
127.0.0.1:6379> info clients
# Clients
connected_clients:11
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
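The same client count can be polled from Python instead of a separate redis-cli session, which is handy when repeating these experiments. A small sketch, again against the local instance (note that the monitoring client itself adds one connection to the count):
# coding=utf-8
import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# INFO clients as a dict; connected_clients includes this monitoring connection too.
clients = r.info('clients')
print(clients['connected_clients'])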
Experiment 2
Next, we raise the pool's maximum connection count to 20 and try again.
Test concurrency: 20
('Average time per request (s):', 7.859468460083007e-05)
('Total elapsed time (s):', 0.0015718936920166016)
The concurrent requests are now handled normally and nothing errors out.
Checking Redis afterwards shows 21 connected clients; removing the one pre-existing connection leaves 20, so every concurrent request received a connection from the pool:
127.0.0.1:6379> info clients
# Clients
connected_clients:21
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
Experiment 3
Next we raise the pool's maximum connection count to 30 and run the code again; it still completes without errors:
Test concurrency: 20
('Average time per request (s):', 7.630586624145508e-05)
('Total elapsed time (s):', 0.0015261173248291016)
Conclusion:
When accessing Redis through a connection pool, the pool's maximum connection count must be greater than or equal to the concurrency level (with both staying below the maximum number of connections Redis supports). Otherwise, the requests that exceed the pool size fail with an error because no connection can be allocated to them (see https://redis.io/topics/clients).
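If sizing the pool up to the peak concurrency is impractical, redis-py also provides BlockingConnectionPool, which makes callers wait for a free connection (and raises only after a timeout) instead of failing immediately with "Too many connections". A minimal sketch of that variant, with illustrative numbers rather than the settings used in the experiments:
# coding=utf-8
import redis

# Requests beyond max_connections queue for up to `timeout` seconds for a free
# connection instead of raising ConnectionError right away.
Pool = redis.BlockingConnectionPool(host='127.0.0.1', port=6379, db=2,
                                    max_connections=10, timeout=5)
pr = redis.Redis(connection_pool=Pool)
print(pr.get('__h5_campaign_info__122671'))
Whether waiting or failing fast is preferable depends on the latency budget of the interface; this only shows the mechanism and was not used in the experiments above.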
The three experiments above did not involve our interface service at all, so the next experiments repeat the test through the interface.
Experiment 4
# coding=utf-8
from gevent import monkey
monkey.patch_all()
import gevent
import requests
import time


def getFunc(key):
    """Hit the interface, which reads the key from Redis."""
    v = requests.get('http://127.0.0.1:91/sss')
    print(v)


def call_gevent(count):
    """Use gevent to simulate high concurrency."""
    begin_time = time.time()
    run_gevent_list = []
    num = 1
    for i in range(count):
        print('--------------%d--Test-------------' % i)
        mykey = 'test' + str(num)
        run_gevent_list.append(gevent.spawn(getFunc, mykey))
        num = num + 1
    gevent.joinall(run_gevent_list)
    end = time.time()
    print('Test concurrency: ' + str(count))
    print('Average time per request (s):', (end - begin_time) / count)
    print('Total elapsed time (s):', end - begin_time)


if __name__ == '__main__':
    # Number of concurrent requests
    test_count = 100  # Vary this to see the effect at different concurrency levels; I also tested with 7000, 10000 and 20000 (remember to raise Redis's max connections to 30000 and restart Redis first).
    while 1:
        call_gevent(count=test_count)
The interface code is as follows:
# coding=utf-8
import json
import traceback
import redis
from flask import Flask

app = Flask(__name__)

Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=50, db=2)
# Client that borrows a connection from the pool for each command
pr = redis.Redis(connection_pool=Pool, decode_responses=True)


@app.route("/sss", methods=["GET", "POST"])
def test_concurrent():
    try:
        pr.get('__h5_campaign_info__111127')
        return json.dumps({'code': 1})
    except Exception:
        traceback.print_exc()
        return json.dumps({'code': 0})
On my local machine this interface handles at most about 2,600 concurrent requests (uwsgi serving it with two worker processes), so the simulated load stays modest: the tests below use a concurrency of 100.
The concurrency level is therefore 100 while the pool's maximum connection count is 50. The expectation is that the 100 requests share the 50 pooled connections, with the extra requests waiting for one of those connections to be released (the pool hands a connection out for each command and takes it back afterwards, which avoids the overhead of opening and closing a Redis connection per request). The checkout/return cycle is sketched below; after that, let's look at how the interface actually responded.
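A sketch of that cycle, roughly what redis-py does for every command when a ConnectionPool is attached (for illustration only; the interface code never needs to do this by hand):
# coding=utf-8
import redis

Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=50, db=2)

conn = Pool.get_connection('GET')     # borrow (or lazily create) a connection
try:
    conn.send_command('GET', '__h5_campaign_info__111127')
    print(conn.read_response())
finally:
    Pool.release(conn)                # return the connection for the next request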
About half of the requests respond as follows (as expected):
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:20:07] "GET /sss HTTP/1.1" 200 -
The other half respond like this (not expected):
Traceback (most recent call last):
File "run.py", line 533, in test_concurrent
pr.get('__h5_campaign_info__111127')
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 880, in get
return self.execute_command('GET', name)
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 570, in execute_command
connection = pool.get_connection(command_name, **options)
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 897, in get_connection
connection = self.make_connection()
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 904, in make_connection
raise ConnectionError("Too many connections")
ConnectionError: Too many connections
Checking Redis at this point shows 51 connected clients; removing the one pre-existing connection leaves 50, which matches the number of connections the pool was allowed to create:
127.0.0.1:6379> info clients
# Clients
connected_clients:51
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
Experiment 5
Now we raise the pool's maximum connection count to 100 and keep the concurrency at 100.
All requests respond successfully:
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:30:58] "GET /sss HTTP/1.1" 200 -
Checking the Redis client count again (minus the one pre-existing connection), the number exactly matches the connections created by the pool:
127.0.0.1:6379> info clients
# Clients
connected_clients:101
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
Experiment 6
We set the pool's maximum connection count to 200 and keep the concurrency at 100.
All requests respond successfully:
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:35:49] "GET /sss HTTP/1.1" 200 -
Checking the Redis client count (minus the one pre-existing connection), the number matches the concurrency level rather than the pool maximum of 200: the pool only creates connections on demand, so no more than 100 were ever needed:
127.0.0.1:6379> info clients
# Clients
connected_clients:101
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
All six experiments so far assumed that the maximum number of connections Redis supports is larger than both the pool's maximum connection count and the concurrency level.
Experiment 7
Now we lower the maximum number of connections Redis supports to 20 and run the test again (the commands used are shown below):
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2000"
127.0.0.1:6379> config set maxclients 20
OK
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "20"
We then set the pool's maximum connection count to 30, keep the concurrency at 30, and run the test once more.
20 of the requests return the following:
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
127.0.0.1 - - [16/Apr/2019 15:42:32] "GET /sss HTTP/1.1" 200 -
The remaining 10 requests return the following:
Traceback (most recent call last):
File "run.py", line 533, in test_concurrent
pr.get('__h5_campaign_info__111127')
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 880, in get
return self.execute_command('GET', name)
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/client.py", line 578, in execute_command
connection.send_command(*args)
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 563, in send_command
self.send_packed_command(self.pack_command(*args))
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 538, in send_packed_command
self.connect()
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 446, in connect
self.on_connect()
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 520, in on_connect
if nativestr(self.read_response()) != 'OK':
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 577, in read_response
response = self._parser.read_response()
File "/Users/liquid/PycharmProjects/Env/lib/python2.7/site-packages/redis/connection.py", line 255, in read_response
raise error
ConnectionError: max number of clients reached
So the 20 requests that fit within the connections Redis will accept return normally, while the other 10 are turned away with Redis's own rejection message ("max number of clients reached").
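Note that both failure modes surface in the client as the same redis.exceptions.ConnectionError, only with different messages: "Too many connections" is raised locally when the pool is exhausted (experiments 1 and 4), while "max number of clients reached" comes back from the server when its maxclients limit is hit (this experiment). A small sketch of catching both in one place:
# coding=utf-8
import redis

Pool = redis.ConnectionPool(host='127.0.0.1', port=6379, max_connections=30, db=2)
pr = redis.Redis(connection_pool=Pool)

try:
    pr.get('__h5_campaign_info__111127')
except redis.exceptions.ConnectionError as e:
    # 'Too many connections'          -> client-side pool ran out of connections
    # 'max number of clients reached' -> server-side maxclients limit was hit
    print('redis rejected the request: %s' % e)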
Putting all seven experiments together, we reach the following conclusions:
1. The maximum number of connections Redis supports must be greater than or equal to both the pool's maximum connection count and the concurrency level (a startup check for this is sketched after the list).
2. The number of connections the pool is allowed to create must be greater than or equal to the concurrency level; otherwise the surplus concurrent requests fail because no connection can be allocated to them.
3. Since constantly creating and destroying Redis connections adds overhead that would hurt the quality of our interface, in the xadserver project we use a Redis connection pool sized so that every concurrent request has a connection available to it.
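Conclusions 1 and 2 can be turned into a simple startup check so that a misconfigured pool is caught before any traffic arrives. A sketch, where POOL_MAX and EXPECTED_CONCURRENCY are hypothetical deployment parameters rather than values taken from the experiments:
# coding=utf-8
import redis

POOL_MAX = 200               # the max_connections passed to the ConnectionPool
EXPECTED_CONCURRENCY = 100   # peak simultaneous requests the service must absorb

r = redis.Redis(host='127.0.0.1', port=6379)
maxclients = int(r.config_get('maxclients')['maxclients'])

# Conclusion 2: every concurrent request must be able to get a pooled connection.
assert POOL_MAX >= EXPECTED_CONCURRENCY, 'connection pool smaller than peak concurrency'
# Conclusion 1: Redis itself must accept at least that many clients.
assert maxclients >= POOL_MAX, 'redis maxclients smaller than the connection pool'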
Points that still need to be confirmed:
1. If the machine's file-descriptor limit and Redis's configured maximum connection count are raised above our concurrency level, will Redis accept and handle the requests as expected? (A quick way to read both numbers before that test is sketched below.)
2. How much does fetching data from Redis in real time inside the interface affect the interface's response time? (This needs to be measured in a full simulation of the production environment.)
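For the first point, the two numbers involved (the file-descriptor limit in effect for a process on the machine, and Redis's maxclients) can at least be read programmatically before running that test. A sketch using the standard-library resource module (Unix only):
# coding=utf-8
import resource
import redis

# Soft/hard limit on open file descriptors for this process (ulimit -n).
soft_nofile, hard_nofile = resource.getrlimit(resource.RLIMIT_NOFILE)

r = redis.Redis(host='127.0.0.1', port=6379)
maxclients = int(r.config_get('maxclients')['maxclients'])

print('process fd soft limit: %d / hard limit: %d' % (soft_nofile, hard_nofile))
print('redis maxclients     : %d' % maxclients)
# Redis needs file descriptors of its own beyond the client sockets, so its fd
# limit has to sit comfortably above maxclients (see the redis.io clients doc).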