Redis configuration explained

V2.8.21

# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

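The unit table above is easy to capture as a small helper. A sketch (the function name `parse_memory` and the lookup table are mine, not part of Redis): plain `k/m/g` suffixes are powers of 1000, the `kb/mb/gb` forms are powers of 1024, and matching is case-insensitive.

```python
# Units as documented above: 1k => 1000 bytes, 1kb => 1024 bytes, etc.
# Suffix matching is case-insensitive, so 1GB, 1Gb and 1gB are the same.
UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "kb": 1024, "mb": 1024**2, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    value = value.strip().lower()
    digits = value.rstrip("kmgb")   # numeric part
    suffix = value[len(digits):]    # unit part, possibly empty
    return int(digits) * UNITS[suffix]
```

For example, `parse_memory("1k")` yields 1000 while `parse_memory("1kb")` yields 1024, matching the table.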

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf



################################ GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.

daemonize no

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.

# [Note: slow clients grow the accept queue (as opposed to the SYN queue),
# so consider raising this value when many of your clients are slow.]
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1


# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700


# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.

tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)

loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null

logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""


save 900 1
save 300 10
save 60 10000
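The three save points above combine with OR semantics: a background save fires as soon as any one line's condition (that many seconds elapsed AND that many keys changed) holds. A minimal sketch of that decision, with names of my own choosing:

```python
# Each save point is (seconds, changes). A snapshot is due when, for ANY
# point, at least `seconds` have elapsed since the last save AND at least
# `changes` write operations happened in that window.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def snapshot_due(elapsed_seconds: int, changes: int,
                 save_points=SAVE_POINTS) -> bool:
    return any(elapsed_seconds >= secs and changes >= chg
               for secs, chg in save_points)
```

So 61 seconds with 10000 changes triggers a save via the third point, while 59 seconds with the same change count does not.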


# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.

stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.

rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.

# Note: replication here means the local instance copies data from a remote
# one, so this slave can keep its own database file, bind a different IP,
# and listen on a different port.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
#
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
#

repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
#

repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#

# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.


repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
#
# repl-backlog-size 1mb
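Whether a reconnecting slave can use a partial resync comes down to whether its replication offset still falls inside the master's backlog window. A simplified sketch of that check (not Redis's actual implementation):

```python
def can_partial_resync(slave_offset: int, master_offset: int,
                       backlog_size: int) -> bool:
    # The backlog buffers only the most recent `backlog_size` bytes of the
    # replication stream; a slave that fell further behind than that has
    # missed data the master no longer holds, forcing a full resync.
    missed = master_offset - slave_offset
    return 0 <= missed <= backlog_size
```

This is why a larger repl-backlog-size lets a slave stay disconnected longer and still avoid a full RDB transfer.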

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
#

slave-priority 100
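Sentinel's selection rule described above — lowest positive priority wins, priority 0 is never promoted — can be sketched in a few lines (the function name is mine):

```python
def pick_promotion_candidate(slaves):
    # slaves maps slave name -> slave-priority as published in INFO.
    # Priority 0 marks a slave that must never be promoted to master.
    eligible = {name: prio for name, prio in slaves.items() if prio > 0}
    if not eligible:
        return None
    return min(eligible, key=eligible.get)
```

With priorities 10, 100 and 25, the slave with priority 10 is picked, exactly as in the example above.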

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
#


################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
#


# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
#

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#

# maxclients 10000

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3
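The approximated LRU described above can be illustrated with a toy eviction routine (my own sketch, far simpler than Redis's internals): instead of scanning every key for the true least-recently-used one, sample a few keys and evict the oldest of the sample.

```python
import random

def evict_one(last_access, samples=3):
    # last_access maps key -> last-access timestamp. Sampling `samples`
    # random keys and evicting the stalest of them approximates LRU
    # without tracking a full access-ordered list of all keys.
    candidates = random.sample(list(last_access),
                               min(samples, len(last_access)))
    victim = min(candidates, key=last_access.get)
    del last_access[victim]
    return victim
```

A larger sample size gets closer to true LRU at the cost of more work per eviction, which is the trade-off maxmemory-samples tunes.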

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
#

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".


# appendfsync always
appendfsync everysec
# appendfsync no


# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
#

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
#
# Automatic rewrite of the append only file.
# Redis can automatically rewrite the AOF log file by calling BGREWRITEAOF
# when the file grows by the specified percentage.
#
# How it works: Redis remembers the size of the AOF file after the last
# rewrite (if no rewrite has happened since the restart, the size of the
# AOF at startup is used).
#
# That base size is compared with the current size. If the current size
# exceeds the specified percentage, a rewrite is triggered. You also need to
# specify a minimum size for the AOF file to be rewritten, which avoids
# rewriting the file when the percentage has been reached but the file is
# still quite small.
#
# Specify a percentage of zero to disable the automatic AOF rewrite feature.
#

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
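The trigger rule described above can be sketched in Python. This is only an illustrative model of the documented behavior, not Redis's actual C implementation; the function name and its defaults (matching the two directives above) are hypothetical:

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Model of the automatic BGREWRITEAOF trigger.

    current_size: AOF size now, in bytes.
    base_size:    AOF size after the last rewrite (assumed non-zero here).
    percentage:   auto-aof-rewrite-percentage (0 disables the feature).
    min_size:     auto-aof-rewrite-min-size, in bytes.
    """
    if percentage == 0:           # a percentage of zero disables auto rewrite
        return False
    if current_size < min_size:   # still too small to be worth rewriting
        return False
    growth = (current_size - base_size) * 100 // base_size
    return growth >= percentage

# With the defaults above (100% growth over the base, 64mb minimum):
print(should_rewrite_aof(130 * 1024**2, base_size=64 * 1024**2))  # True
print(should_rewrite_aof(32 * 1024**2, base_size=16 * 1024**2))   # False: under min size
```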


# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
# An AOF file may be found truncated at the end when Redis loads the AOF
# data back into memory at startup. This can happen when the system where
# Redis is running crashes, especially when an ext4 filesystem is mounted
# without the data=ordered option (it cannot happen when Redis itself
# crashes or aborts while the operating system keeps working correctly).
#
# When this happens, Redis can either exit with an error, or load as much
# valid data as possible (now the default) and start.
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the server emits a log to inform the user of the event (the default).
# If it is set to no, the server aborts with an error and refuses to start;
# the user must then repair the AOF file with the "redis-check-aof" utility
# before restarting the server.
#
# Note that if the AOF file is found corrupted in the middle, the server
# still exits with an error. This option only applies when Redis tries to
# read more data from the AOF file but not enough bytes are found.
#
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
#
# Max execution time of a Lua script, in milliseconds.
#
# If the maximum execution time is reached, Redis will log that a script is
# still running after the maximum allowed time and will start replying to
# queries with an error.
#
# When a long-running script exceeds the maximum execution time, only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can stop
# a script that has not yet called any write command. The second is the only
# way to shut down the server when the script has already issued write
# commands but the user does not want to wait for it to terminate naturally.
#
# Set it to 0 or a negative value for unlimited execution with no warnings.
#
lua-time-limit 5000

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
#
# The Redis slow log records queries that exceed a specified execution time.
# The execution time does not include I/O operations such as talking to the
# client or sending the reply, only the time needed to actually execute the
# command (the only stage of command execution where the thread is blocked
# and cannot serve other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis the
# execution time, in microseconds, a command must exceed to get logged; the
# other is the length of the slow log. When a new command is logged, the
# oldest one is removed from the queue of logged commands.
#
# The time below is expressed in microseconds, so 1000000 is equivalent to
# one second. Note that a negative value disables the slow log, while a
# value of zero forces the logging of every command.
#
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
#
# There is no limit to this length, just be aware that it consumes memory.
# You can reclaim the memory used by the slow log with SLOWLOG RESET.
#
slowlog-max-len 128
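The semantics of the two slow-log parameters can be sketched in Python. This is an illustrative model of the documented behavior (not Redis source); the function name is hypothetical:

```python
from collections import deque

def slowlog_should_log(exec_time_us, threshold_us):
    """Model of slowlog-log-slower-than: a negative threshold disables the
    slow log, zero logs every command, otherwise log commands whose
    execution time (in microseconds) reaches the threshold."""
    if threshold_us < 0:
        return False
    return exec_time_us >= threshold_us

# slowlog-log-slower-than 10000 means "slower than 10 milliseconds":
print(slowlog_should_log(12_000, 10_000))  # True
print(slowlog_should_log(500, 10_000))     # False
print(slowlog_should_log(500, 0))          # True: zero logs everything

# slowlog-max-len 128 behaves like a bounded queue: when a new entry is
# logged, the oldest one is dropped.
slowlog = deque(maxlen=128)
```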

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
#
# The Redis latency monitoring subsystem samples different operations at
# runtime in order to collect data about possible sources of latency of a
# Redis instance.
#
# Via the LATENCY command this information is available to the user, who can
# print graphs and obtain reports.
#
# The system only logs operations that took a time equal to or greater than
# the number of milliseconds specified via the latency-monitor-threshold
# configuration directive. When it is set to zero, the latency monitor is
# turned off.
#
# By default latency monitoring is disabled, since it is mostly not needed
# if you have no latency issues, and collecting data has a performance
# impact which, while very small, can be measured under big load. If needed,
# latency monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>".
latency-monitor-threshold 0

############################# Event notification ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
#
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance, if keyspace event notification is enabled and a client
# performs a DEL operation on key "foo" stored in database 0, two messages
# will be published via Pub/Sub:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# You can select which classes of events Redis will notify. Every class is
# identified by a single character:
#
# K Keyspace events, published with the __keyspace@<db>__ prefix.
# E Keyevent events, published with the __keyevent@<db>__ prefix.
# g Generic (non-type-specific) commands like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (generated every time a key expires)
# e Evicted events (generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so the string "AKE" means all the events.
#
# "notify-keyspace-events" takes as its argument a string composed of zero
# or more characters. The empty string means notifications are disabled.
#
# Example: to enable list and generic events:
# notify-keyspace-events Elg
#
# Example 2: to get the stream of expired keys by subscribing to the channel
# named __keyevent@0__:expired, use:
# notify-keyspace-events Ex
#
# By default all notifications are disabled, because most users don't need
# this feature and it has some overhead. Note that if you don't specify at
# least one of K or E, no events will be delivered.
#
notify-keyspace-events ""
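The flag-string rules above (the 'A' alias and the K/E requirement) can be sketched in Python. This is an illustrative model, not Redis source; the function name is hypothetical:

```python
def expand_event_flags(flags):
    """Model of how a notify-keyspace-events flag string expands:
    'A' is an alias for "g$lshzxe", and unless at least one of K or E is
    present, no events are delivered at all."""
    classes = set()
    for ch in flags:
        if ch == 'A':
            classes.update("g$lshzxe")
        else:
            classes.add(ch)
    if not ({'K', 'E'} & classes):
        return set()  # neither K nor E: nothing is ever published
    return classes

print(sorted(expand_event_flags("Ex")))   # ['E', 'x']: expired-key events
print(expand_event_flags("g$"))           # set(): no K or E specified
print(len(expand_event_flags("AKE")))     # all eight classes plus K and E
```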

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# When a hash has only a few entries and the biggest entry does not exceed
# the given threshold, it is encoded using a memory-efficient data
# structure. These thresholds can be configured with the following
# directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64


# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
# Similarly to hashes, lists with few elements can be encoded in a special
# way to save a lot of space. This special representation is only used when
# the list is under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64


# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
# Sets have a special encoding in just one case: when the set consists only
# of strings that are base-10 64-bit signed integers.
# The following setting limits the size of sets that may use this special
# memory-saving encoding.
set-max-intset-entries 512
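The eligibility condition above can be sketched in Python. An illustrative model only (Redis's actual string-to-integer parser is stricter, e.g. about leading zeros and whitespace); the function name is hypothetical:

```python
def intset_eligible(members, max_entries=512):
    """Model of when a set could use the compact intset encoding: every
    member parses as a base-10 64-bit signed integer, and the set has no
    more than set-max-intset-entries elements."""
    if len(members) > max_entries:
        return False
    for m in members:
        try:
            v = int(m, 10)
        except ValueError:
            return False  # not an integer string
        if not (-2**63 <= v <= 2**63 - 1):
            return False  # outside the 64-bit signed range
    return True

print(intset_eligible({"1", "42", "-7"}))        # True
print(intset_eligible({"1", "hello"}))           # False: not all integers
print(intset_eligible({"9223372036854775808"}))  # False: overflows 64 bits
```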

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
# Similarly to hashes and lists, sorted sets can also use a special encoding
# to save a lot of space. This encoding is only used when the length and the
# elements of the sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64


# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
#
# HyperLogLog sparse representation byte limit. The limit includes the
# 16-byte header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is about 3000, in order to get the benefits of the
# space-efficient encoding without slowing down PFADD too much, which is
# O(N) with the sparse encoding. The value can be raised to about 10000 when
# CPU is not a concern but space is, and the data set is composed of many
# HyperLogLogs with cardinality in the 0 - 15000 range.

hll-sparse-max-bytes 3000
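The conversion rule above amounts to a simple size check, sketched here as an illustrative model (not Redis source; the function name is hypothetical):

```python
def hll_representation(sparse_size_bytes, sparse_max_bytes=3000):
    """Model of the rule above: once a sparse HyperLogLog (16-byte header
    included) crosses hll-sparse-max-bytes, it is converted to the dense
    representation; the conversion is one-way."""
    return "dense" if sparse_size_bytes > sparse_max_bytes else "sparse"

print(hll_representation(2500))  # sparse: still under the limit
print(hll_representation(3200))  # dense: crossed the limit
```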

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
#
# Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU
# time to help rehash the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs lazy rehashing: the more operations you run against a table that
# is rehashing, the more rehashing "steps" are performed; so if the server
# is idle the rehashing never completes and the hash table keeps using some
# extra memory.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure: use "activerehashing no" if you have hard latency requirements
# and occasional 2 millisecond delays in replies are not acceptable in your
# environment; use "activerehashing yes" if you don't have such hard
# requirements but want to free memory as soon as possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
#
# The client output buffer limits can be used to force the disconnection of
# clients that are not reading data from the server fast enough for some
# reason (a common reason is that a Pub/Sub client can't consume messages as
# fast as the publisher produces them).
#
# The limit can be set differently for three classes of clients:
# normal -> normal clients, including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is disconnected immediately once the hard limit is reached, or if
# the soft limit is reached and stays reached for the specified number of
# seconds (continuously). For example, with a hard limit of 32 megabytes and
# a soft limit of 16 megabytes / 10 seconds, the client is disconnected
# immediately if the output buffer reaches 32 megabytes, and also if it
# reaches 16 megabytes and continuously stays over that limit for 10
# seconds.
#
# By default normal clients are not limited, because they don't receive data
# without asking (in a push fashion) but only after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# pubsub and slave clients have a default limit instead, since subscribers
# and slaves receive data in a push fashion.
#
# Set both the hard and the soft limit to 0 to disable the feature.
#
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
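The hard/soft limit interaction can be sketched in Python. An illustrative model of the rule described above, not Redis source; the class name is hypothetical and sizes are in bytes:

```python
class OutputBufferLimit:
    """Model of one client-output-buffer-limit rule."""

    def __init__(self, hard, soft, soft_seconds):
        self.hard, self.soft, self.soft_seconds = hard, soft, soft_seconds
        self.soft_since = None  # when the buffer first exceeded the soft limit

    def should_disconnect(self, buffer_size, now):
        # A limit of 0 disables the corresponding check.
        if self.hard and buffer_size >= self.hard:
            return True  # hard limit: disconnect immediately
        if self.soft and buffer_size >= self.soft:
            if self.soft_since is None:
                self.soft_since = now  # start the soft-limit clock
            return now - self.soft_since >= self.soft_seconds
        self.soft_since = None  # dropped below the soft limit: reset clock
        return False

# client-output-buffer-limit pubsub 32mb 8mb 60
limit = OutputBufferLimit(32 * 1024**2, 8 * 1024**2, 60)
print(limit.should_disconnect(33 * 1024**2, now=0))   # True: over hard limit
print(limit.should_disconnect(9 * 1024**2, now=0))    # False: clock starts
print(limit.should_disconnect(9 * 1024**2, now=61))   # True: over soft limit for 60s
```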


# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
#
# Redis calls an internal function to perform many background tasks, such as
# closing the connections of timed-out clients and purging expired keys that
# are never requested.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising it uses more CPU when Redis is idle,
# but at the same time makes Redis more responsive when many keys expire at
# the same time, and lets timeouts be handled with more precision.
#
# The range is between 1 and 500, but a value over 100 is usually not a good
# idea. Most users should use the default of 10 and raise it up to 100 only
# in environments where very low latency is required.
#
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
#
# When a child process rewrites the AOF file, if the following option is
# enabled, the file is fsync-ed every 32 MB of data generated. This is
# useful to commit the file to disk more incrementally and avoid big latency
# spikes.
#
aof-rewrite-incremental-fsync yes