Redis-conf in Plain Language

>An easy-to-follow translation of redis.conf, posted here for quick reference

~~~properties
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
# To use a custom configuration file, pass its path (absolute or relative) as the first argument to "./redis-server", e.g.:
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
# A note on memory-size units: 1k means 1000 bytes while 1kb means 1024 bytes, and so on; units are case-insensitive, so 1k and 1K are both 1000 bytes.
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# You can extract common settings into a template and pull it into the main config with the "include" option; "include" can appear at the beginning or at the end of the file.
# If it appears at the beginning, keys in the main config override keys from the included file: say the included file sets port to 6380 while the main file has port 6379, then after startup Redis still listens on 6379.
# If it appears at the end, in the same example Redis listens on 6380 after startup.
# This feature is handy in a cluster to cut down on duplicated configuration; with a shared filesystem (NFS) you can even get edit-once-apply-everywhere behavior. Config-distribution tools achieve the same effect by copying files, whereas this relies on shared storage.
# include /path/to/local.conf
# include /path/to/other.conf
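#
# A quick sketch of the override behavior described above (file names are made
# up): suppose /path/to/common.conf is a shared template containing "port 6380".
#
#   include /path/to/common.conf
#   port 6379
#
# Here the instance listens on 6379, because the later line wins; moving the
# include below the "port 6379" line would make it listen on 6380 instead.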

################################## MODULES #####################################

# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
# Loads modules written by the Redis team or by community experts; see https://redis.io/modules for what is available, e.g.
# redis-cell (rate limiting) RedisBloom (Bloom filters) RediSearch (full-text search) rediSQL (SQL over Redis) and other modules.

# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so
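#
# As an illustration (the module path here is hypothetical), loading RedisBloom
# and then exercising it from redis-cli might look like:
#
#   loadmodule /path/to/redisbloom.so
#
#   127.0.0.1:6379> BF.ADD myfilter item1
#   127.0.0.1:6379> BF.EXISTS myfilter item1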

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
# Without a "bind" directive Redis listens on every interface, so any machine that can reach the host may connect; with "bind" you can restrict listening to one or more local addresses.
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
# As the examples show, several addresses can be listed. Also note that "bind" selects local interfaces to listen on rather than allowed clients: bind a LAN interface address and every host that can route to that interface can still connect.
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
# Binding to all interfaces leaves the Redis server exposed to the network (potentially the internet), which is very dangerous, so production systems should never be set up that way.
# The default binds the IPv4 loopback address, so only programs running on the same machine can reach this Redis server.

# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 192.168.10.20
# Even though I bound a local IP here, my host 192.168.10.12 can still connect: binding 192.168.10.20 listens on that LAN interface, so the whole 192.168.10.x segment (and anything routed to it) can reach the server.
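#
# A way to check which interfaces actually accept connections under the bind
# line above (addresses follow the example; results depend on your network):
#
#   redis-cli -h 192.168.10.20 -p 6379 ping    # from another host: reachable
#   redis-cli -h 127.0.0.1 -p 6379 ping        # on the server itself: refused, loopback is not bound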

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
# "protected mode"是一個安全保護層,可以避免Redis服務器被互聯網上的機器訪問和利用
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
# 當"protected mode"被設置爲on(即設置爲"protected-mode yes"),且沒有顯示用bind指定ip地址集合或者沒有設置密碼,那麼Redis服務器只能被本機訪問
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
# 默認"protected mode"是開啓的,如果確定自己的服務器需要暴露在互聯網上,且不存在安全問題,可以將"protected mode"關閉掉
protected-mode yes
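#
# When protected mode kicks in (no "bind", no password), a remote client's
# commands are rejected with an error roughly along these lines:
#
#   -DENIED Redis is running in protected mode because protected mode is enabled, ...
#
# Fix it by setting "bind" and/or "requirepass"; only if you really understand
# the exposure, disable the layer with:
#
#   protected-mode no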

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
# The port the Redis server listens on
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
# Under high concurrency you need a high backlog to avoid slow-client connection problems; the default is 511. The value actually used also depends on the Linux kernel parameter somaxconn, whose default is 128, so even with 511 configured here the effective value would be 128.
# If your company has no dedicated host engineer, remember to raise such kernel parameters when installing a new machine's OS, e.g. bump somaxconn to 20480.
tcp-backlog 511
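#
# The kernel-side limits mentioned above can be raised like this (the value is
# only an example; persist it in /etc/sysctl.conf to survive reboots):
#
#   sysctl -w net.core.somaxconn=20480
#   sysctl -w net.ipv4.tcp_max_syn_backlog=20480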

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
# A Unix socket gives an order-of-magnitude speedup for processes talking on the same machine, but Redis and the application servers usually run on separate hosts, so the two settings below can normally be left alone.
# unixsocket      the file used as the communication endpoint
# unixsocketperm  the access permissions (read-write-execute) on that file; if you really use this feature, set them according to the system user
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
# Close a connection after it has been idle for this many seconds; the default 0 disables the behavior, i.e. idle connections are never closed.
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
# If tcp-keepalive is non-zero, SO_KEEPALIVE is used to send TCP ACKs to clients every "tcp-keepalive" seconds in the absence of other traffic.
# This is done for two reasons:
# 1. detect clients that have died;
# 2. keep the connection alive from the network's point of view, so clients do not have to reconnect over and over, which would hurt performance.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
# The default is 300 seconds. Personally I find 300s still too long; even in a large cluster, sending once every 60s would not create much network traffic.
tcp-keepalive 300

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# 將"daemonize"設置爲yes,Redis會以守護進程的方式運行,並且會在/var/run目錄下生成一個redis.pid文件
daemonize no

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Lets the system's upstart or systemd manage the Redis process; which one depends on the distribution: use systemd on CentOS 7 and upstart on (older) Ubuntu releases.
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
# If a pid file is configured, Redis writes it to the given location at startup and removes it on exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
# When Redis runs as a daemon, a pid file is used even if "pidfile" is not set, defaulting to /var/run/redis.pid; otherwise the configured "pidfile" is used.
# When Redis does not run as a daemon and "pidfile" is not set, no pid file is created at all.
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Sets how verbose the server log is: one of debug/verbose/notice/warning. debug is not recommended (far too much output) except in special cases; feel free to try it in development.
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# When Redis runs in the foreground and no "logfile" is set, logs go to standard output (the console); with "logfile" set they go to that file.
# When Redis runs as a daemon and no "logfile" is set, logs are sent to /dev/null and lost; with "logfile" set they go to that file.
logfile ""

# The next three parameters (syslog-enabled/syslog-ident/syslog-facility) rarely need attention; their purpose is to route log output through the system's own logger, whose parameters can then be tweaked for special needs.
# I have not tested them myself and suspect they are rarely used; perhaps a big company with a dedicated Redis team would use them to customize the log format, collect statistics programmatically and show them in a UI.
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# To use the system logger, set "syslog-enabled" to yes.
# syslog-enabled no

# Specify the syslog identity.
# Sets the syslog identity, i.e. the tag prepended to each log line so this Redis instance can be told apart in the system log.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# Chooses a localX facility, used together with /etc/rsyslog.conf: rsyslog.conf maps the facility to an output file. Once syslog-enabled is on, your own "logfile" may no longer apply; route the logs by pointing the chosen localX facility at a file in rsyslog.conf.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# Sets how many databases Redis has; the default is 16 and the number can be raised.
# After connecting, a client chooses a database with "SELECT <dbid>" where dbid runs from 0 to databases-1; e.g. "select 14" for the 15th db.
databases 16
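#
# A small redis-cli session showing that every db has its own keyspace (key
# names invented):
#
#   SELECT 14        # switch to the 15th db
#   SET foo bar
#   SELECT 0
#   GET foo          # (nil): foo only exists in db 14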

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
# Whether to print Redis's ASCII-art logo at startup; just leave it alone, it is nice to at least see that the server is starting.
always-show-logo yes

################################ SNAPSHOTTING ################################
# Redis has three persistence modes: RDB, AOF, and mixed RDB+AOF. Briefly, how each works:
# RDB: dumps the database as a binary file on disk; the interval between snapshots is fairly long, so quite a lot of data can be lost. Using this mode on its own is not recommended.
# AOF: appends the commands that modified the database (protocol data included) to a text file; with the right configuration at most about 1s of data is lost. This mode is acceptable.
# Mixed: the recommended mode, but only available since 4.0. It combines RDB's fast recovery with AOF's small data loss, and reduces disk overhead too. For more detail see:
# Redis Design and Implementation - RDB persistence https://my.oschina.net/u/3049601/blog/3153571
# Redis Design and Implementation - AOF persistence https://my.oschina.net/u/3049601/blog/3153678
# Redis Design and Implementation - mixed persistence https://my.oschina.net/u/3049601/blog/3158904
#
# Save the DB on disk:
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
#   To disable RDB persistence, comment out the three save lines below with "#", or equivalently replace them with a single save "" directive.

save 900 1
save 300 10
save 60 10000
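#
# With the last rule above, 10000 key changes within 60 seconds trigger a
# background save. You can also take a snapshot by hand and check when the
# last one succeeded (redis-cli commands):
#
#   BGSAVE           # fork a child and dump to the RDB file in the background
#   LASTSAVE         # unix timestamp of the last successful save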

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# If RDB is enabled and the most recent BGSAVE failed, Redis by default stops accepting inserts and updates; once BGSAVE works again, writes resume (i.e. it recovers automatically).
# If your company has its own monitoring that watches the Redis service and its persistence well, you can turn this feature off to improve availability.
# In a cluster with enough replicas and proper monitoring, it really is reasonable to disable it.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# By default the dumped dataset is LZF-compressed when written to the .rdb file. Compression adds CPU overhead; to save CPU set "rdbcompression" to "no", at the cost of more disk space.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
# Since version 5 of the RDB format, a CRC64 checksum (an algorithm that produces a data fingerprint) is placed at the end of the file. It makes the format more resistant to corruption but costs roughly 10% performance when saving and loading, so it can be disabled for maximum throughput.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# An RDB file created with checksumming disabled carries a zero checksum, which tells the loading code to skip the check.
# For a big company (where money is no object), I would personally just keep the default.
rdbchecksum yes

# The filename where to dump the DB
# Name of the RDB file. I suggest embedding ip+port so that ops can tell instances apart, or even scan them programmatically and show them in a UI.
dbfilename dump.rdb
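#
# Following the ip+port naming suggestion above, for instance:
#
#   dbfilename dump-192.168.10.20-6379.rdb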

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# The directory where the RDB and AOF files are stored. Note: this must be a directory, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
# Redis master-replica replication: "replicaof" turns one Redis server into a copy of another. A few points below are worth understanding:
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
#    Master-replica replication is asynchronous, but you can configure the master (min-replicas-to-write) to stop accepting "write" requests when its replicas fall below a given number.
#
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
#    Old Redis versions had no partial resynchronization; it arrived in 2.8 (implemented with the replication backlog) and fixed the inefficient, blocking, potentially looping "full sync" that older versions fell back to after a disconnect.
#    A replica that loses the master for a short while requests a partial resync; the backlog has a finite size, which you can tune to make full resyncs rarer.
#
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#    Replication is automatic and needs no human intervention. After a network partition the replica automatically tries to reconnect to the master and then requests a partial resync; if the needed data has already dropped out of the replication backlog, a full resync happens instead.
#
# If you are new to Redis, the English explanation above may be hard to follow; I recommend reading the book "Redis Design and Implementation" first, it gives a much deeper understanding of what is described here.
# replicaof <masterip> <masterport>
# i.e. replicaof <master ip> <master port>. Make sure the network between master and replica is open; check the local firewall and any third-party firewall in between.
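#
# A minimal replica sketch (addresses reuse the earlier bind example; adjust to
# your own hosts): in the second instance's redis.conf set
#
#   port 6380
#   replicaof 192.168.10.20 6379
#
# afterwards "INFO replication" on each side reports role:master / role:slave
# and the replication offsets.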

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
# If the master is password protected (via the "requirepass" directive below), set the masterauth option in the replica's redis.conf to that password; without it, the master refuses the replica's replication request.
#
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
# If a replica loses its connection to the master, or replication is still in progress, the replica can work in one of two modes depending on configuration:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#     如果"replica-serve-stale-data"設置爲"yes",這也是默認設置,那麼從節點將會回覆客戶端的請求,但是得到的數據可能出現下面兩種情況
#     1.如果是與主節點失去連接,那麼得到的數據可能是過時的
#     2.如果是第一次從主節點同步數據,那麼得到的數據集會是空的
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#    如果"replica-serve-stale-data"設置爲"no",從節點將回復客戶端"SYNC with master in progress"錯誤
#    但是INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG等命令是可以成功執行並得到相應結果
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
# A replica can be configured to accept writes or not. I strongly advise against writable replicas, since a resync with the master easily wipes out whatever was written there;
# accordingly, "replica-read-only" has defaulted to "yes" since Redis 2.6.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
# Even read-only, replicas should not be designed to be exposed to untrusted clients on the internet; read-only is merely a protection layer against misuse of the instance.
# Replicas still accept all administrative commands, e.g. CONFIG, DEBUG and so on; to harden a replica, use "rename-command" to hide all the administrative commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
# Replication SYNC strategy: via disk (disk-backed) or via socket (diskless).
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
# Sadly, socket-based (diskless) sync is still at the experimental stage.
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
# New replicas, and reconnecting replicas whose latest offset is no longer in the master's replication backlog, must perform a "full synchronization": an RDB file is transmitted from master to replica in one of the following two ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
#                 the master forks a child that writes the RDB file to disk, then the parent sends the file to the replicas bit by bit
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#              the master forks a child that opens sockets to the replicas and streams the data to them directly, without touching the disk
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
# With disk-backed replication, while the child is still producing the RDB file, more replicas can queue up, and all of them are served from that file as soon as it is finished.
# With diskless replication, once a transfer has started, replicas that arrive later are queued, and a new transfer begins only after the current one ends.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
# With diskless replication the master can therefore wait a configurable amount of time, so that the replication requests arriving within that window can be served in parallel.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
# 如果"repl-diskless-sync"設置爲yes,就需要配置"repl-diskless-sync-delay"讓主節點等待更多的複製請求過來,並讓他們併發複製
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
# This matters because once a transfer starts, newly arriving replication requests are queued; a delay therefore lets more replicas join the same transfer.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
# A replica sends "pings" to the master every "repl-ping-replica-period" seconds (default 10), which detects whether the replica and the master have lost contact.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
# "repl-timeout"會影響複製過程中一下三種情況的超時時間
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
# Make absolutely sure "repl-timeout" is larger than "repl-ping-replica-period", otherwise a timeout is detected every time traffic between master and replica is low. The default is 60 seconds.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
# 如果將"repl-disable-tcp-nodelay"設置爲"yes",那麼主節點會使用更小的TCP packet和更少的帶寬發送數據到從節點
# 但是這會讓從節點的數據延遲40毫秒(LINUX默認配置,也許可以通過tcp_delack_min修改),關於tcp-nodelay可以看博客:https://blog.csdn.net/bytxl/article/details/17677495
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
# 如果將"repl-disable-tcp-nodelay"設置爲no,那麼從節點接收數據的延遲會減少,但是要求更多的帶寬來完成複製工作
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
# The default optimizes for low latency ("repl-disable-tcp-nodelay no"), but under very heavy traffic, or when master and replicas sit many network hops apart, switching it to "yes" may be the better choice.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
# This sets the size of the replication backlog, a buffer that accumulates the write commands issued while a replica is disconnected. When the replica reconnects, a full resync is usually unnecessary: as long as its offset is still inside the backlog (meaning its data can be repaired from the commands after that offset), a partial resync simply replays those commands on the replica.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
# The bigger the backlog, the longer a replica can stay disconnected and still get away with a partial resynchronization afterwards.
#
# The backlog is only allocated once there is at least a replica connected.
# The backlog is only allocated once at least one replica has connected to the master; at that point it holds no data yet.
#
# How big should it be? A rule of thumb: 2 * average disconnect time * write volume per second, so it depends heavily on your workload.
# repl-backlog-size 1mb
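#
# Plugging numbers into the rule of thumb above (figures invented): with an
# average disconnect-and-reconnect time of 60 seconds and about 100kb of write
# commands per second, 2 * 60 * 100kb = 12mb, i.e.
#
#   repl-backlog-size 12mb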

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
# After the master has had no connected replicas for a while (set by repl-backlog-ttl, counted from the moment the last replica disconnected), the replication backlog is freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
# A replica never frees its backlog: it may be promoted to master later and must then be able to serve partial resyncs to the other replicas, so it always keeps accumulating the backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
# "replica-priority"是一個整數值,在哨兵的集羣模式下,當"主事哨兵"被選舉(選舉採用過半原則)出來之後,由它決定掛掉主節點下的某一個從節點作爲新的主節點,當其他條件都相同的情況下,"replica-priority"值越小的從節點會被選中作爲新的主節點
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
# In short: the lower the number, the higher the priority, and the sooner that replica is picked as the new master.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
# 如果將"replica-priority"設置爲0,則該從節點永遠都不會被選擇爲新的主節點,根本就不參與選舉。可以減少選舉過程中過多的網絡通信,加快選舉過程
# 我還只是一個理論派,沒實戰經驗,個人感覺如果機器硬件夠好,且機器所在的網絡質量夠好,可以將其優先級設置得高一些
#
# By default the priority is 100.
replica-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
# The master can be made to stop accepting writes when fewer than N replicas, each lagging at most M seconds, are connected.
# Some articles claim that failing either condition on its own blocks writes; to be precise, writes stop once the number of connected replicas whose lag is within M seconds drops below N.
#
# The N replicas need to be in "online" state.
# The N replicas must be in the "online" state (under Sentinel and Cluster, nodes additionally have subjectively-down and objectively-down states).
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
# The lag in seconds must be <= the configured value; lag = current time minus the time the last ping was received from that replica (replicas normally ping every second).
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
# 默認"min-replicas-to-write"被設置爲0,即禁止了這個特徵,"min-replicas-max-lag"默認值爲10秒

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
# In short: "INFO replication" on the master lists every replica's IP+PORT, which the master normally learns from the replica's socket and the replication handshake. But with port forwarding (docker, k8s), NAT, or a proxy in between, the replica may not actually be reachable at that IP+PORT; that is when the two replica-side options below are useful: they report a chosen IP and PORT to the master, and INFO and ROLE then show those values.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
# Clients can be required to pass password authentication before running any other command; useful when you cannot trust everyone who has access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
# If the clients and the Redis server run on the same machine, "requirepass" can stay commented out and clients need no password.
# Generalizing a bit: inside a LAN whose security is good enough, you can skip "requirepass" entirely.
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# Redis can check around 150k passwords per second, so if you do set one, make it very strong or it will be easy to brute-force.
#
# requirepass foobared
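#
# With a password set, a session looks like this (the password is of course
# just an example):
#
#   127.0.0.1:6379> GET foo
#   (error) NOAUTH Authentication required.
#   127.0.0.1:6379> AUTH a-very-long-random-password
#   OK
#
# or authenticate when connecting: redis-cli -a a-very-long-random-password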

# Command renaming.
# Renaming commands protects the administrative commands and the commands that can stall Redis.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
# The master records write commands in a buffer and propagates them to the replicas, so renaming a command that is written to the AOF or sent to replicas can cause unexpected problems when the rename is not applied everywhere; be very careful here.
# For example, rename set to myset on the master only: after running myset foo Messi on the master, the replica will not have the key foo, because it does not recognize the myset command.

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# Sets how many clients may be connected at once; the default is 10000. If the host's open-file limit is not larger than "maxclients", the effective limit becomes the file limit minus 32, which Redis reserves for internal use, e.g. connections for cluster communication.
#
# maxclients 10000
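#
# To see whether the descriptor limit described above will cap you, compare the
# shell limit with the value actually in effect:
#
#   ulimit -n                          # fd limit of the shell starting Redis
#   redis-cli CONFIG GET maxclients    # limit Redis actually applied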

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
# Caps Redis's working memory at the given limit; once usage reaches it, Redis deletes keys according to the configured eviction policy.
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
# 當Redis無法根據設置的淘汰策略刪除keys時或者淘汰策略被設置爲"noeviction",像set lpush等命令會收到報錯,此時管理員就應該特別注意了,及時的增加內存,但是此時讀命令還是可以繼續正常使用的
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
# 如果將Redis作爲一個LRU或者LFU的緩存,再或者想給實例設置一個硬性的內存上限(配合'noeviction'策略)時,這個選項就非常有用
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
# 警告:如果一個設置了"maxmemory"的主節點掛有從節點,那麼供給從節點使用的輸出緩衝區大小會從used memory中減掉,這樣網絡問題或者重新同步就不會觸發這樣的死循環:key被淘汰,淘汰產生的DEL命令又填滿從節點輸出緩衝區,佔用更多內存,進而觸發淘汰更多的key,如此往復直到整個數據庫被清空。
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
# 因爲used memory是可以大於maxmemory的,只不過出現這種情況時會觸發內存回收而刪除KEY。因此在主從模式下,建議把主節點的maxmemory設置得低一些,給從節點輸出緩衝區留出一點空間,避免出現死循環而導致數據庫被清空(如果策略是'noeviction'則不需要)。不要物理內存有多少就設置多少,況且還有操作系統和其他程序在運行,一般設置爲物理內存的3/4。
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
# 關於這幾個策略的講解百度可以找到非常好的描述,在這裏就不詳細描述了,篇幅也不夠
# 推薦一個:https://cloud.tencent.com/developer/article/1530553 講了原理和使用說明
#
# LRU means Least Recently Used   最近最少使用的(最久沒有被訪問的)
# LFU means Least Frequently Used 最近使用頻率最低的
#
# Both LRU, LFU and volatile-ttl are implemented using approximated randomized algorithms
# LRU、LFU和volatile-ttl都是使用近似的隨機算法實現的(而不是精確算法)
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#       選擇上面的任何一種策略,如果沒有適合的KEY被淘汰,那麼下面的這些寫操作就會報錯
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
# 默認設置是noeviction
#
# maxmemory-policy noeviction
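
# 一個把Redis當作LRU緩存使用的配置草案(2gb只是示例值,請按機器內存調整):
#
# maxmemory 2gb
# maxmemory-policy allkeys-lru
#
# 這樣當內存使用達到2gb時,Redis會在全部key中近似地淘汰最久未被訪問的key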

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
# LRU LFU TTL三種方式並不是精準的算法,這是爲了提高速度和節省內存,同時達到了近似的效果。。。很妙
# 我們可以基於速度或者精準度的要求去調整採樣的數據大小,"maxmemory-samples"值越大精準度越高,速度越慢,消耗的內存也越多,反之速度快,但是精準度低,內存開銷少
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
# 默認值是5,如果設置爲10就非常接近真正的LRU算法了,但是CPU開銷也越多了。如果設置爲3,速度快了,但是沒那麼準確
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
# 從Redis 5開始,從節點默認是忽略掉maxmemory設置的,除非從節點在故障轉移時變成了主節點
# 正常情況下,從節點的Key淘汰是通過從主節點發送del命令過來實現的
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
# "replica-ignore-maxmemory"保持默認值yes可以保證主從的數據一致性,除非你的從節點可寫、或需要不同的內存配置,且你確定對從節點的所有寫操作都是冪等的,
# 並且真的清楚把它設置爲no帶來的副作用,否則建議你不要做這種騷操作,坑人哦
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
# 由於從節點默認情況下不主動淘汰KEY,它可能比主節點消耗更多的內存(某些buffer可能更大,某些數據結構可能佔用更多內存等等),所以要實時監控你的從節點,確保在主節點達到配置的maxmemory之前,從節點不會先發生真正的內存耗盡
#
# replica-ignore-maxmemory yes

############################# LAZY FREEING 惰性回收 ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
# Redis提供了兩個命令來手動刪除keys,其中一個是大家熟知的del,另外一個是unlink
# "del"命令:阻塞式刪除(執行刪除時,後續的命令要排隊等待)。如果一個Key比較小則刪除很快、影響小;但如果這個Key對應的對象非常大,刪除會很耗時,在高併發的系統裏會阻塞後面的請求,如果系統架構設計不合理則可能導致整個業務系統不可用,造成嚴重的生產事故
# "unlink"命令:異步刪除,命令本身的時間複雜度是O(1),Redis會用另外一個線程在後臺逐步完成真正的刪除和內存回收,不會阻塞後續命令。flushall/flushdb命令加上ASYNC選項後也是異步執行的。
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
# 除了用戶可以使用del,unlink,flushall,flushdb刪除key,Redis Server在某些情況下不得不刪除Key,甚至清空整個db以保證服務的可用性,下面列舉了4種情況
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
#    爲了避免Redis使用的內存超過"maxmemory",且一直在這種狀態下運行,Redis Server會根據選擇的刪除策略去自動刪除一些Key,以釋放空間給其他數據使用。
#
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
#    Key設置的過期時間到了,當用戶訪問這個Key會自動刪除,或者Redis Server定期將這種Key刪除。
#
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
#    一些命令的底層實現就是先刪除再新增,所以在使用這些命令的時候會執行刪除操作,比如SET,SORT,RENAME
#
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#    主從模式下,如果斷網重連後觸發了"完全同步",也會將整個DB數據刪除掉,然後再從RDB文件/SOCKET中加載所有數據
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
# 上面的4種情況,Redis Server刪除數據都是阻塞式刪除,就像"del"命令。我們可以將這4種情況的設置爲異步刪除,就像命令"unlink"一樣

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
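
# 如果線上經常出現大key,可以考慮把上面幾項改爲yes,讓對應場景走異步回收。
# 是否開啓請結合自身業務評估,以下僅爲示例:
#
# lazyfree-lazy-eviction yes
# lazyfree-lazy-expire yes
#
# 手動刪除大key時也可以用 UNLINK bigkey 代替 DEL bigkey,避免阻塞主線程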

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
# Redis默認通過"bgsave"異步地將數據集導出到RDB文件中,這種持久化方式滿足了大多數應用,但如果Redis進程出問題或者主機斷電,根據"save xxx"的配置可能會丟失幾分鐘的數據,在一些要求高的系統中這種情況是不被允許的。
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
# AOF(Append Only File)是另一種持久化方式,提供了更好的持久性。比如使用默認的fsync策略,發生斷電等嚴重事故時最多丟失1秒的寫入;如果只是Redis進程自身出問題而操作系統仍正常運行,則最多丟失一次寫操作
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
# AOF和RDB兩種持久化技術可以同時開啓,如果AOF開啓了,那麼啓動Redis時,是從AOF文件中加載數據的,因爲它保存的數據更完整,提供更好的持久化功能
#
# Please check http://redis.io/topics/persistence for more information.
# 更多的信息請出門左轉到:http://redis.io/topics/persistence
# 開啓AOF,"appendonly"設置爲yes
appendonly no

# The name of the append only file (default: "appendonly.aof")
# 指定AOF文件名,此文件存放的目錄和RDB是共用的,使用"dir"進行指定
appendfilename "appendonly.aof"

# 對於沒有OS知識的朋友,學習接下來的appendfsync配置之前,建議先去百度一下操作系統寫文件緩衝區的知識點,fsync的不同選項決定了寫入緩衝區的數據什麼時候真正寫到磁盤上
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
# 系統調用"fsync()"告訴OS要真正地將數據寫入磁盤,而不是停留在緩衝區中等待更多數據。一些OS會立即寫到磁盤,另一些OS則只是儘可能快地去寫
#
# Redis supports three different modes:
# Redis支持三種不同的模式:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# 模式1-"no":不調用OS的fsync函數,讓OS自己決定什麼時候將緩衝區的數據寫入到磁盤上,該模式對Redis來說速度最快
#
# always: fsync after every write to the append only log. Slow, Safest.
# 模式2-"always":每次"寫操作"都會調用一次fsync函數,這種方式最安全,但是速度是最慢的
#
# everysec: fsync only one time every second. Compromise.
# 模式3-"everysec":每秒鐘調用一次fsync,這是一種折中方案。
#
# 看到這裏順便提一下,在Redis中隨處可見這種思想,比如前面近似LRU的隨機算法,有序集合底層數據結構中結合Hash表和跳躍表實現高效的單個和範圍查詢,過期key的惰性刪除等等
# 在我們自己設計系統、開發模塊、甚至生活中也可以將這個思想好好運用
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
# 默認模式是"everysec"的,這是結合速度和安全性的這種方案。如果你不考慮系統DOWN可能帶來的數據丟失,可以將模式設置爲"no",而如果你想數據完全不丟,且願意犧牲性能,可以將模式設置爲"always"
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
# 更多的細節請出門左轉:http://antirez.com/post/redis-persistence-demystified.html
# 另外大牛"antirez"還開發了基於Redis的神經網絡訓練模塊(neural-redis)和分佈式作業隊列(Disque)
# If unsure, use "everysec".
# 如果自己不確定到底使用哪一種,就使用默認值everysec

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
# 當AOF模式設置爲"everysec"或者"always",如果後臺保存進程(BGSAVE或AOF文件重寫,後者可以單獨百度一下,有的面試官會問這個問題)正在對磁盤產生大量IO,在某些Linux配置下,Redis的fsync()調用可能被阻塞很長時間。目前這個問題還沒有辦法修復,因爲即使把fsync放到另一個線程執行,也會阻塞我們同步的write(2)調用
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
# 爲了減輕這個問題帶來的影響,可以使用"no-appendfsync-on-rewrite"配置,一旦有BGSAVE和BGREWRITEAOF在執行,阻止fsync函數調用
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# 簡單點說就是:當"no-appendfsync-on-rewrite"設置爲yes時,只要有子進程在執行BGSAVE或BGREWRITEAOF,AOF的持久化效果就相當於"appendfsync no",最壞情況下(LINUX默認設置)可能丟失最多30秒的數據
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# 如果你的系統存在延遲問題且能接受上述風險,可以將"no-appendfsync-on-rewrite"設置爲yes;否則保持no,從持久化安全性的角度這是最穩妥的選擇

no-appendfsync-on-rewrite no

# AOF文件重寫是Redis面試的一個點,也是優化Redis的一個點,合理設置下面兩個閾值可以避免頻繁重寫帶來的IO壓力
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# 當AOF文件大小比上一次重寫後的大小增長了指定的百分比,Redis就會自動調用BGREWRITEAOF進行AOF文件重寫
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
# Redis會記住AOF重寫後的AOF文件大小,如果重啓後還未發生重寫,那麼記住的就是剛開始加載AOF文件的大小
# 這個文件大小值會與下面的配置項值進行比較,決定什麼時候做AOF文件重寫
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
# 如果當"當前大小/最後一次重寫大小"的比值大於"auto-aof-rewrite-percentage"指定的值,則會觸發AOF重寫
# 爲了避免AOF已經很小還進行AOF重寫的尷尬情況,因此需要設置一個AOF重寫最小AOF文件大小
# 比如"auto-aof-rewrite-min-size"設置爲64M,只有當AOF文件超過64M,且"當前大小/最後一次重寫大小">"auto-aof-rewrite-percentage"纔會觸發AOF重寫
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# 如果將"auto-aof-rewrite-percentage"設置爲0,表示不允許執行自動AOF重寫

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
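
# 按上面的默認值算一筆賬(80mb只是假設的基準值):假設上一次重寫後AOF文件是80mb,
# 它已經超過了auto-aof-rewrite-min-size(64mb);當文件增長到
# 80mb * (1 + 100/100) = 160mb 時,相比基準增長了100%,
# 達到auto-aof-rewrite-percentage(100),就會自動觸發BGREWRITEAOF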

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
# 如果運行Redis的OS崩潰掉,特別是ext4文件系統掛載時沒有使用"data=ordered"選項的情況下,
# AOF文件末尾可能是被截斷(不完整)的(如果只是Redis自身崩潰或退出而OS仍正常工作,則不會發生這種情況)
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
# 針對損壞的AOF文件,在重啓Redis的時候,支持兩種方式
# 1.發現文件損壞,直接報錯
# 2.儘可能的從找到的截斷(損壞)文件中恢復數據到內存中,這是Redis的默認方式
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# 如果"aof-load-truncated"被設置爲yes,且發現了被截斷的AOF文件,那麼Redis啓動時會在日誌或控制檯中輸出相應信息,讓運維人員或者監控看到這條信息
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
# 如果將"aof-load-truncated"設置no,且發現了被截斷的AOF文件,重啓Redis會報錯,這個時候就需要借用redis-check-aof工具修復AOF文件
# 其實在主從模式下,是否可以到從節點拿AOF文件進行恢復,好像這個方法是多想了,因爲哨兵、Codis、Cluster模式會自動進行故障轉移,只有單機和純主從模式也許這種方式可以嘗試,但是現在的企業至少應該是哨兵模式了,大企業都用Cluster了或者豌豆莢搞的Codis
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
# 如果AOF文件在文件中間損壞了,即使"aof-load-truncated"設置爲yes,重啓Redis一樣會報錯且退出啓動
# 這個選項只適合AOF被截斷的情況,也就是AOF沒有足夠的字節
aof-load-truncated yes
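
# 如果這裏設置爲no且啓動時報AOF損壞,可以先用自帶工具修復再啓動(文件名按實際配置):
#
# redis-check-aof --fix appendonly.aof    (在shell中執行)
#
# 該工具會把文件截斷到最後一條完整的命令,被截掉的那部分數據會丟失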

# 混合持久化,Redis 4提供的新功能
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
# 如果"aof-use-rdb-preamble"設置爲yes,那麼AOF文件由"rdb file"+"aof tail"兩部分組成,這種組合方式可以發揮RDB持久化加載速度快和壓縮存儲使用空間小的優勢,與AOF持久化丟失數據小於1S的優勢
#
#   [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
# 該混合持久化方式下的AOF文件以"REDIS"字符串開頭,Redis據此識別並先加載前面的RDB內容,再繼續加載後面的AOF內容

aof-use-rdb-preamble yes

################################ LUA SCRIPTING LUA腳本 ###############################
# LUA腳本我沒有研究過,簡單說下:這個配置項設置LUA腳本的最大執行時間(單位毫秒)
# 另外LUA腳本執行是原子的,因此可以用它做一些特殊的實現,不過就像Oracle的存儲過程一樣,維護不方便,畢竟這個腳本語言會的人太少了
# 如果確實有需要,在考慮運維的情況下可以使用它來實現原子性等操作,慎用
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# 如果設置爲0或者負值,表示不限制執行時間
lua-time-limit 5000
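
# 腳本超過lua-time-limit後,Redis只是記日誌並對其他請求回錯誤,腳本本身並不會被殺掉,
# 此時只有兩個命令可用(示例操作,在redis-cli中執行):
#
# SCRIPT KILL        僅當腳本還沒有執行過寫命令時可用
# SHUTDOWN NOSAVE    腳本已經寫過數據時,只能不落盤地關掉服務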

################################ REDIS CLUSTER 集羣 ###############################
# 在看下面的內容之前建議先去百度一下redis hash slots,以及集羣的架構圖
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# 雖然Redis Cluster被認爲是穩定的,但是依然需要大量的用戶在生產環境中使用它。。。這段註釋應該從redis.conf中刪除了,全世界已經有知名的大企業使用了Redis Cluster
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
# 將"cluster-enabled"設置爲yes,redis instance才能成爲集羣的一部分,但集羣要真正開始工作,還需要將
# 所有的slots分配給cluster node
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
# 每個cluster node有自己的cluster configuration file,且該配置文件不能手工編輯,而是自動創建和更新的
# cluster configuration file不能重名
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
# 集羣節點在"cluster-node-timeout"規定的超時時間內,如果不可達,則被認爲是失敗狀態
# 注意:集羣內的大多數其他內部時間限制是"cluster-node-timeout"的倍數
#
# cluster-node-timeout 15000

# A replica of a failing master will avoid to start a failover if its data
# looks too old.
# 如果一個掉線主節點的從節點數據太老了,是不允許參與故障轉移的
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
# 沒得撒子簡單的辦法可以一下計算出數據的年齡,因此Redis提供下面的兩點來校驗數據年齡,以決定集羣節點是否參與故障轉移過程:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#    根據從節點的偏移量(主從複製-複製積壓緩衝區裏面的偏移量,這個偏移量會跟着命令發給從節點並保存下來)判斷誰的數據最新,並據此給從節點排名,排名越靠前的從節點發起故障轉移的延遲越小
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#    每個從節點都會計算它與主節點最後一次交互時間,比如最後一次ping時間、最後一次接收命令時間、與主節點斷開連接過去的時長
#    如果最後一次交互時間太長,那麼這個從節點也不會參與故障轉移過程
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
# 前面講到的第2點有一個計算公式來衡量"最後一次交互時間"是否太長
#
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
# 假設"cluster-node-timeout"是30S,"replica-validity-factor"是10,"repl-ping-replica-period"是10S
# 如果"最後一次交互"時間超過"30*10+10=310"就被認爲太長,而不能參與故障轉移
#
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
# "replica-validity-factor"設置得太大,數據太舊的從節點也可能被允許去做故障轉移;設置得太小則可能導致一個從節點都選不出來,集羣不可用,所以要根據實際情況設置
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
# 如果爲了保證最大的可用性,可以將"cluster-replica-validity-factor"設置爲0。此時從節點不考慮最後一次交互時間,總是會嘗試參與故障轉移(但仍然會按偏移量排名施加一個成比例的延遲)
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
# "0"是唯一能保證當所有網絡分區恢復後,集羣總是能夠繼續運行的取值
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
# 如果沒有下面的副本遷移機制,一個主節點可能會失去所有可用的從節點而變成"孤立主節點",此時它再發生故障,因爲沒有候選從節點,故障轉移就無法完成。
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
# 爲了避免上面的情況發生,Redis Cluster支持把從節點遷移給孤立主節點,但前提是該從節點的原主節點在遷移後仍至少保留"cluster-migration-barrier"個正常工作的從節點。該值默認是"1",通常反映了你希望集羣中每個主節點擁有的從節點數量
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
# 默認值是1,如果要想禁止"migration",可以將"cluster-migration-barrier"設置爲一個超大的值
# 爲了調試,或者你願意讓系統冒着高風險運行,也可以設置爲0,但生產環境中這樣做很危險。。。no zuo no die
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
# Redis Cluster默認情況下,只要檢測到有一個hash slot沒有被覆蓋(沒有可用節點負責它),整個集羣就停止接受查詢
# 在這種模式下,一旦集羣部分宕機(比如一段hash slots不再被覆蓋),整個集羣最終會變得不可用,直到所有hash slots重新被覆蓋,集羣纔會自動恢復可用
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
# 也許有時你想在出現hash slots未被覆蓋的情況下,集羣中仍正常的那部分節點可以繼續對它們負責的那部分鍵空間提供查詢服務,此時可以將"cluster-require-full-coverage"設置爲no
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
# 如果將"cluster-replica-no-failover"設置爲yes,那麼該集羣從節點不會參與自動故障轉移過程,但是可以手動強制執行故障轉移
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
# 在一些場景下非常有用,特別是多數據中心部署時,我們希望其中一側的從節點永遠不被提升爲主節點,除非整個數據中心都故障了
#
# cluster-replica-no-failover no

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380

################################## SLOW LOG 慢日誌 ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
# 記錄Redis執行耗時超過指定值的命令。這個"耗時"僅指實際執行命令的時間(這是命令執行過程中唯一會阻塞線程、無法同時服務其他請求的階段),不包括與客戶端的網絡IO,比如讀取請求、發送響應等
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# 可以使用"slowlog-log-slower-than"指定耗時的閾值(單位是微秒),一旦命令執行超過這個時間就會被記錄下來
# 可以使用"slowlog-max-len"指定隊列長度,隊列滿了之後,每記錄一條新命令,最老的一條會被移除
#
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
# 單位是微秒,所以1000000相當於1秒。注意:設置爲負值會關閉慢日誌,而設置爲0則會強制記錄每一條命令
#
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# 最大值沒有限制,我們只需要考慮內存是否足夠大
# You can reclaim memory used by the slow log with SLOWLOG RESET.
# 可以使用slowlog reset回收已使用的內存
slowlog-max-len 128
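
# 慢日誌的日常用法示例(在redis-cli中執行):
#
# SLOWLOG GET 10    查看最近10條慢查詢
# SLOWLOG LEN       查看當前隊列中的慢查詢條數
# SLOWLOG RESET     清空慢日誌並回收其佔用的內存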

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
# 延遲監控子系統通過採集運行時的不同操作去收集造成Redis實例延遲的相關可能來源
#
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
# 可以通過latency命令獲得可用信息的圖表和報告,比如latency doctor/latency graph等
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
# 該監控子系統只會記錄那些耗時>="latency-monitor-threshold"指定的值對應的操作,如果設置爲0,表示關閉延時監控
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
# Redis默認是關閉延遲監控的,因爲絕大多數情況下用不着,而且採集數據有一定的性能損失(雖然很小,但在高負載下可以被測量出來)
# 當Redis運行時,可以通過config set latency-monitor-threshold xxx輕鬆開啓監控
latency-monitor-threshold 0
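
# 延遲監控的使用示例(在redis-cli中執行,100毫秒只是示例閾值):
#
# CONFIG SET latency-monitor-threshold 100   運行時開啓,記錄耗時>=100ms的事件
# LATENCY LATEST                             查看各事件最近一次及最大延遲
# LATENCY DOCTOR                             讓Redis給出人類可讀的分析建議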

############################# EVENT NOTIFICATION ##############################
# 下面的條件說明很多看上去挺複雜的,其實很簡單:就是多個字符代表的意思組合到一起而已
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
# Redis可以將關於"鍵空間(簡單理解爲Hash表中的鍵值對)"發生的事件以通知的形式發送給Pub/Sub客戶端
# 更詳細的請參考Redis的官方文檔:http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
# 如果通過配置開啓了鍵空間和鍵時間的通知,如果通過客戶端在第0號database上執行一個DEL foo操作,那麼會
# 發佈兩條消息
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
# 我們可以通過組合下面的分類將事件通知發給客戶端
#  "K"和"E"代表兩大類,無論怎麼組合,必須至少有其中一個,也可以兩個同時選擇
#  K代表Keyspace通知:以__keyspace@<db>__:key爲頻道發佈,消息內容是事件名,適合關注"某個key上發生了哪些事件"
#  E代表Keyevent通知:以__keyevent@<db>__:event爲頻道發佈,消息內容是key名,適合關注"某類事件發生在了哪些key上"
#  如果看到這裏還沒明白,建議去百度一下,推薦一個:http://redisdoc.com/topic/notification.html#id1
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#
#  g代表與具體數據類型無關的通用命令,比如DEL EXPIRE RENAME等等
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#
#  下面的$ l s h z 分別代表大家熟知的5種數據類型
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#
#  x 代表過期事件  e 代表內存使用超過maxmemory時KEY被淘汰的事件
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#
#  A 是一個別名,代表了"g$lshzxe"的組合,可以增強閱讀性
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" takes as argument a string that is composed
#  of zero or multiple characters. The empty string means that notifications
#  are disabled.
#  "notify-keyspace-events" takes zero or more of these characters; an empty string disables the feature entirely.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
#  Since enabling this feature has some overhead and most users do not need it, it is disabled by default, so no notifications are sent.
notify-keyspace-events ""
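#
# For example (a hypothetical session, not part of the stock file): enable
# expired-key events at runtime and watch them arrive in a subscriber:
#
#   redis-cli config set notify-keyspace-events Ex
#   redis-cli psubscribe '__keyevent@*__:expired'
#   (in another terminal)  redis-cli set foo bar px 100
#   the subscriber then receives a pmessage on __keyevent@0__:expired with payload "foo"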

############################### ADVANCED CONFIG ###############################
# The settings below require a fairly clear understanding of Redis internals, in particular the underlying data structures of the five data types. In short, they decide under which conditions each of the five types uses a compact encoding for your keys and values.
# The most common memory-efficient structures are ziplist and intset, but they are usually only suitable when the entries are few and each entry is small.
# To learn this material, the books "Redis设计与实现" (Redis Design and Implementation) and "Redis深度历险" are worth reading: the former explains the internals at length and in depth, but covers a rather old Redis version; the latter complements it on the internals, targets a much newer Redis (already version 5), and adds a lot of hands-on material.
#
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
#
# For the hash type: if a hash has fewer than 512 entries and no entry exceeds 64 bytes, a ziplist is used as its underlying structure.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Newer Redis versions optimized the list type's underlying structure into a "linked list of ziplists" (quicklist), somewhat like the "array + linked list/red-black tree" idea in Java's HashMap.
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# "list-max-ziplist-size" caps each ziplist node of the list: the value can be a number of entries, or a maximum byte size.
# For a fixed maximum size, use -5 through -1, meaning:
# The five possible negative values are listed below; -1 and -2 are recommended, the others only for special needs:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
#
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# In contrast to the byte-based negative values above, a positive value sets the exact maximum number of entries per list node.
#
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
# -1 and -2 give the best performance.
list-max-ziplist-size -2
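#
# For example (a hypothetical override, not in the stock file): to cap each
# list node at exactly 128 entries rather than by byte size, you could use:
#
# list-max-ziplist-size 128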

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  Settings are:
# 0: disable all list compression
#    compress no nodes at all
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
#    i.e. compress every node except the head and the tail
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
#    by analogy: everything except the first two and last two nodes is compressed
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    and so on
# etc.
# The default depth is 0, i.e. no compression. Whatever the setting, the head and tail are never compressed: when the list is used as a queue, compressed ends would have to be decompressed on every push/pop, hurting performance.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# When a set contains only strings that are base-10 integers fitting in a 64-bit signed integer, and it has at most 512 entries, an intset is used as the set's underlying structure.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similar to the hash type: if a sorted set has at most 128 entries and each entry is under 64 bytes, a ziplist is used as its underlying structure.
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
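#
# A quick way to see these encodings in action (a hypothetical redis-cli
# session against Redis 5; key names are made up):
#
#   redis-cli hset h f v         then: OBJECT ENCODING h    => "ziplist"
#   redis-cli sadd nums 1 2 3    then: OBJECT ENCODING nums => "intset"
#   redis-cli sadd nums abc      then: OBJECT ENCODING nums => "hashtable"
#   redis-cli zadd z 1 a         then: OBJECT ENCODING z    => "ziplist"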

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
# HyperLogLog is an advanced Redis feature: it can, for example, estimate a site's UV (unique visitors) with deduplication, at an accuracy close to the true value.
# Simply put: while the structure stays under "hll-sparse-max-bytes" it is stored in the sparse representation, whose per-key footprint is small; once it grows past that limit it is converted to the dense representation, where a key occupies 12 KB.
# The default is 3000; anything above 16000 is pointless, because at that size the dense representation is already more memory efficient.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entires limit by setting max-bytes to 0 and max-entries to the desired
# value.
# These set the maximum byte size and maximum number of entries of a single stream node; when either limit is reached, new entries are appended to a new node.
# Setting either value to 0 removes that limit.
stream-node-max-bytes 4096
stream-node-max-entries 100
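#
# For example (a hypothetical override, not in the stock file): a stream of
# many small entries could trade slightly bigger nodes for fewer of them:
#
# stream-node-max-bytes 8192
# stream-node-max-entries 200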

# A Redis database stores key/value pairs in a dict holding an array of two hash tables (call it "ht"); rehashing moves all data from one table (ht[0]) into the other (ht[1]). The rehash is lazy: since Redis must keep serving reads and writes, it cannot do the whole rehash in one go (unlike Java's HashMap); steps are performed when keys are accessed or when the CPU is fairly idle, hence the name "progressive rehashing".
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
# The default is to spend 1 millisecond out of every 100 (i.e. 10 times per second) actively rehashing, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
# If the system has hard latency requirements and an occasional reply delayed by 2 milliseconds is not acceptable in your environment, set "activerehashing" to no.
# For most systems that trade-off is not worth it, so this is not recommended; just leave the default alone.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
# If you have no such hard requirement, keep "activerehashing" set to yes, so memory is freed as soon as possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
# Output buffer limits can be used to disconnect clients whose pending output exceeds the buffer size,
# typically Pub/Sub clients that cannot consume messages as fast as the publisher produces them.
#
# The limit can be set differently for the three different classes of clients:
# Three client classes can be configured separately: normal clients, replication (replica) clients, and pub/sub clients; each class gets its own output buffer limits.
#
# normal -> normal clients including MONITOR clients
# replica  -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
# The syntax for setting the buffer limits of each of the three classes is:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# If a client's output buffer reaches the "hard limit", the server disconnects it immediately.
# If it reaches the "soft limit" and stays there for "soft seconds" continuously, the server disconnects it as well.
#
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
# For example, with a 32 MB hard limit and a 16 MB / 10 s soft limit, a client is disconnected immediately at 32 MB, and also if it stays above 16 MB for 10 consecutive seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
# By default normal clients are not limited, because they only receive data after requesting it rather than being pushed to; only asynchronous clients (such as replicas and Pub/Sub subscribers) can build up a backlog, which happens when the client consumes more slowly than the data is produced.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
# Either the hard or the soft limit can be disabled by setting it to 0.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
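#
# For example (a hypothetical runtime override, not in the stock file): give
# pubsub clients a 64 MB hard limit and a 16 MB / 90 s soft limit on the fly:
#
#   redis-cli config set client-output-buffer-limit "pubsub 64mb 16mb 90"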

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such us huge multi/exec requests or alike.
# The query buffer accumulates incoming commands. By default it has a fixed cap, so that a protocol desynchronization (e.g. caused by a client bug) cannot lead to unbounded memory use in the query buffer.
# If you have very special needs, such as huge MULTI/EXEC requests, you can raise it here.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
# A bulk request (a single string sent by the client in one command) is limited to 512 MB; "proto-max-bulk-len" changes that limit,
# though I will probably never need to touch it.
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
# In short: Redis runs background tasks, and "hz" raises or lowers how often it checks whether those tasks should run; a higher value costs more CPU, a lower one less.
# The range is 1 to 500, but going above 100 is usually a bad idea. Most users should keep the default of 10, raising it towards 100 only when very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
# (The English above is rather long-winded; in short:)
# The "hz" setting above is a fixed value, so when a great many clients are connected, a baseline of 10 may cause noticeable latency. Redis therefore provides "dynamic-hz":
# when set to yes, the configured "hz" is used as a baseline and multiples of it are applied as more clients connect, reducing latency when busy; with few clients it drops back down, using very little CPU.
# The default is yes; there is normally no need to touch it, and with it in place no need to touch "hz" either.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# When a child process rewrites the AOF file, if "aof-rewrite-incremental-fsync" is yes, the OS fsync function is called once per 32 MB of generated data. Committing to disk incrementally like this lowers latency spikes, since it spreads out the fsync calls and I/O requests.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# Same idea as "aof-rewrite-incremental-fsync", but applied when generating RDB files.
# With mixed persistence, where the AOF file consists of an RDB part plus an AOF part, I expect both "aof-rewrite-incremental-fsync" and "rdb-save-incremental-fsync" come into play.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# Redis's LFU implementation has two tunable parameters: the counter logarithm factor and the counter decay time.
# Fully understand both before changing them; if in doubt, leave them alone, and only change them after investigating with the "OBJECT FREQ" command how the keys' LFU evolves and how performance could improve.
# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
# The principle behind these two parameters was covered in the maxmemory section, so it is not repeated here.
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
# The default "lfu-log-factor" is 10. The table below shows how the frequency counter changes with different access counts under different logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
# The table above can be reproduced with the commands below:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
# The counter's initial value is 5, to give new objects a chance to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
# The counter decay time is the number of minutes that must elapse for a key's counter to be halved (or decremented by 1 if its value is <= 10).
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# decay the counter every time it happens to be scanned.
# The default "lfu-decay-time" is 1; the special value 0 decays the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1

########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
# This is still an experimental feature, but, much like Redis Cluster, it already has plenty of users in practice.
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
# Active defragmentation lets the Redis server compact the gaps left in memory by allocations and deallocations of data, reclaiming memory, much like the Windows disk defragmenter.
# Fragmentation occurs naturally with every allocator (fortunately much less so with Jemalloc) and certain workloads.
# Normally lowering fragmentation requires a server restart, or at least flushing all data and recreating it; thanks to this feature, implemented by Oran Agra for Redis 4.0, the process can now happen "hot", while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
# Basically, when fragmentation exceeds a certain level (see the settings below), Redis uses specific Jemalloc features to create new copies of the values in contiguous memory regions, and releases the old copies once the copy is done.
# The process runs incrementally over all the fragmenting keys; Redis does everything progressively like this, so hats off to the designers.
#
# Important things to understand:
# Three important points to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#    By default the feature is disabled, and it only works if Redis was compiled against the copy of Jemalloc shipped with its source code (the default for Linux builds).
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#    If you have no fragmentation problems, you never need to enable it.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#    Once you do experience fragmentation, it can be enabled at runtime with "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# The parameters below fine-tune the defragmentation process; if you are unsure what they mean, it is best to leave the defaults untouched.

# Enable active defragmentation
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# (defrag will not start until at least this many bytes are wasted to fragmentation)
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# (defrag will not start below this fragmentation percentage)
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# (at or above this fragmentation percentage, defrag runs at maximum effort)
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# (the minimum share of CPU time spent on defragmentation)
# active-defrag-cycle-min 5

# Maximal effort for defrag in CPU percentage
# (the maximum share of CPU time spent on defragmentation)
# active-defrag-cycle-max 75

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# (a set/hash/zset/list with no more fields than this is processed inline during the main dictionary scan; bigger ones are put on a list and handled later, incrementally,
# because defragmenting a huge key in one shot would be very time-consuming)
# active-defrag-max-scan-fields 1000
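#
# For example (a hypothetical session): enable defrag at runtime, then watch
# the fragmentation ratio reported by INFO come back down:
#
#   redis-cli config set activedefrag yes
#   redis-cli info memory | grep mem_fragmentation_ratio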


~~~
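The probabilistic LFU counter increment described in the LFU section above can be sketched in a few lines of Python. This is a minimal illustration of the three steps listed in the config comments, not Redis's actual C implementation, and `lfu_increment` is a made-up name:

~~~python
import random

def lfu_increment(counter: int, lfu_log_factor: int = 10) -> int:
    """One probabilistic increment of the 8-bit LFU counter, following the
    three steps in the config comments."""
    if counter >= 255:                         # the counter is a single byte: it saturates at 255
        return 255
    r = random.random()                        # step 1: random R between 0 and 1
    p = 1.0 / (counter * lfu_log_factor + 1)   # step 2: P shrinks as the counter grows
    return counter + 1 if r < p else counter   # step 3: increment only if R < P

# New keys start at 5 (see NOTE 2). With the default factor of 10, even many
# hits move the counter only slowly, matching the table in the file above.
counter = 5
for _ in range(100):
    counter = lfu_increment(counter)
~~~

Because the increment probability falls as the counter rises, frequently and rarely accessed keys still end up clearly separated, which is all the eviction policy needs.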

Reference: https://my.oschina.net/u/3049601/blog/3163953
