Codis 3.2 Cluster Installation

Building a Redis 3.2.8 cluster with Codis 3.2


1: Codis is a distributed Redis solution. To upstream applications, connecting to a Codis Proxy is essentially no different from connecting to a native Redis server (see the list of unsupported commands: https://github.com/CodisLabs/codis/blob/release3.1/doc/unsupported_cmds.md). Applications can use it just like a standalone Redis; underneath, Codis handles request forwarding, online data migration, and so on. Everything behind the proxy is transparent to the client, which can simply treat the backend as a Redis service with effectively unlimited memory.
Codis is an open-source project from Wandoujia. Compared with other Redis clustering approaches, Codis is a relatively stable solution that requires no client-side changes and offers better compatibility than Redis Cluster, saving substantial development and later maintenance cost. Wandoujia's GitHub: https://github.com/pingcap; the official Codis repository: https://github.com/CodisLabs/codis. Codis has the following characteristics:


Seamless migration to Codis with a bundled migration tool and many published case studies
Dynamic scale-out and scale-in
Fully transparent to applications; the business code does not know it is running against Codis
Uses multiple CPU cores, whereas twemproxy is single-core only
Codis is a centralized, proxy-based design; the client talks to the proxy exactly as it would to a standalone Redis
A few commands are not supported, such as keys *
Supports group partitioning: a group has one master and one or more slaves; sentinel monitors the Redis master/slave pairs and automatically promotes a slave when the master goes down
The number of proxy processes should be at most the number of CPU cores, never more
Depends on ZooKeeper, which stores the routing information for which Redis host holds each key, so ZooKeeper itself must be made highly available
Monitoring is available via the HTTP API and the dashboard


1.1: Install the Go environment (Codis is written in Go):
1.1.1: Architecture:
codis-proxy is, to the client, equivalent to Redis: connecting to codis-proxy is no different from connecting to Redis. The proxy is stateless and does not itself record where data is stored; that routing metadata lives in ZooKeeper. The proxy queries ZooKeeper for a key's slot location and forwards the request to a group for processing; a group consists of one master and one or more slaves. Codis has 1024 slots by default (Redis Cluster defaults to 16384), and different slots are placed in different groups.
Deployment environment: 3 servers
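The slot routing described above can be sketched directly: Codis hashes each key to one of the 1024 slots (CRC32 of the key modulo 1024) and looks up which group owns that slot. A minimal illustration; the slot-to-group assignment below is invented for the example, not taken from a real cluster:

```python
from zlib import crc32

NUM_SLOTS = 1024  # Codis default (Redis Cluster uses 16384)

def slot_of(key: bytes) -> int:
    # Codis maps a key to a slot via CRC32 over the key, modulo 1024
    return crc32(key) % NUM_SLOTS

# hypothetical assignment: slots 0-511 -> group 1, slots 512-1023 -> group 2
def group_of(key: bytes) -> int:
    return 1 if slot_of(key) < 512 else 2

for k in (b"user:1001", b"session:abc"):
    print(k, slot_of(k), group_of(k))
```

The real mapping is maintained by the dashboard and stored in ZooKeeper; the proxy consults it when forwarding each request.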


1.1.2: Codis is written in Go, so install the Go language environment:

# cd /usr/local/src
[root@node1 src]# yum install -y gcc glibc gcc-c++ make git
[root@node1 src]# wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz
[root@node1 src]# tar zxf go1.7.3.linux-amd64.tar.gz
[root@node1 src]# mv go /usr/local/
[root@node1 src]# mkdir /usr/local/go/work
[root@node1 src]# vim /root/.bash_profile


export GOROOT=/usr/local/go
export GOPATH=/usr/local/go/work
export PATH=$PATH:$HOME/bin:$GOROOT/bin:$GOPATH/bin
[root@node1 src]# source /root/.bash_profile


[root@node1 src]# echo $GOPATH
/usr/local/go/work
[root@node1 ~]# go version
go version go1.7.3 linux/amd64
 

1.1.3: Install the Java environment and ZooKeeper on every server; ZooKeeper is written in Java, and an ensemble needs at least 3 servers (5 recommended):

# tar zxf jdk-8u131-linux-x64.gz
# mv jdk1.8.0_131 /usr/local/
Add the environment variables:


# vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=$JAVA_HOME/bin:$PATH


[root@node1 jdk1.8.0_131]# source /etc/profile
[root@node1 jdk1.8.0_131]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@node1 jdk1.8.0_131]# echo $JAVA_HOME
/usr/local/jdk1.8.0_131


# tar zxf zookeeper-3.4.6.tar.gz 
# mv zookeeper-3.4.6 /usr/local/

[root@node1 src]# ln -sv /usr/local/zookeeper-3.4.6/ /usr/local/zookeeper
‘/usr/local/zookeeper’ -> ‘/usr/local/zookeeper-3.4.6/’

[root@node1 src]# cd /opt
[root@node1 opt]# mkdir zk1 zk2 zk3 # prepare the ZooKeeper instance directories; each instance's server ID (myid) is different

[root@node1 opt]# echo 1 > zk1/myid
[root@node1 opt]# echo 2 > zk2/myid
[root@node1 opt]# echo 3 > zk3/myid

[root@node1 opt]# cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /opt/zk1/zk1.cfg
# Prepare the configuration files
# Configuration for the first ZooKeeper instance:


[root@redis1 opt]# grep "^[a-Z]" /opt/zk1/zk1.cfg
tickTime=6000    # heartbeat interval between servers and between client and server, in milliseconds: 6000 ms = 6 s
initLimit=10    # ticks allowed for a follower's initial connection/sync with the leader: 10 * 6000 ms = 60 s
syncLimit=10    # max number of ticks for a request-and-reply exchange between leader and follower
dataDir=/opt/zk1 # data directory
clientPort=2181 # client connection port


# cluster data-exchange/election ports and server IDs
server.1=192.168.3.198:2887:3887
server.2=192.168.3.198:2888:3888
server.3=192.168.3.198:2889:3889
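The three server.N entries above form the voting ensemble (here a pseudo-cluster on one host with distinct ports). ZooKeeper only keeps serving while a majority (quorum) of the ensemble is alive, which is why 3 servers are the minimum and 5 are recommended; a quick sketch of the arithmetic:

```python
def quorum(n: int) -> int:
    """Smallest majority of an n-server ZooKeeper ensemble."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Servers that can fail while the ensemble still holds a quorum."""
    return n - quorum(n)

for n in (3, 5):
    print(f"{n} servers: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

A 3-server ensemble tolerates one failure; 5 servers tolerate two. An even-sized ensemble tolerates no more failures than the next smaller odd size, which is why odd counts are preferred.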

1.1.4: Configure the second ZooKeeper instance:
# Each instance has its own configuration file and data directory:
[root@node1 opt]# cp /opt/zk1/zk1.cfg /opt/zk2/zk2.cfg
[root@node1 opt]# grep "^[a-Z]" /opt/zk2/zk2.cfg


tickTime=6000
initLimit=20
syncLimit=10
dataDir=/opt/zk2 # changed: this instance's data directory
clientPort=2182    # changed: this instance's listen port
server.1=192.168.3.198:2887:3887
server.2=192.168.3.198:2888:3888
server.3=192.168.3.198:2889:3889
 
1.1.5: Configure the third ZooKeeper instance:
# Each instance has its own configuration file and data directory:


[root@node1 opt]# cp /opt/zk1/zk1.cfg /opt/zk3/zk3.cfg
[root@node1 opt]# vim /opt/zk3/zk3.cfg 
[root@node1 opt]# grep "^[a-Z]" /opt/zk3/zk3.cfg 
tickTime=6000
initLimit=20
syncLimit=10
dataDir=/opt/zk3    # changed: this instance's data directory
clientPort=2183    # changed: this instance's listen port
server.1=192.168.3.198:2887:3887
server.2=192.168.3.198:2888:3888
server.3=192.168.3.198:2889:3889
 
1.1.6: Parameter details:

tickTime: the basic time unit, in milliseconds, used to keep heartbeats between ZooKeeper servers and between clients and servers; a heartbeat is sent every tickTime (6000 ms = 6 s here).

dataDir: the directory where ZooKeeper stores its data; by default the transaction log files are also written here.

clientPort: the port ZooKeeper listens on for client connections and accepts client requests.

initLimit: the maximum number of tickTime intervals a follower (the "client" here means a follower server in the ensemble connecting to the leader, not an application client) may take to complete its initial connection and sync. If the leader has received no response after that many heartbeats, the connection is considered failed. With initLimit=10 the total is 10 * 6000 ms = 60 s.

syncLimit: the maximum number of tickTime intervals allowed for a request-and-reply exchange between the leader and a follower. With syncLimit=10 the total is 10 * 6000 ms = 60 s.

server.A=B:C:D: A is a number identifying which server this is; B is the server's IP address; C is the port this server uses to exchange information with the ensemble leader; D is the port used to run a new leader election should the current leader fail. In a pseudo-cluster the B values are all the same host, so each ZooKeeper instance must be given distinct C and D ports.
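As a sanity check on the limits above, the effective timeouts implied by the sample zk1.cfg can be computed directly:

```python
# values taken from zk1.cfg above
tick_time_ms = 6000
init_limit = 10
sync_limit = 10

init_timeout_s = tick_time_ms * init_limit / 1000   # follower initial-sync budget
sync_timeout_s = tick_time_ms * sync_limit / 1000   # leader/follower exchange budget
print(init_timeout_s, sync_timeout_s)  # 60.0 60.0
```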

1.1.7: Start each ZooKeeper instance:


[root@node1 opt]# /usr/local/zookeeper/bin/zkServer.sh start /opt/zk1/zk1.cfg
[root@node1 opt]# /usr/local/zookeeper/bin/zkServer.sh start /opt/zk2/zk2.cfg
[root@node1 opt]# /usr/local/zookeeper/bin/zkServer.sh start /opt/zk3/zk3.cfg

[root@node1 opt]# ss -tnlp|grep 218*
LISTEN 0 50 :::2181 :::* users:(("java",pid=2893,fd=24))
LISTEN 0 50 :::2182 :::* users:(("java",pid=3055,fd=24))
LISTEN 0 50 :::2183 :::* users:(("java",pid=3099,fd=24))

1.1.9: Check the status of each ZooKeeper node:

[root@node1 opt]# /usr/local/zookeeper/bin/zkServer.sh status /opt/zk1/zk1.cfg 
JMX enabled by default
Using config: /opt/zk1/zk1.cfg
Mode: follower # follower node
[root@node1 opt]# /usr/local/zookeeper/bin/zkServer.sh status /opt/zk2/zk2.cfg 
JMX enabled by default
Using config: /opt/zk2/zk2.cfg
Mode: leader # leader node
[root@node1 opt]# /usr/local/zookeeper/bin/zkServer.sh status /opt/zk3/zk3.cfg 
JMX enabled by default
Using config: /opt/zk3/zk3.cfg
Mode: follower # follower node

1.1.10: Test connecting to a ZooKeeper node:

[root@node1 opt]# /usr/local/zookeeper/bin/zkCli.sh -server 192.168.10.101:2181
Connecting to 192.168.10.101:2181
2017-05-12 17:27:41,481 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-05-12 17:27:41,485 [myid:] - INFO [main:Environment@100] - Client environment:host.name=www.chinasoft.com
2017-05-12 17:27:41,485 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_131
2017-05-12 17:27:41,488 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-05-12 17:27:41,488 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/local/jdk1.8.0_131/jre
2017-05-12 17:27:41,488 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper/bin/../build/classes:/usr/local/zookeeper/bin/../build/lib/*.jar:/usr/local/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/usr/local/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/bin/../zookeeper-3.4.6.jar:/usr/local/zookeeper/bin/../src/java/lib/*.jar:/usr/local/zookeeper/bin/../conf:..:/usr/local/jdk/lib:/usr/local/jdk/jre/lib:/usr/local/jdk/lib/tools.jar:/usr/local/jdk/lib:/usr/local/jdk/jre/lib:/usr/local/jdk/lib/tools.jar
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-514.el7.x86_64
2017-05-12 17:27:41,489 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2017-05-12 17:27:41,490 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2017-05-12 17:27:41,490 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/opt
2017-05-12 17:27:41,491 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=192.168.10.101:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@799f7e29
Welcome to ZooKeeper!
2017-05-12 17:27:41,534 [myid:] - INFO [main-SendThread(192.168.10.101:2181):ClientCnxn$SendThread@975] - Opening socket connection to server 192.168.10.101/192.168.10.101:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
[zk: 192.168.10.101:2181(CONNECTING) 0]


1.1.11: Get ZooKeeper command-line help:
[zk: 192.168.10.101:2181(CONNECTING) 0] help
ZooKeeper -server host:port cmd args
stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history 
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
rmr path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit 
getAcl path
close 
connect host:port
[zk: 192.168.10.101:2181(CONNECTING) 1]
 
1.1.12: Download Codis 3.2:

# mkdir -p $GOPATH/src/github.com/CodisLabs
[root@node1 work]# cd /usr/local/go/work/src/github.com/CodisLabs
[root@node1 CodisLabs]# git clone https://github.com/CodisLabs/codis.git -b release3.2


[root@node1 CodisLabs]# cd $GOPATH/src/github.com/CodisLabs/codis
[root@node1 codis]# pwd
/usr/local/go/work/src/github.com/CodisLabs/codis
 
1.1.13: Run make to build:
Install the build dependencies first:

[root@node1 codis]# yum install autoconf automake libtool -y
[root@node1 codis]# make
make -j4 -C extern/redis-3.2.8/
make[1]: Entering directory `/usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8'
cd src && make all
make[2]: Entering directory `/usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8/src'
...
lazy_lock : 0
tls : 1
cache-oblivious : 1
===============================================================================
go build -i -o bin/codis-dashboard ./cmd/dashboard
go build -i -tags "cgo_jemalloc" -o bin/codis-proxy ./cmd/proxy
go build -i -o bin/codis-admin ./cmd/admin
go build -i -o bin/codis-fe ./cmd/fe
 
1.1.14: Run the test suite:

[root@node1 codis]# make gotest
go test ./cmd/... ./pkg/...
? github.com/CodisLabs/codis/cmd/admin    [no test files]
? github.com/CodisLabs/codis/cmd/dashboard    [no test files]
? github.com/CodisLabs/codis/cmd/fe    [no test files]
? github.com/CodisLabs/codis/cmd/proxy    [no test files]
? github.com/CodisLabs/codis/pkg/models    [no test files]
? github.com/CodisLabs/codis/pkg/models/etcd    [no test files]
? github.com/CodisLabs/codis/pkg/models/fs    [no test files]
? github.com/CodisLabs/codis/pkg/models/zk    [no test files]
ok github.com/CodisLabs/codis/pkg/proxy    2.525s
ok github.com/CodisLabs/codis/pkg/proxy/redis    0.530s
ok github.com/CodisLabs/codis/pkg/topom    6.560s
ok github.com/CodisLabs/codis/pkg/utils    0.009s
? github.com/CodisLabs/codis/pkg/utils/assert    [no test files]
ok github.com/CodisLabs/codis/pkg/utils/bufio2    0.006s
ok github.com/CodisLabs/codis/pkg/utils/bytesize    0.004s
? github.com/CodisLabs/codis/pkg/utils/errors    [no test files]
? github.com/CodisLabs/codis/pkg/utils/log    [no test files]
ok github.com/CodisLabs/codis/pkg/utils/math2    0.002s
? github.com/CodisLabs/codis/pkg/utils/redis    [no test files]
? github.com/CodisLabs/codis/pkg/utils/rpc    [no test files]
? github.com/CodisLabs/codis/pkg/utils/sync2    [no test files]
? github.com/CodisLabs/codis/pkg/utils/sync2/atomic2    [no test files]
ok github.com/CodisLabs/codis/pkg/utils/timesize    0.009s
? github.com/CodisLabs/codis/pkg/utils/trace    [no test files]
ok github.com/CodisLabs/codis/pkg/utils/unsafe2    0.003s
1.1.15: After all the commands finish, the bin directory contains the executables codis-admin, codis-dashboard, codis-fe, codis-proxy, and codis-server; the bin/assets directory holds the front-end resources needed by the dashboard's HTTP service.


[root@node1 codis]# ll bin
total 75680
drwxr-xr-x 4 root root 117 May 12 18:00 assets
-rwxr-xr-x 1 root root 15474864 May 12 18:00 codis-admin
-rwxr-xr-x 1 root root 17093776 May 12 18:00 codis-dashboard
-rwxr-xr-x 1 root root 15365824 May 12 18:00 codis-fe
-rwxr-xr-x 1 root root 19167944 May 12 18:00 codis-proxy    # the proxy
-rwxr-xr-x 1 root root 5357008 May 12 18:00 codis-server    # the Codis-patched redis-server
-rwxr-xr-x 1 root root 2431984 May 12 18:00 redis-benchmark
-rwxr-xr-x 1 root root 2586040 May 12 18:00 redis-cli
-rw-r--r-- 1 root root 169 May 12 18:00 version
[root@node1 codis]# cat bin/version
version = 2017-05-12 17:22:43 +0800 @07352186632fafd45ca31b0cbde4a541862d46fe @3.2-rc1-32-g0735218
compile = 2017-05-12 18:00:10 +0800 by go version go1.7.3 linux/amd64
 

Errors encountered while compiling Codis 3.2:


make[2]: Leaving directory `/usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8/src'
make[1]: Leaving directory `/usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8'
autoconf
./autogen.sh: line 5: autoconf: command not found
Error 0 in autoconf
make[2]: *** [config] Error 1
make[1]: *** [build] Error 2
make: *** [codis-deps] Error 2
 
Fix: install the missing build dependencies:
[root@node1 codis]# yum install autoconf automake libtool -y


1.2: On startup the dashboard reads config/dashboard.toml from the config directory; edit it as follows:
1.2.1: The dashboard configuration file:


vim /usr/local/go/work/src/github.com/CodisLabs/codis/config/dashboard.toml


Modify it to the following:
[root@redis1 codis]# more config/dashboard.toml


##################################################
#                                                #
#                  Codis-Dashboard               #
#                                                #
##################################################


# Set Coordinator, only accept "zookeeper" & "etcd" & "filesystem".
# Quick Start
#coordinator_name = "filesystem"
#coordinator_addr = "/tmp/codis"
coordinator_name = "zookeeper"
coordinator_addr = "192.168.4.70:2181,192.168.4.71:2181,192.168.4.72:2181"
product_name = "codis-chinasoft"


# Set Codis Product Name/Auth.
#product_name = "codis-demo"
product_auth = ""


# Set bind address for admin(rpc), tcp only.
admin_addr = "192.168.4.70:18080"


# Set arguments for data migration (only accept 'sync' & 'semi-async').
migration_method = "semi-async"
migration_parallel_slots = 100
migration_async_maxbulks = 200
migration_async_maxbytes = "32mb"
migration_async_numkeys = 500
migration_timeout = "30s"


# Set configs for redis sentinel.
sentinel_client_timeout = "10s"
sentinel_quorum = 2
sentinel_parallel_syncs = 1
sentinel_down_after = "30s"
sentinel_failover_timeout = "5m"
sentinel_notification_script = ""
sentinel_client_reconfig_script = ""

The key settings are:


coordinator_name = "zookeeper"
coordinator_addr = "192.168.3.198:2181,192.168.3.198:2182,192.168.3.198:2183"
product_name = "codis-chinasoft"

Start the dashboard:


nohup ./bin/codis-dashboard --ncpu=1 --config=config/dashboard.toml --log=dashboard.log --log-level=WARN >> /var/log/codis_dashboard.log &

Edit proxy.toml:


[root@redis1 codis]# vi config/proxy.toml 


##################################################
#                                                #
#                  Codis-Proxy                   #
#                                                #
##################################################


# Set Codis Product Name/Auth.
product_name = "codis-chinasoft"
product_auth = ""


# Set auth for client session
#   1. product_auth is used for auth validation among codis-dashboard,
#      codis-proxy and codis-server.
#   2. session_auth is different from product_auth, it requires clients
#      to issue AUTH <PASSWORD> before processing any other commands.
session_auth = ""


# Set bind address for admin(rpc), tcp only.
admin_addr = "192.168.4.70:11080"


# Set bind address for proxy, proto_type can be "tcp", "tcp4", "tcp6", "unix" or "unixpacket".
proto_type = "tcp4"
proxy_addr = "0.0.0.0:19000"


# Set jodis address & session timeout
#   1. jodis_name is short for jodis_coordinator_name, only accept "zookeeper" & "etcd".
#   2. jodis_addr is short for jodis_coordinator_addr
#   3. proxy will be registered as node:
#        if jodis_compatible = true (not suggested):
#          /zk/codis/db_{PRODUCT_NAME}/proxy-{HASHID} (compatible with Codis2.0)
#        or else
#          /jodis/{PRODUCT_NAME}/proxy-{HASHID}
jodis_name = "zookeeper"
jodis_addr = "192.168.4.70:2181,192.168.4.71:2181,192.168.4.72:2181"
jodis_timeout = "20s"
jodis_compatible = true


# Set datacenter of proxy.
proxy_datacenter = ""


# Set max number of alive sessions.
backend_keepalive_period = "75s"


# Set number of databases of backend.
backend_number_databases = 16


# If there is no request from client for a long time, the connection will be closed. (0 to disable)
# Set session recv buffer size & timeout.
session_recv_bufsize = "128kb"
session_recv_timeout = "30m"


# Set session send buffer size & timeout.
session_send_bufsize = "64kb"
session_send_timeout = "30s"


# Make sure this is higher than the max number of requests for each pipeline request, or your client may be blocked.
# Set session pipeline buffer size.
session_max_pipeline = 10000


# Set session tcp keepalive period. (0 to disable)
session_keepalive_period = "75s"


# Set session to be sensitive to failures. Default is false, instead of closing socket, proxy will send an error response to client.
session_break_on_failure = false


# Set metrics server (such as http://localhost:28000), proxy will report json formatted metrics to specified server in a predefined period.
metrics_report_server = ""
metrics_report_period = "1s"


# Set influxdb server (such as http://localhost:8086), proxy will report metrics to influxdb.
metrics_report_influxdb_server = ""
metrics_report_influxdb_period = "1s"
metrics_report_influxdb_username = ""
metrics_report_influxdb_password = ""
metrics_report_influxdb_database = ""


# Set statsd server (such as localhost:8125), proxy will report metrics to statsd.
metrics_report_statsd_server = ""
metrics_report_statsd_period = "1s"
metrics_report_statsd_prefix = ""


The key settings are:


product_name = "codis-chinasoft"
product_auth = ""    # must match product_auth in dashboard.toml


jodis_name = "zookeeper"
jodis_addr = "192.168.3.198:2181,192.168.3.198:2182,192.168.3.198:2183"
jodis_timeout = "20s"
jodis_compatible = true



# To print the default configuration: ./bin/codis-dashboard --default-config | tee dashboard.toml


Start the proxy:


nohup ./bin/codis-proxy --ncpu=1 --config=config/proxy.toml --log=proxy.log --log-level=WARN >> /var/log/codis_proxy.log &


Register each proxy with the dashboard using codis-admin:


./bin/codis-admin --dashboard=192.168.4.70:18080 --create-proxy -x 192.168.4.70:11080


./bin/codis-admin --dashboard=192.168.4.71:18080 --create-proxy -x 192.168.4.71:11080


./bin/codis-admin --dashboard=127.0.0.1:18080 --create-proxy -x 192.168.1.237:11080


Here 127.0.0.1:18080 and 127.0.0.1:11080 are the admin_addr of the dashboard and of the proxy, respectively;


Note: admin_addr should be set to the host's IP or hostname; otherwise strange errors can occur.


Start codis-server, i.e. create the Redis instances (here we create 4 instances, based on the Codis-patched redis-3.2.8, not vanilla Redis):




[root@node1 codis]# mkdir -pv /var/lib/redis_638{1..4}
mkdir: created directory '/var/lib/redis_6381'
mkdir: created directory '/var/lib/redis_6382'
mkdir: created directory '/var/lib/redis_6383'
mkdir: created directory '/var/lib/redis_6384'


[root@node1 redis-3.2.8]# pwd
/usr/local/go/work/src/github.com/CodisLabs/codis/extern/redis-3.2.8
[root@node1 redis-3.2.8]# cp redis.conf /usr/local/go/work/src/github.com/CodisLabs/codis/
[root@node1 redis-3.2.8]# cd /usr/local/go/work/src/github.com/CodisLabs/codis/
 


Edit redis.conf:


pidfile /var/run/redis_6381.pid
port 6381
dbfilename dump_6381.rdb
dir /var/lib/redis_6381
logfile "/tmp/redis_6381.log"
maxmemory 1g # always set maxmemory, otherwise Codis cannot use the instance later
 


[root@node1 codis]# cp redis.conf redis_6381.conf
[root@node1 codis]# cp redis_6381.conf redis_6382.conf 
[root@node1 codis]# cp redis_6381.conf redis_6383.conf 
[root@node1 codis]# cp redis_6381.conf redis_6384.conf
[root@node1 codis]# sed -i 's/6381/6382/g' redis_6382.conf
[root@node1 codis]# sed -i 's/6381/6383/g' redis_6383.conf
[root@node1 codis]# sed -i 's/6381/6384/g' redis_6384.conf
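The cp/sed steps above stamp the port number into each copy of the config. The same can be sketched from a template (the template keys mirror the settings edited in redis.conf above; file writing is commented out so the sketch stays side-effect free):

```python
# Per-instance settings, as edited in redis.conf above
template = """pidfile /var/run/redis_{port}.pid
port {port}
dbfilename dump_{port}.rdb
dir /var/lib/redis_{port}
logfile "/tmp/redis_{port}.log"
maxmemory 1g
"""

def render(port: int) -> str:
    """Render the configuration for one codis-server instance."""
    return template.format(port=port)

for port in range(6381, 6385):
    conf = render(port)
    assert "maxmemory" in conf  # Codis cannot use a server without maxmemory
    # in a real run, write the file out:
    # with open(f"redis_{port}.conf", "w") as f: f.write(conf)
    print(f"redis_{port}.conf rendered")
```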
 


1.2.3: Start the Redis services with codis-server and the redis.conf files. Do not start them with the stock redis-server command; instances started that way cannot be used properly once added to the Codis cluster:


[root@redis1 codis]# ./bin/codis-server ./redis_6381.conf 
[root@redis1 codis]# ./bin/codis-server ./redis_6382.conf 
[root@redis1 codis]# ./bin/codis-server ./redis_6383.conf 
[root@redis1 codis]# ./bin/codis-server ./redis_6384.conf
 


1.2.4: Verify that the Redis services started via codis-server are running:


[root@node1 codis]# ss -tnlp|grep 638*
LISTEN 0 128 127.0.0.1:6381 *:* users:(("codis-server",pid=11726,fd=4))
LISTEN 0 128 127.0.0.1:6382 *:* users:(("codis-server",pid=11733,fd=4))
LISTEN 0 128 127.0.0.1:6383 *:* users:(("codis-server",pid=11738,fd=4))
LISTEN 0 128 127.0.0.1:6384 *:* users:(("codis-server",pid=11743,fd=4))
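Since codis-server speaks the ordinary Redis protocol, any Redis client can talk to these instances (and later to codis-proxy on port 19000) unchanged; that transparency is the point of the proxy design. For illustration only, this is how a client encodes a command in RESP, the wire format both Redis and Codis accept:

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

print(encode_resp("PING"))           # b'*1\r\n$4\r\nPING\r\n'
print(encode_resp("SET", "k", "v"))
```

In practice you would send these bytes over a TCP socket to the proxy and read the reply; a real client library handles that for you.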



Start the codis-fe web interface:
nohup ./bin/codis-fe --ncpu=1 --log=fe.log --log-level=WARN \
--zookeeper=192.168.4.70:2181,192.168.4.71:2181,192.168.4.72:2181 --listen=192.168.4.70:8080 >> /var/log/codis-fe.log &






Finally, open a browser and visit: http://192.168.4.70:8080