MongoDB Sharded Cluster Deployment in Practice (Version 4.2)

>A single Windows machine is used to simulate the sharded cluster deployment, configured as follows:

  • 3 shards
  • each shard is a replica set of three nodes (1 primary, 2 secondaries)
  • 3 config server nodes (configsvr)
  • 1 router node (mongos)

Shard replica set A (three shard nodes forming one replica set): 127.0.0.1:9999   127.0.0.1:10001   127.0.0.1:10002

Shard replica set B (three shard nodes forming one replica set): 127.0.0.1:20000   127.0.0.1:20001   127.0.0.1:20002

Shard replica set C (three shard nodes forming one replica set): 127.0.0.1:30000   127.0.0.1:30001   127.0.0.1:30002

Configsvr (three config server nodes): 127.0.0.1:40000   127.0.0.1:40001   127.0.0.1:40002

mongos (one router node): 127.0.0.1:50000

 

>Directory layout

In the installation directory, create the data and log directories:

Data file directories:

data/a/r0、data/a/r1、data/a/r2

data/b/r0、data/b/r1、data/b/r2

data/c/r0、data/c/r1、data/c/r2

data/configsvr/r0、data/configsvr/r1、data/configsvr/r2

Log file directories:

logs/a、logs/b、logs/c、logs/configsvr
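
For example, the whole tree can be created from a single command prompt; this is a sketch assuming the install path d:\MongoDB\Server\4.2 used by every command in this article:

cd /d d:\MongoDB\Server\4.2
for %s in (a b c configsvr) do for %r in (r0 r1 r2) do mkdir data\%s\%r
for %s in (a b c configsvr) do mkdir logs\%s

(mkdir creates intermediate directories automatically when cmd.exe command extensions are enabled, which is the default.)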

 

>Create the shards and replica sets

1) Configure the first group:

From a command prompt, change into the d:/MongoDB/Server/4.2/bin directory and run each of the following commands:

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/a/r0.log --logappend --dbpath d:/MongoDB/Server/4.2/data/a/r0 --port 9999 --shardsvr --replSet setA   --oplogSize 64

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/a/r1.log --logappend --dbpath d:/MongoDB/Server/4.2/data/a/r1 --port 10001 --shardsvr --replSet setA  --oplogSize 64

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/a/r2.log --logappend --dbpath d:/MongoDB/Server/4.2/data/a/r2 --port 10002 --shardsvr --replSet setA  --oplogSize 64
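
Each mongod takes over its console window; if you prefer launching all three from one prompt, a small convenience sketch (not part of the original steps) uses start to give each member its own window:

cd /d d:\MongoDB\Server\4.2\bin
start "setA-r0" mongod.exe --logpath d:/MongoDB/Server/4.2/logs/a/r0.log --logappend --dbpath d:/MongoDB/Server/4.2/data/a/r0 --port 9999 --shardsvr --replSet setA --oplogSize 64
start "setA-r1" mongod.exe --logpath d:/MongoDB/Server/4.2/logs/a/r1.log --logappend --dbpath d:/MongoDB/Server/4.2/data/a/r1 --port 10001 --shardsvr --replSet setA --oplogSize 64
start "setA-r2" mongod.exe --logpath d:/MongoDB/Server/4.2/logs/a/r2.log --logappend --dbpath d:/MongoDB/Server/4.2/data/a/r2 --port 10002 --shardsvr --replSet setA --oplogSize 64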

After the shard nodes above are running, use the mongo shell to initialize the replica set.

Open a command prompt and run mongo --port 9999 (this assumes MongoDB's bin directory has been added to the PATH environment variable).

>config={_id:'setA',members:[{_id:0,host:'127.0.0.1:9999'},{_id:1,host:'127.0.0.1:10001'},{_id:2,host:'127.0.0.1:10002'}]}

> rs.initiate(config);

Once the configuration is done, you can check the replica set status with:

rs.status(), rs.isMaster(), and so on. Sample output is shown below; it is for reference only and can be skipped.

You can also stop a member to observe automatic failover; that is not demonstrated step by step here, but a minimal sketch follows the status output below.

setA:PRIMARY> rs.status()
{
        "set" : "setA",
        "date" : ISODate("2019-10-10T06:47:27.962Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1570690043, 1),
                        "t" : NumberLong(1)
                },
                "lastCommittedWallTime" : ISODate("2019-10-10T06:47:23.390Z"),
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1570690043, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityWallTime" : ISODate("2019-10-10T06:47:23.390Z"),
                "appliedOpTime" : {
                        "ts" : Timestamp(1570690043, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1570690043, 1),
                        "t" : NumberLong(1)
                },
                "lastAppliedWallTime" : ISODate("2019-10-10T06:47:23.390Z"),
                "lastDurableWallTime" : ISODate("2019-10-10T06:47:23.390Z")
        },
        "lastStableRecoveryTimestamp" : Timestamp(1570690033, 1),
        "lastStableCheckpointTimestamp" : Timestamp(1570690033, 1),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "127.0.0.1:9999",
                        "ip" : "127.0.0.1",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 10685,
                        "optime" : {
                                "ts" : Timestamp(1570690043, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-10-10T06:47:23Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1570679979, 1),
                        "electionDate" : ISODate("2019-10-10T03:59:39Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "127.0.0.1:10001",
                        "ip" : "127.0.0.1",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 10079,
                        "optime" : {
                                "ts" : Timestamp(1570690043, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1570690043, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-10-10T06:47:23Z"),
                        "optimeDurableDate" : ISODate("2019-10-10T06:47:23Z"),
                        "lastHeartbeat" : ISODate("2019-10-10T06:47:25.991Z"),
                        "lastHeartbeatRecv" : ISODate("2019-10-10T06:47:26.587Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "127.0.0.1:9999",
                        "syncSourceHost" : "127.0.0.1:9999",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "127.0.0.1:10002",
                        "ip" : "127.0.0.1",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 10079,
                        "optime" : {
                                "ts" : Timestamp(1570690043, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1570690043, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-10-10T06:47:23Z"),
                        "optimeDurableDate" : ISODate("2019-10-10T06:47:23Z"),
                        "lastHeartbeat" : ISODate("2019-10-10T06:47:25.991Z"),
                        "lastHeartbeatRecv" : ISODate("2019-10-10T06:47:26.614Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "127.0.0.1:9999",
                        "syncSourceHost" : "127.0.0.1:9999",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1570690043, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1570690043, 1)
}
setA:PRIMARY> rs.isMaster()
{
        "hosts" : [
                "127.0.0.1:9999",
                "127.0.0.1:10001",
                "127.0.0.1:10002"
        ],
        "setName" : "setA",
        "setVersion" : 1,
        "ismaster" : true,
        "secondary" : false,
        "primary" : "127.0.0.1:9999",
        "me" : "127.0.0.1:9999",
        "electionId" : ObjectId("7fffffff0000000000000001"),
        "lastWrite" : {
                "opTime" : {
                        "ts" : Timestamp(1570690223, 1),
                        "t" : NumberLong(1)
                },
                "lastWriteDate" : ISODate("2019-10-10T06:50:23Z"),
                "majorityOpTime" : {
                        "ts" : Timestamp(1570690223, 1),
                        "t" : NumberLong(1)
                },
                "majorityWriteDate" : ISODate("2019-10-10T06:50:23Z")
        },
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "maxWriteBatchSize" : 100000,
        "localTime" : ISODate("2019-10-10T06:50:29.611Z"),
        "logicalSessionTimeoutMinutes" : 30,
        "connectionId" : 19,
        "minWireVersion" : 0,
        "maxWireVersion" : 8,
        "readOnly" : false,
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1570690223, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1570690223, 1)
}
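
A minimal failover sketch (not run as part of this walkthrough): from a shell connected to the current primary, use the standard rs.stepDown() helper to force an election and watch a secondary take over.

rs.stepDown(60)
// the primary yields its role for 60 seconds, triggering an election among the secondaries
rs.isMaster().primary
// re-run after a few seconds: it should now report 127.0.0.1:10001 or 127.0.0.1:10002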

 

2) Configure the second group:

From a command prompt, change into the d:/MongoDB/Server/4.2/bin directory and run each of the following commands:

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/b/r0.log --logappend --dbpath d:/MongoDB/Server/4.2/data/b/r0 --port 20000 --shardsvr --replSet setB  --oplogSize 64

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/b/r1.log --logappend --dbpath d:/MongoDB/Server/4.2/data/b/r1 --port 20001 --shardsvr --replSet setB  --oplogSize 64

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/b/r2.log --logappend --dbpath d:/MongoDB/Server/4.2/data/b/r2 --port 20002 --shardsvr --replSet setB  --oplogSize 64

After the shard nodes above are running, use the mongo shell to initialize the replica set.

Open a command prompt and run mongo --port 20000 (again assuming MongoDB's bin directory is on the PATH).

>config={_id:'setB',members:[{_id:0,host:'127.0.0.1:20000'},{_id:1,host:'127.0.0.1:20001'},{_id:2,host:'127.0.0.1:20002'}]}

> rs.initiate(config);

3) Configure the third group:

From a command prompt, change into the d:/MongoDB/Server/4.2/bin directory and run each of the following commands:

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/c/r0.log --logappend --dbpath d:/MongoDB/Server/4.2/data/c/r0 --port 30000 --shardsvr --replSet setC   --oplogSize 64

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/c/r1.log --logappend --dbpath d:/MongoDB/Server/4.2/data/c/r1 --port 30001 --shardsvr --replSet setC   --oplogSize 64

mongod.exe --logpath d:/MongoDB/Server/4.2/logs/c/r2.log --logappend --dbpath d:/MongoDB/Server/4.2/data/c/r2 --port 30002 --shardsvr --replSet setC   --oplogSize 64

After the shard nodes above are running, use the mongo shell to initialize the replica set.

Open a command prompt and run mongo --port 30000 (again assuming MongoDB's bin directory is on the PATH).

>config={_id:'setC',members:[{_id:0,host:'127.0.0.1:30000'},{_id:1,host:'127.0.0.1:30001'},{_id:2,host:'127.0.0.1:30002'}]}

> rs.initiate(config);

 

>Start the three config server nodes (configsvr)

From the command line, run the following commands to start the three config servers:

cd d:/MongoDB/Server/4.2/bin

call mongod.exe --configsvr --replSet cfgReplSet --logpath d:/MongoDB/Server/4.2/logs/configsvr/r0.log --logappend --dbpath d:/MongoDB/Server/4.2/data/configsvr/r0 --port 40000

cd d:/MongoDB/Server/4.2/bin

call mongod.exe --configsvr --replSet cfgReplSet --logpath d:/MongoDB/Server/4.2/logs/configsvr/r1.log --logappend --dbpath d:/MongoDB/Server/4.2/data/configsvr/r1 --port 40001

cd d:/MongoDB/Server/4.2/bin

call mongod.exe --configsvr --replSet cfgReplSet --logpath d:/MongoDB/Server/4.2/logs/configsvr/r2.log --logappend --dbpath d:/MongoDB/Server/4.2/data/configsvr/r2 --port 40002

Note: the --replSet cfgReplSet parameter is required as of MongoDB 3.4, because from that version on the config servers must themselves form a replica set.

Initialize the config server nodes as a replica set as well (connect to one of them, e.g. mongo --port 40000, and run):

rs.initiate({_id:"cfgReplSet",configsvr:true,members:[{_id:0,host:'127.0.0.1:40000'},{_id:1,host:'127.0.0.1:40001'},{_id:2,host:'127.0.0.1:40002'}]})
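
To verify, connect to any config server node and inspect a couple of fields of the status document; the expected values follow from the configuration above:

rs.status().set
// expect "cfgReplSet"
rs.status().members.length
// expect 3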

 

>Start a router node:

Note: in versions after 3.4, if the config servers are not configured as a replica set but still run in the MongoDB 3.2 mirrored mode, mongos will fail to start with an error.

call mongos.exe --configdb cfgReplSet/127.0.0.1:40000,127.0.0.1:40001,127.0.0.1:40002 --logpath  d:/MongoDB/Server/4.2/logs/mongos.log --logappend --port 50000

 

>On the newly started router, configure the shards (connect to it first with mongo --port 50000):

> use admin

>db.runCommand({addshard:"setA/127.0.0.1:9999,127.0.0.1:10001,127.0.0.1:10002",name:"ShardSetA"})

>db.runCommand({addshard:"setB/127.0.0.1:20000,127.0.0.1:20001,127.0.0.1:20002",name:"ShardSetB"})

>db.runCommand({addshard:"setC/127.0.0.1:30000,127.0.0.1:30001,127.0.0.1:30002",name:"ShardSetC"})

> printShardingStatus()

With that, the cluster is fully built.

The remaining work is to create a database and a collection, and set a shard key before loading data; a sketch is given at the end of this article.

You can inspect the resulting configuration in detail; the sample output below is from this run and can likewise be skipped.

mongos>  printShardingStatus()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5d9ecee2a77cc1d708f69a8a")
  }
  shards:
        {  "_id" : "ShardSetA",  "host" : "setA/127.0.0.1:9999,127.0.0.1:10001,127.0.0.1:10002",  "state" : 1 }
        {  "_id" : "ShardSetB",  "host" : "setB/127.0.0.1:20000,127.0.0.1:20001,127.0.0.1:20002",  "state" : 1 }
        {  "_id" : "ShardSetC",  "host" : "setC/127.0.0.1:30000,127.0.0.1:30001,127.0.0.1:30002",  "state" : 1 }
  active mongoses:
        "4.2.0" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                ShardSetA       1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : ShardSetA Timestamp(1, 0) 

 

>A few more notes on the shard key mentioned above:

What is sharding? Sharding stores data across multiple machines. When a data set outgrows a single server, the server's memory and disk I/O hit their limits, i.e. the performance ceiling of a single machine. At that point there are two options: vertical scaling and horizontal scaling (sharding).

Vertical scaling means adding CPU and storage, but in high-performance systems cost grows out of proportion to CPU and capacity, so this approach is expensive and has a hard upper bound.

Horizontal scaling (sharding) distributes the data across multiple servers. Each server is a standalone database, and together they form one logical database; write pressure and operations are spread across the servers, raising capacity and throughput.

MongoDB documents are schemaless, with no fixed structure, so only horizontal partitioning is possible. When a chunk exceeds the configured size or its document count exceeds the maximum, MongoDB attempts to split it; if the split fails (for example, because every document shares the same shard key value), the chunk is marked as a jumbo chunk so it is not repeatedly re-split. The key to splitting is the shard key: a document field or a compound-index field that cannot be changed once chosen. The shard key determines how data is partitioned, and its choice directly affects cluster performance.

MongoDB first partitions data into chunks by shard key; when a chunk exceeds the configured size (64 MB by default), it is split and chunks are migrated to other shards.

Note: the shard key also serves as a frequently used query index. Because the criteria for choosing a shard key resemble those for choosing index keys, in practice the shard key is usually a key that is indexed anyway.

>Deployment example
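
As a sketch of the follow-up steps mentioned earlier (the database testdb, collection users, and field uid are hypothetical names chosen for illustration), connect to the mongos and enable sharding with a shard key:

mongo --port 50000

sh.enableSharding("testdb")
// shard the collection on a hashed key so writes spread evenly across ShardSetA/B/C
sh.shardCollection("testdb.users", { uid: "hashed" })
// sh.status() prints the same report as printShardingStatus()
sh.status()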
