MongoDB Sharded Cluster Setup and Testing

I won't explain here what a MongoDB sharded cluster is; there are plenty of write-ups online.

Since this is a test with limited resources, all of the nodes were set up on a single server; in a real production environment the config servers, mongos routers, and shards should each be installed on separate machines. The cluster built here has 12 nodes in total, separated by port: 3 mongos routers, a 3-node config server replica set, and 2 shard replica sets of 3 nodes each (2 data-bearing members plus 1 arbiter). Version used: 4.0.18.

Host          | Ports             | Role          | Replica set / cluster role
172.100.1.35  | 27017,37017,47017 | mongos        | -
172.100.1.35  | 27019,37019,47019 | config server | cs0/configsvr
172.100.1.35  | 27018,37018,47018 | shardA        | shardA/shardsvr
172.100.1.35  | 27016,37016,47016 | shardB        | shardB/shardsvr

Without further ado, let's get started.

Below are the MongoDB node directories used for this test environment:

[root@cpe-172-100-1-35 jsunicom]# ll
total 0
drwxr-xr-x  6 root  root  203 Jun  2 16:10 mongodb_config_27019
drwxr-xr-x  6 root  root  203 Jun  2 11:28 mongodb_config_37019
drwxr-xr-x  6 root  root  203 Jun  2 11:28 mongodb_config_47019
drwxr-xr-x  6 root  root  198 Jun  2 14:31 mongodb_mongos1
drwxr-xr-x  6 root  root  198 Jun  2 14:31 mongodb_mongos2
drwxr-xr-x  6 root  root  198 Jun  2 14:30 mongodb_mongos3
drwxr-xr-x  6 root  root  199 Jun  2 14:38 mongodb_shardA_1
drwxr-xr-x  6 root  root  199 Jun  2 14:38 mongodb_shardA_2
drwxr-xr-x  6 root  root  199 Jun  2 14:02 mongodb_shardA_3
drwxr-xr-x  6 root  root  199 Jun  2 14:22 mongodb_shardB_1
drwxr-xr-x  6 root  root  199 Jun  2 14:22 mongodb_shardB_2
drwxr-xr-x  6 root  root  199 Jun  2 14:22 mongodb_shardB_3
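The config files that follow expect data, logs, and tmp subdirectories under every node directory. A minimal sketch to create the whole skeleton up front (the loop itself is my own addition; the paths follow the layout above):

for d in mongodb_config_{27019,37019,47019} mongodb_mongos{1,2,3} mongodb_shard{A,B}_{1,2,3}; do
    mkdir -p /webapp/jsunicom/$d/{data,logs,tmp}   # data files, logs, pid file
done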

First, configure the config servers (configsvr).

Create the corresponding configuration file under each of mongodb_config_27019, mongodb_config_37019, and mongodb_config_47019. Mine are named mogondb_config_27019.conf, mogondb_config_37019.conf, and mogondb_config_47019.conf.

Edit the configuration file; taking the mongodb_config_27019 node as the example, add the following content:


[root@cpe-172-100-1-35 mongodb_config_27019]# cat mogondb_config_27019.conf
systemLog:
   traceAllExceptions: false
   path: /webapp/jsunicom/mongodb_config_27019/logs/mongod.log
   logAppend: true
   destination: file
   timeStampFormat: ctime
processManagement:
   fork: true
   pidFilePath: /webapp/jsunicom/mongodb_config_27019/tmp/mongod.pid
net:
   port: 27019
   bindIp: 172.100.1.35
   maxIncomingConnections: 20000
storage:
   dbPath: /webapp/jsunicom/mongodb_config_27019/data
   directoryPerDB: true
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1
         directoryForIndexes: true
operationProfiling:
   slowOpThresholdMs: 1000
replication:
   oplogSizeMB: 2048
   replSetName: cs0
sharding:
   clusterRole: configsvr

Note: the other two nodes are configured identically apart from the port number and paths.
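Since only the port and the paths change, the other two files can be generated rather than hand-edited. A sketch, assuming every occurrence of 27019 in the file (the port line and the paths alike) should become the new port number:

for port in 37019 47019; do
    sed "s/27019/$port/g" /webapp/jsunicom/mongodb_config_27019/mogondb_config_27019.conf \
        > /webapp/jsunicom/mongodb_config_$port/mogondb_config_$port.conf
done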

With the configuration in place, start the three config nodes:

[root@cpe-172-100-1-35 ~]#  /webapp/jsunicom/mongodb_config_27019/bin/mongod -f /webapp/jsunicom/mongodb_config_27019/mogondb_config_27019.conf

[root@cpe-172-100-1-35 ~]#  /webapp/jsunicom/mongodb_config_37019/bin/mongod -f /webapp/jsunicom/mongodb_config_37019/mogondb_config_37019.conf

[root@cpe-172-100-1-35 ~]#  /webapp/jsunicom/mongodb_config_47019/bin/mongod -f /webapp/jsunicom/mongodb_config_47019/mogondb_config_47019.conf

Once they are up, log in to any one of the three config nodes and initialize the replica set:

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_config_27019/bin/mongo --host 172.100.1.35 --port 27019

Run the following commands:
use admin

rs.initiate(
  {
    _id: "cs0",
    configsvr: true,
    members: [
      { _id : 0, host : "172.100.1.35:27019" },
      { _id : 1, host : "172.100.1.35:37019" },
      { _id : 2, host : "172.100.1.35:47019" }
    ]
  }
)

After it succeeds, be sure to verify the initialization with the following command:
rs.status()

A "health" : 1 value for every member indicates the initialization succeeded.
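Rather than scanning the whole rs.status() document by eye, the member health can be printed directly in the mongo shell; a small sketch:

rs.status().members.forEach(function (m) {
    // one line per member: host, health (1 = up), and current state
    print(m.name + "  health=" + m.health + "  state=" + m.stateStr)
})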

That completes the config server setup.

Next, configure shardA and shardB.

shardA configuration:

Create the corresponding configuration file under each of mongodb_shardA_1, mongodb_shardA_2, and mongodb_shardA_3.

Mine are named mongodb_shardA_1.conf, mongodb_shardA_2.conf, and mongodb_shardA_3.conf.

Edit the configuration file, again taking mongodb_shardA_1.conf as the example, and add the following content:

[root@cpe-172-100-1-35 mongodb_shardA_1]# cat mongodb_shardA_1.conf 
systemLog:
   traceAllExceptions: false
   path: /webapp/jsunicom/mongodb_shardA_1/logs/mongod.log
   logAppend: true
   destination: file
   timeStampFormat: ctime
processManagement:
   fork: true
   pidFilePath: /webapp/jsunicom/mongodb_shardA_1/tmp/mongod.pid
net:
   port: 27018
   bindIp: 172.100.1.35
   maxIncomingConnections: 20000
storage:
   dbPath: /webapp/jsunicom/mongodb_shardA_1/data
   directoryPerDB: true
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1
         directoryForIndexes: true
operationProfiling:
   slowOpThresholdMs: 1000
replication:
   oplogSizeMB: 3072
   replSetName: shardA
sharding:
   clusterRole: shardsvr
setParameter:
   connPoolMaxShardedInUseConnsPerHost: 100
   shardedConnPoolIdleTimeoutMinutes: 10
   connPoolMaxInUseConnsPerHost: 100
   globalConnPoolIdleTimeoutMinutes: 10
   maxIndexBuildMemoryUsageMegabytes: 2048

Note: the other two nodes are configured identically apart from the port number and paths (the same substitution trick shown for the config servers works here too).

shardB configuration:

The shardB configuration is identical to shardA's; just mind the ports and paths.

Again, create the corresponding configuration file under each of mongodb_shardB_1, mongodb_shardB_2, and mongodb_shardB_3.

Mine are named mongodb_shardB_1.conf, mongodb_shardB_2.conf, and mongodb_shardB_3.conf.

Edit the configuration file, taking mongodb_shardB_1.conf as the example, and add the following content:

[root@cpe-172-100-1-35 mongodb_shardB_1]# cat mongodb_shardB_1.conf 
systemLog:
   traceAllExceptions: false
   path: /webapp/jsunicom/mongodb_shardB_1/logs/mongod.log
   logAppend: true
   destination: file
   timeStampFormat: ctime
processManagement:
   fork: true
   pidFilePath: /webapp/jsunicom/mongodb_shardB_1/tmp/mongod.pid
net:
   port: 27016
   bindIp: 172.100.1.35
   maxIncomingConnections: 20000
storage:
   dbPath: /webapp/jsunicom/mongodb_shardB_1/data
   directoryPerDB: true
   wiredTiger:
      engineConfig:
         cacheSizeGB: 1
         directoryForIndexes: true
operationProfiling:
   slowOpThresholdMs: 1000
replication:
   oplogSizeMB: 3072
   replSetName: shardB
sharding:
   clusterRole: shardsvr
setParameter:
   connPoolMaxShardedInUseConnsPerHost: 100
   shardedConnPoolIdleTimeoutMinutes: 10
   connPoolMaxInUseConnsPerHost: 100
   globalConnPoolIdleTimeoutMinutes: 10
   maxIndexBuildMemoryUsageMegabytes: 2048

Note: the other two nodes are configured identically apart from the port number and paths.

Once configured, start the three shardA nodes and the three shardB nodes:

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_shardA_1/bin/mongod -f /webapp/jsunicom/mongodb_shardA_1/mongodb_shardA_1.conf


[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_shardA_2/bin/mongod -f /webapp/jsunicom/mongodb_shardA_2/mongodb_shardA_2.conf


[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_shardA_3/bin/mongod -f /webapp/jsunicom/mongodb_shardA_3/mongodb_shardA_3.conf


[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_shardB_1/bin/mongod -f /webapp/jsunicom/mongodb_shardB_1/mongodb_shardB_1.conf


[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_shardB_2/bin/mongod -f /webapp/jsunicom/mongodb_shardB_2/mongodb_shardB_2.conf


[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_shardB_3/bin/mongod -f /webapp/jsunicom/mongodb_shardB_3/mongodb_shardB_3.conf
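Equivalently, all six shard nodes can be started with one loop; a sketch reusing the same paths as above:

for n in shardA_1 shardA_2 shardA_3 shardB_1 shardB_2 shardB_3; do
    /webapp/jsunicom/mongodb_$n/bin/mongod -f /webapp/jsunicom/mongodb_$n/mongodb_$n.conf
done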

Next, initialize the replica set configuration for each shard.

shardA:

Log in to any one of the three shardA nodes:

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_config_27019/bin/mongo --host 172.100.1.35 --port 27018

Once connected, run the following commands:
use admin

rs.initiate(
  {
    _id : "shardA",
    members: [
      { _id : 0, host : "172.100.1.35:27018" },
      { _id : 1, host : "172.100.1.35:37018" },
      { _id : 2, host : "172.100.1.35:47018",arbiterOnly: true }
    ]
  }
)

Check that the initialization succeeded in the same way:
rs.status()

"health" : 1 for every member means success; the third member will report "stateStr" : "ARBITER", since it only votes in elections and holds no data.

shardB:

Log in to any one of the three shardB nodes:

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_config_27019/bin/mongo --host 172.100.1.35 --port 27016

Once connected, run the following to initialize the replica set:
use admin

rs.initiate(
  {
    _id : "shardB",
    members: [
      { _id : 0, host : "172.100.1.35:27016" },
      { _id : 1, host : "172.100.1.35:37016" },
      { _id : 2, host : "172.100.1.35:47016",arbiterOnly: true }
    ]
  }
)

Again, use rs.status() to confirm the initialization succeeded; "health" : 1 for every member means success.

That completes the shardA and shardB shard configuration.

Finally, configure mongos.

mongos is also deployed as 3 nodes here.

Create mongodb_mongos1.conf, mongodb_mongos2.conf, and mongodb_mongos3.conf under mongodb_mongos1, mongodb_mongos2, and mongodb_mongos3 respectively.

Edit the configuration file, again taking mongodb_mongos1.conf as the example, and add the following content:

[root@cpe-172-100-1-35 mongodb_mongos1]# cat mongodb_mongos1.conf 
systemLog:
   traceAllExceptions: false
   path: /webapp/jsunicom/mongodb_mongos1/logs/mongos.log
   logAppend: true
   destination: file
   timeStampFormat: ctime
processManagement:
   fork: true
   pidFilePath: /webapp/jsunicom/mongodb_mongos1/tmp/mongos.pid
net:
   port: 27017
   bindIp: 172.100.1.35
   maxIncomingConnections: 20000
operationProfiling:
   slowOpThresholdMs: 1000
replication:
   localPingThresholdMs: 300
sharding:
   configDB: cs0/172.100.1.35:27019,172.100.1.35:37019,172.100.1.35:47019
setParameter:
   connPoolMaxShardedInUseConnsPerHost: 100
   shardedConnPoolIdleTimeoutMinutes: 10
   connPoolMaxInUseConnsPerHost: 100
   globalConnPoolIdleTimeoutMinutes: 10

Note: the other two nodes are configured identically apart from the port number and paths. Also note that mongos has no storage section: it stores no data itself and reads the cluster metadata from the config server replica set named in configDB.

Once the configuration files are in place, start the three nodes:

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_mongos1/bin/mongos -f /webapp/jsunicom/mongodb_mongos1/mongodb_mongos1.conf

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_mongos2/bin/mongos -f /webapp/jsunicom/mongodb_mongos2/mongodb_mongos2.conf

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_mongos3/bin/mongos -f /webapp/jsunicom/mongodb_mongos3/mongodb_mongos3.conf

Log in to any one of the three mongos nodes and add the two shard replica sets to the cluster:

[root@cpe-172-100-1-35 ~]# /webapp/jsunicom/mongodb_config_27019/bin/mongo --host 172.100.1.35 --port 27017

Once connected, run the following commands:
use admin

sh.addShard("shardA/172.100.1.35:27018,172.100.1.35:37018,172.100.1.35:47018")

sh.addShard("shardB/172.100.1.35:27016,172.100.1.35:37016,172.100.1.35:47016")
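Each sh.addShard() call should come back with "shardAdded" and "ok" : 1. The registered shards can also be listed directly as a quick check:

db.adminCommand({ listShards: 1 })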


After both commands return, check the cluster state with sh.status():
mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5ed5e536856e004008317884")
  }
  shards:
        {  "_id" : "shardA",  "host" : "shardA/172.100.1.35:27018,172.100.1.35:37018",  "state" : 1 }
        {  "_id" : "shardB",  "host" : "shardB/172.100.1.35:27016,172.100.1.35:37016",  "state" : 1 }
  active mongoses:
        "4.0.18" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardB  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardB Timestamp(1, 0) 

That completes all of the configuration. Now let's test it.

All of the operations below are executed on one of the mongos nodes.

Enable sharding for the target database:
sh.enableSharding("yuhuashi")

Shard a collection:
sh.shardCollection("yuhuashi.user", { "name" : 1 })              # ranged sharding
sh.shardCollection("yuhuashi.user", { "task_id" : "hashed" } )   # hashed sharding
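One caveat: sh.shardCollection() requires an index on the shard key. On an empty collection mongos creates it for you; on a collection that already holds data you would build it first, e.g.:

db.user.createIndex({ "task_id": "hashed" })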

I went with hashed sharding here:

mongos> sh.shardCollection("yuhuashi.user", { "task_id" : "hashed" } )
{
        "collectionsharded" : "yuhuashi.user",
        "collectionUUID" : UUID("43d3cce7-0225-4dcc-8b7e-aecc7b3932b6"),
        "ok" : 1,
        "operationTime" : Timestamp(1591081873, 27),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1591081873, 27),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}

Use sh.status() again to see the result:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5ed5e536856e004008317884")
  }
  shards:
        {  "_id" : "shardA",  "host" : "shardA/172.100.1.35:27018,172.100.1.35:37018",  "state" : 1 }
        {  "_id" : "shardB",  "host" : "shardB/172.100.1.35:27016,172.100.1.35:37016",  "state" : 1 }
  active mongoses:
        "4.0.18" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shardB  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardB Timestamp(1, 0) 
        {  "_id" : "yuhuashi",  "primary" : "shardA",  "partitioned" : true,  "version" : {  "uuid" : UUID("fb60172b-49e9-4d3a-8052-1e8780b0ff52"),  "lastMod" : 1 } }
                yuhuashi.user
                        shard key: { "task_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shardA  2
                                shardB  2
                        { "task_id" : { "$minKey" : 1 } } -->> { "task_id" : NumberLong("-4611686018427387902") } on : shardA Timestamp(1, 0) 
                        { "task_id" : NumberLong("-4611686018427387902") } -->> { "task_id" : NumberLong(0) } on : shardA Timestamp(1, 1) 
                        { "task_id" : NumberLong(0) } -->> { "task_id" : NumberLong("4611686018427387902") } on : shardB Timestamp(1, 2) 
                        { "task_id" : NumberLong("4611686018427387902") } -->> { "task_id" : { "$maxKey" : 1 } } on : shardB Timestamp(1, 3) 

Note the four pre-split chunks above, two per shard: when an empty collection is sharded on a hashed key, MongoDB pre-splits and distributes the chunks automatically. Now let's insert some test data and take a look:

mongos> use yuhuashi
switched to db yuhuashi
mongos> show tables;
user
mongos> db.user.find().count()
0
mongos> for(i=1;i<=10000;i++){db.user.insert({"task_id":i,"name":"shiyu"+i,"age":i})}
WriteResult({ "nInserted" : 1 })
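Before logging into the shards themselves, the spread of documents is already visible from mongos via the shell helper getShardDistribution():

mongos> db.user.getShardDistribution()

It prints per-shard document counts, data size, and the estimated distribution percentages.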

 

Log in to the shardA and shardB primaries and check how the documents were distributed:

shardA:PRIMARY> use yuhuashi
switched to db yuhuashi
shardA:PRIMARY> db.user.find().count()
7521
shardA:PRIMARY> 


shardB:PRIMARY> use yuhuashi
switched to db yuhuashi
shardB:PRIMARY> db.user.find().count()
2479
shardB:PRIMARY> 
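The two counts add up to the 10,000 inserted documents (7521 + 2479). As a final sanity check, an equality query on the shard key issued through mongos should be routed to a single shard; a sketch (on 4.0 the winning plan for a targeted query reports a SINGLE_SHARD stage):

mongos> db.user.find({ "task_id" : 100 }).explain().queryPlanner.winningPlan.stage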

 

GAME OVER!
