MongoDB Replica Set + Sharded Cluster Deployment, Step by Step
This article covers only the deployment of a MongoDB replica set + sharded cluster; the principles and benefits of MongoDB sharding & replica sets are outside its scope.
A MongoDB sharded cluster requires three roles:
Shard Server: a mongod instance that stores the actual data chunks. In production, each shard role is typically served by a replica set of several machines to avoid a single point of failure.
Config Server: a mongod instance that stores the cluster's metadata, including chunk information.
Route Server: a mongos instance that acts as a front-end router. Clients connect through it, and it makes the whole cluster look like a single database that applications can use transparently.
Machines used in this example:
Three mongod instances, one per machine (called mongod shard11, mongod shard12, mongod shard13), form replica set 1, which serves as shard1 of the cluster.
Three mongod instances, one per machine (called mongod shard21, mongod shard22, mongod shard23), form replica set 2, which serves as shard2 of the cluster.
Each machine also runs one mongod instance acting as one of the 3 config servers.
Each machine runs one mongos process for client connections.
1. Install MongoDB
This walkthrough uses mongodb-linux-x86_64-rhel62-3.0.7.tgz.
Extract mongodb-linux-x86_64-rhel62-3.0.7.tgz into /home/services.
On Server1, Server2 and Server3:
cd /home/services/
tar -zxf mongodb-linux-x86_64-rhel62-3.0.7.tgz
mv mongodb-linux-x86_64-rhel62-3.0.7 mongodb
2. Create the sharding data directories
Following the sharding architecture described above, create the shard data file directories on each server.
Server1:
cd /home/services/mongodb
mkdir -p data/shard11
mkdir -p data/shard21
Server2:
cd /home/services/mongodb
mkdir -p data/shard12
mkdir -p data/shard22
Server3:
cd /home/services/mongodb
mkdir -p data/shard13
mkdir -p data/shard23
3. Configure the replica sets
3.1 Configure replica set 1, used by shard1:
Server1:
./bin/mongod --shardsvr --replSet shard1 --port 27018 --dbpath /home/services/mongodb/data/shard11 --oplogSize 100 --logpath /home/services/mongodb/data/shard11.log --logappend --fork
Server2:
./bin/mongod --shardsvr --replSet shard1 --port 27018 --dbpath /home/services/mongodb/data/shard12 --oplogSize 100 --logpath /home/services/mongodb/data/shard12.log --logappend --fork
Server3:
./bin/mongod --shardsvr --replSet shard1 --port 27018 --dbpath /home/services/mongodb/data/shard13 --oplogSize 100 --logpath /home/services/mongodb/data/shard13.log --logappend --fork
3.2 Initialize replica set 1
Connect to one of the mongod instances:
./bin/mongo --port 27018
Then run the following script:
config = {
    "_id" : "shard1",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.66.10:27018"
        },
        {
            "_id" : 1,
            "host" : "192.168.66.20:27018"
        },
        {
            "_id" : 2,
            "host" : "192.168.66.30:27018"
        }
    ]
}
rs.initiate(config);
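The config document passed to rs.initiate() can also be generated rather than typed by hand. A minimal sketch in plain JavaScript (the helper name replsetConfig is made up for illustration; the set name, hosts and port are the example values above):

```javascript
// Build the replica-set config document used by rs.initiate().
// Member _id values are simply the 0-based position in the host list.
function replsetConfig(name, hosts, port) {
  return {
    _id: name,
    members: hosts.map((h, i) => ({ _id: i, host: h + ":" + port }))
  };
}

const cfg = replsetConfig(
  "shard1",
  ["192.168.66.10", "192.168.66.20", "192.168.66.30"],
  27018
);
console.log(JSON.stringify(cfg));
```

In the mongo shell this would be followed by rs.initiate(cfg), exactly as in the script above.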
3.3 Configure replica set 2, used by shard2:
Server1:
./bin/mongod --shardsvr --replSet shard2 --port 27019 --dbpath /home/services/mongodb/data/shard21 --oplogSize 100 --logpath /home/services/mongodb/data/shard21.log --logappend --fork
Server2:
./bin/mongod --shardsvr --replSet shard2 --port 27019 --dbpath /home/services/mongodb/data/shard22 --oplogSize 100 --logpath /home/services/mongodb/data/shard22.log --logappend --fork
Server3:
./bin/mongod --shardsvr --replSet shard2 --port 27019 --dbpath /home/services/mongodb/data/shard23 --oplogSize 100 --logpath /home/services/mongodb/data/shard23.log --logappend --fork
3.4 Initialize replica set 2
Connect to one of the mongod instances:
./bin/mongo --port 27019
Then run the following script:
config = {
    "_id" : "shard2",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.66.10:27019"
        },
        {
            "_id" : 1,
            "host" : "192.168.66.20:27019"
        },
        {
            "_id" : 2,
            "host" : "192.168.66.30:27019"
        }
    ]
}
rs.initiate(config);
4. Configure the three config servers
Server1:
mkdir -p /home/services/mongodb/data/config
./bin/mongod --configsvr --dbpath /home/services/mongodb/data/config --port 20000 --logpath /home/services/mongodb/data/config.log --logappend --fork
Server2:
mkdir -p /home/services/mongodb/data/config
./bin/mongod --configsvr --dbpath /home/services/mongodb/data/config --port 20000 --logpath /home/services/mongodb/data/config.log --logappend --fork
Server3:
mkdir -p /home/services/mongodb/data/config
./bin/mongod --configsvr --dbpath /home/services/mongodb/data/config --port 20000 --logpath /home/services/mongodb/data/config.log --logappend --fork
5. Start the mongos routers
Run on Server1, Server2 and Server3:
./bin/mongos --configdb 192.168.66.10:20000,192.168.66.20:20000,192.168.66.30:20000 --port 27017 --chunkSize 5 --logpath /home/services/mongodb/data/mongos.log --logappend --fork
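Since every server runs a mongos on port 27017, clients can list all three routers in their connection string and fail over between them. A small sketch (the helper name mongosUri is hypothetical; the hosts are the example IPs from this walkthrough):

```javascript
// Build a driver connection string that lists all three mongos routers.
function mongosUri(hosts, port) {
  return "mongodb://" + hosts.map((h) => h + ":" + port).join(",");
}

const uri = mongosUri(["192.168.66.10", "192.168.66.20", "192.168.66.30"], 27017);
console.log(uri);
```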
6. Configure the sharded cluster
Connect to one of the mongos processes and switch to the admin database for the following configuration.
6.1 Connect to mongos and switch to admin
./bin/mongo 192.168.66.10:27017/admin
db
admin
6.2 Add the shards
If a shard is a single server, add it with a command such as >db.runCommand( { addshard : "<hostname>[:<port>]" } ). If the shard is a replica set, use the format replicaSetName/<serverhostname1>[:port][,serverhostname2[:port],…]. For this example, run:
>db.runCommand( { addshard:"shard1/192.168.66.10:27018,192.168.66.20:27018,192.168.66.30:27018",name:"s1",maxsize:20480});
>db.runCommand( { addshard:"shard2/192.168.66.10:27019,192.168.66.20:27019,192.168.66.30:27019",name:"s2",maxsize:20480});
Notes:
Optional parameters:
name: specifies a name for each shard; if omitted, the system assigns one automatically.
maxSize: limits the disk space each shard may use, in megabytes.
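The addshard documents above all follow the same shape, so they can be assembled programmatically as well. A sketch (the helper name addShardCmd is made up; the values mirror the s1 example):

```javascript
// Build the addshard command document for a replica-set shard.
// maxSizeMB is the optional per-shard disk cap described in the notes above.
function addShardCmd(setName, members, name, maxSizeMB) {
  return {
    addshard: setName + "/" + members.join(","),
    name: name,
    maxsize: maxSizeMB
  };
}

const cmd = addShardCmd(
  "shard1",
  ["192.168.66.10:27018", "192.168.66.20:27018", "192.168.66.30:27018"],
  "s1",
  20480
);
console.log(JSON.stringify(cmd));
```

In the mongos shell, db.runCommand(cmd) would then issue it against the admin database.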
6.3 List the shards
db.runCommand( { listshards : 1 } )
If the two shards you added are both listed, the shards are configured successfully.
The output looks like this:
[root@appsvr mongodb]# ./bin/mongo 192.168.66.10:27017/admin
mongos> db
admin
mongos> db.runCommand( { addshard:"shard1/192.168.66.10:27018,192.168.66.20:27018,192.168.66.30:27018",name:"s1",maxsize:20480});
{ "shardAdded" : "s1", "ok" : 1 }
mongos> db.runCommand( { addshard:"shard2/192.168.66.10:27019,192.168.66.20:27019,192.168.66.30:27019",name:"s2",maxsize:20480});
{ "shardAdded" : "s2", "ok" : 1 }
mongos> db.runCommand( { listshards : 1 } )
{
    "shards" : [
        {
            "_id" : "s1",
            "host" : "shard1/192.168.66.10:27018,192.168.66.20:27018,192.168.66.30:27018"
        },
        {
            "_id" : "s2",
            "host" : "shard2/192.168.66.10:27019,192.168.66.20:27019,192.168.66.30:27019"
        }
    ],
    "ok" : 1
}
mongos>
6.4 Enable sharding for a database
Command:
db.runCommand( { enablesharding : "<dbname>" } );
Running this command allows the database to span shards; without it, the database stays on a single shard. Once sharding is enabled for a database, its collections can be placed on different shards, but each individual collection still lives on a single shard. To shard a single collection as well, some extra configuration is needed.
Sharding a collection
To shard an individual collection, give it a shard key with the following command:
db.runCommand( { shardcollection : "<namespace>", key : <shardkey> } );
Notes:
a. The system automatically creates an index on the shard key of a sharded collection (the user can also create it in advance).
b. A sharded collection can have only one unique index, and it must be on the shard key; other unique indexes are not allowed.
Example:
mongos> db.runCommand({enablesharding:"test2"});
{ "ok" : 1 }
mongos> db.runCommand( { shardcollection : "test2.books", key : { id : 1 } } );
{ "collectionsharded" : "test2.books", "ok" : 1 }
mongos> use test2
switched to db test2
mongos> db.stats();
{
    "raw" : {
        "shard1/192.168.66.10:27018,192.168.66.20:27018,192.168.66.30:27018" : {
            "db" : "test2",
            "collections" : 3,
            "objects" : 6,
            "avgObjSize" : 69.33333333333333,
            "dataSize" : 416,
            "storageSize" : 20480,
            "numExtents" : 3,
            "indexes" : 2,
            "indexSize" : 16352,
            "fileSize" : 67108864,
            "nsSizeMB" : 16,
            "extentFreeList" : {
                "num" : 0,
                "totalSize" : 0
            },
            "dataFileVersion" : {
                "major" : 4,
                "minor" : 22
            },
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("586286596422d63aa9f9f000")
            }
        },
        "shard2/192.168.66.10:27019,192.168.66.20:27019,192.168.66.30:27019" : {
            "db" : "test2",
            "collections" : 0,
            "objects" : 0,
            "avgObjSize" : 0,
            "dataSize" : 0,
            "storageSize" : 0,
            "numExtents" : 0,
            "indexes" : 0,
            "indexSize" : 0,
            "fileSize" : 0,
            "ok" : 1
        }
    },
    "objects" : 6,
    "avgObjSize" : 69,
    "dataSize" : 416,
    "storageSize" : 20480,
    "numExtents" : 3,
    "indexes" : 2,
    "indexSize" : 16352,
    "fileSize" : 67108864,
    "extentFreeList" : {
        "num" : 0,
        "totalSize" : 0
    },
    "ok" : 1
}
mongos> db.books.stats();
{
    "sharded" : true,
    "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
    "userFlags" : 1,
    "capped" : false,
    "ns" : "test2.books",
    "count" : 0,
    "numExtents" : 1,
    "size" : 0,
    "storageSize" : 8192,
    "totalIndexSize" : 16352,
    "indexSizes" : {
        "_id_" : 8176,
        "id_1" : 8176
    },
    "avgObjSize" : 0,
    "nindexes" : 2,
    "nchunks" : 1,
    "shards" : {
        "s1" : {
            "ns" : "test2.books",
            "count" : 0,
            "size" : 0,
            "numExtents" : 1,
            "storageSize" : 8192,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 1,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 16352,
            "indexSizes" : {
                "_id_" : 8176,
                "id_1" : 8176
            },
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("586286596422d63aa9f9f000")
            }
        }
    },
    "ok" : 1
}
7. Test
mongos> for (var i = 1; i <= 20000; i++) db.books.save({id:i,name:"12345678",sex:"male",age:27,value:"test"});
WriteResult({ "nInserted" : 1 })
mongos> db.books.stats();
{
    "sharded" : true,
    "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
    "userFlags" : 1,
    "capped" : false,
    "ns" : "test2.books",
    "count" : 20000,
    "numExtents" : 10,
    "size" : 2240000,
    "storageSize" : 5586944,
    "totalIndexSize" : 1250928,
    "indexSizes" : {
        "_id_" : 670432,
        "id_1" : 580496
    },
    "avgObjSize" : 112,
    "nindexes" : 2,
    "nchunks" : 5,
    "shards" : {
        "s1" : {
            "ns" : "test2.books",
            "count" : 12300,
            "size" : 1377600,
            "avgObjSize" : 112,
            "numExtents" : 5,
            "storageSize" : 2793472,
            "lastExtentSize" : 2097152,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 1,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 760368,
            "indexSizes" : {
                "_id_" : 408800,
                "id_1" : 351568
            },
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("586286596422d63aa9f9f000")
            }
        },
        "s2" : {
            "ns" : "test2.books",
            "count" : 7700,
            "size" : 862400,
            "avgObjSize" : 112,
            "numExtents" : 5,
            "storageSize" : 2793472,
            "lastExtentSize" : 2097152,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 1,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 490560,
            "indexSizes" : {
                "_id_" : 261632,
                "id_1" : 228928
            },
            "ok" : 1,
            "$gleStats" : {
                "lastOpTime" : Timestamp(0, 0),
                "electionId" : ObjectId("58628704f916bb05014c5ea7")
            }
        }
    },
    "ok" : 1
}