A Python-based Web Application (Part 3)

6 MongoDB installation and connection test:

6.1 Installation

1. Download the latest MongoDB

At the time of writing, the latest version is mongodb-linux-x86_64-2.4.6-rc0.tgz.


2. Extract:

#tar -zxvf mongodb-linux-x86_64-2.4.6-rc0.tgz

3. Clear the locale settings:

# export LC_ALL="C"


Note: if this step is skipped, mongodb may report an error during initialization at startup.

4. Preparation before starting mongodb:

Create the mongodb data directory:    #mkdir -p /mongodata/db

Create the mongodb log file:          #mkdir /mongodata/log

  #touch /mongodata/log/mongodb.log

6.2 Start:


./mongod --dbpath=/mongodata/db --logpath=/mongodata/log/mongodb.log --fork

6.3 Install the MongoDB Python driver:


#tar -zxvf mongo-python-driver-2.6.2.tar.gz

#cd mongo-python-driver-2.6.2

#python setup.py install
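
A quick way to confirm the driver is importable is to print its version string (pymongo.version is part of the driver's public API); after the install above it should report 2.6.2:

#!/usr/bin/python
# Print the installed PyMongo version.
import pymongo
print pymongo.version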


6.4 Python connection test


Use a Python script to try connecting to the database:



#!/usr/bin/python
# Connect to the local MongoDB instance, insert a few test documents,
# and print them back to verify the round trip.

import pymongo
import random

# Connect to mongod on the default port and select the "test" database.
conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.test
db.authenticate("root", "root.com")

# One fixed document, followed by a handful of random ones.
db.user.save({'id': 1, 'name': 'kaka', 'sex': 'male'})
for id in range(2, 10):
    name = random.choice(['steve', 'koby', 'owen', 'tody', 'rony'])
    sex = random.choice(['male', 'female'])
    db.user.insert({'id': id, 'name': name, 'sex': sex})

# Read everything back.
content = db.user.find()
for i in content:
    print i



Save this as conn_mongodb.py.


Run the script:


root@debian:/usr/local/mongodb/bin# python /root/conn_mongodb.py

{u'_id': ObjectId('52317dbc6e95524a10505709'), u'id': 1, u'name': u'kaka', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570a'), u'id': 2, u'name': u'tody', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570b'), u'id': 3, u'name': u'rony', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570c'), u'id': 4, u'name': u'owen', u'sex': u'female'}

{u'_id': ObjectId('52317dbc6e95524a1050570d'), u'id': 5, u'name': u'koby', u'sex': u'female'}

{u'_id': ObjectId('52317dbc6e95524a1050570e'), u'id': 6, u'name': u'steve', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a1050570f'), u'id': 7, u'name': u'tody', u'sex': u'male'}

{u'_id': ObjectId('52317dbc6e95524a10505710'), u'id': 8, u'name': u'koby', u'sex': u'female'}

{u'_id': ObjectId('52317dbc6e95524a10505711'), u'id': 9, u'name': u'koby', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4cd'), u'id': 1, u'name': u'kaka', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4ce'), u'id': 2, u'name': u'owen', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4cf'), u'id': 3, u'name': u'tody', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d0'), u'id': 4, u'name': u'koby', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d1'), u'id': 5, u'name': u'tody', u'sex': u'male'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d2'), u'id': 6, u'name': u'tody', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d3'), u'id': 7, u'name': u'tody', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d4'), u'id': 8, u'name': u'rony', u'sex': u'female'}

{u'_id': ObjectId('52317dc26e95524a16f4b4d5'), u'id': 9, u'name': u'owen', u'sex': u'male'}


This confirms that Python can connect to MongoDB correctly.
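
As a side note, PyMongo 2.4 and later also ship MongoClient, which newer PyMongo releases recommend over Connection. A minimal sketch of the same connectivity check using MongoClient, assuming the same local mongod on the default port:

#!/usr/bin/python
# Same connectivity check as above, but with MongoClient instead of Connection.
import pymongo

client = pymongo.MongoClient("127.0.0.1", 27017)
db = client.test
db.user.insert({'id': 99, 'name': 'client-test', 'sex': 'male'})
print db.user.find_one({'id': 99})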




7 MongoDB sharded data storage:

7.1 Architecture


MongoDB's partitioning technology is its sharding architecture.


     A sharded cluster scales massive data sets horizontally: the data is split up and stored across the individual nodes of the cluster.

     MongoDB divides the data into chunks. Each chunk is a contiguous range of records within a collection, typically around 200MB; when a chunk grows beyond this size, a new chunk is created.


7.1.1 Building a sharded cluster requires three roles


Shard server (Shard Server): a shard server is a partition that stores the actual data. Each shard can be a single mongod instance or a replica set made up of a group of mongod instances; to get automatic failover inside each shard, MongoDB officially recommends running every shard as a replica set.

Config server (Config Server): to store a particular collection across multiple shards, the collection must be given a shard key, which determines which chunk each record belongs to. The config servers store the following information:


1. The configuration of every shard node

2. The shard key range of every chunk

3. How chunks are distributed across the shards

4. The sharding configuration of every database and collection in the cluster


Route process (Route Process): a front-end router through which clients connect to the cluster. It first asks the config servers which shard a record should be queried from or saved to, then connects to that shard to execute the operation, and finally returns the result to the client. Clients simply send the queries and updates they would normally send to mongod to the router process unchanged, without needing to know which shard stores the records involved.
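
From the application's point of view the router is transparent: the Python code from section 6 only needs to point at the mongos port instead of a mongod. A minimal sketch, assuming a mongos listening on 10.15.62.202:30000 as configured later in 7.2.3:

#!/usr/bin/python
# Talk to the cluster through mongos; the router decides which shard
# actually stores or returns each document.
import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
db = conn.test
db.user.insert({'id': 100, 'name': 'routed', 'sex': 'male'})
print db.user.find_one({'id': 100})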


7.1.2 Architecture diagram:


(Architecture diagram omitted: clients connect to the mongos routers, which consult the config servers and route operations to the three shards, each shard being a replica set.)



7.2 Sharding preparation:

7.2.1 Installation

Note: 10.15.62.202 is referred to below as server1

      10.15.62.203 is referred to below as server2

      10.15.62.205 is referred to below as server3


1. Extract MongoDB and move it to /opt (run this on server1, server2 and server3):


#tar -zxvf mongodb-linux-x86_64-2.4.6.tgz && mv mongodb-linux-x86_64-2.4.6 /opt/mongodb && rm -rf mongodb-linux-x86_64-2.4.6.tgz


2. On each server, create the log and security directories, create the mongodb group, create the mongodb user in that group, and set the mongodb user's password:


#mkdir -p /opt/mongodb/log /opt/mongodb/security && groupadd mongodb && useradd -g mongodb mongodb && passwd mongodb


3. Create the security key (run this on server1, server2 and server3):


#openssl rand -base64 741 > /opt/mongodb/security/mongo.key

#chmod 0600 /opt/mongodb/security/mongo.key

(If you skip the chmod, startup fails with the error: 644 permissions on /opt/mongodb/security/mongo.key are too open)



4. Database program directory layout:


root@debian:/opt/mongodb# tree /opt/mongodb/
/opt/mongodb/
├── bin
│   ├── bsondump
│   ├── mongo
│   ├── mongod
│   ├── mongodump
│   ├── mongoexport
│   ├── mongofiles
│   ├── mongoimport
│   ├── mongooplog
│   ├── mongoperf
│   ├── mongorestore
│   ├── mongos
│   ├── mongosniff
│   ├── mongostat
│   └── mongotop
├── GNU-AGPL-3.0
├── log
├── README
├── security
│   └── mongo.key
└── THIRD-PARTY-NOTICES

3 directories, 18 files
root@debian:/opt/mongodb#


7.2.2 Create the database and log directories:


Server1


#mkdir -p /data/shard10001 /data/shard20001 /data/shard30001 /data/config1  && chown -R mongodb:mongodb /data/shard10001 /data/shard20001 /data/shard30001 /data/config1


Server2


#mkdir -p /data/shard10002 /data/shard20002 /data/shard30002 /data/config2  && chown -R mongodb:mongodb /data/shard10002 /data/shard20002 /data/shard30002 /data/config2


Server3


#mkdir -p /data/shard10003 /data/shard20003 /data/shard30003 /data/config3  && chown -R mongodb:mongodb /data/shard10003 /data/shard20003 /data/shard30003 /data/config3


7.2.3 Create the mongod sharding configuration files:


Server1


1. Create the config file /opt/mongodb/security/shard10001.conf with the following contents:


dbpath=/data/shard10001

shardsvr=true

replSet=shard1

fork = true

port=10001

oplogSize=100

logpath=/opt/mongodb/log/shard10001.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


2. Create the config file /opt/mongodb/security/shard20001.conf with the following contents:


dbpath=/data/shard20001

shardsvr=true

replSet=shard2

fork = true

port=10002

oplogSize=100

logpath=/opt/mongodb/log/shard20001.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


3. Create the config file /opt/mongodb/security/shard30001.conf with the following contents:



dbpath=/data/shard30001

shardsvr=true

replSet=shard3

fork = true

port=10003

oplogSize=100

logpath=/opt/mongodb/log/shard30001.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


4. Create the config file /opt/mongodb/security/config1.conf with the following contents:


dbpath=/data/config1

configsvr=true

fork = true

port=20000

oplogSize=5

logpath=/opt/mongodb/log/config1.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key


5. Create the config file /opt/mongodb/security/mongos1.conf with the following contents:


configdb=10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000

port=30000

fork = true

chunkSize=5

logpath=/opt/mongodb/log/mongos1.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key


On Server2, do the following:


1. Create the config file /opt/mongodb/security/shard10002.conf with the following contents:



dbpath=/data/shard10002

shardsvr=true

replSet=shard1

fork = true

port=10001

oplogSize=100

logpath=/opt/mongodb/log/shard10002.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


2. Create the config file /opt/mongodb/security/shard20002.conf with the following contents:


dbpath=/data/shard20002

shardsvr=true

replSet=shard2

fork = true

port=10002

oplogSize=100

logpath=/opt/mongodb/log/shard20002.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


3. Create the config file /opt/mongodb/security/shard30002.conf with the following contents:



dbpath=/data/shard30002

shardsvr=true

replSet=shard3

fork = true

port=10003

oplogSize=100

logpath=/opt/mongodb/log/shard30002.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key



4. Create the config file /opt/mongodb/security/config2.conf with the following contents:



dbpath=/data/config2

configsvr=true

fork = true

port=20000

oplogSize=5

logpath=/opt/mongodb/log/config2.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key


5. Create the config file /opt/mongodb/security/mongos2.conf with the following contents:



configdb=10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000

port=30000

fork = true

chunkSize=5

logpath=/opt/mongodb/log/mongos2.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key



On Server3, do the following:



1. Create the config file /opt/mongodb/security/shard10003.conf with the following contents:


dbpath=/data/shard10003

shardsvr=true

replSet=shard1

fork = true

port=10001

oplogSize=100

logpath=/opt/mongodb/log/shard10003.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


2. Create the config file /opt/mongodb/security/shard20003.conf with the following contents:



dbpath=/data/shard20003

shardsvr=true

replSet=shard2

fork = true

port=10002

oplogSize=100

logpath=/opt/mongodb/log/shard20003.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key



3. Create the config file /opt/mongodb/security/shard30003.conf with the following contents:



dbpath=/data/shard30003

shardsvr=true

replSet=shard3

fork = true

port=10003

oplogSize=100

logpath=/opt/mongodb/log/shard30003.log

profile=1

slowms=5

rest=true

quiet=true

keyFile=/opt/mongodb/security/mongo.key


4. Create the config file /opt/mongodb/security/config3.conf with the following contents:



dbpath=/data/config3

configsvr=true

fork = true

port=20000

oplogSize=5

logpath=/opt/mongodb/log/config3.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key



5. Create the config file /opt/mongodb/security/mongos3.conf with the following contents:



configdb=10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000

port=30000

fork = true

chunkSize=5

logpath=/opt/mongodb/log/mongos3.log

quiet=true

keyFile=/opt/mongodb/security/mongo.key




Note: according to MongoDB's official configuration guidance on auth, keyFile takes precedence over plain username/password authentication, and turning keyFile on also turns authentication on. Authentication must be disabled before the replica sets are initialized, so the procedure is: disable keyFile, initialize the replica sets, add the admin user, and then re-enable keyFile. As the documentation puts it: "Authentication is disabled by default. To enable authentication for a given mongod or mongos instance, use the auth and keyFile configuration settings."


Disable keyFile before initialization (reverse this substitution after the admin user has been added in 7.3.1):


#sed -i 's/keyFile/#keyFile/' /opt/mongodb/security/*.conf


7.2.4 Start the sharding services:

Server1


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard10001.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard20001.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard30001.conf


Server2


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard10002.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard20002.conf

# /opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard30002.conf


Server3


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard10003.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard20003.conf

#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/shard30003.conf


7.2.5 Start the config services:


Server1


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/config1.conf


Server2


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/config2.conf


Server3


#/opt/mongodb/bin/mongod --config=/opt/mongodb/security/config3.conf


7.2.6 Start the mongos services:


Server1


/opt/mongodb/bin/mongos  --config=/opt/mongodb/security/mongos1.conf


Server2


/opt/mongodb/bin/mongos  --config=/opt/mongodb/security/mongos2.conf


Server3


/opt/mongodb/bin/mongos  --config=/opt/mongodb/security/mongos3.conf






7.2.7 Initialize the replica sets


Use the mongo shell to connect to any one of the mongod instances; here we use server1:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:10001/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:10001/admin

> db

admin

> config={_id:"shard1",members:[{_id:0,host:"10.15.62.202:10001"},{_id:1,host:"10.15.62.203:10001"},{_id:2,host:"10.15.62.205:10001"}]}

{

       "_id" : "shard1",

       "members" : [

               {

                       "_id" : 0,

                       "host" : "10.15.62.202:10001"

               },

               {

                       "_id" : 1,

                       "host" : "10.15.62.203:10001"

               },

               {

                       "_id" : 2,

                       "host" : "10.15.62.205:10001"

               }

       ]

}

> rs.initiate(config)

{

       "info" : "Config now saved locally.  Should come online in about a minute.",

       "ok" : 1

}

> rs.status()

{

       "set" : "shard1",

       "date" : ISODate("2013-09-24T05:12:29Z"),

       "myState" : 1,

       "members" : [

               {

                       "_id" : 0,

                       "name" : "10.15.62.202:10001",

                       "health" : 1,

                       "state" : 1,

"stateStr" : "PRIMARY",

                       "uptime" : 454,

                       "optime" : Timestamp(1379999452, 1),

                       "optimeDate" : ISODate("2013-09-24T05:10:52Z"),

                       "self" : true

               },

               {

                       "_id" : 1,

                       "name" : "10.15.62.203:10001",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 94,

                       "optime" : Timestamp(1379999452, 1),

                       "optimeDate" : ISODate("2013-09-24T05:10:52Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:12:29Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:12:28Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10001"

               },

               {

                       "_id" : 2,

                       "name" : "10.15.62.205:10001",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 94,

                       "optime" : Timestamp(1379999452, 1),

                       "optimeDate" : ISODate("2013-09-24T05:10:52Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:12:29Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:12:29Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10001"

               }

       ],

       "ok" : 1

}

shard1:PRIMARY>



Add the remaining replica sets in the same way:



Replica set 2:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:10002/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:10002/admin

> config={_id:"shard2",members:[{_id:0,host:"10.15.62.202:10002"},{_id:1,host:"10.15.62.203:10002"},{_id:2,host:"10.15.62.205:10002"}]}

{

       "_id" : "shard2",

       "members" : [

               {

                       "_id" : 0,

                       "host" : "10.15.62.202:10002"

               },

               {

                       "_id" : 1,

                       "host" : "10.15.62.203:10002"

               },

               {

                       "_id" : 2,

                       "host" : "10.15.62.205:10002"

               }

       ]

}

> rs.initiate(config)

{

       "info" : "Config now saved locally.  Should come online in about a minute.",

       "ok" : 1

}

shard2:PRIMARY> rs.status()

{

       "set" : "shard2",

       "date" : ISODate("2013-09-24T05:30:40Z"),

       "myState" : 1,

       "members" : [

               {

                       "_id" : 0,

                       "name" : "10.15.62.202:10002",

                       "health" : 1,

                       "state" : 1,

"stateStr" : "PRIMARY",

                       "uptime" : 223,

                       "optime" : Timestamp(1380000589, 1),

                       "optimeDate" : ISODate("2013-09-24T05:29:49Z"),

                       "self" : true

               },

               {

                       "_id" : 1,

                       "name" : "10.15.62.203:10002",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 41,

                       "optime" : Timestamp(1380000589, 1),

                       "optimeDate" : ISODate("2013-09-24T05:29:49Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:30:39Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:30:39Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10002"

               },

               {

                       "_id" : 2,

                       "name" : "10.15.62.205:10002",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 41,

                       "optime" : Timestamp(1380000589, 1),

                       "optimeDate" : ISODate("2013-09-24T05:29:49Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:30:39Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:30:39Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10002"

               }

       ],

       "ok" : 1

}

shard2:PRIMARY> exit



Replica set 3:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:10003/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:10003/admin

> config={_id:"shard3",members:[{_id:0,host:"10.15.62.202:10003"},{_id:1,host:"10.15.62.203:10003"},{_id:2,host:"10.15.62.205:10003"}]}

{

       "_id" : "shard3",

       "members" : [

               {

                       "_id" : 0,

                       "host" : "10.15.62.202:10003"

               },

               {

                       "_id" : 1,

                       "host" : "10.15.62.203:10003"

               },

               {

                       "_id" : 2,

                       "host" : "10.15.62.205:10003"

               }

       ]

}

> rs.initiate(config)

{

       "info" : "Config now saved locally.  Should come online in about a minute.",

       "ok" : 1

}

>

shard3:PRIMARY> rs.status()

{

       "set" : "shard3",

       "date" : ISODate("2013-09-24T05:42:43Z"),

       "myState" : 1,

       "members" : [

               {

                       "_id" : 0,

                       "name" : "10.15.62.202:10003",

                       "health" : 1,

                       "state" : 1,

                       "stateStr" : "PRIMARY",

                       "uptime" : 930,

                       "optime" : Timestamp(1380001270, 1),

                       "optimeDate" : ISODate("2013-09-24T05:41:10Z"),

                       "self" : true

               },

               {

                       "_id" : 1,

                       "name" : "10.15.62.203:10003",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 90,

                       "optime" : Timestamp(1380001270, 1),

                       "optimeDate" : ISODate("2013-09-24T05:41:10Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:42:43Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:42:41Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10003"

               },

               {

                       "_id" : 2,

                       "name" : "10.15.62.205:10003",

                       "health" : 1,

                       "state" : 2,

                       "stateStr" : "SECONDARY",

                       "uptime" : 90,

                       "optime" : Timestamp(1380001270, 1),

                       "optimeDate" : ISODate("2013-09-24T05:41:10Z"),

                       "lastHeartbeat" : ISODate("2013-09-24T05:42:43Z"),

                       "lastHeartbeatRecv" : ISODate("2013-09-24T05:42:41Z"),

                       "pingMs" : 0,

                       "syncingTo" : "10.15.62.202:10003"

               }

       ],

       "ok" : 1

}

shard3:SECONDARY> exit





7.2.8 Following the primary election in the log:


#more /opt/mongodb/log/shard10001.log


Tue Sep 24 13:10:51.831 [conn1] replSet replSetInitiate admin command received from client

Tue Sep 24 13:10:51.852 [conn1] replSet replSetInitiate config object parses ok, 3 members specified

Tue Sep 24 13:10:52.154 [conn1] replSet replSetInitiate all members seem up

Tue Sep 24 13:10:52.154 [conn1] ******

Tue Sep 24 13:10:52.154 [conn1] creating replication oplog of size: 100MB...

Tue Sep 24 13:10:52.160 [FileAllocator] allocating new datafile /data/shard10001/local.1, filling with zeroes...

Tue Sep 24 13:10:52.160 [FileAllocator] creating directory /data/shard10001/_tmp

Tue Sep 24 13:10:52.175 [FileAllocator] done allocating datafile /data/shard10001/local.1, size: 128MB,  took 0.013 secs

Tue Sep 24 13:10:52.176 [conn1] ******

Tue Sep 24 13:10:52.176 [conn1] replSet info saving a newer config version to local.system.replset

Tue Sep 24 13:10:52.178 [conn1] replSet saveConfigLocally done

Tue Sep 24 13:10:52.178 [conn1] replSet replSetInitiate config now saved locally.  Should come online in about a minute.

# Initialization begins


Tue Sep 24 13:10:52.178 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "shard1", members: [ { _id: 0.0, host: "10.15.62.202:10001" }, { _id: 1.0, host:

"10.15.62.203:10001" }, { _id: 2.0, host: "10.15.62.205:10001" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:29356 reslen:112 347ms


# Member detection


Tue Sep 24 13:10:55.450 [rsStart] replSet I am 10.15.62.202:10001

Tue Sep 24 13:10:55.451 [rsStart] replSet STARTUP2

Tue Sep 24 13:10:55.456 [rsHealthPoll] replSet member 10.15.62.203:10001 is up

Tue Sep 24 13:10:55.457 [rsHealthPoll] replSet member 10.15.62.205:10001 is up



Tue Sep 24 13:10:56.457 [rsSync] replSet SECONDARY

Tue Sep 24 13:10:57.469 [rsHealthPoll] replset info 10.15.62.205:10001 thinks that we are down

Tue Sep 24 13:10:57.469 [rsHealthPoll] replSet member 10.15.62.205:10001 is now in state STARTUP2

Tue Sep 24 13:10:57.470 [rsMgr] not electing self, 10.15.62.205:10001 would veto with 'I don't think 10.15.62.202:10001 is electable'

Tue Sep 24 13:11:03.473 [rsMgr] replSet info electSelf 0

Tue Sep 24 13:11:04.459 [rsMgr] replSet PRIMARY

Tue Sep 24 13:11:05.473 [rsHealthPoll] replset info 10.15.62.203:10001 thinks that we are down

Tue Sep 24 13:11:05.473 [rsHealthPoll] replSet member 10.15.62.203:10001 is now in state STARTUP2

Tue Sep 24 13:11:05.473 [rsHealthPoll] replSet member 10.15.62.205:10001 is now in state RECOVERING

Tue Sep 24 13:11:13.188 [conn7] command admin.$cmd command: { listDatabases: 1 } ntoreturn:1 keyUpdates:0 locks(micros) R:5 r:7 reslen:124 12ms

Tue Sep 24 13:11:14.146 [conn8] query local.oplog.rs query: { ts: { $gte: Timestamp 1379999452000|1 } } cursorid:1511004138438811 ntoreturn:0 ntoskip:0 nscanned:1 keyU

pdates:0 locks(micros) r:9293 nreturned:1 reslen:106 9ms

Tue Sep 24 13:11:15.233 [slaveTracking] build index local.slaves { _id: 1 }

Tue Sep 24 13:11:15.239 [slaveTracking] build index done.  scanned 0 total records. 0.005 secs

# Primary election finished


Tue Sep 24 13:11:15.240 [slaveTracking] update local.slaves query: { _id: ObjectId('52411eed2f0c855af923ffb1'), config: { _id: 2, host: "10.15.62.205:10001" }, ns: "lo

cal.oplog.rs" } update: { $set: { syncedTo: Timestamp 1379999452000|1 } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) w:14593 14ms

Tue Sep 24 13:11:15.478 [rsHealthPoll] replSet member 10.15.62.205:10001 is now in state SECONDARY

Tue Sep 24 13:11:23.486 [rsHealthPoll] replSet member 10.15.62.203:10001 is now in state RECOVERING

Tue Sep 24 13:11:25.487 [rsHealthPoll] replSet member 10.15.62.203:10001 is now in state SECONDARY
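
The replica set state can also be read from Python. A sketch that connects directly to the shard1 member on server1 and prints each member's state (this works at this stage because keyFile/auth is still disabled):

#!/usr/bin/python
# Print the name and state of every member of the shard1 replica set.
import pymongo

conn = pymongo.Connection("10.15.62.202", 10001)
status = conn.admin.command("replSetGetStatus")
for member in status["members"]:
    print member["name"], member["stateStr"]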


7.3 Add a database admin user, enable routing, and shard the data:


7.3.1 Create a superuser:


root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:30000/admin

MongoDB shell version: 2.4.6

connecting to: 10.15.62.202:30000/admin

mongos> db.addUser({user:"clusterAdmin",pwd:"pwd",roles:["clusterAdmin","userAdminAnyDatabase","readWriteAnyDatabase"]});

{

       "user" : "clusterAdmin",

       "pwd" : "6f8d1d5a17d65fd6b632cdb0cb541466",

       "roles" : [

               "clusterAdmin",

               "userAdminAnyDatabase",

               "readWriteAnyDatabase"

       ],

       "_id" : ObjectId("52412ec4eb1bcd32b5a25ad2")

}


Note: the userAdminAnyDatabase role can only access the admin database; it is mainly used to add and modify other users and roles. It cannot authenticate against other databases, and authentication also fails when removing members from a shard; this is verified later on.
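
Because of this, application databases need their own users once authentication is enforced. A rough sketch using PyMongo's add_user through the mongos connection (the database name "test" and the appuser credentials are made-up examples):

#!/usr/bin/python
# Authenticate as the cluster admin created above, then create a
# read/write user on an application database (names are examples only).
import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
conn.admin.authenticate("clusterAdmin", "pwd")
conn.test.add_user("appuser", "apppwd", roles=["readWrite"])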


7.3.2 Enable routing through mongos and add the shards


mongos> db

admin

mongos> db.runCommand({addshard:"shard1/10.15.62.202:10001,10.15.62.203:10001,10.15.62.205:10001",name:"shard1",maxsize:20480})

{ "shardAdded" : "shard1", "ok" : 1 }

mongos> db.runCommand({addshard:"shard2/10.15.62.202:10002,10.15.62.203:10002,10.15.62.205:10002",name:"shard2",maxsize:20480})

{ "shardAdded" : "shard2", "ok" : 1 }

mongos>

mongos> db.runCommand({addshard:"shard3/10.15.62.202:10003,10.15.62.203:10003,10.15.62.205:10003",name:"shard3",maxsize:20480})

{ "shardAdded" : "shard3", "ok" : 1 }

mongos>
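
The same addshard commands can also be issued from Python through the mongos connection. A minimal sketch that reuses the clusterAdmin user from 7.3.1 and registers shard1 (repeat with the corresponding member lists for shard2 and shard3):

#!/usr/bin/python
# Register shard1 via the admin database on mongos, mirroring the shell command above.
import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
conn.admin.authenticate("clusterAdmin", "pwd")
print conn.admin.command(
    "addshard",
    "shard1/10.15.62.202:10001,10.15.62.203:10001,10.15.62.205:10001",
    name="shard1", maxsize=20480)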


7.3.3 Check the shards:


mongos> db.runCommand({listshards:1})

{

       "shards" : [

               {

                       "_id" : "shard1",

                       "host" : "shard1/10.15.62.202:10001,10.15.62.203:10001,10.15.62.205:10001"

               },

               {

                       "_id" : "shard2",

                       "host" : "shard2/10.15.62.202:10002,10.15.62.203:10002,10.15.62.205:10002"

               },

               {

                       "_id" : "shard3",

                       "host" : "shard3/10.15.62.202:10003,10.15.62.203:10003,10.15.62.205:10003"

               }

       ],

       "ok" : 1

}


All three shards are listed and look normal.


7.3.4 Enable sharding on a database:


> db.runCommand( { enablesharding : <dbname>} );


Running the command above lets the database span multiple shards; without it, the database stays on a single shard. Once sharding is enabled on a database, its different collections can be placed on different shards, but each individual collection still lives entirely on one shard. To shard a single collection as well, an extra step is needed on that collection, as described in 7.3.5 below.
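
For example, enabling sharding on a hypothetical database named test from Python (the database name is illustrative):

#!/usr/bin/python
# Allow the "test" database to be spread across shards.
import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
conn.admin.authenticate("clusterAdmin", "pwd")
print conn.admin.command("enablesharding", "test")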


7.3.5 Collection sharding


To shard an individual collection as well, the collection must be given a shard key; this is done with the following command:

> db.runCommand( { shardcollection : <namespace>,key : <shardkeypatternobject> });



Notes:

a. The system automatically creates an index on the shard key of a sharded collection (the user may also create it in advance).

b. A sharded collection may only have one unique index, and it must be on the shard key; other unique indexes are not allowed.
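
For example, sharding a hypothetical test.user collection on its id field from Python (the collection name and key are illustrative; sharding must already be enabled on the test database as in 7.3.4):

#!/usr/bin/python
# Shard the test.user collection on the "id" field through mongos.
import pymongo

conn = pymongo.Connection("10.15.62.202", 30000)
conn.admin.authenticate("clusterAdmin", "pwd")
print conn.admin.command("shardcollection", "test.user", key={"id": 1})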

