Add a new node to a Hadoop cluster without restarting the cluster

On the new node
Step 1: modify /etc/hostname so it contains the new node's hostname:
slave0X
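The hostname step can be sketched as follows. The name slave03 is a hypothetical example, and a stand-in file is used so the snippet runs without root; on the real node the target is /etc/hostname and the commands are run as root:

```shell
#!/bin/sh
# Write the new node's hostname (slave03 is a hypothetical example).
# ./hostname.example stands in for /etc/hostname so this runs without root.
HOSTNAME_FILE=./hostname.example
echo "slave03" > "$HOSTNAME_FILE"

# On the real node (as root), the running hostname can also be updated
# without a reboot:
#   hostname slave03
cat "$HOSTNAME_FILE"
```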
=============================================
On the master node
Step 1: modify /etc/hosts so every node, including the new one, has an entry:
ip masters
ip slave01
ip slave02
...
ip slave0X
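A sketch of the /etc/hosts update on the master. The IP addresses and the slave03 hostname are hypothetical, and a stand-in file keeps the snippet runnable outside the cluster; on the real master the target is /etc/hosts:

```shell
#!/bin/sh
# Append the new slave's entry to the hosts file on the master.
# ./hosts.example stands in for /etc/hosts; the IPs are made up.
HOSTS_FILE=./hosts.example
cat > "$HOSTS_FILE" <<'EOF'
192.168.1.10 masters
192.168.1.11 slave01
192.168.1.12 slave02
EOF

# Add the new node only if it is not already listed (idempotent).
grep -q 'slave03' "$HOSTS_FILE" || echo '192.168.1.13 slave03' >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```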

Step 2: modify the configuration file conf/slaves, adding the new hostname:
slave01
slave02
...
slave0X
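The conf/slaves update can be sketched the same way. slave03 is a hypothetical hostname, and a stand-in file is used so the example runs anywhere; on the real master the target is conf/slaves inside the Hadoop directory:

```shell
#!/bin/sh
# Add the new hostname to the slaves list on the master.
# ./slaves.example stands in for conf/slaves; slave03 is hypothetical.
SLAVES_FILE=./slaves.example
printf 'slave01\nslave02\n' > "$SLAVES_FILE"

# Append only if the exact hostname is not already present.
grep -qx 'slave03' "$SLAVES_FILE" || echo 'slave03' >> "$SLAVES_FILE"
cat "$SLAVES_FILE"
```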
==============================================

Distribute the configuration files from the master to the slave nodes.
Run the batch script "batch" from the hadoop-1.0.3 directory as the hadoop user:
$ su - hadoop
$ cd ~/hadoop/hadoop-1.0.3
$ chown -R hadoop:hadoop /home/batch
$ sh /home/batch

# The contents of the batch file are as follows (slave0X is each slave's hostname):

scp conf/slaves hadoop@slave0X:hadoop-1.0.3/conf/

scp /etc/hosts root@slave0X:/etc/hosts
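The two scp lines can be generalized into a loop over the slaves list. The sketch below only echoes each command into a file so it can be dry-run without a live cluster; on the real master the echoed lines would be executed directly (the hostnames are hypothetical):

```shell
#!/bin/sh
# Dry-run sketch of the batch distribution script.
# ./slaves.example stands in for conf/slaves; hostnames are made up.
printf 'slave01\nslave02\nslave03\n' > slaves.example

# For each slave, generate the two scp commands the batch file runs.
while read -r slave; do
  [ -z "$slave" ] && continue
  echo "scp conf/slaves hadoop@$slave:hadoop-1.0.3/conf/"
  echo "scp /etc/hosts root@$slave:/etc/hosts"
done < slaves.example > push-commands.txt

cat push-commands.txt
```

Looping over conf/slaves itself means the script never has to be edited when another slave is added.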

==============================================

Start the daemons on the new (or recovered) node itself:

bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker

Balance disk utilization across the cluster from the master node:

bin/start-balancer.sh -threshold 10

Note: the threshold is a percentage of disk capacity, and the default is 10. A lower threshold produces a more evenly balanced cluster, but the balancer takes longer to run.
