hadoop-3.1.0 dual-NameNode cluster installation notes - colby陳倫

1. Change the hostname
vim /etc/hostname
Reboot
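A per-node sketch, assuming CentOS 7 with systemd (hostnamectl writes the hostname without a manual edit or reboot of the file):
hostnamectl set-hostname COLBY-NN-101   # on 192.168.1.101
hostnamectl set-hostname COLBY-NN-102   # on 192.168.1.102
hostnamectl set-hostname COLBY-DN-111   # on 192.168.1.111
hostnamectl set-hostname COLBY-DN-112   # on 192.168.1.112
hostnamectl set-hostname COLBY-DN-113   # on 192.168.1.113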

2. Edit the hosts file and add the hostname-to-IP mappings
vim /etc/hosts
VM network mode: host-only
These lines must be commented out:
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
Add the following mappings:
192.168.1.101 COLBY-NN-101
192.168.1.102 COLBY-NN-102
192.168.1.111 COLBY-DN-111
192.168.1.112 COLBY-DN-112
192.168.1.113 COLBY-DN-113
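The same mappings are needed on every node; one way (assuming the file is edited on COLBY-NN-101 first) is to copy it out:
scp /etc/hosts COLBY-NN-102:/etc/hosts
scp /etc/hosts COLBY-DN-111:/etc/hosts
scp /etc/hosts COLBY-DN-112:/etc/hosts
scp /etc/hosts COLBY-DN-113:/etc/hosts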

3. Install the JDK
/usr/local/jdk1.8.0_171

vi /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_171
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile

Distribute the JDK
scp -r /usr/local/jdk COLBY-NN-102:/usr/local/
scp -r /usr/local/jdk COLBY-DN-111:/usr/local/
scp -r /usr/local/jdk COLBY-DN-112:/usr/local/
scp -r /usr/local/jdk COLBY-DN-113:/usr/local/
4. Passwordless SSH login
Run on each of the five machines: ssh-keygen -t rsa
    COLBY-NN-101
    COLBY-NN-102
    COLBY-DN-111
    COLBY-DN-112
    COLBY-DN-113

On 101 and 102, run each of the following:
ssh-copy-id COLBY-NN-101
ssh-copy-id COLBY-NN-102
ssh-copy-id COLBY-DN-111
ssh-copy-id COLBY-DN-112
ssh-copy-id COLBY-DN-113
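A quick check that the key distribution worked (using one of the hosts above):
ssh COLBY-DN-113 hostname    # should log in and print COLBY-DN-113 without prompting for a password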

5. Install ZooKeeper
cp zoo_sample.cfg zoo.cfg
Change dataDir=/app/bigdata/zookeeper/tmp
mkdir -p /app/bigdata/zookeeper/tmp

Append at the end (three ZooKeeper servers are enough):
server.1=COLBY-NN-101:2888:3888
server.2=COLBY-NN-102:2888:3888
server.3=COLBY-DN-111:2888:3888

Then create an empty file
touch /app/bigdata/zookeeper/tmp/myid
Finally write this server's ID into the file
echo 1 > /app/bigdata/zookeeper/tmp/myid

Copy the configured ZooKeeper to the other nodes (first create the target directory on COLBY-NN-102 and COLBY-DN-111: mkdir -p /app/bigdata)
            scp -r /app/bigdata/zookeeper/ COLBY-NN-102:/app/bigdata/
            scp -r /app/bigdata/zookeeper/ COLBY-DN-111:/app/bigdata/

            Note: change the content of /app/bigdata/zookeeper/tmp/myid on COLBY-NN-102 and COLBY-DN-111 to match zoo.cfg:
            COLBY-NN-102
                echo 2 > /app/bigdata/zookeeper/tmp/myid
            COLBY-DN-111
                echo 3 > /app/bigdata/zookeeper/tmp/myid

scp -r /etc/profile COLBY-NN-102:/etc/profile
scp -r /etc/profile COLBY-DN-111:/etc/profile

scp -r  /app/bigdata/zookeeper/conf/zoo.cfg COLBY-NN-102:/app/bigdata/zookeeper/conf/
scp -r  /app/bigdata/zookeeper/conf/zoo.cfg COLBY-DN-111:/app/bigdata/zookeeper/conf/


Run on each of the three ZooKeeper servers:
zkServer.sh start

Check the status (one node should report Mode: leader, the others Mode: follower):
zkServer.sh status

Configure Hadoop; the files to edit under etc/hadoop are:
core-site.xml、hdfs-site.xml、yarn-site.xml、mapred-site.xml、hadoop-env.sh、workers
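The notes don't record the contents of these files; a minimal sketch for the dual-NameNode (HA) layout described here, assuming a nameservice named mycluster, JournalNodes on COLBY-NN-101, COLBY-NN-102 and COLBY-DN-111, and a JournalNode edits directory of /app/bigdata/hadoop/journal (the nameservice name and journal path are assumptions):

core-site.xml:
<configuration>
    <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
    <property><name>hadoop.tmp.dir</name><value>/app/bigdata/hadoop/tmp</value></property>
    <property><name>ha.zookeeper.quorum</name><value>COLBY-NN-101:2181,COLBY-NN-102:2181,COLBY-DN-111:2181</value></property>
</configuration>

hdfs-site.xml:
<configuration>
    <property><name>dfs.nameservices</name><value>mycluster</value></property>
    <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>COLBY-NN-101:9000</value></property>
    <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>COLBY-NN-102:9000</value></property>
    <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>COLBY-NN-101:9870</value></property>
    <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>COLBY-NN-102:9870</value></property>
    <property><name>dfs.namenode.name.dir</name><value>/app/bigdata/hadoop/hdfs/name</value></property>
    <property><name>dfs.datanode.data.dir</name><value>/app/bigdata/hadoop/hdfs/data</value></property>
    <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://COLBY-NN-101:8485;COLBY-NN-102:8485;COLBY-DN-111:8485/mycluster</value></property>
    <property><name>dfs.journalnode.edits.dir</name><value>/app/bigdata/hadoop/journal</value></property>
    <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
    <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
    <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
    <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
</configuration>

hadoop-env.sh only needs the JDK path, and workers lists the DataNodes:
export JAVA_HOME=/usr/local/jdk1.8.0_171

COLBY-DN-111
COLBY-DN-112
COLBY-DN-113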

Copy the Hadoop files
scp -r /app/bigdata/hadoop COLBY-NN-102:/app/bigdata/
scp -r /app/bigdata/hadoop COLBY-DN-111:/app/bigdata/
scp -r /app/bigdata/hadoop COLBY-DN-112:/app/bigdata/
scp -r /app/bigdata/hadoop COLBY-DN-113:/app/bigdata/

scp -r /app/bigdata/hadoop/etc/hadoop/* COLBY-NN-102:/app/bigdata/hadoop/etc/hadoop/
scp -r /app/bigdata/hadoop/etc/hadoop/* COLBY-DN-111:/app/bigdata/hadoop/etc/hadoop/

scp -r /app/bigdata/hadoop/hdfs/name/* COLBY-NN-102:/app/bigdata/hadoop/hdfs/name/

scp -r /app/bigdata/hadoop/tmp/ COLBY-NN-102:/app/bigdata/hadoop/
Start the JournalNode on each server that hosts one:
hdfs --daemon start journalnode

Then format the NameNode (on COLBY-NN-101 only):
hdfs namenode -format
Then initialise the HA failover state in ZooKeeper:
hdfs zkfc -formatZK
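
Putting the first start-up in order, a sketch (the scp of the name directory above plays the same role as hdfs namenode -bootstrapStandby on the standby):
# on every JournalNode host
hdfs --daemon start journalnode
# on COLBY-NN-101
hdfs namenode -format
hdfs --daemon start namenode
# on COLBY-NN-102, as an alternative to copying the name directory by hand
hdfs namenode -bootstrapStandby
# back on COLBY-NN-101, once, to initialise automatic failover in ZooKeeper
hdfs zkfc -formatZK
# then bring the whole cluster up
start-dfs.sh
start-yarn.sh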

New command in 3.1:
hdfs --daemon start journalnode   (new)

hadoop-daemon.sh start journalnode   (the 2.x form)

==================== MySQL ==========================
MySQL installation notes
Start MySQL:
systemctl start mysqld
mysql -uroot -p
root / temporary password: mrA)FxSK+0Fi

ALTER USER 'root'@'localhost' IDENTIFIED BY 'hadoop';

Add a remote account

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'hadoop' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@"172.16.16.152" IDENTIFIED BY "yourpassword" WITH GRANT OPTION;
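
On MySQL 5.7 (the GRANT ... IDENTIFIED BY form was removed in 8.0), reload the privilege tables and test a remote login afterwards; a sketch, assuming MySQL runs on COLBY-NN-101:
FLUSH PRIVILEGES;
mysql -h 192.168.1.101 -uroot -phadoop    # run from another node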

Verify the YARN cluster
(If an old JDK package is still installed, remove it first: rpm -e --nodeps jdk-1.6.0_16-fcs)

stop-yarn.sh
stop-dfs.sh
start-dfs.sh
start-yarn.sh


(The failure is caused by missing user definitions when running as root in Hadoop 3.) Edit the start and stop scripts:

$ vim sbin/start-dfs.sh
$ vim sbin/stop-dfs.sh
$ vim sbin/start-yarn.sh
$ vim sbin/stop-yarn.sh

Append to start-dfs.sh and stop-dfs.sh:
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Append to start-yarn.sh and stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root

mkdir -p /app/bigdata/hadoop/hdfs/name
mkdir -p /app/bigdata/hadoop/hdfs/data

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar wordcount /input /out
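
The /input directory must exist in HDFS before running the job; a sketch of preparing it and reading the result (putting /etc/profile there matches the Spark example further below):
hdfs dfs -mkdir -p /input
hdfs dfs -put /etc/profile /input
hdfs dfs -cat /out/part-r-00000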


================ Configure the Spark cluster ======================
spark-env.sh
# Configuration:
export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/local/jdk1.8.0_171
export SPARK_MASTER_IP=COLBY-NN-101
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/app/bigdata/hadoop/etc/hadoop

Add to slaves:
COLBY-NN-101
COLBY-NN-102
COLBY-DN-111

Distribute Spark to the other nodes:
scp -r  /app/bigdata/spark/ COLBY-NN-102:/app/bigdata/
scp -r  /app/bigdata/spark/ COLBY-DN-111:/app/bigdata/

Start the Spark cluster
/app/bigdata/spark/sbin/start-all.sh
'''
    #!/bin/bash
    echo -e "\033[31m ========Start The Cluster======== \033[0m"
    echo -e "\033[31m Starting Hadoop Now !!! \033[0m"
    /app/bigdata/hadoop/sbin/start-all.sh
    echo -e "\033[31m Starting Spark Now !!! \033[0m"
    /app/bigdata/spark/sbin/start-all.sh
    echo -e "\033[31m The Result Of The Command \"jps\" :  \033[0m"
    jps
    echo -e "\033[31m ========END======== \033[0m"
'''

Enter the Spark shell:
[root@COLBY-NN-101 sbin]#spark-shell
2018-05-05 16:42:25 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://COLBY-DN-111:4040
Spark context available as 'sc' (master = local[*], app id = local-1525509759926).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
      /_/
        
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
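
Note that in this transcript master = local[*], so the shell ran in local mode; to attach it to the standalone cluster started above, pass the master URL (assuming the master runs on COLBY-NN-101 with the default port):
spark-shell --master spark://COLBY-NN-101:7077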

'''
Validate Spark:
val file=sc.textFile("hdfs://COLBY-NN-101:9000/input/profile")
val rdd = file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
rdd.collect()
rdd.foreach(println)
'''

================== HBASE ======================
Install and configure HBase
hbase-env.sh
export JAVA_HOME=/usr/local/jdk    # JDK install path
Uncomment # export HBASE_MANAGES_ZK=true to use HBase's bundled ZooKeeper (to reuse the external ZooKeeper installed in step 5, set it to false instead).
# The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/var/hadoop/pids

hbase-site.xml


<configuration>
    <property>
        <name>hbase.rootdir</name> <!-- HBase data directory in HDFS -->
        <value>hdfs://COLBY-NN-101:9000/opt/hbase/hbase_db</value>
        <!-- the port must match Hadoop's fs.defaultFS port -->
    </property>
    <property>
        <name>hbase.cluster.distributed</name> <!-- distributed deployment -->
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name> <!-- ZooKeeper quorum list -->
        <value>COLBY-NN-101,COLBY-NN-102,COLBY-DN-111</value>
    </property>
    <property><!-- where ZooKeeper data and logs are stored -->
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/app/bigdata/hbase/logs/zookeeper</value>
    </property>
</configuration>

regionservers

COLBY-NN-101
COLBY-NN-102
COLBY-DN-111


Distribute HBase:
scp -r  /app/bigdata/hbase/ COLBY-NN-102:/app/bigdata/
scp -r  /app/bigdata/hbase/ COLBY-DN-111:/app/bigdata/

Start HBase
start-hbase.sh
Run jps to check whether the processes started successfully. If the master shows HMaster and HQuorumPeer, and the slaves show HRegionServer and HQuorumPeer, the startup succeeded.

Web UI address:
http://colby-nn-101:16010/master-status


Verify HBase:
[root@COLBY-NN-101 sbin]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/bigdata/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/bigdata/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0>

============================ HBase basic exercises ==========================================
Enter the hbase shell console:
$HBASE_HOME/bin/hbase shell
If Kerberos authentication is enabled, authenticate with the appropriate keytab first (using kinit); once authenticated and inside the hbase shell, the whoami command shows the current user:
hbase(main)> whoami
Table management
1) List the existing tables
hbase(main)> list
2) Create a table

# Syntax: create <table>, {NAME => <family>, VERSIONS => <VERSIONS>}
# Example: create table 'test' with two column families, f1 and f2, each keeping 2 versions
hbase(main)> create 'test',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}

create 'student',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}
create 'class',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}
create 'grade',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}
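
A quick end-to-end check on the 'student' table created above (the row key and values are made up for the exercise):
hbase(main)> put 'student','1001','f1:name','colby'
hbase(main)> put 'student','1001','f2:score','90'
hbase(main)> get 'student','1001'
hbase(main)> scan 'student'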

3) Drop a table
Two steps: first disable it, then drop it.
Example: drop table t1

hbase(main)> disable 't1'
hbase(main)> drop 't1'
4) Show a table's structure

# Syntax: describe <table>
# Example: show the structure of table t1
hbase(main)> describe 't1'
5) Alter a table
The table must be disabled before it can be altered.

# Syntax: alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}
# Example: set the TTL of table test1's column families to 180 days (15552000 seconds)
hbase(main)> disable 'test1'
hbase(main)> alter 'test1',{NAME=>'body',TTL=>'15552000'},{NAME=>'meta', TTL=>'15552000'}
hbase(main)> enable 'test1'
Permission management
1) Grant permissions
# Syntax: grant <user> <permissions> <table> <column family> <column qualifier>, arguments separated by commas
# Permissions are expressed with five letters: "RWXCA"
# READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A')
# Example: grant user 'test' read and write access to table t1
hbase(main)> grant 'test','RW','t1'
2) Show permissions

# Syntax: user_permission <table>
# Example: list the permissions on table t1
hbase(main)> user_permission 't1'
3) Revoke permissions

# Similar to granting; syntax: revoke <user> <table> <column family> <column qualifier>
# Example: revoke user test's permissions on table t1
hbase(main)> revoke 'test','t1'
Data operations (insert, query, update, delete)
1) Add data
# Syntax: put <table>,<rowkey>,<family:column>,<value>,<timestamp>
# Example: add a row to table t1 with rowkey rowkey001, family f1, column col1, value value01, and the default system timestamp
hbase(main)> put 't1','rowkey001','f1:col1','value01'
The usage is fairly uniform.
2) Query data
a) Get a single row

# Syntax: get <table>,<rowkey>,[<family:column>,....]
# Example: get the value of f1:col1 in row rowkey001 of table t1
hbase(main)> get 't1','rowkey001', 'f1:col1'
# Or:
hbase(main)> get 't1','rowkey001', {COLUMN=>'f1:col1'}
# Get all column values under f1 in row rowkey001 of table t1
hbase(main)> get 't1','rowkey001'
b) Scan a table

# Syntax: scan <table>, {COLUMNS => [ <family:column>,.... ], LIMIT => num}
# Advanced options such as STARTROW, TIMERANGE and FILTER can also be added
# Example: scan the first 5 rows of table t1
hbase(main)> scan 't1',{LIMIT=>5}
c) Count the rows in a table

# Syntax: count <table>, {INTERVAL => intervalNum, CACHE => cacheNum}
# INTERVAL sets how often the running count and current rowkey are printed (default 1000); CACHE is the scanner cache size per fetch (default 10); tuning it can speed up the count
# Example: count the rows of table t1, printing progress every 100 rows with a cache of 500
hbase(main)> count 't1', {INTERVAL => 100, CACHE => 500}
3) Delete data
a) Delete a column value from a row

# Syntax: delete <table>, <rowkey>, <family:column>, <timestamp>; the column name is required
# Example: delete the f1:col1 data in row rowkey001 of table t1
hbase(main)> delete 't1','rowkey001','f1:col1'
Note: this deletes all versions of the f1:col1 column in that row
b) Delete a row

# Syntax: deleteall <table>, <rowkey>, <family:column>, <timestamp>; the column name may be omitted to delete the whole row
# Example: delete row rowkey001 of table t1
hbase(main)> deleteall 't1','rowkey001'
c) Delete all data in a table

# Syntax: truncate <table>
# Internally this runs: disable table -> drop table -> create table
# Example: delete all data in table t1
hbase(main)> truncate 't1'
Region management
1) Move a region
# Syntax: move 'encodeRegionName', 'ServerName'
# encodeRegionName is the encoded suffix of the region name; ServerName is an entry from the Region Servers list on the master-status page
# Example
hbase(main)> move '4343995a58be8e5bbc739af1e91cd72d', 'db-41.xxx.xxx.org,60020,1390274516739'
2) Enable/disable the region balancer

# Syntax: balance_switch true|false
hbase(main)> balance_switch true
3) Manual split

# Syntax: split 'regionName', 'splitKey'
4) Manually trigger a major compaction

# Syntax:
#Compact all regions in a table:
#hbase> major_compact 't1'
#Compact an entire region:
#hbase> major_compact 'r1'
#Compact a single column family within a region:
#hbase> major_compact 'r1', 'c1'
#Compact a single column family within a table:
#hbase> major_compact 't1', 'c1'
Configuration management and node restarts
1) Changing the HDFS configuration
HDFS configuration location: /etc/hadoop/conf
# Sync the HDFS configuration
cat /home/hadoop/slaves|xargs -i -t scp /etc/hadoop/conf/hdfs-site.xml hadoop@{}:/etc/hadoop/conf/hdfs-site.xml
# Stop:
cat /home/hadoop/slaves|xargs -i -t ssh hadoop@{} "sudo /home/hadoop/cdh4/hadoop-2.0.0-cdh4.2.1/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"
# Start:
cat /home/hadoop/slaves|xargs -i -t ssh hadoop@{} "sudo /home/hadoop/cdh4/hadoop-2.0.0-cdh4.2.1/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
2) Changing the HBase configuration
HBase configuration location:

# Sync the HBase configuration
cat /home/hadoop/hbase/conf/regionservers|xargs -i -t scp /home/hadoop/hbase/conf/hbase-site.xml hadoop@{}:/home/hadoop/hbase/conf/hbase-site.xml

# Graceful restart
cd ~/hbase
bin/graceful_stop.sh --restart --reload --debug inspurXXX.xxx.xxx.org

 
