HA High-Availability Build in Practice: Deploying an HBase 2.2.5 Cluster on Hadoop 3.2.1

Version selection: Hadoop 3.2.1 / HBase 2.2.5

1. ZooKeeper up and running

[deploy@hadoop102 module]$ zk.sh start
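
zk.sh is a homegrown cluster-control script, not part of the ZooKeeper distribution. A minimal sketch of what it might look like, assuming passwordless SSH and the three ZooKeeper hosts (hadoop104-hadoop106) configured later in hbase-site.xml:

#!/bin/bash
# zk.sh -- run zkServer.sh with the given action (start/stop/status) on every ensemble node
for host in hadoop104 hadoop105 hadoop106; do
    echo "---------- $host ----------"
    ssh "$host" "/opt/module/zookeeper-default/bin/zkServer.sh $1"
done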

2. Hadoop up and running

[deploy@hadoop102 module]$ start-dfs.sh
[deploy@hadoop102 module]$ start-yarn.sh
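
Since everything below assumes an HDFS HA pair (see section 8), it is worth confirming the NameNode states before continuing. Assuming the NameNode IDs in hdfs-site.xml are nn1 and nn2 (adjust to your own dfs.ha.namenodes.* values):

hdfs haadmin -getServiceState nn1   # prints active or standby
hdfs haadmin -getServiceState nn2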

3. Upload and extract HBase

[deploy@hadoop102 module]$ tar -xzvf hbase-2.2.5-bin.tar.gz -C ../module/
[deploy@hadoop102 module]$ ln -s hbase-2.2.5 hbase-default
[deploy@hadoop102 module]$ ls -tlr
total 28
drwxr-xr-x. 13 deploy deploy  211 Feb  3 03:47 spark-2.4.5-bin-hadoop2.7
lrwxrwxrwx.  1 deploy deploy   12 May 30 23:57 jdk-default -> jdk1.8.0_171
drwxr-xr-x.  8 deploy deploy 4096 May 30 23:59 jdk1.8.0_171
lrwxrwxrwx.  1 deploy deploy   26 May 31 00:32 zookeeper-default -> apache-zookeeper-3.6.1-bin
drwxr-xr-x.  8 deploy deploy  159 May 31 04:31 apache-zookeeper-3.6.1-bin
drwxr-xr-x.  4 deploy deploy   43 May 31 09:12 job_history
lrwxrwxrwx.  1 deploy deploy   34 Jun  2 01:15 spark-default -> spark-3.0.0-preview2-bin-hadoop3.2
drwxr-xr-x.  6 deploy deploy   99 Jun 11 12:01 maven
drwxr-xr-x. 11 deploy deploy  195 Jun 13 23:42 hadoop-2.10.0
drwxr-xr-x. 10 deploy deploy  184 Jun 14 00:17 apache-hive-3.1.2-bin
drwxr-xr-x. 10 deploy deploy  184 Jun 14 00:21 apache-hive-2.3.7-bin
drwxr-xr-x. 11 deploy deploy  173 Jun 14 01:10 hadoop-2.7.2
drwxr-xr-x.  3 deploy deploy   18 Jun 14 02:01 hive
drwxr-xr-x.  5 deploy deploy 4096 Jun 14 02:11 tez
lrwxrwxrwx.  1 deploy deploy   12 Jun 14 02:26 hadoop-default -> hadoop-3.2.1
lrwxrwxrwx.  1 deploy deploy   21 Jun 14 06:06 hive-default -> apache-hive-3.1.2-bin
drwxr-xr-x. 11 deploy deploy  173 Jun 14 06:32 hadoop-3.2.1
-rw-rw-r--.  1 deploy deploy  265 Jun 14 06:46 TestDFSIO_results.log
drwxr-xr-x. 14 deploy deploy  224 Jun 14 09:46 spark-3.0.0-preview2-bin-hadoop3.2
-rw-------.  1 deploy deploy 8534 Jun 14 12:31 nohup.out
drwxrwxr-x. 13 deploy deploy  266 Jun 14 20:16 kibana-7.7.1-linux-x86_64
lrwxrwxrwx.  1 deploy deploy   19 Jun 14 20:19 elasticsearch-default -> elasticsearch-7.7.1
lrwxrwxrwx.  1 deploy deploy   25 Jun 14 20:19 kibana-default -> kibana-7.7.1-linux-x86_64
drwxr-xr-x. 10 deploy deploy  167 Jun 15 05:05 elasticsearch-7.7.1
-rw-rw-r--.  1 deploy deploy 1047 Jun 15 23:17 kibana.log
drwxrwxr-x.  6 deploy deploy  170 Jun 25 12:30 hbase-2.2.5
lrwxrwxrwx.  1 deploy deploy   11 Jun 25 12:31 hbase-default -> hbase-2.2.5

4. HBase configuration files

1) Changes to hbase-env.sh (setting HBASE_MANAGES_ZK=false makes HBase use the external ZooKeeper ensemble started in step 1 instead of managing its own):

export JAVA_HOME=/opt/module/jdk-default/
export HBASE_MANAGES_ZK=false

2) Changes to hbase-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <!--
    The two properties below are carried over from the default hbase-site.xml.
    `hbase.tmp.dir` is overridden from its default of `/tmp` because many systems
    clean `/tmp` on a regular basis. Setting
    `hbase.unsafe.stream.capability.enforce` to `false` relaxes HBase's
    stream-capability check on the underlying filesystem.

    See also https://hbase.apache.org/book.html#standalone_dist
  -->
  <property>
    <name>hbase.tmp.dir</name>
    <value>./tmp</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop102:9000/hbase</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <!-- New in 0.98 and later; earlier versions had no .port property and the master defaulted to port 60000 -->
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop104:2181,hadoop105:2181,hadoop106:2181</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/module/zookeeper-default/zkData</value>
  </property>

</configuration>
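
Note that hbase.rootdir above points at a single NameNode (hadoop102:9000). On an HA cluster it must instead match the nameservice that HDFS itself advertises, otherwise the master can fail exactly as described in section 8. A quick way to see what the Hadoop configuration actually uses:

hdfs getconf -confKey fs.defaultFS   # on an HA cluster this prints a nameservice URI, not host:port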

3) regionservers (one host per line; start-hbase.sh starts an HRegionServer on each):

hadoop102
hadoop103
hadoop104
hadoop105
hadoop106

4) Symlink the Hadoop configuration files into HBase (this is how HBase picks up the HDFS client settings, including the HA nameservice):

ln -s /opt/module/hadoop-default/etc/hadoop/core-site.xml /opt/module/hbase-default/conf/core-site.xml
ln -s /opt/module/hadoop-default/etc/hadoop/hdfs-site.xml /opt/module/hbase-default/conf/hdfs-site.xml

5) Copy the htrace-core4-4.2.0-incubating.jar package into HBase's top-level lib directory (commonly needed so startup does not fail with a NoClassDefFoundError for org.apache.htrace.core classes):

cp /opt/module/hbase-default/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar   /opt/module/hbase-default/lib

6) As root, edit /etc/profile and add the environment variables:

#HBASE_HOME
export HBASE_HOME=/opt/module/hbase-default
export PATH=$PATH:$HBASE_HOME/bin

After editing, distribute the file to all nodes:
xsync /etc/profile
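
The new variables only take effect for fresh logins; on each node, either log in again or reload the profile:

source /etc/profile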

5. Distribute HBase to the other nodes

[deploy@hadoop102 module]$ xsync hbase-2.2.5/
[deploy@hadoop102 module]$ xsync hbase-default
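
Like zk.sh and xcall.sh, xsync is a homegrown helper. A rough sketch, under the assumption that it simply rsyncs its argument to the same path on every other node:

#!/bin/bash
# xsync -- copy the given file/directory to the same location on all peer nodes
pdir=$(cd -P "$(dirname "$1")" && pwd)   # absolute parent directory
fname=$(basename "$1")
for host in hadoop103 hadoop104 hadoop105 hadoop106; do
    rsync -av "$pdir/$fname" "$host:$pdir/"
done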

6. Starting the HBase services (HMaster, HRegionServer)

Start method 1 (recommended when starting daemons individually):

[deploy@hadoop102 hbase]$ hbase-daemon.sh start master
[deploy@hadoop102 hbase]$ hbase-daemon.sh start regionserver

Start method 2 (recommended for normal use):

[deploy@hadoop102 conf]$ start-hbase.sh
[deploy@hadoop102 conf]$ stop-hbase.sh
Check the processes:
[deploy@hadoop102 conf]$ xcall.sh jps
--------- hadoop102 ----------
5264 DataNode
5763 DFSZKFailoverController
7045 NameNode
36357 Jps
5542 JournalNode
32824 HRegionServer
32601 HMaster
6187 ResourceManager
6351 NodeManager
--------- hadoop103 ----------
4562 DFSZKFailoverController
4453 JournalNode
4247 NameNode
4775 NodeManager
17067 HRegionServer
18812 Jps
4351 DataNode
--------- hadoop104 ----------
4209 DataNode
14545 HRegionServer
16194 Jps
4420 NodeManager
4311 JournalNode
13483 QuorumPeerMain
--------- hadoop105 ----------
4210 DataNode
4327 NodeManager
12839 QuorumPeerMain
15447 Jps
13882 HRegionServer
--------- hadoop106 ----------
13973 HRegionServer
15493 Jps
4155 DataNode
4269 NodeManager

Visit the URL http://hadoop102:16010; when everything is healthy, the HBase Master status page comes up.
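
The same check can be done from the command line; piping a command into the HBase shell avoids the interactive prompt:

echo "status" | hbase shell   # reports active/backup masters and live/dead region servers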

7. Configure a standby HMaster (add the backup-masters config file)

[deploy@hadoop102 conf]$ pwd
/opt/module/hbase-default/conf
[deploy@hadoop102 conf]$ vim backup-masters
[deploy@hadoop102 conf]$ cat backup-masters 
hadoop103

Restart the HBase services; start-hbase.sh reads conf/backup-masters and starts a standby HMaster on every host listed there. Checking the processes again shows hadoop103 now also runs an HMaster:

[deploy@hadoop102 conf]$ xcall.sh jps
--------- hadoop102 ----------
5264 DataNode
5763 DFSZKFailoverController
38163 Jps
7045 NameNode
5542 JournalNode
37848 HRegionServer
6187 ResourceManager
37630 HMaster
6351 NodeManager
--------- hadoop103 ----------
4562 DFSZKFailoverController
4453 JournalNode
19622 Jps
4247 NameNode
4775 NodeManager
19501 HMaster
19326 HRegionServer
4351 DataNode
--------- hadoop104 ----------
4209 DataNode
4420 NodeManager
16630 HRegionServer
4311 JournalNode
13483 QuorumPeerMain
16843 Jps
--------- hadoop105 ----------
15872 HRegionServer
4210 DataNode
4327 NodeManager
12839 QuorumPeerMain
16095 Jps
--------- hadoop106 ----------
16131 Jps
4155 DataNode
4269 NodeManager
15919 HRegionServer
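
To confirm the standby really takes over, a simple failover test (reusing the hbase-daemon.sh commands from section 6) is to stop the active master on hadoop102 and watch http://hadoop103:16010:

hbase-daemon.sh stop master    # on hadoop102; the standby HMaster on hadoop103 should become active
hbase-daemon.sh start master   # bring hadoop102 back, where it rejoins as the new standby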

8. Troubleshooting

A. 2018-05-28 18:19:14,394 FATAL [hadoop001:16000.activeMasterManager] master.HMaster: Failed to become active master

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby

I installed the HBase cluster on nodes hadoop102 through hadoop106, with hadoop102 and hadoop103 as HMasters and hadoop102 through hadoop106 as HRegionServers.

After starting HBase, the HMaster and HRegionServer processes on hadoop103 started normally, as did the HRegionServers on the remaining nodes, but the HMaster process on hadoop102 did not come up. The HBASE_HOME/logs/hbase-hadoop-master-hadoop102.log file on hadoop102 showed the following error:

2018-05-28 18:19:14,394 FATAL [hadoop001:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1727)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1352)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4174)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
        ......

The Hadoop HA cluster runs on the same hadoop102-hadoop106 nodes. Checking the NameNode states showed hadoop102 in standby and hadoop103 in active. I then looked at the HBase cluster's hbase-site.xml and found:

<property>
     <name>hbase.rootdir</name>
     <value>hdfs://hadoop102:9000/hbase</value>
</property>


So hbase.rootdir was pinned to hadoop102, even though the Hadoop cluster is an HA cluster whose core-site.xml configures the nameservice:

<property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster/</value>
</property>

Because hadoop102 was in standby state at the time, the files under hbase.rootdir could not be read through hadoop102, which caused the exception.

The fix:

  • First, modify hbase-site.xml on every node of the HBase cluster so that hbase.rootdir points at the HA nameservice instead of a single NameNode (the nameservice must match fs.defaultFS from core-site.xml; the /hbase directory itself does not need to be created, HBase generates it automatically):

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
    </property>

  • Kill all HBase processes and restart HBase; a minimal sketch of the sequence follows below.
  • Problem solved!
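
A minimal sketch of that recovery sequence (assuming the same helper scripts used throughout this guide):

xsync /opt/module/hbase-default/conf/hbase-site.xml   # push the corrected config to every node
stop-hbase.sh                                         # then kill -9 any HMaster/HRegionServer that jps still shows
start-hbase.sh                                        # bring the cluster back up and re-check the web UI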
