Installing HBase in Pseudo-Distributed Mode

HBase has three run modes. Standalone mode needs almost no changes to the installation files and is trivial to configure. Running in distributed mode, however, requires Hadoop. Before configuring any HBase files, the following prerequisites must also be in place:
1. Java: install Java 1.6.x or later, preferably downloaded from Sun's official site. There are plenty of JDK installation guides online. On Ubuntu, Java can be installed with:

sudo apt-get install sun-java6-jdk

The detailed installation steps will not be repeated here; a quick way to verify the install is sketched below.
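
A minimal sanity check (the JDK path below is only an example; point it at wherever your JDK actually lives), verifying both the install and the JAVA_HOME value that HBase will need later:

java -version                                   # should report version 1.6 or newer
export JAVA_HOME=/usr/local/java/jdk1.7.0_76    # example path; adjust to your install
$JAVA_HOME/bin/java -version                    # confirm JAVA_HOME points at the same JDK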
2. Hadoop: because HBase is built on top of a file storage system, installing Hadoop is mandatory in distributed mode. In standalone mode this requirement can be skipped.
Note: when installing Hadoop, pay attention to the HBase version; that is, the Hadoop and HBase releases must be compatible, and a mismatch can easily undermine the stability of the HBase system. The Hadoop JAR files HBase was built against can be found in HBase's lib directory, and the bundled version is a known-stable match by default. To use a different Hadoop version, copy the hadoop-×.×.×-core.jar and hadoop-×.×.×-test.jar files from your Hadoop installation directory into HBase's lib folder, replacing the bundled Hadoop JARs.
3. SSH: note that SSH must be installed, and you must be able to SSH into every node in the system (including the local node), because the Hadoop scripts use SSH to manage the remote Hadoop and HBase daemons. A minimal passwordless setup is sketched right below.
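
A minimal passwordless-SSH setup for the local node, assuming OpenSSH is already installed:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa           # key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorize the key on the local node
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                      # should now log in without a password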
The rest of this post covers only installation and configuration in pseudo-distributed mode. Pseudo-distributed mode is a distributed mode that runs on a single node (one machine): all of the HBase daemons run on that same node. Because distributed mode depends on a distributed file system, HDFS must already be up and running at this point. You can verify the HDFS installation by executing Put and Get operations against it, as sketched below. There are plenty of tutorials online for installing an HDFS cluster, so I will not repeat them here.
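
A simple Put/Get round trip to verify HDFS, assuming the hdfs command is on your PATH, might look like this:

echo "hello hdfs" > /tmp/probe.txt
hdfs dfs -put /tmp/probe.txt /probe.txt          # Put: copy a local file into HDFS
hdfs dfs -cat /probe.txt                         # read it back; should print "hello hdfs"
hdfs dfs -get /probe.txt /tmp/probe-copy.txt     # Get: copy it back out of HDFS
hdfs dfs -rm /probe.txt                          # clean up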
With everything in place, we can configure HBase (that is, edit hbase-site.xml). Setting the hbase.rootdir parameter tells HBase exactly where to keep its data, which is what lets HBase run on top of Hadoop; the full configuration is shown in the code below.
First, a word about versions: Hadoop is hadoop-2.7.1 and HBase is hbase-1.2.4, both fairly recent releases.
The Hadoop configuration:
The core-site.xml file under */hadoop-2.7.1/etc/hadoop:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

The hbase-env.sh configuration in HBase's */hbase-1.2.4/conf folder:

# Location of my JDK
export JAVA_HOME=/usr/local/java/jdk1.7.0_76
# Location of my HBase conf directory
export HBASE_CLASSPATH=/home/dtw/hbase-1.2.4/conf
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
# Let HBase manage its own bundled ZooKeeper instance
export HBASE_MANAGES_ZK=true

The hbase-site.xml configuration in HBase's */hbase-1.2.4/conf folder:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000</value>
        <description>The directory shared by HBase's region servers, i.e. where HBase stores its data</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>The replica count for HLog and HFile data; it must not exceed the
        number of HDFS DataNodes. In pseudo-distributed mode there is only one DataNode,
        so this parameter should be set to 1</description>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
        <description>Run HBase in (pseudo-)distributed mode rather than standalone mode</description>
    </property>
</configuration>
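
One note on hbase.rootdir: the value above points at the HDFS root, which works, but many setups append an explicit directory (for example /hbase) so that HBase's files do not mingle with everything else in HDFS. Such a variant of the property would look like this:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>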

For HDFS to provide HBase's storage, the host name and port in the hbase.rootdir value must be identical to the fs.default.name setting in Hadoop's core-site.xml. (In Hadoop 2.x, fs.default.name is deprecated in favor of fs.defaultFS, though both still work.) If they disagree, you will hit the error I covered in an earlier post: "master running as process ×××. Stop it first".
Next up is running HBase.
Because pseudo-distributed mode runs on top of HDFS, HDFS must be started before HBase. Begin by formatting the NameNode, which is only needed before the first start (as the warning in the output notes, hdfs namenode -format is the non-deprecated form of the command):

dtw@dtw:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

16/11/25 21:29:03 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = dtw/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /home/dtw/hadoop-2.7.1/etc/hadoop:/home/dtw/hadoop-2.7.1/share/hadoop/common/lib/commons-math3-3.1.1.jar: ... [several hundred bundled JARs omitted for brevity] ... :/home/dtw/hadoop-2.7.1/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
16/11/25 21:29:03 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/11/25 21:29:03 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1176e605-b18f-4336-8684-7722be1ffc66
16/11/25 21:29:05 INFO namenode.FSNamesystem: No KeyProvider found.
16/11/25 21:29:05 INFO namenode.FSNamesystem: fsLock is fair:true
16/11/25 21:29:05 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/11/25 21:29:05 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/11/25 21:29:05 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/11/25 21:29:05 INFO blockmanagement.BlockManager: The block deletion will start around 2016 十一月 25 21:29:05
16/11/25 21:29:05 INFO util.GSet: Computing capacity for map BlocksMap
16/11/25 21:29:05 INFO util.GSet: VM type       = 64-bit
16/11/25 21:29:05 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/11/25 21:29:05 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/11/25 21:29:05 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/11/25 21:29:05 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/11/25 21:29:05 INFO blockmanagement.BlockManager: maxReplication             = 512
16/11/25 21:29:05 INFO blockmanagement.BlockManager: minReplication             = 1
16/11/25 21:29:05 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/11/25 21:29:05 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/11/25 21:29:05 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/11/25 21:29:05 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/11/25 21:29:05 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/11/25 21:29:05 INFO namenode.FSNamesystem: fsOwner             = dtw (auth:SIMPLE)
16/11/25 21:29:05 INFO namenode.FSNamesystem: supergroup          = supergroup
16/11/25 21:29:05 INFO namenode.FSNamesystem: isPermissionEnabled = false
16/11/25 21:29:05 INFO namenode.FSNamesystem: HA Enabled: false
16/11/25 21:29:05 INFO namenode.FSNamesystem: Append Enabled: true
16/11/25 21:29:06 INFO util.GSet: Computing capacity for map INodeMap
16/11/25 21:29:06 INFO util.GSet: VM type       = 64-bit
16/11/25 21:29:06 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/11/25 21:29:06 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/11/25 21:29:06 INFO namenode.FSDirectory: ACLs enabled? false
16/11/25 21:29:06 INFO namenode.FSDirectory: XAttrs enabled? true
16/11/25 21:29:06 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/11/25 21:29:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/11/25 21:29:06 INFO util.GSet: Computing capacity for map cachedBlocks
16/11/25 21:29:06 INFO util.GSet: VM type       = 64-bit
16/11/25 21:29:06 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/11/25 21:29:06 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/11/25 21:29:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/11/25 21:29:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/11/25 21:29:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/11/25 21:29:06 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/11/25 21:29:06 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/11/25 21:29:06 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/11/25 21:29:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/11/25 21:29:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/11/25 21:29:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/11/25 21:29:06 INFO util.GSet: VM type       = 64-bit
16/11/25 21:29:06 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/11/25 21:29:06 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /tmp/hadoop-dtw/dfs/name ? (Y or N) y
16/11/25 21:29:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-387211364-127.0.1.1-1480080548412
16/11/25 21:29:08 INFO common.Storage: Storage directory /tmp/hadoop-dtw/dfs/name has been successfully formatted.
16/11/25 21:29:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/11/25 21:29:08 INFO util.ExitUtil: Exiting with status 0
16/11/25 21:29:08 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dtw/127.0.1.1
************************************************************/

Then start HDFS:

dtw@dtw:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
dtw@localhost's password: 
localhost: starting namenode, logging to /home/dtw/hadoop-2.7.1/logs/hadoop-dtw-namenode-dtw.out
dtw@localhost's password: 
localhost: starting datanode, logging to /home/dtw/hadoop-2.7.1/logs/hadoop-dtw-datanode-dtw.out
Starting secondary namenodes [0.0.0.0]
dtw@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /home/dtw/hadoop-2.7.1/logs/hadoop-dtw-secondarynamenode-dtw.out
starting yarn daemons
starting resourcemanager, logging to /home/dtw/hadoop-2.7.1/logs/yarn-dtw-resourcemanager-dtw.out
dtw@localhost's password: 
localhost: starting nodemanager, logging to /home/dtw/hadoop-2.7.1/logs/yarn-dtw-nodemanager-dtw.out
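
The repeated password prompts above are what you see when passwordless SSH has not been configured; with the key-based login sketched in the prerequisites they disappear. To confirm that HDFS came up properly, one quick check (port 50070 is the Hadoop 2.x default for the NameNode web UI):

hdfs dfsadmin -report        # should list one live DataNode in pseudo-distributed mode
# The NameNode web UI is another option: http://localhost:50070/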

Start HBase:

dtw@dtw:~$ start-hbase.sh 
dtw@localhost's password: 
localhost: starting zookeeper, logging to /home/dtw/hbase-1.2.4/bin/../logs/hbase-dtw-zookeeper-dtw.out
starting master, logging to /home/dtw/hbase-1.2.4/bin/../logs/hbase-dtw-master-dtw.out
starting regionserver, logging to /home/dtw/hbase-1.2.4/bin/../logs/hbase-dtw-1-regionserver-dtw.out
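
As a quick sanity check, and assuming hbase.rootdir points at the HDFS root as configured above, HBase's working directories should now be visible in HDFS, and the master web UI (HBase 1.x defaults to port 16010) should respond:

hdfs dfs -ls /               # after first start, look for HBase's data and WALs directories
# Master web UI: http://localhost:16010/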

You can inspect the Java processes now running with jps, as shown below:

dtw@dtw:~$ jps
25519 NodeManager
24864 NameNode
30479 HRegionServer
31769 Jps
25219 SecondaryNameNode
30344 HMaster
30277 HQuorumPeer
25385 ResourceManager
25009 DataNode

HBase ships with a very convenient interface known as the HBase Shell. It exposes most HBase commands: through it you can create, delete, and alter tables, put data into tables, list table metadata, and more.
Once HBase has started, enter the HBase Shell with the following command:

hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016

hbase(main):001:0> 

Inside the shell, run list or status as a quick check. If either fails, the cause may be permissions on the installation path; a blunt fix is the command below (a narrower chown to the user running HBase is safer):

chmod -R 777 hbase-1.2.4

A successful session looks like this:

hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 2.0000 average load

hbase(main):002:0> 

or

hbase(main):002:0> list
TABLE                                                                           
0 row(s) in 0.0810 seconds

=> []
hbase(main):003:0> 
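
For a slightly deeper smoke test, the following shell commands create a table, write and read a cell, and clean up; the table name t1 and column family cf are arbitrary examples:

create 't1', 'cf'                     # create a table with one column family
put 't1', 'row1', 'cf:a', 'value1'    # write a single cell
get 't1', 'row1'                      # read the row back
scan 't1'                             # scan the whole (tiny) table
disable 't1'                          # a table must be disabled before it can be dropped
drop 't1'                             # clean up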

If you see output like the above, HBase is installed and running correctly. That wraps up everything on configuring HBase in pseudo-distributed mode; I hope it helps those just starting out with HBase!
