Developing HBase Applications with MyEclipse

When a third-party client accesses HBase, it must first contact ZooKeeper, because HBase keeps its critical metadata there. The ZooKeeper ensemble is specified in $HBASE_HOME/conf/hbase-site.xml, so the location of the HBase configuration directory ($HBASE_HOME/conf) must be placed on the classpath.
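If hbase-site.xml cannot be put on the classpath, the same information can be supplied in code. The following is a minimal sketch, not part of the original project (ZkConfigSketch is a hypothetical class; "localhost" and "2181" are assumptions for a pseudo-distributed setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ZkConfigSketch {
    public static Configuration create() {
        // Set the ZooKeeper connection directly on the configuration object
        // instead of relying on hbase-site.xml being on the classpath.
        // Adjust the quorum host and client port to your deployment.
        Configuration cfg = HBaseConfiguration.create();
        cfg.set("hbase.zookeeper.quorum", "localhost");
        cfg.set("hbase.zookeeper.property.clientPort", "2181");
        return cfg;
    }
}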
When programming against the HBase client API, the JAR files listed below are required. In addition, JARs such as commons-configuration and slf4j are frequently needed. For hbase-1.2.4, the required JARs are:

[Screenshots: the list of required JAR files]

These JARs come mainly from two places:
1. the ×/hadoop-2.7.1/share/hadoop/common and ×/hadoop-2.7.1/share/hadoop/common/lib directories;
2. the ×/hbase-1.2.4/lib directory.
If the program later reports a missing class, identify the JAR containing that class from the error message and add it to the IDE through the build path.
The following walks through the concrete configuration using a Java project named HBaseTestCase as an example.
(1) Add the JAR files
There are two ways to add the JARs. The simpler one: right-click the HBase project, choose Build Path -> Configure Build Path, select the Libraries tab in the dialog, click the Add External JARs button, navigate to the $HBASE_HOME/lib directory, and select the JARs listed above, as shown in the figure:
[Screenshot: adding the external JARs]
(2) Add the hbase-site.xml configuration file
Create a Conf folder in the project directory and copy hbase-site.xml from $HBASE_HOME/conf/ into it. Then right-click the project, choose Properties -> Java Build Path -> Libraries -> Add Class Folder, and check the Conf folder to add it to the classpath.
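To confirm that the Conf folder is actually on the classpath, a quick sanity check is to print a property that your hbase-site.xml defines. A minimal sketch (ConfCheck is a hypothetical class; the two property keys are standard HBase configuration names):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfCheck {
    public static void main(String[] args) {
        // HBaseConfiguration.create() loads hbase-site.xml from the classpath;
        // if the Conf folder was added correctly, these print the values from
        // your file rather than null/built-in defaults.
        Configuration cfg = HBaseConfiguration.create();
        System.out.println("hbase.rootdir = " + cfg.get("hbase.rootdir"));
        System.out.println("hbase.zookeeper.quorum = " + cfg.get("hbase.zookeeper.quorum"));
    }
}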
From this point on you can call the HBase API just as in any ordinary Java program, and you can still use the HBase shell to interact with what the program does.
The preparatory steps are covered in an earlier blog post of mine, so I will not repeat them here: first start HDFS, then start the HBase service. Blog post: Installing HBase in pseudo-distributed mode.
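Once both services are up, the HBase shell can be opened at any time to inspect the tables the program creates. These are standard shell commands (the table name matches the example below):

hbase shell
list                      # list all tables
scan 'hbase_tb'           # print every cell in the example table
get 'hbase_tb', 'row1'    # fetch a single row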
Below is a simple walkthrough of the HBaseTestCase Java project. It performs the same basic table operations as the HBase shell, except that here the corresponding functionality is implemented by calling the HBase API from Java code.
First, the system environment:
OS: Ubuntu 16.04
Hadoop: hadoop-2.7.1
HBase: hbase-1.2.4
MyEclipse:

MyEclipse Enterprise Workbench

Version: 2015 Stable 2.0
Build id: 13.0.0-20150518

Next, the structure of the project, as shown in the figure:
[Screenshot: project structure]
The complete source code of the program follows:

package cn.edn.ruc.clodcomputing.book.chapter12;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTestCase {
    // Loads hbase-site.xml from the classpath (the Conf folder added above).
    static Configuration cfg = HBaseConfiguration.create();

    // Create a table with a single column family; exit if it already exists.
    public static void create(String tablename, String columnFamily)
            throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            System.out.println("table Exists");
            System.exit(0);
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tablename);
            tableDesc.addFamily(new HColumnDescriptor(columnFamily));
            admin.createTable(tableDesc);
            System.out.println("create table success");
        }
    }

    // Insert one cell: (row, columnFamily:column) -> data.
    public static void put(String tablename, String row, String columnFamily,
            String column, String data) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Put p1 = new Put(Bytes.toBytes(row));
        p1.add(Bytes.toBytes(columnFamily), Bytes.toBytes(column), Bytes.toBytes(data));
        table.put(p1);
        System.out.println("put '" + row + "','" + columnFamily + ":" + column + "','" + data + "'");
    }

    // Read back a single row by its row key.
    public static void get(String tablename, String row) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Get g = new Get(Bytes.toBytes(row));
        Result result = table.get(g);
        System.out.println("Get: " + result);
    }

    // Scan the whole table and print every row.
    public static void scan(String tablename) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Scan s = new Scan();
        ResultScanner rs = table.getScanner(s);
        for (Result r : rs) {
            System.out.println("Scan: " + r);
        }
    }

    // Disable and then delete the table; HBase requires a table to be
    // disabled before it can be deleted.
    public static boolean delete(String tablename)
            throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            try {
                admin.disableTable(tablename);
                admin.deleteTable(tablename);
            } catch (Exception ex) {
                ex.printStackTrace();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String tablename = "hbase_tb";
        String columnFamily = "cf";
        try {
            // Exercise the full life cycle: create, put, get, scan, delete.
            HBaseTestCase.create(tablename, columnFamily);
            HBaseTestCase.put(tablename, "row1", columnFamily, "cl1", "data");
            HBaseTestCase.get(tablename, "row1");
            HBaseTestCase.scan(tablename);
            if (HBaseTestCase.delete(tablename)) {
                System.out.println("Delete table:" + tablename + "success!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The class consists of five static methods plus a main method.
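A side note on the API: in the HBase 1.x client, HBaseAdmin and HTable are deprecated in favor of Connection/ConnectionFactory, Admin, and Table. The code above still runs on hbase-1.2.4, but a sketch of the newer style for the put step (ModernPutExample is a hypothetical class; the table and cell names match the example above) looks like this:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ModernPutExample {
    public static void main(String[] args) throws IOException {
        Configuration cfg = HBaseConfiguration.create();
        // try-with-resources closes the connection and table, which the
        // HTable-based code above never does explicitly.
        try (Connection conn = ConnectionFactory.createConnection(cfg);
             Table table = conn.getTable(TableName.valueOf("hbase_tb"))) {
            Put p = new Put(Bytes.toBytes("row1"));
            // addColumn replaces the deprecated Put.add(family, qualifier, value).
            p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("cl1"), Bytes.toBytes("data"));
            table.put(p);
        }
    }
}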
The program's execution output is shown below:

2016-11-30 05:29:52,844 [org.apache.hadoop.util.Shell]-[DEBUG] setsid exited with exit code 0
2016-11-30 05:29:53,208 [org.apache.hadoop.security.Groups]-[DEBUG]  Creating new Groups object
2016-11-30 05:29:53,312 [org.apache.hadoop.util.NativeCodeLoader]-[DEBUG] Trying to load the custom-built native-hadoop library...
2016-11-30 05:29:53,314 [org.apache.hadoop.util.NativeCodeLoader]-[DEBUG] Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
2016-11-30 05:29:53,314 [org.apache.hadoop.util.NativeCodeLoader]-[DEBUG] java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-11-30 05:29:53,315 [org.apache.hadoop.util.NativeCodeLoader]-[WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-30 05:29:53,346 [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback]-[DEBUG] Falling back to shell based
2016-11-30 05:29:53,359 [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback]-[DEBUG] Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
2016-11-30 05:29:53,692 [org.apache.hadoop.security.Groups]-[DEBUG] Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2016-11-30 05:29:54,082 [org.apache.hadoop.metrics2.lib.MutableMetricsFactory]-[DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of successful kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
2016-11-30 05:29:54,103 [org.apache.hadoop.metrics2.lib.MutableMetricsFactory]-[DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of failed kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
2016-11-30 05:29:54,104 [org.apache.hadoop.metrics2.lib.MutableMetricsFactory]-[DEBUG] field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[GetGroups], always=false, type=DEFAULT, sampleName=Ops)
2016-11-30 05:29:54,107 [org.apache.hadoop.metrics2.impl.MetricsSystemImpl]-[DEBUG] UgiMetrics, User and group related metrics
2016-11-30 05:29:54,387 [org.apache.hadoop.security.authentication.util.KerberosName]-[DEBUG] Kerberos krb5 configuration not found, setting default realm to empty
2016-11-30 05:29:54,396 [org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule]-[DEBUG] hadoop login
2016-11-30 05:29:54,397 [org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule]-[DEBUG] hadoop login commit
2016-11-30 05:29:54,415 [org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule]-[DEBUG] using local user:UnixPrincipal: dtw
2016-11-30 05:29:54,416 [org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule]-[DEBUG] Using user: "UnixPrincipal: dtw" with name dtw
2016-11-30 05:29:54,417 [org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule]-[DEBUG] User entry: "dtw"
2016-11-30 05:29:54,418 [org.apache.hadoop.security.UserGroupInformation]-[DEBUG] UGI loginUser:dtw (auth:SIMPLE)
2016-11-30 05:29:54,581 [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]-[INFO] Process identifier=hconnection-0x6a871b75 connecting to ZooKeeper ensemble=localhost:2181
2016-11-30 05:29:54,595 [org.apache.zookeeper.Environment]-[INFO] Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-11-30 05:29:54,595 [org.apache.zookeeper.Environment]-[INFO] Client environment:host.name=dtw
2016-11-30 05:29:54,596 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.version=1.7.0_45
2016-11-30 05:29:54,596 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.vendor=Oracle Corporation
2016-11-30 05:29:54,596 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.home=/home/dtw/MyEclipse2015/binary/com.sun.java.jdk7.linux.x86_64_1.7.0.u45/jre
2016-11-30 05:29:54,597 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.class.path=/home/dtw/Workspaces/MyEclipse 2015/HBase/bin:/home/dtw/hbase-1.2.4/lib/zookeeper-3.4.6.jar:/home/dtw/hbase-1.2.4/lib/log4j-1.2.17.jar:/home/dtw/hbase-1.2.4/lib/commons-logging-1.2.jar:/home/dtw/hbase-1.2.4/lib/commons-lang-2.6.jar:/home/dtw/Workspaces/MyEclipse 2015/HBase/Conf:/home/dtw/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar:/home/dtw/hbase-1.2.4/lib/hbase-common-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-client-1.2.4.jar:/home/dtw/hadoop-2.7.1/share/hadoop/tools/lib/guava-11.0.2.jar:/home/dtw/hbase-1.2.4/lib/commons-collections-3.2.2.jar:/home/dtw/hbase-1.2.4/lib/protobuf-java-2.5.0.jar:/home/dtw/hadoop-2.7.1/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/dtw/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/dtw/hadoop-2.7.1/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/dtw/hadoop-2.7.1/share/hadoop/common/lib/hadoop-auth-2.7.1.jar:/home/dtw/hbase-1.2.4/lib/hbase-protocol-1.2.4.jar:/home/dtw/hadoop-2.7.1/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/dtw/hbase-1.2.4/lib/hbase-common-1.2.4-tests.jar:/home/dtw/hbase-1.2.4/lib/hbase-server-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/netty-all-4.0.23.Final.jar:/home/dtw/hbase-1.2.4/lib/activation-1.1.jar:/home/dtw/hbase-1.2.4/lib/aopalliance-1.0.jar:/home/dtw/hbase-1.2.4/lib/apacheds-i18n-2.0.0-M15.jar:/home/dtw/hbase-1.2.4/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/dtw/hbase-1.2.4/lib/api-asn1-api-1.0.0-M20.jar:/home/dtw/hbase-1.2.4/lib/api-util-1.0.0-M20.jar:/home/dtw/hbase-1.2.4/lib/asm-3.1.jar:/home/dtw/hbase-1.2.4/lib/avro-1.7.4.jar:/home/dtw/hbase-1.2.4/lib/commons-beanutils-1.7.0.jar:/home/dtw/hbase-1.2.4/lib/commons-beanutils-core-1.8.0.jar:/home/dtw/hbase-1.2.4/lib/commons-cli-1.2.jar:/home/dtw/hbase-1.2.4/lib/commons-codec-1.9.jar:/home/dtw/hbase-1.2.4/lib/commons-compress-1.4.1.jar:/home/dtw/hbase-1.2.4/lib/commons-configuration-1.6.jar:/home/dtw/hbase-1.2.4/lib/commons-daemon-1.0.13.jar:/home/dtw/hbase-1.2.4/lib/commons-digester-1.8.jar:/home/dtw/hbase-1.2.4/lib/commons-el-1.0.jar:/home/dtw/hbase-1.2.4/lib/commons-httpclient-3.1.jar:/home/dtw/hbase-1.2.4/lib/commons-io-2.4.jar:/home/dtw/hbase-1.2.4/lib/commons-math3-3.1.1.jar:/home/dtw/hbase-1.2.4/lib/commons-math-2.2.jar:/home/dtw/hbase-1.2.4/lib/commons-net-3.1.jar:/home/dtw/hbase-1.2.4/lib/disruptor-3.3.0.jar:/home/dtw/hbase-1.2.4/lib/findbugs-annotations-1.3.9-1.jar:/home/dtw/hbase-1.2.4/lib/guava-12.0.1.jar:/home/dtw/hbase-1.2.4/lib/guice-3.0.jar:/home/dtw/hbase-1.2.4/lib/guice-servlet-3.0.jar:/home/dtw/hbase-1.2.4/lib/hadoop-annotations-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-auth-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-client-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-common-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-hdfs-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-mapreduce-client-app-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-mapreduce-client-common-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-mapreduce-client-core-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-mapreduce-client-jobclient-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-mapreduce-client-shuffle-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-yarn-api-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-yarn-client-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-yarn-common-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hadoop-yarn-server-common-2.5.1.jar:/home/dtw/hbase-1.2.4/lib/hbase-annotations-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-annotations-1.2.4-tests.jar:/home/dtw/hba
se-1.2.4/lib/hbase-examples-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-external-blockcache-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-hadoop2-compat-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-hadoop-compat-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-it-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-it-1.2.4-tests.jar:/home/dtw/hbase-1.2.4/lib/hbase-prefix-tree-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-procedure-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-resource-bundle-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-rest-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-server-1.2.4-tests.jar:/home/dtw/hbase-1.2.4/lib/hbase-shell-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/hbase-thrift-1.2.4.jar:/home/dtw/hbase-1.2.4/lib/htrace-core-3.1.0-incubating.jar:/home/dtw/hbase-1.2.4/lib/httpclient-4.2.5.jar:/home/dtw/hbase-1.2.4/lib/httpcore-4.4.1.jar:/home/dtw/hbase-1.2.4/lib/jackson-core-asl-1.9.13.jar:/home/dtw/hbase-1.2.4/lib/jackson-jaxrs-1.9.13.jar:/home/dtw/hbase-1.2.4/lib/jackson-mapper-asl-1.9.13.jar:/home/dtw/hbase-1.2.4/lib/jackson-xc-1.9.13.jar:/home/dtw/hbase-1.2.4/lib/jamon-runtime-2.4.1.jar:/home/dtw/hbase-1.2.4/lib/jasper-compiler-5.5.23.jar:/home/dtw/hbase-1.2.4/lib/jasper-runtime-5.5.23.jar:/home/dtw/hbase-1.2.4/lib/javax.inject-1.jar:/home/dtw/hbase-1.2.4/lib/java-xmlbuilder-0.4.jar:/home/dtw/hbase-1.2.4/lib/jaxb-api-2.2.2.jar:/home/dtw/hbase-1.2.4/lib/jaxb-impl-2.2.3-1.jar:/home/dtw/hbase-1.2.4/lib/jcodings-1.0.8.jar:/home/dtw/hbase-1.2.4/lib/jersey-client-1.9.jar:/home/dtw/hbase-1.2.4/lib/jersey-core-1.9.jar:/home/dtw/hbase-1.2.4/lib/jersey-guice-1.9.jar:/home/dtw/hbase-1.2.4/lib/jersey-json-1.9.jar:/home/dtw/hbase-1.2.4/lib/jersey-server-1.9.jar:/home/dtw/hbase-1.2.4/lib/jets3t-0.9.0.jar:/home/dtw/hbase-1.2.4/lib/jettison-1.3.3.jar:/home/dtw/hbase-1.2.4/lib/jetty-6.1.26.jar:/home/dtw/hbase-1.2.4/lib/jetty-sslengine-6.1.26.jar:/home/dtw/hbase-1.2.4/lib/jetty-util-6.1.26.jar:/home/dtw/hbase-1.2.4/lib/joni-2.1.2.jar:/home/dtw/hbase-1.2.4/lib/jruby-complete-1.6.8.jar:/home/dtw/hbase-1.2.4/lib/jsch-0.1.42.jar:/home/dtw/hbase-1.2.4/lib/jsp-2.1-6.1.14.jar:/home/dtw/hbase-1.2.4/lib/jsp-api-2.1-6.1.14.jar:/home/dtw/hbase-1.2.4/lib/junit-4.12.jar:/home/dtw/hbase-1.2.4/lib/leveldbjni-all-1.8.jar:/home/dtw/hbase-1.2.4/lib/libthrift-0.9.3.jar:/home/dtw/hbase-1.2.4/lib/metrics-core-2.2.0.jar:/home/dtw/hbase-1.2.4/lib/paranamer-2.3.jar:/home/dtw/hbase-1.2.4/lib/servlet-api-2.5.jar:/home/dtw/hbase-1.2.4/lib/servlet-api-2.5-6.1.14.jar:/home/dtw/hbase-1.2.4/lib/slf4j-api-1.7.7.jar:/home/dtw/hbase-1.2.4/lib/snappy-java-1.0.4.1.jar:/home/dtw/hbase-1.2.4/lib/spymemcached-2.11.6.jar:/home/dtw/hbase-1.2.4/lib/xmlenc-0.52.jar:/home/dtw/hbase-1.2.4/lib/xz-1.0.jar
2016-11-30 05:29:54,598 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-11-30 05:29:54,598 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.io.tmpdir=/tmp
2016-11-30 05:29:54,599 [org.apache.zookeeper.Environment]-[INFO] Client environment:java.compiler=<NA>
2016-11-30 05:29:54,599 [org.apache.zookeeper.Environment]-[INFO] Client environment:os.name=Linux
2016-11-30 05:29:54,599 [org.apache.zookeeper.Environment]-[INFO] Client environment:os.arch=amd64
2016-11-30 05:29:54,600 [org.apache.zookeeper.Environment]-[INFO] Client environment:os.version=4.4.0-47-generic
2016-11-30 05:29:54,600 [org.apache.zookeeper.Environment]-[INFO] Client environment:user.name=dtw
2016-11-30 05:29:54,600 [org.apache.zookeeper.Environment]-[INFO] Client environment:user.home=/home/dtw
2016-11-30 05:29:54,601 [org.apache.zookeeper.Environment]-[INFO] Client environment:user.dir=/home/dtw/Workspaces/MyEclipse 2015/HBase
2016-11-30 05:29:54,603 [org.apache.zookeeper.ZooKeeper]-[INFO] Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x6a871b750x0, quorum=localhost:2181, baseZNode=/hbase
2016-11-30 05:29:54,611 [org.apache.zookeeper.ClientCnxn]-[DEBUG] zookeeper.disableAutoWatchReset is false
2016-11-30 05:29:54,644 [org.apache.zookeeper.ClientCnxn$SendThread]-[INFO] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-30 05:29:54,662 [org.apache.zookeeper.ClientCnxn$SendThread]-[INFO] Socket connection established to localhost/127.0.0.1:2181, initiating session
2016-11-30 05:29:54,665 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Session establishment request sent on localhost/127.0.0.1:2181
2016-11-30 05:29:54,700 [org.apache.zookeeper.ClientCnxn$SendThread]-[INFO] Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158b1fe53f10006, negotiated timeout = 90000
2016-11-30 05:29:54,704 [org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]-[DEBUG] hconnection-0x6a871b750x0, quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-11-30 05:29:54,706 [org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006 connected
2016-11-30 05:29:54,721 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,42,0  request:: '/hbase/hbaseid,F  response:: s{17,17,1480454991127,1480454991127,0,0,0,0,67,0,17} 
2016-11-30 05:29:54,729 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,42,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a3136303030ffffffa7ffffff996cffffffcd37ffffffa617ffffff8f50425546a2430333131343564392d376363392d346534652d613333642d353932393864303834333134,s{17,17,1480454991127,1480454991127,0,0,0,0,67,0,17} 
2016-11-30 05:29:55,104 [org.apache.hadoop.hdfs.DFSClient$Conf]-[DEBUG] dfs.client.use.legacy.blockreader.local = false
2016-11-30 05:29:55,105 [org.apache.hadoop.hdfs.DFSClient$Conf]-[DEBUG] dfs.client.read.shortcircuit = false
2016-11-30 05:29:55,105 [org.apache.hadoop.hdfs.DFSClient$Conf]-[DEBUG] dfs.client.domain.socket.data.traffic = false
2016-11-30 05:29:55,105 [org.apache.hadoop.hdfs.DFSClient$Conf]-[DEBUG] dfs.domain.socket.path = 
2016-11-30 05:29:55,172 [org.apache.hadoop.io.retry.RetryUtils]-[DEBUG] multipleLinearRandomRetry = null
2016-11-30 05:29:55,229 [org.apache.hadoop.ipc.Server]-[DEBUG] rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@5b36cea3
2016-11-30 05:29:55,239 [org.apache.hadoop.ipc.ClientCache]-[DEBUG] getting client out of cache: org.apache.hadoop.ipc.Client@7a6b653f
2016-11-30 05:29:55,836 [org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory]-[DEBUG] Both short-circuit local reads and UNIX domain socket are disabled.
2016-11-30 05:29:55,931 [org.apache.hadoop.hbase.ipc.AbstractRpcClient]-[DEBUG] Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@82a5772, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-11-30 05:29:56,000 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:56,011 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:56,219 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 4,4  replyHeader:: 4,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:56,219 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:56,424 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 5,4  replyHeader:: 5,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:56,424 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:56,629 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 6,4  replyHeader:: 6,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:56,629 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:56,834 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 7,4  replyHeader:: 7,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:56,835 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:57,039 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 8,4  replyHeader:: 8,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:57,040 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:57,244 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 9,4  replyHeader:: 9,42,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:57,245 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:57,449 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 10,4  replyHeader:: 10,43,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:57,449 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:57,654 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 11,4  replyHeader:: 11,46,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:57,655 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:57,858 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 12,4  replyHeader:: 12,47,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:57,859 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:58,063 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 13,4  replyHeader:: 13,47,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:58,064 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:58,267 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 14,4  replyHeader:: 14,47,-101  request:: '/hbase/meta-region-server,F  response::  
2016-11-30 05:29:58,268 [org.apache.hadoop.hbase.zookeeper.ZKUtil]-[DEBUG] hconnection-0x6a871b75-0x158b1fe53f10006, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because node does not exist (not an error)
2016-11-30 05:29:58,474 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 15,4  replyHeader:: 15,50,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a313632303139ffffffee1effffffb9472d2a7050425546afa364747710ffffffc97e18ffffffbcffffff81fffffff9ffffff8fffffff8b2b100183,s{49,49,1480454998413,1480454998413,0,0,0,0,56,0,49} 
2016-11-30 05:29:58,501 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 16,8  replyHeader:: 16,51,0  request:: '/hbase,F  response:: v{'meta-region-server,'online-snapshot,'replication,'recovering-regions,'splitWAL,'rs,'backup-masters,'flush-table-proc,'region-in-transition,'draining,'table,'running,'table-lock,'master,'hbaseid} 
2016-11-30 05:29:58,894 [org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection]-[DEBUG] Use SIMPLE authentication for service ClientService, sasl=false
2016-11-30 05:29:58,946 [org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection]-[DEBUG] Connecting to dtw/127.0.1.1:16201
2016-11-30 05:29:59,325 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 17,3  replyHeader:: 17,54,0  request:: '/hbase,F  response:: s{2,2,1480454985936,1480454985936,0,15,0,0,0,15,49} 
2016-11-30 05:29:59,335 [org.apache.zookeeper.ClientCnxn$SendThread]-[DEBUG] Reading reply sessionid:0x158b1fe53f10006, packet:: clientPath:null serverPath:null finished:false header:: 18,4  replyHeader:: 18,54,0  request:: '/hbase/master,F  response:: #ffffffff000146d61737465723a3136303030ffffffaf4b6affffffb7ffffff98ffffffd7ffffffed4a50425546afa364747710ffffff807d18ffffffc9ffffff80fffffff9ffffff8fffffff8b2b10018ffffff8a7d,s{13,13,1480454988100,1480454988100,0,0,0,97023097876578304,51,0,13} 
2016-11-30 05:29:59,351 [org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection]-[DEBUG] Use SIMPLE authentication for service MasterService, sasl=false
2016-11-30 05:29:59,352 [org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection]-[DEBUG] Connecting to dtw/127.0.1.1:16000
2016-11-30 05:30:03,985 [org.apache.hadoop.hbase.client.HBaseAdmin$CreateTableFuture]-[INFO] Created hbase_tb
create table success
put 'row1','cf:cl1','data'
Get: keyvalues={row1/cf:cl1/1480455004219/Put/vlen=4/seqid=0}
Scan: keyvalues={row1/cf:cl1/1480455004219/Put/vlen=4/seqid=0}
2016-11-30 05:30:04,341 [org.apache.hadoop.hbase.client.HBaseAdmin$9]-[INFO] Started disable of hbase_tb
2016-11-30 05:30:06,664 [org.apache.hadoop.hbase.client.HBaseAdmin$DisableTableFuture]-[INFO] Disabled hbase_tb
2016-11-30 05:30:08,983 [org.apache.hadoop.hbase.client.HBaseAdmin$DeleteTableFuture]-[INFO] Deleted hbase_tb
Delete table:hbase_tbsuccess!
2016-11-30 05:30:08,993 [org.apache.hadoop.ipc.ClientCache]-[DEBUG] stopping client from cache: org.apache.hadoop.ipc.Client@7a6b653f
2016-11-30 05:30:08,993 [org.apache.hadoop.ipc.ClientCache]-[DEBUG] removing client from cache: org.apache.hadoop.ipc.Client@7a6b653f
2016-11-30 05:30:08,993 [org.apache.hadoop.ipc.ClientCache]-[DEBUG] stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@7a6b653f
2016-11-30 05:30:08,994 [org.apache.hadoop.ipc.Client]-[DEBUG] Stopping client

That covers developing HBase applications with MyEclipse. I hope it helps beginners!
