Building a fully distributed hadoop-2.7.1 + hbase-1.2.4 cluster on three Ubuntu 16.04 virtual machines in VMware 12

Motivation

Given how many tutorials already exist online, why write another one? Because many of them are outdated, and many skip details that matter during installation, such as user privileges and file permissions. I ran into plenty of these problems myself, so I wanted to write a complete walkthrough that gives Hadoop beginners a clear picture of the whole process. Few of us get access to a real cluster, and virtual machines are a good alternative: they can simulate the real situation almost completely, provided your computer is reasonably well equipped, or the VMs will crawl. Enough preamble.

Environment

This article uses VMware® Workstation 12 Pro to create and install three Ubuntu 16.04 systems, named master, slave1, and slave2, which serve as the NameNode, a DataNode, and a DataNode respectively.
The configuration must be essentially identical on all three systems, apart from a few node-specific settings (such as the hostname):
192.168.190.128 master
192.168.190.129 slave1
192.168.190.131 slave2

Installing and configuring Hadoop on the virtual machines

Note that everything below must be configured on all three Ubuntu systems, and the configuration is essentially identical on each. To keep the machines consistent, configure one of them first and then scp the relevant files to the others.
Installing the virtual machines themselves is not the focus of this article, so it is not covered here. (Screenshot of the three installed VMs omitted.)
Before installing Hadoop on Linux, two pieces of software are required:
1) JDK 1.6 or later; this article uses JDK 1.7. Hadoop is a Java program, and both building Hadoop and running MapReduce require a JDK, so a JDK 1.6 or later must be installed first.
2) SSH (Secure Shell); OpenSSH is recommended. Hadoop starts the daemons on each machine in the slave list over SSH, so SSH is required even for a pseudo-distributed installation, because Hadoop does not really distinguish between cluster and pseudo-distributed modes: in both cases it starts the processes on the hosts listed in conf/slaves, in order. In the pseudo-distributed case the slave is simply localhost (the machine itself), so SSH is needed there as well.

Deployment steps

Add a hadoop user and grant it the necessary privileges. The entire Hadoop and HBase installation below is done as the hadoop user, so the Hadoop files and their ownership must be assigned to that user.
1. On every virtual machine, add a hadoop user and add it to sudoers:

sudo adduser hadoop
sudo gedit /etc/sudoers

Find the user privilege section and add a line for hadoop:

# User privilege specification
root    ALL=(ALL:ALL) ALL
hadoop ALL=(ALL:ALL) ALL

2. Switch to the hadoop user:

su hadoop

3. Edit /etc/hostname and set the hostname.
On the master VM set it to master; set the other two to slave1 and slave2 respectively.
4. Edit /etc/hosts as below. Make sure the machine's own hostname is mapped to its real IP address and not to 127.0.1.1, or the Hadoop daemons may bind to the loopback interface:

127.0.0.1 localhost
127.0.1.1   localhost.localdomain   localhost
192.168.190.128 master
192.168.190.129 slave1
192.168.190.131 slave2
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
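The hosts entries can also be appended from the command line. The sketch below works on a throwaway copy of /etc/hosts so it is safe to try anywhere; on a real node you would append to /etc/hosts itself via sudo tee -a (addresses are the ones used in this guide):

```shell
# Work on a copy so this is safe to run anywhere; on the real node,
# append to /etc/hosts itself with: cat <<'EOF' | sudo tee -a /etc/hosts
cp /etc/hosts /tmp/hosts.demo
cat >> /tmp/hosts.demo <<'EOF'
192.168.190.128 master
192.168.190.129 slave1
192.168.190.131 slave2
EOF
# Every node name should now be resolvable through the file:
grep -cE 'master|slave1|slave2' /tmp/hosts.demo
```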

5. Install JDK 1.7
(1) Download and extract JDK 1.7
The archive is jdk-7u76-linux-x64.tar.gz; unpack it with tar:

tar -zxvf jdk-7u76-linux-x64.tar.gz

Move the extracted files to the JDK installation directory; in this article that is /usr/lib/jvm/jdk1.7.0_76.
(2) Configure the environment variables by running:

sudo gedit /etc/profile

Enter your password to open the profile, then append the following at the bottom:

#set java environment
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
export JRE_HOME=${JAVA_HOME}/jre  
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib  
export PATH=${JAVA_HOME}/bin:/home/hadoop/hadoop-2.7.1/bin:/home/hadoop/hadoop-2.7.1/sbin:/home/hadoop/hbase-1.2.4/bin:$PATH

Note that /etc/profile is normally read-only for ordinary users; editing it through sudo, as above, is sufficient, and there is no need to open its permissions up. If they have been altered, restore the usual mode with:

sudo chmod 644 /etc/profile

The lines above already include the Hadoop and HBase paths that will be used later in this guide. The point of this step is to set the environment variables so the system can find the JDK.
(3) Verify that the JDK is installed by running:

java -version

You should see JDK version information such as:

java version "1.7.0_76"
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)

If the version shown is not the JDK you just installed (or java is not found), the new JDK has not yet been made the system default, and you still need to set it manually.
(4) Manually set the default JDK
Run the following commands in the terminal, one after another:

sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_76/bin/java 300
sudo update-alternatives --install  /usr/bin/javac javac /usr/lib/jvm/jdk1.7.0_76/bin/javac 300
sudo update-alternatives --config java

After this, java -version shows the version information of the JDK you installed.
Install VMware Tools on all three VMs so that copy and paste between host and guests works.

6. Configure passwordless SSH login
(1) Make sure you are connected to the Internet, then run:

sudo apt-get install ssh

(2) Configure the master, slave1, and slave2 nodes for passwordless SSH access to one another.
Note that all of the following is done as the hadoop user.
First, check whether a .ssh folder exists in the hadoop user's home directory (the leading "." marks it as a hidden folder):

ls -a -l

which gives something like:

drwxr-xr-x  9 root   root 4096 Feb  1 02:41 .
drwxr-xr-x  4 root   root 4096 Jan 27 01:50 ..
drwx------  3 root   root 4096 Jan 31 03:35 .cache
drwxr-xr-x  5 root   root 4096 Jan 31 03:35 .config
drwxrwxrwx 11 hadoop root 4096 Feb  1 00:18 hadoop-2.7.1
drwxrwxrwx  8 hadoop root 4096 Feb  1 02:47 hbase-1.2.4
drwxr-xr-x  3 root   root 4096 Jan 31 03:35 .local
drwxr-xr-x  2 root   root 4096 Jan 31 14:47 software
drwxr-xr-x  2 hadoop root 4096 Feb  1 00:01 .ssh

Normally, installing SSH creates this hidden folder for the current user automatically; if it does not, create it manually:

mkdir ~/.ssh

The .ssh folder must be owned by the hadoop user; if it belongs to root, fix that with:

sudo chown -R hadoop .ssh

Next, generate a key pair:

ssh-keygen -t rsa

If you get a permission error here, fix the ownership of ~/.ssh as above rather than running ssh-keygen with sudo, which would generate keys for root instead of hadoop.
When it finishes you will see a key randomart image, and two files are created under .ssh: id_rsa and id_rsa.pub. Append the public key to the authorized keys file:

 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

In Ubuntu, ~ denotes the current user's home directory, here /home/hadoop.
This command appends the public key to authorized_keys, the file of keys accepted for authentication.
Then open the file with:

gedit ~/.ssh/authorized_keys

Each generated key ends with the user and host that created it; for example, the hadoop user on the master host generates a key ending in hadoop@master. Append the keys generated on the other hosts to the end of master's authorized_keys, so that master holds the keys of slave1's and slave2's hadoop users as well, and do the same on the other two hosts.
For example (do not copy these keys; they are only an illustration. Use the keys generated on your own three VMs):

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC743oCP2Voa3deHBkA+N7cYJC4Jv2Tj8Z6tGVWCxg0NJl3yKwYIfgC9RiyFyRWcl5byI34Oe7dYtf+9UtvH85hca1/IDP1m02NLPXsIJmcPS4uNgMLfsWg/F/C3Bqut7i4t6eHwO/FRhjeIBu5O/9GHoXk/ykhgJIbyh8hhAlcke6Jtt80I63r2+3DnlHlNzw1sQRJp2qFRgyV61j5DfuYrhfd+/eTkFtXc7izLVCkC7x6hMo4qIMQ0GbSx9iqTO0tO1skGYLhCX3Cbo3hf4i19RUKt168eg/X2l1qIvf+vgxQudM3lZa9/pxDieK5p8c8xupcaoR67jMFLWLl3EUb hadoop@master
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQ1Jf6ds9Y+KlQNIHq+pDGxM1OsF+RSXcgLDdlzw+qGK7NT28bRK6QUCm3kJqa/ekEkqDHdWegtiQVriOsY4A2fABkRsjiOrnc4QYQ/rqB06JuvshwToB91qwmV/J/o3mgsentJLfmBUpSyW8rRxQV+tYtqQ+gipL7x0WGUBRQYRhJJZKAxqgLGE3Md/siYjn8Ge4G31rrTcx9QDVcfTCtHkvqca0b0f98Y+U9Fu6w4Ari28oLxFTlzuCsebIPMzE4uWQuXT+2kMz0HunpejSDrLkrFqO1OKUs0peZrUVRmYBY5flt4tnV0XOQBYClzxieev/ppgH8AeB4Qs/zXB25 hadoop@slave1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDI8PpgXt94SAEtUhvt2JmlO4Ed11r1WLoN1Eha5vI3qqm7cgT4yS7lvxL53Dc5G7R0n4Jwsf2hTvD9JF77vEIxp5g3xQGa7HafbIMzQupuCyAHqY+v0RTepaBUNGkFz0uKv+Nq8bzjfSUv4HgRorW7Yzqaa0LjEvHiI8uVZA7dcZ6Ba1on/TlKVVzz3MdZulcn7+AzjTPTG8hPQaELQqws1UuIYIUanOSqFPCADart/pJpAzGkqek0LBRSvI+U+P0oSrz9aX3wVOUQknheinM4tmuo3TGYionjeV1jqroCxBbZaeqLLwnpA0YZBl/ZMnJHkeSITypmgZWszh3ylC8p hadoop@slave2

At this point passwordless login between the hosts is fully configured.
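The manual copy-and-paste of keys can be replaced by ssh-copy-id, and the whole key setup can be rehearsed locally. The sketch below generates a throwaway key pair in /tmp/ssh-demo so it can be run safely anywhere; on the real cluster the directory would be ~/.ssh, and each node would push its key with ssh-copy-id (the hadoop@master target is the one assumed by this guide):

```shell
# Throwaway demo of the key setup; on a real node use ~/.ssh instead.
rm -rf /tmp/ssh-demo && mkdir -p /tmp/ssh-demo && chmod 700 /tmp/ssh-demo
ssh-keygen -t rsa -N '' -q -f /tmp/ssh-demo/id_rsa   # empty passphrase
cat /tmp/ssh-demo/id_rsa.pub >> /tmp/ssh-demo/authorized_keys
chmod 600 /tmp/ssh-demo/authorized_keys              # sshd requires this mode
# On each slave, the equivalent of the manual copy would simply be:
#   ssh-copy-id hadoop@master
grep -c 'ssh-rsa' /tmp/ssh-demo/authorized_keys      # one key collected
```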
(3) Verify that SSH is installed and that passwordless login works.
Run:

ssh -V

which prints:

OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g  1 Mar 2016

Then run:

ssh localhost

and you will see something like:

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

458 packages can be updated.
171 updates are security updates.


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.



Last login: Wed Feb  1 00:02:53 2017 from 127.0.0.1
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

This shows the installation succeeded; on the first login you are asked whether to continue connecting, and typing yes lets you in.
Strictly speaking, passwordless login is not required to install Hadoop, but without it every Hadoop start-up prompts for a password to log in to each DataNode machine. Since a real Hadoop cluster often has hundreds or thousands of machines, passwordless SSH is configured as a matter of course.
Check that the master node can reach the slave1 and slave2 nodes without a password:

ssh slave1

The result:

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-59-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

312 packages can be updated.
10 updates are security updates.


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.



Last login: Wed Feb  1 00:03:30 2017 from 192.168.190.131

No password should be requested; if one is, some earlier step went wrong, so recheck it.

Installing and running Hadoop

Before covering the installation itself, a word on how Hadoop defines the roles of the nodes.
Hadoop divides hosts into two roles from three angles. First, the most basic split is Master versus Slave. Second, from the HDFS point of view, hosts are divided into the NameNode and DataNodes: in a distributed file system the management of the namespace is crucial, and the NameNode is its manager. Third, from the MapReduce point of view, hosts were traditionally divided into the JobTracker and TaskTrackers (a job is usually split into many tasks, which explains the relationship); in Hadoop 2.x, MapReduce runs on YARN, where the ResourceManager and NodeManagers play the corresponding roles, but the master/worker split is the same.
Hadoop has three run modes: standalone, pseudo-distributed, and fully distributed. At first glance the first two do not show off the advantages of distributed computing, but they are useful for testing and debugging.
Standalone and pseudo-distributed setups are covered elsewhere on my blog, so this article focuses on the fully distributed configuration.
(1) In the hadoop user's home directory, extract the downloaded hadoop-2.7.1.tar.gz:

tar -zxvf hadoop-2.7.1.tar.gz

Note that all of the following is done as the hadoop user, i.e. the owner of hadoop-2.7.1 is hadoop, as shown below:

total 120
drwxr-xr-x 19 hadoop hadoop 4096 Feb  1 02:28 .
drwxr-xr-x  4 root   root   4096 Jan 31 14:24 ..
-rw-------  1 hadoop hadoop 1297 Feb  1 03:37 .bash_history
-rw-r--r--  1 hadoop hadoop  220 Jan 31 14:24 .bash_logout
-rw-r--r--  1 hadoop hadoop 3771 Jan 31 14:24 .bashrc
drwx------  3 root   root   4096 Jan 31 22:49 .cache
drwx------  5 root   root   4096 Jan 31 23:59 .config
drwx------  3 root   root   4096 Jan 31 23:59 .dbus
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Desktop
-rw-r--r--  1 hadoop hadoop   25 Feb  1 00:55 .dmrc
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Documents
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Downloads
-rw-r--r--  1 hadoop hadoop 8980 Jan 31 14:24 examples.desktop
drwx------  2 hadoop hadoop 4096 Feb  1 00:56 .gconf
drwx------  3 hadoop hadoop 4096 Feb  1 00:55 .gnupg
drwxrwxrwx 11 hadoop hadoop 4096 Feb  1 00:30 hadoop-2.7.1
drwxrwxrwx  8 hadoop hadoop 4096 Feb  1 02:44 hbase-1.2.4
-rw-------  1 hadoop hadoop  318 Feb  1 00:56 .ICEauthority
drwxr-xr-x  3 root   root   4096 Jan 31 22:49 .local
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Music
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Pictures
-rw-r--r--  1 hadoop hadoop  675 Jan 31 14:24 .profile
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Public
drwx------  2 hadoop hadoop 4096 Feb  1 00:02 .ssh
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Templates
drwxr-xr-x  2 hadoop hadoop 4096 Feb  1 00:55 Videos
-rw-------  1 hadoop hadoop   51 Feb  1 00:55 .Xauthority
-rw-------  1 hadoop hadoop 1492 Feb  1 00:58 .xsession-errors

(2) Configure the Hadoop environment variables:

sudo gedit /etc/profile

The content is the same as in step 5 above; if these lines are already in /etc/profile, nothing more is needed:

#set java environment
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
export JRE_HOME=${JAVA_HOME}/jre  
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib  
export PATH=${JAVA_HOME}/bin:/home/hadoop/hadoop-2.7.1/bin:/home/hadoop/hadoop-2.7.1/sbin:/home/hadoop/hbase-1.2.4/bin:$PATH

(3) Configure the Hadoop files on all three hosts, as follows.
etc/hadoop/hadoop-env.sh, found under /home/hadoop/hadoop-2.7.1/etc/hadoop (the screenshot of locating it with Ubuntu's search tool is omitted). Configure it like this:

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$PATH:/home/hadoop/hadoop-2.7.1/bin
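Since this edit has to be repeated on every node, it can be scripted with sed. The sketch below demonstrates on a throwaway copy so it runs anywhere; on a real node you would point the sed line at etc/hadoop/hadoop-env.sh (the JDK path is the one used in this guide):

```shell
# Demo on a throwaway file; on a node, target the real hadoop-env.sh.
printf 'export JAVA_HOME=${JAVA_HOME}\n' > /tmp/hadoop-env.demo
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76|' /tmp/hadoop-env.demo
grep '^export JAVA_HOME' /tmp/hadoop-env.demo
```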

etc/hadoop/core-site.xml (same directory). Two caveats: fs.default.name is the deprecated name of fs.defaultFS, and with hadoop.tmp.dir set to /tmp the HDFS data lives under /tmp and is wiped on reboot; a directory under /home/hadoop is safer in practice, but the values below reproduce the original setup:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
<property>
 <name>hadoop.tmp.dir</name>
 <value>/tmp</value>
</property>
</configuration>

etc/hadoop/hdfs-site.xml (same directory):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
   <name>dfs.replication</name>
   <value>2</value>
</property>
</configuration>

etc/hadoop/mapred-site.xml (same directory)
A search shows this file does not exist by default; create it by copying the contents of mapred-site.xml.template:

cp mapred-site.xml.template mapred-site.xml

Then configure it as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
 <property>
   <name>mapred.job.tracker</name>
   <value>master:9001</value>
 </property>
</configuration>
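One file this walkthrough does not configure is yarn-site.xml, even though the start-all.sh step later in the guide also brings up the YARN daemons. With an empty yarn-site.xml the NodeManagers look for the ResourceManager at the default address 0.0.0.0, which generally fails on a multi-node cluster. A minimal sketch matching this cluster (the hostname is taken from this guide; the property values are standard Hadoop 2.x settings, not configuration from the original article) might be:

```xml
<configuration>
  <!-- Tell NodeManagers on the slaves where the ResourceManager runs -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <!-- Required for the MapReduce shuffle phase on YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

Note also that mapred.job.tracker above is a Hadoop 1.x property; its Hadoop 2.x counterpart is setting mapreduce.framework.name to yarn in mapred-site.xml.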

etc/hadoop/masters (same directory)
There is no such file by default; create a masters file manually with the following content:

master

etc/hadoop/slaves:

slave1
slave2

(4) Copy the entire hadoop-2.7.1 directory to the same location on the slave1 and slave2 nodes.
From the hadoop user's home directory on master, run:

scp -r hadoop-2.7.1 hadoop@slave1:~/
scp -r hadoop-2.7.1 hadoop@slave2:~/
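With more slaves this copy is usually looped. The sketch below is a dry run that only prints the scp command for each node (node names are the ones assumed by this guide; drop the echo to actually copy):

```shell
# Dry run: prints one scp command per slave; remove `echo` to execute.
for node in slave1 slave2; do
  echo scp -r ~/hadoop-2.7.1 "hadoop@${node}:~/"
done
```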

(5) Start Hadoop
On the master node, as the hadoop user, format the NameNode:

hadoop@master:~$ hadoop namenode -format

If this prints:

hadoop: command not found

source the environment file first:

source /etc/profile

The output looks like this:

hadoop@master:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/02/02 02:59:44 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.190.128
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.1/etc/hadoop:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/activation-1.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-lang-2.6
.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/hadoop-auth-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/gson-2.2.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/hadoop-annotations-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7
.1/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/junit-4.11.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1-tests.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/hadoop-nfs-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.
7.1/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/hadoop-hdfs-2.7.1-tests.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/hdfs/hadoop-hdfs-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/activation-1.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.7.1/s
hare/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-api-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-common-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-registry-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-applic
ations-distributedshell-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-client-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/yarn/hadoop-yarn-common-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/guice-servlet-3
.0.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1-tests.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.1.jar:/home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.jar:/home/master/hadoop-2.7.1/contrib/capacity-scheduler/*.jar:/home/master/hadoop-2.7.1/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
17/02/02 02:59:44 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/02/02 02:59:44 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-ef219bd8-5622-49d9-b501-6370f3b5fc73
17/02/02 03:00:03 INFO namenode.FSNamesystem: No KeyProvider found.
17/02/02 03:00:03 INFO namenode.FSNamesystem: fsLock is fair:true
17/02/02 03:00:04 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/02/02 03:00:04 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/02/02 03:00:04 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/02/02 03:00:04 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Feb 02 03:00:04
17/02/02 03:00:04 INFO util.GSet: Computing capacity for map BlocksMap
17/02/02 03:00:04 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:04 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/02/02 03:00:04 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/02/02 03:00:04 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/02/02 03:00:04 INFO blockmanagement.BlockManager: defaultReplication         = 2
17/02/02 03:00:04 INFO blockmanagement.BlockManager: maxReplication             = 512
17/02/02 03:00:04 INFO blockmanagement.BlockManager: minReplication             = 1
17/02/02 03:00:04 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/02/02 03:00:04 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/02/02 03:00:04 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/02/02 03:00:04 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/02/02 03:00:04 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/02/02 03:00:04 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/02/02 03:00:04 INFO namenode.FSNamesystem: supergroup          = supergroup
17/02/02 03:00:04 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/02/02 03:00:04 INFO namenode.FSNamesystem: HA Enabled: false
17/02/02 03:00:04 INFO namenode.FSNamesystem: Append Enabled: true
17/02/02 03:00:05 INFO util.GSet: Computing capacity for map INodeMap
17/02/02 03:00:05 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:05 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/02/02 03:00:05 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/02/02 03:00:05 INFO namenode.FSDirectory: ACLs enabled? false
17/02/02 03:00:05 INFO namenode.FSDirectory: XAttrs enabled? true
17/02/02 03:00:05 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/02/02 03:00:05 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/02/02 03:00:05 INFO util.GSet: Computing capacity for map cachedBlocks
17/02/02 03:00:05 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:05 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/02/02 03:00:05 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/02/02 03:00:05 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/02/02 03:00:05 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/02/02 03:00:05 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/02/02 03:00:05 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/02/02 03:00:05 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/02/02 03:00:05 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/02/02 03:00:05 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/02/02 03:00:05 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/02/02 03:00:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/02/02 03:00:06 INFO util.GSet: VM type       = 64-bit
17/02/02 03:00:06 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/02/02 03:00:06 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /tmp/dfs/name ? (Y or N) y
17/02/02 03:00:28 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1867851271-192.168.190.128-1485975628037
17/02/02 03:00:28 INFO common.Storage: Storage directory /tmp/dfs/name has been successfully formatted.
17/02/02 03:00:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/02/02 03:00:29 INFO util.ExitUtil: Exiting with status 0
17/02/02 03:00:29 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.190.128
************************************************************/

This indicates the filesystem was formatted successfully. (Note that the storage directory above sits under /tmp, which is cleared on reboot; pointing dfs.namenode.name.dir at a persistent path avoids losing the NameNode metadata.)

Starting Hadoop

Note that Hadoop is started from the master node only; the other nodes need no commands, because the master automatically starts the slave daemons according to the configuration files. The startup log itself warns that start-all.sh is deprecated; running start-dfs.sh followed by start-yarn.sh is the recommended equivalent.

hadoop@master:~$ start-all.sh

The output looks like this:

hadoop@master:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-slave2.out

You can check whether startup succeeded by running the jps command on each node and inspecting the running processes.
On the master node:

hadoop@master:~$ jps
11012 Jps
10748 ResourceManager
10594 SecondaryNameNode

(Normally a NameNode process should also appear in the master listing above; if it is missing, check the NameNode log under logs/ — a common cause is the metadata directory under /tmp having been cleaned out.)

On the slave1 node:

hadoop@slave1:~$ jps
7227 Jps
7100 NodeManager
6977 DataNode

On the slave2 node:

hadoop@slave2:~$ jps
6654 Jps
6496 NodeManager
6373 DataNode
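The checks above can be scripted. The sketch below is this guide's own convenience script, not part of the Hadoop distribution; it greps the jps output for the daemons that should be running on the master.

```shell
# Minimal sketch: verify the master's daemons from jps output.
# Assumes jps (shipped with the JDK) is on PATH; the expected names
# follow the listings above plus NameNode, which a healthy master runs.
expected="NameNode SecondaryNameNode ResourceManager"
if command -v jps >/dev/null 2>&1; then
  for p in $expected; do
    jps | grep -qw "$p" && echo "$p: OK" || echo "$p: not running"
  done
else
  echo "jps not found on PATH"
fi
```

On a slave, change the expected list to `DataNode NodeManager`.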

You can check the cluster status with the following command, or through the web UI at http://master:50070.

hdfs dfsadmin -report

This completes the Hadoop installation and configuration.

Installing HBase

HBase has three run modes. Standalone mode is trivial to configure: the installation files need almost no changes. To run in distributed mode, however, Hadoop is indispensable. Before editing any HBase configuration files, the following prerequisites must be in place; they are the same ones covered in the Hadoop section above:
(1) JDK
(2) Hadoop
(3) SSH

Fully distributed installation

For a fully distributed HBase installation, hbase-site.xml configures the HBase properties of the local machine, while hbase-env.sh configures properties that are global to the HBase cluster, so every machine can learn certain cluster-wide settings from hbase-env.sh. In addition, the HBase instances communicate with one another through Zookeeper, so we also need to maintain a Zookeeper ensemble.
First, check the owner and permissions of the HBase files:

ls -a -l

which gives:

total 36
drwxr-xr-x  9 root   root 4096 Feb  1 02:41 .
drwxr-xr-x  4 root   root 4096 Jan 27 01:50 ..
drwx------  3 root   root 4096 Jan 31 03:35 .cache
drwxr-xr-x  5 root   root 4096 Jan 31 03:35 .config
drwxrwxrwx 11 hadoop root 4096 Feb  1 00:18 hadoop-2.7.1
drwxrwxrwx  8 hadoop root 4096 Feb  1 02:47 hbase-1.2.4
drwxr-xr-x  3 root   root 4096 Jan 31 03:35 .local
drwxr-xr-x  2 root   root 4096 Jan 31 14:47 software
drwxr-xr-x  2 hadoop root 4096 Feb  1 00:01 .ssh
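If the listing shows the install trees owned by root rather than hadoop, ownership can be transferred before going further. A minimal sketch, assuming the install paths from the listing above and sudo rights for the current user:

```shell
# Hand both install trees to the hadoop user (skipped if a path is absent).
for d in /home/hadoop/hadoop-2.7.1 /home/hadoop/hbase-1.2.4; do
  if [ -d "$d" ]; then
    sudo chown -R hadoop:hadoop "$d"
  else
    echo "skip: $d not found"
  fi
done
```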

(1) Configuring conf/hbase-site.xml
The hbase.rootdir and hbase.cluster.distributed parameters are mandatory for HBase. hbase.rootdir specifies the machine's HBase storage directory, and hbase.cluster.distributed specifies the run mode (true for fully distributed, false for standalone or pseudo-distributed). In addition, hbase.master names the HBase master's location (HBase 1.x actually discovers the active master through Zookeeper, so this property is largely historical), and hbase.zookeeper.quorum names the Zookeeper ensemble. A sample configuration file is shown below.
As before, locate hbase-site.xml under
/home/hadoop/hbase-1.2.4/conf
and configure it as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>HBase data storage directory</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>Assign HBase run mode</description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>master:60000</value>
    <description>Assign Master position</description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
    <description>Assign Zookeeper cluster</description>
  </property>
</configuration>

(2) Configuring conf/regionservers
The regionservers file lists every machine that runs an HBase RegionServer (HRegionServer). Its format closely resembles Hadoop's slaves file: one machine per line. When HBase starts, it starts the daemons on every machine listed here; likewise, on shutdown it reads the file and stops them all.
In our setup the HBase Master and the HDFS NameNode run on the host named master, and RegionServers run on master, slave1, and slave2. Accordingly, on every machine set the contents of conf/regionservers (under /home/hadoop/hbase-1.2.4/conf) to:

master
slave1
slave2

Alternatively, the HBase Master and the RegionServers can be kept on separate machines: simply delete the master line from the file above.
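As noted at the start of this guide, a convenient workflow is to finish the configuration on master and then scp it to the slaves. A minimal sketch, assuming the passwordless-SSH setup from the Hadoop section and the hostnames defined in /etc/hosts:

```shell
# Push the HBase config from master to both slaves.
CONF=/home/hadoop/hbase-1.2.4/conf
for h in slave1 slave2; do
  scp "$CONF/hbase-site.xml" "$CONF/regionservers" \
      "hadoop@$h:$CONF/" || echo "copy to $h failed"
done
```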
(3) Zookeeper configuration
A fully distributed HBase cluster needs a running Zookeeper ensemble, and every HBase node must be able to communicate with it. By default, HBase maintains a Zookeeper ensemble of its own; alternatively, you can run an independent Zookeeper ensemble, which makes the overall system more robust.
In conf/hbase-env.sh, HBASE_MANAGES_ZK defaults to true, meaning HBase uses its bundled Zookeeper. Once the hbase.zookeeper.quorum property is set in hbase-site.xml, the hosts in that list are used preferentially. With HBASE_MANAGES_ZK=true, HBase runs Zookeeper as part of itself at startup, and the corresponding process is HQuorumPeer; with the value set to false, the Zookeeper ensemble named by hbase.zookeeper.quorum must be started by hand before HBase, and its process shows up as QuorumPeerMain. When Zookeeper runs as part of HBase, it is shut down automatically when HBase stops; otherwise the Zookeeper service must be stopped manually.
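The text references conf/hbase-env.sh but never shows it; a minimal fragment consistent with the choices above (the JDK path from the Hadoop section, HBase managing its own Zookeeper) might look like this:

```shell
# Fragment for conf/hbase-env.sh (paths are this guide's assumptions).
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
# true: HBase starts an HQuorumPeer on every hbase.zookeeper.quorum host;
# false: you must run your own Zookeeper ensemble before start-hbase.sh.
export HBASE_MANAGES_ZK=true
```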

Running HBase

Before starting, create the hbase directory in the HDFS filesystem (HBase will usually create hbase.rootdir itself on first start if the hadoop user can write to HDFS, but creating it up front does no harm):

hdfs dfs -mkdir hdfs://master:9000/hbase

Run start-hbase.sh:

hadoop@master:~$ start-hbase.sh
slave1: starting zookeeper, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-zookeeper-slave1.out
slave2: starting zookeeper, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-zookeeper-slave2.out
master: starting zookeeper, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-zookeeper-master.out
starting master, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-master-master.out
master: starting regionserver, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-regionserver-master.out
slave2: starting regionserver, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-regionserver-slave2.out
slave1: starting regionserver, logging to /home/hadoop/hbase-1.2.4/logs/hbase-hadoop-regionserver-slave1.out

After HBase has started, you can enter the HBase shell with the following command:

hbase shell

On success you will see something like this:

hadoop@master:~$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016

hbase(main):001:0> 

In the HBase shell, type the status command; output like the following confirms that HBase was installed successfully:

hbase(main):009:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load

Type list:

hbase(main):010:0> list
TABLE                                                                           
0 row(s) in 0.3250 seconds

=> []
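Beyond status and list, a quick end-to-end check is to create a small table, write a cell, and scan it back; the table name test and column family cf below are arbitrary choices for the check:

```
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'
```

The scan should show row1 with column cf:a holding value1; disable and drop then clean the test table up again.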

This completes the HBase installation. Most of the problems along the way were permission issues: if the Hadoop files sit in a root-owned directory, the hadoop user cannot access them, and insufficient permissions on the Hadoop directory itself (for example under /home/) cause the same trouble, so grant the appropriate permissions up front. I hope this helps readers who are new to Hadoop and HBase!
