A simplified, workable install: if startup fails the first time, delete everything and start over; after three rounds, by the fourth attempt you will know exactly where the problem is!
1. Install the JDK

Download from: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

Download the latest JDK; here the 32-bit build is used: jdk-7u45-linux-i586.tar.gz

tar -zxvf jdk-7u45-linux-i586.tar.gz

No installer is needed; extracting the archive is enough. It unpacks to jdk1.7.0_45. Remember this path.
Configure the environment variables by adding an entry like the following to your /etc/profile:

sudo gedit /etc/profile

Add:

export JAVA_HOME=/home/zhangzhen/software/jdk1.7.0_45
export PATH=$JAVA_HOME/bin:$PATH

Save and close, then run:

source /etc/profile

Check the Java version:

java -version

If the version prints correctly, the JDK is set up.
If the system already has a JDK installed, or you have installed one but no longer know where it lives, run: whereis java. Note that this only locates the java binary on your PATH; what you actually need is the JDK directory itself, i.e. wherever you unpacked the archive. If Java was installed automatically online, whereis java can help you track that location down, but downloading and extracting the JDK yourself, as above, is recommended.
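As a sketch of an alternative (assuming a java binary is already on the PATH), you can follow the symlink chain to find the real install directory; the fallback path below is purely illustrative:

```shell
# whereis/which only report the java binary found on the PATH;
# readlink -f follows symlinks to the real install location.
# The fallback /usr/bin/java is an illustrative assumption.
JAVA_BIN=$(which java 2>/dev/null || echo /usr/bin/java)
echo "java binary: $JAVA_BIN"
readlink -f "$JAVA_BIN" 2>/dev/null || echo "no java at $JAVA_BIN"
```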
2. Install Hadoop (version 1.2.1) in pseudo-distributed mode

(1) Assuming hadoop-1.2.1.tar.gz is on the desktop, copy it to the install directory /usr/local/:
sudo cp hadoop-1.2.1.tar.gz /usr/local/
(2) Unpack hadoop-1.2.1.tar.gz:

cd /usr/local
sudo tar -zxvf hadoop-1.2.1.tar.gz
(3) Rename the extracted folder to hadoop:

sudo mv hadoop-1.2.1 hadoop
(4) Set the owner of the hadoop folder to the hadoop user.
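The original does not show the command for this step; a minimal sketch, assuming a user and group both named hadoop already exist, would be:

```shell
# Recursively hand ownership of the install tree to the hadoop user
# (assumes a 'hadoop' user and group have been created beforehand).
sudo chown -R hadoop:hadoop /usr/local/hadoop
```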
(5) Open the hadoop/conf/hadoop-env.sh file:

sudo gedit hadoop/conf/hadoop-env.sh
(6) Configure conf/hadoop-env.sh: find the line #export JAVA_HOME=..., remove the leading #, and set it to your machine's JDK path, e.g.:

export JAVA_HOME=/usr/lib/jvm/java-6-jdk

(Use whichever JDK path applies on your machine; with the JDK unpacked as in step 1, it would be /home/zhangzhen/software/jdk1.7.0_45.)

(7) Open the conf/core-site.xml file:

sudo gedit hadoop/conf/core-site.xml

Edit it as follows:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
(8) Open the conf/mapred-site.xml file:

sudo gedit hadoop/conf/mapred-site.xml

Edit it as follows:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
(9) Open the conf/hdfs-site.xml file:

sudo gedit hadoop/conf/hdfs-site.xml

Edit it as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/zhangzhen/software/hadoop-1.2.1/</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/zhangzhen/software/hadoop-1.2.1/data</value>
</property>
<!--
<property>
<name>dfs.name.dir</name>
<value>/home/zhangzhen/software/hadoop-1.2.1/name</value>
</property>
-->
</configuration>
Create the data directory yourself at the specified path; the name directory does not need to be specified (the default is used).

Then run:

./bin/hadoop namenode -format
./bin/start-all.sh
jps

(I created the data directory myself as the place where data will be stored later; the name directory uses the default and needs no configuration.)

The data directory should have 755 permissions. If you created it yourself with mkdir data, it gets 755 by default, so you can leave it alone. If the data and name directories were generated automatically by the system, fix the data permissions with chmod 755 data and you're set!
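The directory setup above can be sketched as follows (HADOOP_DIR is a hypothetical example path; substitute the hadoop.tmp.dir value from your own hdfs-site.xml):

```shell
# Create the HDFS data directory under the configured hadoop.tmp.dir
# path and give it the 755 permissions the DataNode expects.
# HADOOP_DIR below is an illustrative path, not the one from the text.
HADOOP_DIR=/tmp/hadoop-1.2.1-demo
mkdir -p "$HADOOP_DIR/data"
chmod 755 "$HADOOP_DIR/data"
stat -c '%a' "$HADOOP_DIR/data"   # prints 755
```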
3. Install the SSH service

SSH enables remote login and management; see other references for details.

(1) Install openssh-server:

sudo apt-get install ssh openssh-server
(2) Configure passwordless SSH login to the local machine

Create an SSH key; here we use the DSA algorithm:

ssh-keygen -t dsa -P ''

After pressing Enter, two files are generated under /home/zhangzhen/.ssh/: id_dsa and id_dsa.pub. These two files come as a pair. To inspect them:

cd /home/zhangzhen/.ssh
ls -a
. .. authorized_keys id_dsa id_dsa.pub known_hosts

In the ~/.ssh/ directory, append id_dsa.pub to the authorized_keys file (authorized_keys does not exist at first):

cd /home/zhangzhen/.ssh
cat id_dsa.pub >> authorized_keys

Check the SSH version:

ssh -V
Log in to localhost:

ssh localhost

Log out again:

exit
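The whole passwordless-login setup above can be condensed into a few lines (a sketch assuming the default ~/.ssh location and a running sshd):

```shell
# Generate a DSA key with an empty passphrase, authorize it for
# local logins, and verify: ssh localhost should no longer prompt.
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost
```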
4. Run Hadoop on a single machine

(1) Go into the hadoop directory and format the HDFS filesystem. This step is required the first time you run Hadoop:

cd /home/zhangzhen/software/hadoop-1.2.1/
bin/hadoop namenode -format

(2) When the command's output reports that the filesystem was formatted successfully, HDFS is ready. (The original post showed a screenshot of this output here.)

(3) Start all the daemons:

bin/start-all.sh
(4) Check whether Hadoop started successfully:

jps

6755 Jps
5432 TaskTracker
4866 DataNode
4638 NameNode
5109 SecondaryNameNode
5201 JobTracker

If all six lines appear, the setup succeeded; if even one is missing, something went wrong.

If they are all listed, you can view the MapReduce and HDFS web pages in a browser such as Firefox; by default they are served at http://localhost:50030 (JobTracker/MapReduce) and http://localhost:50070 (NameNode/HDFS).
5. Test Hadoop
zhangzhen@ubuntu:~$ mkdir input
zhangzhen@ubuntu:~$ cd input/
zhangzhen@ubuntu:~/input$ ls
zhangzhen@ubuntu:~/input$ echo "hello world" >test1.txe
zhangzhen@ubuntu:~/input$ echo "hello world" >test1.txx
zhangzhen@ubuntu:~/input$ ls
test1.txe test1.txx
zhangzhen@ubuntu:~/input$ rm test1.txe
zhangzhen@ubuntu:~/input$ rm test1.txx
zhangzhen@ubuntu:~/input$ echo "hello world" >test1.txt
zhangzhen@ubuntu:~/input$ echo "hello hadoop">test2.txt
zhangzhen@ubuntu:~/input$ cat test1.txt
hello world
zhangzhen@ubuntu:~/input$ cat test2.txt
hello hadoop
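The session below then lists files already sitting in HDFS under in; the upload step itself is missing from the transcript, but it would look something like this (paths follow the session above; Hadoop 1.x dfs syntax):

```shell
# Copy the local input directory into HDFS as 'in', then list it
# (run from the hadoop-1.2.1 directory, as in the transcript).
cd /home/zhangzhen/software/hadoop-1.2.1
bin/hadoop dfs -put /home/zhangzhen/input in
bin/hadoop dfs -ls in
```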
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ ./hadoop dfs -ls ./in/*
-rw-r--r-- 1 zhangzhen supergroup 12 2014-01-14 04:49 /user/zhangzhen/in/test1.txt
-rw-r--r-- 1 zhangzhen supergroup 13 2014-01-14 04:49 /user/zhangzhen/in/test2.txt
Check where the Hadoop jar files live; in the hadoop-1.2.1 directory you will find:

-rw-rw-r-- 1 zhangzhen zhangzhen 6842 Jul 22 18:26 hadoop-ant-1.2.1.jar
-rw-rw-r-- 1 zhangzhen zhangzhen 414 Jul 22 18:26 hadoop-client-1.2.1.jar
-rw-rw-r-- 1 zhangzhen zhangzhen 4203147 Jul 22 18:26 hadoop-core-1.2.1.jar
-rw-rw-r-- 1 zhangzhen zhangzhen 142726 Jul 22 18:26 hadoop-examples-1.2.1.jar
-rw-rw-r-- 1 zhangzhen zhangzhen 417 Jul 22 18:26 hadoop-minicluster-1.2.1.jar
-rw-rw-r-- 1 zhangzhen zhangzhen 3126576 Jul 22 18:26 hadoop-test-1.2.1.jar
-rw-rw-r-- 1 zhangzhen zhangzhen 385634 Jul 22 18:26 hadoop-tools-1.2.1.jar
zhangzhen@ubuntu:~/software/hadoop-1.2.1$ cp hadoop-examples-1.2.1.jar /home/zhangzhen/software/hadoop-1.2.1/bin/
zhangzhen@ubuntu:~/software/hadoop-1.2.1$ ./hadoop jar hadoop-examples-1.2.1.jar wordcount in out
-bash: ./hadoop: No such file or directory
zhangzhen@ubuntu:~/software/hadoop-1.2.1$ cd bin/
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ ./hadoop jar hadoop-examples-1.2.1.jar wordcount in out
14/01/14 05:47:19 INFO input.FileInputFormat: Total input paths to process : 2
14/01/14 05:47:19 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/01/14 05:47:19 WARN snappy.LoadSnappy: Snappy native library not loaded
14/01/14 05:47:20 INFO mapred.JobClient: Running job: job_201401140428_0001
14/01/14 05:47:21 INFO mapred.JobClient: map 0% reduce 0%
14/01/14 05:47:33 INFO mapred.JobClient: map 50% reduce 0%
14/01/14 05:47:34 INFO mapred.JobClient: map 100% reduce 0%
14/01/14 05:47:42 INFO mapred.JobClient: map 100% reduce 33%
14/01/14 05:47:43 INFO mapred.JobClient: map 100% reduce 100%
14/01/14 05:47:45 INFO mapred.JobClient: Job complete: job_201401140428_0001
14/01/14 05:47:45 INFO mapred.JobClient: Counters: 29
14/01/14 05:47:45 INFO mapred.JobClient: Job Counters
14/01/14 05:47:45 INFO mapred.JobClient: Launched reduce tasks=1
14/01/14 05:47:45 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=21137
14/01/14 05:47:45 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/01/14 05:47:45 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/01/14 05:47:45 INFO mapred.JobClient: Launched map tasks=2
14/01/14 05:47:45 INFO mapred.JobClient: Data-local map tasks=2
14/01/14 05:47:45 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10426
14/01/14 05:47:45 INFO mapred.JobClient: File Output Format Counters
14/01/14 05:47:45 INFO mapred.JobClient: Bytes Written=25
14/01/14 05:47:45 INFO mapred.JobClient: FileSystemCounters
14/01/14 05:47:45 INFO mapred.JobClient: FILE_BYTES_READ=55
14/01/14 05:47:45 INFO mapred.JobClient: HDFS_BYTES_READ=253
14/01/14 05:47:45 INFO mapred.JobClient: FILE_BYTES_WRITTEN=172644
14/01/14 05:47:45 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=25
14/01/14 05:47:45 INFO mapred.JobClient: File Input Format Counters
14/01/14 05:47:45 INFO mapred.JobClient: Bytes Read=25
14/01/14 05:47:45 INFO mapred.JobClient: Map-Reduce Framework
14/01/14 05:47:45 INFO mapred.JobClient: Map output materialized bytes=61
14/01/14 05:47:45 INFO mapred.JobClient: Map input records=2
14/01/14 05:47:45 INFO mapred.JobClient: Reduce shuffle bytes=61
14/01/14 05:47:45 INFO mapred.JobClient: Spilled Records=8
14/01/14 05:47:45 INFO mapred.JobClient: Map output bytes=41
14/01/14 05:47:45 INFO mapred.JobClient: Total committed heap usage (bytes)=248127488
14/01/14 05:47:45 INFO mapred.JobClient: CPU time spent (ms)=3820
14/01/14 05:47:45 INFO mapred.JobClient: Combine input records=4
14/01/14 05:47:45 INFO mapred.JobClient: SPLIT_RAW_BYTES=228
14/01/14 05:47:45 INFO mapred.JobClient: Reduce input records=4
14/01/14 05:47:45 INFO mapred.JobClient: Reduce input groups=3
14/01/14 05:47:45 INFO mapred.JobClient: Combine output records=4
14/01/14 05:47:45 INFO mapred.JobClient: Physical memory (bytes) snapshot=322818048
14/01/14 05:47:45 INFO mapred.JobClient: Reduce output records=3
14/01/14 05:47:45 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1040166912
14/01/14 05:47:45 INFO mapred.JobClient: Map output records=4
zhangzhen@ubuntu:~/software/hadoop-1.2.1$ bin/hadoop dfs -ls
Found 2 items
drwxr-xr-x - zhangzhen supergroup 0 2014-01-14 04:49 /user/zhangzhen/in
drwxr-xr-x - zhangzhen supergroup 0 2014-01-14 05:47 /user/zhangzhen/out
zhangzhen@ubuntu:~/software/hadoop-1.2.1$ bin/hadoop dfs -ls ./out
Found 3 items
-rw-r--r-- 1 zhangzhen supergroup 0 2014-01-14 05:47 /user/zhangzhen/out/_SUCCESS
drwxr-xr-x - zhangzhen supergroup 0 2014-01-14 05:47 /user/zhangzhen/out/_logs
-rw-r--r-- 1 zhangzhen supergroup 25 2014-01-14 05:47 /user/zhangzhen/out/part-r-00000
zhangzhen@ubuntu:~/software/hadoop-1.2.1$ bin/hadoop dfs -cat ./out/*
hadoop 1
hello 2
world 1
Notes:

1. Even when you follow the steps above to configure pseudo-distributed Hadoop, problems can still come up: jps may show NameNode or DataNode missing from the list of started daemons. When a daemon fails to start, use the logs to pin down the cause.

Problem: the NameNode does not start. (In the session below, the format command was first typed without the dash, so args = [format] only prints the usage message and nothing is actually formatted.)
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ ./hadoop namenode format
14/01/14 04:15:53 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ubuntu/127.0.1.1
STARTUP_MSG: args = [format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_45
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
14/01/14 04:15:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-namenode-ubuntu.out
localhost: starting datanode, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-secondarynamenode-ubuntu.out
starting jobtracker, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-tasktracker-ubuntu.out
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ jps
12020 DataNode
12249 SecondaryNameNode
12331 JobTracker
12571 TaskTracker
12639 Jps
Solution:

conf/hdfs-site.xml originally configured a path for the name directory. Comment that entry out so the default is used, and create the data directory yourself with mkdir data at the specified path, leaving it empty.

Re-format and restart Hadoop. (Note in the session below that answering the re-format prompt with a lowercase y aborts the format; Hadoop 1.x only accepts an uppercase Y.)
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ ./hadoop namenode -format
14/01/14 04:27:42 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ubuntu/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_45
************************************************************/
Re-format filesystem in /home/zhangzhen/software/hadoop-1.2.1/dfs/name ? (Y or N) y
Format aborted in /home/zhangzhen/software/hadoop-1.2.1/dfs/name
14/01/14 04:27:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-namenode-ubuntu.out
localhost: starting datanode, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-secondarynamenode-ubuntu.out
starting jobtracker, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /home/zhangzhen/software/hadoop-1.2.1/libexec/../logs/hadoop-zhangzhen-tasktracker-ubuntu.out
zhangzhen@ubuntu:~/software/hadoop-1.2.1/bin$ jps
17529 TaskTracker
17193 SecondaryNameNode
16713 NameNode
17594 Jps
17286 JobTracker
16957 DataNode
And this time everything starts!

2. When problem 1 above occurs, a daemon may be failing to start because of the firewall. Turn it off by running, in turn:

sudo apt-get install ufw
sudo ufw status   (check the current state)
sudo ufw disable

Then restart Hadoop.

3. A common configuration problem: after files have been uploaded into the data directory and the machine is rebooted, starting the Hadoop services and running jps shows one daemon missing. The usual workaround is to re-format the namenode: in the bin directory run ./hadoop namenode -format, then restart with ./start-all.sh. But having to format the namenode after every reboot can easily lead to data loss; we ran into exactly this during our own install. It is usually a namenode or datanode issue: check the hadoop-zhangzhen-datanode-ubuntu.log or hadoop-zhangzhen-namenode-ubuntu.log file to see where the problem lies.
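For example, a quick way to scan the most recent NameNode log for the failure (file names follow the user@host of this walkthrough; adjust them to your own machine):

```shell
# Show the tail of the NameNode log and grep it for errors;
# the log directory and file name match this walkthrough's setup.
cd /home/zhangzhen/software/hadoop-1.2.1/logs
tail -n 50 hadoop-zhangzhen-namenode-ubuntu.log
grep -iE "error|exception" hadoop-zhangzhen-namenode-ubuntu.log | tail -n 20
```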
4. For a classification of common errors, see: http://blog.csdn.net/yonghutwo/article/details/9206059
Copyright©BUAA