Basic syntax
hadoop fs + a specific command, or hdfs dfs + a specific command
The two forms are identical under the hood; only the name differs.
Command list
Common commands in practice
- Start the Hadoop cluster
[redhat@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
[redhat@hadoop102 hadoop-2.7.2]$ sbin/start-yarn.sh
- -help: print usage information for a command
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -help rm
- -ls: list directory contents
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -ls /
- -mkdir: create a directory on HDFS
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -mkdir -p /redhat/hadoop
- -moveFromLocal: cut and paste a file from the local file system to HDFS
[redhat@hadoop102 hadoop-2.7.2]$ touch hadoop.txt
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -moveFromLocal ./hadoop.txt /redhat/hadoop
- -cat: display the contents of a file
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -cat /redhat/hadoop/hadoop.txt
- -appendToFile: append a local file to the end of an existing HDFS file
[redhat@hadoop102 hadoop-2.7.2]$ touch zhixiong.txt
[redhat@hadoop102 hadoop-2.7.2]$ vim zhixiong.txt
Enter: zhou zhi xiong
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -appendToFile zhixiong.txt /redhat/hadoop/hadoop.txt
- -chgrp, -chmod, -chown: same usage as in Linux; change a file's group, permissions, or owner
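Since these commands follow Linux semantics, mode 666 in the example below means rw-rw-rw- (read and write for owner, group, and others, no execute). A quick local illustration of what 666 grants, using an ordinary Linux file rather than HDFS:

```shell
# Create a scratch file and apply the same mode used in the HDFS example.
tmp=$(mktemp)
chmod 666 "$tmp"
# Print the octal mode (GNU coreutils `stat -c %a`); chmod is not affected by umask.
stat -c '%a' "$tmp"    # prints: 666
rm -f "$tmp"
```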
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -chmod 666 /redhat/hadoop/hadoop.txt
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -chown ubuntu:ubuntu /redhat/hadoop/hadoop.txt
- -copyFromLocal: copy a file from the local file system to an HDFS path
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -copyFromLocal README.txt /redhat/hadoop/
- -copyToLocal: copy a file from HDFS to the local file system
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -copyToLocal /redhat/hadoop/hadoop.txt ./
- -cp: copy from one HDFS path to another HDFS path
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -cp /redhat/hadoop/hadoop.txt /redhat/linux/linux.txt
- -mv: move files within HDFS
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -mv /redhat/hadoop/hadoop.txt /redhat/linux
- -get: equivalent to copyToLocal
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -get /redhat/hadoop/hadoop.txt ./
- -getmerge: merge and download multiple files; for example, the HDFS directory /usr/redhat/test contains the files test.1, test.2, test.3, …
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -getmerge /usr/redhat/test/* ./test.txt
- -put: equivalent to copyFromLocal
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -put ./test.txt /usr/redhat/test/
- -tail: show the end of a file
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -tail /usr/redhat/test.txt
- -rm: delete a file or directory
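Conceptually, -getmerge concatenates the source files in order into a single local file. A local analogy using plain cat on ordinary files (not HDFS), with the same test.1/test.2/test.3 layout as the example:

```shell
# Build a scratch directory mimicking the HDFS example's layout.
dir=$(mktemp -d)
printf 'one\n'   > "$dir/test.1"
printf 'two\n'   > "$dir/test.2"
printf 'three\n' > "$dir/test.3"
# Locally, "merge" is just ordered concatenation (glob expands in sorted order).
cat "$dir"/test.* > "$dir/test.txt"
cat "$dir/test.txt"    # prints: one, two, three on separate lines
rm -rf "$dir"
```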
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -rm /usr/redhat/test/test.txt
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -rm -r -skipTrash /redhat/linux/
- -rmdir: remove an empty directory
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -rmdir /test
- -du: show the size of a directory
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -du -s -h /usr/redhat/test
2.7 K  /usr/redhat/test/
- -setrep: set the replication factor of a file in HDFS
[redhat@hadoop102 hadoop-2.7.2]$ hadoop fs -setrep 10 /usr/redhat/test.txt
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines, there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replica count actually reach 10.
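In other words, the effective replica count is bounded by the number of live DataNodes: min(requested, datanodes). A sketch of this rule with the hypothetical numbers from the example (10 requested, 3 DataNodes):

```shell
requested=10   # value passed to -setrep, recorded in NameNode metadata
datanodes=3    # live DataNodes in the cluster
# HDFS places at most one replica of a block per DataNode,
# so the achievable count is the smaller of the two numbers.
effective=$(( requested < datanodes ? requested : datanodes ))
echo "effective replicas: $effective"    # prints: effective replicas: 3
```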