Preparation:
Three virtual machines with the JDK installed and the network configured between them.
See Part 1 of this series: https://blog.csdn.net/tanxiang21/article/details/104206881
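For the nodes to reach each other by name, each machine's /etc/hosts should map every node. A minimal sketch (the pro02/pro03 hostnames and the exact IPs are assumptions following the bigdata-pro01.kfk.com naming; substitute your own addresses):

```
# /etc/hosts on all three nodes (example addresses)
192.168.30.150  bigdata-pro01.kfk.com
192.168.30.151  bigdata-pro02.kfk.com
192.168.30.152  bigdata-pro03.kfk.com
```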
1. Configuration
In etc/hadoop/core-site.xml (inside the <configuration> root element), point the default filesystem at the NameNode:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata-pro01.kfk.com:9000</value>
</property>
In etc/hadoop/hdfs-site.xml, set the block replication factor:
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
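For reference, the complete core-site.xml ends up as this minimal sketch (only the property shown above, wrapped in the standard <configuration> root):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata-pro01.kfk.com:9000</value>
    </property>
</configuration>
```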
2. Start up
Format the NameNode (first run only; reformatting destroys existing HDFS metadata), then start the daemons:
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
Check the web UI:
http://bigdata-pro01.kfk.com:50070/dfshealth.html#tab-overview
http://bigdata-pro01.kfk.com:50070/dfshealth.html#tab-datanode
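Besides the web UI, a quick sanity check is jps (ships with the JDK), which lists the Java processes running on the node:

```shell
# After step 2, both daemons should appear on this node
jps | grep -E 'NameNode|DataNode'
```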
3. Distribute to the other nodes
scp -r hadoop-2.5.0/ [email protected]:/opt/modules/
scp -r hadoop-2.5.0/ [email protected]:/opt/modules/
4. Start the DataNode on the other two machines
sbin/hadoop-daemon.sh start datanode
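As an alternative to logging into each machine, listing the worker hostnames in etc/hadoop/slaves lets you start the whole cluster from one node (requires passwordless SSH; the pro02/pro03 hostnames are assumptions following the pro01 naming):

```
bigdata-pro01.kfk.com
bigdata-pro02.kfk.com
bigdata-pro03.kfk.com
```

With that file in place on the NameNode host, sbin/start-dfs.sh starts the NameNode plus every listed DataNode in one command (the file is named workers in Hadoop 3.x).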
5. Create a file to try it out
bin/hdfs dfs -mkdir -p /user/kfk/data/
http://bigdata-pro01.kfk.com:50070/explorer.html#/
bin/hdfs dfs -put /opt/modules/hadoop-2.5.0/etc/hadoop/core-site.xml /user/kfk/data
bin/hdfs dfs -text /user/kfk/data/core-site.xml
(-text prints the file's contents)