Building a Big Data Cluster: Installing Hadoop 3.0.0 on Linux

Contents

I. Installation Preparation

1. Download address

2. Reference documentation

3. Passwordless SSH setup

4. ZooKeeper installation

5. Cluster role assignment

II. Extract and Install

III. Environment Variable Configuration

IV. Edit the Configuration Files

1. Check disk space

2. Edit the configuration files

V. Initialize the Cluster

1. Start ZooKeeper

2. Initialize HA metadata in ZooKeeper

3. Start ZKFC

4. Start the JournalNodes

5. Format the NameNode

6. Synchronize the standby NameNode

7. Start HDFS

8. Check cluster status

9. Access the cluster

VI. Cluster High-Availability Test

1. Stop the active NameNode

2. Check the standby NameNode

3. Restart the stopped NameNode

4. Check the state of both NameNodes


I. Installation Preparation

1. Download address

https://www.apache.org/dyn/closer.cgi/hadoop/common

2. Reference documentation

https://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/ClusterSetup.html

3. Passwordless SSH setup

https://blog.csdn.net/qq262593421/article/details/105325593
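
For reference, the passwordless SSH setup in the linked post boils down to generating a key pair and pushing the public key to every node. A minimal sketch, assuming the root user and the five hosts used below (hadoop001 through hadoop005):

# generate a key pair without a passphrase (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# append the public key to authorized_keys on every node, including this one
for host in hadoop001 hadoop002 hadoop003 hadoop004 hadoop005; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done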

4. ZooKeeper installation

https://blog.csdn.net/qq262593421/article/details/106955485

5. Cluster role assignment

Hadoop cluster role        Cluster nodes
NameNode                   hadoop001, hadoop002
DataNode                   hadoop003, hadoop004, hadoop005
JournalNode                hadoop001, hadoop002, hadoop003
ResourceManager            hadoop001, hadoop002
NodeManager                hadoop003, hadoop004, hadoop005
DFSZKFailoverController    hadoop001, hadoop002

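All of the configuration below refers to these machines by hostname, so every node must be able to resolve hadoop001 through hadoop005. A minimal /etc/hosts sketch (the IP addresses are placeholders; substitute the real ones):

# /etc/hosts on every node (example addresses only)
192.168.1.101  hadoop001
192.168.1.102  hadoop002
192.168.1.103  hadoop003
192.168.1.104  hadoop004
192.168.1.105  hadoop005
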
II. Extract and Install

Extract the archive:

cd /usr/local/hadoop
tar zxpf hadoop-3.0.0.tar.gz

Create a symbolic link:

ln -s  hadoop-3.0.0 hadoop

III. Environment Variable Configuration

Edit the /etc/profile file:

vim /etc/profile

Add the following lines:

export HADOOP_HOME=/usr/local/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

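Reload the profile and confirm the hadoop command resolves before moving on:

source /etc/profile
hadoop version    # should report Hadoop 3.0.0
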
IV. Edit the Configuration Files

1. Check disk space

First check the mounted disk space, so that Hadoop's data is not placed under a mount point with little room:

df -h

The disk totals 800 GB, of which 741 GB is mounted at /home, so the directories configured below all start with /home.

2. Edit the configuration files

workers

hadoop003
hadoop004
hadoop005

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://ns1</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/home/cluster/hadoop/data/tmp</value>
	</property>
	<property>
		<name>io.file.buffer.size</name>
		<value>131072</value>
		<description>Size of read/write buffer used in SequenceFiles</description>
	</property>
	<property>
		<name>ha.zookeeper.quorum</name>
		<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
		<description>DFSZKFailoverController</description>
	</property>
</configuration>

hadoop-env.sh

export HDFS_NAMENODE_OPTS="-XX:+UseParallelGC -Xmx4g"
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export JAVA_HOME=/usr/java/jdk1.8
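
hadoop-env.sh points JAVA_HOME at /usr/java/jdk1.8, so verify a JDK actually exists at that path on every node, for example:

/usr/java/jdk1.8/bin/java -version    # should report a 1.8.x JVM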

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/home/cluster/hadoop/data/nn</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/home/cluster/hadoop/data/dn</value>
	</property>
	<!--
	<property>
        <name>dfs.data.dir</name>
        <value>/home/cluster/hadoop/data/hdfs/data</value>
    </property>
	-->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/cluster/hadoop/data/jn</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>hadoop001,hadoop002</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.hadoop001</name>
        <value>hadoop001:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.hadoop001</name>
        <value>hadoop001:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.hadoop002</name>
        <value>hadoop002:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.hadoop002</name>
        <value>hadoop002:50070</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled.ns1</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
	<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
	<property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
	<property>
		<name>dfs.blocksize</name>
        <value>64M</value>
		<!-- <value>128M</value> -->
		<description>HDFS block size; 64MB is used here (128MB is typical for large file-systems)</description>
	</property>
	<property>
		<name>dfs.namenode.handler.count</name>
		<value>100</value>
		<description>More NameNode server threads to handle RPCs from large number of DataNodes.</description>
	</property>

    <!-- Addresses of the JournalNodes that provide the shared edits storage. The active NameNode writes to them and the
         standby NameNode reads from them to stay up to date with every file-system change the active NameNode makes.
         Several JournalNode hosts are listed, but only this single URI is configured; its form is
         qjournal://host1:port1;host2:port2;host3:port3/journalId, where the journal ID uniquely identifies this nameservice
         and allows one set of JournalNodes to provide storage for multiple federated namesystems. -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/ns1</value>
    </property>
    <!-- Fencing method: ensures that only one NameNode is active at any point in time. The JournalNodes only allow a single
         NameNode to write, but to cover unexpected situations the other machine must be fenced, so that one NameNode can
         promote itself to active while the other is demoted to standby. -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>

</configuration>
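
The data directories referenced in core-site.xml and hdfs-site.xml live under /home/cluster/hadoop/data. Hadoop creates them when it formats or starts, but creating them up front makes it easy to confirm that /home has the expected space and permissions; a sketch, run on the nodes that host each role:

# NameNode hosts (hadoop001, hadoop002)
mkdir -p /home/cluster/hadoop/data/nn /home/cluster/hadoop/data/tmp
# DataNode hosts (hadoop003, hadoop004, hadoop005)
mkdir -p /home/cluster/hadoop/data/dn /home/cluster/hadoop/data/tmp
# JournalNode hosts (hadoop001, hadoop002, hadoop003)
mkdir -p /home/cluster/hadoop/data/jn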

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<!-- Configurations for MapReduce Applications -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
		<description>Execution framework set to Hadoop YARN.</description>
	</property>
	<property>
		<name>mapreduce.map.memory.mb</name>
		<value>4096</value>
		<!-- <value>1536</value> -->
		<description>Larger resource limit for maps.</description>
	</property>
	<property>
		<name>mapreduce.map.java.opts</name>
		<value>-Xmx4096M</value>
		<!-- <value>-Xmx2048M</value> -->
		<description>Larger heap-size for child jvms of maps.</description>
	</property>
	<property>
		<name>mapreduce.reduce.memory.mb</name>
		<value>4096</value>
		<!-- <value>3072</value> -->
		<description>Larger resource limit for reduces.</description>
	</property>
	<property>
		<name>mapreduce.reduce.java.opts</name>
		<value>-Xmx4096M</value>
		<description>Larger heap-size for child jvms of reduces.</description>
	</property>
	<property>
		<name>mapreduce.task.io.sort.mb</name>
		<value>2047</value>
		<!-- <value>1024</value> -->
		<description>Higher memory-limit while sorting data for efficiency (must be below 2048MB).</description>
	</property>
	<property>
		<name>mapreduce.task.io.sort.factor</name>
		<value>400</value>
		<!-- <value>200</value> -->
		<description>More streams merged at once while sorting files.</description>
	</property>
	<property>
		<name>mapreduce.reduce.shuffle.parallelcopies</name>
		<value>200</value>
		<!-- <value>100</value> -->
		<description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
	</property>
	<!-- Configurations for MapReduce JobHistory Server -->
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>hadoop001:10020</value>
		<description>MapReduce JobHistory Server host:port. Default port is 10020.</description>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>hadoop001:19888</value>
		<description>MapReduce JobHistory Server Web UI host:port. Default port is 19888.</description>
	</property>
	<property>
		<name>mapreduce.jobhistory.intermediate-done-dir</name>
		<value>/tmp/mr-history/tmp</value>
		<description>Directory where history files are written by MapReduce jobs.</description>
	</property>
	<property>
		<name>mapreduce.jobhistory.done-dir</name>
		<value>/tmp/mr-history/done</value>
		<description>Directory where history files are managed by the MR JobHistory Server.</description>
	</property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
	<property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
	</property>
	<!-- Whether automatic failover is enabled. By default, automatic failover is enabled when HA is enabled. -->
	<property>
	    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
	    <value>true</value>
	</property>
	<!-- Use the embedded automatic failover elector. By default, the embedded elector is used when HA is enabled. -->
	<property>
	    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
	    <value>true</value>
	</property>
	<property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-rm-cluster</value>
	</property>
	<property>
	    <name>yarn.resourcemanager.ha.rm-ids</name>
	    <value>rm1,rm2</value>
	</property>
	<property>
	    <name>yarn.resourcemanager.hostname.rm1</name>
	    <value>hadoop001</value>
	</property>
	<property>
	    <name>yarn.resourcemanager.hostname.rm2</name>
	    <value>hadoop002</value>
	</property>
	<!-- Enable ResourceManager automatic recovery -->
	<property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
	</property>
	<!-- ZooKeeper addresses -->
	<property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
	</property>
	<property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
	</property>
	<property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>hadoop001:8032</value>
	</property>
	<property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>hadoop002:8032</value>
	</property>
	<property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>hadoop001:8034</value>
	</property>
	<property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop001:8088</value>
	</property>
	<property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>hadoop002:8034</value>
	</property>
	<property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop002:8088</value>
	</property>

	<!-- Configurations for ResourceManager and NodeManager -->
	<property>
		<name>yarn.acl.enable</name>
		<value>true</value>
		<description>Enable ACLs? Defaults to false.</description>
	</property>
	<property>
		<name>yarn.admin.acl</name>
		<value>*</value>
	</property>	
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>false</value>
		<description>Configuration to enable or disable log aggregation</description>
	</property>
	<!-- Configurations for ResourceManager -->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>hadoop001</value>
		<description>host Single hostname that can be set in place of setting all yarn.resourcemanager*address resources. Results in default ports for ResourceManager components.</description>
	</property>
	<!-- spark on yarn -->
	<property>
		<name>yarn.scheduler.maximum-allocation-mb</name>
		<value>20480</value>
	</property>
	<property>
		<name>yarn.nodemanager.resource.memory-mb</name>
		<value>28672</value>
	</property>
	<!-- Configurations for NodeManager -->
	<property>
		<name>yarn.nodemanager.log.retain-seconds</name>
		<value>10800</value>
	</property>
	<property>
		<name>yarn.nodemanager.log-dirs</name>
		<value>/home/cluster/yarn/log/1,/home/cluster/yarn/log/2,/home/cluster/yarn/log/3</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
		<description>Shuffle service that needs to be set for Map Reduce applications.</description>
	</property>
	<!-- 
	<property>
		<name>yarn.nodemanager.env-whitelist</name>
		<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
		<description>Environment properties to be inherited by containers from NodeManagers. For mapreduce applications, HADOOP_MAPRED_HOME should be added in addition to the default values.</description>
	</property>	
	 -->
	<!-- Configurations for History Server (Needs to be moved elsewhere) -->
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>-1</value>
	</property>
	<property>
		<name>yarn.log-aggregation.retain-check-interval-seconds</name>
		<value>-1</value>
	</property>

</configuration>
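
All five nodes need identical copies of these files under /usr/local/hadoop/hadoop/etc/hadoop. A simple way to distribute them, assuming they were edited on hadoop001 and passwordless SSH is in place:

cd /usr/local/hadoop/hadoop/etc/hadoop
for host in hadoop002 hadoop003 hadoop004 hadoop005; do
    scp workers core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml hadoop-env.sh \
        root@$host:/usr/local/hadoop/hadoop/etc/hadoop/
done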

V. Initialize the Cluster

1. Start ZooKeeper

Hadoop's HA mechanism depends on ZooKeeper, so start the ZooKeeper cluster first.

If the ZooKeeper cluster has not been set up yet, see: https://blog.csdn.net/qq262593421/article/details/106955485

zkServer.sh start
zkServer.sh status

2. Initialize HA metadata in ZooKeeper

hdfs zkfc -formatZK
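
Afterwards a znode for the ns1 nameservice should exist in ZooKeeper; an optional sanity check with the ZooKeeper CLI:

zkCli.sh -server hadoop001:2181 ls /hadoop-ha    # should list [ns1]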

3. Start ZKFC

hdfs --daemon start zkfc

4. Start the JournalNodes

The JournalNodes must be started before the NameNode is formatted, otherwise the format will fail.

Three JournalNodes are configured here: hadoop001, hadoop002 and hadoop003. Run the following on each of them:

hdfs --daemon start journalnode
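
Confirm the daemon is actually up on all three hosts before formatting the NameNode:

jps | grep JournalNode    # should print one JournalNode line on hadoop001, hadoop002 and hadoop003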

5. Format the NameNode

Run on the first NameNode node (hadoop001):

hdfs namenode -format

6. Synchronize the standby NameNode

Run on the other NameNode (hadoop002):

hdfs namenode -bootstrapStandby

7. Start HDFS

start-all.sh
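
Once start-all.sh has finished, jps on each node should roughly match the role table from section I (plus the ZooKeeper QuorumPeerMain processes started earlier):

jps
# expected on hadoop001 / hadoop002: NameNode, DFSZKFailoverController, ResourceManager, JournalNode
# expected on hadoop003:             DataNode, NodeManager, JournalNode
# expected on hadoop004 / hadoop005: DataNode, NodeManager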

If the format fails, or the error below appears, delete the corresponding directory on that node and format again:

Directory is in an inconsistent state: Can't format the storage directory because the current directory is not empty.

rm -rf /home/cluster/hadoop/data/jn/ns1/*
hdfs namenode -format

8. Check cluster status

hdfs dfsadmin -report

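The HA state of each NameNode can also be queried from the command line, using the NameNode IDs defined in dfs.ha.namenodes.ns1:

hdfs haadmin -getServiceState hadoop001    # prints active or standby
hdfs haadmin -getServiceState hadoop002
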
9. Access the cluster

http://hadoop001:50070/

http://hadoop002:50070/

 

VI. Cluster High-Availability Test

1. Stop the active NameNode

Run on the NameNode currently in the active state (hadoop001):

hdfs --daemon stop namenode

2. Check the standby NameNode

Open http://hadoop002:50070/ and you can see that hadoop002 has switched from standby to active.

3. Restart the stopped NameNode

Once stopped, that NameNode's web page is unreachable; restart it to bring it back:

hdfs --daemon start namenode

4. Check the state of both NameNodes

http://hadoop001:50070/

http://hadoop002:50070/
