I. Change the hostname
hostnamectl set-hostname hadoopxx
II. Modify the network configuration
1. Generate a UUID
The UUID uniquely identifies the network connection and must not duplicate the one on the original host.
uuidgen
2. Edit the /etc/sysconfig/network-scripts/ifcfg-ens33 file
ifconfig
cat /etc/sysconfig/network-scripts/ifcfg-ens33
cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens33.template
vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="f18dea2a-c2de-489d-b172-d52c385bbbf6"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.x.xxx"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DNS="192.168.0.1"
NM_CONTROLLED="no"
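The two per-node edits above (UUID and IPADDR) can be scripted instead of done by hand in vim. A minimal sketch, run here against a stand-in file so it is safe anywhere; on the real node, point IFCFG at /etc/sysconfig/network-scripts/ifcfg-ens33, and note that the target IP is an assumed example:

```shell
# Stand-in for /etc/sysconfig/network-scripts/ifcfg-ens33 (demo file, assumption)
IFCFG=ifcfg-ens33.demo
printf 'UUID="f18dea2a-c2de-489d-b172-d52c385bbbf6"\nIPADDR="192.168.0.100"\n' > "$IFCFG"

NEW_UUID=$(uuidgen)              # fresh UUID, as in step 1
NEW_IP=192.168.0.133             # assumed address for the new node
sed -i "s/^UUID=.*/UUID=\"$NEW_UUID\"/"   "$IFCFG"
sed -i "s/^IPADDR=.*/IPADDR=\"$NEW_IP\"/" "$IFCFG"
cat "$IFCFG"
```

Because the old UUID is replaced rather than appended, rerunning the script is harmless.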
3. Stop the NetworkManager service
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
4. Restart the network service
systemctl restart network
ifconfig
ping hao123.com
III. Modify the hosts file
cat /etc/hosts
The node IPs and hostnames to add:
192.168.0.133 hadoop4
192.168.0.134 hadoop5
echo "192.168.0.133 hadoop4
192.168.0.134 hadoop5" >> /etc/hosts
cat /etc/hosts
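The echo-append above adds duplicate lines if it is ever rerun. A sketch of an idempotent variant, using a stand-in file so it can run anywhere (point HOSTS at /etc/hosts on the real node):

```shell
# Stand-in hosts file (assumption); use /etc/hosts on the real node
HOSTS=hosts.demo
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Append each mapping only if it is not already present, so reruns stay safe
for entry in '192.168.0.133 hadoop4' '192.168.0.134 hadoop5'; do
    grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```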
reboot
IV. Configure passwordless SSH login
Big data basics, passwordless SSH login: https://blog.csdn.net/qq262593421/article/details/105325593
Notes:
1. Because this node was cloned, the original SSH key pair is unchanged; when rerunning ssh-keygen, just overwrite it.
2. The old passwordless login no longer works; empty the /root/.ssh/known_hosts and authorized_keys files and reconfigure.
cat /root/.ssh/known_hosts
> /root/.ssh/known_hosts
cat /root/.ssh/known_hosts
cat /root/.ssh/authorized_keys
> /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys
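After clearing the files, the key has to be regenerated and redistributed. A sketch under the assumption that the cluster hosts are hadoop1 through hadoop5 and root login works; the key path here is a local demo file so the generation step can run anywhere:

```shell
# Generate a fresh key pair non-interactively (demo path; on the node use
# /root/.ssh/id_rsa and let it overwrite the cloned key, per note 1 above)
rm -f id_rsa_demo id_rsa_demo.pub
ssh-keygen -t rsa -N '' -f ./id_rsa_demo -q
ls id_rsa_demo id_rsa_demo.pub

# Push the public key to every node (illustrative only -- needs the cluster up):
# for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do ssh-copy-id root@$h; done
```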
V. Modify the ZooKeeper configuration
1. Configure the zoo.cfg file
cd $ZOO_HOME/conf
cat $ZOO_HOME/conf/zoo.cfg
echo "server.4=hadoop4:2888:3888
server.5=hadoop5:2888:3888" >> $ZOO_HOME/conf/zoo.cfg
tail -n 10 $ZOO_HOME/conf/zoo.cfg
2. Configure the ZooKeeper myid
cat $ZOO_HOME/data/myid
# n is the ZooKeeper myid; keep incrementing it for each new node -- 4 and 5 here
echo "n" > $ZOO_HOME/data/myid
cat $ZOO_HOME/data/myid
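Rather than typing n by hand on each node, the id can be derived from the hostname's trailing digits. A sketch assuming the hadoopN naming used throughout this post; the hostname is hard-coded here so the snippet runs anywhere:

```shell
host=hadoop4                            # stand-in for $(hostname)
n=$(echo "$host" | grep -o '[0-9]*$')   # trailing digits -> myid
echo "$n"                               # on the node: echo "$n" > $ZOO_HOME/data/myid
```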
VI. Modify the Hadoop configuration
cd $HADOOP_HOME/etc/hadoop
cat $HADOOP_HOME/etc/hadoop/workers
# If the file does not end with a newline, add one first
echo "" >>$HADOOP_HOME/etc/hadoop/workers
echo "hadoop4
hadoop5" >> $HADOOP_HOME/etc/hadoop/workers
cat $HADOOP_HOME/etc/hadoop/workers
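The "add a newline first" step can be made conditional, so the file is only touched when its last byte is not already a newline. A sketch against a stand-in file (use $HADOOP_HOME/etc/hadoop/workers on the node):

```shell
WORKERS=workers.demo                       # stand-in path (assumption)
printf 'hadoop2\nhadoop3' > "$WORKERS"     # deliberately missing the final newline

# tail -c 1 yields a non-empty string only when the last byte is not a newline
if [ -n "$(tail -c 1 "$WORKERS")" ]; then echo >> "$WORKERS"; fi
printf 'hadoop4\nhadoop5\n' >> "$WORKERS"
cat "$WORKERS"
```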
Run on the NameNode:
# refresh the node list on the NameNode
hdfs dfsadmin -refreshNodes
# view node information
hdfs dfsadmin -report
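To confirm the refresh picked up the new nodes, the report can be filtered for its Name: lines, one per datanode. The excerpt below is simulated text so the pipeline itself can be exercised anywhere; on the cluster, pipe the real `hdfs dfsadmin -report` output instead (the 9866 port is the assumed Hadoop 3 datanode default):

```shell
# Simulated excerpt of `hdfs dfsadmin -report` (sample text only, assumption)
report='Name: 192.168.0.133:9866 (hadoop4)
Name: 192.168.0.134:9866 (hadoop5)'
echo "$report" | grep -c '^Name:'   # one line per reporting datanode -> 2
```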
vim $HADOOP_HOME/etc/hadoop/core-site.xml
Configure this property if you want to add ZKFC (the ZooKeeper failover controller):
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
<description>DFSZKFailoverController</description>
</property>
VII. Modify the HBase configuration
cd $HBASE_HOME/conf
cat $HBASE_HOME/conf/regionservers
echo "hadoop4
hadoop5" >> $HBASE_HOME/conf/regionservers
cat $HBASE_HOME/conf/regionservers
VIII. Modify the Spark configuration
1. Configure the worker nodes
cd $SPARK_HOME/conf
cat $SPARK_HOME/conf/slaves
echo "hadoop4
hadoop5" >> $SPARK_HOME/conf/slaves
cat $SPARK_HOME/conf/slaves
2. Configure Spark high availability
vim $SPARK_HOME/conf/spark-env.sh
# export SPARK_MASTER_IP=hadoop1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181 -Dspark.deploy.zookeeper.dir=/spark"
tail -n 20 $SPARK_HOME/conf/spark-env.sh