Environment
Hostname | IP | Processes |
---|---|---|
nn.hadoop.data.example.net | 172.16.156.220 | NameNode, Master, ResourceManager, SecondaryNameNode, JobHistoryServer |
dn1.hadoop.data.example.net | 172.16.156.221 | NodeManager, DataNode, Worker |
dn2.hadoop.data.example.net | 172.16.156.222 | NodeManager, DataNode, Worker |
Install the following packages with yum (some of them may not be needed):
yum install pcre-devel openssl openssl-devel openssh-clients htop gcc zlib lrzsz zip unzip vim telnet-server ncurses wget net-tools
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
Configure the hosts file
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.156.220 nn.hadoop.data.example.net
172.16.156.221 dn1.hadoop.data.example.net
172.16.156.222 dn2.hadoop.data.example.net
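As a sanity check, a small script can confirm that every cluster entry made it into the hosts file. A minimal sketch, with the IP/hostname pairs taken from the table above (`check_hosts` is a hypothetical helper name):

```shell
#!/bin/sh
# Check that each cluster IP maps to its hostname in the given hosts file.
# Usage: check_hosts /etc/hosts
check_hosts() {
    file="$1"
    missing=0
    for entry in \
        "172.16.156.220:nn.hadoop.data.example.net" \
        "172.16.156.221:dn1.hadoop.data.example.net" \
        "172.16.156.222:dn2.hadoop.data.example.net"
    do
        ip="${entry%%:*}"
        host="${entry##*:}"
        # The line starting with this IP must also mention this hostname.
        if ! grep "^${ip}" "$file" | grep -qF "$host"; then
            echo "missing: $ip $host"
            missing=1
        fi
    done
    return $missing
}
```

Run it on every node, since each machine needs the same three entries.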
Install the JDK and Scala
0. Create the directories
mkdir -p /app/java
mkdir -p /app/scala
1. Download
Download the JDK
wget http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.tar.gz
If the link above has expired, download the JDK manually and upload it to the server.
Download Scala
wget http://downloads.lightbend.com/scala/2.12.1/scala-2.12.1.tgz
2. Move & extract
mv jdk-8u121-linux-x64.tar.gz /app/java
cd /app/java && tar -zxvf jdk-8u121-linux-x64.tar.gz
mv scala-2.12.1.tgz /app/scala
cd /app/scala && tar -zxvf scala-2.12.1.tgz
3. Set ownership and permissions (run the chown after creating the hadoop user in the next step)
chmod -R 775 /app/
chown -R hadoop /app/
Create the hadoop user
useradd hadoop
passwd hadoop
Unless otherwise noted, all steps from here on are performed as the hadoop user.
Passwordless SSH login
Generate a key pair (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub):
ssh-keygen -t rsa
Copy the public key to every machine, including this one:
ssh-copy-id -i nn.hadoop.data.example.net
ssh-copy-id -i dn1.hadoop.data.example.net
ssh-copy-id -i dn2.hadoop.data.example.net
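The three calls can equally be driven by a loop over the host list. The sketch below is a dry run that only prints each command (`copy_keys` is a hypothetical helper name; remove the `echo` to actually push the keys):

```shell
#!/bin/sh
# Print the ssh-copy-id command for every node, including the NameNode
# itself, so that local SSH logins are passwordless too.
# Dry run: remove "echo" to actually copy the keys.
copy_keys() {
    for host in nn.hadoop.data.example.net \
                dn1.hadoop.data.example.net \
                dn2.hadoop.data.example.net
    do
        echo ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "hadoop@$host"
    done
}
copy_keys
```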
Install Hadoop
0. Create the directories
mkdir -p /app/hadoop/data
mkdir -p /app/hadoop/name
mkdir -p /app/hadoop/tmp
1. Download Hadoop
The closer.cgi link returns a mirror-picker HTML page rather than the tarball itself, so fetch it from the Apache archive directly:
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
2. Move & extract
mv hadoop-2.7.3.tar.gz /app/hadoop
cd /app/hadoop && tar -zxvf hadoop-2.7.3.tar.gz
3. Edit the configuration files (apart from /etc/profile, these all live in $HADOOP_HOME/etc/hadoop)
/etc/profile (requires root)
export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
slaves
dn1.hadoop.data.example.net
dn2.hadoop.data.example.net
hadoop-env.sh
# export JAVA_HOME=${JAVA_HOME}
change it to
export JAVA_HOME=/app/java/jdk1.8.0_121/
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://nn.hadoop.data.example.net:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>nn.hadoop.data.example.net:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/app/hadoop/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/app/hadoop/data</value>
</property>
</configuration>
mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>nn.hadoop.data.example.net:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>nn.hadoop.data.example.net:19888</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>nn.hadoop.data.example.net</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
4. Format the NameNode (hdfs namenode -format replaces the hadoop namenode -format form, which is deprecated in 2.x)
hdfs namenode -format
5. Copy the files to the other machines
Copy /app/hadoop (including data, name, tmp, and the configured hadoop-2.7.3 directory) to the other machines.
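One way to script this step is a loop over the DataNodes. A dry-run sketch using rsync (scp -r works just as well; `sync_hadoop` is a hypothetical helper name — remove the `echo` to perform the copy):

```shell
#!/bin/sh
# Push the whole /app/hadoop tree to both DataNodes.
# Dry run: remove "echo" to actually transfer the files.
sync_hadoop() {
    for host in dn1.hadoop.data.example.net dn2.hadoop.data.example.net
    do
        echo rsync -a /app/hadoop/ "hadoop@$host:/app/hadoop/"
    done
}
sync_hadoop
```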
6. Start HDFS
start-dfs.sh
7. Start YARN
start-yarn.sh
8. Start the JobHistory server
mr-jobhistory-daemon.sh start historyserver
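After these three start scripts, `jps` on the NameNode should show the daemons listed in the table at the top. A small helper (`check_nn_daemons` is a hypothetical name) can verify a captured `jps` listing:

```shell
#!/bin/sh
# Verify that a jps listing contains all daemons expected on the NameNode.
# Usage on the NameNode: check_nn_daemons "$(jps)"
check_nn_daemons() {
    for daemon in NameNode SecondaryNameNode ResourceManager JobHistoryServer
    do
        # grep -w keeps "NameNode" from matching inside "SecondaryNameNode".
        if ! printf '%s\n' "$1" | grep -qw "$daemon"; then
            echo "not running: $daemon"
            return 1
        fi
    done
    echo "all NameNode daemons up"
}
```

On the DataNodes, the expected processes are NodeManager and DataNode instead.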
Install Spark 2
0. Create the directory
mkdir -p /app/spark
1. Download Spark 2
As with Hadoop, the closer.lua link returns a mirror-picker page, so fetch from the Apache archive directly:
wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
2. Move & extract
mv spark-2.1.0-bin-hadoop2.7.tgz /app/spark
cd /app/spark && tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
3. Edit the configuration files (spark-env.sh and slaves live in $SPARK_HOME/conf)
/etc/profile (requires root)
export SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"
spark-env.sh
cp spark-env.sh.template spark-env.sh
export SCALA_HOME=/app/scala/scala-2.12.1
export JAVA_HOME=/app/java/jdk1.8.0_121
export SPARK_MASTER_IP=nn.hadoop.data.example.net
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/app/hadoop/hadoop-2.7.3/etc/hadoop
slaves (create it from the template first: cp slaves.template slaves)
dn1.hadoop.data.example.net
dn2.hadoop.data.example.net
4. Copy the files to the other machines
Copy /app/spark to the other machines.
5. Start Spark
/app/spark/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
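A quick smoke test is to submit the bundled SparkPi example to the standalone master. The sketch below only prints the command (`submit_pi` is a hypothetical helper); spark://nn...:7077 is the default standalone master URL, and the jar name assumes the Scala 2.11 build shipped in this distribution — adjust if yours differs:

```shell
#!/bin/sh
# Print the spark-submit command for the SparkPi smoke test.
# Dry run: remove "echo" to actually submit the job.
SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
submit_pi() {
    echo "$SPARK_HOME/bin/spark-submit" \
        --master spark://nn.hadoop.data.example.net:7077 \
        --class org.apache.spark.examples.SparkPi \
        "$SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar" 100
}
submit_pi
```

If the job finishes and prints an approximation of Pi, the Master and both Workers are wired up correctly; you can also check the Master web UI on port 8080.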
Installation complete ^_^
Because the environment variables were changed several times along the way, here is the complete set; in fact you could paste it into /etc/profile up front, before doing any of the configuration.
export JAVA_HOME=/app/java/jdk1.8.0_121
export SCALA_HOME=/app/scala/scala-2.12.1
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"
References
CentOS 6.5 Hadoop 2.7.3 cluster environment setup (in Chinese)
http://blog.csdn.net/mxxlevel/article/details/52653086
The Way of Spark (advanced), Part 1: building a Spark 1.5.0 cluster (in Chinese)
https://yq.aliyun.com/articles/60309?spm=5176.8251999.569296.66.0H8Bal
Hadoop 2.7.3 + Spark 2.1.0 fully distributed environment: complete setup walkthrough (in Chinese)
http://www.cnblogs.com/purstar/p/6293605.html