Getting Started with Spark (Part 2): Spark Environment Setup and the Development Environment

Standalone Single-Node Mode

  1. Hadoop environment (if you already have a Hadoop environment, skip ahead to the Spark installation below)
  • Set the CentOS process and open-file limits (requires a reboot)
[root@CentOS ~]# vi /etc/security/limits.conf
* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800
[root@CentOS ~]# reboot
  • Configure the hostname (requires a reboot)
[root@CentOS ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=CentOS
[root@CentOS ~]# reboot
  • Set up the IP mapping
[root@CentOS ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.111.132 CentOS
  • Turn off the firewall service
[root@CentOS ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@CentOS ~]# chkconfig iptables off
  • Install JDK 1.8+
[root@CentOS ~]# rpm -ivh jdk-8u191-linux-x64.rpm
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:jdk1.8                 ########################################### [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
[root@CentOS ~]# vi .bashrc 
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc
  • Configure passwordless SSH
[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a5:2d:f5:c3:22:83:cf:13:25:59:fb:c1:f4:63:06:d4 root@CentOS
The key's randomart image is:
+--[ RSA 2048]----+
|          ..+.   |
|         o + oE  |
|        o = o =  |
|       . B + + . |
|      . S o =    |
|       o = . .   |
|        +        |
|         .       |
|                 |
+-----------------+
[root@CentOS ~]# ssh-copy-id CentOS
The authenticity of host 'centos (192.168.111.132)' can't be established.
RSA key fingerprint is fa:1b:c0:23:86:ff:08:5e:83:ba:65:4c:e6:f2:1f:3b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.111.132' (RSA) to the list of known hosts.
root@centos's password:`enter the password here`
Now try logging into the machine, with "ssh 'CentOS'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
  • Configure HDFS

Extract hadoop-2.9.2.tar.gz into the system's /usr directory, then edit the core-site.xml and hdfs-site.xml configuration files.

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- NameNode access endpoint -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<!-- block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- host running the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- maximum number of files a DataNode serves concurrently -->
<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
</property>
<!-- DataNode request-handling thread count -->
<property>
        <name>dfs.datanode.handler.count</name>
        <value>6</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/slaves
CentOS
  • Configure the Hadoop environment variables
[root@CentOS ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc

  • Start the Hadoop services
[root@CentOS ~]# hdfs namenode -format # create the fsimage file needed for initialization
[root@CentOS ~]# start-dfs.sh
  2. Spark environment

Download spark-2.4.3-bin-without-hadoop.tgz and extract it into the /usr directory, rename the Spark directory to spark-2.4.3, then edit the spark-env.sh and spark-defaults.conf files.

Download URL: http://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.3/spark-2.4.3-bin-without-hadoop.tgz

  • Extract the Spark package and rename the extracted directory
[root@CentOS ~]# tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.4.3-bin-without-hadoop/ /usr/spark-2.4.3
[root@CentOS ~]# vi .bashrc
SPARK_HOME=/usr/spark-2.4.3
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export SPARK_HOME
[root@CentOS ~]# source .bashrc
  • Configure the Spark service
[root@CentOS spark-2.4.3]# cd /usr/spark-2.4.3/conf/
[root@CentOS conf]# mv spark-env.sh.template spark-env.sh
[root@CentOS conf]# mv slaves.template slaves
[root@CentOS conf]# vi slaves
CentOS
[root@CentOS conf]# vi spark-env.sh
SPARK_MASTER_HOST=CentOS
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_MEMORY=2g
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
SPARK_DIST_CLASSPATH=$(hadoop classpath)

export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH
  • Start the Spark processes
[root@CentOS ~]# cd /usr/spark-2.4.3/
[root@CentOS spark-2.4.3]# ./sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/spark-2.4.3/logs/spark-root-org.apache.spark.deploy.master.Master-1-CentOS.out
CentOS: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-2.4.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-CentOS.out
  • Test Spark
[root@CentOS spark-2.4.3]# ./bin/spark-shell \
					--master spark://CentOS:7077 \
					--deploy-mode client \
					--executor-cores 2

--executor-cores: in standalone mode, the number of cores the application is allocated on each Worker node. It cannot exceed the maximum core count of a single machine; if you do not know the per-machine maximum, use --total-executor-cores instead, which allocates as many cores as possible up to the given total.
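
These caps can also be set programmatically instead of on the command line. A minimal sketch, assuming you build the context yourself rather than using spark-shell (spark.executor.cores and spark.cores.max are the configuration keys behind --executor-cores and --total-executor-cores; the app name is made up for illustration):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://CentOS:7077")
  .setAppName("resource-demo")        // hypothetical app name
  .set("spark.executor.cores", "2")   // same effect as --executor-cores 2
  .set("spark.cores.max", "4")        // same effect as --total-executor-cores 4
val sc = new SparkContext(conf)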

scala> sc.textFile("hdfs:///words/t_words",5)
    .flatMap(_.split(" "))
    .map((_,1))
    .reduceByKey(_+_)
    .sortBy(_._1,true,3)
    .saveAsTextFile("hdfs:///results")

  • Task breakdown diagram for the above job (figure not reproduced here; see the lineage sketch below).
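
Without the diagram, the stage and task breakdown can still be inspected from the RDD lineage itself. A small sketch using RDD.toDebugString on the same pipeline (the saveAsTextFile action is left off so nothing actually runs; shuffle boundaries in the output mark the stage splits):

scala> val rdd = sc.textFile("hdfs:///words/t_words",5)
    .flatMap(_.split(" "))
    .map((_,1))
    .reduceByKey(_+_)
    .sortBy(_._1,true,3)

scala> println(rdd.toDebugString)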

Spark on YARN

  1. Hadoop environment (if you have already configured YARN, you can skip the Hadoop installation)
  • Set the CentOS process and open-file limits (requires a reboot)
[root@CentOS ~]# vi /etc/security/limits.conf
* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800
[root@CentOS ~]# reboot
  • Configure the hostname (requires a reboot)
[root@CentOS ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=CentOS
[root@CentOS ~]# reboot
  • Set up the IP mapping
[root@CentOS ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.111.132 CentOS
  • Turn off the firewall service
[root@CentOS ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@CentOS ~]# chkconfig iptables off
  • Install JDK 1.8+
[root@CentOS ~]# rpm -ivh jdk-8u191-linux-x64.rpm
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:jdk1.8                 ########################################### [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
[root@CentOS ~]# vi .bashrc 
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc
  • Configure passwordless SSH
[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a5:2d:f5:c3:22:83:cf:13:25:59:fb:c1:f4:63:06:d4 root@CentOS
The key's randomart image is:
+--[ RSA 2048]----+
|          ..+.   |
|         o + oE  |
|        o = o =  |
|       . B + + . |
|      . S o =    |
|       o = . .   |
|        +        |
|         .       |
|                 |
+-----------------+
[root@CentOS ~]# ssh-copy-id CentOS
The authenticity of host 'centos (192.168.111.132)' can't be established.
RSA key fingerprint is fa:1b:c0:23:86:ff:08:5e:83:ba:65:4c:e6:f2:1f:3b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.111.132' (RSA) to the list of known hosts.
root@centos's password:`enter the password here`
Now try logging into the machine, with "ssh 'CentOS'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
  • Configure HDFS

Extract hadoop-2.9.2.tar.gz into the system's /usr directory, then edit the core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml configuration files.

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- NameNode access endpoint -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<!-- block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- host running the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- maximum number of files a DataNode serves concurrently -->
<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
</property>
<!-- DataNode request-handling thread count -->
<property>
        <name>dfs.datanode.handler.count</name>
        <value>6</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/slaves
CentOS
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml
<!-- auxiliary service implementing the MapReduce shuffle -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- host running the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CentOS</value>
</property>
<!-- disable the physical memory check -->
<property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
</property>
<!-- disable the virtual memory check -->
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml
<!-- run the MapReduce framework on the YARN resource manager -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
  • Configure the Hadoop environment variables
[root@CentOS ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc

  • Start the Hadoop services
[root@CentOS ~]# hdfs namenode -format # create the fsimage file needed for initialization
[root@CentOS ~]# start-all.sh
  2. Spark environment

Download spark-2.4.3-bin-without-hadoop.tgz and extract it into the /usr directory, rename the Spark directory to spark-2.4.3, then edit the spark-env.sh and spark-defaults.conf files.

Download URL: http://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.3/spark-2.4.3-bin-without-hadoop.tgz

  • Extract the Spark package and rename the extracted directory
[root@CentOS ~]# tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.4.3-bin-without-hadoop/ /usr/spark-2.4.3
[root@CentOS ~]# vi .bashrc
SPARK_HOME=/usr/spark-2.4.3
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export SPARK_HOME
[root@CentOS ~]# source .bashrc
  • Configure the Spark service
[root@CentOS spark-2.4.3]# cd /usr/spark-2.4.3/conf/
[root@CentOS conf]# mv spark-env.sh.template spark-env.sh
[root@CentOS conf]# vi spark-env.sh
HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=4
SPARK_EXECUTOR_MEMORY=2G
SPARK_DRIVER_MEMORY=1G
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"

export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH
export SPARK_HISTORY_OPTS

[root@CentOS conf]# mv spark-defaults.conf.template spark-defaults.conf
[root@CentOS conf]# vi spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs

Create a spark-logs directory on HDFS to serve as the storage location for the Spark history server.

[root@CentOS ~]# hdfs dfs -mkdir /spark-logs
  • Start the Spark history server (optional)
[root@CentOS spark-2.4.3]# ./sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/spark-2.4.3/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-CentOS.out
[root@CentOS spark-2.4.3]# jps
5728 NodeManager
5090 NameNode
5235 DataNode
10531 Jps
5623 ResourceManager
5416 SecondaryNameNode
10459 HistoryServer

This process starts an embedded web UI on port 18080; users can visit that page to view job execution plans and history.

  • Test Spark
./bin/spark-shell \
	--master yarn \
	--deploy-mode client \
	--num-executors 2 \
	--executor-cores 3

--num-executors: in YARN mode, the number of executor processes requested from the NodeManagers.
--executor-cores: the number of threads each of those processes can run.
Total compute resources for the job = num-executors * executor-cores (here, 2 * 3 = 6 cores).
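
As on standalone, the same request can be expressed in code. A minimal sketch, assuming a self-built context (spark.executor.instances and spark.executor.cores are the configuration keys behind --num-executors and --executor-cores; the app name is made up):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("yarn")
  .setAppName("yarn-resource-demo")      // hypothetical app name
  .set("spark.executor.instances", "2")  // same effect as --num-executors 2
  .set("spark.executor.cores", "3")      // same effect as --executor-cores 3
val sc = new SparkContext(conf)          // 2 executors x 3 cores = 6 parallel tasks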

scala> sc.textFile("hdfs:///words/t_words",5)
    .flatMap(_.split(" "))
    .map((_,1))
    .reduceByKey(_+_)
    .sortBy(_._1,true,3)
    .saveAsTextFile("hdfs:///results")

  • Task breakdown diagram for the above job (figure not reproduced here).
  • Local simulation

In this mode there is no need to install YARN or start a Standalone cluster; everything is simulated inside a single local process.

[root@CentOS spark-2.4.3]# ./bin/spark-shell --master local[5]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = local[5], app id = local-1561742649329).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc.textFile("hdfs:///words/t_words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,false,3).saveAsTextFile("hdfs:///results1/")

scala>

Building the Spark Development Environment

  • Add the dependencies needed for development
<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.4.3</version>
    </dependency>
</dependencies>
<build>
    <plugins>
<!-- Scala compiler plugin -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>4.0.1</version>
            <executions>
                <execution>
                    <id>scala-compile-first</id>
                    <phase>process-resources</phase>
                    <goals>
                        <goal>add-source</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
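
The WordCount snippets in the sections below are shown as bare statements. In an actual project they would sit inside an object with a main method; a minimal skeleton sketch (the package and object name are taken from the --class flag used in the spark-submit commands further down):

package com.mask.demo02

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object SparkRDDWordCount {
  def main(args: Array[String]): Unit = {
    // paste one of the snippet bodies below here
  }
}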

SparkRDDWordCount (local)

    // 1. Create the SparkContext
    val conf = new SparkConf().setMaster("local[10]").setAppName("wordcount")
    val sc = new SparkContext(conf)

    // 2. Build and run the word-count pipeline
    val lineRDD: RDD[String] = sc.textFile("file:///Users/mashikang/demo/words/t_word.txt")
    lineRDD.flatMap(line=>line.split(" "))
        .map(word=>(word,1))
        .groupByKey()
        .map(tuple=>(tuple._1,tuple._2.sum))
        .sortBy(tuple=>tuple._2,false,1)
        .collect()
        .foreach(tuple=>println(tuple._1+"->"+tuple._2))

    // 3. Stop the SparkContext
    sc.stop()
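
A note on this pipeline: groupByKey ships every (word, 1) pair across the shuffle before summing. A sketch of an equivalent aggregation using reduceByKey, which pre-combines counts on the map side and usually shuffles much less data:

    lineRDD.flatMap(line=>line.split(" "))
        .map(word=>(word,1))
        .reduceByKey(_+_)
        .sortBy(tuple=>tuple._2,false,1)
        .collect()
        .foreach(tuple=>println(tuple._1+"->"+tuple._2))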

Cluster (YARN)

// 1. Create the SparkContext
val conf = new SparkConf().setMaster("yarn").setAppName("wordcount")
val sc = new SparkContext(conf)

// 2. Build and run the word-count pipeline
val lineRDD: RDD[String] = sc.textFile("hdfs:///words/t_words")
lineRDD.flatMap(line=>line.split(" "))
    .map(word=>(word,1))
    .groupByKey()
    .map(tuple=>(tuple._1,tuple._2.sum))
    .sortBy(tuple=>tuple._2,false,1)
    .collect()
    .foreach(tuple=>println(tuple._1+"->"+tuple._2))

// 3. Stop the SparkContext
sc.stop()

Submit:

[root@CentOS spark-2.4.3]# ./bin/spark-submit --master yarn --deploy-mode client --class com.mask.demo02.SparkRDDWordCount --num-executors 3 --executor-cores 4 /root/sparkrdd-1.0-SNAPSHOT.jar

Cluster (standalone)

// 1. Create the SparkContext
val conf = new SparkConf().setMaster("spark://CentOS:7077").setAppName("wordcount")
val sc = new SparkContext(conf)

// 2. Build and run the word-count pipeline
val lineRDD: RDD[String] = sc.textFile("hdfs:///words/t_words")
lineRDD.flatMap(line=>line.split(" "))
    .map(word=>(word,1))
    .groupByKey()
    .map(tuple=>(tuple._1,tuple._2.sum))
    .sortBy(tuple=>tuple._2,false,1)
    .collect()
    .foreach(tuple=>println(tuple._1+"->"+tuple._2))

// 3. Stop the SparkContext
sc.stop()

Submit (note that --num-executors is a YARN-specific flag; on standalone it is the --total-executor-cores value that caps the job's cores):

[root@CentOS spark-2.4.3]# ./bin/spark-submit --master spark://CentOS:7077 --deploy-mode client --class com.mask.demo02.SparkRDDWordCount --num-executors 3 --total-executor-cores 4 /root/sparkrdd-1.0-SNAPSHOT.jar