Installing a Spark Cluster on Linux (CentOS 7 + Spark 2.1.1 + Hadoop 2.8.0)

1 Install Scala (required by Spark)

          1.1 Download and unpack Scala

          1.2 Configure environment variables

          1.3 Verify Scala

2 Download and unpack Spark

          2.1 Download the Spark package

          2.2 Unpack Spark

3 Spark configuration

          3.1 Configure environment variables

          3.2 Configure the files in the conf directory

                  3.2.1 Create the spark-env.sh file

                  3.2.2 Create the slaves file

4 Start and test the Spark cluster

         4.1 Start Spark

         4.2 Test and use the Spark cluster

                 4.2.1 Visit the URLs served by the Spark cluster

                 4.2.2 Run Spark's bundled pi-estimation example

 

Keywords: Linux   CentOS   Hadoop   Spark   Scala   Java

Versions: CentOS 7   Hadoop 2.8.0   Spark 2.1.1   Scala 2.12.2   JDK 1.8

 

          Note: Spark can be installed on a single machine that only has the JDK and Scala, but such an installation can only run code in local mode, without distributed computation or distributed storage; for example, you can install Spark on one machine and run the pi-estimation program there. Since we want a Spark cluster that also uses Hadoop's distributed file system, please install Hadoop first.

 


     

        A minimal Spark cluster installation only requires these components: JDK, Scala, Hadoop, and Spark.

1 Install Scala (required by Spark)

          For installing Hadoop, refer to the blog post mentioned above. Because Spark depends on Scala, Scala must be installed before Spark, and it has to be installed on every node.

1.1 Download and unpack Scala

        Download the scala-2.12.2.tgz package from the official Scala website (https://www.scala-lang.org/download/).

Create a directory named scala under /opt on the Linux server and upload the downloaded archive into it.


Change into that directory:

cd    /opt/scala

Extract the archive:

tar -xvf scala-2.12.2.tgz
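If the server has Internet access, the archive can also be fetched directly on the machine instead of being uploaded; the URL below is the assumed official download location for Scala 2.12.2:

cd /opt/scala
wget https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.tgz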

1.2 Configure environment variables

       Edit the /etc/profile file and add the following line:

 

export    SCALA_HOME=/opt/scala/scala-2.12.2

 

      Then append the following to the PATH variable in the same file:

 

  ${SCALA_HOME}/bin

 

 

      After these additions, my /etc/profile looks like this:

 

 
export JAVA_HOME=/opt/java/jdk1.8.0_121
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
export HIVE_HOME=/opt/hive/apache-hive-2.1.1-bin
export HIVE_CONF_DIR=${HIVE_HOME}/conf
export SQOOP_HOME=/opt/sqoop/sqoop-1.4.6.bin__hadoop-2.0.4-alpha
export HBASE_HOME=/opt/hbase/hbase-1.2.5
export ZK_HOME=/opt/zookeeper/zookeeper-3.4.10
export SCALA_HOME=/opt/scala/scala-2.12.2
export CLASS_PATH=.:${JAVA_HOME}/lib:${HIVE_HOME}/lib:$CLASS_PATH
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SPARK_HOME}/bin:${ZK_HOME}/bin:${HIVE_HOME}/bin:${SQOOP_HOME}/bin:${HBASE_HOME}/bin:${SCALA_HOME}/bin:$PATH

         Note: you only need to pay attention to the JDK, Scala, Hadoop, and Spark variables mentioned at the beginning; the others (ZooKeeper, HBase, Hive, Sqoop) can be ignored.


 

      Once the environment variables are configured, run:

 

source   /etc/profile

 

1.3 Verify Scala

    Run:

 

scala     -version

 

   The command should report the installed Scala version, 2.12.2.
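For reference, the output of scala -version for this release looks roughly like the line below (the exact copyright notice may differ):

Scala code runner version 2.12.2 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.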

    

2 Download and unpack Spark

      Install Spark on every node; in other words, repeat the steps below on each machine.

2.1 Download the Spark package

      Download the spark-2.1.1-bin-hadoop2.7.tgz package (Spark 2.1.1 pre-built for Hadoop 2.7) from the Apache Spark download page.
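If the server has Internet access, the package can also be fetched directly from the Apache release archive; the URL below is the assumed location of the 2.1.1 release pre-built for Hadoop 2.7:

wget https://archive.apache.org/dist/spark/spark-2.1.1/spark-2.1.1-bin-hadoop2.7.tgz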

        

 

     

2.2 Unpack Spark

      Create a directory named spark under /opt on the Linux server and upload the downloaded archive into it.

Change into that directory:

cd    /opt/spark

Extract the archive:

tar   -zxvf   spark-2.1.1-bin-hadoop2.7.tgz

3 Spark configuration

         Note: since this Spark cluster is built on top of the Hadoop cluster, I installed Spark on every Hadoop node, and every node needs the configuration steps below. Starting the cluster, however, only needs to be done on the Spark master machine, which in my case is hserver1.

3.1 Configure environment variables

Edit the /etc/profile file and add:

 

export  SPARK_HOME=/opt/spark/spark-2.1.1-bin-hadoop2.7

 

      After adding that variable, also append the following to the PATH variable in the same file:

 

${SPARK_HOME}/bin

 

      Note: some scripts in the $SPARK_HOME/sbin directory have the same names as scripts in $HADOOP_HOME/sbin. To avoid clashes between these identically named files, only $SPARK_HOME/bin is added to PATH here, not $SPARK_HOME/sbin.

After the change, my /etc/profile reads:

 

 
export JAVA_HOME=/opt/java/jdk1.8.0_121
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
export HIVE_HOME=/opt/hive/apache-hive-2.1.1-bin
export HIVE_CONF_DIR=${HIVE_HOME}/conf
export SQOOP_HOME=/opt/sqoop/sqoop-1.4.6.bin__hadoop-2.0.4-alpha
export HBASE_HOME=/opt/hbase/hbase-1.2.5
export ZK_HOME=/opt/zookeeper/zookeeper-3.4.10
export SCALA_HOME=/opt/scala/scala-2.12.2
export SPARK_HOME=/opt/spark/spark-2.1.1-bin-hadoop2.7
export CLASS_PATH=.:${JAVA_HOME}/lib:${HIVE_HOME}/lib:$CLASS_PATH
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SPARK_HOME}/bin:${ZK_HOME}/bin:${HIVE_HOME}/bin:${SQOOP_HOME}/bin:${HBASE_HOME}/bin:${SCALA_HOME}/bin:$PATH

 

      Note: again, you only need to pay attention to the JDK, Scala, Hadoop, and Spark variables; the others (ZooKeeper, HBase, Hive, Sqoop) can be ignored.


After editing, run:

source    /etc/profile
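As an optional sanity check after reloading the profile, you can confirm that the shell now resolves the Spark launcher scripts:

which spark-submit
spark-submit --version    # should report version 2.1.1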

3.2 Configure the files in the conf directory

         The files under /opt/spark/spark-2.1.1-bin-hadoop2.7/conf are configured as follows.

3.2.1 Create the spark-env.sh file

        Change into the /opt/spark/spark-2.1.1-bin-hadoop2.7/conf directory:

cd    /opt/spark/spark-2.1.1-bin-hadoop2.7/conf

       Create a spark-env.sh file from the template that Spark ships with:

cp    spark-env.sh.template   spark-env.sh

    Edit spark-env.sh and add the following settings (adjust the paths to match your own installation):

 

 
export SCALA_HOME=/opt/scala/scala-2.12.2
export JAVA_HOME=/opt/java/jdk1.8.0_121
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/opt/spark/spark-2.1.1-bin-hadoop2.7
export SPARK_MASTER_IP=hserver1
export SPARK_EXECUTOR_MEMORY=1G

 

3.2.2 Create the slaves file

Change into the /opt/spark/spark-2.1.1-bin-hadoop2.7/conf directory:

cd   /opt/spark/spark-2.1.1-bin-hadoop2.7/conf

Create a slaves file from the template that Spark ships with:

cp    slaves.template   slaves

Edit the slaves file so that it contains:

 

hserver2
hserver3
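Since every node needs the same Spark installation and configuration, one shortcut is to copy the fully configured Spark directory from the master to the workers instead of repeating the steps by hand. This is only a sketch: it assumes passwordless SSH between the nodes, and the /etc/profile changes from sections 1.2 and 3.1 still have to be made on each node.

scp -r /opt/spark root@hserver2:/opt/
scp -r /opt/spark root@hserver3:/opt/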

 

4 Start and test the Spark cluster

4.1 Start Spark

          Because Spark relies on the distributed file system provided by Hadoop, make sure Hadoop is up and running normally before starting Spark.

        With Hadoop running, execute the following on hserver1 (which is the Hadoop NameNode and the Spark master node):

   cd   /opt/spark/spark-2.1.1-bin-hadoop2.7/sbin

    Run the startup script:

  ./start-all.sh


 

  The full output was:

 

 
[root@hserver1 sbin]# cd /opt/spark/spark-2.1.1-bin-hadoop2.7/sbin
[root@hserver1 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/spark/spark-2.1.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-hserver1.out
hserver2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.1.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-hserver2.out
hserver3: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.1.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-hserver3.out
[root@hserver1 sbin]#

 

         Note: the leading ./ in the command above must not be omitted; it means "run the start-all.sh script in the current directory".
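As an additional check, the JDK's jps tool lists the running Java processes on each node; after a successful start the master node should show a Master process and hserver2/hserver3 should each show a Worker process:

jps    # on hserver1 expect a "Master" entry; on hserver2 and hserver3 expect a "Worker" entry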

4.2 Test and use the Spark cluster

4.2.1 Visit the URLs served by the Spark cluster

       With the cluster running, open the Spark master web UI in a browser to confirm that the master and the two workers have registered. For a standalone cluster the master UI listens on port 8080 of the master node by default, so here the address would be http://hserver1:8080; each running application additionally exposes its own UI on port 4040 of the driver machine.

4.2.2 Run Spark's bundled pi-estimation example

      Here we simply run the bundled pi-estimation demo in local mode. Follow the steps below.

      Step 1: change into Spark's root directory, that is, run:

cd     /opt/spark/spark-2.1.1-bin-hadoop2.7

Step 2: run Spark's bundled pi-estimation demo with the following command:

 

./bin/spark-submit  --class  org.apache.spark.examples.SparkPi  --master local   examples/jars/spark-examples_2.11-2.1.1.jar

 

 

        Once the command is issued, the example program starts running.

        The result appears within a few seconds; look for the "Pi is roughly ..." line in the output.

        The full console output was:

 

 
[root@hserver1 bin]# cd /opt/spark/spark-2.1.1-bin-hadoop2.7
[root@hserver1 spark-2.1.1-bin-hadoop2.7]# ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local examples/jars/spark-examples_2.11-2.1.1.jar
17/05/16 14:26:23 INFO spark.SparkContext: Running Spark version 2.1.1
17/05/16 14:26:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/16 14:26:25 INFO spark.SecurityManager: Changing view acls to: root
17/05/16 14:26:25 INFO spark.SecurityManager: Changing modify acls to: root
17/05/16 14:26:25 INFO spark.SecurityManager: Changing view acls groups to:
17/05/16 14:26:25 INFO spark.SecurityManager: Changing modify acls groups to:
17/05/16 14:26:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/05/16 14:26:25 INFO util.Utils: Successfully started service 'sparkDriver' on port 40855.
17/05/16 14:26:26 INFO spark.SparkEnv: Registering MapOutputTracker
17/05/16 14:26:26 INFO spark.SparkEnv: Registering BlockManagerMaster
17/05/16 14:26:26 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/05/16 14:26:26 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/05/16 14:26:26 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-cf8cbb42-95d2-4284-9a48-67592363976a
17/05/16 14:26:26 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MB
17/05/16 14:26:26 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/05/16 14:26:26 INFO util.log: Logging initialized @5206ms
17/05/16 14:26:27 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5118388b{/jobs,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15a902e7{/jobs/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7876d598{/jobs/job,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a3e3e8b{/jobs/job/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5af28b27{/stages,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@71104a4{/stages/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4985cbcb{/stages/stage,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@72f46e16{/stages/stage/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3c9168dc{/stages/pool,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@332a7fce{/stages/pool/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@549621f3{/storage,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@54361a9{/storage/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@32232e55{/storage/rdd,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5217f3d0{/storage/rdd/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@37ebc9d8{/environment,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@293bb8a5{/environment/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2416a51{/executors,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6fa590ba{/executors/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6e9319f{/executors/threadDump,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@72e34f77{/executors/threadDump/json,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7bf9b098{/static,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@389adf1d{/,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77307458{/api,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1fc0053e{/jobs/job/kill,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@290b1b2e{/stages/stage/kill,null,AVAILABLE,@Spark}
17/05/16 14:26:27 INFO server.ServerConnector: Started Spark@32fe9d0a{HTTP/1.1}{0.0.0.0:4040}
17/05/16 14:26:27 INFO server.Server: Started @5838ms
17/05/16 14:26:27 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/05/16 14:26:27 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.27.143:4040
17/05/16 14:26:27 INFO spark.SparkContext: Added JAR file:/opt/spark/spark-2.1.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.1.jar at spark://192.168.27.143:40855/jars/spark-examples_2.11-2.1.1.jar with timestamp 1494915987472
17/05/16 14:26:27 INFO executor.Executor: Starting executor ID driver on host localhost
17/05/16 14:26:27 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41104.
17/05/16 14:26:27 INFO netty.NettyBlockTransferService: Server created on 192.168.27.143:41104
17/05/16 14:26:27 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/05/16 14:26:27 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.27.143, 41104, None)
17/05/16 14:26:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.27.143:41104 with 413.9 MB RAM, BlockManagerId(driver, 192.168.27.143, 41104, None)
17/05/16 14:26:27 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.27.143, 41104, None)
17/05/16 14:26:27 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.27.143, 41104, None)
17/05/16 14:26:28 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4e6d7365{/metrics/json,null,AVAILABLE,@Spark}
17/05/16 14:26:28 INFO internal.SharedState: Warehouse path is 'file:/opt/spark/spark-2.1.1-bin-hadoop2.7/spark-warehouse'.
17/05/16 14:26:28 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@705202d1{/SQL,null,AVAILABLE,@Spark}
17/05/16 14:26:28 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3e58d65e{/SQL/json,null,AVAILABLE,@Spark}
17/05/16 14:26:28 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f63c44f{/SQL/execution,null,AVAILABLE,@Spark}
17/05/16 14:26:28 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@62a8fd44{/SQL/execution/json,null,AVAILABLE,@Spark}
17/05/16 14:26:28 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1d035be3{/static/sql,null,AVAILABLE,@Spark}
17/05/16 14:26:30 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
17/05/16 14:26:30 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
17/05/16 14:26:30 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
17/05/16 14:26:30 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/05/16 14:26:30 INFO scheduler.DAGScheduler: Missing parents: List()
17/05/16 14:26:30 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
17/05/16 14:26:30 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1832.0 B, free 413.9 MB)
17/05/16 14:26:30 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1167.0 B, free 413.9 MB)
17/05/16 14:26:31 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.27.143:41104 (size: 1167.0 B, free: 413.9 MB)
17/05/16 14:26:31 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/05/16 14:26:31 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34)
17/05/16 14:26:31 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
17/05/16 14:26:31 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 6026 bytes)
17/05/16 14:26:31 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
17/05/16 14:26:31 INFO executor.Executor: Fetching spark://192.168.27.143:40855/jars/spark-examples_2.11-2.1.1.jar with timestamp 1494915987472
17/05/16 14:26:31 INFO client.TransportClientFactory: Successfully created connection to /192.168.27.143:40855 after 145 ms (0 ms spent in bootstraps)
17/05/16 14:26:31 INFO util.Utils: Fetching spark://192.168.27.143:40855/jars/spark-examples_2.11-2.1.1.jar to /tmp/spark-702c8654-489f-47f2-85e0-8b658ebb2988/userFiles-0a07fa86-4d14-4939-ad2b-95ac8488e187/fetchFileTemp3302336691796081023.tmp
17/05/16 14:26:33 INFO executor.Executor: Adding file:/tmp/spark-702c8654-489f-47f2-85e0-8b658ebb2988/userFiles-0a07fa86-4d14-4939-ad2b-95ac8488e187/spark-examples_2.11-2.1.1.jar to class loader
17/05/16 14:26:34 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 1114 bytes result sent to driver
17/05/16 14:26:34 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 6026 bytes)
17/05/16 14:26:34 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
17/05/16 14:26:34 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2815 ms on localhost (executor driver) (1/2)
17/05/16 14:26:34 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 1114 bytes result sent to driver
17/05/16 14:26:34 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 416 ms on localhost (executor driver) (2/2)
17/05/16 14:26:34 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/05/16 14:26:34 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 3.269 s
17/05/16 14:26:34 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 4.404894 s
Pi is roughly 3.1434157170785855
17/05/16 14:26:34 INFO server.ServerConnector: Stopped Spark@32fe9d0a{HTTP/1.1}{0.0.0.0:4040}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@290b1b2e{/stages/stage/kill,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1fc0053e{/jobs/job/kill,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@77307458{/api,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@389adf1d{/,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7bf9b098{/static,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@72e34f77{/executors/threadDump/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6e9319f{/executors/threadDump,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6fa590ba{/executors/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2416a51{/executors,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@293bb8a5{/environment/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@37ebc9d8{/environment,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5217f3d0{/storage/rdd/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@32232e55{/storage/rdd,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@54361a9{/storage/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@549621f3{/storage,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@332a7fce{/stages/pool/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3c9168dc{/stages/pool,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@72f46e16{/stages/stage/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4985cbcb{/stages/stage,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@71104a4{/stages/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5af28b27{/stages,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4a3e3e8b{/jobs/job/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7876d598{/jobs/job,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@15a902e7{/jobs/json,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5118388b{/jobs,null,UNAVAILABLE,@Spark}
17/05/16 14:26:34 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.27.143:4040
17/05/16 14:26:34 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/05/16 14:26:34 INFO memory.MemoryStore: MemoryStore cleared
17/05/16 14:26:34 INFO storage.BlockManager: BlockManager stopped
17/05/16 14:26:34 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/05/16 14:26:34 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/05/16 14:26:34 INFO spark.SparkContext: Successfully stopped SparkContext
17/05/16 14:26:34 INFO util.ShutdownHookManager: Shutdown hook called
17/05/16 14:26:34 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-702c8654-489f-47f2-85e0-8b658ebb2988
[root@hserver1 spark-2.1.1-bin-hadoop2.7]#

Note: the above only ran the demo in single-machine local mode. To run it on the cluster, submit it against the cluster's master URL instead of local, as sketched below.
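A sketch of the cluster-mode submission, assuming the standalone master is listening on its default port 7077 on hserver1 (the exact master URL is shown at the top of the master web UI); the trailing number is SparkPi's optional partition count, which controls how many samples are drawn:

./bin/spark-submit  --class  org.apache.spark.examples.SparkPi  --master spark://hserver1:7077  examples/jars/spark-examples_2.11-2.1.1.jar  100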

 

