Spark on YARN with IntelliJ IDEA: Installation, Compilation, Packaging, and Cluster Execution (Detailed Guide)


Note: Hadoop 2.2.0 is already installed in fully distributed mode, Scala and Spark are installed, and the environment is configured. The hosts are hadoop-master and hadoop-slave.

Part 1. Installing IntelliJ IDEA (CentOS 6.5)

1. Required installer: ideaIc-2017.1.tar.gz (http://pan.baidu.com/s/1nv7Emox)
2. Scala plugin package: scala-intellij-bin-2017.1.14 (http://pan.baidu.com/s/1i4PMZzf)

Step 1.

1. Copy the two packages above to the master node (here they were placed on hadoop-master, on the root user's desktop).
2. Extract the IDE archive with the following command:
tar zxvf ideaIc-2017.1.tar.gz -C ~/
3. Place scala-intellij-bin-2017.1.14 into the plugins directory under idea-IC-171.3780.95.

Step 2.

1. Launch IntelliJ IDEA.
2. Create a new Project, as shown in the screenshot, then click Next.
(screenshot)
3. Project name is the name of the project;
Project location is the path where the project is stored;
JDK is the Java SDK (this machine originally used the JDK 1.7 that ships with CentOS 6.5, which did not work, and was switched to Oracle JDK 1.8);
Scala SDK is 2.10.4 (the version installed on this machine).
(screenshot)
(screenshot)
4. When developing an application in IDEA, the source code usually needs to be organized into a directory layout, for example a source directory and a test-source directory. The following shows how to create the main/scala source directory under the src directory of the IntelliJ IDEA project.
Press F4, or right-click the project and choose Open Module Settings.
(screenshot)
5. Click Modules, select the src directory, right-click to create the main/scala folders, then mark the scala folder as Sources, as shown below.
(screenshot)
6. Import the Spark dependency (here spark-assembly-1.0.0-hadoop2.2.0.jar): click Libraries, click +, choose Java, select spark-assembly-1.0.0-hadoop2.2.0.jar, select the project module, and click OK.
(screenshot)

At this point the Spark development environment is fully configured.
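As an aside, if you would rather let sbt manage the Spark dependency instead of attaching the assembly jar by hand, a minimal build.sbt along the following lines should work. This is only a sketch: the coordinates and versions are assumptions based on the Spark 1.0.0 / Scala 2.10.4 setup used in this post, not something from the original walkthrough.

```scala
// build.sbt (sketch) -- assumes Spark 1.0.0 built for Scala 2.10, matching the cluster above
name := "my2"

version := "1.0"

scalaVersion := "2.10.4"

// "provided" keeps Spark out of the packaged jar, mirroring the jar-slimming step in the packaging section
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0" % "provided"
```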

Step 3. Running locally

1. Create a new Scala object and enter the code below.
(screenshot)
2. Compile the code with Build -> Build Project.
3. Then set the run parameters via Run -> Edit Configurations.
(screenshot)

```scala
/**
 * Created by root on 3/28/17.
 */
import org.apache.spark.SparkContext._
import org.apache.spark.{SparkConf, SparkContext}

object spp {
  def main(args: Array[String]) {
    // The input file can be a local Linux file or a file from another source such as HDFS
    if (args.length == 0) {
      System.err.println("Usage: SparkWordCount <input file>")
      System.exit(1)
    }
    // Run with local threads; the number of threads can be specified,
    // e.g. .setMaster("local[2]") uses two threads.
    // Here a single local thread is used.
    val conf = new SparkConf().setAppName("SparkWordCount").setMaster("local")
    val sc = new SparkContext(conf)

    // WordCount: split each line into words, pair each word with 1, and sum the counts per word
    val rdd2 = sc.textFile(args(0)).flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    // val count1 = rdd2.countByValue()

    // Print the number of distinct words
    // rdd2.saveAsTextFile(path = args(1))
    println(rdd2.count())
    sc.stop()
  }
}
```
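To make the transformation chain concrete, here is the same word-count logic on an ordinary Scala collection, with no Spark involved. The two input lines are made up purely for illustration.

```scala
// Plain-Scala sketch of the flatMap / map / reduceByKey pipeline above (hypothetical input)
object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val lines = Seq("spark on yarn", "spark wordcount")
    val counts = lines
      .flatMap(_.split(" "))   // split every line into words
      .map(word => (word, 1))  // pair each word with a count of 1
      .groupBy(_._1)           // group the pairs by word (a stand-in for reduceByKey)
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }
    println(counts)            // e.g. Map(spark -> 2, on -> 1, yarn -> 1, wordcount -> 1); ordering may vary
    println(counts.size)       // 4, which is what rdd2.count() would report for this input
  }
}
```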

When done, run the program with Run -> Run or Alt+Shift+F10. The execution output is shown below:

```
/usr/lib/jvm/jdk1.8.0_60/bin/java -javaagent:/root/Desktop/idea-IC-171.3780.95/lib/idea_rt.jar=42032:/root/Desktop/idea-IC-171.3780.95/bin -Dfile.encoding=UTF-8 -classpath /usr/lib/jvm/jdk1.8.0_60/jre/lib/charsets.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/deploy.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/cldrdata.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/dnsns.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/jaccess.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/jfxrt.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/localedata.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/nashorn.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/sunec.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/zipfs.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/javaws.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jce.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jfr.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jfxswt.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jsse.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/management-agent.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/plugin.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/resources.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/rt.jar:/root/IdeaProjects/my2/out/production/my2:/root/.ivy2/cache/org.scala-lang/scala-reflect/jars/scala-reflect-2.10.4.jar:/root/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.10.4.jar:/root/.ivy2/cache/org.scala-lang/scala-reflect/srcs/scala-reflect-2.10.4-sources.jar:/root/.ivy2/cache/org.scala-lang/scala-library/srcs/scala-library-2.10.4-sources.jar:/root/spark-1.0.0-bin-2.2.0/lib/spark-assembly-1.0.0-hadoop2.2.0.jar spp hdfs://10.6.3.200:8020/data/wordcount/1.txt
17/03/28 13:26:06 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/03/28 13:26:06 INFO SecurityManager: Changing view acls to: root
17/03/28 13:26:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
17/03/28 13:26:07 INFO Slf4jLogger: Slf4jLogger started
17/03/28 13:26:07 INFO Remoting: Starting remoting
17/03/28 13:26:08 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@hadoop-master:59842]
17/03/28 13:26:08 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@hadoop-master:59842]
17/03/28 13:26:08 INFO SparkEnv: Registering MapOutputTracker
17/03/28 13:26:08 INFO SparkEnv: Registering BlockManagerMaster
17/03/28 13:26:08 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20170328132608-36ca
17/03/28 13:26:08 INFO MemoryStore: MemoryStore started with capacity 528.0 MB.
17/03/28 13:26:08 INFO ConnectionManager: Bound socket to port 50620 with id = ConnectionManagerId(hadoop-master,50620)
17/03/28 13:26:08 INFO BlockManagerMaster: Trying to register BlockManager
17/03/28 13:26:08 INFO BlockManagerInfo: Registering block manager hadoop-master:50620 with 528.0 MB RAM
17/03/28 13:26:08 INFO BlockManagerMaster: Registered BlockManager
17/03/28 13:26:08 INFO HttpServer: Starting HTTP Server
17/03/28 13:26:08 INFO HttpBroadcast: Broadcast server started at http://10.6.3.200:50541
17/03/28 13:26:08 INFO HttpFileServer: HTTP File server directory is /tmp/spark-e484b842-2b2c-43e1-8c19-c1375c30dc92
17/03/28 13:26:08 INFO HttpServer: Starting HTTP Server
17/03/28 13:26:09 INFO SparkUI: Started SparkUI at http://hadoop-master:4040
17/03/28 13:26:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/28 13:26:11 INFO MemoryStore: ensureFreeSpace(133256) called with curMem=0, maxMem=553648128
17/03/28 13:26:11 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 130.1 KB, free 527.9 MB)
17/03/28 13:26:12 INFO FileInputFormat: Total input paths to process : 1
17/03/28 13:26:12 INFO SparkContext: Starting job: count at spp.scala:28
17/03/28 13:26:12 INFO DAGScheduler: Registering RDD 4 (reduceByKey at spp.scala:23)
17/03/28 13:26:12 INFO DAGScheduler: Got job 0 (count at spp.scala:28) with 1 output partitions (allowLocal=false)
17/03/28 13:26:12 INFO DAGScheduler: Final stage: Stage 0(count at spp.scala:28)
17/03/28 13:26:12 INFO DAGScheduler: Parents of final stage: List(Stage 1)
17/03/28 13:26:12 INFO DAGScheduler: Missing parents: List(Stage 1)
17/03/28 13:26:12 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[4] at reduceByKey at spp.scala:23), which has no missing parents
17/03/28 13:26:12 INFO DAGScheduler: Submitting 1 missing tasks from Stage 1 (MapPartitionsRDD[4] at reduceByKey at spp.scala:23)
17/03/28 13:26:12 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/03/28 13:26:12 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor localhost: localhost (PROCESS_LOCAL)
17/03/28 13:26:12 INFO TaskSetManager: Serialized task 1.0:0 as 2076 bytes in 129 ms
17/03/28 13:26:12 INFO Executor: Running task ID 0
17/03/28 13:26:12 INFO BlockManager: Found block broadcast_0 locally
17/03/28 13:26:12 INFO HadoopRDD: Input split: hdfs://10.6.3.200:8020/data/wordcount/1.txt:0+15
17/03/28 13:26:12 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/03/28 13:26:12 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/03/28 13:26:12 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/03/28 13:26:12 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/03/28 13:26:12 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/03/28 13:26:13 INFO Executor: Serialized size of result for 0 is 783
17/03/28 13:26:13 INFO Executor: Sending result for 0 directly to driver
17/03/28 13:26:13 INFO Executor: Finished task ID 0
17/03/28 13:26:13 INFO DAGScheduler: Completed ShuffleMapTask(1, 0)
17/03/28 13:26:13 INFO TaskSetManager: Finished TID 0 in 667 ms on localhost (progress: 1/1)
17/03/28 13:26:13 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/03/28 13:26:13 INFO DAGScheduler: Stage 1 (reduceByKey at spp.scala:23) finished in 0.705 s
17/03/28 13:26:13 INFO DAGScheduler: looking for newly runnable stages
17/03/28 13:26:13 INFO DAGScheduler: running: Set()
17/03/28 13:26:13 INFO DAGScheduler: waiting: Set(Stage 0)
17/03/28 13:26:13 INFO DAGScheduler: failed: Set()
17/03/28 13:26:13 INFO DAGScheduler: Missing parents for Stage 0: List()
17/03/28 13:26:13 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[6] at reduceByKey at spp.scala:23), which is now runnable
17/03/28 13:26:13 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MapPartitionsRDD[6] at reduceByKey at spp.scala:23)
17/03/28 13:26:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/03/28 13:26:13 INFO TaskSetManager: Starting task 0.0:0 as TID 1 on executor localhost: localhost (PROCESS_LOCAL)
17/03/28 13:26:13 INFO TaskSetManager: Serialized task 0.0:0 as 1939 bytes in 1 ms
17/03/28 13:26:13 INFO Executor: Running task ID 1
17/03/28 13:26:13 INFO BlockManager: Found block broadcast_0 locally
17/03/28 13:26:13 INFO BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
17/03/28 13:26:13 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/03/28 13:26:13 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 13 ms
17/03/28 13:26:13 INFO Executor: Serialized size of result for 1 is 863
17/03/28 13:26:13 INFO Executor: Sending result for 1 directly to driver
17/03/28 13:26:13 INFO Executor: Finished task ID 1
17/03/28 13:26:13 INFO DAGScheduler: Completed ResultTask(0, 0)
17/03/28 13:26:13 INFO TaskSetManager: Finished TID 1 in 91 ms on localhost (progress: 1/1)
17/03/28 13:26:13 INFO DAGScheduler: Stage 0 (count at spp.scala:28) finished in 0.094 s
17/03/28 13:26:13 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/03/28 13:26:13 INFO SparkContext: Job finished: count at spp.scala:28, took 1.148558113 s
2
17/03/28 13:26:13 INFO SparkUI: Stopped Spark web UI at http://hadoop-master:4040
17/03/28 13:26:13 INFO DAGScheduler: Stopping DAGScheduler
17/03/28 13:26:14 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
17/03/28 13:26:14 INFO ConnectionManager: Selector thread was interrupted!
17/03/28 13:26:14 INFO ConnectionManager: ConnectionManager stopped
17/03/28 13:26:14 INFO MemoryStore: MemoryStore cleared
17/03/28 13:26:14 INFO BlockManager: BlockManager stopped
17/03/28 13:26:14 INFO BlockManagerMasterActor: Stopping BlockManagerMaster
17/03/28 13:26:14 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/28 13:26:14 INFO SparkContext: Successfully stopped SparkContext
17/03/28 13:26:14 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/03/28 13:26:14 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
```

OK

Part 2. Packaging and running on the YARN cluster

1. Check what is already in HDFS:
hadoop fs -ls -R /
(screenshot)
2. As the listing above shows, the file 1.txt is already there. If it is not, put a suitable file into HDFS with a command such as: hadoop fs -put /usr/local/cluster/hadoop/etc/hadoop/slaves /data/wordcount/ (a programmatic version of this check is sketched below).
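For completeness, the same check can be done from Scala through the Hadoop FileSystem API. This is only a sketch; the NameNode URI and paths are taken from the spark-submit examples used elsewhere in this post and should be adjusted to your cluster.

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsCheck {
  def main(args: Array[String]): Unit = {
    // NameNode address as used in this post; change it to match your cluster
    val fs = FileSystem.get(new URI("hdfs://10.6.3.200:8020"), new Configuration())
    val target = new Path("/data/wordcount/1.txt")
    println(s"$target exists: " + fs.exists(target))
    // Rough equivalent of `hadoop fs -ls /data/wordcount`
    fs.listStatus(new Path("/data/wordcount")).foreach(status => println(status.getPath))
  }
}
```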
3. Modify the code as follows:
```scala
// The package matches the --class main.scala.spp argument passed to spark-submit below
package main.scala

import org.apache.spark.SparkContext._
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by root on 3/23/17.
 */
object spp {
  def main(args: Array[String]) {
    // The input file can be a local Linux file or a file from another source such as HDFS
    if (args.length < 2) {
      System.err.println("Usage: SparkWordCount <input> <output>")
      System.exit(1)
    }
    // For cluster execution the master is set to the cluster master URL instead of local
    val conf = new SparkConf().setAppName("SparkWordCount").setMaster("spark://10.6.3.200:7077")
    val sc = new SparkContext(conf)

    // WordCount: split each line into words, pair each word with 1, and sum the counts per word
    val rdd2 = sc.textFile(args(0)).flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    // val count1 = rdd2.countByValue()

    // Save the result to the output path instead of printing it
    rdd2.saveAsTextFile(args(1))
    // println(rdd2.count())
    sc.stop()
  }
}
```
4. Select the project my2, press F4 to open Project Structure, and choose Artifacts, as shown below.
Choose JAR -> From modules with dependencies.
(screenshot)

5. Because the job will later be submitted to the cluster, where the relevant jars already exist, spark-assembly-1.0.0-hadoop2.2.0.jar and the other dependency jars can be removed from the artifact to keep the jar small, as shown below.
(screenshot)

6. After confirming, click Build -> Build Artifacts.
The generated artifact is shown below:
(screenshot)

7. Run in yarn-client mode (yarn-cluster mode hit an out-of-memory error that has not been resolved yet; a memory-tuning sketch follows below):
/root/spark-1.0.0-bin-2.2.0/bin/spark-submit --master yarn-client --class main.scala.spp my2.jar hdfs://hadoop-master:8020/data/wordcount/1.txt hdfs://hadoop-master:8020/data/wordcount/read9
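For the yarn-cluster out-of-memory issue, one untested option is to raise the driver and executor memory, either through spark-submit's --driver-memory and --executor-memory flags or through the SparkConf. The sizes below are placeholders, not tuned values for this cluster.

```scala
import org.apache.spark.SparkConf

// Sketch only: standard Spark memory settings with example values
object MemoryConfSketch {
  def tunedConf(): SparkConf = new SparkConf()
    .setAppName("SparkWordCount")
    .set("spark.executor.memory", "1g") // heap available to each executor
    .set("spark.driver.memory", "2g")   // driver heap; this only takes effect if set before the driver JVM starts,
                                        // so passing --driver-memory to spark-submit is the more reliable route
}
```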

8. Check whether the read9 output directory now exists in HDFS.
(screenshot)
Check the Spark side:

(screenshot)

Cluster HDFS: 10.6.3.200:50070
Spark: 10.6.3.200:8088
Hadoop: 10.6.3.8080

Reference: http://blog.csdn.net/lovehuangjiaju/article/details/48577281
