Spark submit deployment modes

1. bin/spark-submit --master spark://123.321.123.321:7077 --deploy-mode client jars/sparkApp.jar
2. bin/spark-submit --master spark://123.321.123.321:7077 jars/sparkApp.jar
3. bin/spark-submit --master spark://123.321.123.321:7077 --deploy-mode cluster jars/sparkApp.jar

Commands 1 and 2 are equivalent: --deploy-mode defaults to client, so in both cases the driver runs locally on the submitting machine. Command 3 uses cluster mode, which ships the driver out to a worker inside the cluster.
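In practice a client-mode submission usually also caps the resources the job requests. The sketch below extends the post's first command with two such flags; the flag values are illustrative, not from the original post. Asking for more memory or cores than the workers offer is exactly what triggers the "Initial job has not accepted any resources" warning covered below.

```shell
# Sketch: client-mode submit with explicit resource caps.
# --executor-memory must fit within the workers' SPARK_WORKER_MEMORY,
# and --total-executor-cores caps cores across all executors;
# the 512m / 2 values here are illustrative assumptions.
bin/spark-submit \
  --master spark://123.321.123.321:7077 \
  --deploy-mode client \
  --executor-memory 512m \
  --total-executor-cores 2 \
  jars/sparkApp.jar
```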

When 1/2 succeed, the driver runs on the submitting machine, so the job's logs and output appear in the local terminal.

When 3 succeeds, the driver runs inside the cluster; check the worker logs or the master web UI (port 8080) for its output.

Possible errors

Error 1:
19/10/18 11:06:54 INFO MemoryStore: ensureFreeSpace(2245) called with curMem=268125, maxMem=280248975
19/10/18 11:06:54 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 267.0 MB)
19/10/18 11:06:54 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on izbp1aiqq9qrjpvel26rx0z:44205 (size: 2.2 KB, free: 267.2 MB)
19/10/18 11:06:54 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
19/10/18 11:06:54 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
19/10/18 11:06:54 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[3] at map at SparkDemo.scala:18)
19/10/18 11:06:54 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
19/10/18 11:07:09 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
19/10/18 11:07:24 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Solution:

In spark-env.sh, set SPARK_WORKER_MEMORY=1g, and make sure SPARK_MASTER_IP=izbp1aiqq9qrjpvel26rx0z matches the master address the job was submitted to. Restart the master and the slaves after the change.
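The fix above amounts to the following lines in conf/spark-env.sh (the hostname and memory size are the ones from this post's setup; substitute your own), followed by a restart so every daemon picks up the new values:

```shell
# conf/spark-env.sh -- values taken from this post's setup; adjust for your cluster
SPARK_MASTER_IP=izbp1aiqq9qrjpvel26rx0z   # must match the master address used in spark-submit
SPARK_WORKER_MEMORY=1g                    # memory each worker offers to executors

# Then restart the master and the workers so the settings take effect:
#   sbin/stop-all.sh && sbin/start-all.sh
```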

Error 2:

Connection refused on port 7077.

The master has not been started.

Fix: start the master with sbin/start-master.sh.
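Before resubmitting, it can help to confirm that something is actually listening on the master port. A minimal sketch using bash's /dev/tcp pseudo-device (the host below is this post's example master address, an assumption for your cluster):

```shell
# check_spark_master: returns 0 if a TCP connection to host:port succeeds.
check_spark_master() {
  local host="${1:-127.0.0.1}" port="${2:-7077}"
  # bash resolves /dev/tcp/<host>/<port> by opening a TCP connection;
  # the subshell keeps the file descriptor from leaking into the caller.
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

if check_spark_master 123.321.123.321 7077; then
  echo "master reachable on 7077"
else
  echo "master not reachable -- start it with sbin/start-master.sh"
fi
```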
