Spark Source Code Analysis: the deploy Module

Background

In the earlier post "Spark Source Code Analysis: the scheduler Module" I mentioned that Spark handles resource management and scheduling in a way similar to Hadoop YARN: an outer resource manager plus a task scheduler inside each application, and that post analyzed the task scheduling module inside a Spark application. This post looks at Spark's outer resource manager, the deploy module, to explore how Spark coordinates resource scheduling and management across applications.

Spark originally relied on Mesos for resource management. To make Spark usable for more people, including users who have never touched Mesos, the Spark developers added the Standalone deployment mode, which is the deploy module. The deploy module therefore only applies to deployments that do not use Mesos for resource management.

Overall Architecture of the Deploy Module

The deploy module consists of three submodules: master, worker and client. Each of them extends Actor, and they communicate with one another through Akka actors.

  • Master: the master accepts worker registrations and manages all workers, accepts applications submitted by clients, schedules the waiting applications (FIFO) and dispatches them to the workers.
  • Worker: the worker registers itself with the master, sets up the process environment according to the application description sent by the master, and launches StandaloneExecutorBackend.
  • Client: the client registers the application with the master and monitors it. When a user creates a SparkContext, a SparkDeploySchedulerBackend is instantiated, which in turn starts a client. The startup parameters and application information are passed to the client, which then asks the master to register the application and to launch StandaloneExecutorBackend on the slave nodes (a hedged sketch of this wiring follows the list).
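
To make the client bullet a bit more concrete, below is a hypothetical sketch of how a scheduler backend could create and start the deploy-module Client when the SparkContext comes up. The constructor shapes, the example values and the listener wiring are assumptions for illustration, not the literal SparkDeploySchedulerBackend code:

  // Hypothetical wiring sketch, not the literal SparkDeploySchedulerBackend code;
  // constructor shapes and the example values below are assumptions.
  def startDeployClient(sc: SparkContext, masterUrl: String, listener: ClientListener): Client = {
    // Describe what the application needs: the command to launch on each worker,
    // plus the cores and memory it asks for.
    val command = Command("spark.executor.StandaloneExecutorBackend", Seq(), sys.env)
    val appDesc = new ApplicationDescription(
      "my-app",          // application name (assumed)
      4,                 // maximum cores requested (assumed)
      512,               // memory per executor in MB (assumed)
      command,
      "/path/to/spark")  // sparkHome (assumed)

    // Starting the Client actor is what sends RegisterApplication to the master;
    // executor events come back through the ClientListener callbacks.
    val client = new Client(sc.env.actorSystem, masterUrl, appDesc, listener)
    client.start()
    client
  }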

Here is the class diagram of the deploy module:

[Figure: deploy module class diagram]

Communication Messages in the Deploy Module

The deploy module is not complicated and does not contain much code; most of it deals with passing and handling messages between the submodules, so the main messages exchanged between the modules are listed here (a sketch of how they could be declared as case classes follows the list):

  • client to master

    1. RegisterApplication (registers the application with the master)
  • master to client

    1. RegisteredApplication (reply to the client confirming the application registration)
    2. ExecutorAdded (tells the client that a worker has launched an executor environment; sent after LaunchExecutor is sent to the worker)
    3. ExecutorUpdated (tells the client that an executor's state has changed, e.g. it finished or exited abnormally; sent after the worker reports ExecutorStateChanged to the master)
  • master to worker

    1. LaunchExecutor (asks the worker to launch an executor environment)
    2. RegisteredWorker (reply to a worker's successful registration)
    3. RegisterWorkerFailed (reply to a worker whose registration failed)
    4. KillExecutor (asks the worker to stop an executor environment)
  • worker to master

    1. RegisterWorker (registers the worker with the master)
    2. Heartbeat (periodic heartbeat sent to the master)
    3. ExecutorStateChanged (reports an executor state change to the master)
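
In the Spark source these messages are plain case classes passed between actors. Below is a minimal sketch of what their declarations could look like; the field lists are partly inferred from how the messages are used in the code quoted later in this post and partly assumptions, so treat them as illustrative rather than authoritative:

  // Illustrative sketch of the deploy messages as case classes.
  // Field names and types are inferred or assumed, not copied from the Spark source.
  sealed trait DeployMessage extends Serializable

  // client -> master
  case class RegisterApplication(appDescription: ApplicationDescription) extends DeployMessage

  // master -> client
  case class RegisteredApplication(appId: String) extends DeployMessage
  case class ExecutorAdded(id: Int, workerId: String, host: String,
                           cores: Int, memory: Int) extends DeployMessage
  case class ExecutorUpdated(id: Int, state: ExecutorState.Value,
                             message: Option[String], exitStatus: Option[Int]) extends DeployMessage

  // master -> worker
  case class LaunchExecutor(appId: String, execId: Int, appDesc: ApplicationDescription,
                            cores: Int, memory: Int, sparkHome: String) extends DeployMessage
  case class RegisteredWorker(masterWebUiUrl: String) extends DeployMessage
  case class RegisterWorkerFailed(message: String) extends DeployMessage
  case class KillExecutor(appId: String, execId: Int) extends DeployMessage

  // worker -> master
  case class RegisterWorker(id: String, host: String, port: Int,
                            cores: Int, memory: Int) extends DeployMessage
  case class Heartbeat(workerId: String) extends DeployMessage
  case class ExecutorStateChanged(appId: String, execId: Int, state: ExecutorState.Value,
                                  message: Option[String], exitStatus: Option[Int]) extends DeployMessage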

A Closer Look at the Deploy Module Code

The deploy module is simpler than the scheduler module, so instead of going through its code in great detail, this section only walks through how an application is submitted and how it terminates.

Application Submission by the Client

The Client is created and started by SparkDeploySchedulerBackend, so a client is embedded in each application and serves only that application. When the client starts, it first registers the application with the master:

  def start() {
    // Just launch an actor; it will call back into the listener.
    actor = actorSystem.actorOf(Props(new ClientActor))
  }

  override def preStart() {
    logInfo("Connecting to master " + masterUrl)
    try {
      master = context.actorFor(Master.toAkkaUrl(masterUrl))
      masterAddress = master.path.address
      master ! RegisterApplication(appDescription) // register the application with the master
      context.system.eventStream.subscribe(self, classOf[RemoteClientLifeCycleEvent])
      context.watch(master) // Doesn't work with remote actors, but useful for testing
    } catch {
      case e: Exception =>
        logError("Failed to connect to master", e)
        markDisconnected()
        context.stop(self)
    }
  }

After receiving the RegisterApplication request, the master adds the application to the waiting queue, where it waits to be scheduled:

  case RegisterApplication(description) => {
    logInfo("Registering app " + description.name)
    val app = addApplication(description, sender)
    logInfo("Registered app " + description.name + " with ID " + app.id)
    waitingApps += app
    context.watch(sender) // This doesn't work with remote actors but helps for testing
    sender ! RegisteredApplication(app.id)
    schedule()
  }

The master calls schedule() after every such operation to make sure that waiting applications get scheduled promptly.

As mentioned earlier, the deploy module is a resource management module. So what resource does Spark's deploy module actually manage, and in what unit is it scheduled? In the current version of Spark, the number of CPU cores in the cluster is the unit of resource management: every submitted application declares how many resources (CPU cores) it needs, and the master handles all application requests in FIFO order. An application is scheduled once enough cores are available to satisfy it; otherwise it keeps waiting. If the master can only grant the application part of the resources it asked for, it will still start it with what is available. This is exactly what the schedule() function implements; a small standalone example after the listing below illustrates the spread-out assignment.

  def schedule() {
    if (spreadOutApps) {
      for (app <- waitingApps if app.coresLeft > 0) {
        val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
          .filter(canUse(app, _)).sortBy(_.coresFree).reverse
        val numUsable = usableWorkers.length
        val assigned = new Array[Int](numUsable) // Number of cores to give on each node
        var toAssign = math.min(app.coresLeft, usableWorkers.map(_.coresFree).sum)
        var pos = 0
        while (toAssign > 0) {
          if (usableWorkers(pos).coresFree - assigned(pos) > 0) {
            toAssign -= 1
            assigned(pos) += 1
          }
          pos = (pos + 1) % numUsable
        }
        // Now that we've decided how many cores to give on each node, let's actually give them
        for (pos <- 0 until numUsable) {
          if (assigned(pos) > 0) {
            val exec = app.addExecutor(usableWorkers(pos), assigned(pos))
            launchExecutor(usableWorkers(pos), exec, app.desc.sparkHome)
            app.state = ApplicationState.RUNNING
          }
        }
      }
    } else {
      // Pack each app into as few nodes as possible until we've assigned all its cores
      for (worker <- workers if worker.coresFree > 0 && worker.state == WorkerState.ALIVE) {
        for (app <- waitingApps if app.coresLeft > 0) {
          if (canUse(app, worker)) {
            val coresToUse = math.min(worker.coresFree, app.coresLeft)
            if (coresToUse > 0) {
              val exec = app.addExecutor(worker, coresToUse)
              launchExecutor(worker, exec, app.desc.sparkHome)
              app.state = ApplicationState.RUNNING
            }
          }
        }
      }
    }
  }
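
The spreadOutApps branch can be hard to follow in isolation, so here is a tiny, self-contained Scala illustration of its round-robin core assignment, detached from Spark's data structures (the worker core counts and the 6-core request are made-up numbers):

  object SpreadOutExample extends App {
    // Three usable workers with 4, 3 and 2 free cores (made-up numbers)
    val coresFree = Array(4, 3, 2)
    val assigned  = new Array[Int](coresFree.length)   // cores granted per worker
    // The application still needs 6 cores, capped by the total free cores
    var toAssign  = math.min(6, coresFree.sum)

    var pos = 0
    while (toAssign > 0) {
      if (coresFree(pos) - assigned(pos) > 0) {   // this worker still has a spare core
        toAssign -= 1
        assigned(pos) += 1
      }
      pos = (pos + 1) % coresFree.length          // move on to the next worker (round robin)
    }

    // Prints "2, 2, 2": the 6 cores end up spread evenly instead of filling one worker
    println(assigned.mkString(", "))
  }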

Once an application has been given resources, launchExecutor() is called to send requests to the workers and, at the same time, to report the status back to the client:

  def launchExecutor(worker: WorkerInfo, exec: ExecutorInfo, sparkHome: String) {
    worker.addExecutor(exec)
    worker.actor ! LaunchExecutor(exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory, sparkHome)
    exec.application.driver ! ExecutorAdded(exec.id, worker.id, worker.host, exec.cores, exec.memory)
  }

At this point the interaction between client and master hands over to the interaction between master and worker: the worker now needs to set up the application's launch environment.

  case LaunchExecutor(appId, execId, appDesc, cores_, memory_, execSparkHome_) =>
    val manager = new ExecutorRunner(
      appId, execId, appDesc, cores_, memory_, self, workerId, ip, new File(execSparkHome_), workDir)
    executors(appId + "/" + execId) = manager
    manager.start()
    coresUsed += cores_
    memoryUsed += memory_
    master ! ExecutorStateChanged(appId, execId, ExecutorState.RUNNING, None, None)

On receiving the LaunchExecutor message, the worker creates an ExecutorRunner instance and reports to the master that the executor environment has been launched.

While starting up, the ExecutorRunner creates a thread, sets up the environment and launches a new process:

  def start() {
    workerThread = new Thread("ExecutorRunner for " + fullId) {
      override def run() { fetchAndRunExecutor() }
    }
    workerThread.start()
    // Shutdown hook that kills actors on shutdown.
    ...
  }

  def fetchAndRunExecutor() {
    try {
      // Create the executor's working directory
      val executorDir = new File(workDir, appId + "/" + execId)
      if (!executorDir.mkdirs()) {
        throw new IOException("Failed to create directory " + executorDir)
      }

      // Launch the process
      val command = buildCommandSeq()
      val builder = new ProcessBuilder(command: _*).directory(executorDir)
      val env = builder.environment()
      for ((key, value) <- appDesc.command.environment) {
        env.put(key, value)
      }
      env.put("SPARK_MEM", memory.toString + "m")
      // In case we are running this from within the Spark Shell, avoid creating a "scala"
      // parent process for the executor command
      env.put("SPARK_LAUNCH_WITH_SCALA", "0")
      process = builder.start()

      // Redirect its stdout and stderr to files
      redirectStream(process.getInputStream, new File(executorDir, "stdout"))
      redirectStream(process.getErrorStream, new File(executorDir, "stderr"))

      // Wait for it to exit; this is actually a bad thing if it happens, because we expect to run
      // long-lived processes only. However, in the future, we might restart the executor a few
      // times on the same machine.
      val exitCode = process.waitFor()
      val message = "Command exited with code " + exitCode
      worker ! ExecutorStateChanged(appId, execId, ExecutorState.FAILED, Some(message),
        Some(exitCode))
    } catch {
      case interrupted: InterruptedException =>
        logInfo("Runner thread for executor " + fullId + " interrupted")
      case e: Exception => {
        logError("Error running executor", e)
        if (process != null) {
          process.destroy()
        }
        val message = e.getClass + ": " + e.getMessage
        worker ! ExecutorStateChanged(appId, execId, ExecutorState.FAILED, Some(message), None)
      }
    }
  }

After the ExecutorRunner starts, the worker reports ExecutorStateChanged to the master, and the master repacks that message as ExecutorUpdated and sends it to the client.
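
The master-side handler that does this repacking is not quoted in this post; here is a hedged sketch of what it could look like, with the lookup helpers and field lists assumed rather than taken verbatim from the Spark source:

  // Sketch only: the executor lookup and the mutable exec.state field are assumptions;
  // idToApp is the same application map that appears in removeApplication below.
  case ExecutorStateChanged(appId, execId, state, message, exitStatus) =>
    for (app  <- idToApp.get(appId);
         exec <- app.executors.get(execId)) {
      exec.state = state
      // Repack the worker's report and push it to the application's driver (the client side)
      exec.application.driver ! ExecutorUpdated(execId, state, message, exitStatus)
    }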

At this point the application submission process is essentially complete. The process is not complicated; it mostly comes down to passing messages around.

Application Termination

An application can end for many reasons, including normal completion and abnormal exit. Let us now walk through the whole flow of an application ending.

The end of an application usually means the end of its client, and the master detects the client's termination through the actor system. When it does, the master calls removeApplication():

  def removeApplication(app: ApplicationInfo) {
    if (apps.contains(app)) {
      logInfo("Removing app " + app.id)
      apps -= app
      idToApp -= app.id
      actorToApp -= app.driver
      addressToWorker -= app.driver.path.address
      completedApps += app // Remember it in our history
      waitingApps -= app
      for (exec <- app.executors.values) {
        exec.worker.removeExecutor(exec)
        exec.worker.actor ! KillExecutor(exec.application.id, exec.id)
      }
      app.markFinished(ApplicationState.FINISHED) // TODO: Mark it as FAILED if it failed
      schedule()
    }
  }

removeApplication() first removes the application from the data structures the master maintains, then notifies every worker hosting one of its executors with a KillExecutor request. On receiving KillExecutor, the worker calls the ExecutorRunner's kill() function:

  case KillExecutor(appId, execId) =>
    val fullId = appId + "/" + execId
    executors.get(fullId) match {
      case Some(executor) =>
        logInfo("Asked to kill executor " + fullId)
        executor.kill()
      case None =>
        logInfo("Asked to kill unknown executor " + fullId)
    }

Inside the ExecutorRunner, kill() stops the monitoring thread, kills the process that the thread launched, and reports ExecutorStateChanged to the worker:

  def kill() {
    if (workerThread != null) {
      workerThread.interrupt()
      workerThread = null
      if (process != null) {
        logInfo("Killing process!")
        process.destroy()
        process.waitFor()
      }
      worker ! ExecutorStateChanged(appId, execId, ExecutorState.KILLED, None, None)
      Runtime.getRuntime.removeShutdownHook(shutdownHook)
    }
  }

When an application ends, all information about it is cleaned up on both the master and the workers. That completes the walkthrough of application termination. We have not dug into the many error-handling branches here, but that does not affect our grasp of the main flow.

End

This concludes the analysis of the deploy module for now. The deploy module is relatively simple and has no particularly complex logic; as mentioned above, it exists so that users whose clusters do not have Mesos deployed can still run Spark.

That said, it still looks somewhat rudimentary at this stage. For example, will the FIFO scheduling of applications make small applications wait a long time for large ones to finish, and is there a better scheduling policy? Could the resource metric be richer and more realistic than CPU count alone? In practice some applications are disk intensive and some are network intensive, so even if spare CPU cores exist, scheduling a new application onto them is not necessarily worthwhile.

All in all, as a simple alternative to Mesos, the deploy module plays a positive role in making Spark more widely adopted.
