A problem encountered when using PySpark in a notebook

Code:

from pyspark import SparkContext
sc = SparkContext()
# The original post does not show how rdd was created; here is an example RDD with 4 partitions
rdd = sc.parallelize(range(8), 4)
rdd.getNumPartitions()
rdd.glom().collect()
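
getNumPartitions() only reads metadata on the driver, while glom().collect() is an action: it sends the job to the executors, each of which launches a Python worker process, so this is the first point where a driver/worker version mismatch can surface. With the example rdd defined above (my assumption, since the original rdd is not shown), a working setup would return something like:

rdd.getNumPartitions()   # 4
rdd.glom().collect()     # [[0, 1], [2, 3], [4, 5], [6, 7]], one list per partition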

Problem encountered:
When executing rdd.glom().collect(), the following error appears:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 123, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.6 than that in driver 2.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set
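
The message means the driver process (the notebook kernel) is running Python 2.7 while the workers started on the executors run Python 3.6, and PySpark refuses to mix minor versions. The driver side is easy to confirm from the notebook; the snippet below is only a diagnostic sketch, and the environment-variable lookups will print None if the variables are not set:

import os
import sys

# Python version of the driver, i.e. the notebook kernel itself
print(sys.version_info[:2])

# Interpreters PySpark has been told to use, if these variables are set at all
print(os.environ.get("PYSPARK_PYTHON"))
print(os.environ.get("PYSPARK_DRIVER_PYTHON"))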

Solution:
Add the following environment variables on every node in the cluster:
export PYSPARK_DRIVER_PYTHON=/usr/local/anacond/bin/python3
export PYSPARK_PYTHON=/usr/local/anacond/bin/python3
Remember to source the profile file so the variables take effect, then restart every node in the cluster and restart Spark.
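
If editing profile files on every node is inconvenient, an alternative is to point PySpark at the right interpreter from the notebook itself, before the SparkContext is created. The driver's version is fixed by whichever interpreter the notebook kernel runs, so PYSPARK_PYTHON must name a worker interpreter with the same major.minor version; the path below is taken from the exports above and is assumed to exist on every worker node:

import os
from pyspark import SparkContext

# PYSPARK_PYTHON tells executors which interpreter to launch Python workers with;
# it has to be set before the SparkContext is constructed to take effect.
os.environ["PYSPARK_PYTHON"] = "/usr/local/anacond/bin/python3"

sc = SparkContext()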
