PySpark error: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver (not finished yet)

The full error message is:

Traceback (most recent call last):
  File "<stdin>", line 6, in <module>
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 703, in save
    self._jwrite.save()
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o92.save.
: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:79)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:60)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

Solution:

mv mysql-connector-java-8.0.20.jar $SPARK_HOME/jars/

The driver jar mysql-connector-java-8.0.20.jar is downloaded from the Maven repository:

https://mvnrepository.com/artifact/mysql/mysql-connector-java/8.0.20
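Once the jar is in place and the session has been restarted, the driver class should be resolvable from the driver JVM. Below is a minimal sketch to check this, assuming a freshly started PySpark session; it goes through PySpark's internal _jvm Py4J gateway, so treat it as a diagnostic rather than production code:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("driver-check").getOrCreate()

# Raises java.lang.ClassNotFoundException again if the jar is still not on the driver classpath.
spark.sparkContext._jvm.java.lang.Class.forName("com.mysql.cj.jdbc.Driver")
print("com.mysql.cj.jdbc.Driver is visible to the driver JVM")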

 

Note: the right fix for this error depends on which mode Spark is currently running in. If you blindly copy answers from Stack Overflow or Baidu without checking the mode, you will find they have no effect!

This conclusion is based on the table in [1].

The solutions, organized by mode, are summarized below (columns of the original table: launch command, configuration, file to modify, mode, approaches that do NOT work, and where to read the logs):

Mode: yarn client

Launch command:
pyspark --master yarn
or
spark-shell --master yarn

Configuration (added to $SPARK_HOME/conf/spark-defaults.conf):
spark.jars = /home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar

Approaches that do NOT work:
① only placing the driver jar under $SPARK_HOME/jars
② spark.driver.extraClassPath and spark.executor.extraClassPath (both ineffective)

Where to read the logs: the terminal

Mode: yarn / standalone (cluster)

Launch command:
spark-submit --master yarn --deploy-mode cluster

Configuration (added to $SPARK_HOME/conf/spark-defaults.conf):
spark.driver.extraClassPath   = /home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar
spark.executor.extraClassPath = /home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar

Approaches that do NOT work:
① only placing the driver jar under $SPARK_HOME/jars
② spark.jars = /home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar

Where to read the logs: open http://desktop:8088/cluster in a browser
-> select the newest application ID -> the logs of its Attempt ID -> stdout -> "Click here for full log"
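Regardless of mode, you can check which of these properties the running session actually picked up by reading them back from the SparkContext configuration. A minimal sketch, assuming an existing SparkSession called spark; the property names are the ones from the table above:

# Print the jar-related properties the session actually picked up.
for key in ("spark.jars",
            "spark.driver.extraClassPath",
            "spark.executor.extraClassPath"):
    # "<not set>" is printed when the property was not applied at all
    print(key, "=", spark.sparkContext.getConf().get(key, "<not set>"))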

How to test:

① pyspark --master yarn (then type the interactive code from the appendix into the shell)

② spark-submit --master yarn --deploy-mode cluster 源碼.py (the script listed in the appendix below)

#---------------------------------------------------------- Appendix -------------------------------------------------------------------------------------------------

源碼.py (the same code is used as the interactive input in the pyspark shell):

from pyspark.sql import SparkSession

def map_extract(element):
    # element is a (file_path, content) pair from wholeTextFiles;
    # the year is taken from the last four characters of the file name before its extension.
    file_path, content = element
    year = file_path[-8:-4]
    return [(year, i) for i in content.split("\n") if i]


spark = SparkSession\
    .builder\
    .appName("PythonTest")\
    .getOrCreate()


# Sum the third CSV column per year.
res = spark.sparkContext.wholeTextFiles('hdfs://Desktop:9000/user/mercury/names',
                        minPartitions=40)  \
        .map(map_extract) \
        .flatMap(lambda x: x) \
        .map(lambda x: (x[0], int(x[1].split(',')[2]))) \
        .reduceByKey(lambda x, y: x + y)


df = res.toDF(["key", "num"])  # rename the columns to match those of the target MySQL table
# print(dir(df))
df.printSchema()
df.show()

df.write.format("jdbc").options(
    url="jdbc:mysql://127.0.0.1:3306/leaf",
    driver="com.mysql.cj.jdbc.Driver",
    dbtable="spark",
    user="appleyuchi",
    password="appleyuchi").mode('append').save()
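To confirm that the append actually reached MySQL, the same JDBC options can be used to read the table back. A minimal sketch, assuming the session and the connection details above (url, dbtable, user, and password are the same as in the write):

readback = spark.read.format("jdbc").options(
    url="jdbc:mysql://127.0.0.1:3306/leaf",
    driver="com.mysql.cj.jdbc.Driver",
    dbtable="spark",
    user="appleyuchi",
    password="appleyuchi").load()

# Should show the (key, num) rows that were just appended.
readback.show()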

 

Reference:

[1] Spark Shell Add Multiple Drivers/Jars to Classpath using spark-defaults.conf
[2] Spark Configuration
