Spark: fixing org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow

This error came up while querying data through Spark SQL's Thrift JDBC interface:

Exception in thread "main" java.sql.SQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3107 in stage 308.0 failed 4 times, most recent failure: Lost task 3107.3 in stage 308.0 (TID 620318, XXX): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 1572864, required: 3236381
Serialization trace:
values (org.apache.spark.sql.catalyst.expressions.GenericInternalRow). To avoid this, increase spark.kryoserializer.buffer.max value.
        at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:299)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
        at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
        at com.peopleyuqing.tool.SparkJDBC.excuteQuery(SparkJDBC.java:64)
        at com.peopleyuqing.main.ContentSubThree.main(ContentSubThree.java:24)

The message points at spark.kryoserializer.buffer.max: the task needed at least 3236381 bytes (about 3.1 MB) of serialization buffer, but only 1572864 bytes (1.5 MB) were available. (spark.kryoserializer.buffer is the initial size of Kryo's serialization buffer; spark.kryoserializer.buffer.max is the ceiling that buffer is allowed to grow to.)
My first attempt was to set it in spark-defaults.conf:

spark.kryoserializer.buffer.max=64m
spark.kryoserializer.buffer=64k
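
For an application you submit yourself (as opposed to the Thrift server), the same two settings can also be applied programmatically when building the SparkConf. A minimal sketch, assuming Kryo is the active serializer; the app name is a placeholder:

import org.apache.spark.SparkConf

// Placeholder app; only the three serializer settings matter here.
val conf = new SparkConf()
  .setAppName("kryo-buffer-demo")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer", "64k")     // initial size of each Kryo buffer
  .set("spark.kryoserializer.buffer.max", "64m") // ceiling the buffer may grow to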

The error persisted, and the Available figure even dropped to 0:

Exception in thread "main" java.sql.SQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3155 in stage 0.0 failed 4 times, most recent failure: Lost task 3155.3 in stage 0.0 (TID 3317, XXX): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 615328
Serialization trace:
values (org.apache.spark.sql.catalyst.expressions.GenericInternalRow). To avoid this, increase spark.kryoserializer.buffer.max value.
        at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:299)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Later I tested in spark-shell: sc.getConf.get("spark.kryoserializer.buffer.max") returned the value set in the file, 64m.
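
In the REPL the check looks roughly like this (the res counter will vary):

scala> sc.getConf.get("spark.kryoserializer.buffer.max")
res0: String = 64m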
So spark-defaults.conf itself was being read, yet the Thrift server kept overflowing, which shows that the Spark SQL Thrift JDBC server's configuration is not picked up from spark-defaults.conf. I therefore changed the configuration method and added these two startup flags:

--conf  spark.kryoserializer.buffer.max=256m  --conf spark.kryoserializer.buffer=64m 

The full startup command:

sbin/start-thriftserver.sh --executor-memory 10g  --driver-memory 12g --total-executor-cores 288 --executor-cores 2 --conf spark.kryoserializer.buffer.max=256m  --conf spark.kryoserializer.buffer=64m 
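
After restarting the Thrift server this way, one quick way to confirm the value actually reached it is to run a SET statement over the same JDBC interface. A minimal sketch in Scala; the hostname, port (10000 is the Thrift server default) and user are placeholders for your environment:

import java.sql.DriverManager

// Placeholder connection details; adjust host, port and user for your cluster.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://thrift-host:10000", "user", "")
val rs   = conn.createStatement().executeQuery("SET spark.kryoserializer.buffer.max")
val meta = rs.getMetaData
while (rs.next()) {
  // Should echo the value passed at startup (256m); column layout varies by version.
  println((1 to meta.getColumnCount).map(i => rs.getString(i)).mkString(" = "))
}
conn.close()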

Problem solved.
