Spark write to ES fails: Use 'org.elasticsearch.spark.sql' package instead

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Spark SQL types are not handled through basic RDD saveToEs() calls; typically this is a mistake(as the SQL schema will be ignored). Use 'org.elasticsearch.spark.sql' package instead

The write method was originally implemented like this:

public static void write(Dataset<Row> df, String table) {
    // df.rdd() returns an RDD<Row> without the SQL schema, so the
    // RDD-based EsSpark.saveToEs() rejects the Spark SQL types.
    EsSpark.saveToEs(df.rdd(), table);
}

Switching to the Spark SQL variant below resolves the problem, because it works on the Dataset directly and preserves its schema:

import org.elasticsearch.spark.sql.api.java.JavaEsSparkSQL;

public static void write(Dataset<Row> df, String table) {
    // JavaEsSparkSQL.saveToEs() handles Spark SQL types and keeps the schema.
    JavaEsSparkSQL.saveToEs(df, table);
}
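The exception message itself points at the 'org.elasticsearch.spark.sql' data source, so the same write can also go through the DataFrame writer API. Below is a minimal, self-contained sketch; the application name, local master, input path, index name my_index/_doc, and the localhost:9200 connection settings are placeholder assumptions, not part of the original post:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class EsWriteExample {

    public static void main(String[] args) {
        // Placeholder connection settings; point es.nodes/es.port at your cluster.
        SparkSession spark = SparkSession.builder()
                .appName("es-write-example")
                .master("local[*]")
                .config("es.nodes", "localhost")
                .config("es.port", "9200")
                .getOrCreate();

        // Hypothetical input; any Dataset<Row> with a schema works here.
        Dataset<Row> df = spark.read().json("/path/to/input.json");

        // Writing through the Spark SQL data source keeps the schema,
        // which is what the exception message asks for.
        df.write()
          .format("org.elasticsearch.spark.sql")
          .mode(SaveMode.Append)
          .save("my_index/_doc"); // target resource (index, or index/type on older ES)

        spark.stop();
    }
}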

