Using Kryo Serialization in Spark

Serialization plays an important role in distributed systems, and when tuning a Spark application the serializer is one of the first things worth optimizing. Spark offers two serialization options:

Java serialization: the default serializer.

Kryo serialization: faster and more compact than Java serialization, but it does not support every serializable type, and you need to register the classes you are going to serialize. Spark SQL already uses Kryo serialization by default internally.

The rest of this post explains how to use Kryo and compares the space usage of the different options.

Configuration

You can set the serializer globally in spark-defaults.conf, or set it on the SparkConf when initializing in code with conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"). This setting applies both to shuffling data between machines and to serializing RDDs to disk or memory.
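For the global option, the entry in spark-defaults.conf (under $SPARK_HOME/conf) is a single line; a minimal sketch, using the standard Spark property name:

spark.serializer        org.apache.spark.serializer.KryoSerializer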

Spark does not make Kryo the default serializer because it requires class registration, but the official documentation strongly recommends Kryo for any application that moves a lot of data over the network. For example:

val conf = new SparkConf()
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.registerKryoClasses(Array(classOf[MyClass1], classOf[MyClass2]))
val sc = new SparkContext(conf)
If the objects you serialize are large, increase the value of spark.kryoserializer.buffer.
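Both buffer settings below are standard Spark properties; a sketch of raising them on the SparkConf above. In recent Spark versions the buffer starts at 64k and may grow up to spark.kryoserializer.buffer.max (default 64m); serializing an object larger than the max fails with a buffer-overflow error:

conf.set("spark.kryoserializer.buffer", "1m")       // initial buffer size
conf.set("spark.kryoserializer.buffer.max", "128m") // upper bound for growth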

If you do not register the classes you serialize, Kryo still works, but it has to store the full class name alongside every object, which is wasteful; it often takes more space than the default Java serialization.
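This overhead is easy to see with the Kryo API alone, without Spark. A minimal sketch against com.esotericsoftware.kryo (the Kryo library bundled with Spark; Point is a made-up example class):

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.Output

case class Point(x: Int, y: Int)

object RegistrationOverhead {
  def main(args: Array[String]): Unit = {
    val obj = Point(1, 2)

    // Unregistered: Kryo prefixes the object with its full class name.
    val plain = new Kryo()
    val out1 = new Output(4096)
    plain.writeClassAndObject(out1, obj)

    // Registered: Kryo writes a small integer id instead of the name.
    val registered = new Kryo()
    registered.register(classOf[Point])
    val out2 = new Output(4096)
    registered.writeClassAndObject(out2, obj)

    println(s"unregistered: ${out1.position} bytes, registered: ${out2.position} bytes")
  }
}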

You can also set spark.kryo.registrationRequired to true, so that Kryo fails fast instead of silently falling back to writing full class names:
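conf.set("spark.kryo.registrationRequired", "true")

With this set, serializing a class that has not been registered fails with an error like the following: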

java.lang.IllegalArgumentException: Class is not registered: scala.collection.mutable.WrappedArray$ofRef
Note: To register this class use: kryo.register(scala.collection.mutable.WrappedArray$ofRef.class);
	at com.esotericsoftware.kryo.Kryo.getRegistration(Kryo.java:488)
	at com.esotericsoftware.kryo.util.DefaultClassResolver.writeClass(DefaultClassResolver.java:97)
	at com.esotericsoftware.kryo.Kryo.writeClass(Kryo.java:517)
	at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:622)
	at org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:207)
	at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$writeObject$1$$anonfun$apply$mcV$sp$1.apply(ParallelCollectionRDD.scala:65)
	at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$writeObject$1$$anonfun$apply$mcV$sp$1.apply(ParallelCollectionRDD.scala:65)
	at org.apache.spark.util.Utils$.serializeViaNestedStream(Utils.scala:184)
	at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$writeObject$1.apply$mcV$sp(ParallelCollectionRDD.scala:65)
	at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$writeObject$1.apply(ParallelCollectionRDD.scala:51)
	at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$writeObject$1.apply(ParallelCollectionRDD.scala:51)
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1269)
	at org.apache.spark.rdd.ParallelCollectionPartition.writeObject(ParallelCollectionRDD.scala:51)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
	at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
	at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
	at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
	at org.apache.spark.scheduler.Task$.serializeWithDependencies(Task.scala:246)
	at org.apache.spark.scheduler.TaskSetManager$$anonfun$resourceOffer$1.apply(TaskSetManager.scala:452)
	at org.apache.spark.scheduler.TaskSetManager$$anonfun$resourceOffer$1.apply(TaskSetManager.scala:432)
	at scala.Option.map(Option.scala:146)
	at org.apache.spark.scheduler.TaskSetManager.resourceOffer(TaskSetManager.scala:432)
	at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet$1.apply$mcVI$sp(TaskSchedulerImpl.scala:264)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
	at org.apache.spark.scheduler.TaskSchedulerImpl.org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet(TaskSchedulerImpl.scala:259)
	at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$3$$anonfun$apply$8.apply(TaskSchedulerImpl.scala:333)
	at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$3$$anonfun$apply$8.apply(TaskSchedulerImpl.scala:331)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$3.apply(TaskSchedulerImpl.scala:331)
	at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$3.apply(TaskSchedulerImpl.scala:328)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.TaskSchedulerImpl.resourceOffers(TaskSchedulerImpl.scala:328)
	at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:85)
	at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:64)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2018-01-08 10:40:41  [ dispatcher-event-loop-2:29860 ] - [ ERROR ]  Failed to serialize task 0, not attempting to retry it.

The fix for this error is to register the missing class as well. scala.collection.mutable.WrappedArray.ofRef shows up because Scala wraps the object array backing each partition of a parallelized collection in a WrappedArray.ofRef before it is serialized:

sparkConf.registerKryoClasses(
  Array(classOf[scala.collection.mutable.WrappedArray.ofRef[_]],
        classOf[MyClass]))
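If you prefer to keep registration in configuration rather than code, the standard spark.kryo.classesToRegister property takes a comma-separated list of fully qualified class names; a sketch for spark-defaults.conf (com.example.MyClass is a placeholder for your own class):

spark.kryo.classesToRegister    scala.collection.mutable.WrappedArray$ofRef,com.example.MyClass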
The demo below shows how each serialization option affects the space taken by a cached RDD.

Demo

import scala.collection.mutable.ArrayBuffer
import scala.util.Random

import org.apache.spark.storage.StorageLevel
import org.apache.spark.{SparkConf, SparkContext}

case class Info(name: String, age: Int, gender: String, addr: String)

object KyroTest {
  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setMaster("local[2]").setAppName("KyroTest")
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    conf.set("spark.kryo.registrationRequired", "true")
    conf.registerKryoClasses(Array(classOf[Info], classOf[scala.collection.mutable.WrappedArray.ofRef[_]]))
    val sc = new SparkContext(conf)

    // Generate one million random Info records.
    val arr = new ArrayBuffer[Info]()
    val nameArr = Array[String]("lsw", "yyy", "lss")
    val genderArr = Array[String]("male", "female")
    val addressArr = Array[String]("beijing", "shanghai", "shengzhen", "wenzhou", "hangzhou")

    for (i <- 1 to 1000000) {
      val name = nameArr(Random.nextInt(3))
      val age = Random.nextInt(100)
      val gender = genderArr(Random.nextInt(2))
      val address = addressArr(Random.nextInt(5))
      arr += Info(name, age, gender, address)
    }

    val rdd = sc.parallelize(arr)

    // Cache the RDD in memory in serialized form, then force evaluation.
    rdd.persist(StorageLevel.MEMORY_ONLY_SER)
    rdd.count()
  }
}

The size of the cached RDD is visible in the web UI (Storage tab):


Serializer   Classes registered   Space used
Kryo         yes                  21.1 MB
Kryo         no                   38.3 MB
Java         n/a                  25.1 MB
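If clicking through the UI is inconvenient, the cached size can also be read programmatically. A small sketch using SparkContext.getRDDStorageInfo (a developer API, so its exact shape may vary between Spark versions), placed after the count() in the demo:

sc.getRDDStorageInfo.foreach { info =>
  println(f"RDD ${info.id}: ${info.memSize / 1024.0 / 1024.0}%.1f MB in memory")
}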

Reposted from: http://blog.csdn.net/lsshlsw/article/details/50856842
