Spark Operations: Creation Operations

  1. Parallelized creation operations
  2. External storage creation operations
 

Parallelized Creation Operations

  • parallelize[T](seq: Seq[T], numSlices: Int=defaultParallelism):RDD[T]
# Parallelize the dataset 1 to 10. The data is split into multiple partitions according to the number of Executors that can be launched, and one task is started per partition to process it.
scala> var rdd = sc.parallelize(1 to 10)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:24

scala> rdd.collect
res5: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> rdd.partitions.size
res6: Int = 4

# Same as above, but with the number of partitions specified explicitly
scala> var rdd = sc.parallelize(1 to 10, 5)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[4] at parallelize at <console>:24

scala> rdd.collect
res7: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> rdd.partitions.size
res8: Int = 5
  • makeRDD[T](seq: Seq[(T, Seq[String])]):RDD[T]
  • makeRDD[T](seq: Seq[T], numSlices:Int=defaultParallelism):RDD[T]
This method is similar to parallelize; the difference is that makeRDD allows you to specify the preferred locations of each partition.
scala> var collect = Seq((1 to 10, Seq("master","slave1")),(11 to 15, Seq("slave2","slave3")))
collect: Seq[(scala.collection.immutable.Range.Inclusive, Seq[String])] = List((Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),List(master, slave1)), (Range(11, 12, 13, 14, 15),List(slave2, slave3)))

scala> var rdd = sc.makeRDD(collect)
rdd: org.apache.spark.rdd.RDD[scala.collection.immutable.Range.Inclusive] = ParallelCollectionRDD[5] at makeRDD at <console>:26

scala> rdd.partitions.size
res9: Int = 2

scala> rdd.preferredLocations(rdd.partitions(0))
res10: Seq[String] = List(master, slave1)

scala> rdd.preferredLocations(rdd.partitions(1))
res11: Seq[String] = List(slave2, slave3)

External Storage Creation Operations

Spark can turn any storage resource supported by Hadoop into an RDD, such as local files, HDFS files, Cassandra, HBase, Amazon S3, and so on. Text files, SequenceFiles, and any other Hadoop InputFormat are supported.
  • textFile(path:String, minPartitions:Int=defaultMinPartitions):RDD[String]
textFile accepts an optional second argument to request more partitions (splits), but it cannot produce fewer partitions than the number of HDFS blocks; by default, each block corresponds to one partition. A short sketch passing this argument follows the example below.
scala> var rdd = sc.textFile("/Users/lyf/Desktop/data.txt")
rdd: org.apache.spark.rdd.RDD[String] = /Users/lyf/Desktop/data.txt MapPartitionsRDD[7] at textFile at <console>:24

scala> rdd.collect
res12: Array[String] = Array(Hello World, Hello Tom, Hello Jerry)

scala> rdd.count
res13: Long = 3
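For illustration, a minimal sketch (reusing the same hypothetical local path as above) that passes the second argument explicitly; Spark treats the value as a lower bound on the number of partitions:
val rdd = sc.textFile("/Users/lyf/Desktop/data.txt", 4)  // request at least 4 partitions
rdd.partitions.size                                      // typically 4 or more for a splittable file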
  • wholeTextFiles(path:String, minPartitions:Int=defaultMinPartitions):RDD[(String,String)]
Reads the small files in a directory and returns (file path, file content) pairs.
scala> var rdd = sc.wholeTextFiles("/Users/lyf/Desktop/test")
rdd: org.apache.spark.rdd.RDD[(String, String)] = /Users/lyf/Desktop/test MapPartitionsRDD[9] at wholeTextFiles at <console>:24

scala> rdd.collect
res14: Array[(String, String)] =
Array((file:/Users/lyf/Desktop/test/data1.txt,"Hello World
Hello Tom
Hello Jerry
"), (file:/Users/lyf/Desktop/test/data2.txt,"This is a spark test
Hello World
"))
  • sequenceFile[K,V](path:String, minPartitions:Int=defaultMinPartitions):RDD[(K,V)]
  • sequenceFile[K,V](path:String, keyClass:Class[K], valueClass:Class[V]):RDD[(K,V)]
  • sequenceFile[K,V](path:String, keyClass:Class[K], valueClass:Class[V], minPartitions:Int):RDD[(K,V)]
The sequenceFile[K,V]() operations convert a SequenceFile into an RDD.
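A minimal sketch, assuming a hypothetical output directory /Users/lyf/Desktop/seq: write a pair RDD out with saveAsSequenceFile, then read it back with sequenceFile; the key and value type parameters must match what was written.
// Write an (Int, String) pair RDD as a SequenceFile (hypothetical path), then read it back.
val pairs = sc.parallelize(Seq((1, "Hello"), (2, "World"), (3, "Spark")))
pairs.saveAsSequenceFile("/Users/lyf/Desktop/seq")

// The type parameters must correspond to the written key/value classes.
val seqRDD = sc.sequenceFile[Int, String]("/Users/lyf/Desktop/seq")
seqRDD.collect.foreach(println)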
 
  • hadoopFile[K,V,F<:InputFormat[K,V]](path:String):RDD[(K,V)]
  • hadoopFile[K,V,F<:InputFormat[K,V]](path:String, minPartitions:Int):RDD[(K,V)]
  • hadoopFile[K,V](path:String, inputFormatClass:Class[_<:InputFormat[K,V]], keyClass:Class[K], valueClass:Class[V], minPartitions:Int=defaultMinPartitions):RDD[(K,V)]
  • newAPIHadoopFile[K,V,F<:InputFormat[K,V]](path:String, fClass:Class[F],kClass:Class[K], vClass:Class[V], conf:Configuration=hadoopConfiguration):RDD[(K,V)]
  • newAPIHadoopFile[K,V,F<:InputFormat[K,V]](path:String)(implicit km:ClassTag[K], vm:ClassTag[V], fm:ClassTag[F]):RDD[(K,V)]
  • hadoopRDD[K,V](conf:JobConf, inputFormatClass:Class[_ <:InputFormat[K,V]], keyClass:Class[K], valueClass:Class[V], minPartitions:Int=defaultMinPartitions):RDD[(K,V)]
  • newAPIHadoopRDD[K,V,F<:InputFormat[K,V]](conf:Configuration=hadoopConfiguration, fClass:Class[F], kClass:Class[K], vClass:Class[V]):RDD[(K,V)]
The hadoopRDD operations can convert any other Hadoop input type into an RDD for use.
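As a rough sketch of the old-API variant (reusing the hypothetical text file path from the textFile example), hadoopFile can read plain text through TextInputFormat; the keys are byte offsets and the values are lines.
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.TextInputFormat

// Read a text file through the classic (mapred) Hadoop API.
val hRDD = sc.hadoopFile[LongWritable, Text, TextInputFormat]("/Users/lyf/Desktop/data.txt")
// Hadoop reuses Writable objects, so convert values to Strings before collecting.
hRDD.map { case (_, line) => line.toString }.collect.foreach(println)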