Programming with RDD

- Passing functions to Spark (be careful: referencing a method or field of the containing object causes the whole object to be serialized, so that object needs to be serializable)

class SearchFunctions(val query: String) {
  def isMatch(s: String): Boolean = {
    s.contains(query)
  }
  def getMatchesFunctionReference(rdd: RDD[String]): RDD[String] = {
    // Problem: "isMatch" means "this.isMatch", so we pass all of "this"
    rdd.filter(isMatch)
  }
  def getMatchesFieldReference(rdd: RDD[String]): RDD[String] = {
    // Problem: "query" means "this.query", so we pass all of "this"
    rdd.flatMap(x => x.split(query))
  }
  def getMatchesNoReference(rdd: RDD[String]): RDD[String] = {
    // Safe: extract just the field we need into a local variable
    val query_ = this.query
    rdd.flatMap(x => x.split(query_))
  }
}

Note that passing in local serializable variables, or functions that are members of a top-level object, is always safe.
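As a sketch of the safe pattern (the object and function names here are illustrative, not from the original):

```scala
import org.apache.spark.rdd.RDD

// Functions defined on a top-level object (a Scala singleton) carry no
// "this" reference to an enclosing instance, so passing them to Spark
// serializes only the function itself.
object SearchUtils {
  def containsError(s: String): Boolean = s.contains("error")

  def getErrors(rdd: RDD[String]): RDD[String] = {
    // Safe: no enclosing instance is captured by the closure
    rdd.filter(containsError)
  }
}
```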


- Basic RDD transformations

  • map, flatMap
  • set operations: union, distinct, intersection, subtract, cartesian. These work on multiple RDDs of the same element type; pay attention to the ones that need a shuffle (distinct, intersection, subtract), and note that cartesian is very expensive on large RDDs.
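A brief sketch of these transformations, assuming an existing SparkContext `sc` (e.g. from spark-shell):

```scala
val lines = sc.parallelize(Seq("hello world", "hi spark"))

// map: exactly one output element per input element
val lengths = lines.map(line => line.length)        // RDD[Int]

// flatMap: zero or more output elements per input element
val words = lines.flatMap(line => line.split(" "))  // "hello", "world", "hi", "spark"

// Set operations require RDDs of the same element type.
val a = sc.parallelize(Seq(1, 2, 3))
val b = sc.parallelize(Seq(3, 4))
a.union(b)        // no shuffle; keeps duplicates: 1, 2, 3, 3, 4
a.distinct()      // needs a shuffle
a.intersection(b) // needs a shuffle: 3
a.subtract(b)     // needs a shuffle: 1, 2
a.cartesian(b)    // all pairs: (1,3), (1,4), (2,3), ...
```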
- Basic RDD Actions


  • reduce
  • collect
  • count
  • fold: like reduce, but curried — the "zero value" is supplied as the first parameter (in its own parameter list) and is used as the initial accumulator for the function.
  • aggregate: takes an initial value, a function to accumulate values into the accumulator on each node, and a second function to merge all the accumulated results.
  • foreach: runs a function on each element on the distributed nodes, without bringing results back to the driver.
  • take(n): returns n elements; the result is not necessarily in order.
  • RDDs are implicitly converted to specialized classes that add extra operations, e.g. RDD[Double] to DoubleRDDFunctions (mean, variance, ...).
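The fold/aggregate distinction and the implicit conversion can be sketched as follows, again assuming an existing SparkContext `sc`:

```scala
val nums = sc.parallelize(Seq(1, 2, 3, 4))

// fold: the zero value (0) is the first, curried parameter and serves as
// the initial accumulator on each partition.
val sum = nums.fold(0)((acc, x) => acc + x)   // 10

// aggregate: zero value, then a function folding values into the
// accumulator within a partition, then a function merging accumulators.
val (total, count) = nums.aggregate((0, 0))(
  (acc, x)     => (acc._1 + x, acc._2 + 1),                // per partition
  (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2))  // merge
val avg = total.toDouble / count              // 2.5

// Implicit conversion: RDD[Double] picks up DoubleRDDFunctions.
val mean = nums.map(_.toDouble).mean()        // 2.5
```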

- Persist (cached partition results are evicted from memory in LRU order when space runs out)

  • MEMORY_ONLY — deserialized objects in memory; partitions that don't fit are recomputed
  • MEMORY_ONLY_SER — serialized objects in memory; more CPU, less space
  • MEMORY_AND_DISK — spills partitions that don't fit in memory to disk
  • MEMORY_AND_DISK_SER — serialized in memory, spills to disk
  • DISK_ONLY — stores partitions only on disk
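A minimal usage sketch, assuming an existing SparkContext `sc`:

```scala
import org.apache.spark.storage.StorageLevel

val result = sc.parallelize(1 to 100).map(x => x * x)

// Mark the RDD for caching before the first action; the two actions
// below then reuse the cached partitions instead of recomputing the map.
result.persist(StorageLevel.MEMORY_ONLY)
println(result.count())
println(result.take(5).mkString(","))

// Release the cached partitions when no longer needed.
result.unpersist()
```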
