Programming with RDDs
This chapter introduces Spark's core abstraction for working with data, the resilient distributed dataset (RDD). An RDD is simply a distributed collection of elements. In Spark, all work is expressed as either creating new RDDs, transforming existing RDDs, or calling operations on RDDs to compute a result. RDDs are the core concept in Spark.
RDD Basics
An RDD in Spark is simply an immutable distributed collection of objects. Each RDD is split into multiple partitions, which may be computed on different nodes of the cluster.
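To make the idea of partitions concrete, here is a small Python sketch (not from the book; the file name and partition count are placeholders, and getNumPartitions() assumes a reasonably recent Spark version):
lines = sc.textFile("README.md", minPartitions=4)  # ask Spark for at least 4 partitions
print(lines.getNumPartitions())                    # how many partitions Spark actually created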
1. Creating an RDD
Users create RDDs in two ways: by loading an external dataset, or by distributing a collection of objects (e.g., a list or set) in their driver program. We have already seen loading a text file as an RDD of strings using SparkContext.textFile():
lines = sc.textFile("README.md")
2. Two types of operations
RDDs offer two types of operations:
- transformations
- actions
*** Transformations construct a new RDD from a previous one, for example:
pythonLines = lines.filter(lambda line: "Python" in line)
*** Actions, on the other hand, compute a result based on an RDD, and either return it to the driver program or save it to an external storage system, for example:
pythonLines.first()
Transformations and actions are different because of the way Spark computes RDDs. Specifically, you can define an RDD at any time, but Spark computes RDDs in a lazy fashion: a transformation does not actually scan and compute the RDD; the computation happens only when you call an action. For example, lines = sc.textFile(...) above does not immediately read the file into memory; only when the action lines.first() is used does Spark actually scan the RDD's data, and even then it does not scan everything: data is loaded piece by piece, and the scan stops as soon as the first matching result is found.
3. Finally, Spark's RDDs are by default recomputed each time you run an action on them. If you would like to reuse an RDD in multiple actions, you can ask Spark to persist it using RDD.persist(). This is effectively an optimization that reuses computed results. If you do not persist, Spark by default discards the computed data, which is also reasonable in big-data scenarios because it saves the cluster's precious memory.
To summarize, every Spark program and shell session will work as follows:
1. Create some input RDDs from external data.
2. Transform them to define new RDDs using transformations like filter().
3. Ask Spark to persist() any intermediate RDDs that will need to be reused.
4. Launch actions such as count() and first() to kick off a parallel computation,
which is then optimized and executed by Spark.
Tip
cache() is the same as calling persist() with the default storage level.
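Putting the four steps together, a minimal Python sketch (not from the book; the file name is just a placeholder, and persist() uses the default storage level) might look like this:
lines = sc.textFile("README.md")                           # 1. create an input RDD
pythonLines = lines.filter(lambda line: "Python" in line)  # 2. transform it into a new RDD
pythonLines.persist()                                      # 3. ask Spark to keep it around for reuse
print(pythonLines.count())                                 # 4. actions trigger the actual computation
print(pythonLines.first())                                 #    this second action reuses the persisted data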
====================================================
The rest of this chapter expands on the operations involved in the four steps above.
1. Creating RDDs
Spark provides two ways to create RDDs: loading an external dataset and parallelizing a collection in your driver program.
(1) The simplest way to create RDDs is to take an existing collection in your program and pass it to SparkContext's parallelize() method. Keep in mind, however, that outside of prototyping and testing, this is not widely used, since it requires that you have your entire dataset in memory on one machine.
examples:
lines = sc.parallelize(["pandas", "i like pandas"]) #python
JavaRDD<String> lines = sc.parallelize(Arrays.asList("pandas", "i like pandas")); //java
(2) A more common way is to load data from external storage, for example:
sc.textFile("/path/to/README.md");
RDD Operations
As we’ve discussed, RDDs support two types of operations: transformations and actions. Transformations are operations on RDDs that return a new RDD, such as map() and filter(). Actions are operations that return a result to the driver program or write it to storage, and kick off a computation, such as count() and first(). Spark treats transformations and actions very differently, so understanding which type of operation you are performing will be important. If you are ever confused whether a given function is a transformation or an action, you can look at its return type: transformations return RDDs, whereas actions return some other data type.
(1) Transformations
Transformations are operations on RDDs that return a new RDD. As discussed in “Lazy Evaluation”, transformed RDDs are computed lazily, only when you use them in an action. Many transformations are element-wise; that is, they work on one element at a time; but this is not true for all transformations.
filter() transformation in Python
inputRDD = sc.textFile("log.txt")
errorsRDD = inputRDD.filter(lambda x: "error" in x)
filter() transformation in Java
JavaRDD<String> inputRDD = sc.textFile("log.txt");
JavaRDD<String> errorsRDD = inputRDD.filter(
  new Function<String, Boolean>() {
    public Boolean call(String x) { return x.contains("error"); }
  }
);
Note that the filter() operation does not mutate the existing inputRDD. Instead, it returns a pointer to an entirely new RDD.
Finally, as you derive new RDDs from each other using transformations, Spark keeps track of the set of dependencies between different RDDs, called the lineage graph. (This graph is a bit like a class inheritance diagram; when an error occurs or data is lost, Spark can use it to recover the missing data in time.)
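As an illustrative aside (not from the book), you can peek at the lineage Spark tracks for an RDD with the toDebugString() method:
errorsRDD = sc.textFile("log.txt").filter(lambda x: "error" in x)
print(errorsRDD.toDebugString())  # prints the chain of RDDs this RDD depends on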
(2) Actions
Actions are the second type of RDD operation. They are the operations that return a final value to the driver program or write data to an external storage system.
Python error count using actions
print "Input had " + str(badLinesRDD.count()) + " concerning lines"
print "Here are 10 examples:"
for line in badLinesRDD.take(10):
    print line
Java error count using actions
System.out.println("Input had " + badLinesRDD.count() + " concerning lines");
System.out.println("Here are 10 examples:");
for (String line: badLinesRDD.take(10)) {
  System.out.println(line);
}
Note: take() vs. collect()
RDDs also have a collect() function, related to take(), which retrieves the entire RDD. Keep in mind that your entire dataset must fit in memory on a single machine to use collect() on it, so collect() shouldn't be used on large datasets.
In most cases RDDs can’t just be collect()ed to the driver because they are too large. In these cases, it’s common to write data out to a distributed storage system such as HDFS or Amazon S3. You can save the contents of an RDD using the saveAsTextFile() action, saveAsSequenceFile(), or any of a number of actions for various built-in formats. We will cover the different options for exporting data in Chapter 5.
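For example, a hedged sketch of writing the RDD out instead of collecting it (the output path is purely a placeholder):
badLinesRDD.saveAsTextFile("hdfs:///tmp/bad-lines")  # each partition is written as one part file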
It is important to note that each time we call a new action, the entire RDD must be computed “from scratch.” To avoid this inefficiency, users can persist intermediate results, as we will cover in “Persistence (Caching)”.
(3) Lazy Evaluation
Rather than thinking of
an RDD as containing specific data, it is best to think of each RDD as consisting of
instructions on how to compute the data that we build up through transformations.
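A tiny Python sketch (assuming a local log.txt exists) that makes the laziness visible:
inputRDD = sc.textFile("log.txt")                    # nothing is read yet
errorsRDD = inputRDD.filter(lambda x: "error" in x)  # still nothing is computed
print(errorsRDD.count())                             # only this action triggers the actual work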
Passing Functions to Spark
As before, functions can be passed in all three languages; here we focus on the Python and Java versions. The basic ways of passing functions have already been shown earlier. One particular pitfall in Python deserves attention:
One issue to watch out for when passing functions is inadvertently serializing the object containing the function. When you pass a function that is the member of an object, or contains references to fields in an object (e.g., self.field), Spark sends the entire object to worker nodes, which can be much larger than the bit of information you need (see Example 3-19). Sometimes this can also cause your program to fail, if your class contains objects that Python can't figure out how to pickle.
Example 3-19. Passing a function with field references (don't do this!)
class SearchFunctions(object):
    def __init__(self, query):
        self.query = query
    def isMatch(self, s):
        return self.query in s
    def getMatchesFunctionReference(self, rdd):
        # Problem: references all of "self" in "self.isMatch"
        return rdd.filter(self.isMatch)
    def getMatchesMemberReference(self, rdd):
        # Problem: references all of "self" in "self.query"
        return rdd.filter(lambda x: self.query in x)
The problem the author describes is exactly what the comments above point out: when passing a function, especially one that references a field of an object, first extract the field's value into a local variable and pass that local variable instead, like this:
class WordFunctions(object):
    ...
    def getMatchesNoReference(self, rdd):
        # Safe: extract only the field we need into a local variable
        query = self.query
        return rdd.filter(lambda x: query in x)
Java
In Java, functions are specified as objects that implement one of Spark’s function interfaces from the org.apache.spark.api.java.function package. There are a number of different interfaces based on the return type of the function. We show the most basic function interfaces in Table 3-1, and cover a number of other function interfaces for when we need to return special types of data, like key/value data, in “Java”.
Table 3-1. Standard Java function interfaces
Function name             | Method to implement  | Usage
(1) Function<T, R>        | R call(T)            | Take in one input and return one output, for use with operations like map() and filter().
(2) Function2<T1, T2, R>  | R call(T1, T2)       | Take in two inputs and return one output, for use with operations like aggregate() or fold().
(3) FlatMapFunction<T, R> | Iterable<R> call(T)  | Take in one input and return zero or more outputs, for use with operations like flatMap().
We can either define our function classes inline as anonymous inner classes (Example 3-22), or create a named class (Example 3-23).
Example 3-22. Java function passing with anonymous inner class
JavaRDD<String> errors = lines.filter(new Function<String, Boolean>() {
  public Boolean call(String x) { return x.contains("error"); }
});
Example 3-23. Java function passing with named class
class ContainsError implements Function<String, Boolean> {
  public Boolean call(String x) { return x.contains("error"); }
}
JavaRDD<String> errors = lines.filter(new ContainsError());
The style to choose is a personal preference, but we find that top-level named functions
are often cleaner for organizing large programs. One other benefit of top-level functions is
that you can give them constructor parameters, as shown in Example 3-24.
Example 3-24. Java function class with parameters
class Contains implements Function<String, Boolean> {
  private String query;
  public Contains(String query) { this.query = query; }
  public Boolean call(String x) { return x.contains(query); }
}
JavaRDD<String> errors = lines.filter(new Contains("error"));
In Java 8, you can also use lambda expressions to concisely implement the function
interfaces. Since Java 8 is still relatively new as of this writing, our examples use the more
verbose syntax for defining classes in previous versions of Java. However, with lambda
expressions, our search example would look like Example 3-25.
Example 3-25. Java function passing with lambda expression in Java 8
JavaRDD<String> errors = lines.filter(s -> s.contains("error"));
If you are interested in using Java 8’s lambda expression, refer to Oracle’s documentation
and the Databricks blog post on how to use lambdas with Spark.
Tip
Both anonymous inner classes and lambda expressions can reference any final variables
in the method enclosing them, so you can pass these variables to Spark just as in Python
and Scala.
Next, let's take a closer look at some commonly used transformations and actions.
map() takes a function and applies it to each element of the RDD, returning a new RDD of the results; the key is simply to pass map() the function that does the actual work. For example:
Python squaring the values in an RDD
nums = sc.parallelize([1, 2, 3, 4])
squared = nums.map(lambda x: x * x).collect()
for num in squared:
    print "%i " % (num)
Java squaring the values in an RDD
JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4));
JavaRDD<Integer> result = rdd.map(new Function<Integer, Integer>() {
  public Integer call(Integer x) { return x * x; }
});
System.out.println(StringUtils.join(result.collect(), ","));
filter() has already been used several times above, for example inputRDD.filter(lambda x: "error" in x).
flatMap() is used when each input element should produce multiple output elements, for example:
flatMap() in Python, splitting lines into words
lines = sc.parallelize(["hello world", "hi"])
words = lines.flatMap(lambda line: line.split(" "))
words.first() # returns "hello"
flatMap() in Java, splitting lines into multiple words
JavaRDD<String> lines = sc.parallelize(Arrays.asList("hello world", "hi"));
JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
  public Iterable<String> call(String line) {
    return Arrays.asList(line.split(" "));
  }
});
words.first(); // returns "hello"
In plain terms, the difference between map() and flatMap():
- map() applies the given operation to each input element and returns exactly one object per input element;
- flatMap() combines two operations, i.e. "map first, then flatten":
  Operation 1: like map(), apply the given operation to each input element and return one object per input element;
  Operation 2: finally, flatten all of the returned collections into a single collection.
Concretely, continuing the example above:
lines = sc.parallelize(["hello world", "hi lilei"])
wordsMap = lines.map(lambda line: line.split(" "))
wordsMap.first() # ["hello", "world"]
wordsFlatMap = lines.flatMap(lambda line: line.split(" "))
wordsFlatMap.first() # 'hello'
wordsMap: [['hello', 'world'], ['hi', 'lilei']]
wordsFlatMap: ['hello', 'world', 'hi', 'lilei']
RDDs also support some pseudo set operations,
including distinct(), union(), intersection(), and subtract() (set difference).
cartesian() computes the Cartesian product of two RDDs.
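A small Python sketch of these pseudo set operations (the sample data is hypothetical, and result ordering is not guaranteed):
a = sc.parallelize(["coffee", "coffee", "panda", "monkey", "tea"])
b = sc.parallelize(["coffee", "monkey", "kitty"])
a.distinct().collect()       # duplicates removed: coffee, panda, monkey, tea
a.union(b).collect()         # all elements of both RDDs, duplicates kept
a.intersection(b).collect()  # coffee, monkey
a.subtract(b).collect()      # panda, tea
a.cartesian(b).collect()     # every (x, y) pair, e.g. ("coffee", "kitty")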
===== A summary of the common transformations =====
1. Operations on a single RDD
map(): operates on each element; the number of output elements equals the number of input elements
flatMap(): operates on each element, ultimately breaking every element down into smaller, indivisible pieces
filter(), distinct()
2. Operations on two RDDs
union(), intersection(), subtract(), cartesian()
Actions
The reduce() operation makes it easy to compute sums, count the elements of an RDD, and so on: reduce(binary_function).
reduce() passes the first two elements of the RDD to the input function and produces a new return value; that return value is then paired with the next element of the RDD (the third one) and passed to the input function again, and so on until only a single value remains.
Example code in Python:
sum = rdd.reduce(lambda x, y: x + y)
Example code in Java:
Integer sum = rdd.reduce(new Function2<Integer, Integer, Integer>() {
public Integer call(Integer x, Integer y) { return x + y; }
});
aggregate(): the aggregate() function first aggregates the elements within each partition, then uses a combine function to merge each partition's result with the initial value (zeroValue). The final return type does not need to match the element type of the RDD. Honestly, the Python example of aggregate() below is not entirely obvious, but the Java version is easier to follow.
aggregate() in Python:
sumCount = nums.aggregate((0, 0),
    (lambda acc, value: (acc[0] + value, acc[1] + 1)),
    (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1])))
return sumCount[0] / float(sumCount[1])
aggregate() in Java:
class AvgCount implements Serializable {
  public AvgCount(int total, int num) {
    this.total = total;
    this.num = num;
  }
  public int total;
  public int num;
  public double avg() {
    return total / (double) num;
  }
}
Function2<AvgCount, Integer, AvgCount> addAndCount =
  new Function2<AvgCount, Integer, AvgCount>() {
    public AvgCount call(AvgCount a, Integer x) {
      a.total += x;
      a.num += 1;
      return a;
    }
  };
Function2<AvgCount, AvgCount, AvgCount> combine =
  new Function2<AvgCount, AvgCount, AvgCount>() {
    public AvgCount call(AvgCount a, AvgCount b) {
      a.total += b.total;
      a.num += b.num;
      return a;
    }
  };
AvgCount initial = new AvgCount(0, 0);
AvgCount result = rdd.aggregate(initial, addAndCount, combine);
System.out.println(result.avg());
collect() returns the entire RDD's contents. collect() is commonly used in unit tests where the entire contents of the RDD are expected to fit in memory, as that makes it easy to compare the value of our RDD with our expected result.
take(n) returns n elements from the RDD and attempts to minimize the number of
partitions it accesses, so it may represent a biased collection. It’s important to note that
these operations do not return the elements in the order you might expect.
top(): if there is an ordering defined on our data, we can also extract the top elements from an RDD using top(). top() will use the default ordering on the data, but we can supply our own comparison function to extract the top elements.
takeSample(withReplacement, num, seed) allows us to take a sample of our data either with or without replacement.
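A quick, hedged Python sketch of these actions on a small RDD of numbers (the exact output of take() and takeSample() depends on partitioning and the seed):
nums = sc.parallelize([5, 3, 1, 4, 2])
nums.take(2)                        # two elements, not necessarily in sorted order
nums.top(2)                         # [5, 4]: the largest elements under the default ordering
nums.takeSample(False, 3, seed=42)  # 3 elements sampled without replacement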
There are of course many more actions; they are not all listed here. See the summary table on page 69 of the book for details.
Converting Between RDD Types
Some functions in Spark can only be used on RDDs of numbers (numeric RDDs), and some only on key/value pair RDDs. Note that in Scala and Java these functions are not defined on the standard RDD class.
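In Python, by contrast, these extra methods are simply available on the base RDD class and fail at runtime if the data has the wrong type. A brief sketch with assumed sample data:
nums = sc.parallelize([1.0, 2.0, 3.0, 4.0])
print(nums.mean(), nums.variance())                     # numeric-RDD style methods
pairs = sc.parallelize([("a", 1), ("a", 2), ("b", 3)])
print(pairs.reduceByKey(lambda x, y: x + y).collect())  # a key/value-RDD style method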
Persistence (Caching)
This section covers Spark's caching mechanism: some data needs to be used multiple times, so the corresponding RDD can be cached on the machines that computed it.
Caching is available in Python, Java, and Scala. Java and Scala behave the same way: by default the RDD's data is cached unserialized in the JVM heap. In Python, the data is always serialized (pickled) when it is persisted.
There are several persistence levels; see Table 3-6 on page 72 for details. They include MEMORY_ONLY, MEMORY_ONLY_SER, MEMORY_AND_DISK, MEMORY_AND_DISK_SER, and DISK_ONLY,
and the table also compares CPU time, memory usage, and so on for each storage level. Below is a Scala example:
val result = input.map(x => x * x)
result.persist(StorageLevel.DISK_ONLY)
println(result.count())
println(result.collect().mkString(","))
Notice that we called persist() on the RDD before the first action. The persist() call
on its own doesn’t force evaluation.
If you attempt to cache too much data to fit in memory, Spark will automatically evict old partitions using a Least Recently Used (LRU) cache policy. For the memory-only storage levels, it will recompute these partitions the next time they are accessed, while for the memory-and-disk ones, it will write them out to disk. In either case, this means that you don't have to worry about your job breaking if you ask Spark to cache too much data. However, caching unnecessary data can lead to eviction of useful data and more recomputation time.
Finally, RDDs come with a method called unpersist() that lets you manually remove
them from the cache.
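A small Python sketch (assumed names and data) of the cache-then-unpersist cycle:
squared = sc.parallelize([1, 2, 3, 4]).map(lambda x: x * x)
squared.cache()           # same as persist() with the default storage level
print(squared.count())    # the first action computes and caches the partitions
print(squared.collect())  # subsequent actions reuse the cached data
squared.unpersist()       # manually remove it from the cache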
That concludes Chapter 3. In the closing remarks the author says: "The ability to always recompute an RDD is actually why RDDs are called 'resilient.' When a machine holding RDD data fails, Spark uses this ability to recompute the missing partitions, transparent to the user."