MapReduce is a programming framework for distributed computation and the core framework for building Hadoop-based data-analysis applications.
Its core job is to combine the business logic written by the user with the framework's built-in default components into a complete distributed program that runs concurrently on a Hadoop cluster.
Advantages:
Easy to program
By implementing a few simple interfaces you get a distributed program that can run on a large number of inexpensive PC machines; writing a distributed program feels much like writing a simple serial program. This is the main reason MapReduce programming became so popular.
Good scalability
When computing resources become insufficient, you can extend the computing capacity simply by adding machines.
High fault tolerance
MapReduce was designed from the start to run on cheap PC hardware, so it must be highly fault tolerant: if one machine fails, the computation tasks on it are moved to another node so the job does not fail, and this happens entirely inside Hadoop without human intervention.
Suited to offline processing of PB-scale data
Thousands of servers can work concurrently in one cluster, providing massive data-processing capacity.
Disadvantages:
Not suited to real-time computation
MapReduce cannot return results within milliseconds or seconds the way MySQL can.
Not suited to stream processing
The input of stream processing is dynamic, while a MapReduce input data set is static and cannot change; MapReduce's design requires the data source to be static.
Not suited to DAG (directed acyclic graph) computation
When multiple applications depend on each other, with the output of one job serving as the input of the next, MapReduce can technically do it, but every MapReduce job writes its output to disk, which causes a huge amount of disk IO and very poor performance.
A distributed computation program usually has to be split into at least two phases.
The MapTask concurrent instances of the first phase run fully in parallel and are independent of each other.
The ReduceTask concurrent instances of the second phase are also independent of each other, but their input depends on the output of all the MapTask instances of the previous phase.
The MapReduce programming model contains only one Map phase and one Reduce phase; if the business logic is very complex, the only option is to run several MapReduce programs serially.
When a complete MapReduce program runs in distributed mode, there are three kinds of instance processes:
MrAppMaster: responsible for scheduling the whole program and coordinating its state
MapTask: responsible for the entire data-processing flow of the Map phase
ReduceTask: responsible for the entire data-processing flow of the Reduce phase
Java type | Hadoop Writable type |
---|---|
Boolean | BooleanWritable |
Byte | ByteWritable |
Int | IntWritable |
Float | FloatWritable |
Long | LongWritable |
Double | DoubleWritable |
String | Text |
Map | MapWritable |
Array | ArrayWritable |
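As a quick illustration of the mapping above, Java values are wrapped in their Writable counterparts before being emitted and unwrapped again with get()/toString(). A minimal standalone sketch (the class name WritableDemo is ours, not from the original):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class WritableDemo {
    public static void main(String[] args) {
        // wrap Java values in their Writable counterparts ...
        IntWritable count = new IntWritable(42);
        LongWritable offset = new LongWritable(1024L);
        Text word = new Text("hadoop");
        // ... and unwrap them again with get() / toString()
        int c = count.get();
        long o = offset.get();
        String w = word.toString();
        System.out.println(c + " " + o + " " + w);
    }
}
```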
A user-written MapReduce program consists of three parts:
Mapper phase
Reducer phase
Driver phase
Import the dependencies
```xml
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
</dependencies>
```
log4j.properties
```properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
```
WcMapper
```java
package com.djm.mapreduce;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class WcMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private Text key = new Text();
    private IntWritable one = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split(" ");
        for (String word : words) {
            this.key.set(word);
            context.write(this.key, this.one);
        }
    }
}
```
WcReduce
```java
package com.djm.mapreduce;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class WcReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            // sum the actual counts (also correct if a Combiner pre-aggregates them)
            sum += count.get();
        }
        this.total.set(sum);
        context.write(key, this.total);
    }
}
```
WcDriver
```java
package com.djm.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WcDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // obtain the job
        Job job = Job.getInstance(new Configuration());
        // set the classpath (the jar containing this driver)
        job.setJarByClass(WcDriver.class);
        // set the Mapper
        job.setMapperClass(WcMapper.class);
        // set the Reducer
        job.setReducerClass(WcReduce.class);
        // set the Mapper output key and value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // set the Reducer output key and value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
Serializable is a heavyweight Java serialization framework: a serialized object carries a lot of extra information (checksums, headers, the inheritance hierarchy, and so on), which produces a lot of IO and makes it inefficient to transmit over the network. For this reason Hadoop developed its own lightweight serialization framework, Writable.
Characteristics of Hadoop serialization:
1. Compact: makes efficient use of storage space.
2. Fast: very little overhead when reading and writing data.
3. Extensible: can evolve as the communication protocol is upgraded.
4. Interoperable: supports interaction across multiple languages.
In practice the built-in serialization types are often not enough; in most cases you need to create a bean that implements the Writable interface.
Making a bean serializable takes the following seven steps:
1. Implement the Writable interface.
2. Deserialization calls the no-argument constructor via reflection, so a no-argument constructor must be provided.
3. Override the serialization method (write).
4. Override the deserialization method (readFields).
5. The field order during deserialization must be exactly the same as during serialization.
6. To make the result readable in the output file, override toString().
7. If the custom bean is transmitted as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires keys to be sortable.
Requirement: compute the total upstream traffic, downstream traffic, and total traffic consumed by each phone number.
Input record format: id, phone number, network IP, upstream traffic, downstream traffic, network status code
Output record format: phone number, upstream traffic, downstream traffic, total traffic
FlowBean
```java
package com.djm.mapreduce.flow;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements Writable {
    private long upFlow;
    private long downFlow;
    private long sumFlow;

    public FlowBean() {
    }

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = this.upFlow + this.downFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    public void readFields(DataInput in) throws IOException {
        this.upFlow = in.readLong();
        this.downFlow = in.readLong();
        this.sumFlow = in.readLong();
    }
}
```
FlowMapper
```java
package com.djm.mapreduce.flow;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
    private FlowBean flowBean = new FlowBean();
    private Text phone = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split("\t");
        phone.set(words[1]);
        long upFlow = Long.parseLong(words[words.length - 3]);
        long downFlow = Long.parseLong(words[words.length - 2]);
        flowBean.set(upFlow, downFlow);
        context.write(phone, flowBean);
    }
}
```
FlowReduce
```java
package com.djm.mapreduce.flow;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowReduce extends Reducer<Text, FlowBean, Text, FlowBean> {
    private FlowBean totalFlow = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        long sumUpFlow = 0;
        long sumDownFlow = 0;
        for (FlowBean value : values) {
            long upFlow = value.getUpFlow();
            long downFlow = value.getDownFlow();
            sumUpFlow += upFlow;
            sumDownFlow += downFlow;
        }
        totalFlow.set(sumUpFlow, sumDownFlow);
        context.write(key, totalFlow);
    }
}
```
FlowDriver
```java
package com.djm.mapreduce.flow;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(FlowDriver.class);
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
The Map-phase parallelism of a Job is determined by the number of input splits computed by the client when the Job is submitted.
Each split is processed by one MapTask instance.
By default, split size = block size.
Splitting does not consider the data set as a whole; each input file is split separately.
Split mechanism:
How is the split size computed in the source code?
How do you customize the split size?
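For reference, in Hadoop 2.x FileInputFormat.computeSplitSize() resolves to Math.max(minSize, Math.min(maxSize, blockSize)), so the split size is tuned from the driver by raising the minimum above the block size (larger splits) or lowering the maximum below it (smaller splits). A minimal sketch, assuming an existing Job named job; the helper class name and the byte values are ours, not from the original:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeConfig {
    // splitSize = Math.max(minSize, Math.min(maxSize, blockSize)) in FileInputFormat.computeSplitSize()
    public static void configure(Job job) {
        // raise the minimum above the block size to get larger splits ...
        FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024);
        // ... or lower the maximum below the block size to get smaller splits
        FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);
    }
}
```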
CombineTextInputFormat is meant for scenarios with too many small files: it logically packs multiple small files into one split, so that many small files can be handed to a single MapTask.
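A minimal driver sketch of enabling it, assuming an existing Job; the 4 MB threshold is only an example value:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

public class CombineInputConfig {
    public static void configure(Job job) {
        // use CombineTextInputFormat instead of the default TextInputFormat
        job.setInputFormatClass(CombineTextInputFormat.class);
        // pack small files together until a virtual split reaches roughly 4 MB
        CombineTextInputFormat.setMaxInputSplitSize(job, 4194304);
    }
}
```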
TextInputFormat:
TextInputFormat is the default FileInputFormat implementation. It reads records line by line; the key is the byte offset of the start of the line within the whole file (LongWritable), and the value is the content of the line excluding any line terminators (newline or carriage return), of type Text.
KeyValueTextInputFormat:
Every line is one record, split into a key and a value by a separator. The separator is configured in the driver with conf.set(KeyValueLineRecordReader.KEY_VALUE_SEPERATOR, "\t"); the default separator is the tab character.
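A small driver sketch of how the separator setting and the input format fit together, assuming the Job is built from the same Configuration (the helper class name is ours):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class KvInputConfig {
    public static Job build(Configuration conf) throws Exception {
        // set the separator on the Configuration that the Job will use
        conf.set(KeyValueLineRecordReader.KEY_VALUE_SEPERATOR, " ");
        Job job = Job.getInstance(conf);
        // each line is split at the first separator into (key, value)
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        return job;
    }
}
```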
NLineInputFormat:
With NLineInputFormat, the InputSplits processed by each map task are no longer divided by HDFS blocks but by the number of lines N specified for NLineInputFormat: number of splits = total input lines / N, and if that does not divide evenly, number of splits = quotient + 1.
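A minimal driver sketch, assuming an existing Job and N = 3 lines per split (the value 3 is only an example):

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

public class NLineInputConfig {
    public static void configure(Job job) {
        // each split now covers 3 input lines instead of one HDFS block
        NLineInputFormat.setNumLinesPerSplit(job, 3);
        job.setInputFormatClass(NLineInputFormat.class);
    }
}
```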
Both HDFS and MapReduce are very inefficient at handling small files, yet scenarios with large numbers of small files are often unavoidable, so a solution is needed: one option is to implement a custom InputFormat that merges small files.
Implementation:
WholeFileInputformat
```java
package com.djm.mapreduce.inputformat;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;

public class WholeFileInputformat extends FileInputFormat<Text, BytesWritable> {
    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }

    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
        return new WholeRecordReader();
    }
}
```
WholeRecordReader
```java
package com.djm.mapreduce.inputformat;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;

public class WholeRecordReader extends RecordReader<Text, BytesWritable> {
    private boolean notRead = true;
    private Text key = new Text();
    private BytesWritable value = new BytesWritable();
    private FSDataInputStream fis;
    private FileSplit fs;

    /**
     * Initialization method, called once by the framework at the beginning
     */
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
        // cast the split to a file split
        fs = (FileSplit) split;
        // get the file path from the split
        Path path = fs.getPath();
        // get the file system from the path
        FileSystem fileSystem = path.getFileSystem(context.getConfiguration());
        // open the stream
        fis = fileSystem.open(path);
    }

    /**
     * Read the next key/value pair
     */
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (notRead) {
            // the key is the file path
            key.set(fs.getPath().toString());
            // the value is the whole file content (readFully guarantees the buffer is filled)
            byte[] buf = new byte[(int) fs.getLength()];
            IOUtils.readFully(fis, buf, 0, buf.length);
            value.set(buf, 0, buf.length);
            notRead = false;
            return true;
        } else {
            return false;
        }
    }

    /**
     * Return the current key
     */
    public Text getCurrentKey() throws IOException, InterruptedException {
        return this.key;
    }

    /**
     * Return the current value
     */
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return this.value;
    }

    /**
     * Report the reading progress
     */
    public float getProgress() throws IOException, InterruptedException {
        return notRead ? 0 : 1;
    }

    /**
     * Release resources
     */
    public void close() throws IOException {
        if (fis != null) {
            fis.close();
        }
    }
}
```
WholeFileDriver
```java
package com.djm.mapreduce.inputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

import java.io.IOException;

public class WholeFileDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(WholeFileDriver.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);
        job.setInputFormatClass(WholeFileInputformat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```
The flow above is the complete MapReduce working flow, but the Shuffle process only covers steps 7 through 16 of it. In detail, Shuffle works as follows:
1) The MapTask collects the KV pairs emitted by our map() method and puts them into an in-memory ring buffer.
2) The buffer keeps spilling to local disk files; it may spill several files.
3) Multiple spill files are merged into one larger spill file.
4) During both spilling and merging, the Partitioner is invoked to partition the data, and the data is sorted by key.
5) Each ReduceTask fetches the data of its own partition from every MapTask machine.
6) The ReduceTask collects the result files of the same partition from the different MapTasks and merges them again (merge sort).
7) Once the merge into one large file is done, the Shuffle process ends and the ReduceTask's logical computation begins: it takes one key group after another from the file and calls the user-defined reduce() method.
Partitioning lets you write the results to different output files according to some condition.
Default Partitioner:
```java
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```
The default partition is determined by taking the key's hashCode modulo the number of ReduceTasks.
Steps to define a custom Partitioner:
```java
public class CustomPartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {
        // partition-control logic goes here
        int partition = 0;
        return partition;
    }
}
```
Note:
Requirements analysis:
Implementation:
ProvincePartitioner

```java
package com.djm.mapreduce.partitioner;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ProvincePartitioner extends Partitioner<FlowBean, Text> {
    @Override
    public int getPartition(FlowBean flowBean, Text text, int numPartitions) {
        switch (text.toString().substring(0, 3)) {
            case "136":
                return 0;
            case "137":
                return 1;
            case "138":
                return 2;
            case "139":
                return 3;
            default:
                return 4;
        }
    }
}
```

PartitionerFlowDriver

```java
package com.djm.mapreduce.partitioner;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class PartitionerFlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(PartitionerFlowDriver.class);
        job.setMapperClass(SortMapper.class);
        job.setReducerClass(SortReduce.class);
        job.setMapOutputKeyClass(FlowBean.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        job.setPartitionerClass(ProvincePartitioner.class);
        job.setNumReduceTasks(5);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
Sorting is one of the most important operations in the MapReduce framework. Both MapTask and ReduceTask sort their data by key; this is default Hadoop behavior, and the data in every application gets sorted whether the logic needs it or not.
The default order is lexicographic (dictionary) order, implemented with quicksort:
For the MapTask, results are placed temporarily into the ring buffer; once buffer usage reaches a threshold, a quicksort is run on the buffered data and the sorted data is spilled to disk. When all data has been processed, the MapTask merge-sorts all the files on disk.
For the ReduceTask, it copies the corresponding data files remotely from every MapTask; if a file exceeds a size threshold it is spilled to disk, otherwise it is kept in memory. If the number of files on disk reaches a threshold, a merge sort is run to produce one larger file; if the size or number of in-memory files exceeds a threshold, the data is merged and spilled to disk. When all data has been copied, the ReduceTask performs one final merge sort over all data in memory and on disk.
Sort categories:
Requirements analysis:
Implementation:
```java
package com.djm.mapreduce.partitioner;

import lombok.Data;
import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

@Data
public class FlowBean implements WritableComparable<FlowBean> {
    private long upFlow;
    private long downFlow;
    private long sumFlow;

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = this.upFlow + this.downFlow;
    }

    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    public void readFields(DataInput in) throws IOException {
        this.upFlow = in.readLong();
        this.downFlow = in.readLong();
        this.sumFlow = in.readLong();
    }

    @Override
    public int compareTo(FlowBean o) {
        // descending order by total flow (returns 0 when equal)
        return Long.compare(o.sumFlow, this.sumFlow);
    }
}
```
Group the data entering the Reduce phase by one or more fields.
Steps for grouping comparison:
Define a class that extends WritableComparator.
Override the compare() method.
Create a constructor that passes the class of the objects being compared to the parent class:

```java
protected OrderGroupingComparator() {
    super(OrderBean.class, true);
}
```

Requirements analysis:
Implementation:
OrderBean

```java
package com.djm.mapreduce.order;

import lombok.Data;
import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

@Data
public class OrderBean implements WritableComparable<OrderBean> {
    private String orderId;
    private String productId;
    private double price;

    @Override
    public int compareTo(OrderBean o) {
        int compare = this.orderId.compareTo(o.orderId);
        if (compare == 0) {
            // same order: sort by price in descending order
            return Double.compare(o.price, this.price);
        } else {
            return compare;
        }
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(orderId);
        out.writeUTF(productId);
        out.writeDouble(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.orderId = in.readUTF();
        this.productId = in.readUTF();
        this.price = in.readDouble();
    }
}
```

OrderSortGroupingComparator

```java
package com.djm.mapreduce.order;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

public class OrderSortGroupingComparator extends WritableComparator {
    public OrderSortGroupingComparator() {
        super(OrderBean.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        // group only by orderId, so all records of one order go to one reduce() call
        OrderBean oa = (OrderBean) a;
        OrderBean ob = (OrderBean) b;
        return oa.getOrderId().compareTo(ob.getOrderId());
    }
}
```

OrderSortDriver

```java
package com.djm.mapreduce.order;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class OrderSortDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(OrderSortDriver.class);
        job.setMapperClass(OrderSortMapper.class);
        job.setReducerClass(OrderSortReduce.class);
        job.setMapOutputKeyClass(OrderBean.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setGroupingComparatorClass(OrderSortGroupingComparator.class);
        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
MapTask working mechanism:
1) Read phase: the MapTask parses individual key/value pairs out of the input InputSplit through the RecordReader.
2) Map phase: the parsed key/value pairs are handed to the user-written map() function, which produces a series of new key/value pairs.
3) Collect phase: when the user's map() finishes processing a record, it usually calls OutputCollector.collect() to emit the result; inside that call the generated key/value is partitioned (the Partitioner is invoked) and written into a ring memory buffer.
4) Spill phase: when the ring buffer fills up, MapReduce writes the data to local disk as a temporary file. Note that before the data hits disk it is sorted locally, and if necessary combined and compressed.
5) Combine phase: when all data has been processed, the MapTask merges all temporary files once so that only one data file is finally produced.
6) When all data has been processed, the MapTask merges all temporary files into one large file saved as output/file.out, along with the corresponding index file output/file.out.index.
7) During the file merge, the MapTask merges partition by partition. For a given partition it uses multiple rounds of recursive merging: each round merges io.sort.factor (default 10) files and puts the resulting file back into the list to be merged; after sorting the files, this process repeats until a single large file remains.
8) Having each MapTask produce only one final data file avoids the overhead of opening a large number of files at once and the random reads caused by reading many small files at the same time.
ReduceTask working mechanism:
1) Copy phase: the ReduceTask copies a slice of data remotely from each MapTask; if a slice exceeds a certain size threshold it is written to disk, otherwise it is kept in memory.
2) Merge phase: while data is being copied, the ReduceTask runs two background threads that merge in-memory and on-disk files, to prevent excessive memory usage or too many files on disk.
3) Sort phase: according to MapReduce semantics, the input to the user-written reduce() function is a group of data aggregated by key. To bring identical keys together, Hadoop uses a sort-based strategy; since every MapTask has already partially sorted its own output, the ReduceTask only needs to run one merge sort over all the data.
4) Reduce phase: the reduce() function writes the computed results to HDFS.
The ReduceTask parallelism likewise affects the concurrency and efficiency of the whole Job. Unlike the MapTask parallelism, which is determined by the number of splits, the number of ReduceTasks can be set directly by hand:

```java
job.setNumReduceTasks(4);
```

Notes:
Requirements analysis:
Implementation:
FilterOutputFormat

```java
package com.djm.mapreduce.outputformat;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FilterOutputFormat extends FileOutputFormat<Text, NullWritable> {
    @Override
    public RecordWriter<Text, NullWritable> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
        return new FilterRecordWriter(job);
    }
}
```

FilterRecordWriter

```java
package com.djm.mapreduce.outputformat;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

import java.io.IOException;

public class FilterRecordWriter extends RecordWriter<Text, NullWritable> {
    private FSDataOutputStream atguiguOut = null;
    private FSDataOutputStream otherOut = null;

    public FilterRecordWriter() {
    }

    public FilterRecordWriter(TaskAttemptContext job) {
        FileSystem fs;
        try {
            fs = FileSystem.get(job.getConfiguration());
            Path atguigu = new Path("C:\\Application\\Apache\\hadoop-2.7.2\\djm.log");
            Path other = new Path("C:\\Application\\Apache\\hadoop-2.7.2\\other.log");
            atguiguOut = fs.create(atguigu);
            otherOut = fs.create(other);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void write(Text key, NullWritable value) throws IOException, InterruptedException {
        if (key.toString().contains("atguigu")) {
            atguiguOut.write(key.toString().getBytes());
        } else {
            otherOut.write(key.toString().getBytes());
        }
    }

    @Override
    public void close(TaskAttemptContext context) throws IOException, InterruptedException {
        IOUtils.closeStream(atguiguOut);
        IOUtils.closeStream(otherOut);
    }
}
```

FilterDriver

```java
package com.djm.mapreduce.outputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FilterDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(FilterDriver.class);
        job.setMapperClass(FilterMapper.class);
        job.setReducerClass(FilterReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        job.setOutputFormatClass(FilterOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
How it works:
Map side
Tag key/value pairs coming from different tables or files so the source of each record can be distinguished, then use the join field as the key and the rest of the record plus the newly added tag as the value, and emit them.
Reduce side
On the Reduce side the grouping by the join field (the key) is already done; within each group we only need to separate the records that came from different files, and then combine them.
Requirements analysis:
Implementation:
TableBean

```java
package com.djm.mapreduce.table;

import lombok.Data;
import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

@Data
public class TableBean implements Writable {
    private String orderId;
    private String productId;
    private int amount;
    private String pname;
    private String flag;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(orderId);
        out.writeUTF(productId);
        out.writeInt(amount);
        out.writeUTF(pname);
        out.writeUTF(flag);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.orderId = in.readUTF();
        this.productId = in.readUTF();
        this.amount = in.readInt();
        this.pname = in.readUTF();
        this.flag = in.readUTF();
    }
}
```

TableMapper

```java
package com.djm.mapreduce.table;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class TableMapper extends Mapper<LongWritable, Text, Text, TableBean> {
    String name;
    TableBean bean = new TableBean();
    Text k = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // remember which input file this MapTask is reading
        FileSplit split = (FileSplit) context.getInputSplit();
        name = split.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        if (name.startsWith("order")) {
            // order table record
            String[] fields = line.split("\t");
            bean.setOrderId(fields[0]);
            bean.setProductId(fields[1]);
            bean.setAmount(Integer.parseInt(fields[2]));
            bean.setPname("");
            bean.setFlag("order");
            k.set(fields[1]);
        } else {
            // product table record
            String[] fields = line.split("\t");
            bean.setProductId(fields[0]);
            bean.setPname(fields[1]);
            bean.setFlag("pd");
            bean.setAmount(0);
            bean.setOrderId("");
            k.set(fields[0]);
        }
        context.write(k, bean);
    }
}
```

TableReducer

```java
package com.djm.mapreduce.table;

import org.apache.commons.beanutils.BeanUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;

public class TableReducer extends Reducer<Text, TableBean, TableBean, NullWritable> {
    @Override
    protected void reduce(Text key, Iterable<TableBean> values, Context context) throws IOException, InterruptedException {
        ArrayList<TableBean> orderBeans = new ArrayList<>();
        TableBean pdBean = new TableBean();
        for (TableBean bean : values) {
            if ("order".equals(bean.getFlag())) {
                TableBean orderBean = new TableBean();
                try {
                    BeanUtils.copyProperties(orderBean, bean);
                } catch (IllegalAccessException | InvocationTargetException e) {
                    e.printStackTrace();
                }
                orderBeans.add(orderBean);
            } else {
                try {
                    BeanUtils.copyProperties(pdBean, bean);
                } catch (IllegalAccessException | InvocationTargetException e) {
                    e.printStackTrace();
                }
            }
        }
        // fill the product name into every order record of this key
        for (TableBean bean : orderBeans) {
            bean.setPname(pdBean.getPname());
            context.write(bean, NullWritable.get());
        }
    }
}
```

TableDriver

```java
package com.djm.mapreduce.table;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class TableDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(TableDriver.class);
        job.setMapperClass(TableMapper.class);
        job.setReducerClass(TableReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(TableBean.class);
        job.setOutputKeyClass(TableBean.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
Map Join is suitable when one table is very small and the other is very large.
Advantages:
By caching the small tables on the Map side and doing the join logic there ahead of time, work is added to the Map side but the pressure on the Reduce side is reduced, which minimizes data skew as much as possible.
Requirements analysis:
Implementation:
TableMapper

```java
package com.djm.mapreduce.table;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class TableMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    private Text k = new Text();
    private Map<String, String> pdMap = new HashMap<>();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // read the small product table from the distributed cache into memory
        URI[] cacheFiles = context.getCacheFiles();
        String path = cacheFiles[0].getPath();
        BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(path), StandardCharsets.UTF_8));
        String line;
        while (StringUtils.isNotEmpty(line = reader.readLine())) {
            String[] fields = line.split("\t");
            pdMap.put(fields[0], fields[1]);
        }
        reader.close();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // join each order record with the cached product name on the map side
        String[] fields = value.toString().split("\t");
        String pId = fields[1];
        String pdName = pdMap.get(pId);
        k.set(fields[0] + "\t" + pdName + "\t" + fields[2]);
        context.write(k, NullWritable.get());
    }
}
```

TableDriver

```java
package com.djm.mapreduce.table;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class TableDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException, URISyntaxException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(TableDriver.class);
        job.setMapperClass(TableMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // cache the small table so every MapTask can load it in setup()
        job.addCacheFile(new URI("file:///C:/Application/Apache/hadoop-2.7.2/input/pd.txt"));
        // map-only job: no Reduce phase is needed
        job.setNumReduceTasks(0);
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
Before running the core business MapReduce program it is often necessary to clean the data first, removing records that do not meet the requirements. The cleaning usually only needs a Mapper; no Reducer is required.
Requirements analysis:
The input data must be filtered and cleaned in the Map phase according to a set of rules.
Implementation:
LogBean

```java
package com.djm.mapreduce.etl;

import lombok.Data;

@Data
public class LogBean {
    private String remoteAddr;
    private String remoteUser;
    private String timeLocal;
    private String request;
    private String status;
    private String bodyBytesSent;
    private String httpReferer;
    private String httpUserAgent;
    private boolean valid = true;
}
```

LogMapper

```java
package com.djm.mapreduce.etl;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class LogMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    private Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        LogBean bean = parseLog(line);
        // drop records that do not pass the cleaning rules
        if (!bean.isValid()) {
            return;
        }
        k.set(bean.toString());
        context.write(k, NullWritable.get());
    }

    private LogBean parseLog(String line) {
        LogBean logBean = new LogBean();
        String[] fields = line.split(" ");
        if (fields.length > 11) {
            logBean.setRemoteAddr(fields[0]);
            logBean.setRemoteUser(fields[1]);
            logBean.setTimeLocal(fields[3].substring(1));
            logBean.setRequest(fields[6]);
            logBean.setStatus(fields[8]);
            logBean.setBodyBytesSent(fields[9]);
            logBean.setHttpReferer(fields[10]);
            if (fields.length > 12) {
                logBean.setHttpUserAgent(fields[11] + " " + fields[12]);
            } else {
                logBean.setHttpUserAgent(fields[11]);
            }
            if (Integer.parseInt(logBean.getStatus()) >= 400) {
                logBean.setValid(false);
            }
        } else {
            logBean.setValid(false);
        }
        return logBean;
    }
}
```

LogDriver

```java
package com.djm.mapreduce.etl;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class LogDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(LogDriver.class);
        job.setMapperClass(LogMapper.class);
        // map-only job
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
```
When writing a MapReduce program, the following components need to be considered:
Mapper
Partitioner
The default implementation is HashPartitioner; its logic is to derive the partition number from the key's hash code and the number of reducers:
(key.hashCode() & Integer.MAX_VALUE) % numReduces
Comparable
Combiner (see the sketch after this list)
GroupingComparator
Reducer
OutputFormat
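The Combiner is the only component in this list that does not appear in any of the earlier examples. A minimal sketch of wiring one into the word-count job from the beginning of this article, assuming the summing logic of WcReduce is safe to run on the map side (it is, since it only adds up counts):

```java
// inside WcDriver.main(), after job.setReducerClass(WcReduce.class):
job.setCombinerClass(WcReduce.class); // pre-aggregates (word, count) pairs on the map side before the Shuffle
```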