MapReduce really breaks down into two stages.

Parallel computing is a very complex process, and MapReduce is a framework for doing it.

In Hadoop, every MapReduce task is initialized as a Job, and each Job is divided into two stages: a map stage and a reduce stage. The two stages are expressed as two functions, the map function and the reduce function.
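To make the data flow concrete, here is an illustrative word-count trace (the sample input is made up for illustration):

map input:            (0, "hello world hello")
map output:           (hello, 1), (world, 1), (hello, 1)
after shuffle/sort:   (hello, [1, 1]), (world, [1])
reduce output:        (hello, 2), (world, 1)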
Let's walk through the classic official example.
Develop it with IDEA.
Add the dependencies to pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>
</dependencies>
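Note that hadoop-core 1.2.1 is the legacy Hadoop 1.x artifact, presumably pulled in here for the old org.apache.hadoop.mapred API the example uses. If you target Hadoop 2.x end to end, one option is the hadoop-client umbrella artifact instead (a sketch, assuming a 2.7.2 cluster; match the version to yours):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.2</version>
</dependency>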
Then write the code:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
/**
Created by diwu.sld on 2016/4/13.
*/
public class WordCount {

    public static class CountMap extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Split each input line into tokens and emit (word, 1) for each one.
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class CountReduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        // Sum the counts collected for each word.
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(CountMap.class);
        // The reducer doubles as a combiner to pre-aggregate map output locally.
        conf.setCombinerClass(CountReduce.class);
        conf.setReducerClass(CountReduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
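The class above is written against the old org.apache.hadoop.mapred API (MapReduceBase, JobConf, JobClient). For comparison, here is a minimal sketch of the same job on the newer org.apache.hadoop.mapreduce API; it follows the standard Hadoop 2.x WordCount example, and the class name WordCountNewApi is just a placeholder:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountNewApi {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Emit (word, 1) for every token in the input line.
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        // Sum all counts for a given word.
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountNewApi.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Either version produces the same (word, count) output; the new API replaces OutputCollector and Reporter with a single Context object.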
Then package it as a jar, HadoopDemo:
1. Project Structure -> Artifacts -> +
2. Build Artifacts
Copy it into the hadoop directory and run it.
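For example (the jar name HadoopDemo.jar and the paths /input and /output are placeholders; substitute your own):

hadoop jar HadoopDemo.jar WordCount /input /output

If the IDEA artifact already records WordCount as Main-Class in the jar manifest, the class-name argument can be omitted.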
If we have N files, plus computations to run over those N files, we can use parallelism to improve throughput. But the files vary in size and the computations vary in cost, so working out how to parallelize and assign the tasks is very tedious. That is why the Hadoop parallel framework exists to solve this problem for us.
Hadoop has two major parts: distributed file storage and distributed computation.
For distributed file storage, it splits each file into many small blocks of equal size (in HDFS the block size is fixed; 64 MB by default in Hadoop 1.x, 128 MB in 2.x).