A First Look at MapReduce
MapReduce is a computation framework. Since it is a framework for doing computation, it takes a simple form: there is an input, MapReduce processes that input through a computation model you define, and it produces an output, which is the result we are after.
What we need to learn are the rules by which this computation model runs. When a MapReduce job executes, the work is divided into two phases: a map phase and a reduce phase. Each phase uses key/value pairs as both its input and its output. The programmer's job is to define the functions for these two phases: the map function and the reduce function.
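To make the two phases concrete, here is a sketch (using the word-count example built below) of how one line of text flows through a job; the grouping step between map and reduce, the shuffle/sort, is performed by the framework itself:

// Input to map (key = byte offset in the file, value = one line of text):
//   (0, "hello world hello")
// map emits one (word, 1) pair per token:
//   ("hello", 1), ("world", 1), ("hello", 1)
// The framework shuffles and groups all values by key:
//   ("hello", [1, 1]), ("world", [1])
// reduce sums each group and emits:
//   ("hello", 2), ("world", 1)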
A basic MapReduce example
Jar dependency
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.6</version>
</dependency>
Code implementation
The map class
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    // Reused across calls to avoid allocating new Writables for every token
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the input line on whitespace and emit (token, 1) for each token
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
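The mapper can be exercised in isolation before touching a cluster. Below is a minimal sketch using the Apache MRUnit test library (an extra test dependency and JUnit 4 are assumed here; MRUnit is a retired Apache project but still works with the Hadoop 2.x API, and the test class name is hypothetical):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

// Hypothetical test class; requires the org.apache.mrunit:mrunit test dependency
public class TokenizerMapperTest {

    @Test
    public void emitsOnePairPerToken() throws Exception {
        // Feed one line to the mapper and assert the (word, 1) pairs it emits, in order
        MapDriver.newMapDriver(new TokenizerMapper())
                .withInput(new LongWritable(0), new Text("hello world hello"))
                .withOutput(new Text("hello"), new IntWritable(1))
                .withOutput(new Text("world"), new IntWritable(1))
                .withOutput(new Text("hello"), new IntWritable(1))
                .runTest();
    }
}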
The reduce class
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all counts grouped under this key and emit (word, total)
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
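The reducer can be tested the same way. The sketch below hands it the grouped value list that the shuffle would deliver (same assumptions as above: MRUnit and JUnit 4 on the test classpath, hypothetical class name):

import java.util.Arrays;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Test;

// Hypothetical test class; requires the org.apache.mrunit:mrunit test dependency
public class IntSumReducerTest {

    @Test
    public void sumsAllValuesForAKey() throws Exception {
        // One key with the grouped values the shuffle would produce
        ReduceDriver.newReduceDriver(new IntSumReducer())
                .withInput(new Text("hello"),
                        Arrays.asList(new IntWritable(1), new IntWritable(1)))
                .withOutput(new Text("hello"), new IntWritable(2))
                .runTest();
    }
}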
The main method
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // The reducer doubles as a combiner; see the note below
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // args[0] = input path, args[1] = output path (must not already exist)
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
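One line in the driver deserves a comment: job.setCombinerClass(IntSumReducer.class) reuses the reducer as a combiner, which runs on the map side to pre-aggregate output before it crosses the network. Reusing a reducer this way is only safe when the operation is associative and commutative and the reducer's input and output types match, as they do for this sum. Roughly:

// Without a combiner, the reducer receives every raw 1:
//   ("hello", [1, 1, 1, 1, 1]) -> ("hello", 5)
// With the combiner, each map task pre-sums its own output,
// and the reducer merges partial sums instead:
//   ("hello", [2, 3]) -> ("hello", 5)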
Package the program as a jar and submit it in the Hadoop environment:
./hadoop-2.7.6/bin/hadoop jar hadoop-mapreduce-1.0.0.jar com.dongpeng.hadoop.mapreduce.wordcount.WordCount /user/test.txt /user/in.txt
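The two trailing arguments are the HDFS input file and the output directory (here /user/in.txt), and the output directory must not exist before the job starts or the job will fail immediately. Once the job finishes, each reducer leaves a part-r-NNNNN file in that directory; with the default single reducer, the result can be read with the HDFS shell like so:

./hadoop-2.7.6/bin/hadoop fs -cat /user/in.txt/part-r-00000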