WordCount is the classic introductory example of a Hadoop application.
This walkthrough uses hadoop-2.6.0; the JARs the code depends on are located under hadoop-2.6.0/share/hadoop/common/lib.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class WordCountMap
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Called once per input line: split the line into tokens
        // and emit a <word, 1> pair for each token.
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class WordCountReduce
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        // Called once per key: sum up all the 1s emitted for this word.
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
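A common refinement, not part of the code above, is to reuse the reducer as a combiner so counts are pre-aggregated on the map side before the shuffle. This is safe here because summation is associative and commutative, and the reducer's input and output types match:

job.setCombinerClass(WordCountReduce.class);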
The Mapper's input is text: the key is declared as Object as a placeholder, and the value is the line of text itself (Text).
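Although the key is declared as Object, with TextInputFormat the key actually passed in at runtime is a LongWritable holding the byte offset of the line within the file; the mapper here simply ignores it. For the first line of file1 (shown below), the call would look roughly like map(0, "one titus two titus three titus", context).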
The Mapper's output key type is Text and its value type is IntWritable, roughly the Hadoop counterpart of Java's Integer. Each token produced by splitting a line is emitted as a <word, 1> key-value pair.
The map method is called once for each line of input text and splits that line into tokens.
while (tokenizer.hasMoreTokens()) {
    word.set(tokenizer.nextToken());
    context.write(word, one);
}
This turns a line of text into <word, 1> key-value pairs, one per occurrence of each word.
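For example, the single line of file1 yields the pairs <one,1>, <titus,1>, <two,1>, <titus,1>, <three,1>, <titus,1>.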
The reduce method is called once per key and sums that key's occurrence counts.
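After the shuffle groups the map output by key, the reducer for titus, for instance, receives the values [1, 1, 1] and emits <titus, 3>.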
Export WordCount from Eclipse as a runnable JAR and place it in the directory hadoop-2.6.0/bin.
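If you would rather skip Eclipse, a minimal command-line build is sketched below (assuming the hadoop script from hadoop-2.6.0/bin is on your PATH; hadoop classpath prints the library paths mentioned above):

javac -cp $(hadoop classpath) WordCount.java
jar cf wordcount.jar WordCount*.class

The glob also picks up the compiled inner classes (WordCount$WordCountMap.class and WordCount$WordCountReduce.class).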
Under hadoop-2.6.0/bin, create an input folder containing two new files, file1 and file2.
file1 contains:

one titus two titus three titus

file2 contains:

one huangyi two huangyi
.
├── container-executor
├── hadoop
├── hadoop.cmd
├── hdfs
├── hdfs.cmd
├── input
│   ├── file1.txt
│   └── file2.txt
├── mapred
├── mapred.cmd
├── rcc
├── test-container-executor
├── wordcount.jar
├── yarn
└── yarn.cmd
Run ./hadoop jar wordcount.jar input output and an output directory is created containing the results.
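With the default standalone (local) configuration assumed here, the results end up in a local file named output/part-r-00000, which can be viewed with an ordinary cat:

cat output/part-r-00000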
huangyi 2
one 2
three 1
titus 3
two 2