Although the Hadoop framework is written in Java, Hadoop programs are not limited to Java; they can also be written in Python, C++, Ruby, and so on. In this example we write a MapReduce program directly in Python, rather than using Jython to turn the Python code into a jar file.
The goal of the example is to count the word frequencies in the input files.
The "trick" to writing MapReduce in Python is to use the Hadoop Streaming API, which passes data between the Map and Reduce functions through STDIN (standard input) and STDOUT (standard output).
The only thing we need to do is read the input from Python's sys.stdin and write our output to sys.stdout; Hadoop Streaming takes care of everything else.
1.1 Map phase: mapper.py
Here, assume the file is saved as hadoop-0.20.2/test/code/mapper.py.
#!/usr/bin/env python
import sys

# read lines from STDIN and emit "<word>\t1" for every word
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print "%s\t%s" % (word, 1)
The script reads from STDIN, splits each line into words, and writes each word with a count to STDOUT. The Map script does not total the counts; it simply emits <word> 1 and leaves the summing to the subsequent Reduce phase.
To make the script executable, add execute permission to mapper.py:
chmod +x hadoop-0.20.2/test/code/mapper.py
1.2 Reduce phase: reducer.py
Here, assume the file is saved as hadoop-0.20.2/test/code/reducer.py.
#!/usr/bin/env python
from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None

for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # if count is not a number, silently ignore the line
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            print "%s\t%s" % (current_word, current_count)
        current_count = count
        current_word = word

if word == current_word:
    # don't forget to emit the last word
    print "%s\t%s" % (current_word, current_count)
reducer.py reads the output of mapper.py as its input, totals the occurrences of each word, and writes the final result to STDOUT. Note that it relies on Hadoop sorting the map output by key before the reduce phase, so identical words arrive on consecutive lines.
To make the script executable, add execute permission to reducer.py:
chmod +x hadoop-0.20.2/test/code/reducer.py
Detail: the second argument of split(sep, maxsplit); the example below shows its effect nicely.
str = 'server=mpilgrim&ip=10.10.10.10&port=8080'
print str.split('=', 1)[0]   # maxsplit=1: split on '=' at most once
print str.split('=', 1)[1]
print str.split('=')[0]
print str.split('=')[1]
Output:
server
mpilgrim&ip=10.10.10.10&port=8080
server
mpilgrim&ip
1.3 Testing the code (cat data | map | sort | reduce)
It is recommended to test mapper.py and reducer.py locally before submitting them as a MapReduce job; otherwise the job may complete successfully while producing results that are not what you wanted.
Functional test of mapper.py and reducer.py:
[rte@hadoop-0.20.2]$ cd test/code
[rte@code]$ echo "foo foo quux labs foo bar quux" | ./mapper.py
foo 1
foo 1
quux 1
labs 1
foo 1
bar 1
quux 1
[rte@code]$ echo "foo foo quux labs foo bar quux" | ./mapper.py | sort -k1,1 | ./reducer.py
bar 1
foo 3
labs 1
quux 2
Detail: what does the sort -k1,1 option mean?
-k, --key=POS1[,POS2]    start a key at field POS1 and end it at field POS2
People who use sort often preprocess the data to move the field they want to sort on to the front. That is completely unnecessary: the -k option alone does the job.
For example, take a file all containing:
1 4
2 3
3 2
4 1
5 0
Running sort -k 2 on it yields:
5 0
4 1
3 2
2 3
1 4
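The same idea carries over to Python. As a quick sketch (not part of the original scripts), a key function can select the field to sort on instead of reshuffling the data beforehand:

lines = ['1 4', '2 3', '3 2', '4 1', '5 0']
# sort by the second whitespace-separated field, like `sort -k 2`
print sorted(lines, key=lambda line: line.split()[1])
# ['5 0', '4 1', '3 2', '2 3', '1 4']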
2.1 Data preparation
I put the three files above into the hadoop-0.20.2/test/datas/ directory.
2.2 Running
Copy the local data files into HDFS, the distributed file system:
bin/hadoop dfs -copyFromLocal /test/datas hdfs_in
List the files:
bin/hadoop dfs -ls
Result:
drwxr-xr-x - rte supergroup 0 2014-07-05 15:40 /user/rte/hdfs_in
View the individual files:
bin/hadoop dfs -ls /user/rte/hdfs_in
Run the MapReduce job:
bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar \
    -file test/code/mapper.py -mapper test/code/mapper.py \
    -file test/code/reducer.py -reducer test/code/reducer.py \
    -input /user/rte/hdfs_in/* -output /user/rte/hdfs_out
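The -file options ship the two local scripts to the compute nodes so the job can execute them there, while -mapper and -reducer name the commands to run for each phase.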
Sample output
Check whether the output landed in the target directory /user/rte/hdfs_out:
bin/hadoop dfs -ls /user/rte/hdfs_out
Output:
Found 2 items
drwxr-xr-x   - rte supergroup      0 2014-07-05 20:51 /user/rte/hdfs_out2/_logs
-rw-r--r--   2 rte supergroup 880829 2014-07-05 20:51 /user/rte/hdfs_out2/part-00000
View the result:
bin/hadoop dfs -cat /user/rte/hdfs_out2/part-00000
Output:
This already accomplishes the goal, but the code can be optimized further with Python iterators and generators.
3.1 Iterators and generators in Python
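In short, a generator function produces its values lazily: each yield hands one value to the caller and suspends the function until the next value is requested, so nothing is materialized in memory up front. A minimal sketch:

def count_up_to(n):
    # generator function: resumes after each yield, on demand
    i = 1
    while i <= n:
        yield i
        i += 1

gen = count_up_to(3)
print gen        # <generator object ...>, nothing computed yet
for value in gen:
    print value  # 1, 2, 3, produced one at a time

This laziness is exactly what the optimized mapper and reducer below exploit: they can stream input of any size without holding it all in memory.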
3.2 Optimizing the Mapper and Reducer code
mapper.py
#!/usr/bin/env python
import sys

def read_input(file):
    # generator: yield one line's words at a time instead of reading everything into memory
    for line in file:
        yield line.split()

def main(separator='\t'):
    data = read_input(sys.stdin)
    for words in data:
        for word in words:
            print "%s%s%d" % (word, separator, 1)

if __name__ == "__main__":
    main()
reducer.py
#!/usr/bin/env python
from operator import itemgetter
from itertools import groupby
import sys

def read_mapper_output(file, separator='\t'):
    # generator: yield (word, count) pairs from the mapper's output
    for line in file:
        yield line.rstrip().split(separator, 1)

def main(separator='\t'):
    data = read_mapper_output(sys.stdin, separator=separator)
    # groupby collects consecutive lines that share the same word
    for current_word, group in groupby(data, itemgetter(0)):
        try:
            total_count = sum(int(count) for current_word, count in group)
            print "%s%s%d" % (current_word, separator, total_count)
        except ValueError:
            # count was not a number, ignore this word
            pass

if __name__ == "__main__":
    main()
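The optimized scripts read and write the same format as before, so they can be smoke-tested locally with the pipeline from section 1.3:

echo "foo foo quux labs foo bar quux" | ./mapper.py | sort -k1,1 | ./reducer.py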
Detail: groupby
from itertools import groupby
from operator import itemgetter

things = [('2009-09-02', 11), ('2009-09-02', 3),
          ('2009-09-03', 10), ('2009-09-03', 4),
          ('2009-09-03', 22), ('2009-09-06', 33)]

sss = groupby(things, itemgetter(0))
for key, items in sss:
    print key
    for subitem in items:
        print subitem
    print '-' * 20
Result:
2009-09-02
('2009-09-02', 11)
('2009-09-02', 3)
--------------------
2009-09-03
('2009-09-03', 10)
('2009-09-03', 4)
('2009-09-03', 22)
--------------------
2009-09-06
('2009-09-06', 33)
--------------------
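One caveat worth spelling out: groupby only merges consecutive items whose keys are equal, which is exactly why reducer.py depends on its input being sorted first (by Hadoop's shuffle phase, or by sort -k1,1 in the local test). A small sketch of what happens without sorting:

from itertools import groupby
from operator import itemgetter

unsorted = [('foo', 1), ('bar', 1), ('foo', 1)]
# 'foo' appears as two separate groups because its entries are not adjacent
print [(key, len(list(items))) for key, items in groupby(unsorted, itemgetter(0))]
# [('foo', 1), ('bar', 1), ('foo', 1)]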