1. Create an external table t_words_src to hold the source data. Its single column line is of type string, and each row stores one line of words.
Example source data:
hello,tom
hello,jerry
hello,kitty
hello,world
hello,tom
hive> create external table t_words_src (line string)
    > row format delimited fields terminated by '\n'  -- fields are terminated by \n, so each line is one field
    > location '/wc/input';                            -- the source data path is 'hdfs://node1:9000/wc/input'
hive> select * from t_words_src;
OK
hello,tom
hello,jerry
hello,kitty
hello,world
hello,tom
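An external table only registers metadata over an existing HDFS directory, so the source file must already be sitting under /wc/input. A minimal sketch of preparing that directory from inside the Hive CLI; the local file name words.txt is just an assumed example:

hive> dfs -mkdir -p /wc/input;
hive> dfs -put ./words.txt /wc/input;
hive> dfs -cat /wc/input/words.txt;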
2. Create a table t_words to hold every individual word. Its column word is of type string.
hive> create table t_words (word string);
hive> insert into table t_words
    > select explode(split(line, ',')) as word from t_words_src;
hive> select * from t_words;
OK
hello
tom
hello
jerry
hello
kitty
hello
world
hello
tom
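As a side note, explode can only appear as the sole expression in the select list; if other columns are needed alongside the exploded words, the same split can be written with LATERAL VIEW. A sketch of the equivalent query over the same tables:

hive> select tmp.word
    > from t_words_src
    > lateral view explode(split(line, ',')) tmp as word;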
3. Create a table t_wc_result to hold the WordCount result. Its column word is of type string and holds the word; counts is of type int and holds the number of times the word appears.
hive> create table t_wc_result (word string, counts int);
hive> insert into table t_wc_result
    > select word, count(word) as counts from t_words group by word;
hive> select * from t_wc_result;
OK
hello	5
jerry	1
kitty	1
tom	2
world	1
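In fact, the intermediate table t_words is not strictly required: the explode and the aggregation can be combined into a single statement with a subquery. A sketch of that one-step version, writing into the same result table:

hive> insert overwrite table t_wc_result
    > select w.word, count(1) as counts
    > from (select explode(split(line, ',')) as word from t_words_src) w
    > group by w.word;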
Compared with the MapReduce version, the HQL version of WordCount in Hive takes noticeably less code to write, but the underlying idea is the same.