In Pig, the default input/output field delimiter is the tab character \t, while in Hive the default is the octal \001, i.e. ASCII Ctrl-A:
Oct  Dec  Hex  ASCII_Char
001  1    01   SOH (start of heading)
The official explanation is that Ctrl-A was chosen because it is as unlikely as possible to collide with characters in the data. A single-character delimiter can be specified in Hive with row format delimited fields terminated by '#'; and in Pig via PigStorage. But what about a multi-character delimiter? Pig simply throws an error, while Hive only honors the first character and silently ignores the rest.
Workarounds:
In Pig you can write a custom load function: extend LoadFunc and override a few methods, and you are done.
See: http://my.oschina.net/leejun2005/blog/83825
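The heavy lifting in such a load function is just splitting each line on a literal multi-character delimiter. That step can be sketched in isolation (the class name below is made up for illustration; a real load function must also extend org.apache.pig.LoadFunc and implement getInputFormat(), prepareToRead() and getNext()):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative helper: split a line on a literal multi-character
// delimiter. Pattern.quote() escapes the delimiter so it is matched
// as plain text rather than interpreted as a regular expression.
public class MultiDelimSplitter {

    private final Pattern delim;

    public MultiDelimSplitter(String delimiter) {
        this.delim = Pattern.compile(Pattern.quote(delimiter));
    }

    public List<String> split(String line) {
        // limit -1 keeps trailing empty fields instead of dropping them
        return Arrays.asList(delim.split(line, -1));
    }

    public static void main(String[] args) {
        MultiDelimSplitter s = new MultiDelimSplitter("|||");
        System.out.println(s.split("1|||JOHN|||Beijing")); // [1, JOHN, Beijing]
    }
}
```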
In Hive, multi-character delimiter strings can be handled in two ways:
The first is RegexSerDe, a serializer/deserializer that ships with Hive (in the contrib jar) for handling rows with regular expressions. It takes three main parameters:
input.regex
output.format.string
input.regex.case.insensitive
A complete example:

add jar /home/june/hadoop/hive-0.8.1-bin/lib/hive_contrib.jar;

CREATE TABLE b (
  c0 string,
  c1 string,
  c2 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex' = '([^,]*),,,,([^,]*),,,,([^,]*)',
  'output.format.string' = '%1$s %2$s %3$s')
STORED AS TEXTFILE;

cat b.txt
1,,,,2,,,,3
a,,,,b,,,,c
9,,,,5,,,,7

load data local inpath 'b.txt' overwrite into table b;
select * from b;
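To see what RegexSerDe does with each row, the same pattern can be exercised with plain java.util.regex (the pattern string is copied verbatim from the DDL above; the class itself is just a throwaway demo):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Demo of the per-row matching RegexSerDe performs: the whole line
// must match input.regex, and the capture groups become the columns
// c0..c2 of the table.
public class RegexSerDeDemo {

    static final Pattern ROW = Pattern.compile("([^,]*),,,,([^,]*),,,,([^,]*)");

    static String[] parse(String line) {
        Matcher m = ROW.matcher(line);
        if (!m.matches()) {
            return null; // RegexSerDe yields NULL columns for non-matching rows
        }
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }

    public static void main(String[] args) {
        for (String row : new String[] { "1,,,,2,,,,3", "a,,,,b,,,,c" }) {
            String[] cols = parse(row);
            System.out.println(cols[0] + " " + cols[1] + " " + cols[2]);
        }
    }
}
```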
REF:
http://grokbase.com/t/hive/user/115sw9ant2/hive-create-table
// The second option: to use a multi-character field delimiter,
// write a custom InputFormat that rewrites the delimiter before
// Hive ever sees the row.
package org.apache.hadoop.mapred;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class MyDemoInputFormat extends TextInputFormat {

    @Override
    public RecordReader<LongWritable, Text> getRecordReader(
            InputSplit genericSplit, JobConf job, Reporter reporter)
            throws IOException {
        reporter.setStatus(genericSplit.toString());
        MyDemoRecordReader reader = new MyDemoRecordReader(
                new LineRecordReader(job, (FileSplit) genericSplit));
        return reader;
    }

    public static class MyDemoRecordReader implements
            RecordReader<LongWritable, Text> {

        LineRecordReader reader;
        Text text;

        public MyDemoRecordReader(LineRecordReader reader) {
            this.reader = reader;
            text = reader.createValue();
        }

        @Override
        public void close() throws IOException {
            reader.close();
        }

        @Override
        public LongWritable createKey() {
            return reader.createKey();
        }

        @Override
        public Text createValue() {
            return new Text();
        }

        @Override
        public long getPos() throws IOException {
            return reader.getPos();
        }

        @Override
        public float getProgress() throws IOException {
            return reader.getProgress();
        }

        @Override
        public boolean next(LongWritable key, Text value) throws IOException {
            while (reader.next(key, text)) {
                // Replace the literal "|||" delimiter with Hive's default \001
                Text txtReplace = new Text();
                txtReplace.set(text.toString().toLowerCase()
                        .replaceAll("\\|\\|\\|", "\001"));
                value.set(txtReplace.getBytes(), 0, txtReplace.getLength());
                return true;
            }
            return false;
        }
    }
}

// The corresponding CREATE TABLE statement:
create external table IF NOT EXISTS test (
  id string,
  name string)
partitioned by (day string)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.MyDemoInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/log/dw_srclog/test';
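The heart of MyDemoRecordReader is the single replaceAll call: every literal "|||" in a line is rewritten to \001, Hive's default field delimiter, before the line reaches the SerDe. In isolation (a throwaway demo class):

```java
// Demo of the transformation MyDemoRecordReader.next() applies to
// each line: lowercase it and turn the literal "|||" delimiter into
// Hive's default \001.
public class DelimiterRewriteDemo {

    static String rewrite(String line) {
        return line.toLowerCase().replaceAll("\\|\\|\\|", "\001");
    }

    public static void main(String[] args) {
        String out = rewrite("42|||JOHN|||Beijing");
        // \001 is unprintable, so show it as ^A for readability
        System.out.println(out.replace("\001", "^A")); // 42^Ajohn^Abeijing
    }
}
```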
Collecting logs into Hive: http://blog.javachen.com/2014/07/25/collect-log-to-hive/
References:
Processing logs in Hive with a custom InputFormat:
http://running.iteye.com/blog/907806
http://superlxw1234.iteye.com/blog/1744970
The principle is simple: Hive's internal delimiter is \001, so all you need to do is replace your own delimiter with \001.
If we need to customize other aspects of the serialization as well, for example writing NULL as an empty string, we can likewise configure them through SERDEPROPERTIES:

hive> CREATE TABLE sunwg02 (id int, name STRING)
    > ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
    > WITH SERDEPROPERTIES (
    >   'field.delim'='\t',
    >   'escape.delim'='\\',
    >   'serialization.null.format'='')
    > STORED AS TEXTFILE;
OK
Time taken: 0.046 seconds
hive> insert overwrite table sunwg02 select * from sunwg00;
Loading data to table sunwg02
2 Rows loaded to sunwg02
OK
Time taken: 18.756 seconds

Check sunwg02's file on HDFS:

[hjl@sunwg src]$ hadoop fs -cat /hjl/sunwg02/attempt_201105020924_0013_m_000000_0
mary 101
tom

The NULL value is not written out as '\N'.
PS:
This feature is really quite simple when you come down to it; it is not clear why the authors never supported it directly. Perhaps a future version will.
1|JOHN|abu1/abu21|key1:1'\004'2'\004'3/key12:6'\004'7'\004'8
2|Rain|abu2/abu22|key2:2'\004'2'\004'3/key22:6'\004'7'\004'8
3|Lisa|abu3/abu23|key3:3'\004'2'\004'3/key32:6'\004'7'\004'8
In the file above, the last two fields are an array and a map whose values are themselves arrays. To keep the array delimiter from clashing with the delimiter of the arrays nested inside the map,
two different delimiters are used: '/' for one level and \004 for the other. Why \004?
Because Hive supports 8 levels of default delimiters, \001 through \008; users may only override \001 through \003, and Hive recognizes and parses the remaining levels on its own.
So for this example the CREATE TABLE statement is:

create EXTERNAL table IF NOT EXISTS testSeparator (
  id string,
  name string,
  itemList array<String>,
  kvMap map<string, array<int>>)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '|'
  COLLECTION ITEMS TERMINATED BY '/'
  MAP KEYS TERMINATED BY ':'
  LINES TERMINATED BY '\n'
LOCATION '/tmp/dsap/rawdata/ooxx/3';
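The delimiter hierarchy can be checked by hand with plain string handling, no Hive involved (the row below is the first sample record, with \004 written as a Java octal escape):

```java
import java.util.Arrays;

// Walks one sample row through the delimiter levels: '|' for fields,
// '/' for collection items, ':' for map keys, and \004 for the
// array<int> nested inside the map value.
public class NestedDelimDemo {

    static String[] splitArray(String mapValue) {
        return mapValue.split("\004");
    }

    public static void main(String[] args) {
        String row = "1|JOHN|abu1/abu21|key1:1\0042\0043/key12:6\0047\0048";

        String[] fields = row.split("\\|");
        System.out.println("itemList: " + Arrays.toString(fields[2].split("/")));
        // itemList: [abu1, abu21]

        for (String entry : fields[3].split("/")) {
            String[] kv = entry.split(":");
            System.out.println(kv[0] + " -> " + Arrays.toString(splitArray(kv[1])));
        }
        // key1 -> [1, 2, 3]
        // key12 -> [6, 7, 8]
    }
}
```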
Querying the table in Hive then returns the row parsed into the nested fields as expected.
For more on this topic, see: Hadoop: The Definitive Guide, Chapter 12 (Hive), pages 433-434.
[1] HIVE nested ARRAY in MAP data type
http://stackoverflow.com/questions/18812025/hive-nested-array-in-map-data-type