Hive supports the following file storage formats:
TEXTFILE
SEQUENCEFILE
RCFILE
Custom formats
TEXTFILE is the default format; a table created without an explicit format uses it. When data is loaded, the file is simply copied to HDFS as-is, with no processing.
Tables stored as SequenceFile or RCFile cannot load data directly from local files. The data must first be loaded into a TEXTFILE table and then written into the SequenceFile or RCFile table with INSERT ... SELECT.
As the default format, TEXTFILE stores data uncompressed, so both disk overhead and parsing overhead are high.
It can be combined with Gzip or Bzip2 compression (Hive detects the compression automatically and decompresses at query time), but with this approach Hive does not split the compressed file, so the data cannot be processed in parallel.
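As a rough sketch in plain Python (the file name is hypothetical), this is the kind of gzip-compressed text file a TEXTFILE table can read directly; note the whole .gz file still goes to a single mapper because gzip is not splittable:

```python
import gzip

# Write rows as a gzip-compressed text file. Placed under a TEXTFILE
# table's directory, Hive can query it without any extra steps, but
# the entire .gz is read by one mapper since gzip cannot be split.
rows = ["hello,hive", "hello,world", "hello,hadoop"]
with gzip.open("test.txt.gz", "wt") as f:
    f.write("\n".join(rows) + "\n")

# Hive decompresses transparently at query time; the same lines come back.
with gzip.open("test.txt.gz", "rt") as f:
    assert f.read().splitlines() == rows
```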
Example:
> create table test1(str STRING)
> STORED AS TEXTFILE;
OK
Time taken: 0.786 seconds

# Generate a file of random strings with a script, then load it:
> LOAD DATA LOCAL INPATH '/home/work/data/test.txt' INTO TABLE test1;
Copying data from file:/home/work/data/test.txt
Copying file: file:/home/work/data/test.txt
Loading data to table default.test1
OK
Time taken: 0.243 seconds
SequenceFile is a binary file format provided by the Hadoop API. It is easy to use, splittable, and compressible.
SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compression yields a low compression ratio, so BLOCK compression is generally recommended.
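Why BLOCK tends to beat RECORD: compressing each record in isolation gives the codec almost no redundancy to exploit, while compressing many records together lets repeated substrings across records be deduplicated. A small Python sketch using zlib (a stand-in for the SequenceFile codec, not the actual file format) illustrates the effect:

```python
import zlib

records = [("row-%d,hello,hive" % i).encode() for i in range(1000)]

# RECORD-style: each value is compressed on its own.
record_style = sum(len(zlib.compress(r)) for r in records)

# BLOCK-style: many records are buffered and compressed as one unit,
# so redundancy shared across records is exploited.
block_style = len(zlib.compress(b"\n".join(records)))

assert block_style < record_style
```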
Example:
> create table test2(str STRING)
> STORED AS SEQUENCEFILE;
OK
Time taken: 5.526 seconds
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
hive> INSERT OVERWRITE TABLE test2 SELECT * FROM test1;
RCFILE combines row and column storage. First, data is split into row groups, which guarantees that a record stays within a single block and avoids having to read multiple blocks for one record. Second, within each row group the data is stored column-wise, which benefits compression and fast column access. RCFILE example:
> create table test3(str STRING)
> STORED AS RCFILE;
OK
Time taken: 0.184 seconds
> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;
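The row-group idea described above can be sketched in a few lines of Python (a toy model of the layout, not the actual RCFile byte format): rows are first partitioned into groups, then each group is transposed so that values of one column sit together.

```python
# Toy sketch of RCFile-style layout: split rows into groups, then
# store each group column-wise. A record never spans two groups, yet
# scanning one column only touches that column's runs in each group.
def to_row_groups(rows, group_size):
    groups = []
    for i in range(0, len(rows), group_size):
        group = rows[i:i + group_size]
        # transpose: one list per column inside the group
        groups.append([list(col) for col in zip(*group)])
    return groups

rows = [("ad%d" % i, i * 10) for i in range(5)]
groups = to_row_groups(rows, group_size=2)
# group 0 holds the columns of rows 0-1
assert groups[0] == [["ad0", "ad1"], [0, 10]]
```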
In practice, RCFile has so far shown no performance advantage; it only saves about 10% of storage space, which even its authors acknowledge, and Facebook uses it essentially just for storage. RCFile currently applies no special compression techniques such as arithmetic coding or suffix trees, and it cannot skip large amounts of I/O the way InfoBright does.
ORC is an upgraded version of RCFile with greatly improved performance. Its data can be stored compressed, with a compression ratio similar to LZO, saving up to about 70% of space relative to text files. Read performance is also very high, enabling efficient queries. For details see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC. The table-creation statements follow; they also change the NULL representation in the ORC table from the default \N to ''.
Option 1:
hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'last_modified_by'='pmp_bi',
  'last_modified_time'='1465992624',
  'transient_lastDdlTime'='1465992624')
Option 2:
drop table test_orc;
create table if not exists test_orc(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
with serdeproperties('serialization.null.format' = '')
STORED AS ORC;

Check the result:
hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465992726')
Option 3:
drop table test_orc;
create table if not exists test_orc(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS ORC;

Check the result:
hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465992916')
When a data file's format cannot be recognized by the current version of Hive, a custom file format can be defined.
Users can implement their own InputFormat and OutputFormat classes to define custom input and output formats; see the reference code under .\hive-0.8.1\src\contrib\src\java\org\apache\hadoop\hive\contrib\fileformat\base64
Example:
> create table test4(str STRING)
> stored as
> inputformat 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextInputFormat'
> outputformat 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextOutputFormat';
$ cat test1.txt
aGVsbG8saGl2ZQ==
aGVsbG8sd29ybGQ=
aGVsbG8saGFkb29w
test1.txt contains base64-encoded content; decoded, the data is:
hello,hive
hello,world
hello,hadoop
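The decoding here is plain base64 applied per line (the transformation the Base64TextInputFormat performs on read), which can be verified in a few lines of Python:

```python
import base64

# The three lines from test1.txt, decoded one line at a time.
encoded = ["aGVsbG8saGl2ZQ==", "aGVsbG8sd29ybGQ=", "aGVsbG8saGFkb29w"]
decoded = [base64.b64decode(line).decode() for line in encoded]
assert decoded == ["hello,hive", "hello,world", "hello,hadoop"]
```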
Load the data and query it:
hive> LOAD DATA LOCAL INPATH '/home/work/test1.txt' INTO TABLE test4;
Copying data from file:/home/work/test1.txt
Copying file: file:/home/work/test1.txt
Loading data to table default.test4
OK
Time taken: 4.742 seconds
hive> select * from test4;
OK
hello,hive
hello,world
hello,hadoop
Time taken: 1.953 seconds
Compared with TEXTFILE and SEQUENCEFILE, RCFILE's columnar storage makes data loading more expensive, but it achieves a better compression ratio and faster query response. Since data warehouses are typically written once and read many times, RCFILE holds a fairly clear overall advantage over the other two formats.
References:
http://blog.csdn.net/yfkiss/article/details/7787742
http://blog.csdn.net/longshenlmj/article/details/51702343
http://www.cnblogs.com/ggjucheng/archive/2013/01/03/2843318.html