While using Sqoop to export data from HDFS into MySQL today, the job failed with the following error:

```java
2018-08-22 14:49:36,857 INFO [IPC Server handler 1 on 35135] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1528444677205_3829_m_000000_0 is : 0.0
2018-08-22 14:49:36,866 FATAL [IPC Server handler 2 on 35135] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1528444677205_3829_m_000000_0 - exited : java.io.IOException: Can't export data, please check failed map task logs
	at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:122)
	at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: java.sql.BatchUpdateException: Data truncation: Data too long for column 'on_off_time' at row 4    # <-- the key hint
	at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:233)
	at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:46)
	at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
	at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:90)
	... 10 more
```
Column background: the on_off_time column is typed String in Hive and defined as varchar(50) in MySQL. This column is a special case: it has to store multiple time points, so the values it holds can get quite long.
Given that description, the cause is easy to spot: the values in this column are longer than what the MySQL column type can hold, so the MySQL side needs a data type large enough to fit the data.
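To confirm the diagnosis, it helps to measure the longest on_off_time value on the Hive side before touching the MySQL schema. A minimal sketch, assuming a hypothetical source table name (the post does not give the real one):

```sql
-- Hypothetical Hive source table; replace with the actual export source.
-- Any result above 50 cannot fit into the current MySQL varchar(50) column.
SELECT MAX(LENGTH(on_off_time)) AS max_len
FROM hive_db.device_status;
```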
Nothing can be done about the data itself; it has already been cleaned and cannot be trimmed any further, so the fix has to happen on the MySQL side.
First I tried the maximum length that varchar allows, but the export still failed.
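For reference, enlarging the column looks roughly like this. The table name is hypothetical, and the ceiling comes from MySQL's 65,535-byte row-size limit, which under a utf8 charset works out to about 21,844 characters (less if the table has other columns):

```sql
-- Hypothetical table name. The largest VARCHAR MySQL will accept depends on
-- the charset and on how much of the 65,535-byte row limit other columns use.
ALTER TABLE device_status
  MODIFY COLUMN on_off_time VARCHAR(21844);
```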
So in the end the only option left was to change the MySQL data type, replacing the original varchar with text.
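The change itself is a one-line ALTER; again the table name is hypothetical:

```sql
-- Hypothetical table name. A TEXT column holds up to 65,535 bytes and,
-- unlike VARCHAR, counts only a few bytes toward the row-size limit.
ALTER TABLE device_status
  MODIFY COLUMN on_off_time TEXT;
```

Compared with pushing varchar further, text removes the row-size constraint for this column entirely.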
That solved the problem.
Some readers may ask why I bothered testing the varchar maximum instead of changing the data type straight away. The reason is to make better use of resources: if varchar could hold the data, I would not change the type at all. Besides, the query performance of text columns is notoriously poor, so I avoid text unless there is no other choice.