I was using HDFS as the deep storage for druid.io, but an error occurred when submitting a task.
The error is as follows:
2016-03-25T01:57:15,917 INFO [task-runner-0] io.druid.storage.hdfs.HdfsDataSegmentPusher - Copying segment[wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2016-03-25T01:57:07.729Z] to HDFS at location[hdfs://tt1.masiah.test/tmp/druid/RemoteStorage/wikipedia/20130831T000000.000Z_20130901T000000.000Z/2016-03-25T01_57_07.729Z/0]
2016-03-25T01:57:15,919 WARN [task-runner-0] io.druid.indexing.common.index.YeOldePlumberSchool - Failed to merge and upload
java.io.IOException: No FileSystem for scheme: hdfs
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90) ~[?:?]
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350) ~[?:?]
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332) ~[?:?]
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369) ~[?:?]
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[?:?]
	at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:83) ~[?:?]
The root cause is that the middleManager node fails to load the hadoop-hdfs package at startup when the index task is launched. (If the overlord runs in local mode, no separate middleManager node needs to be configured; the overlord implements the middleManager functionality internally.)
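For background: in Hadoop 2.x, FileSystem.getFileSystemClass resolves the hdfs scheme either from the fs.hdfs.impl setting or from the META-INF/services/org.apache.hadoop.fs.FileSystem entry shipped inside the hadoop-hdfs jar, so a jar that cannot be read yields exactly "No FileSystem for scheme: hdfs". As a minimal sketch (assuming the default extensions-repo layout and the Hadoop 2.3.0 version used here), you can check the cached jar before deleting anything:

# Test the zip integrity of the cached hadoop-hdfs jar; a truncated or
# corrupt download typically fails here with checksum/EOF errors.
unzip -t extensions-repo/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar

# An intact jar should contain the service registration that maps the
# "hdfs" scheme to org.apache.hadoop.hdfs.DistributedFileSystem.
unzip -p extensions-repo/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar \
    META-INF/services/org.apache.hadoop.fs.FileSystem

If the first command reports errors, the jar is the culprit and the replacement procedure below applies.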
The fix is to replace the jar. First, stop the middleManager node.
Locate the cached artifact inside your Druid installation and delete everything under that directory: rm -rf extensions-repo/org/apache/hadoop/hadoop-hdfs/*. Then restart the middleManager node; on startup it will re-fetch extensions-repo/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar. Since a corrupted copy of this jar caused the problem, re-downloading it resolves it. The full procedure is sketched after this paragraph.
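Put together, the recovery procedure looks roughly like this (a sketch run from the Druid installation directory; how you stop and start the middleManager depends on your deployment):

# 1. Stop the middleManager so it does not hold the cached jars open
#    (e.g. stop the java process running io.druid.cli.Main server middleManager).

# 2. Drop the cached hadoop-hdfs artifacts so they are fetched fresh.
rm -rf extensions-repo/org/apache/hadoop/hadoop-hdfs/*

# 3. Restart the middleManager; on startup it re-downloads
#    extensions-repo/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar.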
After rerunning the task, the problem was gone.