By default, the number of splits equals the number of blocks in the input files. Well, not exactly: if the block size is 128 MB and a file is 128.1 MB, the file occupies 2 blocks, yet when the application runs you will find that the number of splits is 1, not 2. The mechanism behind this is analyzed below.
Some programs set the number of map tasks explicitly. How does that number affect the number of splits?
If a file's size is 0, is it still handed to a map task as a split?
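To make the 128.1 MB case above concrete, here is a minimal standalone sketch (not the Hadoop source; the class name and constants are illustrative) that reproduces the arithmetic analyzed below: since 128.1 MB / 128 MB ≈ 1.0008, which is below the 1.1 slop factor, the whole file ends up as a single split.

import java.util.ArrayList;
import java.util.List;

public class SplitSlopDemo {
    // Mirrors the 1.1 slop factor (SPLIT_SLOP) used by FileInputFormat.
    private static final double SPLIT_SLOP = 1.1;

    // Returns the sizes of the splits produced for a file of the given length.
    static List<Long> split(long fileLength, long splitSize) {
        List<Long> splits = new ArrayList<>();
        long bytesRemaining = fileLength;
        // Only cut a full split while the remainder is more than 1.1 split sizes.
        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
            splits.add(splitSize);
            bytesRemaining -= splitSize;
        }
        if (bytesRemaining != 0) {
            // The tail (possibly slightly larger than a block) becomes one split.
            splits.add(bytesRemaining);
        }
        return splits;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        long blockSize = 128 * mb;
        // 128.1 MB file: 2 HDFS blocks, but only 1 split, because 128.1/128 < 1.1
        System.out.println(split((long) (128.1 * mb), blockSize));
        // 150 MB file: 150/128 > 1.1, so it is cut into 2 splits (128 MB + 22 MB)
        System.out.println(split(150 * mb, blockSize));
    }
}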
1. Obtain the list of input files, files, via listStatus(). This traverses the subdirectories of the input directories and filters out certain files, such as _SUCCESS.
2. Compute the total size of all files, totalSize.
3. goalSize = totalSize / numMaps, where numMaps is the user-specified number of map tasks.
4. Take one file, file, from files.
5. Compute splitSize = max(minSplitSize, min(file.blockSize, goalSize)), where minSplitSize is the smallest allowed split size, 1 B by default (a worked example of this formula follows the list).
6. Split file according to splitSize. While splitting, if the remaining size is no larger than splitSize * 1.1 and larger than 0 B, the whole remaining region becomes a single split. This prevents a mapper from processing too little data.
7. Add the splits of file to splits.
8. Go back to step 4 until files has been fully traversed.
9. Finish and return splits.
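The goalSize formula in step 5 is the behaviour of the old-API FileInputFormat (org.apache.hadoop.mapred), which is where the user-set map count comes in; the new-API code quoted below uses configured minimum/maximum split sizes instead. A minimal sketch of the old formula (not the actual Hadoop code; the values are an illustrative 1 GB file with a 128 MB block size):

public class GoalSizeDemo {
    // splitSize = max(minSplitSize, min(blockSize, goalSize)), goalSize = totalSize / numMaps
    static long splitSize(long totalSize, int numMaps, long blockSize, long minSplitSize) {
        long goalSize = totalSize / numMaps;
        return Math.max(minSplitSize, Math.min(blockSize, goalSize));
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        long total = 1024 * mb;      // a single 1 GB input file
        long block = 128 * mb;       // HDFS block size
        long minSplit = 1;           // default minimum split size, 1 byte

        // numMaps = 4: goalSize = 256 MB > blockSize, so splitSize = 128 MB -> 8 splits
        System.out.println(splitSize(total, 4, block, minSplit) / mb);
        // numMaps = 16: goalSize = 64 MB < blockSize, so splitSize = 64 MB -> 16 splits
        System.out.println(splitSize(total, 16, block, minSplit) / mb);
        // numMaps = 1: goalSize = 1 GB, clamped to the block size -> still 8 splits
        System.out.println(splitSize(total, 1, block, minSplit) / mb);
    }
}

In other words, asking for more maps can shrink the split size (down to minSplitSize), but asking for fewer maps cannot grow it beyond the block size unless minSplitSize is raised.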
Source code (Hadoop 2.2.0)
The flow itself is not all that complicated, so let's walk through it directly with the code and its comments.
The following methods are involved:
1. public List<InputSplit> getSplits(JobContext job): called on the client side to obtain all the splits of the current job, which are then sent to the JobTracker (in the new API, the ResourceManager); the JobTracker assigns map tasks to TaskTrackers according to where these splits are stored. This method uses listStatus() (below) to get the input files and then, based on that file information, pulls the block information (BlockLocation) of the blocks making up each file from the FileSystem via getFileBlockLocations(file, start, len), whose behaviour depends on the concrete file system implementation in use (FileSystem, LocalFileSystem, DistributedFileSystem).
/**
 * Generate the list of files and make them into FileSplits.
 * @param job the job context
 * @throws IOException
 */
public List<InputSplit> getSplits(JobContext job) throws IOException {
  long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
  long maxSize = getMaxSplitSize(job);

  // generate splits
  List<InputSplit> splits = new ArrayList<InputSplit>();
  List<FileStatus> files = listStatus(job);
  for (FileStatus file: files) {
    Path path = file.getPath();
    long length = file.getLen();
    if (length != 0) {
      BlockLocation[] blkLocations;
      if (file instanceof LocatedFileStatus) {
        blkLocations = ((LocatedFileStatus) file).getBlockLocations();
      } else {
        FileSystem fs = path.getFileSystem(job.getConfiguration());
        blkLocations = fs.getFileBlockLocations(file, 0, length);
      }
      if (isSplitable(job, path)) {
        long blockSize = file.getBlockSize();
        long splitSize = computeSplitSize(blockSize, minSize, maxSize);

        long bytesRemaining = length;
        while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          splits.add(makeSplit(path, length-bytesRemaining, splitSize,
                               blkLocations[blkIndex].getHosts()));
          bytesRemaining -= splitSize;
        }

        if (bytesRemaining != 0) {
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          splits.add(makeSplit(path, length-bytesRemaining, bytesRemaining,
                               blkLocations[blkIndex].getHosts()));
        }
      } else { // not splitable
        splits.add(makeSplit(path, 0, length, blkLocations[0].getHosts()));
      }
    } else {
      //Create empty hosts array for zero length files
      splits.add(makeSplit(path, 0, length, new String[0]));
    }
  }
  // Save the number of input files for metrics/loadgen
  job.getConfiguration().setLong(NUM_INPUT_FILES, files.size());
  LOG.debug("Total # of splits: " + splits.size());
  return splits;
}
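Note that this code already answers the third question from the top: a zero-length file still produces one split (with an empty host array). In the new API, the way to influence the number of splits is to tune the minimum/maximum split sizes rather than a map count. A minimal driver sketch (the 256 MB and 64 MB values are just illustrative choices, and the class name is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitSizeDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split-size-demo");
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        long mb = 1024L * 1024L;
        // Raise the minimum split size to 256 MB: computeSplitSize() then returns
        // max(256 MB, min(maxSize, blockSize)) = 256 MB, producing fewer, larger splits.
        FileInputFormat.setMinInputSplitSize(job, 256 * mb);
        // Or cap the maximum split size at 64 MB to get more, smaller splits:
        // FileInputFormat.setMaxInputSplitSize(job, 64 * mb);

        // ... set mapper/reducer/output as usual and submit the job ...
    }
}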
2. protected List<FileStatus> listStatus(JobContext job): first obtains all the Paths the user specified via the "mapred.input.dir" configuration, then gets the FileSystem from the JobContext's Configuration (most likely a DistributedFileSystem), and finally applies any PathFilter the user may have set and retrieves, through the FileSystem, the FileStatus of every file represented by those Paths. Note: there is quite a lot going on in this method, and much of it is still unfamiliar to me.
/** List input directories.
 * Subclasses may override to, e.g., select only files matching a regular
 * expression.
 *
 * @param job the job to list input paths for
 * @return array of FileStatus objects
 * @throws IOException if zero items.
 */
protected List<FileStatus> listStatus(JobContext job) throws IOException {
  List<FileStatus> result = new ArrayList<FileStatus>();
  Path[] dirs = getInputPaths(job);
  if (dirs.length == 0) {
    throw new IOException("No input paths specified in job");
  }

  // get tokens for all the required FileSystems..
  TokenCache.obtainTokensForNamenodes(job.getCredentials(), dirs,
                                      job.getConfiguration());

  // Whether we need to recursive look into the directory structure
  boolean recursive = getInputDirRecursive(job);

  List<IOException> errors = new ArrayList<IOException>();

  // creates a MultiPathFilter with the hiddenFileFilter and the
  // user provided one (if any).
  List<PathFilter> filters = new ArrayList<PathFilter>();
  filters.add(hiddenFileFilter);
  PathFilter jobFilter = getInputPathFilter(job);
  if (jobFilter != null) {
    filters.add(jobFilter);
  }
  PathFilter inputFilter = new MultiPathFilter(filters);

  for (int i=0; i < dirs.length; ++i) {
    Path p = dirs[i];
    FileSystem fs = p.getFileSystem(job.getConfiguration());
    FileStatus[] matches = fs.globStatus(p, inputFilter);
    if (matches == null) {
      errors.add(new IOException("Input path does not exist: " + p));
    } else if (matches.length == 0) {
      errors.add(new IOException("Input Pattern " + p + " matches 0 files"));
    } else {
      for (FileStatus globStat: matches) {
        if (globStat.isDirectory()) {
          RemoteIterator<LocatedFileStatus> iter =
              fs.listLocatedStatus(globStat.getPath());
          while (iter.hasNext()) {
            LocatedFileStatus stat = iter.next();
            if (inputFilter.accept(stat.getPath())) {
              if (recursive && stat.isDirectory()) {
                addInputPathRecursively(result, fs, stat.getPath(),
                    inputFilter);
              } else {
                result.add(stat);
              }
            }
          }
        } else {
          result.add(globStat);
        }
      }
    }
  }

  if (!errors.isEmpty()) {
    throw new InvalidInputException(errors);
  }
  LOG.info("Total input paths to process : " + result.size());
  return result;
}
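The user-provided filter mentioned above is installed with FileInputFormat.setInputPathFilter() and is combined by listStatus() with the built-in hiddenFileFilter (which drops names starting with "_" or "."). A small sketch of a custom PathFilter (the class name and path are just examples) that skips files ending in ".tmp":

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Accepts everything except temporary files; hidden files are already
// excluded by the built-in hiddenFileFilter.
public class SkipTmpFilter implements PathFilter {
    @Override
    public boolean accept(Path path) {
        return !path.getName().endsWith(".tmp");
    }
}

// In the driver:
//   FileInputFormat.setInputPathFilter(job, SkipTmpFilter.class);
//   FileInputFormat.addInputPath(job, new Path("/data/input"));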
3. protected long computeSplitSize(long blockSize, long minSize, long maxSize): computes the effective split size for the current job, i.e. the block size clamped between the configured minimum and maximum split sizes.
protected long computeSplitSize(long blockSize, long minSize,
                                long maxSize) {
  return Math.max(minSize, Math.min(maxSize, blockSize));
}
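A few worked values (assuming a 128 MB block size; these are hand-computed from the formula, not program output) show how the clamp behaves:

- defaults (minSize = 1, maxSize = Long.MAX_VALUE): max(1, min(Long.MAX_VALUE, 128 MB)) = 128 MB, i.e. roughly one split per block;
- minSize = 256 MB: max(256 MB, min(Long.MAX_VALUE, 128 MB)) = 256 MB, i.e. fewer, larger splits;
- maxSize = 64 MB: max(1, min(64 MB, 128 MB)) = 64 MB, i.e. two splits per block.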
4. protected int getBlockIndex(BlockLocation[] blkLocations, long offset): since the block information of the file has already been obtained, it only needs to find the block that contains the given offset.
protected int getBlockIndex(BlockLocation[] blkLocations,
                            long offset) {
  for (int i = 0 ; i < blkLocations.length; i++) {
    // is the offset inside this block?
    if ((blkLocations[i].getOffset() <= offset) &&
        (offset < blkLocations[i].getOffset() + blkLocations[i].getLength())){
      return i;
    }
  }
  BlockLocation last = blkLocations[blkLocations.length -1];
  long fileLength = last.getOffset() + last.getLength() -1;
  throw new IllegalArgumentException("Offset " + offset +
                                     " is outside of file (0.." +
                                     fileLength + ")");
}
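To see why getSplits() calls this for every split: the hosts of the block containing the split's first byte become the split's preferred locations, which the scheduler uses for data locality. A minimal standalone sketch with synthetic BlockLocations (not real HDFS metadata; host names are made up):

import org.apache.hadoop.fs.BlockLocation;

public class BlockIndexDemo {
    public static void main(String[] args) throws Exception {
        long mb = 1024L * 1024L;
        // Two fake blocks of a 200 MB file, hosted on different nodes.
        BlockLocation[] blocks = new BlockLocation[] {
            new BlockLocation(new String[]{"node1:50010"}, new String[]{"node1"}, 0, 128 * mb),
            new BlockLocation(new String[]{"node2:50010"}, new String[]{"node2"}, 128 * mb, 72 * mb)
        };
        long splitStart = 130 * mb;   // a split starting inside the second block
        for (int i = 0; i < blocks.length; i++) {
            if (blocks[i].getOffset() <= splitStart
                && splitStart < blocks[i].getOffset() + blocks[i].getLength()) {
                // The split would be scheduled preferentially on this block's hosts.
                System.out.println("block index = " + i
                    + ", hosts = " + String.join(",", blocks[i].getHosts()));
                break;
            }
        }
    }
}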
Reposted from: MapReduce获取分片数 and Hadoop中FileInputFormat计算InputSplit的getSplits方法的流程.