Spark RDD 2GB-per-partition limit (Size exceeds Integer.MAX_VALUE)

I recently hit the 2GB-per-partition limit while using Spark to process a fairly large data file. The Spark logs report the following:
WARN scheduler.TaskSetManager: Lost task 19.0 in stage 6.0 (TID 120, 10.111.32.47): java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:123)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:132)
at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:517)
at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:432)
at org.apache.spark.storage.BlockManager.get(BlockManager.scala:618)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:146)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)

Solution:
Set the number of RDD partitions manually. The job was running with Spark's default of 18 partitions; after I manually raised it to 500, the problem went away. Once an RDD has been loaded, you can reset its partition count with RDD.repartition(numPartitions: Int):
val data_new = data.repartition(500)
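
For context, here is a minimal sketch of two ways to get more partitions, assuming a text file on HDFS; the path, application name, and the count of 500 are placeholders to adapt to your data size. Either requesting more partitions at load time or repartitioning afterward keeps each cached block under the 2GB mmap limit.

import org.apache.spark.{SparkConf, SparkContext}

object RepartitionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("RepartitionSketch"))

    // Hypothetical input path; substitute your own large file.
    val inputPath = "hdfs:///data/large_input.txt"

    // Option 1: request more partitions at load time via textFile's
    // minPartitions argument, so no single partition grows past 2GB.
    val data = sc.textFile(inputPath, 500)

    // Option 2: repartition an already-loaded RDD (this triggers a full shuffle).
    val data_new = data.repartition(500)

    // partitions.length reports the resulting partition count.
    println(s"partitions: ${data_new.partitions.length}")

    sc.stop()
  }
}

Note that repartition always performs a full shuffle; if you only need to reduce the partition count, coalesce(numPartitions) can avoid the shuffle.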


Below are some related resources for readers who want to dig further:

2GB limit in spark for blocks
create LargeByteBuffer abstraction for eliminating 2GB limit on blocks
Why does Spark RDD partition has 2GB limit for HDFS
Java code that throws the exception: FileChannelImpl.java