A Python implementation of Spark's HashPartitioner

Spark's default partitioner is org.apache.spark.HashPartitioner; its implementation is as follows:

class HashPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")

  def numPartitions: Int = partitions

  def getPartition(key: Any): Int = key match {
    case null => 0
    case _ => Utils.nonNegativeMod(key.hashCode, numPartitions)
  }

  override def equals(other: Any): Boolean = other match {
    case h: HashPartitioner =>
      h.numPartitions == numPartitions
    case _ =>
      false
  }

  override def hashCode: Int = numPartitions
}
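The `Utils.nonNegativeMod` call above maps a possibly negative `hashCode` into the range `[0, numPartitions)`, because in Java and Scala the `%` operator keeps the sign of the dividend. A minimal Python sketch of the same logic (note that Python's `%` already returns a non-negative result for a positive modulus, so the fix-up branch is only there to mirror the JVM semantics):

```python
def non_negative_mod(x, mod):
    # Port of org.apache.spark.util.Utils.nonNegativeMod.
    # In Java/Scala, a negative x gives a negative remainder, which
    # Spark shifts back into range. In Python, x % mod is already
    # non-negative when mod > 0, so the branch never fires here.
    raw = x % mod
    return raw + mod if raw < 0 else raw

print(non_negative_mod(-7, 3))  # -7 % 3 is -1 on the JVM; both give 2
```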

To look up a key's partition from Python, you only need to reimplement hashCode and then take the modulo.

The hashCode implementation looks like this:

def java_string_hashcode(s):
    # Reproduce Java's String.hashCode: h = 31 * h + c over each char,
    # with 32-bit signed overflow semantics.
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF  # keep only the low 32 bits
    # Reinterpret the unsigned 32-bit value as a signed Java int
    return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000
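Putting the two pieces together, a `get_partition` helper (the name is mine, not Spark's) for string keys might look like the following sketch, mirroring `HashPartitioner.getPartition` above:

```python
def java_string_hashcode(s):
    # Java's String.hashCode with 32-bit signed overflow semantics
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000

def get_partition(key, num_partitions):
    # Hypothetical helper mirroring HashPartitioner.getPartition:
    # a null (None) key goes to partition 0, otherwise
    # nonNegativeMod(key.hashCode, numPartitions).
    if key is None:
        return 0
    h = java_string_hashcode(key)
    return (h % num_partitions + num_partitions) % num_partitions

print(get_partition("abc", 4))  # "abc".hashCode is 96354; 96354 % 4 == 2
```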

Verification

Scala implementation

Python implementation
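One way to verify the Python port is to compare it against reference values produced by Java's `String.hashCode` (for example, evaluating `"abc".hashCode` in a spark-shell gives 96354):

```python
def java_string_hashcode(s):
    # Java's String.hashCode with 32-bit signed overflow semantics
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000

# Reference values from Java's String.hashCode
print(java_string_hashcode(""))       # 0
print(java_string_hashcode("abc"))    # 96354
print(java_string_hashcode("hello"))  # 99162322
```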
