Spark: implement your own RDD and make the code more elegant

When you first started writing Spark code, did you always reach for an object? Are you worried about code duplication? In the upcoming posts I will focus on keeping Spark code concise.

1. What is an RDD? The official documentation explains it thoroughly, so I will not repeat it here. What we do need is to understand the RDD at the code level: if it is a class, what are its important properties and methods? The main ones are listed below, followed by a small sketch of how they appear in a custom subclass:

    1) partitions(): Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not. Partition is a trait; every partition living on an executor is uniquely identified by an implementing class of Partition.

    2) iterator(): Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should not be called by users directly, but is available for implementors of custom subclasses of RDD. This is the RDD's iterator; it takes a Partition and a TaskContext, so the corresponding logic can be run on each Partition.

    3) dependencies(): Get the list of dependencies of this RDD. In Spark 1.6, Dependency has several subclasses (such as ShuffleDependency and the narrow dependencies OneToOneDependency and RangeDependency); a later post will cover them in detail, and interested readers can go straight to the source code to learn more.


    4) partitioner(): This function returns an Option[Partitioner]. If the RDD does not hold key-value pair data it is None, and we can implement the Partitioner abstract class ourselves. When I first read this I wondered why it is an abstract class rather than a trait; my take is that this is an object-oriented design choice: a class abstracts an entity, whereas an interface defines behaviour.

    5) preferredLocations(): Optionally overridden by subclasses to specify placement preferences.
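
To make the above concrete, here is a minimal sketch of the members a custom RDD subclass typically overrides. The signatures follow the Spark 1.x RDD API, but the class name MyRDD, its single dummy partition and the method bodies are placeholders for illustration only:

import org.apache.spark.{Partition, Partitioner, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Skeleton only: MyRDD and its dummy single partition are placeholders.
class MyRDD(sc: SparkContext) extends RDD[String](sc, Nil) {

    // 2) compute/iterator: produce the data of one partition
    override def compute(split: Partition, context: TaskContext): Iterator[String] =
        Iterator("row-" + split.index)

    // 1) partitions: one Partition object per partition of this RDD
    override protected def getPartitions: Array[Partition] =
        Array(new Partition { override def index: Int = 0 })

    // 4) partitioner: None unless this is a key-value RDD with a known partitioning
    override val partitioner: Option[Partitioner] = None

    // 5) preferredLocations: placement hints, e.g. the hosts that already hold the data
    override protected def getPreferredLocations(split: Partition): Seq[String] = Nil
}

Note that dependencies() is not overridden in this sketch: it falls back to the deps sequence passed to the RDD constructor (Nil above), just as in the MySQL example below.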

 

Below we implement our own RDD that talks to MySQL, touching only some of the functions described above. Of course this is not something you would do in production, unless you want to bring your MySQL server down; it is only a demonstration. For distributed databases such as HBase the logic is similar.

package com.hypers.rdd

import java.sql.{Connection, ResultSet}

import org.apache.spark.annotation.DeveloperApi
import org.apache.spark.rdd.RDD
import org.apache.spark.{Logging, Partition, SparkContext, TaskContext}

import scala.reflect.ClassTag

//TODO deduplication
class HFAJdbcRDD[T: ClassTag]
    (sc: SparkContext,
     connection: () => Connection, //method
     sql: String,
     numPartitions: Int,
     mapRow: (ResultSet) => T
) extends RDD[T](sc, Nil) with Logging {

    /**
      * If this RDD has a parent RDD, compute is normally reached through the iterator() method,
      * which passes the TaskContext along.
      * @param thePart the partition to read
      * @param context the task context
      * @return an iterator over the rows of this partition
      */
    @DeveloperApi
    override def compute(thePart: Partition, context: TaskContext): Iterator[T] = new Iterator[T] {

        val part = thePart.asInstanceOf[HFAJdbcPartition]
        val conn = connection()
        //Executing the sql as-is would return the same rows in every partition, so we page through
        //the result with "limit <partition index>,1": each partition reads exactly one row.
        val stmt = conn.prepareStatement(String.format("%s limit %s,1", sql, thePart.index.toString), ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
        logInfo("Statement fetch size is " + stmt.getFetchSize)
        val rs: ResultSet = stmt.executeQuery()

        override def hasNext: Boolean = {
            if(rs.next()){
                true
            }else{
                conn.close()
                false
            }
        }

        override def next(): T = {
            mapRow(rs)
        }
    }


    /**
      * Pass the information needed by compute, e.g. the offset used in the sql limit clause.
      * @return one Partition per requested partition
      */
    override protected def getPartitions: Array[Partition] = {
        (0 until numPartitions).map { inx =>
            new HFAJdbcPartition(inx)
        }.toArray
    }
}

private class HFAJdbcPartition(inx: Int) extends Partition {
    override def index: Int = inx
}

 

package com.hypers.rdd.execute

import java.sql.{DriverManager, ResultSet}

import com.hypers.commons.spark.BaseJob
import com.hypers.rdd.HFAJdbcRDD

//BaseJob initializes the SparkContext (sc); it is not shown here (a hypothetical sketch follows after this listing). You can also create the SparkContext yourself.
object HFAJdbcTest extends BaseJob {

    def main(args: Array[String]) {
        HFAJdbcTest(args)
    }

    override def apply(args: Array[String]): Unit = {

        val jdbcRdd = new HFAJdbcRDD[Tuple2[Int, String]](sc,
            getConnection,
            "select id,name from user where id<10",
            3,
            resultHandler
        )

        logger.info("count is " + jdbcRdd.count())
        logger.info("count keys " + jdbcRdd.keys.collect().toList)

    }

    def getConnection() = {
        Class.forName("com.mysql.jdbc.Driver").newInstance()
        DriverManager.getConnection("jdbc:mysql://localhost:3306/db", "root", "123456")
    }

    def resultHandler(rs: ResultSet): Tuple2[Int, String] = {
        rs.getInt("id") -> rs.getString("name")

    }
}
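
The BaseJob used above is not shown in this post. Purely for illustration, here is one possible minimal shape for it; the package name is taken from the import above, but the log4j logger and the SparkConf setup are assumptions, not the real class:

package com.hypers.commons.spark

import org.apache.log4j.Logger
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch only: the real BaseJob is not part of this post.
// All the example needs from it is an initialized SparkContext (sc), a logger, and an apply() to override.
trait BaseJob {
    lazy val sc: SparkContext = new SparkContext(new SparkConf().setAppName(getClass.getSimpleName))
    lazy val logger: Logger = Logger.getLogger(getClass)

    def apply(args: Array[String]): Unit
}

When the job is launched with spark-submit, the master is supplied on the command line, which is why this sketch only sets the application name.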