Giraph Source Code Analysis (1): Starting the ZooKeeper Service

Author | 白松

[Note: This article is original content. Please contact 数澜 before quoting or reposting.]

Introduction to Giraph:

Apache Giraph is an iterative graph processing system built for high scalability. For example, it is currently used at Facebook to analyze the social graph formed by users and their connections. Giraph originated as the open-source counterpart to Pregel, the graph processing architecture developed at Google and described in a 2010 paper. Both systems are inspired by the Bulk Synchronous Parallel model of distributed computation introduced by Leslie Valiant. Giraph adds several features beyond the basic Pregel model, including master computation, sharded aggregators, edge-oriented input, out-of-core computation, and more. With a steady development cycle and a growing community of users worldwide, Giraph is a natural choice for unleashing the potential of structured datasets at a massive scale.

How it works:

Giraph is built on top of Hadoop. It wraps the MapReduce Mapper and uses no reducer. Multiple iterations are run inside the Mapper, and each iteration corresponds to one superstep of the BSP model; one Hadoop job corresponds to one BSP job. The basic architecture is shown in the figure below.

The role of each component is as follows:

1. ZooKeeper: responsible for computation state

–partition/worker mapping

–global state: #superstep

–checkpoint paths, aggregator values, statistics

2. Master: responsible for coordination

–assigns partitions to workers

–coordinates synchronization

–requests checkpoints

–aggregates aggregator values

–collects health statuses

3. Worker: responsible for vertices

–invokes active vertices compute() function

–sends, receives and assigns messages

–computes local aggregation values

Notes

(1) Experimental environment

Three servers: test165, test62, and test63. test165 serves as both the JobTracker and a TaskTracker.

Test case: the SSSP example that ships with Giraph; the input data was generated by simulation.

Run command: hadoop jar giraph-examples-1.0.0-for-hadoop-0.20.203.0-jar-with-dependencies.jar org.apache.giraph.GiraphRunner org.apache.giraph.examples.SimpleShortestPathsVertex -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip /user/giraph/SSSP -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op /user/giraph/output-sssp-debug-7 -w 5

(2) To save space, all code shown below consists of core snippets only.

(3) In core-site.xml, hadoop.tmp.dir is set to /home/hadoop/hadooptmp.

(4) This article was written across multiple debugging runs, so the JobIDs shown differ from place to place; readers can treat them as the same JobID.

(5) Subsequent articles follow the same conventions.

The org.apache.giraph.graph.GraphMapper class

Giraph defines its own org.apache.giraph.graph.GraphMapper class, which extends Hadoop's org.apache.hadoop.mapreduce.Mapper<Object, Object, Object, Object> and overrides the setup(), map(), cleanup(), and run() methods. The class is documented as follows:

“This mapper that will execute the BSP graph tasks alloted to this worker. All tasks will be performed by calling the GraphTaskManager object managed by this GraphMapper wrapper classs. Since this mapper will not be passing data by key-value pairs through the MR framework, the Mapper parameter types are irrelevant, and set to Object type.”

The BSP computation logic is encapsulated in the GraphMapper class, which holds a GraphTaskManager object that manages the job's tasks. Each GraphMapper object corresponds to one compute node in the BSP model.

In GraphMapper's setup() method, a GraphTaskManager object is created and its setup() method is called to perform initialization, as follows:
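The code itself appeared as a screenshot in the original article. Below is a minimal sketch of GraphMapper.setup(), paraphrased from Giraph 1.0; the field name graphTaskManager and the use of Hadoop's DistributedCache to obtain the locally cached jar paths are assumptions, and details may differ from the actual source.

@Override
public void setup(Context context)
    throws IOException, InterruptedException {
  // Create the object that owns all Giraph-specific work for this task
  // and let it initialize itself; the cached archive paths are passed on
  // so that the local ZooKeeper jar can be located later.
  graphTaskManager = new GraphTaskManager<I, V, E, M>(context);
  graphTaskManager.setup(
      DistributedCache.getLocalCacheArchives(context.getConfiguration()));
}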

The map() method is empty, because all of the work is encapsulated in the GraphTaskManager class. The run() method calls the GraphTaskManager object's execute() method to carry out the iterative BSP computation.
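Sketched in the same spirit (a paraphrase, not a verbatim copy; the failure-handling in the real run() is omitted), map() and run() look roughly like this:

@Override
public void map(Object key, Object value, Context context) {
  // Deliberately empty: no data flows through the MapReduce key/value path.
}

@Override
public void run(Context context) throws IOException, InterruptedException {
  // The whole BSP job happens here: set up, execute the supersteps, clean up.
  setup(context);
  try {
    graphTaskManager.execute();
  } finally {
    cleanup(context);
  }
}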

The org.apache.giraph.graph.GraphTaskManager class

Purpose: The Giraph-specific business logic for a single BSP compute node in whatever underlying type of cluster our Giraph job will run on. Owning object will provide the glue into the underlying cluster framework and will call this object to perform Giraph work.

The setup() method is described next; its code is as follows:
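The body of setup() was shown as a screenshot in the original. The condensed sketch below follows the article's own description of the call sequence; bookkeeping and failure handling are omitted, and member names such as conf, graphFunctions and zkManager are assumptions.

public void setup(Path[] zkPathList)
    throws IOException, InterruptedException {
  context.setStatus("setup: Initializing Zookeeper services.");
  conf = new ImmutableClassesGiraphConfiguration<I, V, E, M>(
      context.getConfiguration());
  // 1. Find the local copy of the job jar that carries the ZooKeeper classes.
  locateZookeeperClasspath(zkPathList);
  // 2. Elect a ZooKeeper server, start it, and wait until it is reachable.
  startZooKeeperManager();
  // 3. Decide which role(s) this task plays: master, worker and/or zookeeper.
  graphFunctions = determineGraphFunctions(conf, zkManager);
  // ... remaining initialization (BSP services, checkpointing, etc.) omitted.
}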

The role of each method it calls is introduced in turn:

1. locateZookeeperClasspath(zkPathList)

It locates the local copy of the ZK jar, at /home/hadoop/hadooptmp/mapred/local/taskTracker/root/jobcache/job_201403270456_0001/jars/job.jar, which is later used to start the ZooKeeper service.

2. startZooKeeperManager(): initializes and configures the ZooKeeperManager.

It is defined as follows:
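The original definition was a screenshot; the following is a minimal paraphrase consistent with the description in this article. The field names zkManager and serverPortList and the final accessor call are assumptions.

private void startZooKeeperManager()
    throws IOException, InterruptedException {
  // Create the manager that elects, starts and monitors the ZooKeeper server.
  zkManager = new ZooKeeperManager(context, conf);
  context.setStatus("setup: Setting up Zookeeper manager.");
  // createCandidateStamp() + getZooKeeperServerList(), described below.
  zkManager.setup();
  // Start ZooKeeper on the elected task; all other tasks wait for it.
  zkManager.onlineZooKeeperServers();
  serverPortList = zkManager.getZooKeeperServerPortString();
}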

3. The org.apache.giraph.zk.ZooKeeperManager class

Purpose: Manages the election of ZooKeeper servers, starting/stopping the services, etc.

The setup() method of ZooKeeperManager is defined as follows:
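Paraphrasing the two steps described next, setup() essentially does:

public void setup() throws IOException, InterruptedException {
  // Each task announces itself by writing an empty candidate file on HDFS.
  createCandidateStamp();
  // Then the ZooKeeper server list is determined: created by task 0,
  // read back by every other task.
  getZooKeeperServerList();
}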

The createCandidateStamp() method creates one file per task, with empty content, under the _bsp/_defaultZkManagerDir/job_201403301409_0006/_task directory on HDFS. The file name is the local hostname plus the taskPartition, as in the screenshot below:


Five workers were specified at run time (-w 5); together with one master, that gives the 6 tasks seen above.
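A sketch of what createCandidateStamp() does, under the assumption that taskDirectory, myHostname and taskPartition are fields of ZooKeeperManager (naming may differ from the actual source):

public void createCandidateStamp() throws IOException {
  FileSystem fs = FileSystem.get(conf);
  fs.mkdirs(taskDirectory);
  // An empty marker file named "<hostname> <taskPartition>", e.g. "test162 0".
  Path myCandidacyPath =
      new Path(taskDirectory, myHostname + " " + taskPartition);
  fs.createNewFile(myCandidacyPath);
}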

In getZooKeeperServerList(), the task whose taskPartition is 0 calls createZooKeeperServerList() to create the ZooKeeper server list. This, too, is an empty file: the ZooKeeper servers are described by the file name itself.

createZooKeeperServerList() first lists the files under the taskDirectory (_bsp/_defaultZkManagerDir/job_201403301409_0006/_task). For every file found, the hostname and taskPartition encoded in the file name (Hostname + taskPartition) are stored in hostNameTaskMap. After a scan of taskDirectory, if the size of hostNameTaskMap has reached serverCount (the ZOOKEEPER_SERVER_COUNT constant in GiraphConstants.java, defined as 1), the outer loop stops. The outer loop is needed because the task files under taskDirectory are written by many tasks running in a distributed setting: when task 0 builds the server list, the other tasks may not yet have written their files. By default Giraph starts one ZooKeeper service per job, i.e., only one task will start a ZooKeeper service.
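A simplified sketch of the selection loop just described, as a fragment inside ZooKeeperManager. The names hostNameTaskMap and serverCount follow the article; fs, taskDirectory, baseDirectory, pollMsecs and the list-file prefix are assumed helper fields, and retry limits are omitted.

private void createZooKeeperServerList()
    throws IOException, InterruptedException {
  Map<String, Integer> hostNameTaskMap = new TreeMap<String, Integer>();
  while (true) {
    FileStatus[] fileStatuses = fs.listStatus(taskDirectory);
    hostNameTaskMap.clear();
    for (FileStatus status : fileStatuses) {
      // Candidate files are named "<hostname> <taskPartition>".
      String[] parts = status.getPath().getName().split(" ");
      hostNameTaskMap.put(parts[0], Integer.parseInt(parts[1]));
      if (hostNameTaskMap.size() >= serverCount) {
        break;                         // enough candidates found
      }
    }
    if (hostNameTaskMap.size() >= serverCount) {
      break;                           // serverCount is 1 by default
    }
    Thread.sleep(pollMsecs);           // other tasks have not registered yet
  }
  // The chosen host(s) and task id(s) are encoded in the file name itself,
  // e.g. "zkServerList_test162 0"; the file content stays empty.
  StringBuilder serverListFile =
      new StringBuilder(ZOOKEEPER_SERVER_LIST_FILE_PREFIX);
  for (Map.Entry<String, Integer> entry : hostNameTaskMap.entrySet()) {
    serverListFile.append(entry.getKey()).append(' ').append(entry.getValue());
  }
  fs.createNewFile(new Path(baseDirectory, serverListFile.toString()));
}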

Across repeated tests, task 0 was always elected as the ZooKeeper server: when it scans taskDirectory, only its own task file is present (the other tasks have not yet written theirs), so the for loop ends, hostNameTaskMap has size 1, and the while loop exits immediately. The entry selected here is therefore test162 0.

Finally, the file _bsp/_defaultZkManagerDir/job_201403301409_0006/zkServerList_test162 0 is created.

onlineZooKeeperServers(): based on the zkServerList_test162 0 file, Task 0 first generates the zoo.cfg configuration file and starts the ZooKeeper server process with a ProcessBuilder. Task 0 then connects to that ZooKeeper process through a socket, and finally creates the file _bsp/_defaultZkManagerDir/job_201403301409_0006/_zkServer/test162 0 to mark that the master's part is complete. The workers keep polling for _bsp/_defaultZkManagerDir/job_201403301409_0006/_zkServer/test162 0, i.e., each worker waits until the ZooKeeper service on the master has started.
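A heavily condensed sketch of onlineZooKeeperServers(), showing only the flow described above. The helper names (generateZooKeeperConfigFile, zkClasspath, configFilePath, serverDirectory, zkServerHostname, zkServerTask, zkBasePort, pollMsecs) and the exact command arguments are assumptions, and timeouts and error handling are left out.

public void onlineZooKeeperServers()
    throws IOException, InterruptedException {
  if (taskPartition == zkServerTask) {
    // Task 0 (the elected server): write zoo.cfg for a standalone server.
    generateZooKeeperConfigFile(myHostname);
    // Launch the ZooKeeper server as a separate JVM process.
    ProcessBuilder processBuilder = new ProcessBuilder(
        "java", "-cp", zkClasspath,
        "org.apache.zookeeper.server.quorum.QuorumPeerMain", configFilePath);
    zkProcess = processBuilder.start();
    // Poll with a plain socket until the server accepts connections.
    while (true) {
      try {
        Socket testSocket = new Socket(myHostname, zkBasePort);
        testSocket.close();
        break;
      } catch (IOException e) {
        Thread.sleep(pollMsecs);      // not up yet, retry
      }
    }
    // Mark readiness so the workers can stop waiting, e.g.
    // _bsp/_defaultZkManagerDir/<jobId>/_zkServer/<hostname> <task>.
    fs.createNewFile(
        new Path(serverDirectory, myHostname + " " + taskPartition));
  } else {
    // Workers: loop until the master's readiness file appears.
    Path ready =
        new Path(serverDirectory, zkServerHostname + " " + zkServerTask);
    while (!fs.exists(ready)) {
      Thread.sleep(pollMsecs);
    }
  }
}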

The command used to start the ZooKeeper service is as follows:
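The actual command appeared as a screenshot in the original article. As an illustration only (not the verbatim command; JVM memory flags and the full classpath are elided, and the zoo.cfg location depends on the local ZooKeeper directory), its general shape is:

java -cp <classpath containing job.jar> org.apache.zookeeper.server.quorum.QuorumPeerMain <local zk dir>/zoo.cfg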

4. determineGraphFunctions()

The GraphTaskManager class holds a CentralizedServiceMaster object and a CentralizedServiceWorker object, corresponding to the master and the worker respectively. The role(s) each BSP compute node plays are determined by the following rules:

a) If not split master, everyone does the everything and/or running ZooKeeper.
b) If split master/worker, masters also run ZooKeeper
c) If split master/worker == true and giraph.zkList is set, the master will not instantiate a ZK instance, but will assume a quorum is already active on the cluster for Giraph to use.

This decision is made by the static method determineGraphFunctions() of the GraphTaskManager class; the code snippet is as follows:
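The original snippet was a screenshot; below is a condensed paraphrase of the decision logic for cases a)-c) above. The GraphFunctions enum values and accessors such as getSplitMasterWorker() are written from memory as they appear in Giraph 1.0 and may differ in detail from the actual source.

private static GraphFunctions determineGraphFunctions(
    ImmutableClassesGiraphConfiguration conf, ZooKeeperManager zkManager) {
  boolean splitMasterWorker = conf.getSplitMasterWorker();
  int taskPartition = conf.getTaskPartition();
  boolean zkAlreadyProvided = conf.getZookeeperList() != null;
  GraphFunctions functions = GraphFunctions.UNKNOWN;
  if (!splitMasterWorker) {
    // a) no split: every task does master + worker work, possibly ZooKeeper too
    functions = (zkManager != null && zkManager.runsZooKeeper())
        ? GraphFunctions.ALL
        : GraphFunctions.ALL_EXCEPT_ZOOKEEPER;
  } else if (zkAlreadyProvided) {
    // c) an external quorum is given via giraph.zkList: no ZK started here
    functions = (taskPartition == 0)
        ? GraphFunctions.MASTER_ONLY
        : GraphFunctions.WORKER_ONLY;
  } else {
    // b) split master/worker: the task that runs ZooKeeper is also the master
    functions = (zkManager != null && zkManager.runsZooKeeper())
        ? GraphFunctions.MASTER_ZOOKEEPER_ONLY
        : GraphFunctions.WORKER_ONLY;
  }
  return functions;
}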

By default, Giraph splits the master and worker roles, starts the ZooKeeper service on the master, and does not start ZooKeeper on the workers. Task 0 therefore acts as master + ZooKeeper, and the remaining tasks are workers.
