Prerequisites:
1. MySQL, Hadoop, Oozie, and Hive are installed.
2. All of the software above runs correctly.
Getting started:
To schedule a Sqoop (Sqoop 1) action with Oozie, you need to prepare three basic files: workflow.xml, job.properties, and hive-site.xml (the last one can actually be omitted, as explained later).
1. Create a workflow application directory for Oozie on HDFS. I created /user/oozieDemo/workflows/sq2hiveDemo, and under it a lib directory (see the command sketch after the next step).
2. Upload the MySQL JDBC driver to the lib directory.
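A minimal sketch of these two steps with the HDFS CLI (the connector jar name below is only an example; use whichever version of the MySQL driver you actually have):

hdfs dfs -mkdir -p /user/oozieDemo/workflows/sq2hiveDemo/lib
hdfs dfs -put mysql-connector-java-5.1.34-bin.jar /user/oozieDemo/workflows/sq2hiveDemo/lib/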
3. Write the job.properties file. Its contents are as follows:
oozie.wf.application.path=hdfs://NODE3:8020/user/oozieDemo/workflows/sq2hiveDemo
# Shell script to run (note: EXEC is not actually referenced by this workflow)
EXEC=sq2hive.sh
jobTracker=NODE3:8032
nameNode=hdfs://NODE3:8020
queueName=default
oozie.use.system.libpath=true
oozie.libpath=/user/oozie/share/lib/lib_20150708191612
user.name=root
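The oozie.libpath value must point at the Sqoop sharelib installed on your cluster; the timestamped directory name differs per installation. You can discover yours by listing the sharelib root on HDFS:

hdfs dfs -ls /user/oozie/share/lib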
4. Write the workflow.xml file, i.e., the configuration of the Sqoop action. It can be written in two styles: command mode and argument mode.
The command-mode workflow.xml:
<workflow-app xmlns='uri:oozie:workflow:0.1' name='sq2hive-wf'>
    <start to='sq2hive' />
    <action name='sq2hive'>
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>hive.metastore.uris</name>
                    <value>thrift://172.17.20.2:9083</value>
                </property>
            </configuration>
            <command>import --connect jdbc:mysql://172.17.20.4/scm --username root --password root --table ROLES --columns "ROLE_ID,NAME,HOST_ID" --delete-target-dir --hive-import --hive-overwrite --hive-table sun.roles -m 2</command>
        </sqoop>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Script failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
</workflow-app>
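Before uploading, you can check the file against the Oozie workflow schemas with the Oozie client (assuming the oozie CLI is on your PATH and workflow.xml is in the current directory; on newer Oozie versions this command may also need the -oozie server URL):

oozie validate workflow.xml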
The argument-mode workflow.xml, where each Sqoop option is passed as a separate <arg> element and hive-site.xml is shipped to the action via a <file> element. (Since hive.metastore.uris is already set in the <configuration> block, the hive-site.xml file is in fact redundant; this is what was meant earlier by its being optional.)
<workflow-app xmlns='uri:oozie:workflow:0.1' name='sq2hive-wf'>
    <start to='sq2hive' />
    <action name='sq2hive'>
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>hive.metastore.uris</name>
                    <value>thrift://172.17.20.2:9083</value>
                </property>
            </configuration>
            <arg>import</arg>
            <arg>--connect</arg>
            <arg>jdbc:mysql://172.17.20.4/scm</arg>
            <arg>--username</arg>
            <arg>root</arg>
            <arg>--password</arg>
            <arg>root</arg>
            <arg>--table</arg>
            <arg>ROLES</arg>
            <arg>--columns</arg>
            <arg>ROLE_ID,NAME,HOST_ID</arg>
            <arg>--delete-target-dir</arg>
            <arg>--hive-import</arg>
            <arg>--hive-overwrite</arg>
            <arg>--hive-table</arg>
            <arg>sun.roles</arg>
            <arg>-m</arg>
            <arg>2</arg>
            <file>/user/oozieDemo/workflows/sq2hiveDemo/hive-site.xml#hive-site.xml</file>
        </sqoop>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Script failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
</workflow-app>
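As a sanity check, you can first run the same import directly with the Sqoop client on a cluster node (assuming sqoop is on the PATH there); if this works, the Oozie action is running the identical invocation:

sqoop import --connect jdbc:mysql://172.17.20.4/scm --username root --password root \
    --table ROLES --columns "ROLE_ID,NAME,HOST_ID" --delete-target-dir \
    --hive-import --hive-overwrite --hive-table sun.roles -m 2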
5. Upload workflow.xml to HDFS, into the workflow application path /user/oozieDemo/workflows/sq2hiveDemo, along with hive-site.xml. (job.properties does not have to be uploaded, since the local job.properties is what you pass on the command line when running the Oozie job.) As sketched below:
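A sketch of the upload with the HDFS CLI, assuming both files are in the current local directory:

hdfs dfs -put workflow.xml /user/oozieDemo/workflows/sq2hiveDemo/
hdfs dfs -put hive-site.xml /user/oozieDemo/workflows/sq2hiveDemo/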
6. Submit and start the workflow you just wrote from the command line. The command is as follows:
oozie job --oozie http://node1:11000/oozie --config job.properties -run
This command submits and starts the job in one step.
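On success the client prints a job ID, which you can use to poll status and fetch logs from the same CLI (<job-id> below is a placeholder for the ID you get back):

oozie job -oozie http://node1:11000/oozie -info <job-id>
oozie job -oozie http://node1:11000/oozie -log <job-id>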
7. You can monitor the job's execution through the Oozie web console or Hue's workflow view.