Last night I got Oozie installed and running, with MySQL configured as its database. So today it was time to run the demos that ship with Oozie, and wouldn't you know it, the very first run threw errors! Quite a few of them, so I won't list them one by one; I'll just describe the fix I ended up with.
oozie job -oozie http://localhost:11000/oozie -config examples/apps/map-reduce/job.properties -run
This command needs to be run from inside the Oozie directory. After combing through a lot of material online, I finally got it working; three configuration files need to be modified.
Before getting to the configuration changes, there are a few missing steps to fill in. First, unpack oozie-examples.tar.gz, oozie-client-3.3.2.tar.gz, and oozie-sharelib-3.3.2.tar.gz from the installation directory, then push the examples and share directories up to HDFS.
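The unpacking is just this, run from the Oozie installation directory, after which the two put commands below do the upload:

tar -xzf oozie-examples.tar.gz
tar -xzf oozie-client-3.3.2.tar.gz
tar -xzf oozie-sharelib-3.3.2.tar.gz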
hadoop fs -put examples examples
hadoop fs -put share share
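To confirm they landed in your HDFS home directory, where Oozie will look for them:

hadoop fs -ls examples
hadoop fs -ls share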
Then configure the oozie-client environment variables in /etc/profile.
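A minimal sketch of those /etc/profile lines; the unpack location here is an assumption, and OOZIE_URL is optional but lets you omit the -oozie flag on the CLI:

export OOZIE_CLIENT_HOME=/usr/local/oozie-client-3.3.2   # assumed unpack location, adjust to yours
export PATH=$PATH:$OOZIE_CLIENT_HOME/bin
export OOZIE_URL=http://localhost:11000/oozie            # optional: default server for the oozie CLI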
Next, here is how the Oozie problems were actually fixed.
1. Edit oozie-site.xml in Oozie's conf directory.
Add the following:
<property>
    <name>oozie.services</name>
    <value>
        org.apache.oozie.service.SchedulerService,
        org.apache.oozie.service.InstrumentationService,
        org.apache.oozie.service.CallableQueueService,
        org.apache.oozie.service.UUIDService,
        org.apache.oozie.service.ELService,
        org.apache.oozie.service.AuthorizationService,
        org.apache.oozie.service.MemoryLocksService,
        org.apache.oozie.service.DagXLogInfoService,
        org.apache.oozie.service.SchemaService,
        org.apache.oozie.service.LiteWorkflowAppService,
        org.apache.oozie.service.JPAService,
        org.apache.oozie.service.StoreService,
        org.apache.oozie.service.CoordinatorStoreService,
        org.apache.oozie.service.SLAStoreService,
        org.apache.oozie.service.DBLiteWorkflowStoreService,
        org.apache.oozie.service.CallbackService,
        org.apache.oozie.service.ActionService,
        org.apache.oozie.service.ActionCheckerService,
        org.apache.oozie.service.RecoveryService,
        org.apache.oozie.service.PurgeService,
        org.apache.oozie.service.CoordinatorEngineService,
        org.apache.oozie.service.BundleEngineService,
        org.apache.oozie.service.DagEngineService,
        org.apache.oozie.service.CoordMaterializeTriggerService,
        org.apache.oozie.service.StatusTransitService,
        org.apache.oozie.service.PauseTransitService,
        org.apache.oozie.service.HadoopAccessorService
    </value>
    <description>
        All services to be created and managed by Oozie Services singleton.
        Class names must be separated by commas.
    </description>
</property>
<property>
    <name>oozie.service.ProxyUserService.proxyuser.cenyuhai.hosts</name>
    <value>*</value>
    <description>
        List of hosts the '#USER#' user is allowed to perform 'doAs'
        operations. The '#USER#' must be replaced with the username of the
        user who is allowed to perform 'doAs' operations. The value can be
        the '*' wildcard or a list of hostnames. For multiple users copy
        this property and replace the user name in the property name.
    </description>
</property>
<property>
    <name>oozie.service.ProxyUserService.proxyuser.cenyuhai.groups</name>
    <value>*</value>
    <description>
        List of groups the '#USER#' user is allowed to impersonate users
        from to perform 'doAs' operations. The '#USER#' must be replaced
        with the username of the user who is allowed to perform 'doAs'
        operations. The value can be the '*' wildcard or a list of groups.
        For multiple users copy this property and replace the user name in
        the property name.
    </description>
</property>
2. Edit oozie-env.sh and add the following:
export OOZIE_CONF=${OOZIE_HOME}/conf
export OOZIE_DATA=${OOZIE_HOME}/data
export OOZIE_LOG=${OOZIE_HOME}/logs
export CATALINA_BASE=${OOZIE_HOME}/oozie-server
export CATALINA_TMPDIR=${OOZIE_HOME}/oozie-server/temp
export CATALINA_OUT=${OOZIE_LOG}/catalina.out
3. Edit the Hadoop configuration file core-site.xml on every node:
<property>
    <name>hadoop.proxyuser.cenyuhai.hosts</name>
    <value>hadoop.Master</value>
</property>
<property>
    <name>hadoop.proxyuser.cenyuhai.groups</name>
    <value>cenyuhai</value>
</property>

Then restart everything and the job can run; cenyuhai here is my local account, so substitute your own.
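For the restart I bounce both Hadoop and Oozie; a minimal sketch, assuming the Hadoop 1.x helper scripts are on the PATH and you are inside the Oozie directory:

stop-all.sh && start-all.sh    # restart Hadoop so every node re-reads core-site.xml
bin/oozied.sh stop             # then bounce the Oozie server
bin/oozied.sh start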
Addendum: with all of the above configured, jobs could be submitted, but after submitting an MR job and checking it in the web UI, I ran into an error:
JA006: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused
I chased this one for ages without result. It was finally fixed by editing job.properties and changing jobTracker from localhost:9001 to the fully qualified form below. This is probably related to how my Hadoop jobTracker is configured, so anyone hitting the same thing can give it a try:
nameNode=hdfs://192.168.1.133:9000
jobTracker=192.168.1.133:9001

Next we move on to the Hive demo. Before running it, remember to change the Hive demo's job.properties in the same way as above.
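For reference, after the edit my Hive demo job.properties looked roughly like this. The last two keys are assumptions on my part: the application path mirrors what the stock example ships with, and oozie.libpath points at the share/lib/hive directory uploaded earlier; double-check against your copy of the file:

nameNode=hdfs://192.168.1.133:9000
jobTracker=192.168.1.133:9001
queueName=default
examplesRoot=examples
# assumption: point the job at the hive jars uploaded to HDFS earlier
oozie.libpath=${nameNode}/user/${user.name}/share/lib/hive
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/hive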
Then I submitted it. The submission went through, but the web UI showed the status as KILLED. It got whacked...
Error code: JA018, error message: org/apache/hadoop/hive/cli/CliDriver
That made me suspect a jar problem, so I deleted all the jars in the hive directory under share and copied in all the jars from the Hive installation on my own machine.
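In shell terms the swap is roughly this, assuming $HIVE_HOME points at the local Hive install and the share layout from the sharelib tarball:

rm share/lib/hive/*.jar                  # drop the bundled hive jars
cp $HIVE_HOME/lib/*.jar share/lib/hive/  # replace them with the local Hive's jars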
Then upload it to the shared directory again (if the old copy is still on HDFS, remove it first with hadoop fs -rmr share, or the put will fail):
hadoop fs -put share share
Submit again, and this time the status shows success!
oozie job -oozie http://localhost:11000/oozie -config examples/apps/hive/job.properties -run
But this wretched thing had actually inserted the data into Derby... Speechless. The UI showed success, but it was useless, because we configured an external MySQL database. So what now?
You need to edit workflow.xml and change its configuration section to look like the following.
<configuration>
    <property>
        <name>mapred.job.queue.name</name>
        <value>${queueName}</value>
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://192.168.1.133:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>mysql</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
</configuration>

Submit it again after that, and the table you created can be queried from Hive. Oh, yeah!
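As a quick sanity check, running the Hive CLI on a machine pointed at the same MySQL metastore should now list the demo's table:

hive -e 'show tables;'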