Kylin depends on a Hadoop big-data platform. Before installing and deploying it, confirm that Hadoop, HBase, and Hive are already installed on the platform.
Note: the special binary package is a Kylin snapshot binary built against HBase 1.1+. Installing it requires HBase 1.1.3 or later; earlier versions have a known defect in the fuzzy key filter that causes Kylin query results to be missing records (HBASE-14269). Also note that this is not an official release (it is rebased onto the latest changes of the KYLIN 1.3.x branch every few weeks) and has not been fully tested.
You can choose whichever version you need to download; here we download apache-kylin-1.6.0-bin.tar.gz.
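A hedged example of fetching the package, assuming the Apache archive still hosts the 1.6.0 release at this path:
$ wget https://archive.apache.org/dist/kylin/apache-kylin-1.6.0/apache-kylin-1.6.0-bin.tar.gz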
$ tar -zxvf apache-kylin-1.6.0-bin.tar.gz
$ mv apache-kylin-1.6.0 /home/hadoop/cloud/
$ ln -s /home/hadoop/cloud/apache-kylin-1.6.0 /home/hadoop/cloud/kylin
Configure the KYLIN environment variables and a variable named hive_dependency in /etc/profile:
vim /etc/profile
# append the following lines
export KYLIN_HOME=/home/hadoop/kylin
export PATH=$PATH:$KYLIN_HOME/bin
export hive_dependency=/home/hadoop/hive/conf:/home/hadoop/hive/lib/*:/home/hadoop/hive/hcatalog/share/hcatalog/hive-hcatalog-core-2.0.0.jar
Make the profile take effect:
# source /etc/profile
# su hadoop
$ source /etc/profile
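A quick check, assuming the profile above has been sourced, that the variables are visible to the hadoop user:
$ echo $KYLIN_HOME
$ echo $hive_dependency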
This configuration must also be done on the other nodes master2, slave1, and slave2: once Kylin submits a job to MR and the Hadoop cluster dispatches tasks to the worker nodes, those nodes need the Hive dependency information. Without it, the MR tasks fail with an error like: hcatalogXXX not found. A sketch for propagating the setting follows.
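A minimal sketch of copying the profile to the other nodes, assuming root SSH access and the hostnames used in this cluster; the profile still has to be sourced (or the user re-logged-in) on each node afterwards:
$ for node in master2 slave1 slave2; do scp /etc/profile root@$node:/etc/profile; done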
$ vim ~/cloud/kylin/bin/kylin.sh
# explicitly declare KYLIN_HOME
export KYLIN_HOME=/home/hadoop/kylin
# explicitly add the $hive_dependency dependency to HBASE_CLASSPATH_PREFIX
export HBASE_CLASSPATH_PREFIX=${tomcat_root}/bin/bootstrap.jar:${tomcat_root}/bin/tomcat-juli.jar:${tomcat_root}/lib/*:$hive_dependency:$HBASE_CLASSPATH_PREFIX
Run the environment check script; it should report the configured KYLIN_HOME:
$ check-env.sh
KYLIN_HOME is set to /home/hadoop/kylin
Go into the conf directory and modify Kylin's configuration file kylin.properties as follows:
$ vim ~/cloud/kylin/conf/kylin.properties
kylin.rest.servers=master:7070
# define the job.jar that Kylin uses for MR jobs and the HBase coprocessor jar, used to improve performance
kylin.job.jar=/home/hadoop/kylin/lib/kylin-job-1.6.0-SNAPSHOT.jar
kylin.coprocessor.local.jar=/home/hadoop/kylin/lib/kylin-coprocessor-1.6.0-SNAPSHOT.jar
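An optional sanity check that the jars referenced above actually exist (the exact SNAPSHOT file names depend on the package you downloaded):
$ ls /home/hadoop/kylin/lib/kylin-job-*.jar /home/hadoop/kylin/lib/kylin-coprocessor-*.jar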
Set the replication factor in kylin_hive_conf.xml and kylin_job_conf.xml to 2:
<property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Block replication</description>
</property>
Note: before starting Kylin, make sure the following services are already running (a quick check is sketched after this list):
start-all.sh
mr-jobhistory-daemon.sh start historyserver
hive --service metastore &
zkServer.sh start
This must be executed on every node to start the ZooKeeper service on each of them.
start-hbase.sh
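A quick, hedged check that the required daemons are up on the master node (exact process names can vary by distribution):
$ jps
The output should include, among others, NameNode, ResourceManager, JobHistoryServer, HMaster, QuorumPeerMain (ZooKeeper) and a RunJar process for the Hive metastore.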
$ find-hive-dependency.sh
$ find-hbase-dependency.sh
$ kylin.sh start
$ kylin.sh stop
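If startup hangs or fails, a simple way to watch the server log (the path assumes the default Kylin log location under $KYLIN_HOME):
$ tail -f $KYLIN_HOME/logs/kylin.log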
Web UI address: http://192.168.1.10:7070/kylin/login
The default login username/password is ADMIN/KYLIN.
Kylin provides an automated script to create a test cube; the script also creates the corresponding Hive tables. Steps to run the sample:
S1: Run the ${KYLIN_HOME}/bin/sample.sh script
$ sample.sh
Key output:
KYLIN_HOME is set to /home/hadoop/kylin
Going to create sample tables in hive...
Sample hive tables are created successfully; Going to create sample cube...
Sample cube is created successfully in project 'learn_kylin'; Restart Kylin server or reload the metadata from web UI to see the change.
S2: In MySQL, check which tables this sample created (the query runs against the Hive metastore database):
select DB_ID,OWNER,SD_ID,TBL_NAME from TBLS;
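The query above assumes a session is already open against the Hive metastore database in MySQL; a sketch of opening one (the database name and user are assumptions, adjust them to your metastore configuration):
$ mysql -u hive -p
mysql> use hive;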
S3: In the Hive client, check the created tables and the row count (10,000 rows):
hive> show tables;
OK
kylin_cal_dt
kylin_category_groupings
kylin_sales
Time taken: 1.835 seconds, Fetched: 3 row(s)
hive> select count(*) from kylin_sales;
OK
Time taken: 65.351 seconds, Fetched: 1 row(s)
S4: Restart the Kylin server to refresh the cache
$ kylin.sh stop
$ kylin.sh start
S5: Log in to 192.168.200.165:7070/kylin with the default username/password ADMIN/KYLIN.
After entering the console, select the project named learn_kylin.
S6: Select the test cube "kylin_sales_cube", click "Action" - "Build", and choose a date later than 2014-01-01; this is so that all 10,000 test records are included.
Choose a build date.
After clicking Submit, a message appears confirming that the rebuild job was submitted successfully.
S7: Watch the job's progress on the Monitor page until it reaches 100%.
Job finished.
Switching to the Model console, you will see the cube's status has become READY, which means SQL queries can now be executed (an example query is sketched below).
During execution, a temporary table is generated in Hive; once the job is 100% complete, this table is deleted automatically.
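A minimal query to try on the Web UI's Insight tab once the cube is READY; it uses the columns created by sample.sh in the kylin_sales table:
select part_dt, sum(price) as total_sold, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt;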
check-env.sh reports: please make sure user has the privilege to run hbase shell
Check whether the HBase environment variables are configured correctly; after reconfiguring them, the problem was resolved.
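A quick way to confirm that the current user can actually reach HBase from the shell:
$ echo "status" | hbase shell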
Reference: http://www.jianshu.com/p/632b61f73fe8
hadoop-env.sh script problem during the Kylin installation: /home/hadoop-2.5.1/contrib/capacity-scheduler/.jar (No such file or directory)
WARNING: Failed to process JAR [jar:file:/home/hadoop-2.5.1/contrib/capacity-scheduler/.jar!/] for TLD files
java.io.FileNotFoundException: /home/hadoop-2.5.1/contrib/capacity-scheduler/.jar (No such file or directory)
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:215)
    at java.util.zip.ZipFile.<init>(ZipFile.java:145)
    at java.util.jar.JarFile.<init>(JarFile.java:153)
    at java.util.jar.JarFile.<init>(JarFile.java:90)
    at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93)
    at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69)
    at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99)
    at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
    at sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:89)
    at org.apache.tomcat.util.scan.FileUrlJar.<init>(FileUrlJar.java:41)
    at org.apache.tomcat.util.scan.JarFactory.newInstance(JarFactory.java:34)
    at org.apache.catalina.startup.TldConfig.tldScanJar(TldConfig.java:485)
    at org.apache.catalina.startup.TldConfig.access$100(TldConfig.java:61)
    at org.apache.catalina.startup.TldConfig$TldJarScannerCallback.scan(TldConfig.java:296)
    at org.apache.tomcat.util.scan.StandardJarScanner.process(StandardJarScanner.java:258)
    at org.apache.tomcat.util.scan.StandardJarScanner.scan(StandardJarScanner.java:220)
    at org.apache.catalina.startup.TldConfig.execute(TldConfig.java:269)
    at org.apache.catalina.startup.TldConfig.lifecycleEvent(TldConfig.java:565)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5412)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1081)
    at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1877)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
This is really just a minor bug; it can be fixed by slightly editing the script ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh: comment out the following loop.
#for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
#  if [ "$HADOOP_CLASSPATH" ]; then
#    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
#  else
#    export HADOOP_CLASSPATH=$f
#  fi
#done
To clean up intermediate storage left over from Kylin jobs, the storage cleanup job can be run:
kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
When testing the Kylin cube, an error is reported: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
Solution:
1. Configure hdfs-site.xml (restart HDFS for the change to take effect):
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
2. Or grant 777 permissions on the /user directory in HDFS:
$ hadoop fs -chmod -R 777 /user
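A less permissive alternative (a sketch, assuming the hdfs superuser account and that jobs are submitted as root) is to create a home directory for the submitting user instead of opening up all of /user:
$ sudo -u hdfs hadoop fs -mkdir -p /user/root
$ sudo -u hdfs hadoop fs -chown root:root /user/root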
2017-02-17 19:51:39 Friday
update1: 2017-05-04 20:10:05 Thursday