HiBench 7
Official repository: https://github.com/intel-hadoop/HiBench
HiBench is a big data benchmark suite that helps evaluate different big data frameworks in terms of speed, throughput and system resource utilization. It contains a set of Hadoop, Spark and streaming workloads, including Sort, WordCount, TeraSort, Sleep, SQL, PageRank, Nutch indexing, Bayes, Kmeans, NWeight and enhanced DFSIO, etc. It also contains several streaming workloads for Spark Streaming, Flink, Storm and Gearpump.
There are 19 workloads in HiBench in total.
Supported Hadoop/Spark/Flink/Storm/Gearpump releases:
Hadoop: Apache Hadoop 2.x, CDH5, HDP
Spark: Spark 1.6.x, Spark 2.0.x, Spark 2.1.x, Spark 2.2.x
Flink: 1.0.3
Storm: 1.0.1
Gearpump: 0.8.1
Kafka: 0.8.2.2
$ wget https://github.com/intel-hadoop/HiBench/archive/HiBench-7.0.tar.gz
$ tar xvf HiBench-7.0.tar.gz
$ cd HiBench-HiBench-7.0
1) Build all
$ mvn -Dspark=2.1 -Dscala=2.11 clean package
2) Build hadoopbench and sparkbench
$ mvn -Phadoopbench -Psparkbench -Dspark=2.1 -Dscala=2.11 clean package
3) Build only the Spark SQL module
$ mvn -Psparkbench -Dmodules -Psql -Dspark=2.1 -Dscala=2.11 clean package
$ cp conf/hadoop.conf.template conf/hadoop.conf
$ vi conf/hadoop.conf
$ cp conf/spark.conf.template conf/spark.conf
$ vi conf/spark.conf
$ vi conf/hibench.conf
# Data scale profile. Available value is tiny, small, large, huge, gigantic and bigdata.
# The definition of these profiles can be found in the workload's conf file i.e. conf/workloads/micro/wordcount.conf
hibench.scale.profile bigdata
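As a sketch, minimal conf/hadoop.conf and conf/spark.conf settings might look like the following; the paths and master URL are placeholders for your own cluster, and the key names follow the shipped templates:

```
# conf/hadoop.conf (paths below are examples -- adjust to your cluster)
hibench.hadoop.home           /opt/hadoop
hibench.hdfs.master           hdfs://namenode:8020
hibench.hadoop.configure.dir  /opt/hadoop/etc/hadoop

# conf/spark.conf (master URL below is an example)
hibench.spark.home            /opt/spark
hibench.spark.master          yarn-client
```

Keys are whitespace-separated from their values, matching the format of the `.template` files.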
The SQL tests come in three kinds: scan / aggregation / join
$ bin/workloads/sql/scan/prepare/prepare.sh
$ bin/workloads/sql/scan/spark/run.sh
The detailed configuration is in conf/workloads/sql/scan.conf.
After prepare, the test data is generated under /HiBench/Scan/Input on HDFS, and a report is generated under report/scan/prepare/.
After run, a report is generated under report/scan/spark/, for example monitor.html, and the test data tables can be seen in Hive's default database.
$ bin/workloads/sql/join/prepare/prepare.sh
$ bin/workloads/sql/join/spark/run.sh
$ bin/workloads/sql/aggregation/prepare/prepare.sh
$ bin/workloads/sql/aggregation/spark/run.sh
And so on for the other workloads.
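The prepare/run pairs above follow a single naming pattern, so they can be scripted. This dry-run sketch only prints the commands it would execute; HIBENCH_HOME is an assumed variable, not something HiBench itself defines:

```shell
# Dry run: print the prepare and run scripts for each SQL workload.
# Replace `echo` with direct execution once the paths are verified.
HIBENCH_HOME="${HIBENCH_HOME:-.}"
cmds=$(for wl in scan aggregation join; do
  echo "${HIBENCH_HOME}/bin/workloads/sql/${wl}/prepare/prepare.sh"
  echo "${HIBENCH_HOME}/bin/workloads/sql/${wl}/spark/run.sh"
done)
echo "$cmds"
```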
If prepare fails with an out-of-memory error, try modifying:
$ vi bin/functions/workload_functions.sh
local CMD="${HADOOP_EXECUTABLE} --config ${HADOOP_CONF_DIR} jar $job_jar $job_name $tail_arguments"
Format: hadoop jar <jarName> <yourClassName> -D mapreduce.reduce.memory.mb=5120 -D mapreduce.reduce.java.opts=-Xmx4608m <otherArgs>
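As a sketch of that edit, the -D options can be spliced into the CMD string that workload_functions.sh assembles. The jar, class, and argument values below are placeholders, not what HiBench actually passes:

```shell
# Hypothetical values standing in for what workload_functions.sh provides.
HADOOP_EXECUTABLE="hadoop"
HADOOP_CONF_DIR="/etc/hadoop/conf"
job_jar="hibench-job.jar"           # placeholder jar name
job_name="org.example.SomeJob"      # placeholder main class
tail_arguments="input output"       # placeholder workload arguments

# Generic -D options must appear right after the class name, before the
# workload's own arguments, for GenericOptionsParser to pick them up.
MR_OPTS="-D mapreduce.reduce.memory.mb=5120 -D mapreduce.reduce.java.opts=-Xmx4608m"
CMD="${HADOOP_EXECUTABLE} --config ${HADOOP_CONF_DIR} jar $job_jar $job_name $MR_OPTS $tail_arguments"
echo "$CMD"
```

Note that -D only works this way for jobs whose main class uses Hadoop's GenericOptionsParser (ToolRunner).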
If this does not take effect, try increasing the number of map tasks instead:
$ vi bin/functions/hibench_prop_env_mapping.py
NUM_MAPS="hibench.default.map.parallelism",
$ vi conf/hibench.conf
hibench.default.map.parallelism 5000
References:
https://github.com/intel-hadoop/HiBench/blob/master/docs/build-hibench.md
https://github.com/intel-hadoop/HiBench/blob/master/docs/run-sparkbench.md