2. Installation and Deployment
1. Storm pseudo-distributed installation
(1) Environment preparation
1. OS: Debian 7
2. JDK 7.0
(2) Install ZooKeeper
1. Download and extract ZooKeeper:
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -zxvf zookeeper-3.4.6.tar.gz
2. Prepare the configuration file:
cd conf
cp zoo_sample.cfg zoo.cfg
3. Start ZooKeeper:
bin/zkServer.sh start
4. Verify ZooKeeper's status:
bin/zkServer.sh status
It should output:
JMX enabled by default
Using config: /home/jediael/setupfile/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: standalone
(3) Install Storm
1. Download and extract Storm:
wget http://mirror.bit.edu.cn/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
tar -zxvf apache-storm-0.9.4.tar.gz
2. Start Storm:
nohup bin/storm nimbus &
nohup bin/storm supervisor &
nohup bin/storm ui &
3. Check the processes:
jediael@jediael:~/setupfile/zookeeper-3.4.6$ jps | grep -v Jps
3235 supervisor
3356 core
3140 QuorumPeerMain
3214 nimbus
4. Open the web UI:
http://ip:8080
(4) Run a program
1. Build the chapter 1 code from Storm Blueprints: Patterns for Distributed Real-time Computation (with the changes on p. 41), package it, and upload the jar to the server.
2. Run the job:
storm jar word-count-1.0-SNAPSHOT.jar storm.blueprints.chapter1.v1.WordCountTopology wordcount-topology
3. The UI should now show one topology running.
2. Storm cluster installation
Note: install ZooKeeper first: http://blog.csdn.net/jinhong_lu/article/details/46519899
1. Download and extract Storm:
wget http://mirror.bit.edu.cn/apache/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz
tar -zxvf apache-storm-0.9.4.tar.gz
and create a symlink in the home directory:
ln -s src/apache-storm-0.9.4 storm
2. Configure Storm by adding the following to storm.yaml:
storm.zookeeper.servers:
- "gdc-nn01-test"
- "gdc-dn01-test"
- "gdc-dn02-test"
nimbus.host: "gdc-nn01-test"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
storm.local.dir: "/home/hadoop/storm/data"
# JVM settings
nimbus.childopts: "-Xmx4096m"
supervisor.childopts: "-Xmx4096m"
worker.childopts: "-Xmx3072m"
Notes:
1. About logs
When first running a Storm program you may hit all kinds of errors, and they can usually be found in the logs. In this example, the logs to watch are:
(1) The worker logs on the supervisors, under $STORM_HOME/logs. If the cluster is fine but a particular topology fails, the cause can usually be found in these worker logs. The most common errors are ClassNotFoundException and NoClassDefFoundError, both caused by missing jars; putting the missing jars into $STORM_HOME/lib fixes them.
(2) The nimbus logs, under $STORM_HOME/logs, mainly for watching the state of the whole cluster. There are four files:
access.log metrics.log nimbus.log ui.log
(3) The Kafka logs, under $KAFKA_HOME/logs, for checking whether Kafka is running normally.
2. About emitted vs. transferred (adapted from http://www.reader8.cn/jiaocheng/20120801/2057699.html)
The difference between emitted and transferred in the Storm UI
At first the emitted and transferred numbers shown in the Storm UI were not clear to me, so I searched the storm-user list and found others with the same confusion. Nathan gave a detailed answer:
The emitted column shows the number of times OutputCollector's emit method was called.
The transferred column shows the number of tuples actually transferred to the next tasks.
If bolt A emits to bolt B with an all grouping (every task receives every tuple) and bolt B runs 5 tasks, transferred will be 5 times emitted.
If bolt A emits tuples but no receiver is declared for the stream, transferred will be 0.
There was also a discussion of the relation between spout and bolt emitted counts, which cleared up more of my confusion:
Some bolts' execute() methods never emit a tuple, yet the Storm UI still shows a non-zero emitted count. That is because they call ack(), and ack() emits an ack tuple to the system's default acker bolt. So when a tuple is emitted anchored, emitted generally includes the tuples sent to the acker bolt.
Also, collector.emit(new Values(xxx)) and collector.emit(tuple, new Values(xxx)) affect the downstream bolts' emitted and transferred differently: with the former, both of those acker-related values are 0 downstream, because the unanchored emit is unreliable and the acker no longer verifies it.
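As a minimal sketch of the contrast above (Storm 0.9.x API; the bolt class and field names are illustrative, not taken from any project mentioned here):

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class PassThroughBolt extends BaseRichBolt {
    private OutputCollector collector;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("value"));
    }

    public void execute(Tuple input) {
        String value = input.getString(0);

        // Unanchored: the new tuple joins no tuple tree, so nothing is sent to
        // the acker and a downstream failure will not trigger a spout replay.
        // collector.emit(new Values(value));

        // Anchored: the new tuple is tied to the input tuple's tree, so the
        // emitted count also reflects the ack tuples sent to the acker bolt.
        collector.emit(input, new Values(value));
        collector.ack(input);
    }
}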
Important note: running topologies on Storm raises a pile of jar dependency problems. Keep a copy of your working jars so they can be imported directly into any new cluster, and put them on every machine in the cluster.
3. scp the whole storm directory to dn01, dn02, and dn03.
4. Start Storm
(1) On nn01, start nimbus and the UI:
nohup bin/storm nimbus &
nohup bin/storm ui &
(2) On dn0[123], start the supervisors:
nohup bin/storm supervisor &
5. Verify
(1) Open the UI page to check the status:
http://192.168.169.91:8080/index.html
(2) Run an example topology from the examples directory:
$ /home/hadoop/storm/bin/storm jar storm-starter-topologies-0.9.4.jar storm.starter.WordCountTopology word-count
Then check the UI again to see whether the topology was submitted successfully.
4. Configuration
The complete default configuration file, defaults.yaml, is shown below; to change anything, override it in storm.yaml. The important parameters are:
1. storm.zookeeper.servers: which ZooKeeper cluster to use
storm.zookeeper.servers:
- "gdc-nn01-test"
- "gdc-dn01-test"
- "gdc-dn02-test”
2. nimbus.host: which machine runs nimbus
nimbus.host: "gdc-nn01-test"
3. supervisor.slots.ports: which ports the supervisor uses to run workers. Each port can run one worker, so the number of ports configured determines how many slots each supervisor has (i.e., how many workers it can run):
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
storm.local.dir: "/home/hadoop/storm/data"
4. JVM settings
# JVM settings
nimbus.childopts: "-Xmx4096m"
supervisor.childopts: "-Xmx4096m"
worker.childopts: "-Xmx3072m"
Besides these, there are also ui.childopts and logviewer.childopts.
Appendix: the complete defaults.yaml
########### These all have default values as shown
########### Additional configuration goes into storm.yaml

java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"

### storm.* configs are general configurations
# the local dir is where jars are kept
storm.local.dir: "storm-local"
storm.zookeeper.servers:
    - "localhost"
storm.zookeeper.port: 2181
storm.zookeeper.root: "/storm"
storm.zookeeper.session.timeout: 20000
storm.zookeeper.connection.timeout: 15000
storm.zookeeper.retry.times: 5
storm.zookeeper.retry.interval: 1000
storm.zookeeper.retry.intervalceiling.millis: 30000
storm.cluster.mode: "distributed" # can be distributed or local
storm.local.mode.zmq: false
storm.thrift.transport: "backtype.storm.security.auth.SimpleTransportPlugin"
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.meta.serialization.delegate: "backtype.storm.serialization.DefaultSerializationDelegate"

### nimbus.* configs are for the master
nimbus.host: "localhost"
nimbus.thrift.port: 6627
nimbus.thrift.max_buffer_size: 1048576
nimbus.childopts: "-Xmx1024m"
nimbus.task.timeout.secs: 30
nimbus.supervisor.timeout.secs: 60
nimbus.monitor.freq.secs: 10
nimbus.cleanup.inbox.freq.secs: 600
nimbus.inbox.jar.expiration.secs: 3600
nimbus.task.launch.secs: 120
nimbus.reassign: true
nimbus.file.copy.expiration.secs: 600
nimbus.topology.validator: "backtype.storm.nimbus.DefaultTopologyValidator"

### ui.* configs are for the master
ui.port: 8080
ui.childopts: "-Xmx768m"

logviewer.port: 8000
logviewer.childopts: "-Xmx128m"
logviewer.appender.name: "A1"

drpc.port: 3772
drpc.worker.threads: 64
drpc.queue.size: 128
drpc.invocations.port: 3773
drpc.request.timeout.secs: 600
drpc.childopts: "-Xmx768m"

transactional.zookeeper.root: "/transactional"
transactional.zookeeper.servers: null
transactional.zookeeper.port: null

### supervisor.* configs are for node supervisors
# Define the amount of workers that can be run on this machine. Each worker is assigned a port to use for communication
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
supervisor.childopts: "-Xmx256m"
# how long supervisor will wait to ensure that a worker process is started
supervisor.worker.start.timeout.secs: 120
# how long between heartbeats until supervisor considers that worker dead and tries to restart it
supervisor.worker.timeout.secs: 30
# how frequently the supervisor checks on the status of the processes it's monitoring and restarts if necessary
supervisor.monitor.frequency.secs: 3
# how frequently the supervisor heartbeats to the cluster state (for nimbus)
supervisor.heartbeat.frequency.secs: 5
supervisor.enable: true

### worker.* configs are for task workers
worker.childopts: "-Xmx768m"
worker.heartbeat.frequency.secs: 1

# control how many worker receiver threads we need per worker
topology.worker.receiver.thread.count: 1

task.heartbeat.frequency.secs: 3
task.refresh.poll.secs: 10

zmq.threads: 1
zmq.linger.millis: 5000
zmq.hwm: 0

storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880 #5MB buffer
# Since nimbus.task.launch.secs and supervisor.worker.start.timeout.secs are 120, other workers should also wait at least that long before giving up on connecting to the other worker. The reconnection period need also be bigger than storm.zookeeper.session.timeout(default is 20s), so that we can abort the reconnection when the target worker is dead.
storm.messaging.netty.max_retries: 300
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
# If the Netty messaging layer is busy(netty internal buffer not writable), the Netty client will try to batch message as more as possible up to the size of storm.messaging.netty.transfer.batch.size bytes, otherwise it will try to flush message as soon as possible to reduce latency.
storm.messaging.netty.transfer.batch.size: 262144
# We check with this interval that whether the Netty channel is writable and try to write pending messages if it is.
storm.messaging.netty.flush.check.interval.ms: 10

### topology.* configs are for specific executing storms
topology.enable.message.timeouts: true
topology.debug: false
topology.workers: 1
topology.acker.executors: null
topology.tasks: null
# maximum amount of time a message has to complete before it's considered failed
topology.message.timeout.secs: 30
topology.multilang.serializer: "backtype.storm.multilang.JsonSerializer"
topology.skip.missing.kryo.registrations: false
topology.max.task.parallelism: null
topology.max.spout.pending: null
topology.state.synchronization.timeout.secs: 60
topology.stats.sample.rate: 0.05
topology.builtin.metrics.bucket.size.secs: 60
topology.fall.back.on.java.serialization: true
topology.worker.childopts: null
topology.executor.receive.buffer.size: 1024 #batched
topology.executor.send.buffer.size: 1024 #individual messages
topology.receiver.buffer.size: 8 # setting it too high causes a lot of problems (heartbeat thread gets starved, throughput plummets)
topology.transfer.buffer.size: 1024 # batched
topology.tick.tuple.freq.secs: null
topology.worker.shared.thread.pool.size: 4
topology.disruptor.wait.strategy: "com.lmax.disruptor.BlockingWaitStrategy"
topology.spout.wait.strategy: "backtype.storm.spout.SleepSpoutWaitStrategy"
topology.sleep.spout.wait.strategy.time.ms: 1
topology.error.throttle.interval.secs: 10
topology.max.error.report.per.interval: 5
topology.kryo.factory: "backtype.storm.serialization.DefaultKryoFactory"
topology.tuple.serializer: "backtype.storm.serialization.types.ListDelegateSerializer"
topology.trident.batch.emit.interval.millis: 500
topology.classpath: null
topology.environment: null

dev.zookeeper.path: "/tmp/dev-storm-zookeeper"
6. API
(1) An example
This example runs the classic word count program on Storm. The topology is:
sentence-spout—>split-bolt—>count-bolt—>report-bolt
The components respectively generate sentences, split them into words, count the words, and output the counts.
Full code: https://github.com/jinhong-lu/stormdemo
The key pieces of code are analyzed below.
1. Create the spout
public class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private int index = 0;
    private String[] sentences = {
            "when i was young i'd listen to the radio",
            "waiting for my favorite songs",
            "when they played i'd sing along",
            "it make me smile",
            "those were such happy times and not so long ago",
            "how i wondered where they'd gone",
            "but they're back again just like a long lost friend",
            "all the songs i love so well",
            "every shalala every wo'wo",
            "still shines.",
            "every shing-a-ling-a-ling",
            "that they're starting",
            "to sing so fine"};

    public void open(Map conf, TopologyContext context,
            SpoutOutputCollector collector) {
        this.collector = collector;
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }

    public void nextTuple() {
        this.collector.emit(new Values(sentences[index]));
        index++;
        if (index >= sentences.length) {
            index = 0;
        }
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            //e.printStackTrace();
        }
    }
}
This class emits the contents of the string array line by line. Its main methods are:
(1) open() does the spout's initialization work, similar to a bolt's prepare() method.
(2) declareOutputFields() defines the names and number of the emitted fields; the method has the same name in a bolt.
(3) nextTuple() runs for each piece of data to be processed, similar to a bolt's execute() method. It is the core of the processing logic; emit() sends the data on to the next node in the topology.
2. Create the split bolt
public class SplitSentenceBolt extends BaseRichBolt {
    private OutputCollector collector;

    public void prepare(Map stormConf, TopologyContext context,
            OutputCollector collector) {
        this.collector = collector;
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

    public void execute(Tuple input) {
        String sentence = input.getStringByField("sentence");
        String[] words = sentence.split(" ");
        for (String word : words) {
            this.collector.emit(new Values(word));
            //System.out.println(word);
        }
    }
}
The three methods mean much the same as in the spout. This class splits each received sentence on spaces into individual words, then emits the words one by one.
input.getStringByField("sentence") retrieves a value by the field name declared by the upstream node.
3. Create the word count bolt
public class WordCountBolt extends BaseRichBolt {
    private OutputCollector collector;
    private Map<String, Long> counts = null;

    public void prepare(Map stormConf, TopologyContext context,
            OutputCollector collector) {
        this.collector = collector;
        this.counts = new HashMap<String, Long>();
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }

    public void execute(Tuple input) {
        String word = input.getStringByField("word");
        Long count = this.counts.get(word);
        if (count == null) {
            count = 0L;
        }
        count++;
        this.counts.put(word, count);
        this.collector.emit(new Values(word, count));
        //System.out.println(count);
    }
}
This class counts the received words and emits the results.
This bolt emits two fields:
declarer.declare(new Fields("word","count"));
this.collector.emit(new Values(word,count));
4. Create the report bolt
public class ReportBolt extends BaseRichBolt {
    private Map<String, Long> counts;

    public void prepare(Map stormConf, TopologyContext context,
            OutputCollector collector) {
        this.counts = new HashMap<String, Long>();
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }

    public void execute(Tuple input) {
        String word = input.getStringByField("word");
        Long count = input.getLongByField("count");
        counts.put(word, count);
    }

    public void cleanup() {
        System.out.println("Final output");
        Iterator<Entry<String, Long>> iter = counts.entrySet().iterator();
        while (iter.hasNext()) {
            Entry<String, Long> entry = iter.next();
            String word = (String) entry.getKey();
            Long count = (Long) entry.getValue();
            System.out.println(word + " : " + count);
        }
        super.cleanup();
    }
}
This class outputs the data received from the word count bolt.
Results are first stored in a map; when the topology is shut down, cleanup() is called, and the map's contents are printed.
5. Create the topology
public class WordCountTopology {
    private static final String SENTENCE_SPOUT_ID = "sentence-spout";
    private static final String SPLIT_BOLT_ID = "split-bolt";
    private static final String COUNT_BOLT_ID = "count-bolt";
    private static final String REPORT_BOLT_ID = "report-bolt";
    private static final String TOPOLOGY_NAME = "word-count-topology";

    public static void main(String[] args) {
        SentenceSpout spout = new SentenceSpout();
        SplitSentenceBolt splitBolt = new SplitSentenceBolt();
        WordCountBolt countBolt = new WordCountBolt();
        ReportBolt reportBolt = new ReportBolt();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout(SENTENCE_SPOUT_ID, spout);
        builder.setBolt(SPLIT_BOLT_ID, splitBolt).shuffleGrouping(
                SENTENCE_SPOUT_ID);
        builder.setBolt(COUNT_BOLT_ID, countBolt).fieldsGrouping(SPLIT_BOLT_ID,
                new Fields("word"));
        builder.setBolt(REPORT_BOLT_ID, reportBolt).globalGrouping(
                COUNT_BOLT_ID);

        Config conf = new Config();
        if (args.length == 0) {
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(TOPOLOGY_NAME, conf,
                    builder.createTopology());
            try {
                Thread.sleep(10000);
            } catch (InterruptedException e) {
            }
            cluster.killTopology(TOPOLOGY_NAME);
            cluster.shutdown();
        } else {
            try {
                StormSubmitter.submitTopology(args[0], conf,
                        builder.createTopology());
            } catch (AlreadyAliveException e) {
                e.printStackTrace();
            } catch (InvalidTopologyException e) {
                e.printStackTrace();
            }
        }
    }
}
The key steps are:
(1) Create a TopologyBuilder and register the spout and bolts with it:
builder.setSpout(SENTENCE_SPOUT_ID, spout);
builder.setBolt(SPLIT_BOLT_ID, splitBolt).shuffleGrouping(
SENTENCE_SPOUT_ID);
builder.setBolt(COUNT_BOLT_ID, countBolt).fieldsGrouping(SPLIT_BOLT_ID,
new Fields("word"));
builder.setBolt(REPORT_BOLT_ID, reportBolt).globalGrouping(
COUNT_BOLT_ID);
(2) Create a Config object:
Config conf = new Config();
This object sets topology-level properties such as parallelism, the nimbus address, and so on.
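For instance, a few common settings (illustrative; this example's code does not set them):

Config conf = new Config();
conf.setNumWorkers(2);          // total worker processes for this topology
conf.setMaxSpoutPending(1000);  // cap on pending (un-acked) tuples per spout task
conf.setDebug(true);            // log every emitted tuple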
(3) Create and run the topology. Two modes are used here.
First, with no arguments, create a LocalCluster and run the topology locally; after 10 seconds, kill it and shut the cluster down:
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(TOPOLOGY_NAME, conf,builder.createTopology());
Thread.sleep(10000);
cluster.killTopology(TOPOLOGY_NAME);
cluster.shutdown();
Second, with arguments, submit the topology to the cluster:
StormSubmitter.submitTopology(args[0], conf,builder.createTopology());
The first argument is the topology name.
6. Running locally
Just run it in Eclipse; the output appears in the console.
7. Running on the cluster
(1) Compile and package:
mvn clean package
(2) Upload the built jar to the nimbus machine, then run:
storm jar com.ljh.storm.5_stormdemo com.ljh.storm.wordcount.WordCountTopology topology_name
to submit the topology to the cluster.
7. Commands
The complete set of storm commands:
Commands:
activate: storm activate test-topo, activates a topology
classpath: storm classpath, prints the classpath Storm uses when running a topology
deactivate: storm deactivate test-topo, deactivates a topology
dev-zookeeper: launches a fresh ZooKeeper for development use
drpc: storm drpc, starts the DRPC daemon
help:
jar: storm jar **.jar ** (main class name) args..., launches a topology
kill: storm kill test-topo, kills a topology
list: storm list, lists running topologies and their status
localconfvalue: storm localconfvalue conf-name, prints the value of conf-name in the local config
logviewer: storm logviewer, starts the logviewer so logs can be browsed from the UI
monitor:
nimbus: storm nimbus, starts the nimbus daemon
rebalance: used to adjust parallelism; see the parallelism section
remoteconfvalue: storm remoteconfvalue conf-name, prints the value of conf-name in the remote cluster config
repl:
shell: runs a shell script
supervisor: storm supervisor, starts the supervisor daemon
ui: storm ui, starts the UI
version: storm version, prints the version
Help:
help
help <command>
For details see: http://storm.incubator.apache.org/documentation/Command-line-client.html
Configs can be overridden using one or more -c flags, e.g. "storm list -c nimbus.host=nimbus.mycompany.com"
8. Parallelism
(1) The parallelism of a Storm topology can be set along four dimensions:
1. node (server): the number of supervisor machines in the Storm cluster.
2. worker (JVM process): the total number of worker processes in the topology, distributed evenly across the nodes.
3. executor (thread): the total number of threads for a given spout or bolt, distributed evenly across the workers.
4. task (spout/bolt instance): a task is an instance of a spout or bolt; its nextTuple() and execute() methods are called by executor threads. Unless specified otherwise, Storm assigns one task per executor; if more tasks are configured, one thread holds several spout/bolt instances.
Note: these settings are all totals, distributed evenly across their hosts; they do not set per-host process/thread counts. See the example below.
(2) How to set the parallelism
1. node: buy machines and add them to the cluster...
2. worker: Config#setNumWorkers(), or the TOPOLOGY_WORKERS config option
3. executor: the parallelism hint passed to TopologyBuilder#setSpout()/#setBolt()
4. task: ComponentConfigurationDeclarer#setNumTasks()
(3) Example:
1. config.setNumWorkers(3) sets the worker count to 3; with 3 nodes in the cluster, each node runs one worker.
2. The executor counts are:
spout:5
filter-bolt:3
log-splitter:3
hdfs-bolt:2
That is 13 executors in total, distributed across the workers.
Note: this code reads its input from Kafka, and the topic has 5 partitions in Kafka, hence the 5 spout threads.
3. The example never sets task counts explicitly, i.e., it uses the default of one task per executor. To set them, you can write:
builder.setBolt("log-splitter", new LogSplitterBolt(), 3)
.shuffleGrouping("filter-bolt").setNumTasks(5);
and those 5 tasks will be distributed among the 3 executors.
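A sketch tying these numbers together (MyKafkaSpout, FilterBolt, LogSplitterBolt, HdfsBolt, and the topology name are placeholders, not the actual code of this example):

Config conf = new Config();
conf.setNumWorkers(3);  // 3 worker JVMs in total, one per node here

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafka-spout", new MyKafkaSpout(), 5);  // 5 executors, one per Kafka partition
builder.setBolt("filter-bolt", new FilterBolt(), 3)
        .shuffleGrouping("kafka-spout");
builder.setBolt("log-splitter", new LogSplitterBolt(), 3)  // 3 executors ...
        .shuffleGrouping("filter-bolt")
        .setNumTasks(5);                                   // ... sharing 5 tasks
builder.setBolt("hdfs-bolt", new HdfsBolt(), 2)
        .shuffleGrouping("log-splitter");
// 5 + 3 + 3 + 2 = 13 executors spread across the 3 workers
StormSubmitter.submitTopology("log-topology", conf, builder.createTopology());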
(4) Adjusting parallelism dynamically
There are two ways to adjust a Storm topology's parallelism:
1. kill the topology -> change the code -> recompile -> resubmit
2. adjust it dynamically
The first method is far too inconvenient: sometimes a topology cannot simply be killed, and if you add a few machines, must you really kill every topology and edit code?
So Storm provides dynamic adjustment, in two forms:
1. Via the UI: open a topology's page and click rebalance; the topology's status becomes rebalancing. This only redistributes the existing processes and threads across the machines, so it suits adding or removing machines; it cannot change the worker or executor counts.
2. Via the CLI: storm rebalance
For example:
storm rebalance toponame -n 7 -e filter-bolt=6 -e hdfs-bolt=8
sets the topology's worker count to 7 and the executor counts of filter-bolt and hdfs-bolt to 6 and 8 respectively.
While this runs, the topology's status shows rebalancing; when it finishes, the three machines run 3, 2, and 2 workers respectively.
9. Grouping
Storm uses groupings to direct data flow: a grouping specifies which stream each bolt consumes and how it consumes it.
Storm has seven built-in groupings and provides CustomStreamGrouping for defining your own; a wiring sketch follows the list below.
1. Shuffle grouping: shuffleGrouping
Distributes tuples randomly across the bolt's tasks, so each task receives the same number of tuples.
2. Fields grouping: fieldsGrouping
Groups tuples by the specified field: tuples with the same value for that field go to the same task; different values may go to different tasks.
3. All grouping (broadcast grouping): allGrouping
Every tuple is sent to all tasks; use with care.
4. Global grouping: globalGrouping
All tuples go to a single task, the one with the lowest task ID. Under this grouping, setting the component's parallelism is pointless, and it can easily become a bottleneck.
5. None grouping: noneGrouping
Reserved for future use; currently identical to shuffle grouping.
6. Direct grouping: directGrouping
The emitting component calls emitDirect() to decide which task receives each tuple; usable only on streams declared as direct.
7. Local-or-shuffle grouping: localOrShuffleGrouping
If the receiving bolt has one or more tasks in the same worker process, tuples go to those in-process tasks first; otherwise it behaves like shuffle grouping. Compared to shuffle grouping, this reduces network transfer and so improves performance.
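A sketch of how these groupings are declared when wiring a topology (MySpout, MyBolt, and MyCustomGrouping are placeholders):

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new MySpout());
builder.setBolt("bolt-a", new MyBolt(), 4).shuffleGrouping("spout");
builder.setBolt("bolt-b", new MyBolt(), 4).fieldsGrouping("spout", new Fields("word"));
builder.setBolt("bolt-c", new MyBolt(), 4).allGrouping("spout");
builder.setBolt("bolt-d", new MyBolt()).globalGrouping("spout");
builder.setBolt("bolt-e", new MyBolt(), 4).noneGrouping("spout");
builder.setBolt("bolt-f", new MyBolt(), 4).directGrouping("spout");  // stream must be declared direct
builder.setBolt("bolt-g", new MyBolt(), 4).localOrShuffleGrouping("spout");
builder.setBolt("bolt-h", new MyBolt(), 4).customGrouping("spout", new MyCustomGrouping());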
10. Reliability
References: Storm Blueprints, p. 20; 从零开始学Storm, p. 40.
Reliability: a message emitted by the spout must be acked by every node in its tuple tree; otherwise it will be re-sent indefinitely.
Two things trigger a re-send:
(1) fail() is called
(2) a timeout with no response.
For a complete reliability example, see the chapter 1 v4 code of Storm Blueprints (or p. 22), or the example on p. 102 of 从零开始学Storm. The key steps are listed here; a consolidated spout sketch follows.
(1) Spout
1. Create a map recording the id and content of every tuple sent; this is the list of tuples awaiting acknowledgment.
private ConcurrentHashMap<UUID, Values> pending;
2. When emitting a tuple, pass an extra argument carrying its id, and put the tuple into the map to await acknowledgment.
UUID msgId = UUID.randomUUID();
this.pending.put(msgId, values);
this.collector.emit(values, msgId);
3. Define the ack and fail methods. ack removes the tuple from the map:
this.pending.remove(msgId);
fail re-sends it:
this.collector.emit(this.pending.get(msgId), msgId);
Tuples that receive no reply are periodically re-sent.
(2) Bolt
Every bolt that processes the tuple needs two additions:
1. When emitting, pass the input tuple as an anchor:
collector.emit(tuple, new Values(word));
2. Acknowledge the received tuple once it has been processed:
this.collector.ack(tuple);
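A consolidated sketch of the spout side of these steps, against the Storm 0.9.x API (the class name and sentence source are illustrative):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class ReliableSentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    // tuples sent but not yet acked, keyed by message id
    private ConcurrentHashMap<UUID, Values> pending;

    public void open(Map conf, TopologyContext context,
            SpoutOutputCollector collector) {
        this.collector = collector;
        this.pending = new ConcurrentHashMap<UUID, Values>();
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }

    public void nextTuple() {
        Values values = new Values("the quick brown fox");  // illustrative source
        UUID msgId = UUID.randomUUID();
        this.pending.put(msgId, values);     // remember it until acked
        this.collector.emit(values, msgId);  // emit with a message id
    }

    @Override
    public void ack(Object msgId) {
        this.pending.remove(msgId);  // fully processed, forget it
    }

    @Override
    public void fail(Object msgId) {
        // re-emit the stored tuple with the same id
        this.collector.emit(this.pending.get(msgId), msgId);
    }
}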