The Kafka client uses callbacks extensively to process requests. The basic idea: the callback is stored in a ClientRequest, and the ClientRequest is stashed in inFlightRequests; when the response comes back, the matching ClientRequest is retrieved from inFlightRequests and its callback is invoked to finish the processing.
inFlightRequests is thus the bridge between request and response handling.
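To make the flow concrete, here is a minimal, self-contained sketch of the same pattern. The names (InFlightDemo, Request, CompletionHandler) are mine, not Kafka's; it only illustrates how a callback travels inside the request, waits in a per-node in-flight queue, and is invoked when a response from that node arrives.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// simplified model of the callback flow, not Kafka source code
interface CompletionHandler {
    void onComplete(String response);
}

class Request {
    final String payload;
    final CompletionHandler callback; // the callback is carried by the request itself

    Request(String payload, CompletionHandler callback) {
        this.payload = payload;
        this.callback = callback;
    }
}

public class InFlightDemo {
    // requests that have been sent but not yet answered, keyed by node id
    private final Map<String, Deque<Request>> inFlight = new HashMap<>();

    void send(String node, Request request) {
        inFlight.computeIfAbsent(node, k -> new ArrayDeque<>()).addFirst(request);
        // ... the request would be written to the socket here ...
    }

    // on a response from `node`, pop the oldest in-flight request and run its callback
    void onResponse(String node, String response) {
        Request request = inFlight.get(node).pollLast();
        request.callback.onComplete(response);
    }

    public static void main(String[] args) {
        InFlightDemo client = new InFlightDemo();
        client.send("node-1", new Request("produce",
                response -> System.out.println("handled: " + response)));
        client.onResponse("node-1", "ack"); // prints "handled: ack"
    }
}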
Whether on the producer or the consumer side, the callback classes implement the RequestCompletionHandler interface:
public interface RequestCompletionHandler {
    public void onComplete(ClientResponse response);
}
The consumer's callback class not only implements RequestCompletionHandler but also extends RequestFuture. RequestFuture is a stateful class: processing a response sets its state, so any code holding a reference to the RequestFuture can use it to check how the request is progressing.
public class RequestFuture<T> {
    private boolean isDone = false;
    private T value;
    private RuntimeException exception;
    private List<RequestFutureListener<T>> listeners = new ArrayList<>();
    // other methods omitted
}
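As a rough illustration of what holding that reference buys us, the snippet below shows the two ways calling code can observe a RequestFuture. It is only a sketch against Kafka's internal classes, assuming the accessors used later in this article (addListener(..), isDone(), failed(), exception()); it is not runnable on its own.

// sketch only: leans on Kafka's internal client classes
void awaitFuture(RequestFuture<ClientResponse> future) {
    // option 1: react asynchronously once complete() or raise() sets the state
    future.addListener(new RequestFutureListener<ClientResponse>() {
        @Override
        public void onSuccess(ClientResponse response) {
            // the response arrived and complete(response) was called
        }

        @Override
        public void onFailure(RuntimeException e) {
            // raise(e) was called, e.g. on a disconnect
        }
    });

    // option 2: poll the state synchronously while the network is driven elsewhere
    while (!future.isDone()) {
        // e.g. ConsumerNetworkClient#poll(future), shown later
    }
    if (future.failed())
        throw future.exception();
}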
The producer creates its ClientRequests in the sender thread, as follows:
private List<ClientRequest> createProduceRequests(Map<Integer, List<RecordBatch>> collated, long now) {
    List<ClientRequest> requests = new ArrayList<ClientRequest>(collated.size());
    for (Map.Entry<Integer, List<RecordBatch>> entry : collated.entrySet())
        requests.add(produceRequest(now, entry.getKey(), acks, requestTimeout, entry.getValue()));
    return requests;
}

// create the request
private ClientRequest produceRequest(long now, int destination, short acks, int timeout, List<RecordBatch> batches) {
    Map<TopicPartition, ByteBuffer> produceRecordsByPartition = new HashMap<TopicPartition, ByteBuffer>(batches.size());
    final Map<TopicPartition, RecordBatch> recordsByPartition = new HashMap<TopicPartition, RecordBatch>(batches.size());
    for (RecordBatch batch : batches) {
        TopicPartition tp = batch.topicPartition;
        produceRecordsByPartition.put(tp, batch.records.buffer());
        recordsByPartition.put(tp, batch);
    }
    ProduceRequest request = new ProduceRequest(acks, timeout, produceRecordsByPartition);
    RequestSend send = new RequestSend(Integer.toString(destination),
            this.client.nextRequestHeader(ApiKeys.PRODUCE),
            request.toStruct());
    // the callback
    RequestCompletionHandler callback = new RequestCompletionHandler() {
        public void onComplete(ClientResponse response) {
            handleProduceResponse(response, recordsByPartition, time.milliseconds());
        }
    };
    // the callback is stored in the request, and the request is later stored in inFlightRequests
    return new ClientRequest(now, acks != 0, send, callback);
}
At the end of NetworkClient#poll(..), the completed responses are processed and the corresponding callbacks are invoked:
public List<ClientResponse> poll(long timeout, long now) {
    long metadataTimeout = metadataUpdater.maybeUpdate(now);
    try {
        this.selector.poll(Utils.min(timeout, metadataTimeout, requestTimeoutMs));
    } catch (IOException e) {
        log.error("Unexpected error during I/O", e);
    }

    // process completed actions
    long updatedNow = this.time.milliseconds();
    List<ClientResponse> responses = new ArrayList<>();
    handleCompletedSends(responses, updatedNow);
    handleCompletedReceives(responses, updatedNow);
    handleDisconnections(responses, updatedNow);
    handleConnections();
    handleTimedOutRequests(responses, updatedNow);

    // invoke callbacks
    for (ClientResponse response : responses) {
        // the response wraps the original request, and with it the callback
        if (response.request().hasCallback()) {
            try {
                response.request().callback().onComplete(response); // invoke the callback
            } catch (Exception e) {
                log.error("Uncaught error in request completion:", e);
            }
        }
    }

    return responses;
}
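handleCompletedReceives(..) is where inFlightRequests actually acts as that bridge: for every completed receive it pops the oldest in-flight request for the source node and pairs it with the parsed response body. Roughly (a sketch based on the 0.9.x/0.10.x client; details vary slightly between versions):

private void handleCompletedReceives(List<ClientResponse> responses, long now) {
    for (NetworkReceive receive : this.selector.completedReceives()) {
        String source = receive.source();
        // pop the oldest request in flight to this node; it still carries the callback
        ClientRequest req = inFlightRequests.completeNext(source);
        Struct body = parseResponse(receive.payload(), req.request().header());
        if (!metadataUpdater.maybeHandleCompletedReceive(req, now, body))
            responses.add(new ClientResponse(req, now, false, body));
    }
}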
The consumer uses callbacks in much the same way as the producer, but with an extra layer. As noted above, the consumer's callback class not only implements RequestCompletionHandler but also extends RequestFuture.
public static class RequestFutureCompletionHandler extends RequestFuture<ClientResponse>
        implements RequestCompletionHandler {

    @Override
    public void onComplete(ClientResponse response) {
        if (response.wasDisconnected()) {
            ClientRequest request = response.request();
            RequestSend send = request.request();
            ApiKeys api = ApiKeys.forId(send.header().apiKey());
            int correlation = send.header().correlationId();
            log.debug("Cancelled {} request {} with correlation id {} due to node {} being disconnected",
                    api, request, correlation, send.destination());
            raise(DisconnectException.INSTANCE);
        } else {
            complete(response); // the key step: complete() sets the RequestFuture's state
        }
    }
}

// the following methods are defined in RequestFuture
public void complete(T value) { // set the RequestFuture's state
    if (isDone)
        throw new IllegalStateException("Invalid attempt to complete a request future which is already complete");
    this.value = value;
    this.isDone = true;
    fireSuccess(); // notify every listener registered on this RequestFuture
}

private void fireSuccess() {
    for (RequestFutureListener<T> listener : listeners)
        listener.onSuccess(value);
}

private void fireFailure() {
    for (RequestFutureListener<T> listener : listeners)
        listener.onFailure(exception);
}
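complete(..) has a failure-side counterpart, raise(..), which the disconnect branch above calls; instead of recording a value it records the exception and notifies listeners through fireFailure(). A sketch of its shape (based on the same client version; details may differ):

public void raise(RuntimeException e) { // set the RequestFuture's failure state
    if (isDone)
        throw new IllegalStateException("Invalid attempt to complete a request future which is already complete");
    this.exception = e;
    this.isDone = true;
    fireFailure(); // notify listeners via onFailure(exception)
}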
As with the producer, the request is put into a map, except that this map is called unsent. See ConsumerNetworkClient#send(..):
public RequestFuture<ClientResponse> send(Node node, ApiKeys api, AbstractRequest request) {
    long now = time.milliseconds();
    RequestFutureCompletionHandler future = new RequestFutureCompletionHandler(); // the callback
    RequestHeader header = client.nextRequestHeader(api);
    RequestSend send = new RequestSend(node.idString(), header, request.toStruct());
    put(node, new ClientRequest(now, true, send, future)); // put the request into unsent
    return future; // and hand the caller a reference to the callback object
}
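The requests parked in unsent are only handed to the underlying NetworkClient later, during ConsumerNetworkClient#poll(..), once the target node is ready. A simplified sketch of that step (based on the 0.9.x/0.10.x client; the exact code may differ):

private boolean trySend(long now) {
    boolean requestsSent = false;
    // walk the per-node lists in unsent and push ready requests down to the NetworkClient
    for (Map.Entry<Node, List<ClientRequest>> requestEntry : unsent.entrySet()) {
        Node node = requestEntry.getKey();
        Iterator<ClientRequest> iterator = requestEntry.getValue().iterator();
        while (iterator.hasNext()) {
            ClientRequest request = iterator.next();
            if (client.ready(node, now)) {
                client.send(request, now); // from here on the request sits in inFlightRequests
                iterator.remove();
                requestsSent = true;
            }
        }
    }
    return requestsSent;
}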
Right after ConsumerNetworkClient#send(..) is called, RequestFuture#compose(..) is called on the returned future, as follows:
private RequestFuture<Void> sendGroupCoordinatorRequest() {
    Node node = this.client.leastLoadedNode();
    if (node == null) {
        return RequestFuture.noBrokersAvailable();
    } else {
        log.debug("Sending coordinator request for group {} to broker {}", groupId, node);
        GroupCoordinatorRequest metadataRequest = new GroupCoordinatorRequest(this.groupId);
        // send(..) returns a RequestFuture, on which compose(..) is then called
        return client.send(node, ApiKeys.GROUP_COORDINATOR, metadataRequest)
                .compose(new RequestFutureAdapter<ClientResponse, Void>() {
                    @Override
                    public void onSuccess(ClientResponse response, RequestFuture<Void> future) {
                        handleGroupMetadataResponse(response, future);
                    }
                });
    }
}
RequestFuture#compose(..) does two things: it creates a new RequestFuture, and it registers a listener on the original one that completes the new future when the original completes.
public <S> RequestFuture<S> compose(final RequestFutureAdapter<T, S> adapter) {
    final RequestFuture<S> adapted = new RequestFuture<S>(); // the new RequestFuture that is returned
    addListener(new RequestFutureListener<T>() { // registered in the listeners of the original RequestFuture
        @Override
        public void onSuccess(T value) {
            // when the response arrives, the listeners fire and set the state of the new RequestFuture,
            // so the caller can check that new RequestFuture to see how the response was handled
            adapter.onSuccess(value, adapted);
        }

        @Override
        public void onFailure(RuntimeException e) {
            adapter.onFailure(e, adapted);
        }
    });
    return adapted;
}
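The effect of compose(..) is easiest to see in isolation. The toy model below (MiniFuture, ComposeDemo and their methods are my names, not Kafka's) reproduces just the listener-plus-adapter mechanics: completing the original future fires the listener, the adapter converts the value, and the adapted future held by the caller becomes done.

import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Function;

// toy model of RequestFuture#compose, not Kafka source code
class MiniFuture<T> {
    private boolean done = false;
    private T value;
    private final List<BiConsumer<T, RuntimeException>> listeners = new ArrayList<>();

    void addListener(BiConsumer<T, RuntimeException> listener) {
        listeners.add(listener);
    }

    void complete(T v) { // analogous to RequestFuture#complete: set the state, then fire the listeners
        value = v;
        done = true;
        for (BiConsumer<T, RuntimeException> l : listeners)
            l.accept(v, null);
    }

    boolean isDone() { return done; }

    T value() { return value; }

    // analogous to compose: create a new future and complete it, via the adapter, when this one completes
    <S> MiniFuture<S> compose(Function<T, S> adapter) {
        MiniFuture<S> adapted = new MiniFuture<>();
        addListener((v, e) -> adapted.complete(adapter.apply(v)));
        return adapted;
    }
}

public class ComposeDemo {
    public static void main(String[] args) {
        MiniFuture<String> original = new MiniFuture<>();               // what send(..) would return
        MiniFuture<Integer> adapted = original.compose(String::length); // what the caller ends up holding

        System.out.println(adapted.isDone());      // false: no response yet
        original.complete("coordinator-response"); // the network layer completes the original future
        System.out.println(adapted.isDone());      // true: the listener completed the adapted future
        System.out.println(adapted.value());       // 20
    }
}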
So after the ClientRequest has been put into the map, what we end up holding is the new RequestFuture created inside compose(..), as in AbstractCoordinator#ensureCoordinatorReady(..):
public void ensureCoordinatorReady() {
    while (coordinatorUnknown()) {
        RequestFuture<Void> future = sendGroupCoordinatorRequest(); // ultimately the future created by compose(..)
        client.poll(future); // keep polling the future's state
        if (future.failed()) {
            if (future.isRetriable())
                client.awaitMetadataUpdate();
            else
                throw future.exception();
        } else if (coordinator != null && client.connectionFailed(coordinator)) {
            coordinatorDead();
            time.sleep(retryBackoffMs);
        }
    }
}

public void poll(RequestFuture<?> future) {
    // poll until the future is done; once the response is handled, the callback fires and sets the future's state
    while (!future.isDone())
        poll(Long.MAX_VALUE);
}
The Kafka client relies heavily on callbacks to process requests, so understanding how callbacks work is important. A reference on callbacks:
http://www.cnblogs.com/set-cookie/p/8996951.html