In "OkHttp3 Source Code and Design Patterns (Part 1)" we covered the overall source architecture of OkHttp3, focusing on how request tasks are dispatched, the thread pool, and the interceptors that run during request execution. In this part we move down a level and look at how OkHttp3's underlying connections and connection pool work.
RealCall encapsulates the request process and organizes the user-supplied and built-in interceptors. Among the built-in ones, RetryAndFollowUpInterceptor -> BridgeInterceptor -> CacheInterceptor handle most of the execution-layer logic, while ConnectInterceptor -> CallServerInterceptor step down into the connection layer and finally perform the network request.
ConnectInterceptor's job is simple: it opens the connection. CallServerInterceptor, the last interceptor on the core chain, writes request data to and reads response data from that connection.
/** Opens a connection to the target server and proceeds to the next interceptor. */
public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;

  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");

    // Open the connection.
    HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    // Hand off to the next interceptor.
    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}
Taken on its own, ConnectInterceptor's code is simple; the interesting part is what happens inside streamAllocation.newStream(client, doExtensiveHealthChecks), where the connection is actually opened. Before diving in, let's look at the role of the StreamAllocation class as a whole.
StreamAllocation sits between the upper request layer and the underlying connection pool and coordinates between the two. So where is a StreamAllocation created? Back in RetryAndFollowUpInterceptor, the topmost interceptor on the core chain, introduced in the previous article:
@Override public Response intercept(Chain chain) throws IOException {
  Request request = chain.request();
  streamAllocation = new StreamAllocation(
      client.connectionPool(), createAddress(request.url()), callStackTrace);
  // ... code omitted
}
So each request creates one StreamAllocation object. Now a question arises: we said earlier that each OkHttpClient has exactly one connection pool, and we just said that StreamAllocation opens connections, so where does StreamAllocation get the pool? It is tempting to look for pool-creation logic inside StreamAllocation, but you won't find it there. The pool is created in OkHttpClient:
public Builder() {
  dispatcher = new Dispatcher();
  protocols = DEFAULT_PROTOCOLS;
  connectionSpecs = DEFAULT_CONNECTION_SPECS;
  eventListenerFactory = EventListener.factory(EventListener.NONE);
  proxySelector = ProxySelector.getDefault();
  cookieJar = CookieJar.NO_COOKIES;
  socketFactory = SocketFactory.getDefault();
  hostnameVerifier = OkHostnameVerifier.INSTANCE;
  certificatePinner = CertificatePinner.DEFAULT;
  proxyAuthenticator = Authenticator.NONE;
  authenticator = Authenticator.NONE;
  // Create the connection pool.
  connectionPool = new ConnectionPool();
  dns = Dns.SYSTEM;
  followSslRedirects = true;
  followRedirects = true;
  retryOnConnectionFailure = true;
  connectTimeout = 10_000;
  readTimeout = 10_000;
  writeTimeout = 10_000;
  pingInterval = 0;
}
The pool is created in OkHttpClient's default Builder constructor. This also means that if the default pool does not suit your needs, you can supply your own through Builder#connectionPool(ConnectionPool).
Now that we understand how StreamAllocation and ConnectionPool are created, let's look at how StreamAllocation opens a connection. Reading the source cold can be disorienting, so we'll go through it point by point.
You probably know from HTTP basics (if not, catch up first, or the rest won't make sense) that HTTP runs on top of TCP, and that opening and closing TCP connections carries a real performance cost. In HTTP/1.0, every request opened its own connection, so on old browsers limited to HTTP/1.0 the experience was very poor. HTTP/1.1 introduced persistent connections: after a request completes, the connection can stay open and be reused by the next request, which greatly improves connection utilization. Of course a connection cannot simply stay open forever, or idle connections would pile up and be wasted. So how are they managed?
In OkHttp3's default implementation, a doubly-ended queue caches all connections; at most 5 of them may be idle, and an idle connection may live for at most 5 minutes.
public final class ConnectionPool {
  /**
   * Background threads are used to cleanup expired connections. There will be at most a single
   * thread running per connection pool. The thread pool executor permits the pool itself to be
   * garbage collected.
   */
  // Thread pool for the background task that cleans up connections.
  private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
      Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
      new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp ConnectionPool", true));

  /** The maximum number of idle connections for each address. */
  private final int maxIdleConnections;
  private final long keepAliveDurationNs;

  // The recurring task that cleans up connections in the background.
  private final Runnable cleanupRunnable = new Runnable() {
    @Override public void run() {
      while (true) {
        // cleanup() does the actual work.
        long waitNanos = cleanup(System.nanoTime());
        if (waitNanos == -1) return;
        if (waitNanos > 0) {
          long waitMillis = waitNanos / 1000000L;
          waitNanos -= (waitMillis * 1000000L);
          synchronized (ConnectionPool.this) {
            try {
              ConnectionPool.this.wait(waitMillis, (int) waitNanos);
            } catch (InterruptedException ignored) {
            }
          }
        }
      }
    }
  };
// The deque that stores the connections.
private final Deque<RealConnection> connections = new ArrayDeque<>();
void put(RealConnection connection) {
  assert (Thread.holdsLock(this));
  if (!cleanupRunning) {
    cleanupRunning = true;
    executor.execute(cleanupRunnable);
  }
  connections.add(connection);
}
RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
  assert (Thread.holdsLock(this));
  for (RealConnection connection : connections) {
    if (connection.isEligible(address, route)) {
      streamAllocation.acquire(connection);
      return connection;
    }
  }
  return null;
}
The logic in ConnectionPool is fairly simple: it provides a deque for storing and retrieving connections, plus a scheduled task that periodically cleans up unused ones. The logic for creating and reusing connections lives in StreamAllocation.
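The eviction policy that cleanupRunnable enforces can be sketched in plain Java. This is a hypothetical model, not OkHttp code: the class PoolCleanupSketch and the names MAX_IDLE and KEEP_ALIVE_NS are invented for illustration, and the real cleanup() additionally distinguishes in-use from idle connections via reference counts.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical model of ConnectionPool.cleanup(): evict the longest-idle
// connection when the idle count exceeds the limit or the keep-alive expires;
// otherwise report how long the cleanup thread should sleep.
class PoolCleanupSketch {
    static final int MAX_IDLE = 5;
    static final long KEEP_ALIVE_NS = 5L * 60 * 1_000_000_000L; // 5 minutes

    /** Idle-since timestamps (nanos), one entry per pooled idle connection. */
    final Deque<Long> idleSinceNanos = new ArrayDeque<>();

    /**
     * Returns -1 if a connection was evicted (caller loops again immediately),
     * otherwise the number of nanos to wait before the next cleanup pass.
     */
    long cleanup(long nowNanos) {
        long longestIdle = -1;
        Long longestEntry = null;
        for (Long since : idleSinceNanos) {
            long idle = nowNanos - since;
            if (idle > longestIdle) {
                longestIdle = idle;
                longestEntry = since;
            }
        }
        if (longestIdle >= KEEP_ALIVE_NS || idleSinceNanos.size() > MAX_IDLE) {
            idleSinceNanos.remove(longestEntry); // evict, then run again at once
            return -1;
        }
        if (!idleSinceNanos.isEmpty()) {
            return KEEP_ALIVE_NS - longestIdle;  // sleep until the oldest expires
        }
        return KEEP_ALIVE_NS; // empty pool (the real code returns -1 and stops)
    }
}
```

This mirrors why the real run() loop alternates between calling cleanup() and waiting: after an eviction it loops immediately, and otherwise it sleeps exactly until the oldest idle connection would time out.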
private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
    int writeTimeout, boolean connectionRetryEnabled, boolean doExtensiveHealthChecks)
    throws IOException {
  while (true) {
    // The core logic is in findConnection().
    RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
        connectionRetryEnabled);

    // If this is a brand new connection, we can skip the extensive health checks.
    synchronized (connectionPool) {
      if (candidate.successCount == 0) {
        return candidate;
      }
    }

    // Do a (potentially slow) check to confirm that the pooled connection is still good. If it
    // isn't, take it out of the pool and start again.
    if (!candidate.isHealthy(doExtensiveHealthChecks)) {
      noNewStreams();
      continue;
    }

    return candidate;
  }
}
findConnection():
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
    boolean connectionRetryEnabled) throws IOException {
  Route selectedRoute;
  synchronized (connectionPool) {
    // ... code omitted

    // Attempt to get a connection from the pool. Internal.instance delegates to the
    // ConnectionPool instance.
    Internal.instance.get(connectionPool, address, this, null);
    if (connection != null) {
      // Reuse this connection.
      return connection;
    }

    // ... code omitted

    // Create a new connection.
    result = new RealConnection(connectionPool, selectedRoute);
    // Reference counting.
    acquire(result);
  }

  synchronized (connectionPool) {
    // Pool the connection.
    Internal.instance.put(connectionPool, result);
  }

  // ... code omitted
  return result;
}
StreamAllocation's main job is to supply a connection to the upper layer: if the pool holds a reusable connection it is reused, otherwise a new one is created. Either way, a reference must be recorded against the connection.
public void acquire(RealConnection connection) {
  assert (Thread.holdsLock(connectionPool));
  if (this.connection != null) throw new IllegalStateException();

  this.connection = connection;
  // The connection records each reference in its allocations list.
  connection.allocations.add(new StreamAllocationReference(this, callStackTrace));
}
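The bookkeeping behind acquire() can be modeled with a small stand-alone sketch. This is hypothetical code, not OkHttp API: ConnectionRefSketch and its methods are invented names, and the real allocations list holds StreamAllocationReference objects (weak references to StreamAllocation, so leaked callers don't pin a connection forever).

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of RealConnection's reference counting: the connection
// tracks its users via weak references; when the list is empty the connection
// is idle and becomes a candidate for the pool's cleanup task.
class ConnectionRefSketch {
    final List<WeakReference<Object>> allocations = new ArrayList<>();

    void acquire(Object streamAllocation) {
        allocations.add(new WeakReference<>(streamAllocation));
    }

    void release(Object streamAllocation) {
        allocations.removeIf(ref -> ref.get() == streamAllocation);
    }

    boolean isIdle() {
        return allocations.isEmpty();
    }
}
```

This is how the cleanup task decides which connections count as "idle": not by a timer alone, but by whether any StreamAllocation still holds a live reference.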
RealConnection encapsulates the underlying socket connection and uses Okio for reading and writing data. Okio is another standalone open-source project from Square; it is worth reading in depth if you are interested, but we won't expand on it here.
/** Does all the work necessary to build a full HTTP or HTTPS connection on a raw socket. */
private void connectSocket(int connectTimeout, int readTimeout) throws IOException {
  Proxy proxy = route.proxy();
  Address address = route.address();

  rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
      ? address.socketFactory().createSocket()
      : new Socket(proxy);
  rawSocket.setSoTimeout(readTimeout);
  try {
    // Open the socket connection.
    Platform.get().connectSocket(rawSocket, route.socketAddress(), connectTimeout);
  } catch (ConnectException e) {
    ConnectException ce = new ConnectException("Failed to connect to " + route.socketAddress());
    ce.initCause(e);
    throw ce;
  }

  // Once the socket is connected, all further reads and writes go through Okio.
  source = Okio.buffer(Okio.source(rawSocket));
  sink = Okio.buffer(Okio.sink(rawSocket));
}
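The shape of connectSocket() can be reproduced with only java.net and java.io, with buffered streams standing in for Okio's source and sink. This is an illustrative sketch, not OkHttp code: RawSocketSketch and its one-shot echo server are invented so the example is self-contained.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch mirroring connectSocket(): open a raw socket with
// connect/read timeouts, then wrap its streams in buffers for later I/O.
class RawSocketSketch {
    static String roundTrip(int port, String line) throws IOException {
        Socket rawSocket = new Socket();
        rawSocket.setSoTimeout(10_000); // read timeout, like setSoTimeout(readTimeout)
        rawSocket.connect(new InetSocketAddress("127.0.0.1", port), 10_000); // connect timeout

        // Buffered streams play the role of Okio's source and sink.
        BufferedWriter sink = new BufferedWriter(
                new OutputStreamWriter(rawSocket.getOutputStream()));
        BufferedReader source = new BufferedReader(
                new InputStreamReader(rawSocket.getInputStream()));

        sink.write(line);
        sink.newLine();
        sink.flush();
        String reply = source.readLine();
        rawSocket.close();
        return reply;
    }

    /** One-shot echo server used only to make the example runnable locally. */
    static ServerSocket startEchoServer() throws IOException {
        ServerSocket server = new ServerSocket(0); // any free local port
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 BufferedWriter out = new BufferedWriter(new OutputStreamWriter(s.getOutputStream()))) {
                out.write(in.readLine());
                out.newLine();
                out.flush();
            } catch (IOException ignored) {
            }
        });
        echo.setDaemon(true);
        echo.start();
        return server;
    }
}
```

The point of the buffering (in Okio as here) is that later request writes and response reads operate on in-memory buffers rather than issuing a syscall per byte.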