This article looks at the two modes of reactor-netty's PoolResources: elastic and fixed.
TcpResources is a utility class that can be used to create loopResources and poolResources.
The loopResources part mainly creates the NioEventLoopGroup and the workerCount NioEventLoops under that group (two parameters are involved here: the worker thread count and the selector thread count).
The poolResources part mainly creates channelPools, of type ConcurrentMap<SocketAddress, Pool>; here we focus on its two modes, elastic and fixed.
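As a quick orientation before diving into the sources, here is a minimal sketch of creating both kinds of resources directly through the static factories discussed in this article (the names and the connection cap are arbitrary placeholders):

import reactor.ipc.netty.resources.LoopResources;
import reactor.ipc.netty.resources.PoolResources;

public class ResourcesSketch {
    public static void main(String[] args) {
        // event-loop resources; the prefix is used for the NioEventLoop thread names
        LoopResources loops = LoopResources.create("reactor-demo");

        // elastic mode: unbounded pool backed by SimpleChannelPool
        PoolResources elastic = PoolResources.elastic("demo-elastic");

        // fixed mode: at most 16 channels per remote address, backed by FixedChannelPool
        PoolResources fixed = PoolResources.fixed("demo-fixed", 16);
    }
}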
reactor-netty-0.7.5.RELEASE-sources.jar!/reactor/ipc/netty/resources/DefaultPoolResources.java
It implements the ChannelPool interface (netty-transport-4.1.22.Final-sources.jar!/io/netty/channel/pool/ChannelPool.java); focus on the following methods:
    @Override
    public Future<Channel> acquire() {
        return acquire(defaultGroup.next().newPromise());
    }

    @Override
    public Future<Channel> acquire(Promise<Channel> promise) {
        return pool.acquire(promise).addListener(this);
    }

    @Override
    public Future<Void> release(Channel channel) {
        return pool.release(channel);
    }

    @Override
    public Future<Void> release(Channel channel, Promise<Void> promise) {
        return pool.release(channel, promise);
    }

    @Override
    public void close() {
        if (compareAndSet(false, true)) {
            pool.close();
        }
    }
These methods basically delegate the operations to the concrete pool, whose main implementations are SimpleChannelPool and FixedChannelPool.
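Since the Pool above simply forwards to the underlying ChannelPool, the caller-side contract is just Netty's acquire/release cycle. A minimal sketch of that cycle, assuming pool is some already-constructed ChannelPool:

import io.netty.channel.Channel;
import io.netty.channel.pool.ChannelPool;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

public class AcquireReleaseSketch {

    static void acquireAndRelease(final ChannelPool pool) {
        Future<Channel> acquired = pool.acquire();
        acquired.addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) {
                if (future.isSuccess()) {
                    Channel ch = future.getNow();
                    // ... use the channel, then hand it back to the pool
                    pool.release(ch);
                } else {
                    // acquisition failed (e.g. connect error or, for FixedChannelPool, a timeout)
                    future.cause().printStackTrace();
                }
            }
        });
    }
}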
SimpleChannelPool
reactor-netty-0.7.5.RELEASE-sources.jar!/reactor/ipc/netty/resources/PoolResources.java
/**
 * Create an uncapped {@link PoolResources} to provide automatically for {@link
 * ChannelPool}.
 * <p>An elastic {@link PoolResources} will never wait before opening a new
 * connection. The reuse window is limited but it cannot starve an undetermined volume
 * of clients using it.
 *
 * @param name the channel pool map name
 *
 * @return a new {@link PoolResources} to provide automatically for {@link
 * ChannelPool}
 */
static PoolResources elastic(String name) {
    return new DefaultPoolResources(name, SimpleChannelPool::new);
}
This is the method used by default during TcpClient.create: it uses SimpleChannelPool and creates a DefaultPoolResources.
reactor-netty-0.7.5.RELEASE-sources.jar!/reactor/ipc/netty/tcp/TcpResources.java
static <T extends TcpResources> T create(T previous,
        LoopResources loops,
        PoolResources pools,
        String name,
        BiFunction<LoopResources, PoolResources, T> onNew) {
    if (previous == null) {
        loops = loops == null ? LoopResources.create("reactor-" + name) : loops;
        pools = pools == null ? PoolResources.elastic(name) : pools;
    }
    else {
        loops = loops == null ? previous.defaultLoops : loops;
        pools = pools == null ? previous.defaultPools : pools;
    }
    return onNew.apply(loops, pools);
}
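TcpResources also exposes static accessors for these shared defaults. A hedged sketch, assuming the get()/set(...) methods of the 0.7.x TcpResources API (treat the setter as an assumption rather than a signature confirmed by the sources above):

import reactor.ipc.netty.resources.PoolResources;
import reactor.ipc.netty.tcp.TcpResources;

public class SharedResourcesSketch {
    public static void main(String[] args) {
        // lazily creates the shared defaults: LoopResources.create("reactor-tcp")
        // plus an elastic PoolResources, per the fallback logic above
        TcpResources shared = TcpResources.get();

        // assumed API: swap in a capped pool for clients created afterwards
        TcpResources.set(PoolResources.fixed("tcp", 64));
    }
}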
netty-transport-4.1.22.Final-sources.jar!/io/netty/channel/pool/SimpleChannelPool.java
/**
 * Simple {@link ChannelPool} implementation which will create new {@link Channel}s if someone tries to acquire
 * a {@link Channel} but none is in the pool atm. No limit on the maximal concurrent {@link Channel}s is enforced.
 *
 * This implementation uses LIFO order for {@link Channel}s in the {@link ChannelPool}.
 *
 */
public class SimpleChannelPool implements ChannelPool {

    @Override
    public final Future<Channel> acquire() {
        return acquire(bootstrap.config().group().next().<Channel>newPromise());
    }

    @Override
    public Future<Channel> acquire(final Promise<Channel> promise) {
        checkNotNull(promise, "promise");
        return acquireHealthyFromPoolOrNew(promise);
    }

    /**
     * Tries to retrieve healthy channel from the pool if any or creates a new channel otherwise.
     * @param promise the promise to provide acquire result.
     * @return future for acquiring a channel.
     */
    private Future<Channel> acquireHealthyFromPoolOrNew(final Promise<Channel> promise) {
        try {
            final Channel ch = pollChannel();
            if (ch == null) {
                // No Channel left in the pool bootstrap a new Channel
                Bootstrap bs = bootstrap.clone();
                bs.attr(POOL_KEY, this);
                ChannelFuture f = connectChannel(bs);
                if (f.isDone()) {
                    notifyConnect(f, promise);
                } else {
                    f.addListener(new ChannelFutureListener() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            notifyConnect(future, promise);
                        }
                    });
                }
                return promise;
            }
            EventLoop loop = ch.eventLoop();
            if (loop.inEventLoop()) {
                doHealthCheck(ch, promise);
            } else {
                loop.execute(new Runnable() {
                    @Override
                    public void run() {
                        doHealthCheck(ch, promise);
                    }
                });
            }
        } catch (Throwable cause) {
            promise.tryFailure(cause);
        }
        return promise;
    }

    @Override
    public final Future<Void> release(Channel channel) {
        return release(channel, channel.eventLoop().<Void>newPromise());
    }

    @Override
    public Future<Void> release(final Channel channel, final Promise<Void> promise) {
        checkNotNull(channel, "channel");
        checkNotNull(promise, "promise");
        try {
            EventLoop loop = channel.eventLoop();
            if (loop.inEventLoop()) {
                doReleaseChannel(channel, promise);
            } else {
                loop.execute(new Runnable() {
                    @Override
                    public void run() {
                        doReleaseChannel(channel, promise);
                    }
                });
            }
        } catch (Throwable cause) {
            closeAndFail(channel, cause, promise);
        }
        return promise;
    }

    @Override
    public void close() {
        for (;;) {
            Channel channel = pollChannel();
            if (channel == null) {
                break;
            }
            channel.close();
        }
    }

    //......
}
If there is no channel left in the pool, this implementation creates a new one (with no upper limit). When a channel is taken out (the pool maintains Channels in a LIFO Deque), its health is checked first.
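To see this behaviour in isolation from reactor-netty, here is a minimal sketch of wiring a SimpleChannelPool to a plain Netty Bootstrap (the target host/port and the empty handler are placeholders):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.concurrent.Future;

public class SimplePoolSketch {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .remoteAddress("example.com", 80); // placeholder target

        // channelCreated is invoked for every channel the pool bootstraps
        SimpleChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
            @Override
            public void channelCreated(Channel ch) {
                // install codecs / handlers on the freshly connected channel here
            }
        });

        Future<Channel> acquired = pool.acquire(); // pool is empty, so this connects
        Channel ch = acquired.sync().getNow();
        pool.release(ch).sync();                   // back into the LIFO deque

        pool.close();
        group.shutdownGracefully();
    }
}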
FixedChannelPool
reactor-netty-0.7.5.RELEASE-sources.jar!/reactor/ipc/netty/resources/PoolResources.java
/**
 * Default max connection, if -1 will never wait to acquire before opening new
 * connection in an unbounded fashion. Fallback to
 * available number of processors.
 */
int DEFAULT_POOL_MAX_CONNECTION = Integer.parseInt(System.getProperty(
        "reactor.ipc.netty.pool.maxConnections",
        "" + Math.max(Runtime.getRuntime().availableProcessors(), 8) * 2));

/**
 * Default acquisition timeout before error. If -1 will never wait to
 * acquire before opening new
 * connection in an unbounded fashion. Fallback to
 * available number of processors.
 */
long DEFAULT_POOL_ACQUIRE_TIMEOUT = Long.parseLong(System.getProperty(
        "reactor.ipc.netty.pool.acquireTimeout", "" + 45000));

/**
 * Create a capped {@link PoolResources} to provide automatically for {@link
 * ChannelPool}.
 * <p>A Fixed {@link PoolResources} will open up to the given max number of
 * processors observed by this jvm (minimum 4).
 * Further connections will be pending acquisition indefinitely.
 *
 * @param name the channel pool map name
 *
 * @return a new {@link PoolResources} to provide automatically for {@link
 * ChannelPool}
 */
static PoolResources fixed(String name) {
    return fixed(name, DEFAULT_POOL_MAX_CONNECTION);
}

/**
 * Create a capped {@link PoolResources} to provide automatically for {@link
 * ChannelPool}.
 * <p>A Fixed {@link PoolResources} will open up to the given max connection value.
 * Further connections will be pending acquisition indefinitely.
 *
 * @param name the channel pool map name
 * @param maxConnections the maximum number of connections before starting pending
 * acquisition on existing ones
 *
 * @return a new {@link PoolResources} to provide automatically for {@link
 * ChannelPool}
 */
static PoolResources fixed(String name, int maxConnections) {
    return fixed(name, maxConnections, DEFAULT_POOL_ACQUIRE_TIMEOUT);
}

/**
 * Create a capped {@link PoolResources} to provide automatically for {@link
 * ChannelPool}.
 * <p>A Fixed {@link PoolResources} will open up to the given max connection value.
 * Further connections will be pending acquisition indefinitely.
 *
 * @param name the channel pool map name
 * @param maxConnections the maximum number of connections before starting pending
 * @param acquireTimeout the maximum time in millis to wait for aquiring
 *
 * @return a new {@link PoolResources} to provide automatically for {@link
 * ChannelPool}
 */
static PoolResources fixed(String name, int maxConnections, long acquireTimeout) {
    if (maxConnections == -1) {
        return elastic(name);
    }
    if (maxConnections <= 0) {
        throw new IllegalArgumentException("Max Connections value must be strictly " +
                "positive");
    }
    if (acquireTimeout != -1L && acquireTimeout < 0) {
        throw new IllegalArgumentException("Acquire Timeout value must " + "be " + "positive");
    }
    return new DefaultPoolResources(name,
            (bootstrap, handler, checker) -> new FixedChannelPool(bootstrap,
                    handler,
                    checker,
                    FixedChannelPool.AcquireTimeoutAction.FAIL,
                    acquireTimeout,
                    maxConnections,
                    Integer.MAX_VALUE));
}
The fixed method that is ultimately called takes three parameters: name, maxConnections, and acquireTimeout. As the code shows, it creates a FixedChannelPool.
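Put together, the three overloads can be exercised like this (the names and values are arbitrary; note the -1 escape hatch back to elastic):

import reactor.ipc.netty.resources.PoolResources;

public class FixedPoolSketch {
    public static void main(String[] args) {
        // max connections = max(availableProcessors, 8) * 2, acquire timeout = 45000 ms
        PoolResources defaults = PoolResources.fixed("demo");

        // explicit cap, default 45000 ms acquire timeout
        PoolResources capped = PoolResources.fixed("demo-capped", 100);

        // explicit cap and a 30 second acquire timeout
        PoolResources cappedWithTimeout = PoolResources.fixed("demo-timeout", 100, 30000);

        // maxConnections == -1 falls back to elastic(name)
        PoolResources actuallyElastic = PoolResources.fixed("demo-elastic", -1);
    }
}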
netty-transport-4.1.22.Final-sources.jar!/io/netty/channel/pool/FixedChannelPool.java
/**
 * {@link ChannelPool} implementation that takes another {@link ChannelPool} implementation and enforce a maximum
 * number of concurrent connections.
 */
public class FixedChannelPool extends SimpleChannelPool {

    @Override
    public Future<Channel> acquire(final Promise<Channel> promise) {
        try {
            if (executor.inEventLoop()) {
                acquire0(promise);
            } else {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        acquire0(promise);
                    }
                });
            }
        } catch (Throwable cause) {
            promise.setFailure(cause);
        }
        return promise;
    }

    private void acquire0(final Promise<Channel> promise) {
        assert executor.inEventLoop();

        if (closed) {
            promise.setFailure(POOL_CLOSED_ON_ACQUIRE_EXCEPTION);
            return;
        }
        if (acquiredChannelCount < maxConnections) {
            assert acquiredChannelCount >= 0;

            // We need to create a new promise as we need to ensure the AcquireListener runs in the correct
            // EventLoop
            Promise<Channel> p = executor.newPromise();
            AcquireListener l = new AcquireListener(promise);
            l.acquired();
            p.addListener(l);
            super.acquire(p);
        } else {
            if (pendingAcquireCount >= maxPendingAcquires) {
                promise.setFailure(FULL_EXCEPTION);
            } else {
                AcquireTask task = new AcquireTask(promise);
                if (pendingAcquireQueue.offer(task)) {
                    ++pendingAcquireCount;

                    if (timeoutTask != null) {
                        task.timeoutFuture = executor.schedule(timeoutTask, acquireTimeoutNanos, TimeUnit.NANOSECONDS);
                    }
                } else {
                    promise.setFailure(FULL_EXCEPTION);
                }
            }

            assert pendingAcquireCount > 0;
        }
    }

    @Override
    public Future<Void> release(final Channel channel, final Promise<Void> promise) {
        ObjectUtil.checkNotNull(promise, "promise");
        final Promise<Void> p = executor.newPromise();
        super.release(channel, p.addListener(new FutureListener<Void>() {

            @Override
            public void operationComplete(Future<Void> future) throws Exception {
                assert executor.inEventLoop();

                if (closed) {
                    // Since the pool is closed, we have no choice but to close the channel
                    channel.close();
                    promise.setFailure(POOL_CLOSED_ON_RELEASE_EXCEPTION);
                    return;
                }

                if (future.isSuccess()) {
                    decrementAndRunTaskQueue();
                    promise.setSuccess(null);
                } else {
                    Throwable cause = future.cause();
                    // Check if the exception was not because of we passed the Channel to the wrong pool.
                    if (!(cause instanceof IllegalArgumentException)) {
                        decrementAndRunTaskQueue();
                    }
                    promise.setFailure(future.cause());
                }
            }
        }));
        return promise;
    }

    @Override
    public void close() {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                if (!closed) {
                    closed = true;
                    for (;;) {
                        AcquireTask task = pendingAcquireQueue.poll();
                        if (task == null) {
                            break;
                        }
                        ScheduledFuture<?> f = task.timeoutFuture;
                        if (f != null) {
                            f.cancel(false);
                        }
                        task.promise.setFailure(new ClosedChannelException());
                    }
                    acquiredChannelCount = 0;
                    pendingAcquireCount = 0;
                    FixedChannelPool.super.close();
                }
            }
        });
    }

    //......
}
In acquire, if the current thread is not in the event loop, acquire0 is put into a queue to be executed on the event loop, which could in theory flood the event loop's taskQueue; however, that queue's size is Math.max(16, SystemPropertyUtil.getInt("io.netty.eventLoop.maxPendingTasks", Integer.MAX_VALUE)), i.e. Integer.MAX_VALUE by default.
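For reference, constructing a FixedChannelPool directly with the same argument pattern that PoolResources.fixed(...) uses above looks as follows; the bootstrap, handler and health checker are placeholders the caller would supply:

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.pool.ChannelHealthChecker;
import io.netty.channel.pool.ChannelPoolHandler;
import io.netty.channel.pool.FixedChannelPool;

public class FixedChannelPoolSketch {

    // mirrors the lambda passed to DefaultPoolResources in PoolResources.fixed(...)
    static FixedChannelPool create(Bootstrap bootstrap, ChannelPoolHandler handler,
                                   ChannelHealthChecker checker,
                                   long acquireTimeoutMillis, int maxConnections) {
        return new FixedChannelPool(bootstrap, handler, checker,
                FixedChannelPool.AcquireTimeoutAction.FAIL, // on timeout, fail the promise
                acquireTimeoutMillis,                       // e.g. 45000 ms by default
                maxConnections,                             // cap on concurrently acquired channels
                Integer.MAX_VALUE);                         // maxPendingAcquires
    }
}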
FixedChannelPool extends SimpleChannelPool and overrides the acquire, release, and close methods. It throttles connection acquisition through the following parameters:
maxConnections: this value is read first from the system property reactor.ipc.netty.pool.maxConnections (setting it to -1 means unlimited, falling back to the elastic mode); if it is not set, it defaults to Math.max(Runtime.getRuntime().availableProcessors(), 8) * 2, i.e. twice the larger of the processor count and 8 (a sketch of tuning these system properties follows further below).
acquireTimeout: this value is read first from the system property reactor.ipc.netty.pool.acquireTimeout (setting it to -1 means acquire immediately without waiting); if it is not set, it defaults to 45000 milliseconds.
maxPendingAcquires: set to Integer.MAX_VALUE here.
action: set to FixedChannelPool.AcquireTimeoutAction.FAIL here, which makes timeoutTask the following:
timeoutTask = new TimeoutTask() {
    @Override
    public void onTimeout(AcquireTask task) {
        // Fail the promise as we timed out.
        task.promise.setFailure(TIMEOUT_EXCEPTION);
    }
};
If the number of currently acquired connections has reached maxConnections, the request enters the pendingAcquireQueue to wait for a connection. Before entering the pendingAcquireQueue, if the number of pending acquisitions already exceeds maxPendingAcquires, FULL_EXCEPTION is returned (Too many outstanding acquire operations); since maxPendingAcquires is set to Integer.MAX_VALUE here, this exception will not occur. Once in the pendingAcquireQueue, the acquireTimeout parameter still applies: if no connection is obtained within acquireTimeout, TIMEOUT_EXCEPTION is returned (Acquire operation took longer then configured maximum time).
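As mentioned in the parameter list above, both defaults can be tuned through system properties; a minimal sketch with arbitrary values (the properties must be set before the PoolResources interface is first loaded, since its constants are read at that point):

public class PoolDefaultsTuning {
    public static void main(String[] args) {
        // read into PoolResources.DEFAULT_POOL_MAX_CONNECTION
        System.setProperty("reactor.ipc.netty.pool.maxConnections", "200");
        // read into PoolResources.DEFAULT_POOL_ACQUIRE_TIMEOUT (milliseconds)
        System.setProperty("reactor.ipc.netty.pool.acquireTimeout", "10000");

        // "-1" would mean: never cap / never wait, falling back to elastic behaviour
        // System.setProperty("reactor.ipc.netty.pool.maxConnections", "-1");
    }
}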
The PoolResources created by default for TcpClient uses the elastic mode, i.e. the pool implementation is SimpleChannelPool, which maintains Channels in a LIFO Deque and creates a new connection whenever none can be taken from the pool. The effective upper limit is the number of file descriptors the system allows the process to open; exceeding it raises SocketException: Too many open files. PoolResources also offers the fixed mode, backed by FixedChannelPool, which caps the pool's maximum number of connections and the maximum acquisition wait time, preventing runaway connection creation from exhausting memory or triggering SocketException: Too many open files.
Note that in fixed mode, if reactor.ipc.netty.pool.maxConnections is set to -1, it falls back to the elastic mode.
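To opt into the fixed mode from a client explicitly, the 0.7.x options builder can be handed a PoolResources. A hedged sketch (host and port are placeholders, and the poolResources(...) builder method is assumed from the 0.7.x ClientOptions API rather than shown in the sources above):

import reactor.ipc.netty.resources.PoolResources;
import reactor.ipc.netty.tcp.TcpClient;

public class FixedModeClientSketch {
    public static void main(String[] args) {
        TcpClient client = TcpClient.create(options -> options
                .host("example.com")   // placeholder
                .port(8080)            // placeholder
                // assumed builder hook: replace the default elastic pool with a capped one
                .poolResources(PoolResources.fixed("tcp-fixed", 100, 30000)));
    }
}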