Recently, my work has involved writing records to files and rolling files over, and I wondered whether there was mature code to use as a reference.
The first things that came to mind were Logback's AsyncAppender and RollingFileAppender.
PS: AsyncAppender can be combined with a RollingFileAppender to improve the efficiency with which log events are written out.
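For orientation, here is a minimal sketch of wiring an AsyncAppender in front of a RollingFileAppender programmatically. It is only an illustration: the paths, pattern, and sizes are made-up values, the same setup is normally expressed in logback.xml, and setter signatures such as setMaxFileSize differ slightly between Logback versions (recent versions take a FileSize).

import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy;
import ch.qos.logback.core.util.FileSize;
import org.slf4j.LoggerFactory;

public class AsyncRollingSetup {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Encoder: how each event is formatted before it is written to the file
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n");
        encoder.start();

        // RollingFileAppender with a size-and-time-based rolling policy
        RollingFileAppender<ILoggingEvent> fileAppender = new RollingFileAppender<>();
        fileAppender.setContext(context);
        fileAppender.setFile("logs/app.log");
        fileAppender.setEncoder(encoder);

        SizeAndTimeBasedRollingPolicy<ILoggingEvent> policy = new SizeAndTimeBasedRollingPolicy<>();
        policy.setContext(context);
        policy.setParent(fileAppender);
        policy.setFileNamePattern("logs/app.%d{yyyy-MM-dd}.%i.log.gz");
        policy.setMaxFileSize(FileSize.valueOf("100MB"));
        policy.setMaxHistory(7);
        policy.start();

        fileAppender.setRollingPolicy(policy);
        fileAppender.start();

        // AsyncAppender in front of the file appender
        AsyncAppender asyncAppender = new AsyncAppender();
        asyncAppender.setContext(context);
        asyncAppender.setQueueSize(512);
        asyncAppender.setDiscardingThreshold(0); // 0 = never discard events, even when the queue is nearly full
        asyncAppender.addAppender(fileAppender);
        asyncAppender.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(asyncAppender);
        root.info("async + rolling file appender configured");
    }
}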
public class AsyncAppender extends AsyncAppenderBase<ILoggingEvent> {
    // some members omitted
    boolean includeCallerData = false;

    protected boolean isDiscardable(ILoggingEvent event) {
        Level level = event.getLevel();
        return level.toInt() <= Level.INFO_INT;
    }

    protected void preprocess(ILoggingEvent eventObject) {
        eventObject.prepareForDeferredProcessing();
        if (includeCallerData)
            eventObject.getCallerData();
    }
}
(1) isDiscardable: decides whether a log event may be discarded; when the buffer queue is close to full, events at INFO level and below (TRACE, DEBUG, INFO) are dropped for performance reasons;
(2) preprocess: pre-processes the log event (formats the message and captures the thread name and the data stored in the MDC); if includeCallerData is true, the file name, line number, and other caller information are extracted from the event's stack trace.
Since most of AsyncAppender's functionality is implemented in AsyncAppenderBase, the rest of this section focuses on AsyncAppenderBase.
public class AsyncAppenderBase<E> extends UnsynchronizedAppenderBase<E> implements AppenderAttachable<E> {

    BlockingQueue<E> blockingQueue;

    public static final int DEFAULT_QUEUE_SIZE = 256;
    int queueSize = DEFAULT_QUEUE_SIZE;

    int appenderCount = 0;

    static final int UNDEFINED = -1;
    int discardingThreshold = UNDEFINED;
    boolean neverBlock = false;

    Worker worker = new Worker();

    public static final int DEFAULT_MAX_FLUSH_TIME = 1000;
    int maxFlushTime = DEFAULT_MAX_FLUSH_TIME;
}
These are the main fields of AsyncAppenderBase: blockingQueue is the buffer between application threads and the worker, queueSize its capacity (default 256), discardingThreshold the remaining-capacity threshold below which discardable events are dropped, neverBlock whether a full queue drops the event instead of blocking the caller, worker the background thread that drains the queue, and maxFlushTime the maximum time in milliseconds that stop() waits for the worker to flush.
class Worker extends Thread {

    public void run() {
        AsyncAppenderBase<E> parent = AsyncAppenderBase.this;
        AppenderAttachableImpl<E> aai = parent.aai;

        // loop while the parent is started
        while (parent.isStarted()) {
            try {
                E e = parent.blockingQueue.take();
                aai.appendLoopOnAppenders(e);
            } catch (InterruptedException ie) {
                break;
            }
        }

        for (E e : parent.blockingQueue) {
            aai.appendLoopOnAppenders(e);
            parent.blockingQueue.remove(e);
        }

        aai.detachAndStopAllAppenders();
    }
}
The Worker thread is fairly simple: as long as the appender is running (parent.isStarted()), it takes events from the blocking queue and dispatches them to the attached sub-appenders; once the appender is stopped, it drains whatever is left in the queue and then detaches and stops all sub-appenders. Next, the start() method:
@Override
public void start() {
    // some validation code omitted
    blockingQueue = new ArrayBlockingQueue<E>(queueSize);

    if (discardingThreshold == UNDEFINED)
        discardingThreshold = queueSize / 5;

    worker.setDaemon(true);
    worker.setName("AsyncAppender-Worker-" + getName());
    super.start();
    worker.start();
}
Main steps:
(1) Create the buffer queue with the configured queue size;
(2) If discardingThreshold has not been set explicitly, default it to queueSize / 5; since the threshold is compared against the queue's remaining capacity, discardable events start being dropped once the queue is roughly 4/5 full;
(3) Mark the worker thread as a daemon thread and set its name;
(4) Start the appender, then start the worker thread that reads from the queue (the appender must be started before the worker thread).
@Override
public void stop() {
    if (!isStarted())
        return;
    super.stop();

    // interrupt the worker thread so that it can terminate. Note that the interruption can be consumed
    // by sub-appenders
    worker.interrupt();

    InterruptUtil interruptUtil = new InterruptUtil(context);
    try {
        interruptUtil.maskInterruptFlag();

        worker.join(maxFlushTime);

        // check to see if the thread ended and if not add a warning message
        if (worker.isAlive()) {
            addWarn("Max queue flush timeout (" + maxFlushTime + " ms) exceeded. Approximately " + blockingQueue.size()
                    + " queued events were possibly discarded.");
        } else {
            addInfo("Queue flush finished successfully within timeout.");
        }
    } catch (InterruptedException e) {
        int remaining = blockingQueue.size();
        addError("Failed to join worker thread. " + remaining + " queued events may be discarded.", e);
    } finally {
        interruptUtil.unmaskInterruptFlag();
    }
}
Main steps:
(1) super.stop(): marks the appender as stopped so that the worker thread can run its exit logic;
(2) worker.interrupt(): interrupts the worker thread so that it breaks out of a blocking take(); the worker then drains the remaining events to the attached sub-appenders and stops them safely (as the comment in the code notes, the interruption may also be consumed by a sub-appender);
(3) interruptUtil.maskInterruptFlag(): clears the interrupt status of the current (calling) thread;
(4) worker.join(maxFlushTime): waits up to maxFlushTime milliseconds for the worker thread to finish;
(5) If the worker thread is still alive after the join, some events probably remain in the queue and will never be written out, so a warning is logged;
(6) interruptUtil.unmaskInterruptFlag(): restores the interrupt status of the current thread.
Key points:
The code above manipulates interrupt flags several times:
(1) worker.interrupt(): interrupts the worker thread so that it can finish draining the queue and stop the attached sub-appenders;
(2) interruptUtil.maskInterruptFlag(): clears the calling thread's interrupt status. If that flag were left set, worker.join(maxFlushTime) would give up immediately by throwing InterruptedException, so the flag must be cleared for the join to actually wait (see the sketch below);
(3) interruptUtil.unmaskInterruptFlag(): finally, restores the calling thread's interrupt status if it was set beforehand.
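The mask/unmask idea can be shown with a minimal sketch (this illustrates the intent only and is not Logback's actual InterruptUtil code):

boolean wasInterrupted = Thread.interrupted();  // reads AND clears the calling thread's interrupt flag
try {
    worker.join(maxFlushTime);                  // can now genuinely wait for the worker to finish
} catch (InterruptedException e) {
    // interrupted while waiting: the worker may not have flushed everything
} finally {
    if (wasInterrupted) {
        Thread.currentThread().interrupt();     // restore the flag for callers further up the stack
    }
}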
Appending a log event ultimately means inserting it into the blocking queue.
The blocking queue interface offers two insertion modes:
(1) offer: non-blocking insertion; if the insert fails nothing more is done, so log events can be lost;
(2) put: blocking insertion; if the waiting put is interrupted, the insert is retried in a loop (see the sketch below).
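How AsyncAppenderBase chooses between the two can be sketched as follows. This is a simplified rendering of the logic in recent Logback versions; the exact method names and details may differ:

private void put(E eventObject) {
    if (neverBlock) {
        // non-blocking: silently drop the event if the queue is full
        blockingQueue.offer(eventObject);
    } else {
        putUninterruptibly(eventObject);
    }
}

private void putUninterruptibly(E eventObject) {
    boolean interrupted = false;
    try {
        while (true) {
            try {
                blockingQueue.put(eventObject);  // blocks until space is available
                break;
            } catch (InterruptedException e) {
                interrupted = true;              // remember the interruption and keep trying
            }
        }
    } finally {
        if (interrupted) {
            Thread.currentThread().interrupt();  // restore the interrupt status on exit
        }
    }
}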
public class RollingFileAppender<E> extends FileAppender<E> {
    File currentlyActiveFile;
    TriggeringPolicy<E> triggeringPolicy;
    RollingPolicy rollingPolicy;
}
Main fields:
Note that if no log events are being written, a new log file will not be created even when the current file has met the time-based or size-based trigger condition, because the trigger is only evaluated when an event is appended.
protected void subAppend(E event) {
    synchronized (triggeringPolicy) {
        if (triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)) {
            rollover();
        }
    }
    super.subAppend(event);
}
Main steps:
(1) Check whether the current file has met the trigger condition; if so, roll the file over (synchronized ensures that only one thread performs the rollover at any time);
(2) Call the base class's subAppend method, which writes the log event into the BufferedOutputStream.
Rolling the file over:
public void rollover() {
    lock.lock();
    try {
        this.closeOutputStream();
        attemptRollover();
        attemptOpenFile();
    } finally {
        lock.unlock();
    }
}
As the code above shows, rolling a file over involves the following operations:
(1) Close the current BufferedOutputStream;
(2) attemptRollover: perform the actual rollover (rename the file that has just been written, compress it if required, and delete files based on the total size of the log directory and their age);
(3) attemptOpenFile: determine the new active log file name according to the rollover conditions and create that file for subsequent writes.
The rollover policy most often used in practice is SizeAndTimeBasedRollingPolicy (it implements both the RollingPolicy and TriggeringPolicy interfaces); as the name suggests, it triggers rollover based on both time and file size.
Note that the trigger logic is actually defined in the SizeAndTimeBasedFNATP class.
@Override
public boolean isTriggeringEvent(File activeFile, final E event) {
    long time = getCurrentTime();

    // first check for roll-over based on time
    if (time >= nextCheck) {
        Date dateInElapsedPeriod = dateInCurrentPeriod;
        elapsedPeriodsFileName = tbrp.fileNamePatternWithoutCompSuffix.convertMultipleArguments(dateInElapsedPeriod, currentPeriodsCounter);
        currentPeriodsCounter = 0;
        setDateInCurrentPeriod(time);
        computeNextCheck();
        return true;
    }

    // next check for roll-over based on size
    if (invocationGate.isTooSoon(time)) {
        return false;
    }

    if (activeFile.length() >= maxFileSize.getSize()) {
        elapsedPeriodsFileName = tbrp.fileNamePatternWithoutCompSuffix.convertMultipleArguments(dateInCurrentPeriod, currentPeriodsCounter);
        currentPeriodsCounter++;
        return true;
    }

    return false;
}
Main steps:
(1) Get the current time and compare it with nextCheck (the next point at which a time-based rollover is due);
(2) If the current time has reached nextCheck, compute the new archive file name prefix and store it in elapsedPeriodsFileName, reset currentPeriodsCounter (which counts the log files produced within the current period) to 0, compute the next check point, and return true;
(3) If the current time is before nextCheck, fall through to the size check; isTooSoon first checks whether this method is being called too frequently, and if so returns false so the check is deferred;
(4) Compare the current log file's size with the configured maximum; if the threshold has been reached, compute the new archive file name prefix and increment currentPeriodsCounter (see the example below).
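To make the interplay of %d and %i concrete, consider a hypothetical fileNamePattern of logs/app.%d{yyyy-MM-dd}.%i.log with a 100MB size limit (the paths and dates below are made up):

logs/app.2024-01-15.0.log   // first archive of the day, rolled when it reached the size limit
logs/app.2024-01-15.1.log   // second archive of the same period, %i driven by currentPeriodsCounter
logs/app.2024-01-16.0.log   // the date changed: %d advanced and the counter restarted at 0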
public void rollover() throws RolloverFailure {
    // when rollover is called the elapsed period's file has
    // been already closed. This is a working assumption of this method.

    String elapsedPeriodsFileName = timeBasedFileNamingAndTriggeringPolicy.getElapsedPeriodsFileName();
    String elapsedPeriodStem = FileFilterUtil.afterLastSlash(elapsedPeriodsFileName);

    if (compressionMode == CompressionMode.NONE) {
        if (getParentsRawFileProperty() != null) {
            renameUtil.rename(getParentsRawFileProperty(), elapsedPeriodsFileName);
        } // else { nothing to do if CompressionMode == NONE and parentsRawFileProperty == null }
    } else {
        if (getParentsRawFileProperty() == null) {
            compressionFuture = compressor.asyncCompress(elapsedPeriodsFileName, elapsedPeriodsFileName, elapsedPeriodStem);
        } else {
            compressionFuture = renameRawAndAsyncCompress(elapsedPeriodsFileName, elapsedPeriodStem);
        }
    }

    if (archiveRemover != null) {
        Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
        this.cleanUpFuture = archiveRemover.cleanAsynchronously(now);
    }
}
Main steps:
(1) Get the elapsedPeriodsFileName that was set by isTriggeringEvent;
(2) If compression is not required, try to rename the current log file to elapsedPeriodsFileName;
(3) If compression is required, compress the log file asynchronously (note that this may also involve renaming the file first);
(4) If an archiveRemover is configured, pass it the current time so that it can delete expired files, or remove older files to keep the log directory within a reasonable total size.
Key points:
(1) getParentsRawFileProperty: the configuration can give the active log file a fixed name (the raw file name, rawFileName for short); when the trigger condition is reached, the contents of this file are moved to the rolled-over archive file and a new active file is created under the same rawFileName;
(2) renameUtil.rename: when moving a log file, Logback checks whether the current file and the archive file are on the same volume; if they are, a plain rename suffices, otherwise the current file's contents are copied into the archive file;
(3) The Future tasks above (compression and cleanup) are submitted to Logback's own executor, which keeps logging stable and prevents it from becoming a performance burden on the application.
As the discussion above shows, Logback's handling of log file writing and rollover is carefully and sensibly designed, which is what keeps the logging component highly available and fast. When writing application code that involves I/O, Logback's source is well worth consulting.