At the last scala-meetup before the Lunar New Year, Liu Ying's talk on professional software development practices was a real eye-opener. I suddenly realized that I have never strictly followed any standard development process. Admittedly, during technical research and learning there is no strict demand for standardized procedure, but once a technology lands and enters the application stage, only strict adherence to professional development practices can guarantee the quality of the software product. In the meetup Liu Ying highlighted exception handling and logging as two key parts of such practices. In this post we discuss logging. By recording how a piece of software runs, logging helps developers trace its behavior and analyze its results or the causes of exceptions; no complete, successful piece of software can do without it.

logback is probably the most popular and widely used logger in the Java ecosystem. Although logback already ships with appenders for several output targets, such as ConsoleAppender (STDOUT), FileAppender (FILE) and DBAppender (DB), an appender for distributed applications still has to be custom-built. Distributed software runs across systems, so its log entries are naturally produced, and would otherwise be stored, on different machines; a distributed application therefore needs distributed storage to manage its logs globally. logback is an open framework, and any custom appender can be integrated into it quite easily. So let's try to build a cassandra-based logback appender.
First, a look at logback itself. The core mechanism worth understanding is the handling of message levels: logback uses levels to decide which messages get recorded. It supports the following levels, ordered by the coverage of their recording action from weakest to strongest:
TRACE -> DEBUG -> INFO -> WARN -> ERROR, corresponding to the logging methods trace(msg), debug(msg), info(msg), warn(msg), error(msg).
logback filters messages by level according to the following rule:
Assume the logging call is at level p and a class's effective level is q: the message is recorded when p >= q. In other words, a call to error(msg) is recorded under every effective level, whereas trace(msg) is recorded only when the effective level is TRACE. The logback manual expresses this with the following table:
           TRACE  DEBUG  INFO  WARN  ERROR  OFF
trace()    YES    NO     NO    NO    NO     NO
debug()    YES    YES    NO    NO    NO     NO
info()     YES    YES    YES   NO    NO     NO
warn()     YES    YES    YES   YES   NO     NO
error()    YES    YES    YES   YES   YES    NO
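The rule behind the table is simple enough to express as a pure function. Here is a small Scala sketch of the p >= q test; the object and method names are mine for illustration, not part of logback's API:

```scala
// Sketch of the filtering rule above: a call at level p is logged
// iff p >= q, where q is the logger's effective level.
object LevelFilter {
  private val order = Vector("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "OFF")

  // true if a call at level `call` passes a logger whose effective level is `effective`
  def enabled(call: String, effective: String): Boolean = {
    val p = order.indexOf(call)
    val q = order.indexOf(effective)
    p >= 0 && q >= 0 && p >= q   // OFF sorts above ERROR, so it disables everything
  }
}
```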
A class's default level in logback is inherited along the logger name hierarchy: when a logger has no level of its own, it inherits the level of its nearest configured ancestor, i.e. the default level of X.Y.Z is inherited from X.Y.
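The inheritance rule can be sketched as a name-walking lookup. This is illustrative code of my own, not logback internals: it resolves a logger's effective level by walking the dotted name toward the root.

```scala
// Illustrative sketch of effective-level resolution (not logback's actual code):
// walk X.Y.Z -> X.Y -> X, return the first explicitly configured level,
// and fall back to the root level when no ancestor is configured.
def effectiveLevel(logger: String,
                   configured: Map[String, String],
                   rootLevel: String = "DEBUG"): String = {
  val ancestors = Iterator
    .iterate(logger)(n => n.take(n.lastIndexOf('.')))   // drop the last ".segment"
    .takeWhile(_.nonEmpty)
    .toList
  ancestors
    .collectFirst { case n if configured.contains(n) => configured(n) }
    .getOrElse(rootLevel)
}
```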
All the mechanics above are built into logback itself; they have nothing to do with the appender that stores the messages. Now let's build our cassandra-based appender. As mentioned, logback is an open framework: any appender developed against its interface can easily be plugged into it. Below is the skeleton of a logback appender:
package com.datatech.logback

import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.UnsynchronizedAppenderBase
import com.datastax.driver.core.querybuilder.QueryBuilder

class CassandraAppender extends UnsynchronizedAppenderBase[ILoggingEvent]{
  override def append(eventObject: ILoggingEvent): Unit = {
    //write log message to cassandra
  }
  override def start(): Unit = {
    //setup cassandra
    super.start()
  }
  override def stop(): Unit = {
    super.stop()
    //clean up, closing cassandra
  }
}
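To see what logback expects from this skeleton without a cassandra instance at hand, here is a hedged, pure-Scala imitation of the appender lifecycle (start/append/stop). The class names are mine, and the real UnsynchronizedAppenderBase does considerably more (filters, status reporting, context handling):

```scala
// Minimal imitation of the appender lifecycle, for illustration only.
// start() marks the appender ready, doAppend() drops events until then,
// stop() marks it closed again -- mirroring the skeleton above.
abstract class AppenderSketch[E] {
  @volatile private var started = false
  def isStarted: Boolean = started
  protected def append(event: E): Unit          // subclass writes the event out
  def doAppend(event: E): Unit = if (started) append(event)
  def start(): Unit = { started = true }
  def stop(): Unit = { started = false }
}

// An appender that "stores" events in memory instead of cassandra
class MemoryAppender extends AppenderSketch[String] {
  val events = scala.collection.mutable.Buffer[String]()
  protected def append(event: String): Unit = events += event
}
```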
Let's first write a complete logback configuration file logback.xml covering a ConsoleAppender, a FileAppender and our CassandraAppender:
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<Pattern>
%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n </Pattern>
</encoder>
</appender>
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
<!-- path to your log file, where you want to store logs -->
<file>/Users/Tiger/logback.log</file>
<append>false</append>
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<appender name="cassandraLogger" class="com.datatech.logback.CassandraAppender">
<hosts>192.168.0.189</hosts>
<port>9042</port>
<appName>posware</appName>
<defaultFieldValues>{"app_customer":"bayakala.com","app_device":"1001"}</defaultFieldValues>
<keyspaceName>applog</keyspaceName>
<columnFamily>txnlog</columnFamily>
</appender>
<root level="debug">
<appender-ref ref="cassandraLogger" />
<appender-ref ref="STDOUT" />
</root>
<shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>
</configuration>
The CassandraAppender properties in the configuration file, such as hosts, port and keyspaceName, are implemented in Scala as follows:
private var _hosts: String = ""
def setHosts(hosts: String): Unit = _hosts = hosts

private var _port: Int = 9042   // for the binary protocol, 9160 is default for thrift
def setPort(port: Int): Unit = _port = port

private var _username: String = ""
def setUsername(username: String): Unit = _username = username

private var _password: String = ""
def setPassword(password: String): Unit = _password = password
These properties are then used like this:
writeLog(eventObject)(optSession.get, _keyspaceName, _columnFamily)(_appName,ip,hostname,_defaultFieldValues)
In fact, the properties in logback.xml can also be set at runtime, like this:
//get appender instances
val log: Logger = LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).asInstanceOf[Logger]
val cassAppender = log.getAppender("cassandraLogger").asInstanceOf[CassandraAppender]
val stdoutAppender = log.getAppender("STDOUT").asInstanceOf[ConsoleAppender[ILoggingEvent]]
val fileAppender = log.getAppender("FILE").asInstanceOf[FileAppender[ILoggingEvent]]

if(cassAppender != null) {
  cassAppender.setHosts("192.168.0.189")
  cassAppender.setPort(9042)
  cassAppender.start()
}
Unlike a general-purpose appender, ours requires some interaction with logback from inside the application, because we want to record certain application-specific items, such as a userid or productid, as database columns. Fields like these belong to the application's business domain and cannot be covered by a generic logger, so what we are after is a logger that is generic at the application level. The first step toward that goal is to reflect the business characteristics of the application in the table schema; here is an example:
CREATE TABLE IF NOT EXISTS applog.txnlog (
class_name text,
file_name text,
host_ip text,
host_name text,
level text,
line_number text,
logger_name text,
method_name text,
thread_name text,
throwable_str_rep text,
log_date text,
log_time text,
log_msg text,
app_name text,
app_customer text,
app_device text,
PRIMARY KEY (app_customer, app_device, log_date, log_time)
);
In the schema above, app_customer and app_device are business attributes: we want to organize the messages by customer or by device. For other applications we would likewise design a schema that captures their business characteristics. The values of these business fields must be supplied when the application calls a logging method, because their content is dynamic (a server application may have hundreds or thousands of users); the only channel available for passing them is the logged message itself. Remember that logback can drive its built-in appenders such as ConsoleAppender and FileAppender alongside our CassandraAppender, all sharing the same msg, yet we still have to process that msg to inject the dynamic field values. To avoid hurting the readability of msg, we can encode it as json, as follows:
var msg = event.getMessage()
try {
  val logMap = fromJson[Map[String,String]](msg)
  logMap.foreach ( m => qryInsert = qryInsert.value(m._1, m._2))
} catch {
  case e: Throwable =>
    qryInsert = qryInsert.value(MESSAGE, msg)
    try {
      val dftMap = fromJson[Map[String,String]](default)
      dftMap.foreach ( m => qryInsert = qryInsert.value(m._1, m._2))
    } catch { case e: Throwable => }
}
session.execute(qryInsert)
If the msg obtained from event.getMessage() is not valid json (for example, a message produced by a third-party library used by the application), the default values defined in the configuration file are used instead; these are json too, see the <defaultFieldValues> property in the configuration above.
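The fromJson/toJson calls come from a JsonConverter trait whose implementation is not shown in this post; presumably it delegates to a real json library. For completeness, here is a deliberately naive stand-in that handles only flat objects of string keys and string values, which is all the logging code needs; it is an assumption of mine, not the original trait, and should not be used for general json:

```scala
// Naive stand-in for the JsonConverter trait (the real one would use a json
// library). Supports only flat {"k":"v",...} objects without escaped quotes.
trait JsonConverter {
  def toJson(m: Map[String, String]): String =
    m.map { case (k, v) => s""""$k":"$v"""" }.mkString("{", ",", "}")

  // type parameter kept so call sites like fromJson[Map[String,String]](s) compile
  def fromJson[T](s: String): Map[String, String] = {
    val t = s.trim
    require(t.startsWith("{") && t.endsWith("}"), "not a json object")  // throws for plain messages
    val body = t.substring(1, t.length - 1).trim
    if (body.isEmpty) Map.empty
    else body.split(",").map { pair =>
      val Array(k, v) = pair.split(":", 2)
      def unquote(x: String) = x.trim.stripPrefix("\"").stripSuffix("\"")
      unquote(k) -> unquote(v)
    }.toMap
  }
}

object DemoJson extends JsonConverter
```

The important behavioral detail is that fromJson throws on non-json input, which is exactly what the appender's catch branch relies on to fall back to the default field values.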
Using cassandra here is straightforward; we only need the insert operation. The complete CassandraAppender source code follows:
package com.datatech.logback

import ch.qos.logback.classic.spi._
import ch.qos.logback.core.UnsynchronizedAppenderBase
import com.datastax.driver.core._
import com.datastax.driver.core.querybuilder.{Insert, QueryBuilder}
import java.net.InetAddress
import java.time._
import java.time.format._
import java.util.Locale

class CassandraAppender extends UnsynchronizedAppenderBase[ILoggingEvent]{
  import CassandraAppender._

  private var _hosts: String = ""
  def setHosts(hosts: String): Unit = _hosts = hosts

  private var _port: Int = 9042   // for the binary protocol, 9160 is default for thrift
  def setPort(port: Int): Unit = _port = port

  private var _username: String = ""
  def setUsername(username: String): Unit = _username = username

  private var _password: String = ""
  def setPassword(password: String): Unit = _password = password

  private var _defaultFieldValues: String = ""
  def setDefaultFieldValues(defaultFieldValues: String) = _defaultFieldValues = defaultFieldValues

  private val ip: String = getIP()
  private val hostname: String = getHostName()

  // Keyspace/ColumnFamily information
  private var _keyspaceName: String = "Logging"
  def setKeyspaceName(keyspaceName: String): Unit = _keyspaceName = keyspaceName

  private var _columnFamily: String = "log_entries"
  def setColumnFamily(columnFamily: String): Unit = _columnFamily = columnFamily

  private var _appName: String = "default"
  def setAppName(appName: String): Unit = _appName = appName

  private var _replication: String = "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"
  def setReplication(replication: String): Unit = _replication = replication

  private var _consistencyLevelWrite: ConsistencyLevel = ConsistencyLevel.ONE
  def setConsistencyLevelWrite(consistencyLevelWrite: String): Unit = {
    try {
      _consistencyLevelWrite = ConsistencyLevel.valueOf(consistencyLevelWrite.trim)
    } catch {
      case e: Throwable =>
        throw new IllegalArgumentException("Consistency level " + consistencyLevelWrite + " wasn't found.")
    }
  }

  private var optCluster: Option[Cluster] = None
  private var optSession: Option[Session] = None

  def connectDB(): Unit = {
    try {
      val cluster = new Cluster
        .Builder()
        .addContactPoints(_hosts)
        .withPort(_port)
        .build()
      val session = cluster.connect()
      optCluster = Some(cluster)
      optSession = Some(session)
    } catch {
      case e: Throwable =>
        optCluster = None
        optSession = None
        println(s"error when logger connecting to cassandra [${_hosts}:${_port}]")
    }
  }

  override def append(eventObject: ILoggingEvent): Unit = {
    if(optSession.isDefined) {
      try {
        writeLog(eventObject)(optSession.get, _keyspaceName, _columnFamily)(_appName,ip,hostname,_defaultFieldValues)
      } catch { case e: Throwable => }
    }
  }

  override def start(): Unit = {
    if(! _hosts.isEmpty) {
      connectDB()
      super.start()
    }
  }

  override def stop(): Unit = {
    super.stop()
    if(optSession.isDefined) {
      optSession.get.closeAsync()
      optCluster.get.closeAsync()
    }
  }
}

object CassandraAppender extends JsonConverter {
  // CF column names
  val HOST_IP: String = "host_ip"
  val HOST_NAME: String = "host_name"
  val APP_NAME: String = "app_name"
  val LOGGER_NAME: String = "logger_name"
  val LEVEL: String = "level"
  val CLASS_NAME: String = "class_name"
  val FILE_NAME: String = "file_name"
  val LINE_NUMBER: String = "line_number"
  val METHOD_NAME: String = "method_name"
  val THREAD_NAME: String = "thread_name"
  val THROWABLE_STR: String = "throwable_str_rep"
  val LOG_DATE: String = "log_date"
  val LOG_TIME: String = "log_time"
  val MESSAGE: String = "log_msg"

  val dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS", Locale.US)
  def logDate: String = java.time.LocalDate.now.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"))
  def logTime: String = LocalDateTime.now.format(dateTimeFormatter).substring(11)

  def writeLog(event: ILoggingEvent)(session: Session, kspc: String, tbl: String)(appName: String, ip: String, hostName: String, default: String): ResultSet = {
    var qryInsert = QueryBuilder.insertInto(kspc,tbl)
      .value(APP_NAME,appName)
      .value(HOST_IP,ip)
      .value(HOST_NAME,hostName)
      .value(LOGGER_NAME,event.getLoggerName())
      .value(LEVEL,event.getLevel().toString)
      .value(THREAD_NAME,event.getThreadName())
      .value(LOG_DATE,logDate)
      .value(LOG_TIME,logTime)
    try {
      val callerData = event.getCallerData()
      if (callerData.nonEmpty) {
        qryInsert = qryInsert.value(CLASS_NAME, callerData.head.getClassName())
          .value(FILE_NAME, callerData.head.getFileName())
          .value(LINE_NUMBER, callerData.head.getLineNumber().toString)
          .value(METHOD_NAME, callerData.head.getMethodName())
      }
    } catch { case e: Throwable => println(s"logging event error: ${e.getMessage}") }
    try {
      if (event.getThrowableProxy() != null) {
        val throwableStrs = event.getThrowableProxy().getSuppressed().asInstanceOf[List[IThrowableProxy]]
        val throwableStr = throwableStrs.foldLeft("") { case (b, t) => b + "," + t.getMessage() }
        qryInsert = qryInsert.value(THROWABLE_STR, throwableStr)
      }
    } catch { case e: Throwable => println(s"logging event error: ${e.getMessage}") }
    var msg = event.getMessage()
    try {
      val logMap = fromJson[Map[String,String]](msg)
      logMap.foreach ( m => qryInsert = qryInsert.value(m._1, m._2))
    } catch {
      case e: Throwable =>
        qryInsert = qryInsert.value(MESSAGE, msg)
        try {
          val dftMap = fromJson[Map[String,String]](default)
          dftMap.foreach ( m => qryInsert = qryInsert.value(m._1, m._2))
        } catch { case e: Throwable => }
    }
    session.execute(qryInsert)
  }

  def getHostName(): String = {
    var hostname = "unknown"
    try {
      val addr: InetAddress = InetAddress.getLocalHost()
      hostname = addr.getHostName()
    } catch { case e: Throwable => hostname = "error" }
    hostname
  }

  def getIP(): String = {
    var ip: String = "unknown"
    try {
      val addr: InetAddress = InetAddress.getLocalHost()
      ip = addr.getHostAddress()
    } catch { case e: Throwable => ip = "error" }
    ip
  }
}
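One detail worth a second look is the logDate/logTime pair: the timestamp is formatted once and then split, with substring(11) dropping the "yyyy-MM-dd " prefix so that the date and time columns stay consistent with each other. A small standalone check of that split (the function name is mine, for illustration):

```scala
import java.time._
import java.time.format._

// Split one formatted timestamp into the (log_date, log_time) column values,
// the same way the companion object above does with substring(11).
def splitTimestamp(dt: LocalDateTime): (String, String) = {
  val full = dt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"))
  (full.take(10), full.drop(11))   // "yyyy-MM-dd" is 10 chars, plus 1 for the space
}
```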
Here is the test code:
import ch.qos.logback.classic.Logger
import ch.qos.logback.core.{ConsoleAppender, FileAppender}
import com.datatech.logback.{CassandraAppender, JsonConverter}
import ch.qos.logback.classic.spi.ILoggingEvent
import org.slf4j.LoggerFactory
import ch.qos.logback.classic.LoggerContext
import java.time._
import java.time.format._
import java.util.Locale
import scala.io._
import com.datastax.driver.core._

object LoggingDemo extends App with JsonConverter {
  val log: Logger = LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).asInstanceOf[Logger]
  val cassAppender = log.getAppender("cassandraLogger").asInstanceOf[CassandraAppender]
  val stdoutAppender = log.getAppender("STDOUT").asInstanceOf[ConsoleAppender[ILoggingEvent]]
  val fileAppender = log.getAppender("FILE").asInstanceOf[FileAppender[ILoggingEvent]]

  /*
  val cluster = new Cluster
    .Builder()
    .addContactPoints("192.168.0.189")
    .withPort(9042)
    .build()
  val session = cluster.connect()

  val keyspace = getClass.getResource("/logger.schema")
  val table = getClass.getResource("/txnlog.schema")
  val qrykspc = Source.fromFile(keyspace.getPath).getLines.mkString
  session.execute(qrykspc)
  val qrytbl = Source.fromFile(table.getPath).getLines.mkString
  session.execute(qrytbl)
  session.close()
  cluster.close()

  val json = toJson(loggedItems)
  println(s"json = $json")
  val m = fromJson[Map[String,String]](json)
  println(s"map = $m")

  //stop the appenders
  if (stdoutAppender != null) stdoutAppender.stop()
  if (fileAppender != null) fileAppender.stop()
  */

  if(cassAppender != null) {
    cassAppender.setHosts("192.168.0.189")
    cassAppender.setPort(9042)
    cassAppender.start()
  }

  val dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS", Locale.US)
  val now = LocalDateTime.now.format(dateTimeFormatter)
  log.info("************this is a info message ..." + now)
  log.debug("***********debugging message here ..." + now)

  var loggedItems = Map[String,String]()
  // loggedItems += ("app_name" -> "test")
  loggedItems = loggedItems ++ Map(
    ("app_customer" -> "logback.com"),
    ("app_device" -> "9101"),
    ("log_msg" -> "specific message for cassandra ..."))
  log.debug(toJson(loggedItems))

  //stop the logger
  val loggerContext = LoggerFactory.getILoggerFactory.asInstanceOf[LoggerContext]
  loggerContext.stop()
}