Getting Started with HBase Coprocessors


In my experience, the most fundamental and best article on HBase coprocessors is Coprocessor Introduction.

HBase coprocessors fall into two broad categories: observers and endpoints.

Observer: can be thought of as a trigger.

  • Region Observer : preGet, postGet, prePut, postPut...
  • WAL Observer
  • Master Observer : pre/post create/delete/split...

Endpoint: can be thought of as a stored procedure deployed to the server.

Deploying a coprocessor

Option 1: Static deployment

Configure the coprocessor in hbase-site.xml and rolling-restart the cluster; it then applies to every HBase table.

Likewise, unloading requires removing the entry from the configuration file and another rolling restart.

<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
</property>


Option 2: Dynamic deployment

You can deploy via the hbase shell or the Java API, but the coprocessor jar must first be placed somewhere every regionserver can reach — either a local directory or, strongly recommended, an HDFS path.

A dynamically deployed coprocessor only takes effect on the table it is attached to; see the steps below.

hbase shell deployment flow: 1. disable the table; 2. attach the coprocessor jar; 3. enable the table; 4. done.

Deploying a coprocessor on a single table (hbase shell)

## 1. Remove any previously attached coprocessor.
hbase(main):001:0> alter "test:ymxz", METHOD => 'table_att_unset', NAME => 'coprocessor$1'
Updating all regions with the new schema...
2/2 regions updated.
Done.
0 row(s) in 2.8560 seconds

## 2.disable table
hbase(main):002:0> disable "test:ymxz"
0 row(s) in 2.3070 seconds

## 3.assign coprocessor
hbase(main):003:0> alter "test:ymxz", 'coprocessor'=>'hdfs:///nameservice1/hbase/data/coprocessor/endpoint-coprocessor.jar|com.ymxz.hbase.Coprocessor.autogenerated.SumEndPoint||'
Updating all regions with the new schema...
2/2 regions updated.
Done.
0 row(s) in 2.5290 seconds

## 4.enable table
hbase(main):004:0> enable "test:ymxz"
0 row(s) in 1.3010 seconds

## 5. Verify the coprocessor is attached.
hbase(main):005:0> describe "test:ymxz"
Table test:ymxz is ENABLED
test:ymxz, {TABLE_ATTRIBUTES => {coprocessor$1 => 'hdfs:///nameservice1/hbase/data/coprocessor/endpoint-coprocessor.jar|com.ymxz.hbase.Coprocessor.autogenerated.SumEndPoint||'}
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE =
> 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s) in 0.0300 seconds

## The same attach step, pointing at the observer jar used in a later section:
hbase(main):006:0> alter "test:ymxz", 'coprocessor'=>'hdfs:///nameservice1/hbase/data/coprocessor/RegionObserverCoprocessor.jar|com.ymxz.cp.RegionObserverExample||'


Deploying a coprocessor on a single table (Java API)

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AddCoprocessorByApi {

    private static final Logger logger = LoggerFactory.getLogger(AddCoprocessorByApi.class);

    private static Configuration conf;
    private static String coreSite = "config" + File.separator + "core-site.xml";
    private static String hbaseSite = "config" + File.separator + "hbase-site.xml";
    private static String hdfsSite = "config" + File.separator + "hdfs-site.xml";
    private final String jarPath = "hdfs://nameservice1/hbase/data/coprocessor/endpoint-coprocessor.jar";
    private final TableName tableName = TableName.valueOf("test:ymxz");

    static {
        System.setProperty("HADOOP_USER_NAME", "hadoop");
        conf = HBaseConfiguration.create();
        conf.addResource(new Path(coreSite));
        conf.addResource(new Path(hbaseSite));
        conf.addResource(new Path(hdfsSite));
    }

    public void addCoprocessor() throws IOException {
        Connection connection = ConnectionFactory.createConnection(conf);
        logger.info("connected.");
        Admin admin = connection.getAdmin();
        // Caution: this drops and recreates the table, destroying any existing data.
        if (admin.tableExists(tableName)) {
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
        }
        HTableDescriptor descriptor = new HTableDescriptor(tableName);
        HColumnDescriptor columnDescriptor = new HColumnDescriptor("cf");
        descriptor.addFamily(columnDescriptor);
        admin.createTable(descriptor);
        logger.info("created.");
        admin.disableTable(tableName);
        logger.info("disabled.");
        // The "coprocessor$N" attribute key is matched case-insensitively;
        // HTableDescriptor.addCoprocessor(...) is the cleaner equivalent of setValue here.
        descriptor.setValue("COPROCESSOR$1", jarPath + "|" + "com.ymxz.hbase.Coprocessor.autogenerated.SumEndPoint" + "|" + Coprocessor.PRIORITY_USER);
        admin.modifyTable(tableName, descriptor);
        logger.info("modified");
        admin.enableTable(tableName);
        logger.info("enabled.");
        admin.close();
        connection.close();
    }

    public static void main(String[] args) {
        AddCoprocessorByApi addCoprocessorByApi = new AddCoprocessorByApi();
        try {
            addCoprocessorByApi.addCoprocessor();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}


Potential risks

1. A non-IOException thrown by a coprocessor can crash the regionserver; catch such exceptions and rethrow them as IOException.

2. Coprocessors share memory with the regionserver and can degrade its performance.

3. When building the jar, it is strongly recommended to include only your own classes, to avoid unnecessary classloading pitfalls.

4. Reference pom.xml:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.2.0</version>
    <scope>provided</scope>
</dependency>
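The first risk above recommends catching anything a hook body might throw and rethrowing it as an IOException, so the regionserver degrades gracefully instead of crashing. A minimal plain-Java sketch of that wrapping pattern (SafeHook and HookBody are hypothetical names for illustration, not HBase API):

```java
import java.io.IOException;

public class SafeHook {
    // Hypothetical functional interface standing in for the body of a coprocessor hook.
    interface HookBody {
        void run() throws Exception;
    }

    // Run the hook body, letting IOException through untouched and wrapping
    // everything else so the caller never sees a raw RuntimeException.
    static void runGuarded(HookBody body) throws IOException {
        try {
            body.run();
        } catch (IOException e) {
            throw e; // already the exception type HBase expects
        } catch (Exception e) {
            throw new IOException("coprocessor hook failed", e);
        }
    }

    // Returns the class name of the wrapped cause, for demonstration.
    static String demo() {
        try {
            runGuarded(() -> { throw new IllegalStateException("boom"); });
            return "no exception";
        } catch (IOException e) {
            return e.getCause().getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println("wrapped cause: " + demo()); // prints "wrapped cause: IllegalStateException"
    }
}
```

Inside a real prePut/getSum body, the catch-and-wrap would simply surround the hook's logic.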


Observer Coprocessor


import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.coprocessor.*;
import org.apache.hadoop.hbase.regionserver.Region;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionObserverExample extends BaseRegionObserver {
    private static final byte[] COL_FAMILY = Bytes.toBytes("cf");
    private static final byte[] CREATED_AT = Bytes.toBytes("created_at");
    // Note: SimpleDateFormat is not thread-safe and hooks run concurrently;
    // a production observer should use a per-call instance or a thread-safe formatter.
    private static final SimpleDateFormat DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> e, Put put, WALEdit edit, Durability durability) throws IOException {
        Region region = e.getEnvironment().getRegion();
        // Check whether this row already has a created_at cell.
        Get get = new Get(put.getRow());
        get.addColumn(COL_FAMILY, CREATED_AT);
        List<Cell> existFixedQua = region.get(get, false);

        if (existFixedQua == null || existFixedQua.isEmpty()) {
            String now = DATE_FORMAT.format(new Date());
            // Mutate the in-flight Put rather than issuing a new write (which would recurse).
            put.addColumn(COL_FAMILY, CREATED_AT, Bytes.toBytes(now));
            e.complete(); // skip any remaining coprocessors in the chain
        }
    }
}

Observed behavior:

hbase(main):016:0* scan "test:ymxz"
ROW                                                    COLUMN+CELL
0 row(s) in 0.0630 seconds

hbase(main):017:0>
hbase(main):018:0* put "test:ymxz", '001', 'cf:city', 'beijing'
0 row(s) in 0.1110 seconds

hbase(main):019:0> scan "test:ymxz"
ROW                                                    COLUMN+CELL
 001                                                   column=cf:city, timestamp=1546919507916, value=beijing
 001                                                   column=cf:created_at, timestamp=1546919507916, value=2019-01-08 11:51:47
1 row(s) in 0.0170 seconds

Endpoint Coprocessor

Implementing an endpoint coprocessor requires: 1. defining the protocol in a .proto file; 2. compiling the .proto file with protobuf 2.5.0 to generate the .java source; 3. writing the core class, which must (a) extend the service class generated in step 2 and (b) implement the two interfaces Coprocessor and CoprocessorService; 4. writing client code that calls the protocol defined in step 1.

The official outline of the steps:

1. Create a '.proto' file defining your service.

2. Execute the 'protoc' command to generate the Java code from the above '.proto' file.

3. Write a class that should:
    • Extend the above generated service class.
    • Implement the two interfaces Coprocessor and CoprocessorService.
    • Override the service method.

4. Load the Coprocessor.

5. Write client code to call the Coprocessor.

Download and build protobuf (HBase 1.2.0 requires protobuf 2.5.0):

# download the compiler from
# https://github.com/protocolbuffers/protobuf/releases

cd ~/Downloads/protobuf-2.5.0
./configure
make
make check
sudo make install
which protoc
protoc --version

# compile the .proto file into Java source
# (assuming the protocol file below is saved as Sum.proto)
protoc --java_out=src/main/java Sum.proto

Following the steps above plus the official documentation:

The protocol file:

option java_package = "com.ymxz.hbase.Coprocessor.autogenerated";
option java_outer_classname = "Sum";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message SumRequest {
    required string family = 1;
    required string column = 2;
}

message SumResponse {
  required int64 sum = 1 [default = 0];
}

service SumService {
  rpc getSum(SumRequest)
    returns (SumResponse);
}

Implementing the getSum service:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.protobuf.ResponseConverter;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.util.Bytes;

public class SumEndPoint extends Sum.SumService implements Coprocessor, CoprocessorService {
    private RegionCoprocessorEnvironment env;

    @Override
    public void getSum(RpcController controller, Sum.SumRequest request, RpcCallback<Sum.SumResponse> done) {
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes(request.getFamily()));
        scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));
        Sum.SumResponse response = null;
        InternalScanner scanner = null;
        try {
            scanner = env.getRegion().getScanner(scan);
            List<Cell> results = new ArrayList<>();
            boolean hasMore = false;
            long sum = 0L;
            do {
                hasMore = scanner.next(results);
                for (Cell cell : results) {
                    // assumes the values were written as 8-byte longs (Bytes.toBytes(long))
                    sum += Bytes.toLong(CellUtil.cloneValue(cell));
                }
                results.clear();
            } while (hasMore);

            response = Sum.SumResponse.newBuilder().setSum(sum).build();
        } catch (IOException e) {
            ResponseConverter.setControllerException(controller, e);
        } finally {
            if (scanner != null) {
                try {
                    scanner.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        done.run(response);
    }

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        if (env instanceof RegionCoprocessorEnvironment) {
            this.env = (RegionCoprocessorEnvironment) env;
        } else {
            throw new CoprocessorException("must be loaded on a table region");
        }
    }

    @Override
    public void stop(CoprocessorEnvironment env) throws IOException {
        System.out.println("stop");
    }

    @Override
    public Service getService() {
        return this;
    }
}


Writing data into the table:

// COLUMN_FAMILY and QUALIFER_NUM are byte[] constants defined elsewhere in the class.
public void putData() throws IOException {
    Connection connection = ConnectionFactory.createConnection(conf);
    Table table = connection.getTable(tableName);

    logger.info("begin to put...");
    long flag = 0L;
    while (flag < 100) {
        Put put = new Put(Bytes.toBytes(String.valueOf(flag)));
        // store the value as an 8-byte long so the endpoint's Bytes.toLong() can read it back
        put.addColumn(COLUMN_FAMILY, QUALIFER_NUM, Bytes.toBytes(flag));
        table.put(put);
        flag++;
    }
    logger.info("put over...");
    table.close();
    connection.close();
}

Calling from the client:

...
final Sum.SumRequest request = Sum.SumRequest.newBuilder()
        .setFamily("cf").setColumn("num")
        .build();

try {
    // null start/end row keys: invoke the endpoint on every region of the table
    Map<byte[], Long> results = table.coprocessorService(Sum.SumService.class, null, null,
            new Batch.Call<Sum.SumService, Long>() {
                @Override
                public Long call(Sum.SumService instance) throws IOException {
                    BlockingRpcCallback<Sum.SumResponse> rpcCallback = new BlockingRpcCallback<>();
                    instance.getSum(null, request, rpcCallback);
                    Sum.SumResponse response = rpcCallback.get();
                    return response.hasSum() ? response.getSum() : 0L;
                }
            });

    // each region returns a partial sum; add them up on the client
    long total = 0L;
    for (Long sum : results.values()) {
        total += sum;
    }
    logger.info("Sum = {}", total);
} catch (ServiceException e) {
    e.printStackTrace();
} catch (Throwable throwable) {
    throwable.printStackTrace();
}
...

The result (0 + 1 + … + 99 = 4950, matching the 100 rows written by putData):

2019-01-17 10:56:33 [INFO ] 2019-01-17 10:56:33,453(21871) --> [main] ClientTest.main(ClientTest.java:57): Sum = 4950  

Errors encountered

Pitfalls hit along the way; some are still unresolved. The first error, UnknownProtocolException, typically means the endpoint was never successfully registered on the table (the alter did not take effect, or loading the coprocessor failed):

2019-01-02 19:22:29 [WARN ] 2019-01-02 19:22:29,622(22893) --> [hconnection-0x3cc2931c-shared--pool1-t1] org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:58): Call failed on IOException  
org.apache.hadoop.hbase.exceptions.UnknownProtocolException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name SumService in region mysql2hbase,,1542168563860.0730ce3d070509a19c99954588f6e3a4.
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7972)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:328)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1625)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at com.ymxz.hbase.Coprocessor.autogenerated.Sum$SumService$Stub.getSum(Sum.java:1328)
    at ClientTest$1.call(ClientTest.java:48)
    at ClientTest$1.call(ClientTest.java:44)
    at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1732)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.UnknownProtocolException): org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name SumService in region mysql2hbase,,1542168563860.0730ce3d070509a19c99954588f6e3a4.
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7972)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1268)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1621)
    ... 13 more
2019-01-02 19:22:29 [WARN ] 2019-01-02 19:22:29,627(22898) --> [main] org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1746): Error calling coprocessor service com.ymxz.hbase.Coprocessor.autogenerated.Sum$SumService for row   
java.util.concurrent.ExecutionException: java.lang.NullPointerException
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1744)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1700)
    at ClientTest.main(ClientTest.java:44)
Caused by: java.lang.NullPointerException
    at ClientTest$1.call(ClientTest.java:50)
    at ClientTest$1.call(ClientTest.java:44)
    at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1732)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
java.lang.NullPointerException
    at ClientTest$1.call(ClientTest.java:50)
    at ClientTest$1.call(ClientTest.java:44)
    at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1732)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Process finished with exit code 0

A server-side error, resolved by marking every dependency used by the local jar as <scope>provided</scope>: the LinkageError below occurs when the jar bundles its own copy of protobuf's Service class, which then clashes with the copy already on the regionserver classpath.

2019-01-02 19:21:09,064 ERROR org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost: Failed to load coprocessor com.ymxz.hbase.Coprocessor.autogenerated.SumEndPoint
java.lang.LinkageError: loader constraint violation in interface itable initialization: when resolving method "com.ymxz.hbase.Coprocessor.autogenerated.SumEndPoint.getService()Lcom/google/protobuf/Service;" the class loader (instance of org/apache/hadoop/hbase/util/CoprocessorClassLoader) of the current class, com/ymxz/hbase/Coprocessor/autogenerated/SumEndPoint, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for interface org/apache/hadoop/hbase/coprocessor/CoprocessorService have different Class objects for the type com/google/protobuf/Service used in the signature
        at java.lang.Class.getDeclaredConstructors0(Native Method)
        at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
        at java.lang.Class.getConstructor0(Class.java:3075)
        at java.lang.Class.newInstance(Class.java:412)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:245)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:208)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:364)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:226)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:726)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:634)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6294)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6598)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6570)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6526)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6477)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)


This one was caused by infinite recursion: calling region.put() from inside prePut() re-enters prePut(), producing the StackOverflowError below:

Caused by: java.lang.StackOverflowError
    at java.security.AccessController.doPrivileged(Native Method)
    at java.io.PrintWriter.<init>(PrintWriter.java:116)
    at java.io.PrintWriter.<init>(PrintWriter.java:100)
    at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
    at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
    at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
    at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
    at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
    at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.log(Category.java:856)
    at org.apache.commons.logging.impl.Log4JLogger.error(Log4JLogger.java:229)
    at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.handleCoprocessorThrowable(CoprocessorHost.java:562)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1751)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.prePut(RegionCoprocessorHost.java:914)
    at org.apache.hadoop.hbase.regionserver.HRegion.doPreMutationHook(HRegion.java:2965)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2940)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2890)
    at org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:3637)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:2763)
    at com.ymxz.cp.RegionObserverExample.prePut(RegionObserverExample.java:39)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:918)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.prePut(RegionCoprocessorHost.java:914)
    at org.apache.hadoop.hbase.regionserver.HRegion.doPreMutationHook(HRegion.java:2965)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2940)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2890)
    at org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:3637)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:2763)

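The call-graph shape of that bug can be reproduced in plain Java (no HBase involved; all names here are illustrative): a hook that issues a fresh write re-enters itself, while the safe version only mutates the in-flight mutation, as the RegionObserverExample above does via put.addColumn(...).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PrePutRecursionDemo {
    static List<String> store = new ArrayList<>();

    // Buggy shape: the hook issues a brand-new put(), which runs the hook again
    // -> unbounded recursion -> StackOverflowError.
    static void putBuggy(String cell) {
        prePutBuggy();
        store.add(cell);
    }
    static void prePutBuggy() {
        putBuggy("created_at");
    }

    // Safe shape: the hook only mutates the in-flight batch; no new write is issued.
    static List<String> putSafe(String cell) {
        List<String> batch = new ArrayList<>(Arrays.asList(cell));
        batch.add("created_at"); // mutate the pending mutation instead of writing
        store.addAll(batch);
        return batch;
    }

    public static void main(String[] args) {
        try {
            putBuggy("city");
        } catch (StackOverflowError e) {
            System.out.println("buggy pattern: StackOverflowError");
        }
        System.out.println("safe pattern wrote: " + putSafe("city"));
    }
}
```

The same distinction explains why the observer earlier calls put.addColumn rather than region.put.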

References

  • protocol buffers Language Guide
  • THE HOW TO OF HBASE COPROCESSORS
  • coprocessor - uses, abuses, solutions