1. Introduction
2. Installation Steps and Notes on Problems
3. Deployment and Configuration
4. Java Client Test
5. References
A few notes before starting:

1. The installation and deployment below are based on CentOS 6 (64-bit); other Linux distributions may differ slightly.
2. Some posts online claim that Tair installation failures are mostly caused by the gcc version, i.e. that a newer gcc may not support certain features and therefore breaks the build. Experiments show this claim is wrong: a Tair build can fail for all sorts of reasons, but none of them are related to the gcc version. For example, my gcc was originally 4.4.7; after the Tair build failed I rebuilt with an older gcc (4.1.2) and hit exactly the same problem. The real cause turned out to be something else, and after fixing it the build succeeded again with gcc 4.4.7.
3. Part of the content below is based on the official Tair documentation. Please credit the original article when reposting.

1. Introduction

Tair is a distributed key/value storage engine developed in-house at Taobao. Tair can run in persistent or non-persistent mode: non-persistent Tair can be treated as a distributed cache, while persistent Tair stores data on disk. To cope with data loss caused by disk failure, Tair lets you configure the number of backup copies of the data; Tair automatically places the different copies of a piece of data on different hosts, so that when one host fails and can no longer serve requests, the remaining copies keep serving.
2. Installation Steps and Notes on Problems

2.1 Installation Steps
Tair's implementation depends on the low-level libraries tbsys and tbnet, so these two libraries must be installed before Tair itself.
2.1.1 Get the source code

The source code is fetched via svn; the svn client can be installed with sudo yum install subversion.

- svn checkout http://code.taobao.org/svn/tb-common-utils/trunk/ tb-common-utils   # source of tbsys and tbnet
- svn checkout http://code.taobao.org/svn/tair/trunk/ tair                         # source of tair
2.1.2 Install dependencies

A few libraries and tools are needed before building tair or tbnet/tbsys. It is best to first check whether they are already present; on an rpm-based OS you can check with rpm -q <package name>.

a. Install libtool
sudo yum install libtool        # also installs automake and autoconf, which libtool depends on

b. Install the boost-devel library
sudo yum install boost-devel

c. Install the zlib library
sudo yum install zlib-devel

2.1.3 Build and install tbsys and tbnet
Tair depends on the tbsys and tbnet libraries, so these two must be built and installed first.

a. Set the TBLIB_ROOT environment variable

After fetching the source, set the environment variable TBLIB_ROOT to the directory you want the libraries installed into. This variable is used again later when building Tair. For example, to install under the current user's lib directory, set export TBLIB_ROOT="~/lib".

b. Build and install

Enter the source directory and run build.sh. The sketch below puts these steps together.
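A minimal consolidated sketch of the tbsys/tbnet build, assuming the source was checked out into tb-common-utils with the svn command above and that ~/lib is the chosen install directory:

# install prefix used by both the tb-common-utils build and the later tair build
export TBLIB_ROOT="$HOME/lib"     # $HOME/lib instead of "~/lib" avoids relying on tilde expansion inside quotes

# build and install tbsys and tbnet
cd tb-common-utils
sh build.sh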
2.1.4 Build and install tair

Enter the tair source directory and build in the following order:

./bootstrap.sh
./configure     # --with-boost=xxxx points at a specific boost directory; --with-release=yes builds the release version
make
make install

After a successful install, a directory named tair_bin is created under the current user's home directory; this is Tair's installation directory.
2.2 Notes on problems

The installation was not entirely smooth; quite a few problems came up along the way. They are briefly recorded here for reference.
2.2.1 g++ not installed

checking for C++ compiler default output file name...
configure: error: in `/home/config_server/tair/tb-common-utils/tbnet':
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
make: *** No targets specified and no makefile found. Stop.
make: *** No rule to make target `install'. Stop.

This output means gcc is installed but g++ is not. Tair is written in C++, so it can only be compiled with g++; installing it with sudo yum install gcc-c++ resolves the problem.
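To catch this situation up front, one option is to verify that g++ is actually present before running configure (a small sketch; gcc-c++ is the package name used above):

# check whether the C++ compiler is present; install it if the query fails
rpm -q gcc-c++ || sudo yum install gcc-c++
g++ --version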
2.2.2 Header file path errors

In file included from channel.cpp:16:
tbnet.h:39:19: error: tbsys.h: No such file or directory
databuffer.h: In member function 'void tbnet::DataBuffer::expand(int)':
databuffer.h:429: error: 'ERROR' was not declared in this scope
databuffer.h:429: error: 'TBSYS_LOG' was not declared in this scope
socket.h: At global scope:
socket.h:191: error: 'tbsys' has not been declared
socket.h:191: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
socket.h:191: error: expected ';' before '_dnsMutex'
channelpool.h:85: error: 'tbsys' has not been declared
channelpool.h:85: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
channelpool.h:85: error: expected ';' before '_mutex'
channelpool.h:93: error: 'atomic_t' does not name a type
channelpool.h:94: error: 'atomic_t' does not name a type
connection.h:164: error: 'tbsys' has not been declared
connection.h:164: error: ISO C++ forbids declaration of 'CThreadCond' with no type
connection.h:164: error: expected ';' before '_outputCond'
iocomponent.h:184: error: 'atomic_t' does not name a type
iocomponent.h: In member function 'int tbnet::IOComponent::addRef()':
iocomponent.h:108: error: '_refcount' was not declared in this scope
iocomponent.h:108: error: 'atomic_add_return' was not declared in this scope
iocomponent.h: In member function 'void tbnet::IOComponent::subRef()':
iocomponent.h:115: error: '_refcount' was not declared in this scope
iocomponent.h:115: error: 'atomic_dec' was not declared in this scope
iocomponent.h: In member function 'int tbnet::IOComponent::getRef()':
iocomponent.h:122: error: '_refcount' was not declared in this scope
iocomponent.h:122: error: 'atomic_read' was not declared in this scope
transport.h: At global scope:
transport.h:23: error: 'tbsys' has not been declared
transport.h:23: error: expected `{' before 'Runnable'
transport.h:23: error: invalid function declaration
packetqueuethread.h:28: error: 'tbsys' has not been declared
packetqueuethread.h:28: error: expected `{' before 'CDefaultRunnable'
packetqueuethread.h:28: error: invalid function declaration
connectionmanager.h:93: error: 'tbsys' has not been declared
connectionmanager.h:93: error: ISO C++ forbids declaration of 'CThreadMutex' with no type
connectionmanager.h:93: error: expected ';' before '_mutex'
make[1]: *** [channel.lo] Error 1
make[1]: Leaving directory `/home/tair/tair/tb-common-utils/tbnet/src'
make: *** [install-recursive] Error 1

tbnet and tbsys live in two separate directories, but their source files include each other's headers without any absolute or relative path. Adding both source directories to the C++ include path fixes the build (the libraries themselves had already been installed under ~/lib; the compiler simply could not find the headers):

CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tair/tair/tb-common-utils/tbsys/src:/home/tair/tair/tb-common-utils/tbnet/src
export CPLUS_INCLUDE_PATH
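Since the export above only lives in the current shell, one option (an assumption about your setup, not something the original steps require) is to persist it in ~/.bashrc so later rebuilds pick it up automatically:

# append the include-path export to ~/.bashrc (adjust the paths to your own checkout location)
cat >> ~/.bashrc <<'EOF'
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/tair/tair/tb-common-utils/tbsys/src:/home/tair/tair/tb-common-utils/tbnet/src
EOF
source ~/.bashrc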
3. Deployment and Configuration
A Tair cluster needs at least one config server and one data server to run; the recommended setup is two config servers (one master, one slave) plus multiple data servers. Tair has three configuration files, covering the config server, the data server, and the group information. Sample files are provided in the etc directory under the tair_bin installation directory; copy them to create the actual configuration files:

cp configserver.conf.default configserver.conf
cp dataserver.conf.default dataserver.conf
cp group.conf.default group.conf
My deployment environment: one config server (10.10.7.144) and one data server (10.10.7.146), both running CentOS 6 (64-bit).

Before configuring, please consult the detailed field descriptions for the configuration files on the official site. Below are my own configuration files with brief notes.
3.1 Configure the config server
#
#  tair 2.3 --- configserver config
#

[public]
config_server=10.10.7.144:51980
config_server=10.10.7.144:51980

[configserver]
port=51980
log_file=/home/dataserver1/tair_bin/logs/config.log
pid_file=/home/dataserver1/tair_bin/logs/config.pid
log_level=warn
group_file=/home/dataserver1/tair_bin/etc/group.conf
data_dir=/home/dataserver1/tair_bin/data/data
dev_name=venet0:0
Notes:

(1) The config server addresses and ports must be set first. The port can stay at its default; the address should be changed to your own. Normally there are one master and one slave config server, but since this is only a test deployment, a single config server is used here (so the same address appears twice).

(2) Paths such as log_file and pid_file are best given as absolute paths. The defaults are relative paths, and incorrect ones at that (they do not go back up to the parent directory), so they need to be changed. The data and log files matter a lot: the data files are indispensable, and the log files are what give you the detailed cause when a deployment goes wrong.

(3) dev_name is very important: it must be set to the name of the network interface you actually use (the default is eth0). I changed it to match my own network setup; the interface name can be checked with ifconfig, as shown in the sketch after this list.
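A small sketch for double-checking the two values that most often need changing here, the interface name for dev_name and the config server port (venet0:0 and 51980 are simply the values from my environment):

# list the network interfaces and their addresses; use the matching name for dev_name
ifconfig

# make sure nothing else is already listening on the chosen config server port
netstat -tln | grep 51980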
3.2 Configure the data server

#
#  tair 2.3 --- tairserver config
#

[public]
config_server=10.10.7.144:51980
config_server=10.10.7.144:51980

[tairserver]
#
#storage_engine:
#
# mdb
# kdb
# ldb
#
storage_engine=ldb
local_mode=0
#
#mdb_type:
# mdb
# mdb_shm
#
mdb_type=mdb_shm
#
# if you just run 1 tairserver on a computer, you may ignore this option.
# if you want to run more than 1 tairserver on a computer, each tairserver must have their own "mdb_shm_path"
#
#
mdb_shm_path=/mdb_shm_path01
#tairserver listen port
port=51910
heartbeat_port=55910
process_thread_num=16
#
#mdb size in MB
#
slab_mem_size=1024
log_file=/home/dataserver1/tair_bin/logs/server.log
pid_file=/home/dataserver1/tair_bin/logs/server.pid
log_level=warn
dev_name=venet0:0
ulog_dir=/home/dataserver1/tair_bin/data/ulog
ulog_file_number=3
ulog_file_size=64
check_expired_hour_range=2-4
check_slab_hour_range=5-7
dup_sync=1
do_rsync=0
# much resemble json format
# one local cluster config and one or multi remote cluster config.
# {local:[master_cs_addr,slave_cs_addr,group_name,timeout_ms,queue_limit],remote:[...],remote:[...]}
rsync_conf={local:[10.0.0.1:5198,10.0.0.2:5198,group_local,2000,1000],remote:[10.0.1.1:5198,10.0.1.2:5198,group_remote,2000,3000]}
# if same data can be updated in local and remote cluster, then we need care modify time to
# reserve latest update when do rsync to each other.
rsync_mtime_care=0
# rsync data directory(retry_log/fail_log..)
rsync_data_dir=/home/dataserver1/tair_bin/data/remote
# max log file size to record failed rsync data, rotate to a new file when over the limit
rsync_fail_log_size=30000000
# whether do retry when rsync failed at first time
rsync_do_retry=0
# when doing retry, size limit of retry log's memory use
rsync_retry_log_mem_size=100000000

[fdb]
# in MB
index_mmap_size=30
cache_size=256
bucket_size=10223
free_block_pool_size=8
data_dir=/home/dataserver1/tair_bin/data/fdb
fdb_name=tair_fdb

[kdb]
# in byte
map_size=10485760      # the size of the internal memory-mapped region
bucket_size=1048583    # the number of buckets of the hash table
record_align=128       # the power of the alignment of record size
data_dir=/home/dataserver1/tair_bin/data/kdb    # the directory of kdb's data

[ldb]
#### ldb manager config
## data dir prefix, db path will be data/ldbxx, "xx" means db instance index.
## so if ldb_db_instance_count = 2, then leveldb will init in
## /data/ldb1/ldb/, /data/ldb2/ldb/. We can mount each disk to
## data/ldb1, data/ldb2, so we can init each instance on each disk.
data_dir=/home/dataserver1/tair_bin/data/ldb
## leveldb instance count, buckets will be well-distributed to instances
ldb_db_instance_count=1
## whether load backup version when startup.
## backup version may be created to maintain some db data of specifid version.
ldb_load_backup_version=0
## whether support version strategy.
## if yes, put will do get operation to update existed items's meta info(version .etc),
## get unexist item is expensive for leveldb. set 0 to disable if nobody even care version stuff.
ldb_db_version_care=1
## time range to compact for gc, 1-1 means do no compaction at all
ldb_compact_gc_range = 3-6
## backgroud task check compact interval (s)
ldb_check_compact_interval = 120
## use cache count, 0 means NOT use cache,`ldb_use_cache_count should NOT be larger
## than `ldb_db_instance_count, and better to be a factor of `ldb_db_instance_count.
## each cache mdb's config depends on mdb's config item(mdb_type, slab_mem_size, etc)
ldb_use_cache_count=1
## cache stat can't report configserver, record stat locally, stat file size.
## file will be rotate when file size is over this.
ldb_cache_stat_file_size=20971520
## migrate item batch size one time (1M)
ldb_migrate_batch_size = 3145728
## migrate item batch count.
## real batch migrate items depends on the smaller size/count
ldb_migrate_batch_count = 5000
## comparator_type bitcmp by default
# ldb_comparator_type=numeric
## numeric comparator: special compare method for user_key sorting in order to reducing compact
## parameters for numeric compare. format: [meta][prefix][delimiter][number][suffix]
## skip meta size in compare
# ldb_userkey_skip_meta_size=2
## delimiter between prefix and number
# ldb_userkey_num_delimiter=:
####
## use blommfilter
ldb_use_bloomfilter=1
## use mmap to speed up random acess file(sstable),may cost much memory
ldb_use_mmap_random_access=0
## how many highest levels to limit compaction
ldb_limit_compact_level_count=0
## limit compaction ratio: allow doing one compaction every ldb_limit_compact_interval
## 0 means limit all compaction
ldb_limit_compact_count_interval=0
## limit compaction time interval
## 0 means limit all compaction
ldb_limit_compact_time_interval=0
## limit compaction time range, start == end means doing limit the whole day.
ldb_limit_compact_time_range=6-1
## limit delete obsolete files when finishing one compaction
ldb_limit_delete_obsolete_file_interval=5
## whether trigger compaction by seek
ldb_do_seek_compaction=0
## whether split mmt when compaction with user-define logic(bucket range, eg)
ldb_do_split_mmt_compaction=0

#### following config effects on FastDump ####
## when ldb_db_instance_count > 1, bucket will be sharded to instance base on config strategy.
## current supported:
##  hash : just do integer hash to bucket number then module to instance, instance's balance may be
##         not perfect in small buckets set. same bucket will be sharded to same instance
##         all the time, so data will be reused even if buckets owned by server changed(maybe cluster has changed),
##  map  : handle to get better balance among all instances. same bucket may be sharded to different instance based
##         on different buckets set(data will be migrated among instances).
ldb_bucket_index_to_instance_strategy=map
## bucket index can be updated. this is useful if the cluster wouldn't change once started
## even server down/up accidently.
ldb_bucket_index_can_update=1
## strategy map will save bucket index statistics into file, this is the file's directory
ldb_bucket_index_file_dir=/home/dataserver1/tair_bin/data/bindex
## memory usage for memtable sharded by bucket when batch-put(especially for FastDump)
ldb_max_mem_usage_for_memtable=3221225472
####

#### leveldb config (Warning: you should know what you're doing.)
## one leveldb instance max open files(actually table_cache_ capacity, consider as working set, see `ldb_table_cache_size)
ldb_max_open_files=655
## whether return fail when occure fail when init/load db, and
## if true, read data when compactiong will verify checksum
ldb_paranoid_check=0
## memtable size
ldb_write_buffer_size=67108864
## sstable size
ldb_target_file_size=8388608
## max file size in each level. level-n (n > 0): (n - 1) * 10 * ldb_base_level_size
ldb_base_level_size=134217728
## sstable's block size
# ldb_block_size=4096
## sstable cache size (override `ldb_max_open_files)
ldb_table_cache_size=1073741824
##block cache size
ldb_block_cache_size=16777216
## arena used by memtable, arena block size
#ldb_arenablock_size=4096
## key is prefix-compressed period in block,
## this is period length(how many keys will be prefix-compressed period)
# ldb_block_restart_interval=16
## specifid compression method (snappy only now)
# ldb_compression=1
## compact when sstables count in level-0 is over this trigger
ldb_l0_compaction_trigger=1
## write will slow down when sstables count in level-0 is over this trigger
## or sstables' filesize in level-0 is over trigger * ldb_write_buffer_size if ldb_l0_limit_write_with_count=0
ldb_l0_slowdown_write_trigger=32
## write will stop(wait until trigger down)
ldb_l0_stop_write_trigger=64
## when write memtable, max level to below maybe
ldb_max_memcompact_level=3
## read verify checksum
ldb_read_verify_checksums=0
## write sync log. (one write will sync log once, expensive)
ldb_write_sync=0
## bits per key when use bloom filter
#ldb_bloomfilter_bits_per_key=10
## filter data base logarithm. filterbasesize=1<<ldb_filter_base_logarithm
#ldb_filter_base_logarithm=12
This configuration file is quite long; I only changed a handful of items (marked in red in the original post) and kept the defaults for everything else. In particular:

(1) The config_server entries must be exactly the same as in configserver.conf.

(2) port and heartbeat_port are the data server's service port and heartbeat port; make sure the system actually lets you use them. The defaults are usually fine. I changed them only because my Linux environment only allows ordinary users to use ports above 30000; adjust according to your own situation. The sketch after this list shows a quick way to check.

(3) The data and log paths are, as before, very important and best given as absolute paths.
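A quick way to sanity-check the chosen ports before starting the data server (51910 and 55910 are the values from my config; substitute your own):

# check that the data server ports are not already in use
netstat -tln | grep -E '51910|55910'
# no output means neither port is currently taken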
3.3 Configure the group information

#group name
[group_1]
# data move is 1 means when some data serve down, the migrating will be start.
# default value is 0
_data_move=0
#_min_data_server_count: when data servers left in a group less than this value, config server will stop serve for this group
#default value is copy count.
_min_data_server_count=1
#_plugIns_list=libStaticPlugIn.so
_build_strategy=1 #1 normal 2 rack
_build_diff_ratio=0.6 #how much difference is allowd between different rack
# diff_ratio =  |data_sever_count_in_rack1 - data_server_count_in_rack2| / max (data_sever_count_in_rack1, data_server_count_in_rack2)
# diff_ration must less than _build_diff_ratio
_pos_mask=65535  # 65535 is 0xffff  this will be used to gernerate rack info. 64 bit serverId & _pos_mask is the rack info,
_copy_count=1
_bucket_number=1023
# accept ds strategy. 1 means accept ds automatically
_accept_strategy=1

# data center A
_server_list=10.10.7.146:51910
#_server_list=192.168.1.2:5191
#_server_list=192.168.1.3:5191
#_server_list=192.168.1.4:5191

# data center B
#_server_list=192.168.2.1:5191
#_server_list=192.168.2.2:5191
#_server_list=192.168.2.3:5191
#_server_list=192.168.2.4:5191

#quota info
_areaCapacity_list=0,1124000;

In this file I only configured the data server list; since I have a single data server, only one entry is needed.
Once installation and configuration are complete, the cluster can be started. Start the data server(s) first, then the config server(s). If you are adding a data server to an existing cluster, start the data server process first and then modify group.conf; if you modify group.conf before starting the process, you need to run touch group.conf afterwards. The scripts directory contains a helper script, tair.sh: tair.sh start_ds starts a data server and tair.sh start_cs starts a config server. The script is fairly simple and requires the configuration files to be in fixed locations with fixed names. Alternatively, you can start the cluster by running tair_server (data server) and tair_cfg_svr (config server) from the installation directory directly.

Enter the tair_bin directory and start the processes in order:
sudo sbin/tair_server -f etc/dataserver.conf      # run on the data server machine
sudo sbin/tair_cfg_svr -f etc/configserver.conf   # run on the config server machine
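Equivalently, the helper script mentioned above can be used (a sketch, assuming tair.sh sits in the scripts directory as described and that the configuration files are in the default locations it expects):

# on the data server machine
sudo scripts/tair.sh start_ds

# on the config server machine
sudo scripts/tair.sh start_cs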
After running the start commands, check on both machines with ps aux | grep tair whether the processes are running. Getting the processes up is only the first step; you still need to verify that the cluster actually works, using the following test:
sudo sbin/tairclient -c 10.10.7.144:51980 -g group_1
TAIR> put k1 v1
put: success
TAIR> put k2 v2
put: success
TAIR> get k2
KEY: k2, LEN: 2
Here 10.10.7.144:51980 is the config server's IP:PORT, and group_1 is the group name configured in group.conf.
If startup fails, or put/get misbehaves during the test, check logs/config.log on the config server side and logs/server.log on the data server side; they contain detailed error messages.
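For example, a simple way to watch both logs while restarting the processes (the paths are the ones configured above):

# on the config server machine
tail -f /home/dataserver1/tair_bin/logs/config.log

# on the data server machine
tail -f /home/dataserver1/tair_bin/logs/server.log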
Because I chose ldb as the storage engine, and ldb has a setting ldb_max_open_files=65535 (i.e. by default it may open up to 65535 files), I ran into trouble: my system does not allow that many. The per-process open file limit can be checked with "ulimit -n"; it is usually 1024, far less than 65535. There are two ways to fix this: either lower ldb_max_open_files below 1024, or raise the system's open file limit (the references below include instructions for that). Since this is only a test deployment, I simply lowered ldb_max_open_files. The symptom looks like this:

[2014-07-09 10:37:24.863119] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001013.stat] failed: Too many open files
[2014-07-09 10:37:24.863132] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001014.stat] failed: Too many open files
[2014-07-09 10:37:24.863145] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001015.stat] failed: Too many open files
[2014-07-09 10:37:24.863154] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001016.stat] failed: Too many open files
[2014-07-09 10:37:24.863162] ERROR start (stat_manager.cpp:30) [139767832377088] open file [/home/dataserver1/tair_bin/data/ldb1/ldb/tair_db_001017.stat] failed: Too many open files
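If you prefer the second option, raising the system limit instead of lowering ldb_max_open_files, a rough sketch is below (the exact limits and the need to log in again depend on your distribution):

# check the current per-process open file limit (usually 1024)
ulimit -n

# raise the soft limit for this shell only (cannot exceed the hard limit)
ulimit -n 65535

# for a permanent change, add lines like these to /etc/security/limits.conf and log in again:
#   *  soft  nofile  65535
#   *  hard  nofile  65535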
A data server that is not configured properly will report all kinds of errors; below are some of the ones I ran into.

Problem 1:

TAIR> put abc a
put: unknow
TAIR> put a 11
put: unknow
TAIR> put abc 33
put: unknow
TAIR> get a
get failed: data not exists.
Problem 2:

ERROR wakeup_wait_object (../../src/common/wait_object.hpp:302) [140627106383616] [3] packet is null

Both of these are cases where the data server appeared to start, but reported errors on put/get and then immediately went down. When that happens, check the logs for the detailed error and fix the offending configuration.
There is also this kind of error:

[2014-07-09 09:08:11.646430] ERROR rebuild (group_info.cpp:879) [139740048353024] can not get enough data servers. need 1 lef 0

This means the config server could not find any data server at startup; the data servers must be started successfully before the config server.
start tair_cfg_srv listen port 5199 error

Sometimes even the default ports cannot be used; they have to be chosen within whatever limits the system imposes. In my environment, ordinary users may only use ports above 30000, so I could not keep the defaults and had to change them.
4. Java Client Test

Tair is a distributed key/value storage system, and data is generally stored on multiple data nodes. The client has to determine which node holds a given piece of data before it can perform any operation.

The Tair client obtains this information by talking to the config server. The config server maintains a table that maps hash values to the nodes storing the corresponding data.

At startup, the client first contacts the config server to fetch this routing table.

Once it has the table, the client is ready to serve requests: it hashes the key of each request, looks up the responsible data node in the table, and then talks to that data node directly to complete the operation.
Tair currently provides clients for Java and C++. The Java client already has a working implementation (the corresponding jar can be downloaded), so we can use its API directly; a C++ client implementation did not seem to be available yet (you would have to write it yourself).

Here the simple Java client is used for the test.

Besides the packaged tair jar itself, the Java test program needs a few jars that tair depends on, specifically the following (the version numbers do not have to match exactly):
commons-logging-1.1.3.jar
slf4j-api-1.7.7.jar
slf4j-log4j12-1.7.7.jar
log4j-1.2.17.jar
mina-core-1.1.7.jar
tair-client-2.3.1.jar
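Since I ran the test on Linux with a Makefile rather than from Eclipse (see the note at the end), here is a rough sketch of compiling and running the example by hand; the lib/ directory holding the jars is an assumed layout, and the class/package names match the example below:

# compile the test class with the required jars (assumed to be under ./lib) on the classpath
javac -cp "lib/*" -d . TairClientTest.java

# run it; the class is declared in package tair.client
java -cp ".:lib/*" tair.client.TairClientTest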
First consult the section on the Java client API in the Tair user guide; a straightforward example is given below.
package tair.client;

import java.util.ArrayList;
import java.util.List;

import com.taobao.tair.DataEntry;
import com.taobao.tair.Result;
import com.taobao.tair.ResultCode;
import com.taobao.tair.impl.DefaultTairManager;

/**
 * @author WangJianmin
 * @date 2014-7-9
 * @description Java-client test application for tair.
 *
 */
public class TairClientTest {

    public static void main(String[] args) {

        // build the config server list
        List<String> confServers = new ArrayList<String>();
        confServers.add("10.10.7.144:51980");
        // confServers.add("10.10.7.144:51980"); // optional slave config server

        // create the client instance
        DefaultTairManager tairManager = new DefaultTairManager();
        tairManager.setConfigServerList(confServers);

        // set the group name
        tairManager.setGroupName("group_1");
        // initialize the client
        tairManager.init();

        // put 10 items
        for (int i = 0; i < 10; i++) {
            // arguments: namespace, key, value, version, expire time
            ResultCode result = tairManager.put(0, "k" + i, "v" + i, 0, 10);
            System.out.println("put k" + i + ":" + result.isSuccess());
            if (!result.isSuccess())
                break;
        }

        // get one item
        // arguments: namespace, key
        Result<DataEntry> result = tairManager.get(0, "k3");
        System.out.println("get:" + result.isSuccess());
        if (result.isSuccess()) {
            DataEntry entry = result.getValue();
            if (entry != null) {
                // the data exists
                System.out.println("value is " + entry.getValue().toString());
            } else {
                // the data does not exist
                System.out.println("this key doesn't exist.");
            }
        } else {
            // error handling
            System.out.println(result.getRc().getMessage());
        }
    }
}
Run results:
log4j:WARN No appenders could be found for logger (com.taobao.tair.impl.ConfigServer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
put k0:true
put k1:true
put k2:true
put k3:true
put k4:true
put k5:true
put k6:true
put k7:true
put k8:true
put k9:true
get:true
value is v3
Note: if the test is not run on the config server or data server machine itself, make sure the test machine can communicate with both of them (i.e. they can ping each other); otherwise you may get an error like the following:

Exception in thread "main" java.lang.RuntimeException: init config failed
	at com.taobao.tair.impl.DefaultTairManager.init(DefaultTairManager.java:80)
	at tair.client.TairClientTest.main(TairClientTest.java:27)
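A simple sketch of the connectivity check suggested above (the addresses and ports are the ones used in this deployment; telnet is just one convenient way to probe a TCP port):

# check basic reachability from the test machine
ping -c 3 10.10.7.144     # config server
ping -c 3 10.10.7.146     # data server

# check that the config server port actually accepts TCP connections
telnet 10.10.7.144 51980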
I have packaged up the example program, the jars it needs, and the Makefile (I tested on Linux without running the program from Eclipse); it can be downloaded from the link in the original post.
5. References

2. Tair User Guide