Apache HBase is a database that runs on a Hadoop cluster. HBase is not a traditional RDBMS, as it relaxes the ACID (Atomicity, Consistency, Isolation, and Durability) properties of traditional RDBMS systems in order to achieve much greater scalability. Data stored in HBase also does not need to fit into a rigid schema like with an RDBMS, making it ideal for storing unstructured or semi-structured data.
The MapR Converged Data Platform supports HBase, but also supports MapR-DB, a high-performance, enterprise-grade NoSQL DBMS that includes the HBase API to run HBase applications. For this blog, I’ll specifically refer to HBase, but understand that many of the advantages of using HBase in your data architecture also apply to MapR-DB. MapR built MapR-DB to take HBase applications to the next level, so if the thought of higher-powered, more reliable HBase deployments sounds appealing to you, take a look at some of the MapR-DB content here.
HBase allows you to build big data applications for scaling, but with this comes some different ways of implementing applications compared to developing with traditional relational databases. In this blog post, I will provide an overview of HBase, touch on the limitations of relational databases, and dive into the specifics of the HBase data model.
Relational Databases vs. HBase – Data Storage Model
Why do we need NoSQL/HBase? First, let’s look at the pros of relational databases before we discuss their limitations:
Relational databases were the standard for years, so what changed? With more and more data came the need to scale. One way to scale is vertically with a bigger server, but this can get expensive, and there are limits as your size increases.
Relational Databases vs. HBase - Scaling
What changed to bring on NoSQL?
An alternative to vertical scaling is to scale horizontally with a cluster of machines, which can use commodity hardware. This can be cheaper and more reliable. To horizontally partition or shard an RDBMS, data is distributed on the basis of rows, with some rows residing on one machine and other rows residing on other machines. However, it’s complicated to partition or shard a relational database, which was not designed to do this automatically. In addition, you lose querying, transactions, and consistency controls across shards. Relational databases were designed for a single node; they were not designed to be run on clusters.
Limitations of a Relational Model
Database normalization eliminates redundant data, which makes storage efficient. However, a normalized schema causes joins for queries, in order to bring the data back together again. While HBase does not support relationships and joins, data that is accessed together is stored together so it avoids the limitations associated with a relational model. See the difference in data storage models in the chart below:
Relational databases vs. HBase - data storage model
HBase Designed for Distribution, Scale, and Speed
HBase is designed to scale because data that is accessed together is stored together. Grouping the data by key is central to running on a cluster. In horizontal partitioning, or sharding, the key range is used to distribute different data across multiple servers, and each server serves a subset of the data. Because distributed data that is accessed together is stored together, access remains fast as the cluster grows. HBase is an implementation of the Bigtable storage architecture, a distributed storage system developed by Google to manage structured data and designed to scale to a very large size.
HBase is referred to as a column family-oriented data store. It’s also row-oriented: each row is indexed by a key that you can use for lookup (for example, lookup a customer with the ID of 1234). Each column family groups like data (customer address, order) within rows. Think of a row as the join of all values in all column families.
HBase is a column family-oriented database
HBase is also considered a distributed database. Grouping the data by key is central to running on a cluster and sharding. The key acts as the atomic unit for updates. Sharding distributes different data across multiple servers, and each server is the source for a subset of data.
HBase is a distributed database
HBase Data Model
Data stored in HBase is located by its “rowkey.” This is like a primary key from a relational database. Records in HBase are stored in sorted order, according to rowkey. This is a fundamental tenet of HBase and is also a critical semantic used in HBase schema design.
HBase data model – row keys
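To make the row-key semantics concrete, here is a minimal sketch using the standard HBase Java client. The table name, column family, and row keys are hypothetical; the point is that a single row is fetched by its row key, and because rows are stored sorted by row key, a scan over a key range streams back a contiguous slice of the table.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class RowKeyLookup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table customers = conn.getTable(TableName.valueOf("customer"))) {

      // Point lookup: fetch the single row whose row key is "1234".
      Get get = new Get(Bytes.toBytes("1234"));
      Result row = customers.get(get);
      byte[] city = row.getValue(Bytes.toBytes("addr"), Bytes.toBytes("city"));
      System.out.println("city = " + Bytes.toString(city));

      // Range scan: rows are stored sorted by row key, so a scan over
      // [start, stop) returns a contiguous, ordered slice of the table.
      Scan scan = new Scan()
          .withStartRow(Bytes.toBytes("1000"))
          .withStopRow(Bytes.toBytes("2000"));
      try (ResultScanner results = customers.getScanner(scan)) {
        for (Result r : results) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```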
Tables are divided into sequences of rows, by key range, called regions. These regions are then assigned to the data nodes in the cluster called “RegionServers.” This scales read and write capacity by spreading regions across the cluster. This is done automatically and is how HBase was designed for horizontal sharding.
Tables are split into regions = contiguous keys
The image below shows how column families are mapped to storage files. Column families are stored in separate files, which can be accessed separately.
The data is stored in HBase table cells. The entire cell, with its added structural information, is called a KeyValue. The row key, column family name, column name, timestamp, and value are stored for every cell for which you have set a value. The key consists of the row key, column family name, column name, and timestamp.
Logically, cells are stored in a table format, but physically, rows are stored as linear sets of cells containing all the key value information inside them.
In the image below, the top left shows the logical layout of the data, while the lower right section shows the physical storage in files. Column families are stored in separate files. The entire cell, the row key, column family name, column name, timestamp, and value are stored for every cell for which you have set a value.
Logical data model vs. physical data storage
As mentioned before, the complete coordinates to a cell's value are: Table:Row:Family:Column:Timestamp ➔ Value. HBase tables are sparsely populated; if data doesn’t exist at a column, it’s simply not stored. Table cells are versioned, uninterpreted arrays of bytes. You can use the timestamp or set up your own versioning system. For each row:family:column coordinate, there can be multiple versions of the value.
Sparse data with cell versions
Versioning is built in. A put is both an insert (create) and an update, and each one gets its own version. A delete gets a tombstone marker, which prevents the data from being returned in queries. Get requests return specific version(s) based on parameters; if you do not specify any parameters, the most recent version is returned. You can configure how many versions you want to keep, and this is done per column family. The default is to keep up to three versions. When the maximum number of versions is exceeded, the extra versions are eventually removed.
Versioned data
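Here is a hedged sketch of how versioning surfaces in the Java client: the column family's maximum version count is set when the table is created (three is the default), each put writes a new timestamped version of the cell, and a get can ask for more than just the latest version. Table, family, and column names are made up for illustration.

```java
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class CellVersions {
  public static void main(String[] args) throws Exception {
    byte[] cf = Bytes.toBytes("d");
    TableName name = TableName.valueOf("profile");

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {

      // Keep up to 5 versions per cell in this column family (the default is 3).
      admin.createTable(TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(cf)
              .setMaxVersions(5)
              .build())
          .build());

      try (Table table = conn.getTable(name)) {
        // Each put creates a new timestamped version of the same cell.
        table.put(new Put(Bytes.toBytes("user1"))
            .addColumn(cf, Bytes.toBytes("email"), Bytes.toBytes("old@example.com")));
        table.put(new Put(Bytes.toBytes("user1"))
            .addColumn(cf, Bytes.toBytes("email"), Bytes.toBytes("new@example.com")));

        // By default a get returns only the newest version; ask for up to 5.
        Get get = new Get(Bytes.toBytes("user1")).readVersions(5);
        Result result = table.get(get);
        List<Cell> versions = result.getColumnCells(cf, Bytes.toBytes("email"));
        for (Cell cell : versions) {
          System.out.println(cell.getTimestamp() + " -> "
              + Bytes.toString(CellUtil.cloneValue(cell)));
        }
      }
    }
  }
}
```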
In this blog post, you got an overview of HBase (and implicitly MapR-DB) and learned about the HBase/MapR-DB data model. Stay tuned for the next blog post, where I’ll take a deep dive into the details of the HBase architecture. In the third and final blog post in this series, we’ll take a look at schema design guidelines.
Want to learn more?
Physically, an HBase deployment consists of three types of server processes: Region Servers, the Master server, and ZooKeeper.
Region Servers handle the actual reads and writes of data. When accessing data, clients communicate directly with the HBase Region Servers.
The Master server manages region assignment and DDL operations (creating and deleting tables).
ZooKeeper maintains and records the state of the whole HBase cluster.
All HBase data is stored in HDFS, and each Region Server keeps its data in HDFS. If a server is both a Region Server and an HDFS DataNode, one replica of that Region Server's data is stored on the local HDFS DataNode, which speeds up access.
However, if a region has just been moved to a new Region Server, that server has no local replica of the region's data; only after HBase runs a compaction is a copy written to the local DataNode.
An HBase table is split by row key range into multiple regions, and a region holds all the rows within its key range. A Region Server manages multiple regions and handles all reads and writes for the regions it hosts; a single Region Server can manage up to about 1,000 regions.
The HBase Master is mainly responsible for assigning regions and handling DDL operations (such as creating and deleting tables).
Functions of the HBase Master:
ZooKeeper is a distributed, decentralized metadata store. ZooKeeper detects and records the state of the servers in the HBase cluster; if it finds that a server has gone down, it notifies the HBase Master. A production ZooKeeper deployment needs at least three servers to satisfy the quorum requirements of its consensus protocol.
ZooKeeper maintains the cluster membership list: it detects and tracks which servers are online and which have failed. Region Servers and the active and standby Master nodes connect to ZooKeeper and maintain a session.
That session must send periodic heartbeats to tell ZooKeeper the server is still alive.
ZooKeeper has the concept of an ephemeral node: a session creates an ephemeral node in ZooKeeper, and if the session is broken, the ephemeral node is deleted automatically.
Every Region Server connects to ZooKeeper and creates an ephemeral node in its session. The HBase Master watches these ephemeral nodes, which lets it discover newly joined Region Servers and detect Region Servers that have failed.
For high availability, HBase runs multiple Master nodes, and each of them also registers an ephemeral node with ZooKeeper. ZooKeeper makes the first Master to register successfully the active Master, while the other Master nodes remain inactive.
If ZooKeeper does not receive a heartbeat from the active Master within the allotted time, the session times out and the corresponding ephemeral node is deleted automatically. A previously inactive Master is notified, becomes active, and immediately takes over.
Likewise, if ZooKeeper does not receive a Region Server's heartbeats in time, the session expires and its ephemeral node is deleted. The HBase Master learns that the Region Server is down and starts the data recovery procedure.
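The ephemeral-node mechanism described above is not something you normally code against when using HBase, but a small standalone ZooKeeper sketch makes the idea concrete: a client registers an ephemeral znode inside its session, and the znode disappears automatically when the session dies, which is exactly how the Master notices a failed Region Server. The host, path, and data below are hypothetical.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeDemo {
  public static void main(String[] args) throws Exception {
    // Connect to a (hypothetical) ZooKeeper ensemble with a 5-second session timeout.
    ZooKeeper zk = new ZooKeeper("zk1.example.com:2181", 5000, event -> {});

    // Create an ephemeral znode tied to this session (the parent path /demo/rs is
    // assumed to already exist). As long as this client keeps heartbeating, the node
    // exists; if the session expires, ZooKeeper deletes the node and any watchers
    // (e.g. a master process) are notified.
    zk.create("/demo/rs/regionserver-1",
              "alive".getBytes(),
              ZooDefs.Ids.OPEN_ACL_UNSAFE,
              CreateMode.EPHEMERAL);

    Thread.sleep(60_000);  // simulate doing work while the session stays alive
    zk.close();            // closing the session removes the ephemeral node
  }
}
```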
HBase stores the location of each region in a special table called the META table.
ZooKeeper stores the location of this META table.
The HBase access flow:
The client caches the META table location and the row key location information, so it does not have to go to ZooKeeper on every access.
If a region is moved to another server (for example, because its Region Server went down), the HBase client's request fails, its cache is invalidated, and it goes back to ZooKeeper to look up the latest META table location and refresh its cache.
The META table holds the list of all regions.
The META table is stored in a B-tree-like structure.
The structure of the META table is as follows:
- Key: region start row key, region id
- Values: Region Server
Translator's note: in Google's Bigtable paper, Bigtable uses a multi-level meta table; HBase's META table has only two levels.
A Region Server runs on an HDFS DataNode and has the following four components:
When an HBase client issues a Put request, the first step is to write the data to the write-ahead log (WAL):
After the data has been written to the WAL and stored in the MemStore, the write is acknowledged to the client as successful.
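From the client's side this WAL-then-MemStore write path is invisible: a put simply returns once the Region Server has appended to the WAL and updated the MemStore. The hedged sketch below, with hypothetical table and family names, shows a put and the Durability setting that controls how the WAL is used (skipping the WAL trades safety for speed).

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class WalWrite {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("events"))) {

      Put put = new Put(Bytes.toBytes("row-42"))
          .addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"), Bytes.toBytes("hello"));

      // SYNC_WAL (the usual default) means the edit is written to the WAL before
      // the put is acknowledged; SKIP_WAL would acknowledge after the MemStore
      // update only, risking data loss if the Region Server crashes.
      put.setDurability(Durability.SYNC_WAL);

      table.put(put);  // returns after WAL append + MemStore update
    }
  }
}
```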
The MemStore holds key-value pairs in memory, sorted by key; there is one MemStore per column family. Likewise, inside an HFile all key-value pairs are stored sorted by key.
Translator's note: "flush" here means moving buffered data from memory to disk, like flushing accumulated water down a drain; in other words, the cached data is written out to disk in one go.
When the MemStore has accumulated enough data, the Region Server writes its contents to HDFS as an HFile. Each column family corresponds to multiple HFiles, and the HFiles hold the actual stored data.
These HFiles are the files flushed to HDFS whenever a MemStore fills up. Note that HBase limits the number of column families, because each column family has its own MemStore. [Translator's note: too many MemStores would consume too much memory.]
When a MemStore is flushed to disk, the system also saves the last written sequence number, so HBase knows how much data has been persisted. Each HFile records this sequence number, indicating how much data it covers and where writing should continue.
When a Region Server starts, it reads the highest sequence number found in its HFiles, and new writes continue numbering from there.
An HFile stores sorted key-value pairs. When a MemStore fills up, its entire contents are written to HDFS, forming a new HFile. Writing these large files is sequential, which avoids the head seeks of a mechanical disk, so it is very fast.
An HFile contains a multi-layered index, so a query does not have to scan the whole file; the multi-level index locates the data quickly (it works much like a B+ tree).
The trailer pointer at the very end of the HFile points to the meta blocks, the Bloom filter section, and the time-range section. Checking the Bloom filter quickly determines whether a row key is present in the HFile, and the time-range information lets reads skip HFiles outside the requested time range.
Translator's note: Bloom filters are widely used in search and file storage; for the algorithm, see https://china.googleblog.com/2007/07/bloom-filter_7469.html
When an HFile is opened, its index is automatically cached in the block cache, so subsequent lookups need only a single disk seek.
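Bloom filters and the block cache are configured per column family. Below is a hedged sketch of what that looks like with the HBase 2.x admin API, using made-up table and family names; row-level Bloom filters and block caching are in fact the defaults in recent HBase versions, so this simply makes the settings explicit.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadPathTuning {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {

      ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("d"))
          // Row-level Bloom filter: lets a read skip HFiles that cannot contain the row key.
          .setBloomFilterType(BloomType.ROW)
          // Cache data and index blocks in the block cache on read.
          .setBlockCacheEnabled(true)
          .build();

      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("readtuned"))
          .setColumnFamily(cf)
          .build());
    }
  }
}
```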
Notice that the data for a single HBase row can live in several places: cells that have already been persisted are in HFiles, recently written cells are in the MemStore, and recently read cells are in the block cache. So reading an HBase row mixes reads from the block cache, the MemStore, and the HFiles.
HBase automatically picks smaller HFiles and merges them into larger ones. This process is called minor compaction. Minor compaction reduces the number of HFiles by merging small ones.
HFiles are merged with a merge-sort algorithm.
Translator's note: fewer HFiles means better HBase read performance.
A major compaction merge-sorts all of a region's HFiles into a single large HFile. This improves read performance, but because it rewrites every HFile it consumes a great deal of disk I/O and network bandwidth. This is known as write amplification.
Major compactions can be scheduled to run automatically, but because of write amplification they are usually run weekly or only in the early morning. In addition, if a major compaction finds that some of the data a Region Server serves is not on its local HDFS DataNode, it also moves a copy of that data to the local DataNode while merging the files.
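For completeness, here is a hedged sketch of how an operator might trigger a flush and compactions on demand from the Java Admin API (the table name is made up, and the calls submit asynchronous requests); in practice the hbase shell's flush and major_compact commands, or the periodic scheduler, are more commonly used.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CompactionDemo {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("events");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {

      admin.flush(table);         // force MemStores to be written out as HFiles
      admin.compact(table);       // request a minor compaction: merge some smaller HFiles
      admin.majorCompact(table);  // request a major compaction: rewrite each store into one HFile
    }
  }
}
```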
A quick review of regions:
Initially, each table has a single region. When a region grows too large, it splits into two child regions. The two child regions, each holding half of the original region's data, are still served by the same Region Server, which reports the split to the HBase Master.
If there are other Region Servers in the cluster, the Master tends to balance the load, so it may assign one of the new regions to another Region Server.
Right after a split, both child regions live on the original Region Server, but for load-balancing reasons the Master may move a new region to another server. The new Region Server then has no local copy of the data on its DataNode, and every operation reads from remote HDFS DataNodes, until that Region Server runs a major compaction and a copy of the data lands on its local DataNode.
Translator's note: HFiles and WALs are both stored in HDFS. "Storing a replica locally" means that because HDFS is locality-aware, if the client writing a file happens to be an HDFS DataNode, HDFS places one of the replicas on that node. So of the three replicas of a Region Server's data, one ends up on the local server, which shortens the access path.
For details, see http://www.larsgeorge.com/2010/05/hbase-file-locality-in-hdfs.html, which explains how HDFS achieves this locality. In other words, if a Region Server is not co-located with an HDFS DataNode, this local storage cannot happen.
All reads and writes go through the primary replica's node. HDFS automatically replicates all WAL and HFile blocks to other nodes; HBase relies on HDFS for data safety. When a file is written to HDFS, one copy is stored on the local node and two more are stored on other nodes.
The WAL and the HFiles are stored in HDFS, which keeps them durable, but the data in the MemStore lives only in memory. If the system crashes and restarts, how does HBase recover the MemStore data?
Translator's note: as the figure above shows, the MemStore data is in memory and has no replicas.
When a Region Server crashes, the regions it managed can no longer be served. HBase detects the failure and starts a recovery procedure to bring those regions back online.
ZooKeeper notices that the Region Server's heartbeats have stopped, declares it dead, and notifies the Master. The HBase Master then reassigns the crashed server's regions to other Region Servers, and HBase recovers the MemStore contents from the write-ahead log (WAL).
The HBase Master knows which new Region Servers the old regions were reassigned to. The Master splits the crashed Region Server's WAL into pieces, and each Region Server taking part in the recovery replays its piece of the WAL to rebuild the lost MemStores.
The write-ahead log (WAL) records every HBase operation; each entry represents a Put or a Delete. Entries are ordered by time, with the oldest operations at the head of the file and the newest at the end.
How is data that was in the MemStore but not yet written to an HFile recovered? By replaying the WAL: the operations in the WAL are applied in order, from start to finish, rebuilding the MemStore data. Finally, the MemStore is flushed to an HFile, completing the recovery.
Strong consistency model
- When a write returns, all readers see the same value
Automatic scaling
- Regions split automatically when the data grows too large
- Uses HDFS to distribute and replicate the data
Built-in recovery
- Write-ahead log (WAL)
Hadoop ecosystem integration
- Run MapReduce over data in HBase
In this blog post, I’ll discuss how HBase schema is different from traditional relational schema modeling, and I’ll also provide you with some guidelines for proper HBase schema design.
Relational vs. HBase Schemas
There is no one-to-one mapping from relational databases to HBase. In relational design, the focus and effort is around describing the entity and its interaction with other entities; the queries and indexes are designed later.
With HBase, you have a “query-first” schema design; all possible queries should be identified first, and the schema model designed accordingly. You should design your HBase schema to take advantage of the strengths of HBase. Think about your access patterns, and design your schema so that the data that is read together is stored together. Remember that HBase is designed for clustering.
Normalization
In a relational database, you normalize the schema to eliminate redundancy by putting repeating information into a table of its own. This has the following benefits:
However, this causes joins. Since data has to be retrieved from more tables, queries can take more time to complete.
In the example below, we have an order table that has a one-to-many relationship with an order items table. The order items table has a foreign key with the id of the corresponding order.
De-normalization
In a de-normalized datastore, you store in one table what would be multiple indexes in a relational world. De-normalization can be thought of as a replacement for joins. Often with HBase, you de-normalize or duplicate data so that data is accessed and stored together.
Parent-Child Relationship–Nested Entity
Here is an example of de-normalization in HBase: if your tables exist in a one-to-many relationship, it’s possible to model them in HBase as a single row. In the example below, the order and the related line items are stored together and can be read together with a get on the row key. This makes the reads a lot faster than joining tables together.
The row key corresponds to the parent entity ID, the OrderId. There is one column family for the order data and one column family for the order items. The order items are nested: the order item IDs are put into the column qualifiers, and any non-identifying attributes are put into the value.
This kind of schema design is appropriate when the only way you get at the child entities is via the parent entity.
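Here is a hedged sketch of the nested-entity pattern with the Java client: one row per order, an "order" family for the parent attributes, and an "items" family whose column qualifiers are the order-item IDs. All table, family, and value names are illustrative.

```java
import java.util.Map;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class NestedOrder {
  public static void main(String[] args) throws Exception {
    byte[] ORDER = Bytes.toBytes("order");
    byte[] ITEMS = Bytes.toBytes("items");

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table orders = conn.getTable(TableName.valueOf("orders"))) {

      // Row key = OrderId; parent attributes and child items live in the same row.
      Put put = new Put(Bytes.toBytes("ORDER-1001"))
          .addColumn(ORDER, Bytes.toBytes("date"), Bytes.toBytes("2016-03-01"))
          .addColumn(ORDER, Bytes.toBytes("total"), Bytes.toBytes("59.98"))
          // Order-item IDs become column qualifiers; the value holds the item's attributes.
          .addColumn(ITEMS, Bytes.toBytes("ITEM-1"), Bytes.toBytes("sku=123,qty=1,price=19.99"))
          .addColumn(ITEMS, Bytes.toBytes("ITEM-2"), Bytes.toBytes("sku=456,qty=2,price=39.99"));
      orders.put(put);

      // One get on the row key returns the order and all of its line items together.
      Result row = orders.get(new Get(Bytes.toBytes("ORDER-1001")));
      for (Map.Entry<byte[], byte[]> item : row.getFamilyMap(ITEMS).entrySet()) {
        System.out.println(Bytes.toString(item.getKey()) + " -> "
            + Bytes.toString(item.getValue()));
      }
    }
  }
}
```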
Many-to-Many Relationship in an RDBMS
Here is an example of a many-to-many relationship in a relational database. These are the query requirements:
Many-to-Many Relationship in HBase
The queries that we are interested in are:
For an entity table, it is pretty common to have one column family storing all the entity attributes, and column families to store the links to other entities.
The entity tables are as shown below:
Generic Data, Event Data, and Entity-Attribute-Value
Generic data that is schemaless is often expressed as name value or entity attribute value. In a relational database, this is complicated to represent. A conventional relational table consists of attribute columns that are relevant for every row in the table, because every row represents an instance of a similar object. A different set of attributes represents a different type of object, and thus belongs in a different table. The advantage of HBase is that you can define columns on the fly, put attribute names in column qualifiers, and group data by column families.
Here is an example of clinical patient event data. The row key is the patient ID plus a timestamp. The variable event type is put in the column qualifier, and the event measurement is put in the column value. OpenTSDB, which stores variable system monitoring data this way, is an example of this pattern.
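A hedged sketch of this entity-attribute-value pattern follows, with hypothetical table, family, and key formats: the row key concatenates the patient ID and a timestamp, the event type goes in the column qualifier, and the measurement goes in the value; a prefix scan then pulls back all events for one patient in time order.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class PatientEvents {
  public static void main(String[] args) throws Exception {
    byte[] EVENT = Bytes.toBytes("e");

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table events = conn.getTable(TableName.valueOf("patient_events"))) {

      // Row key = patientId + timestamp, so one patient's events cluster together
      // and sort chronologically within that patient.
      String rowKey = "PATIENT-0042" + "|" + "20160301T081500";
      Put put = new Put(Bytes.toBytes(rowKey))
          // The variable event type is the column qualifier; the measurement is the value.
          .addColumn(EVENT, Bytes.toBytes("heart_rate"), Bytes.toBytes("72"))
          .addColumn(EVENT, Bytes.toBytes("blood_pressure"), Bytes.toBytes("120/80"));
      events.put(put);

      // All events for one patient: scan the patient's row-key prefix.
      Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("PATIENT-0042|"));
      try (ResultScanner rs = events.getScanner(scan)) {
        for (Result r : rs) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```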
Self-Join Relationship – HBase
A self-join is a relationship in which both match fields are defined in the same table.
Consider a schema for Twitter relationships, where the queries are: which users does userX follow, and which users follow userX? Here’s a possible solution: the user IDs are put in a composite row key with the relationship type as a separator. For example, Carol follows Steve Jobs and Carol is followed by BillyBob. This allows row key scans on carol:follows or carol:followedby.
Below is the example Twitter table:
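In code, a hedged sketch of the same pattern with the Java client (table, family, and user IDs are illustrative): the relationship type is embedded in the composite row key, so a prefix scan answers "who does carol follow" and "who follows carol".

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class FollowerGraph {
  public static void main(String[] args) throws Exception {
    byte[] F = Bytes.toBytes("f");

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table follows = conn.getTable(TableName.valueOf("follows"))) {

      // One row per relationship; the relationship type is part of the row key.
      follows.put(new Put(Bytes.toBytes("carol:follows:stevejobs"))
          .addColumn(F, Bytes.toBytes("since"), Bytes.toBytes("2016-01-15")));
      follows.put(new Put(Bytes.toBytes("carol:followedby:billybob"))
          .addColumn(F, Bytes.toBytes("since"), Bytes.toBytes("2016-02-20")));

      // "Which users does carol follow?" = prefix scan on carol:follows:
      Scan whoCarolFollows = new Scan().setRowPrefixFilter(Bytes.toBytes("carol:follows:"));
      try (ResultScanner rs = follows.getScanner(whoCarolFollows)) {
        for (Result r : rs) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}
```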
Tree, Graph Data
Here is an example of an adjacency list or graph, using a separate column for each parent and child:
Each row represents one node, and the row key is equal to the node ID. There is a column family p for parents and a column family c for children. The column qualifiers are equal to the parent or child node IDs, and the value is equal to the node type. This makes it possible to quickly find the parent or child nodes from the row key.
You can see there are multiple ways to represent trees; the best way depends on your queries.
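A hedged sketch of the adjacency-list pattern above, with hypothetical names: the row key is the node ID, the p family holds parent links and the c family holds child links, with the linked node IDs as column qualifiers and the node type as the value.

```java
import java.util.NavigableMap;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class GraphNodes {
  public static void main(String[] args) throws Exception {
    byte[] PARENTS = Bytes.toBytes("p");
    byte[] CHILDREN = Bytes.toBytes("c");

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table graph = conn.getTable(TableName.valueOf("graph"))) {

      // Node "node-2" has parent "node-1" and children "node-3" and "node-4";
      // the value stores the node type.
      graph.put(new Put(Bytes.toBytes("node-2"))
          .addColumn(PARENTS, Bytes.toBytes("node-1"), Bytes.toBytes("folder"))
          .addColumn(CHILDREN, Bytes.toBytes("node-3"), Bytes.toBytes("file"))
          .addColumn(CHILDREN, Bytes.toBytes("node-4"), Bytes.toBytes("file")));

      // One get on the node's row key returns both its parents and its children.
      Result node = graph.get(new Get(Bytes.toBytes("node-2")));
      NavigableMap<byte[], byte[]> children = node.getFamilyMap(CHILDREN);
      for (byte[] childId : children.keySet()) {
        System.out.println("child: " + Bytes.toString(childId));
      }
    }
  }
}
```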
Inheritance Mapping
In this online store example, the type of product is a prefix in the row key. Some of the columns are different, and may be empty depending on the type of product. This makes it possible to model different product types in the same table and to scan easily by product type.
Data Access Patterns
Use Cases: Large-scale offline ETL analytics and generating derived data
In analytics, data is written multiple orders of magnitude more frequently than it is read. Offline analysis can also be used to provide a snapshot for online viewing. Offline systems don’t have a low-latency requirement; that is, a response isn’t expected immediately. Offline HBase ETL data access patterns, such as MapReduce or Hive jobs, are characterized by high-latency reads and high-throughput writes.
Data Access Patterns
Use Cases: Materialized view, pre-calculated summaries
To provide fast reads for online web sites, or an online view of data from data analysis, MapReduce jobs can reorganize the data into different groups for different readers, or materialized views. Batch offline analysis could also be used to provide a snapshot for online views. This means high throughput for the batch offline writes and low latency for the online reads.
Examples include:
• Generating derived data, duplicating data for reads in HBase schemas, and delayed secondary indexes
Schema Design Exploration:
Designing for reads means aggressively de-normalizing data so that the data that is read together is stored together.
Data Access Patterns
Lambda Architecture
The Lambda architecture solves the problem of computing arbitrary functions on arbitrary data in real time by decomposing the problem into three layers: the batch layer, the serving layer, and the speed layer.
MapReduce jobs are used to create artifacts useful to consumers at scale. Incremental updates are handled in real time by processing updates to HBase in a Storm cluster, and are applied to the artifacts produced by MapReduce jobs.
The batch layer precomputes the batch views. In the batch view, you read the results from a precomputed view. The precomputed view is indexed so that it can be accessed quickly with random reads.
The serving layer indexes the batch view and loads it up so it can be efficiently queried to get particular values out of the view. A serving layer database only requires batch updates and random reads. The serving layer updates whenever the batch layer finishes precomputing a batch view.
You can do stream-based processing with Storm and batch processing with Hadoop. The speed layer only produces views on recent data; it covers functions computed on the last few hours of data not yet covered by the batch layer. To achieve the lowest latencies possible, the speed layer doesn’t look at all the new data at once. Instead, it updates the real-time views as it receives new data, rather than recomputing them the way the batch layer does. In the speed layer, HBase gives Storm the ability to continuously increment the real-time views.
How does Storm know to process new data in HBase? A "needs work" flag is set. Processing components scan for these notifications and process them as they enter the system.
MapReduce Execution and Data Flow
The flow of data in a MapReduce execution is as follows:
In this blog post, you learned how HBase schema is different from traditional relational schema modeling, and you also got some guidelines for proper HBase schema design. If you have any questions about this blog post, please ask them in the comments section below.
Want to learn more? Take a look at these resources that I used to prepare this blog post:
Here are some additional resources for helping you get started with HBase: