A recap of last installment's ✈ tour route: putAll() --> putMapEntries() --> tableSizeFor() --> resize() --> hash() --> putVal()
This installment continues the journey: putVal() --> putTreeVal() --> find() --> balanceInsertion() --> rotateLeft()/rotateRight() --> treeifyBin()
```java
// To find the right slot for the new node, the source performs a cascade of comparisons.
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                               int h, K k, V v) {
    Class<?> kc = null;
    boolean searched = false;
    TreeNode<K,V> root = (parent != null) ? root() : this; // get the root; traversal starts there
    for (TreeNode<K,V> p = root;;) {
        int dir, ph; K pk;
        if ((ph = p.hash) > h)
            dir = -1;                                      // go left
        else if (ph < h)
            dir = 1;                                       // go right
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            return p;                                      // equal key: return the existing node
        else if ((kc == null &&
                  (kc = comparableClassFor(k)) == null) ||
                 (dir = compareComparables(kc, k, pk)) == 0) {
            if (!searched) {
                TreeNode<K,V> q, ch;
                searched = true;
                if (((ch = p.left) != null &&
                     (q = ch.find(h, k, kc)) != null) ||
                    ((ch = p.right) != null &&
                     (q = ch.find(h, k, kc)) != null))
                    return q;
            }
            dir = tieBreakOrder(k, pk);
        }
        TreeNode<K,V> xp = p;
        if ((p = (dir <= 0) ? p.left : p.right) == null) {
            Node<K,V> xpn = xp.next;
            TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
            if (dir <= 0)
                xp.left = x;
            else
                xp.right = x;
            xp.next = x;
            x.parent = x.prev = xp;
            if (xpn != null)
                ((TreeNode<K,V>)xpn).prev = x;
            moveRootToFront(tab, balanceInsertion(root, x));
            return null;
        }
    }
}
```
The current node's hash (ph) is compared with the inserted node's hash (h):
if ph > h (dir = -1), the new node belongs in the left subtree;
if ph < h (dir = 1), in the right subtree;
otherwise the hashes are equal, and the keys themselves are compared next.
"(kc = comparableClassFor(k)) == null" means the key's class is not comparable to itself (class C does not implement Comparable<C>); "(dir = compareComparables(kc, k, pk)) == 0" means k and pk belong to classes that cannot be compared with each other. searched is a one-shot switch: the first time this tie is hit, both subtrees of p are scanned with find() for a node equal to the one being inserted.
If all else fails, it comes down to tieBreakOrder(), whose "System.identityHashCode(a) <= System.identityHashCode(b)" compares identity hash codes (typically derived from object memory addresses). A truly last-ditch comparison.
The loop is driven by "if ((p = (dir <= 0) ? p.left : p.right) == null)" — this is where the hard-won dir finally gets used. Traversal continues until a null left/right child is reached, at which point the new node is inserted (refer to the diagram below for the insertion process).
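These collision paths can be exercised from the outside. Below is a self-contained sketch (class names are my own, not from the JDK): every key reports the same hashCode and does not implement Comparable, so once the bin treeifies, insertion must fall back to the find()/tieBreakOrder() machinery described above, and lookups must distinguish keys purely via equals().

```java
import java.util.HashMap;
import java.util.Map;

public class TieBreakDemo {
    // Hypothetical key type: all instances collide into one bucket and
    // implement no Comparable, forcing the tie-break paths in putTreeVal().
    static final class CollidingKey {
        private final int id;
        CollidingKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }        // force collisions
        @Override public boolean equals(Object o) {
            return o instanceof CollidingKey && ((CollidingKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<CollidingKey, Integer> map = new HashMap<>();
        // Enough colliding keys to pass TREEIFY_THRESHOLD (8) and, after
        // resizes, MIN_TREEIFY_CAPACITY (64), so the bin becomes a tree.
        for (int i = 0; i < 100; i++)
            map.put(new CollidingKey(i), i);
        // Every key is still retrievable despite identical hash codes.
        for (int i = 0; i < 100; i++) {
            if (map.get(new CollidingKey(i)) != i)
                throw new AssertionError("lookup failed for id " + i);
        }
        System.out.println("all colliding keys retrieved, size = " + map.size());
    }
}
```

The map stays correct; the identity-hash tie-break only decides tree shape, never lookup results, because equal keys are always detected via equals() first.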
```java
final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
    TreeNode<K,V> p = this;
    do {
        int ph, dir; K pk;
        TreeNode<K,V> pl = p.left, pr = p.right, q;
        if ((ph = p.hash) > h)
            p = pl;
        else if (ph < h)
            p = pr;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            return p;
        else if (pl == null)
            p = pr;
        else if (pr == null)
            p = pl;
        else if ((kc != null ||
                  (kc = comparableClassFor(k)) != null) &&
                 (dir = compareComparables(kc, k, pk)) != 0)
            p = (dir < 0) ? pl : pr;
        else if ((q = pr.find(h, k, kc)) != null)
            return q;
        else
            p = pl;
    } while (p != null);
    return null;
}
```
Notice that once putTreeVal makes sense, find becomes much easier to understand by analogy.
```java
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                            TreeNode<K,V> x) {
    x.red = true;                          // the new node x starts out red
    for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
        if ((xp = x.parent) == null) {     // x is the root
            x.red = false;
            return x;
        }
        // x's parent is black, or x's parent is the (black) root
        else if (!xp.red || (xpp = xp.parent) == null)
            return root;
        if (xp == (xppl = xpp.left)) {
            // ① uncle is red: recolor and move up
            if ((xppr = xpp.right) != null && xppr.red) {
                xppr.red = false;
                xp.red = false;
                xpp.red = true;
                x = xpp;
            }
            // ② uncle is black or null: rotate
            else {
                if (x == xp.right) {
                    root = rotateLeft(root, x = xp);
                    xpp = (xp = x.parent) == null ? null : xp.parent;
                }
                if (xp != null) {
                    xp.red = false;
                    if (xpp != null) {
                        xpp.red = true;
                        root = rotateRight(root, xpp);
                    }
                }
            }
        }
        else {                             // mirror image: parent is the right child
            if (xppl != null && xppl.red) {
                xppl.red = false;
                xp.red = false;
                xpp.red = true;
                x = xpp;
            }
            else {
                if (x == xp.left) {
                    root = rotateRight(root, x = xp);
                    xpp = (xp = x.parent) == null ? null : xp.parent;
                }
                if (xp != null) {
                    xp.red = false;
                    if (xpp != null) {
                        xpp.red = true;
                        root = rotateLeft(root, xpp);
                    }
                }
            }
        }
    }
}
```
Inserting a new node may break the red-black tree's existing "balance"; the job of balanceInsertion() is to restore it and keep lookups efficient. The red-black "balance" invariants are:
① every node is either red or black;
② the root is black, leaves (null) are black, and both children of a red node are black;
③ every path from a given node down to its leaves contains the same number of black nodes.
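These three invariants can be checked mechanically. Below is a minimal validator sketch (the Node class and method names are illustrative, not HashMap.TreeNode):

```java
public class RbCheck {
    // A bare-bones colored tree node for illustration only.
    static final class Node {
        int key; boolean red; Node left, right;
        Node(int key, boolean red, Node left, Node right) {
            this.key = key; this.red = red; this.left = left; this.right = right;
        }
    }

    // Returns the black-height of the subtree if it satisfies the red-black
    // properties, or -1 if any property is violated.
    static int blackHeight(Node n) {
        if (n == null) return 1;                           // null leaves count as black
        if (n.red &&                                       // a red node's children must be black
            ((n.left != null && n.left.red) || (n.right != null && n.right.red)))
            return -1;
        int lh = blackHeight(n.left), rh = blackHeight(n.right);
        if (lh == -1 || rh == -1 || lh != rh) return -1;   // equal black count on all paths
        return lh + (n.red ? 0 : 1);
    }

    static boolean isRedBlack(Node root) {
        return root != null && !root.red && blackHeight(root) != -1;  // root must be black
    }

    public static void main(String[] args) {
        // Valid:    4B          Invalid: red 2 has a red child 1.
        //          /  \
        //        2R    6R
        Node ok = new Node(4, false, new Node(2, true, null, null),
                                     new Node(6, true, null, null));
        Node bad = new Node(4, false,
                            new Node(2, true, new Node(1, true, null, null), null),
                            null);
        System.out.println(isRedBlack(ok));   // true
        System.out.println(isRedBlack(bad));  // false
    }
}
```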
Below is a simple (if crude) set of diagrams for the "(xp == (xppl = xpp.left))" branch (① and ② correspond to the comments in the source above).
In figure ②, the checked trees are all valid red-black trees; the boxed area on the right of figure ② illustrates how node relationships change during left and right rotation.
```java
// Left rotation and right rotation
static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
                                      TreeNode<K,V> p) {
    TreeNode<K,V> r, pp, rl;
    if (p != null && (r = p.right) != null) {
        if ((rl = p.right = r.left) != null)
            rl.parent = p;
        if ((pp = r.parent = p.parent) == null)
            (root = r).red = false;
        else if (pp.left == p)                 // p is pp's left child
            pp.left = r;
        else
            pp.right = r;
        r.left = p;
        p.parent = r;
    }
    return root;
}

static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
                                       TreeNode<K,V> p) {
    TreeNode<K,V> l, pp, lr;
    if (p != null && (l = p.left) != null) {
        if ((lr = p.left = l.right) != null)
            lr.parent = p;
        if ((pp = l.parent = p.parent) == null)
            (root = l).red = false;
        else if (pp.right == p)
            pp.right = l;
        else
            pp.left = l;
        l.right = p;
        p.parent = l;
    }
    return root;
}
```
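To make the pointer surgery concrete, here is a stripped-down left rotation on a plain node type (names are illustrative; the red/black recoloring of the JDK version is omitted):

```java
public class RotateDemo {
    // Minimal tree node with a parent pointer, for illustration only.
    static final class Node {
        String name; Node left, right, parent;
        Node(String name) { this.name = name; }
    }

    // Rotate left around p, mirroring the JDK's re-linking; returns the
    // (possibly new) root of the tree.
    static Node rotateLeft(Node root, Node p) {
        if (p == null || p.right == null) return root;
        Node r = p.right;
        if ((p.right = r.left) != null)        // r's left subtree becomes p's right
            p.right.parent = p;
        r.parent = p.parent;
        if (p.parent == null)                  // p was the root; r replaces it
            root = r;
        else if (p.parent.left == p)
            p.parent.left = r;
        else
            p.parent.right = r;
        r.left = p;                            // p drops down to r's left
        p.parent = r;
        return root;
    }

    public static void main(String[] args) {
        //   p              r
        //    \     =>     /
        //     r          p
        Node p = new Node("p"), r = new Node("r");
        p.right = r; r.parent = p;
        Node root = rotateLeft(p, p);
        System.out.println(root.name + " " + root.left.name);  // r p
    }
}
```

Right rotation is the exact mirror image, swapping every left/right.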
The left/right rotation process is covered by the diagrams above; two animated GIFs found online are also attached to aid understanding.
There is also an online red-black tree insert/delete animation demo [click me] for anyone who wants to experiment hands-on.
```java
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
```
treeifyBin() is triggered from putVal() once a bin's node count reaches "TREEIFY_THRESHOLD - 1" (the loop counts from the second node). If the table length has reached MIN_TREEIFY_CAPACITY, the linked list is converted into a red-black tree; otherwise the table is resized instead. treeify() works much like putTreeVal().
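The decision treeifyBin() makes can be summarized as a small function. This is a sketch mirroring the constants above, not JDK code:

```java
public class TreeifyDecision {
    // Constants mirror HashMap's TREEIFY_THRESHOLD and MIN_TREEIFY_CAPACITY.
    static final int TREEIFY_THRESHOLD = 8;
    static final int MIN_TREEIFY_CAPACITY = 64;

    /** What happens when a bin grows to binCount nodes in a table of length n. */
    static String onBinGrowth(int binCount, int n) {
        if (binCount < TREEIFY_THRESHOLD) return "stay linked list";
        if (n < MIN_TREEIFY_CAPACITY) return "resize (double table)";
        return "treeify (convert bin to red-black tree)";
    }

    public static void main(String[] args) {
        System.out.println(onBinGrowth(5, 16));   // stay linked list
        System.out.println(onBinGrowth(8, 16));   // resize (double table)
        System.out.println(onBinGrowth(8, 64));   // treeify
    }
}
```

The point of the two-stage rule: a long chain in a small table is usually caused by the table being too small, so doubling it is cheaper than building a tree.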
That wraps up HashMap insertion. If anything is wrong or unclear, feedback is welcome. Time is short; see you around.
For more interesting content, visit my site: rebey.cn
Finally, attached is the HashMap source's opening Javadoc, which I translated a while back — a fitting summary to bring things full circle.
```java
/**
 * Hash table based implementation of the <tt>Map</tt> interface. This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key. (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.) This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 * (Translator's note: e.g., the order changes on rehash.)
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets. Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings). Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>. The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created. The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased. When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 * (Translator's note: a HashMap structure diagram helps to visualize "how
 * full" the buckets are.)
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs. Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>). The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations. If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table. Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally. (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.) This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method. This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}. Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification. Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
```
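The fail-fast behavior described in the Javadoc is easy to observe in practice. A small demo (nothing here is JDK-internal):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.put("c", 3);               // structural modification mid-iteration
            }
            System.out.println("no exception");
        } catch (ConcurrentModificationException e) {
            // The hidden Iterator.next() detects the modCount change.
            System.out.println("fail-fast: ConcurrentModificationException");
        }
        // Iterator.remove is the sanctioned way to modify during iteration.
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) { it.next(); it.remove(); }
        System.out.println("after iterator removal, size = " + map.size());
    }
}
```

As the Javadoc warns, this exception is best-effort and should only be used to detect bugs, never for control flow.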