HashMap is probably the most commonly used of Java's collections. Before digging into HashMap, we need to understand two methods on Object: equals() and hashCode().

First, let's see how Object declares them in the JDK source.

hashCode():
```java
/**
 * Returns a hash code value for the object. This method is
 * supported for the benefit of hash tables such as those provided by
 * {@link java.util.HashMap}.
 * <p>
 * The general contract of {@code hashCode} is:
 * <ul>
 * <li>Whenever it is invoked on the same object more than once during
 *     an execution of a Java application, the {@code hashCode} method
 *     must consistently return the same integer, provided no information
 *     used in {@code equals} comparisons on the object is modified.
 *     This integer need not remain consistent from one execution of an
 *     application to another execution of the same application.
 * <li>If two objects are equal according to the {@code equals(Object)}
 *     method, then calling the {@code hashCode} method on each of
 *     the two objects must produce the same integer result.
 * <li>It is <em>not</em> required that if two objects are unequal
 *     according to the {@link java.lang.Object#equals(java.lang.Object)}
 *     method, then calling the {@code hashCode} method on each of the
 *     two objects must produce distinct integer results. However, the
 *     programmer should be aware that producing distinct integer results
 *     for unequal objects may improve the performance of hash tables.
 * </ul>
 * <p>
 * As much as is reasonably practical, the hashCode method defined by
 * class {@code Object} does return distinct integers for distinct
 * objects. (This is typically implemented by converting the internal
 * address of the object into an integer, but this implementation
 * technique is not required by the
 * Java&trade; programming language.)
 *
 * @return a hash code value for this object.
 * @see java.lang.Object#equals(java.lang.Object)
 * @see java.lang.System#identityHashCode
 */
public native int hashCode();
```
Notice that this method has no Java implementation: it is declared native. Also note this sentence from the javadoc: "but this implementation technique is not required by the Java™ programming language." We don't need to know exactly how hashCode computes its result; what we need to know is that it returns an integer hash code specific to the object.
equals():
```java
/**
 * Indicates whether some other object is "equal to" this one.
 * <p>
 * The {@code equals} method implements an equivalence relation
 * on non-null object references:
 * <ul>
 * <li>It is <i>reflexive</i>: for any non-null reference value
 *     {@code x}, {@code x.equals(x)} should return
 *     {@code true}.
 * <li>It is <i>symmetric</i>: for any non-null reference values
 *     {@code x} and {@code y}, {@code x.equals(y)}
 *     should return {@code true} if and only if
 *     {@code y.equals(x)} returns {@code true}.
 * <li>It is <i>transitive</i>: for any non-null reference values
 *     {@code x}, {@code y}, and {@code z}, if
 *     {@code x.equals(y)} returns {@code true} and
 *     {@code y.equals(z)} returns {@code true}, then
 *     {@code x.equals(z)} should return {@code true}.
 * <li>It is <i>consistent</i>: for any non-null reference values
 *     {@code x} and {@code y}, multiple invocations of
 *     {@code x.equals(y)} consistently return {@code true}
 *     or consistently return {@code false}, provided no
 *     information used in {@code equals} comparisons on the
 *     objects is modified.
 * <li>For any non-null reference value {@code x},
 *     {@code x.equals(null)} should return {@code false}.
 * </ul>
 * <p>
 * The {@code equals} method for class {@code Object} implements
 * the most discriminating possible equivalence relation on objects;
 * that is, for any non-null reference values {@code x} and
 * {@code y}, this method returns {@code true} if and only
 * if {@code x} and {@code y} refer to the same object
 * ({@code x == y} has the value {@code true}).
 * <p>
 * Note that it is generally necessary to override the {@code hashCode}
 * method whenever this method is overridden, so as to maintain the
 * general contract for the {@code hashCode} method, which states
 * that equal objects must have equal hash codes.
 *
 * @param obj the reference object with which to compare.
 * @return {@code true} if this object is the same as the obj
 *         argument; {@code false} otherwise.
 * @see #hashCode()
 * @see java.util.HashMap
 */
public boolean equals(Object obj) {
    return (this == obj);
}
```
I've included the full javadoc from the JDK source here, hoping it helps when some point needs clarifying.

Of course we can override these two methods, but do so carefully: if you override equals() you must override hashCode() to match, and a poorly distributed hashCode() will badly hurt HashMap's performance.
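As a quick illustration of why the two must stay in sync, here is a minimal sketch. The Point class and its fields are made up for this example; Objects.hash is a convenient stdlib helper for a consistent hashCode:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical value class used only for illustration.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Must agree with equals(): equal points produce equal hash codes.
        return Objects.hash(x, y);
    }
}

public class EqualsHashCodeDemo {
    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "found");
        // The lookup below succeeds only because hashCode() and equals()
        // are consistent; with Object's defaults it would return null.
        System.out.println(map.get(new Point(1, 2))); // prints "found"
    }
}
```

If Point overrode equals() but kept Object's identity-based hashCode(), the two `new Point(1, 2)` instances would land in different buckets and the lookup would fail.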
HashMap is a non-synchronized, hash-table-based implementation of the Map interface. It provides all of the optional map operations and permits null values and the null key. It makes no guarantee about iteration order; in particular, it does not guarantee the order will stay constant over time.

At bottom, Java programming has only two basic data structures: arrays and references (linked nodes); every other structure can be built from these two. Before JDK 1.8, HashMap was an array of linked lists ("hashing with chaining"). Since JDK 1.8, once a chain grows past a certain length it is converted into a red-black tree; red-black trees were covered in the previous article.

Java uses separate chaining: each array slot holds a linked list. After a key is hashed we obtain an array index and place the entry on the list at that slot.

Each element is represented by a Node:
```java
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}
```
Node is an inner class of HashMap that stores the data and maintains the chain structure. It is essentially a key-value mapping (a Map.Entry).

Naturally, two keys can end up at the same position (mostly because of how the index is derived from the hash, though occasionally because two keys produce the same hash value outright). This is called a hash collision. The more uniformly a hash algorithm spreads its results, the smaller the collision probability and the more efficient the map's storage.
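A concrete collision you can try yourself: the classic pair of Strings "Aa" and "BB" hash to exactly the same value, so as HashMap keys they always share a bucket:

```java
public class CollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" are distinct Strings with the same hashCode:
        // 'A'*31 + 'a' = 65*31 + 97 = 2112 and 'B'*31 + 'B' = 66*31 + 66 = 2112.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112
        // With the default table length 16, both keys map to slot 2112 & 15 = 0,
        // so they end up on the same chain.
        System.out.println(2112 & 15); // 0
    }
}
```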
Another very important field in HashMap is Node[] table. This is HashMap's basic structure: the array whose slots anchor the chains.

If the bucket array is large, even a poor hash function spreads entries out; if it is small, even a good hash function produces many collisions. So we must trade space cost against time cost: pick a sensible bucket array size for the situation, and on top of that design a hash function that minimizes collisions. How, then, does HashMap keep the collision probability low while also keeping the bucket array (Node[] table) small? The answer: a good hash function plus a good resize mechanism.

Before that, let's get to know some very important HashMap parameters. From the source:
```java
/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

/**
 * The maximum capacity, used if a higher value is implicitly specified
 * by either of the constructors with arguments.
 * MUST be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

/**
 * The load factor used when none specified in constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/**
 * The bin count threshold for using a tree rather than list for a
 * bin. Bins are converted to trees when adding an element to a
 * bin with at least this many nodes. The value must be greater
 * than 2 and should be at least 8 to mesh with assumptions in
 * tree removal about conversion back to plain bins upon
 * shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;

/**
 * The bin count threshold for untreeifying a (split) bin during a
 * resize operation. Should be less than TREEIFY_THRESHOLD, and at
 * most 6 to mesh with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;

/**
 * The smallest table capacity for which bins may be treeified.
 * (Otherwise the table is resized if too many nodes in a bin.)
 * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
 * between resizing and treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;

/**
 * The number of key-value mappings contained in this map.
 */
transient int size;

/**
 * The number of times this HashMap has been structurally modified.
 * Structural modifications are those that change the number of mappings in
 * the HashMap or otherwise modify its internal structure (e.g.,
 * rehash). This field is used to make iterators on Collection-views of
 * the HashMap fail-fast. (See ConcurrentModificationException).
 */
transient int modCount;

/**
 * The next size value at which to resize (capacity * load factor).
 *
 * @serial
 */
// (The javadoc description is true upon serialization.
// Additionally, if the table array has not been allocated, this
// field holds the initial array capacity, or zero signifying
// DEFAULT_INITIAL_CAPACITY.)
int threshold;

/**
 * The load factor for the hash table.
 *
 * @serial
 */
final float loadFactor;
```
These parameters matter as much as the data structure itself. In this article we will use and focus on the following:
```java
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
transient int size;
transient int modCount;
int threshold;
final float loadFactor;
```
First of all: the default length of Node[] table is 16, and the default loadFactor is 0.75. threshold is the maximum number of Nodes the map can hold before resizing: threshold = DEFAULT_INITIAL_CAPACITY * loadFactor, i.e. 16 * 0.75 = 12 by default. When the number of elements exceeds this, the map resizes to twice its previous capacity. Don't change 0.75 lightly. Only in special time/space trade-offs should you touch it: with plenty of memory and high demands on time efficiency, you can lower the load factor; with tight memory and lower demands on time, you can raise it, even above 1.
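The default numbers work out like this (a trivial sketch of the threshold formula, not HashMap internals):

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;        // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f; // DEFAULT_LOAD_FACTOR
        int threshold = (int) (capacity * loadFactor);
        // 12: the 13th put of a new key pushes size past threshold
        // and triggers a resize to capacity 32.
        System.out.println(threshold);
    }
}
```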
size is the number of Nodes actually present in the map. modCount counts structural modifications, as I explained in the earlier article on iterators. Note that in HashMap, modCount tracks structural changes only, such as adding a new Node; replacing the value of an existing Node leaves modCount unchanged, because that is not a structural change.
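The modCount bookkeeping is what makes HashMap's iterators fail fast. A small sketch showing both cases, structural change versus value-only replacement:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                // Adding a new key is a structural change: modCount is bumped,
                // and the iterator detects the mismatch on its next step.
                map.put("c", 3);
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }
        // Replacing an existing value is NOT a structural change,
        // so this loop completes normally.
        for (String key : map.keySet()) {
            map.put("a", 42);
        }
        System.out.println("value replacement is fine");
    }
}
```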
A side note for the curious: in HashMap, the length of the bucket array table must be a power of two (necessarily a composite number). This is unconventional; the conventional design makes the bucket count a prime, since primes give a somewhat lower collision probability than composites (for a proof sketch see http://blog.csdn.net/liuqiyao_01/article/details/14475159). Hashtable's initial bucket size of 11 is an example of the prime-sized design (though after resizing, Hashtable's size is no longer guaranteed prime). HashMap takes the unconventional route mainly to optimize the modulo operation and resizing, and to reduce the extra collisions it mixes the high bits of the hash into the index calculation.
The code:
```java
/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower. Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.) So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```
The hash algorithm here boils down to three steps: take the key's hashCode, mix the high bits in, and reduce the result to a table index.

For any given object, as long as its hashCode() return value is unchanged, the hash computed by the method above is always the same. The first idea that comes to mind is to take the hash modulo the array length, which distributes elements fairly evenly; but the modulo operation is relatively expensive. HashMap does it differently: the slot in the table array is computed as h & (table.length - 1).
This trick is elegant: it obtains the slot via h & (table.length - 1), and because HashMap's underlying array length is always a power of two, this is a deliberate speed optimization. When length is a power of two, h & (length - 1) is equivalent to taking h modulo length, i.e. h % length, but & is much cheaper than %.
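You can check the equivalence directly (it holds for non-negative h and power-of-two lengths):

```java
public class IndexDemo {
    public static void main(String[] args) {
        int length = 16; // must be a power of two
        // For every non-negative h, masking with (length - 1)
        // gives the same result as h % length.
        for (int h = 0; h < 1000; h++) {
            if ((h & (length - 1)) != h % length)
                throw new AssertionError("mismatch at " + h);
        }
        System.out.println(37 & 15); // 5
        System.out.println(37 % 16); // 5
    }
}
```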
The JDK 1.8 implementation also optimizes the high-bit mixing: it XORs the high 16 bits of hashCode() into the low 16 bits, (h = key.hashCode()) ^ (h >>> 16). This balances speed, utility, and quality: even when the table's length is small, both the high and the low bits take part in the hash, at almost no cost.
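A sketch of the effect, using the same transform as HashMap's hash(). The two sample hash values are made up to differ only in their high bits:

```java
public class SpreadDemo {
    // Same transform HashMap.hash(Object) applies in JDK 1.8.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int h1 = 0x10000; // bit 16 set, low 16 bits all zero
        int h2 = 0x20000; // bit 17 set, low 16 bits all zero
        int n = 16;       // small table
        // Raw hashes collide: both mask to index 0.
        System.out.println((h1 & (n - 1)) == (h2 & (n - 1))); // true
        // After spreading, the high bits reach the index: 1 vs 2.
        System.out.println(
            ((h1 ^ (h1 >>> 16)) & (n - 1)) ==
            ((h2 ^ (h2 >>> 16)) & (n - 1))); // false
    }
}
```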
Putting it together, the index is derived like this: call hashCode(), XOR the high 16 bits into the low 16 bits, then mask the result with (length - 1).
The put function works roughly as follows:

1. If table is null or empty, create it by calling resize().
2. Compute the index i = (n - 1) & hash. If bucket i is empty, place a new Node there.
3. Otherwise: if the first node's hash and key match, remember it as the existing node; if the bucket is a tree, insert via putTreeVal(); otherwise walk the chain, appending a new Node at the tail and calling treeifyBin() when the chain reaches TREEIFY_THRESHOLD.
4. If an existing mapping was found, replace its value (unless onlyIfAbsent forbids it) and return the old value.
5. Otherwise increment modCount, and if ++size exceeds threshold, call resize().

The code:
```java
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

/**
 * Implements Map.put and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @param value the value to put
 * @param onlyIfAbsent if true, don't change existing value
 * @param evict if false, the table is in creation mode.
 * @return previous value, or null if none
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
```
get works as follows:

1. Compute the bucket index (n - 1) & hash and check the first node there; if its hash and key match, return it.
2. Otherwise, if the bucket is a tree, search via getTreeNode(); if it is a chain, walk the chain comparing hash and key.
3. If nothing matches, return null.

The code:
```java
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Implements Map.get and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @return the node, or null if none
 */
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
```
Note (important — this will come up again): the put outline above is accurate for putVal as a whole, but if you take putVal apart step by step, the picture is incomplete, because it interacts with HashMap's resize mechanism. In the next HashMap article I will cover how putVal behaves in different situations, along with the most important function in the resize machinery: resize().

JDK 1.8 gives HashMap a very good resize mechanism. The previous article said that once a chain exceeds a certain length, it becomes a red-black tree. But is that really the whole story? Let's see how the treeify function actually proceeds:
```java
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
```
The very first check reveals that when table is shorter than 64 (MIN_TREEIFY_CAPACITY), the bin is not treeified at all; the map is resized instead. So resizing is triggered not only when the node count exceeds threshold.

The treeify function is designed this way to keep the algorithm well balanced overall: for a small table, resizing spreads the colliding entries out more cheaply than building a tree would.

To understand resizing, let's first look at how JDK 1.7 did it. Since I'm using JDK 1.8, the following code is taken from the web; if it differs from the real source, please let me know:
```java
void resize(int newCapacity) {              // pass in the new capacity
    Entry[] oldTable = table;               // reference to the pre-resize Entry array
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {  // old array already at the maximum (2^30)
        threshold = Integer.MAX_VALUE;      // set threshold to 2^31 - 1 so we never resize again
        return;
    }
    Entry[] newTable = new Entry[newCapacity]; // allocate a new Entry array
    transfer(newTable);                     // !! move the data into the new Entry array
    table = newTable;                       // point the table field at the new array
    threshold = (int) (newCapacity * loadFactor); // recompute the threshold
}
```
Its transfer method is as follows:
```java
void transfer(Entry[] newTable) {
    Entry[] src = table;                    // src references the old Entry array
    int newCapacity = newTable.length;
    for (int j = 0; j < src.length; j++) {  // walk the old Entry array
        Entry<K, V> e = src[j];             // take each element of the old array
        if (e != null) {
            src[j] = null;                  // drop the old array's reference (after the loop the old array references nothing)
            do {
                Entry<K, V> next = e.next;
                int i = indexFor(e.hash, newCapacity); // !! recompute each element's index in the new array
                e.next = newTable[i];       // mark [1]
                newTable[i] = e;            // place the element into the array
                e = next;                   // move to the next element on the chain
            } while (e != null);
        }
    }
}
```
From this code we can see that transfer walks each chain and inserts every element at the head of its new bucket's list. As a result, the relative order of elements on a chain is reversed.
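The reversal is just the usual head-insertion effect. A minimal sketch (with a made-up Node class, not HashMap's own) shows it:

```java
import java.util.ArrayList;
import java.util.List;

public class HeadInsertDemo {
    // Tiny stand-in for a bucket's chain node.
    static class Node { int val; Node next; Node(int v) { val = v; } }

    public static void main(String[] args) {
        // Old bucket: 1 -> 2 -> 3
        Node old = new Node(1);
        old.next = new Node(2);
        old.next.next = new Node(3);

        // JDK 1.7 transfer(): walk the old chain, head-insert into the new bucket.
        Node newHead = null;
        for (Node e = old; e != null; ) {
            Node next = e.next;
            e.next = newHead; // same trick as "e.next = newTable[i]"
            newHead = e;      // same trick as "newTable[i] = e"
            e = next;
        }

        // The chain comes out reversed: 3 -> 2 -> 1
        List<Integer> order = new ArrayList<>();
        for (Node e = newHead; e != null; e = e.next) order.add(e.val);
        System.out.println(order); // [3, 2, 1]
    }
}
```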
So what does JDK 1.8 optimize?

JDK 1.8 derives the index by the same rule as 1.7: h & (length - 1) is the node's index.

If we double the capacity, length - 1 is still a bit pattern ending in a run of 1s with zeros above, just one bit longer than before.

So after doubling, the new mask differs from the old one by exactly one extra high bit, and whether a node moves is decided by that single bit of its hash: the bit one position above the old (n - 1) mask. JDK 1.8 exploits this very neatly: instead of recomputing each index, it simply tests whether that bit is 0 or 1.

That one bit thus becomes a flag, and the resize moves each node according to whether its flag is 0 or 1. Note that the 16 in this discussion is not a constant; it is the old table length of the HashMap in question.

This also explains why, with a table length of only 16, many keys can easily share an index, yet no longer share it after resizing.
This design is genuinely clever: it saves the time of recomputing hashes, and since the extra bit is 0 or 1 essentially at random, the resize evenly redistributes previously colliding nodes across the new buckets. This is an optimization new in JDK 1.8. Note one difference: in JDK 1.7, when an old chain was migrated, elements that landed at the same new index ended up in reversed order; JDK 1.8 does not reverse them, exactly as the transfer code above showed.
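The index arithmetic behind the split can be verified with plain numbers (the sample hashes 5 and 21 are made up for illustration):

```java
public class SplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int h1 = 5;      // bit at position oldCap (value 16) is 0
        int h2 = 5 + 16; // bit at position oldCap is 1

        // Before resizing, both hashes share bucket 5:
        System.out.println(h1 & (oldCap - 1)); // 5
        System.out.println(h2 & (oldCap - 1)); // 5

        // After doubling, the extra mask bit is exactly (h & oldCap):
        System.out.println(h1 & (newCap - 1)); // 5  -> stays at j (lo list)
        System.out.println(h2 & (newCap - 1)); // 21 -> moves to j + oldCap (hi list)
    }
}
```

This is precisely the `(e.hash & oldCap) == 0` test that resize() uses to sort each chain into its lo and hi halves.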
In short: nodes whose extra bit is 0 stay at index j, and nodes whose extra bit is 1 move to index j + oldCap.

Admittedly, the JDK 1.8 resize code is considerably more complex. Everyone says it is beautifully written, but I still have doubts about some of the branching; many of the conditions seem to overlap, and I need to keep studying them. Even so, the overall flow of resize() is clear — how the table grows and how the chains are moved — and the code handles both nicely:
```java
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
```
In the end, if you only ever use HashMap, knowing its basic methods and structure is enough. But I'm convinced that understanding its internals deepens both your understanding of it and your ability to apply it well in different situations.

And, as always: the source code is the best teacher.

If it doesn't stick the first time, read it another ten.