http://www.cnblogs.com/DSNFZ/articles/7634042.html
https://www.cnblogs.com/DSNFZ/articles/7675347.html
HashMap is arguably one of the most commonly used collections in Java.

Before looking at HashMap itself, we first need to understand two methods on Object: equals() and hashCode().

Let's start with how they are defined in the Object source code:

hashCode():
```java
/**
 * Returns a hash code value for the object. This method is
 * supported for the benefit of hash tables such as those provided by
 * {@link java.util.HashMap}.
 * <p>
 * The general contract of {@code hashCode} is:
 * <ul>
 * <li>Whenever it is invoked on the same object more than once during
 *     an execution of a Java application, the {@code hashCode} method
 *     must consistently return the same integer, provided no information
 *     used in {@code equals} comparisons on the object is modified.
 *     This integer need not remain consistent from one execution of an
 *     application to another execution of the same application.
 * <li>If two objects are equal according to the {@code equals(Object)}
 *     method, then calling the {@code hashCode} method on each of
 *     the two objects must produce the same integer result.
 * <li>It is <em>not</em> required that if two objects are unequal
 *     according to the {@link java.lang.Object#equals(java.lang.Object)}
 *     method, then calling the {@code hashCode} method on each of the
 *     two objects must produce distinct integer results. However, the
 *     programmer should be aware that producing distinct integer results
 *     for unequal objects may improve the performance of hash tables.
 * </ul>
 * <p>
 * As much as is reasonably practical, the hashCode method defined by
 * class {@code Object} does return distinct integers for distinct
 * objects. (This is typically implemented by converting the internal
 * address of the object into an integer, but this implementation
 * technique is not required by the
 * Java&trade; programming language.)
 *
 * @return a hash code value for this object.
 * @see java.lang.Object#equals(java.lang.Object)
 * @see java.lang.System#identityHashCode
 */
public native int hashCode();
```
Notice that this method has no Java implementation: it is declared native. Pay particular attention to this sentence in the comment:

"but this implementation technique is not required by the Java™ programming language."

We do not need to know exactly how hashCode() is computed; what we need to know is that it returns an integer hash code specific to the object.

equals():
```java
/**
 * Indicates whether some other object is "equal to" this one.
 * <p>
 * The {@code equals} method implements an equivalence relation
 * on non-null object references:
 * <ul>
 * <li>It is <i>reflexive</i>: for any non-null reference value
 *     {@code x}, {@code x.equals(x)} should return
 *     {@code true}.
 * <li>It is <i>symmetric</i>: for any non-null reference values
 *     {@code x} and {@code y}, {@code x.equals(y)}
 *     should return {@code true} if and only if
 *     {@code y.equals(x)} returns {@code true}.
 * <li>It is <i>transitive</i>: for any non-null reference values
 *     {@code x}, {@code y}, and {@code z}, if
 *     {@code x.equals(y)} returns {@code true} and
 *     {@code y.equals(z)} returns {@code true}, then
 *     {@code x.equals(z)} should return {@code true}.
 * <li>It is <i>consistent</i>: for any non-null reference values
 *     {@code x} and {@code y}, multiple invocations of
 *     {@code x.equals(y)} consistently return {@code true}
 *     or consistently return {@code false}, provided no
 *     information used in {@code equals} comparisons on the
 *     objects is modified.
 * <li>For any non-null reference value {@code x},
 *     {@code x.equals(null)} should return {@code false}.
 * </ul>
 * <p>
 * The {@code equals} method for class {@code Object} implements
 * the most discriminating possible equivalence relation on objects;
 * that is, for any non-null reference values {@code x} and
 * {@code y}, this method returns {@code true} if and only
 * if {@code x} and {@code y} refer to the same object
 * ({@code x == y} has the value {@code true}).
 * <p>
 * Note that it is generally necessary to override the {@code hashCode}
 * method whenever this method is overridden, so as to maintain the
 * general contract for the {@code hashCode} method, which states
 * that equal objects must have equal hash codes.
 *
 * @param obj the reference object with which to compare.
 * @return {@code true} if this object is the same as the obj
 *         argument; {@code false} otherwise.
 * @see #hashCode()
 * @see java.util.HashMap
 */
public boolean equals(Object obj) {
    return (this == obj);
}
```
I have quoted the full JDK source comments here, in the hope that they will help when certain points need clarifying later on.
Of course we can override these two methods, but they must be overridden together and their contract kept intact: equal objects must produce equal hash codes. A carelessly written hashCode() can have a serious impact on HashMap's performance.
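As a minimal sketch of overriding the two methods together (the Point class and field names here are my own, not from the JDK):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class EqualsHashCodeDemo {
    // Hypothetical key class: equals() and hashCode() are derived from
    // the same fields, so equal objects always have equal hash codes.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        @Override public int hashCode() {
            return Objects.hash(x, y); // consistent with equals()
        }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "found");
        // Lookup with an equal-but-distinct instance only succeeds
        // because hashCode() agrees with equals().
        System.out.println(map.get(new Point(1, 2))); // found
    }
}
```

If hashCode() were left as Object's identity-based default while equals() compared fields, the second Point(1, 2) would likely hash to a different bucket and the lookup would return null.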
HashMap is a non-synchronized, hash-table-based implementation of the Map interface. It provides all of the optional map operations and permits null values and the null key. It makes no guarantees about the order of the map; in particular, it does not guarantee that the order will stay constant over time.
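A quick sketch of the null-key and null-value behaviour (class name is mine):

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "null key is allowed"); // hash(null) == 0, bucket 0
        map.put("k", null);                   // null values are allowed too
        System.out.println(map.get(null));        // null key is allowed
        System.out.println(map.containsKey("k")); // true
    }
}
```

By contrast, the older Hashtable throws NullPointerException for both null keys and null values.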
In Java, the two most basic data structures are the array and the linked reference; every other data structure can be built out of these two. Before JDK 1.8, HashMap was exactly that: a hash table of chained linked lists. Since JDK 1.8, once a bucket's list grows past a certain length it is converted into a red-black tree. Red-black trees were explained in the previous article.
The concrete structure of a HashMap is shown in the figure below.

Java uses separate chaining: each array slot holds a linked list. When a key is hashed, we obtain an array index and place the entry in the list at that index.

Each element is represented by a Node:
```java
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}
```
Node is an inner class of HashMap that stores the data and maintains the list structure. In essence it is a single mapping (a key-value pair).

Of course, two keys can end up at the same position, mostly because of how the index is derived, though occasionally because two keys produce the same hash value. This is called a hash collision. The more uniformly a hash algorithm spreads its results, the lower the probability of collisions and the more efficient the map's storage.
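A well-known concrete collision: the distinct strings "Aa" and "BB" have identical hashCode() values, so they always land in the same bucket:

```java
public class CollisionDemo {
    public static void main(String[] args) {
        // String.hashCode is s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1];
        // 'A'*31 + 'a' == 'B'*31 + 'B' == 2112.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112
        // With the default 16-slot table both keys map to bucket 0.
        System.out.println((16 - 1) & 2112); // 0
    }
}
```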
Another very important field in HashMap is Node[] table. As the figure above shows, this is HashMap's backbone: the array off which the linked lists hang.

If the bucket array is large, even a poor hash function will spread entries out; if it is small, even a good hash function will produce many collisions. So there is a trade-off between space cost and time cost: choose the bucket array's size for the situation at hand, and on top of that design a hash function that minimizes collisions. How, then, does HashMap keep the collision probability low while keeping the bucket array (Node[] table) small? The answer is a good hash function plus a resizing mechanism.

Before going further, let's look at some very important HashMap parameters. From the source code:
```java
/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

/**
 * The maximum capacity, used if a higher value is implicitly specified
 * by either of the constructors with arguments.
 * MUST be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

/**
 * The load factor used when none specified in constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/**
 * The bin count threshold for using a tree rather than list for a
 * bin. Bins are converted to trees when adding an element to a
 * bin with at least this many nodes. The value must be greater
 * than 2 and should be at least 8 to mesh with assumptions in
 * tree removal about conversion back to plain bins upon
 * shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;

/**
 * The bin count threshold for untreeifying a (split) bin during a
 * resize operation. Should be less than TREEIFY_THRESHOLD, and at
 * most 6 to mesh with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;

/**
 * The smallest table capacity for which bins may be treeified.
 * (Otherwise the table is resized if too many nodes in a bin.)
 * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
 * between resizing and treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;

/**
 * The number of key-value mappings contained in this map.
 */
transient int size;

/**
 * The number of times this HashMap has been structurally modified.
 * Structural modifications are those that change the number of mappings in
 * the HashMap or otherwise modify its internal structure (e.g.,
 * rehash). This field is used to make iterators on Collection-views of
 * the HashMap fail-fast. (See ConcurrentModificationException).
 */
transient int modCount;

/**
 * The next size value at which to resize (capacity * load factor).
 *
 * @serial
 */
// (The javadoc description is true upon serialization.
// Additionally, if the table array has not been allocated, this
// field holds the initial array capacity, or zero signifying
// DEFAULT_INITIAL_CAPACITY.)
int threshold;

/**
 * The load factor for the hash table.
 *
 * @serial
 */
final float loadFactor;
```
These parameters are extremely important; they matter as much as HashMap's data structure itself. In this article we will use and focus on the following ones:
```java
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
transient int size;
transient int modCount;
int threshold;
final float loadFactor;
```
First, note that the default length of Node[] table is 16 and the default loadFactor is 0.75. threshold is the maximum number of Nodes the HashMap can hold before resizing: threshold = DEFAULT_INITIAL_CAPACITY * loadFactor, so 12 by default. Once the number of entries exceeds this value, the map is resized, and its capacity doubles. Do not change 0.75 lightly; only in unusual time/space situations should you touch it. If memory is plentiful and time efficiency matters most, you can lower the load factor; conversely, if memory is tight and time efficiency is less critical, you can raise it, even above 1.
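The default threshold can be worked out directly (class name is mine):

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;         // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f;  // DEFAULT_LOAD_FACTOR
        // threshold = capacity * loadFactor; the 13th mapping
        // pushes size past it and triggers a resize to capacity 32.
        int threshold = (int) (capacity * loadFactor);
        System.out.println(threshold); // 12
    }
}
```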
size is the number of Nodes actually present in the HashMap. modCount is the number of times the HashMap's structure has been modified; I explained this when covering iterators. Note that in HashMap, modCount counts only structural changes, such as adding a new Node. Replacing the value of an existing Node leaves modCount unchanged, because that is not a structural change.
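This distinction is visible through the fail-fast iterator: a structural put during iteration trips ConcurrentModificationException, while a value replacement does not (a small sketch; class name is mine):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // Replacing an existing key's value is not structural:
        // modCount stays the same, so iteration finishes normally.
        for (String k : map.keySet()) {
            map.put("a", 42);
        }

        boolean failedFast = false;
        try {
            for (String k : map.keySet()) {
                map.put("c", 3); // adds a mapping: ++modCount
            }
        } catch (ConcurrentModificationException e) {
            failedFast = true;   // the iterator detected the change
        }
        System.out.println(failedFast); // true
    }
}
```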
A side note for the curious: in HashMap, the length of the bucket array table must be a power of two (and therefore always a composite number). This is an unconventional design; the conventional approach is to make the bucket count a prime, since primes tend to cause fewer collisions than composites (for a proof see http://blog.csdn.net/liuqiyao_01/article/details/14475159). Hashtable's initial bucket size of 11 is an example of the prime-sized design (though Hashtable cannot guarantee primality after resizing). HashMap chose the power-of-two design mainly to optimize the modulo operation and resizing, and to compensate for the extra collisions it mixes the high bits of the hash into the index computation.

The code:
```java
/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower. Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.) So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```
The hash algorithm here boils down to three steps: take the key's hashCode() value, mix in the high bits, and reduce the result modulo the table length.

For any given object, as long as its hashCode() return value stays the same, the hash computed by this method is always the same. The obvious next step is to take the hash modulo the array length, which distributes elements fairly evenly. But the modulo operation is relatively expensive, so HashMap instead computes the table index as h & (table.length - 1).
This trick is very neat. It works because the length of HashMap's underlying array is always a power of two, which is one of HashMap's speed optimizations: when length is a power of two, h & (length - 1) keeps exactly the low bits of h, which is equivalent to h % length for non-negative h, and & is considerably cheaper than %.
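The equivalence can be checked for a few sample hashes (class name is mine; Math.floorMod is used so the comparison also holds up for negative hashes, where Java's % would return a negative remainder):

```java
public class MaskVsModDemo {
    public static void main(String[] args) {
        int length = 16; // always a power of two in HashMap
        int[] hashes = {0, 1, 15, 16, 2112, 99162322, -5, Integer.MAX_VALUE};
        for (int h : hashes) {
            // h & (length - 1) keeps the low log2(length) bits of h,
            // which is h mod length in the mathematical (non-negative) sense.
            System.out.println((h & (length - 1)) == Math.floorMod(h, length));
        }
    }
}
```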
The JDK 1.8 implementation also optimizes the high-bit mixing by XORing the high 16 bits of hashCode() into the low 16 bits: (h = key.hashCode()) ^ (h >>> 16). This balances speed, utility, and quality: even when the table's length is small, both the high and the low bits take part in the hash computation, at essentially no extra cost.
Let's take an example: compute the key's hashCode(), XOR its high 16 bits into the low bits, then AND the result with (table.length - 1). That is roughly how an index is derived.
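The three steps can be traced for a concrete key (class name is mine):

```java
public class IndexDemo {
    public static void main(String[] args) {
        int n = 16;                   // default table length
        int h = "hello".hashCode();   // step 1: key.hashCode() -> 99162322
        int spread = h ^ (h >>> 16);  // step 2: fold high 16 bits into low 16
        int index = (n - 1) & spread; // step 3: mask instead of modulo
        System.out.println(h);        // 99162322
        System.out.println(index);    // 11
        // For power-of-two n the mask is exactly a (non-negative) modulo.
        System.out.println(index == Math.floorMod(spread, n)); // true
    }
}
```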
The rough flow of put() is:

1. If table is null or empty, call resize() to create it.
2. Compute the bucket index (n - 1) & hash; if that bucket is empty, place a new Node there directly.
3. Otherwise, if the bucket's first node has a matching hash and key, that is the node to update.
4. Otherwise, if the first node is a TreeNode, insert into the red-black tree.
5. Otherwise walk the list: stop if a node with an equal key is found; else append a new Node at the tail, and if the bin now holds at least TREEIFY_THRESHOLD nodes, treeify it.
6. If an existing mapping was found, replace its value (unless onlyIfAbsent applies) and return the old value.
7. Otherwise increment modCount and size, call resize() if size exceeds threshold, and return null.

The code is as follows:
```java
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

/**
 * Implements Map.put and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @param value the value to put
 * @param onlyIfAbsent if true, don't change existing value
 * @param evict if false, the table is in creation mode.
 * @return previous value, or null if none
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
```
The flow of get() is as follows:

1. Compute the bucket index from the key's hash; if the table or that bucket is empty, return null.
2. Always check the bucket's first node; if its hash and key match, return it.
3. Otherwise, if the first node is a TreeNode, search the red-black tree.
4. Otherwise walk the linked list until a node with matching hash and key is found, or the list ends.

The code is as follows:
```java
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Implements Map.get and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @return the node, or null if none
 */
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
```
Note: the put() flow above is accurate as a reading of putVal(), but if you break putVal() apart and analyze it fully, the picture is still incomplete, because it involves HashMap's resizing mechanism. In the next HashMap article I will look at how putVal() behaves in different situations, along with the most important function in the resizing mechanism, resize().