Personal site: www.mycookies.cn
```java
/**
 * A hash table supporting full concurrency of retrievals and
 * adjustable expected concurrency for updates. This class obeys the
 * same functional specification as {@link java.util.Hashtable}, and
 * includes versions of methods corresponding to each method of
 * <tt>Hashtable</tt>. However, even though all operations are
 * thread-safe, retrieval operations do <em>not</em> entail locking,
 * and there is <em>not</em> any support for locking the entire table
 * in a way that prevents all access. This class is fully
 * interoperable with <tt>Hashtable</tt> in programs that rely on its
 * thread safety but not on its synchronization details.
 *
 * <p> Retrieval operations (including <tt>get</tt>) generally do not
 * block, so may overlap with update operations (including
 * <tt>put</tt> and <tt>remove</tt>). Retrievals reflect the results
 * of the most recently <em>completed</em> update operations holding
 * upon their onset. For aggregate operations such as <tt>putAll</tt>
 * and <tt>clear</tt>, concurrent retrievals may reflect insertion or
 * removal of only some entries. Similarly, Iterators and
 * Enumerations return elements reflecting the state of the hash table
 * at some point at or since the creation of the iterator/enumeration.
 * They do <em>not</em> throw {@link ConcurrentModificationException}.
 * However, iterators are designed to be used by only one thread at a time.
 *
 * <p> The allowed concurrency among update operations is guided by
 * the optional <tt>concurrencyLevel</tt> constructor argument
 * (default <tt>16</tt>), which is used as a hint for internal sizing.
 * The table is internally partitioned to try to permit the indicated
 * number of concurrent updates without contention. Because placement
 * in hash tables is essentially random, the actual concurrency will
 * vary. Ideally, you should choose a value to accommodate as many
 * threads as will ever concurrently modify the table. Using a
 * significantly higher value than you need can waste space and time,
 * and a significantly lower value can lead to thread contention. But
 * overestimates and underestimates within an order of magnitude do
 * not usually have much noticeable impact. A value of one is
 * appropriate when it is known that only one thread will modify and
 * all others will only read. Also, resizing this or any other kind of
 * hash table is a relatively slow operation, so, when possible, it is
 * a good idea to provide estimates of expected table sizes in
 * constructors.
 */
```
In other words: this is a hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. The class obeys the same functional specification as {@link java.util.Hashtable} and includes a version of each of Hashtable's methods. However, even though all operations are thread-safe, retrieval operations do not require locking, and there is no support for locking the entire table in a way that prevents all access. In programs that rely on its thread safety but not on its synchronization details, this class is fully interoperable with Hashtable.
Retrieval operations (including get) generally do not block, so they may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations holding at their onset. For aggregate operations such as putAll and clear, concurrent retrievals may reflect the insertion or removal of only some entries. Similarly, Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration; they never throw ConcurrentModificationException. However, an iterator is designed to be used by only one thread at a time.
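That weak-consistency guarantee can be observed directly: mutating the map mid-iteration completes without ConcurrentModificationException, where a fail-fast HashMap iterator would throw. A minimal sketch (the class name is illustrative):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WeaklyConsistentDemo {
    // Returns true if iterating while the map is mutated completes
    // without ConcurrentModificationException.
    static boolean iterateWhileMutating() {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            map.put("c", 3); // structural update during traversal: legal here
        }
        return true; // a HashMap iterator would have thrown CME above
    }

    public static void main(String[] args) {
        System.out.println(iterateWhileMutating()); // true
    }
}
```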
The allowed concurrency among update operations is guided by the optional concurrencyLevel constructor argument (default 16), which is used as a hint for internal sizing. The table is internally partitioned to try to permit the indicated number of concurrent updates without contention. Because placement in hash tables is essentially random, the actual concurrency will vary. Ideally, you should choose a value to accommodate as many threads as will ever concurrently modify the table. Using a significantly higher value than you need wastes space and time, and a significantly lower value can lead to thread contention, but overestimates and underestimates within an order of magnitude usually have little noticeable impact. A value of one is appropriate when it is known that only one thread will modify the table and all others will only read. Also, resizing this or any other kind of hash table is a relatively slow operation, so, when possible, it is a good idea to provide estimates of the expected table size in the constructor.
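These sizing hints map onto the three-argument constructor. A small sketch (the particular values are illustrative; in JDK 7 the concurrency level is rounded up to a power of two and that many Segments are allocated):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SizingDemo {
    // initialCapacity = 64, loadFactor = 0.75f, concurrencyLevel = 8.
    // Pass concurrencyLevel = 1 when a single writer is paired with
    // many readers.
    static Map<String, Integer> build() {
        return new ConcurrentHashMap<>(64, 0.75f, 8);
    }

    public static void main(String[] args) {
        Map<String, Integer> m = build();
        m.put("x", 1);
        System.out.println(m.size()); // 1
    }
}
```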
ConcurrentHashMap's inner class HashEntry
```java
// Stores a key/value pair. Unlike Hashtable's entry, value (and next)
// are declared volatile, so readers see completed writes without locking.
static final class HashEntry<K,V> {
    final int hash;
    final K key;
    volatile V value;
    volatile HashEntry<K,V> next;

    HashEntry(int hash, K key, V value, HashEntry<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    /**
     * Sets next field with volatile write semantics. (See above
     * about use of putOrderedObject.)
     */
    final void setNext(HashEntry<K,V> n) {
        UNSAFE.putOrderedObject(this, nextOffset, n);
    }

    // Unsafe mechanics
    static final sun.misc.Unsafe UNSAFE;
    static final long nextOffset;
    static {
        try {
            UNSAFE = sun.misc.Unsafe.getUnsafe();
            Class k = HashEntry.class;
            nextOffset = UNSAFE.objectFieldOffset
                (k.getDeclaredField("next"));
        } catch (Exception e) {
            throw new Error(e);
        }
    }
}
```
```java
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)                // null values are not allowed
        throw new NullPointerException();
    int hash = hash(key);             // first hash pass over the key
    int j = (hash >>> segmentShift) & segmentMask; // map to a segment slot
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
        s = ensureSegment(j);
    return s.put(key, hash, value, false);
}

// Returns the segment for the given index, creating it and recording
// it in the segment table (via CAS) if it does not already exist.
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    // only if the segment at this index does not exist yet
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        Segment<K,V> proto = ss[0]; // use segment 0 as prototype
        int cap = proto.table.length;
        float lf = proto.loadFactor;
        int threshold = (int)(cap * lf);
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
            == null) { // recheck
            // create a Segment
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                   == null) {
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                    break;
            }
        }
    }
    return seg;
}

final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    // Try the lock once: on success node is null; otherwise
    // scanAndLockForPut keeps trying until the lock is held, and may
    // speculatively create the node to insert along the way.
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) {
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock(); // release the segment lock
    }
    return oldValue;
}
```
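The observable behavior of this put path: null values are rejected, an overwrite returns the previous mapping, and putIfAbsent follows the same code with onlyIfAbsent = true. A small sketch (the class name is illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutDemo {
    static String previous;
    static boolean npeOnNullValue;

    static void run() {
        ConcurrentMap<String, String> map = new ConcurrentHashMap<>();
        map.put("k", "v1");
        previous = map.put("k", "v2"); // overwrite: returns the old value
        map.putIfAbsent("k", "v3");    // onlyIfAbsent path: no overwrite
        try {
            map.put("x", null);        // null values are rejected up front
        } catch (NullPointerException e) {
            npeOnNullValue = true;
        }
    }

    public static void main(String[] args) {
        run();
        System.out.println(previous + " " + npeOnNullValue); // v1 true
    }
}
```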
If the current thread is the holder of this lock, the hold count is decremented. If the hold count is then zero, the lock is released. If the current thread is not the holder of this lock, an {@link IllegalMonitorStateException} is thrown.
```java
/**
 * Attempts to release this lock.
 *
 * <p>If the current thread is the holder of this lock then the hold
 * count is decremented. If the hold count is now zero then the lock
 * is released. If the current thread is not the holder of this
 * lock then {@link IllegalMonitorStateException} is thrown.
 *
 * @throws IllegalMonitorStateException if the current thread does not
 *         hold this lock
 */
public void unlock() {
    sync.release(1);
}
```
Scans for a node containing the given key while trying to acquire the lock, creating and returning one if it is not found. Upon return, the lock is guaranteed to be held.
```java
/**
 * Scans for a node containing given key while trying to
 * acquire lock, creating and returning one if not found. Upon
 * return, guarantees that lock is held. Unlike in most
 * methods, calls to method equals are not screened: Since
 * traversal speed doesn't matter, we might as well help warm
 * up the associated code and accesses as well.
 *
 * @return a new node if key not found, else null
 */
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
    HashEntry<K,V> first = entryForHash(this, hash);
    HashEntry<K,V> e = first;
    HashEntry<K,V> node = null;
    int retries = -1; // negative while locating node
    while (!tryLock()) {
        HashEntry<K,V> f; // to recheck first below
        if (retries < 0) {
            if (e == null) {
                if (node == null) // speculatively create node
                    node = new HashEntry<K,V>(hash, key, value, null);
                retries = 0;
            }
            else if (key.equals(e.key))
                retries = 0;
            else
                e = e.next;
        }
        else if (++retries > MAX_SCAN_RETRIES) {
            lock();
            break;
        }
        else if ((retries & 1) == 0 &&
                 (f = entryForHash(this, hash)) != first) {
            e = first = f; // re-traverse if entry changed
            retries = -1;
        }
    }
    return node;
}
```
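The acquisition pattern here, a bounded number of tryLock() spins followed by an unconditional blocking lock(), can be sketched in isolation. SpinThenLock and MAX_SPINS are hypothetical names standing in for the Segment logic and MAX_SCAN_RETRIES:

```java
import java.util.concurrent.locks.ReentrantLock;

public class SpinThenLock {
    static final int MAX_SPINS = 64; // stand-in for MAX_SCAN_RETRIES

    // Spin on tryLock() a bounded number of times, then fall back to
    // the blocking lock(); on return the lock is always held.
    static void acquire(ReentrantLock lock) {
        int retries = 0;
        while (!lock.tryLock()) {
            if (++retries > MAX_SPINS) {
                lock.lock(); // park instead of burning CPU forever
                break;
            }
        }
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        acquire(lock);
        System.out.println(lock.isHeldByCurrentThread()); // true
        lock.unlock();
    }
}
```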
The lock is acquired only if it is not held by another thread at the time of invocation.
Acquires the lock only if it is not held by another thread at the time of invocation, immediately returning true and setting the lock hold count to one. Even when this lock has been set to use a fair ordering policy, a call to tryLock() will immediately acquire the lock if it is available, whether or not other threads are currently waiting for it. This "barging" behavior can be useful in certain circumstances, even though it breaks fairness. If you want to honor the fairness setting for this lock, use {@link #tryLock(long, TimeUnit)} as tryLock(0, TimeUnit.SECONDS), which is almost equivalent (it also detects interruption). If the current thread already holds this lock, the hold count is incremented by one and the method returns true. If the lock is held by another thread, this method returns immediately with the value false.
```java
public boolean tryLock() {
    return sync.nonfairTryAcquire(1);
}

final boolean nonfairTryAcquire(int acquires) {
    // the acquiring thread
    final Thread current = Thread.currentThread();
    int c = getState(); // state is a volatile field
    if (c == 0) { // state == 0: the lock is currently free
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current); // current thread now owns the lock
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) { // reentrant acquire
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

protected final void setExclusiveOwnerThread(Thread t) {
    exclusiveOwnerThread = t;
}

protected final Thread getExclusiveOwnerThread() {
    return exclusiveOwnerThread;
}
```
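Both outcomes of nonfairTryAcquire, reentrant success on the owning thread and immediate false on any other thread, are observable through the public tryLock() API. A small sketch (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static boolean reentrant;
    static boolean otherThreadGotIt;

    static void run() {
        final ReentrantLock lock = new ReentrantLock();
        lock.lock();                   // state = 1, owner = main thread
        reentrant = lock.tryLock();    // same owner: state = 2, returns true
        Thread t = new Thread(() -> otherThreadGotIt = lock.tryLock());
        t.start();
        try {
            t.join();                  // other thread saw state != 0: false
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        lock.unlock();
        lock.unlock();
    }

    public static void main(String[] args) {
        run();
        System.out.println(reentrant + " " + otherThreadGotIt); // true false
    }
}
```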
```java
public int size() {
    // Try a few times to get accurate count. On failure due to
    // continuous async changes in table, resort to locking.
    final Segment<K,V>[] segments = this.segments;
    int size;
    boolean overflow; // true if size overflows 32 bits
    long sum;         // sum of modCounts
    long last = 0L;   // previous sum
    int retries = -1; // first iteration isn't retry
    try {
        for (;;) {
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    ensureSegment(j).lock(); // lock every segment
            }
            sum = 0L;
            size = 0;
            overflow = false;
            for (int j = 0; j < segments.length; ++j) {
                Segment<K,V> seg = segmentAt(segments, j);
                if (seg != null) {
                    sum += seg.modCount;
                    int c = seg.count;
                    if (c < 0 || (size += c) < 0)
                        overflow = true;
                }
            }
            if (sum == last)
                break;
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int j = 0; j < segments.length; ++j)
                segmentAt(segments, j).unlock(); // release every segment's lock
        }
    }
    return overflow ? Integer.MAX_VALUE : size;
}
```
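Once all writers have finished, the per-segment counts sum exactly, so size() is precise in a quiescent state even after heavily concurrent insertion. A small sketch that fills the map from several threads and then reads the size (class and parameter names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SizeDemo {
    // Fill the map from several threads with distinct keys, join them
    // all, then read size(): with no in-flight updates the per-segment
    // counts sum to exactly threads * perThread.
    static int parallelFill(int threads, int perThread) {
        final ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++)
                    map.put(id + ":" + j, j); // keys are unique per thread
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(parallelFill(4, 1000)); // 4000
    }
}
```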
Summary: ConcurrentHashMap is a thread-safe hash table implemented through segmentation. It contains a "Segment array", and each Segment is both a hash table and a reentrant mutual-exclusion lock. First, each Segment is a hash table: it holds a "HashEntry array", and each HashEntry element in that array is the head of a singly linked list, i.e. the Segment is a chained hash table. Second, each Segment is a reentrant mutex: Segment extends ReentrantLock, which is exactly a reentrant mutual-exclusion lock. For add and remove operations on ConcurrentHashMap, the thread acquires the Segment's lock before the operation begins and releases it only when the operation completes. Read operations instead rely on volatile: the HashEntry fields are volatile, and volatile guarantees that a read of a volatile variable always sees the most recent write to that variable by any thread, so we always read the value other threads have written into a HashEntry. Together, these mechanisms are how ConcurrentHashMap achieves thread safety.
Segmentation shrinks lock granularity: if the whole map were guarded by a single lock, operations on different key/value pairs could not proceed in parallel. ConcurrentHashMap splits the table into segments, each with its own lock, so each lock covers less data, but at the same time the number of locks grows. When a global property of the ConcurrentHashMap must be computed (for example in its size() method), the locks of all Segments may have to be acquired.
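The lock-striping idea can be sketched generically. StripedCounter below is a hypothetical illustration, not part of the JDK: contention is spread over an array of locks indexed by hash, mirroring how each Segment guards only its own slice of the table, and a "global" read must visit every stripe, just as size() must visit (and in the worst case lock) every Segment:

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private final ReentrantLock[] locks;
    private final long[] counts;

    StripedCounter(int stripes) {
        locks = new ReentrantLock[stripes];
        counts = new long[stripes];
        for (int i = 0; i < stripes; i++)
            locks[i] = new ReentrantLock();
    }

    void add(Object key, long delta) {
        int i = (key.hashCode() & 0x7fffffff) % locks.length;
        locks[i].lock(); // only this stripe is serialized
        try {
            counts[i] += delta;
        } finally {
            locks[i].unlock();
        }
    }

    // A global read takes every stripe's lock, like size() in the
    // worst case.
    long total() {
        long sum = 0;
        for (int i = 0; i < locks.length; i++) {
            locks[i].lock();
            try {
                sum += counts[i];
            } finally {
                locks[i].unlock();
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        StripedCounter c = new StripedCounter(8);
        c.add("a", 2);
        c.add("b", 3);
        System.out.println(c.total()); // 5
    }
}
```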
GitHub: https://github.com/liqianggh