Volley is a very popular open-source Android framework. I use it often myself, but I had never looked very deeply into how it works internally. To use it with more confidence in the future, understanding its implementation is an important step, so I did some digging and wrote up these notes to share. Along the way I also drew a diagram that should help make the steps and the underlying principles easier to follow.
The diagram may be confusing at first glance, so let's work through it together with the source code, step by step. Time to enter the source of Volley and see how its authors approached the design.
Anyone who has used Volley knows that whether you are making a plain network request or loading images, the first thing to do is create a RequestQueue:
public static final RequestQueue volleyQueue = Volley.newRequestQueue(App.mContext);
Creating a RequestQueue naturally goes through Volley's static newRequestQueue method; as the diagram shows, newRequestQueue is the entry point. So let's see what this method actually does:
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}
In the source above there is a stack parameter, which represents the HTTP transport, either HttpClient or HttpUrlConnection. We normally pass null and let Volley choose the more suitable one for us: when the device runs API level 9 or higher it uses HurlStack, which is implemented on top of HttpUrlConnection, and below API 9 it creates HttpClientStack, which is backed by HttpClient. The chosen stack is wrapped in a BasicNetwork and passed on as a Network. After that the RequestQueue is constructed, and a disk cache, new DiskBasedCache(cacheDir), is created for us along the way; it defaults to 5MB and its cache directory is named volley, so the cached files can be found under data/data/<application package>/volley. Finally start is called and the RequestQueue is returned. That is the internal implementation behind the first snippet above. Next, let's step into RequestQueue's start method and see what it actually does.
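Before we do, one side note on the stack parameter: because newRequestQueue accepts it explicitly, you can also pass a stack yourself instead of null to force a particular transport. A minimal sketch, assuming the standard com.android.volley.toolbox classes (the factory class here is just for illustration):

import android.content.Context;

import com.android.volley.RequestQueue;
import com.android.volley.toolbox.HurlStack;
import com.android.volley.toolbox.Volley;

public class QueueFactory {
    // Force the HttpUrlConnection-based HurlStack instead of letting
    // newRequestQueue pick a stack based on the API level.
    public static RequestQueue create(Context context) {
        return Volley.newRequestQueue(context.getApplicationContext(), new HurlStack());
    }
}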
Inside RequestQueue, the shorter constructors call through via this(...), so by default we end up in the constructor below:
public RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
Here cache is what NetworkDispatcher will later use to store response.cacheEntry; network is the transport wrapper chosen earlier according to the platform version; threadPoolSize is the size of the network thread pool, 4 by default; and delivery is the ExecutorDelivery bound to the main thread, which performs the final dispatch of request responses.
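For illustration only, nothing stops you from wiring these pieces together yourself instead of going through Volley.newRequestQueue, for example to use a smaller thread pool. A rough sketch, where the cache directory name and the pool size of 2 are arbitrary choices rather than Volley defaults:

import java.io.File;

import android.content.Context;

import com.android.volley.Network;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;

public class SmallPoolQueue {
    public static RequestQueue create(Context context) {
        // Same building blocks newRequestQueue uses, but with only 2 NetworkDispatchers.
        File cacheDir = new File(context.getCacheDir(), "my-volley-cache");
        Network network = new BasicNetwork(new HurlStack());
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
        queue.start();
        return queue;
    }
}

Now back to the start method itself: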
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
In start we can see that two kinds of dispatchers are set up: a CacheDispatcher for cache dispatching and NetworkDispatchers for network dispatching. Both are really just Threads, which is why start is called on each of them. Four NetworkDispatchers are built by default, which effectively gives us a pool of four network threads. Let's not look at their run methods just yet; instead, let's continue with the add method of RequestQueue that we call all the time.
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
Combining the code with the diagram above: first the current request is tied to this RequestQueue and, under a lock, added to mCurrentRequests as one of the requests currently in flight. The request is then checked to see whether it should be cached (the default is true; if you don't want caching, call request.setShouldCache(false)). If it should not be cached, it is added straight to the mNetworkQueue we saw earlier, to be picked up later by a NetworkDispatcher, and the request is returned. If it should be cached, look at the synchronized (mWaitingRequests) block: what happens depends on whether mWaitingRequests already contains the cacheKey, which is simply the request's URL; if it does not, the request is added to the cache queue mCacheQueue.
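As a quick illustration of the shouldCache switch discussed above, here is a small sketch (the endpoint URL is made up):

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;

public class NoCacheExample {
    public static void fetchWithoutCache(RequestQueue queue) {
        StringRequest request = new StringRequest(Request.Method.GET,
                "http://example.com/time",   // hypothetical endpoint
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) {
                        // Always a fresh result; the cache was never consulted.
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        // Handle the failure.
                    }
                });
        // Skip mCacheQueue entirely: add() drops the request straight into mNetworkQueue.
        request.setShouldCache(false);
        queue.add(request);
    }
}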
A careful reader will notice that when mWaitingRequests already holds the cacheKey, a LinkedList, i.e. the Queue, is created if necessary and the request is only added to that staged queue; only the corresponding value in mWaitingRequests is updated, and the request never enters mCacheQueue. That is not the whole story, though, because finish will be called later. Let's look at its source:
<T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}
Look at the mWaitingRequests.remove(cacheKey) and mCacheQueue.addAll(waitingRequests) lines: exactly as described above, every request staged in that queue is added to mCacheQueue.
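To make the staging behaviour concrete, here is a small sketch that adds two cacheable requests for the same (made-up) URL back to back:

import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.toolbox.StringRequest;

public class StagingExample {
    // Both requests share the same cacheKey (their URL), so the second one is only
    // staged in mWaitingRequests. When the first one finishes, finish() moves the
    // staged request into mCacheQueue, where it can usually be answered from the
    // cache that the first request has just primed.
    public static void addTwice(RequestQueue queue,
                                Response.Listener<String> listener,
                                Response.ErrorListener errorListener) {
        String url = "http://example.com/feed";
        queue.add(new StringRequest(url, listener, errorListener));
        queue.add(new StringRequest(url, listener, errorListener));
    }
}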
That covers the main source of RequestQueue. Next, let's analyse the source of CacheDispatcher and see how it actually works.
As mentioned earlier, both it and NetworkDispatcher are essentially Threads, so naturally we look at the run method:
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }

        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
It looks like a lot, so let's combine it with the diagram and pick out the essentials. First an infinite loop is set up that keeps watching for requests: it takes a request from the cache queue mCacheQueue. If that request has been cancelled, request.finish() cleans up its data and the loop moves on to the next request; otherwise a Cache.Entry is fetched from the mCache mentioned earlier. If the entry does not exist, or it has expired, the request is put into the network queue mNetworkQueue so the actual network request follows. If a usable entry exists, request.parseNetworkResponse() parses it into a response; different request types have different parsing logic, for example StringRequest and JsonObjectRequest each have their own implementation. Now look at the entry.refreshNeeded() branch: whether or not the entry needs refreshing, the response is handed to mDelivery.postResponse(request, response) for delivery; the difference is that when a refresh is needed, the entry is set back on the request and the request is also put into mNetworkQueue, which effectively repeats the request over the network.
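To make parseNetworkResponse less abstract, here is a minimal sketch of a custom request, loosely modelled on StringRequest; the class name is invented and error handling is kept to a bare minimum:

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

public class SimpleTextRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public SimpleTextRequest(String url, Response.Listener<String> listener,
                             Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Turn the raw bytes into the typed result and derive the cache entry
        // from the HTTP headers, much like the built-in requests do.
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }
}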
Back to the delivery stage: as mentioned earlier, an ExecutorDelivery was created when the RequestQueue was built, and mDelivery.postResponse is one of its methods. Let's take a look.
Here an Executor is created that simply wraps a Handler; it is used for the subsequent delivery and runs on the main thread.
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}
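Where does that Handler come from? Roughly speaking (paraphrased from RequestQueue's convenience constructor), it is attached to the main looper, which is why the response callbacks end up on the UI thread:

// Inside RequestQueue: the default delivery is built with a Handler on the
// main looper, so everything posted through it runs on the UI thread.
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

postResponse is then the method the dispatchers call: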
@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
postResponse itself is simple: it just calls execute. So let's go into the run method of ResponseDeliveryRunnable and see how it executes:
public void run() {
    // If this request has canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response, add a marker, otherwise we're done
    // and the request can be finished.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it.
    if (mRunnable != null) {
        mRunnable.run();
    }
}
The key part is the mResponse.isSuccess() branch, which delivers differently depending on the outcome: deliverResponse and deliverError internally call the onResponse and onErrorResponse methods of the Listeners we know so well, which brings us back to our own code that handles the result of the network request.
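For reference, those two hooks boil down to very little; roughly, paraphrased from StringRequest and Request:

// StringRequest: hand the parsed string to the listener given in the constructor.
@Override
protected void deliverResponse(String response) {
    mListener.onResponse(response);
}

// Request: forward the error to the error listener, if one was set.
public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}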
That is the whole cache dispatch in a nutshell: if cached response data exists for a request, no network request is made and the cached data is parsed and delivered directly; otherwise the network request is performed.
Next, let's see how NetworkDispatcher handles things:
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
Here, too, an infinite while loop is set up, and again the first step is to take a request, this time from mQueue, i.e. the mNetworkQueue that has come up several times already. Much of the code is similar to CacheDispatcher, and it matches the diagram: if the request has already been cancelled it is finished right away; otherwise the network request is performed by calling mNetwork.performRequest(request). This mNetwork is the wrapper around the stack that RequestQueue selected for the platform version earlier, so the call ends up in the performRequest method of HurlStack or HttpClientStack, which builds the request headers and parameters and carries out the actual HTTP request with HttpUrlConnection or HttpClient respectively. Then comes request.parseNetworkResponse(networkResponse), which should look familiar: the response is parsed just as in CacheDispatcher; next, if the request should be cached, the result is written into the cache; and finally mDelivery.postResponse(request, response) delivers it. The remaining steps are exactly the same as in CacheDispatcher, so there is no need to repeat them here.
That wraps up this walkthrough of the Volley source. If you go back to the diagram now, doesn't it all feel much clearer?
To summarise a network request made with Volley:

1. newRequestQueue initialises and builds the RequestQueue.
2. RequestQueue's add method puts the request onto the request queue.
3. CacheDispatcher checks whether a cached result exists; if so it parses the response and delivers it directly with postResponse, otherwise the request moves on to the network.
4. NetworkDispatcher performs the actual request, parses the response, saves the result to the cache if caching is enabled, and finally delivers it with postResponse.

If this was helpful, you are welcome to follow my next analysis.
More posts: my personal blog