Android Volley Source Code Analysis (Part 1)


Volley is a very popular Android open-source framework. I use it often myself, but I had never dug very deep into how it works internally. To use it with more confidence in the future, understanding its implementation is an important step. I did some digging, took notes, and am sharing them here. Along the way I also drew a diagram, which should help us understand the steps and principles involved. Here it is:

(Diagram: an overview of the Volley request flow)

It may look confusing at first glance, so let's walk through it step by step alongside the source code. Now let's enter the world of Volley's source together and see how its authors designed it.

newRequestQueue

Anyone who has used Volley knows that whether you are making a network request or loading an image, the first step is always to create a RequestQueue:

public static final RequestQueue volleyQueue = Volley.newRequestQueue(App.mContext);

Creating a RequestQueue naturally goes through Volley's static newRequestQueue method; as the diagram above shows, newRequestQueue is the entry point. Let's look at what this method actually does:

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }
        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }
        Network network = new BasicNetwork(stack);
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();
        return queue;
    }

In the source above we find a `stack` parameter, which stands for the HTTP transport: `HttpClient` or `HttpUrlConnection`. We normally pass `null`, because the method then picks the more suitable implementation for us: on API level 9 and above it uses `HurlStack`, which implements the communication with `HttpUrlConnection`, while below 9 it creates `HttpClientStack`, implemented with `HttpClient`. The chosen stack is wrapped into a `Network` via `BasicNetwork` and passed along. After that the `RequestQueue` is constructed, together with a disk cache, `new DiskBasedCache(cacheDir)`, which defaults to 5MB and uses a cache directory named `volley`, so the cached files can be found under `data/data/<package name>/volley`. Finally its `start` method is called and the `RequestQueue` is returned. That is the internal implementation behind the first snippet above. Now let's step into the `start` method of `RequestQueue` and see what it does.
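
The version check above can be reduced to a small sketch. This is a toy model, not Volley's API: the real `HttpStack`, `HurlStack`, and `HttpClientStack` are Android classes, stubbed out here as markers so the selection logic stands alone.

```java
// Minimal sketch of newRequestQueue's stack selection; the real classes are
// Android-specific, so they are reduced to empty markers here.
public class StackSelection {
    interface HttpStack {}
    static class HurlStack implements HttpStack {}        // HttpUrlConnection-based
    static class HttpClientStack implements HttpStack {}  // HttpClient-based

    static HttpStack chooseStack(HttpStack stack, int sdkInt) {
        if (stack != null) {
            return stack;  // the caller supplied a transport explicitly
        }
        // Prior to Gingerbread (API 9), HttpUrlConnection was unreliable.
        return sdkInt >= 9 ? new HurlStack() : new HttpClientStack();
    }
}
```

Passing `null` therefore just means "let Volley decide based on the platform version", which is why the common one-argument overload suffices for most apps.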

RequestQueue

RequestQueue chains through `this(...)` so that by default the constructor below ends up being called:

public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

`cache` is what `NetworkDispatcher` will later use to store `response.cacheEntry`; `network` is the transport wrapper chosen above based on the platform version; `threadPoolSize` is the thread pool size, 4 by default; and `delivery` is an `ExecutorDelivery` operating on the main thread, responsible for the final dispatch of request responses.

start

public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();
        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

In `start` we find that two kinds of dispatchers are set up: a `CacheDispatcher` for cache dispatching and `NetworkDispatcher`s for network dispatching. Both are in fact `Thread`s, which is why `start` is called on each of them. Four `NetworkDispatcher`s are built by default, effectively a pool of four network threads. Let's not look inside their `run` methods just yet, and instead continue with the `add` method of `RequestQueue` that we use all the time.
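
The arrangement is essentially N consumer threads draining shared blocking queues. A stripped-down sketch of the network side (names illustrative, requests reduced to `Runnable`s):

```java
import java.util.concurrent.BlockingQueue;

// Stripped-down model of RequestQueue.start(): a fixed pool of plain Threads,
// each blocking on take() until a request arrives, quitting when interrupted.
public class DispatcherPool {
    static Thread[] startNetworkDispatchers(BlockingQueue<Runnable> networkQueue, int poolSize) {
        Thread[] dispatchers = new Thread[poolSize];
        for (int i = 0; i < poolSize; i++) {
            dispatchers[i] = new Thread(() -> {
                while (true) {
                    try {
                        networkQueue.take().run();  // block until work is available
                    } catch (InterruptedException e) {
                        return;                     // time to quit
                    }
                }
            });
            dispatchers[i].start();
        }
        return dispatchers;
    }
}
```

This is also why Volley needs no `ExecutorService`: the dispatchers themselves are the pool, and the blocking queue is the work distribution mechanism.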

add

public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }
        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");
        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }
        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

Reading the code alongside the diagram: the request is first bound to this `RequestQueue` and, under synchronization, added to `mCurrentRequests`, the set of requests currently in flight. Next, `request.shouldCache()` decides whether the request is cacheable (the default is `true`; call `request.setShouldCache(false)` if you don't want caching). If it is not cacheable, the request goes straight into the `mNetworkQueue` we saw earlier, to be handled later by the dispatchers, and is returned. If it is cacheable, the method checks whether `mWaitingRequests` already holds an entry for the request's `cacheKey` (its URL) and handles the two cases accordingly, ultimately placing the request on the cache queue `mCacheQueue`.

finish

A careful reader will notice that when the `cacheKey` is already present (even with a `null` value), a `LinkedList` queue is created and the request is merely added to that queue, updating the corresponding value in `mWaitingRequests`, without ever being placed on `mCacheQueue`. That is not a problem, because `finish` will be called later. Let's look at its source:

<T> void finish(Request<T> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }
        synchronized (mFinishedListeners) {
          for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
          }
        }
        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }

As described above, the block guarded by `request.shouldCache()` moves all of the requests queued up under that `cacheKey` into `mCacheQueue`.
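
The interplay between `add` staging duplicates and `finish` releasing them can be modeled in isolation. A toy model, with a request reduced to its cache key (names here are illustrative, not Volley's):

```java
import java.util.*;

// Toy model of RequestQueue's in-flight de-duplication: the first request for a
// cache key is dispatched to the cache queue; duplicates are staged in
// mWaitingRequests until finish() releases them.
public class WaitingModel {
    final Map<String, Queue<String>> waiting = new HashMap<>();
    final Queue<String> cacheQueue = new ArrayDeque<>();

    void add(String cacheKey) {
        if (waiting.containsKey(cacheKey)) {
            // A request with this key is already in flight: stage the duplicate.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) staged = new LinkedList<>();
            staged.add(cacheKey);
            waiting.put(cacheKey, staged);
        } else {
            // Mark the key as in flight (null queue) and dispatch to the cache queue.
            waiting.put(cacheKey, null);
            cacheQueue.add(cacheKey);
        }
    }

    void finish(String cacheKey) {
        // Release every staged duplicate into the cache queue, as finish() does;
        // by now the cache has been primed by the first request.
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) cacheQueue.addAll(staged);
    }
}
```

The point of the design: identical requests fired in quick succession cost only one network round trip, and the stragglers are served from the freshly primed cache.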
That covers the core of `RequestQueue`. Next, let's dig into the `CacheDispatcher` source and see how it actually works.

CacheDispatcher

As mentioned earlier, both it and `NetworkDispatcher` are essentially `Thread`s, so naturally we look at the `run` method.

run

@Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");
                
                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }
                
                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
                
                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
                
                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");
                    
                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);
                    
                    // Mark the response as intermediate.
                    response.intermediate = true;
             
                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

That looks like a lot, so let's pick out the key parts with the help of the diagram. An infinite loop keeps watching for requests: each iteration takes a request from the cache queue `mCacheQueue`. If that request has been cancelled, `request.finish()` cleans up its data and the loop continues with the next one. Otherwise a `Cache.Entry` is looked up in the `mCache` mentioned earlier. If there is no entry, or it has fully expired, the request is put on the network queue `mNetworkQueue` for a real network request. If a usable entry exists, `request.parseNetworkResponse()` parses it into a `Response`; each request type supplies its own parsing (for example `StringRequest` and `JsonObjectRequest` each have their own implementation). Notice that whether or not the entry needs refreshing, the response is handed to `mDelivery.postResponse(request, response)` for delivery; the difference is that for a soft-expired entry the request gets the entry set back on it (`setCacheEntry`) and is also re-queued on `mNetworkQueue`, which effectively repeats the network request. That brings us back to the delivery stage: as mentioned, an `ExecutorDelivery` was created when the `RequestQueue` was built, and `mDelivery.postResponse` is one of its methods. Let's take a look.

ExecutorDelivery

Here an `Executor` is created that simply wraps a `Handler`; everything delivered later is posted through it, so delivery happens on the main thread:

public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

postResponse

@Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }
 
    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

These methods are straightforward: they call `execute` to run a `ResponseDeliveryRunnable`. Let's step into its `run` method to see how it executes.

ResponseDeliveryRunnable

public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }
 
            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                 mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }
 
            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }
 
            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }

The core is the success check: depending on the outcome, either `deliverResponse` or `deliverError` is invoked, and internally these call the familiar `onResponse` and `onErrorResponse` methods of our `Listener`, which is how the result reaches our response-handling code.
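
The branching in `run` can be condensed into a tiny model (a sketch, not Volley's API, with delivery reduced to a returned label):

```java
// Toy model of ResponseDeliveryRunnable.run(): cancelled requests finish without
// delivery; otherwise success or error is delivered, and only non-intermediate
// responses finish the request.
public class DeliveryModel {
    static String run(boolean cancelled, boolean success, boolean intermediate) {
        if (cancelled) return "canceled-at-delivery";
        String delivery = success ? "onResponse" : "onErrorResponse";
        return delivery + (intermediate ? " (request stays open)" : " + done");
    }
}
```

The `intermediate` flag is exactly the soft-expired-cache case from `CacheDispatcher`: the request must stay open because a second, refreshed response is still on its way.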

That is the whole cache dispatch. In short: if a cached response exists for a request, no network request is made and the cached data is parsed and delivered directly; otherwise the network request goes ahead.
Next let's look at how `NetworkDispatcher` handles things.

NetworkDispatcher

run

@Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
 
            try {
                request.addMarker("network-queue-take");
 
                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }
 
                addTrafficStatsTag(request);
 
                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");
 
                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }
 
                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");
 
                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                     mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }
 
                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                 volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                 volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

Here too there is an infinite `while` loop that first takes a request, this time from `mQueue`, i.e. the `mNetworkQueue` that has appeared repeatedly. Much of the logic mirrors `CacheDispatcher`, just as the diagram shows: if the request has been cancelled, it is finished immediately; otherwise the network request is performed via `mNetwork.performRequest(request)`, where `mNetwork` is the wrapper around the version-dependent stack chosen back in `newRequestQueue`. It delegates to the `performRequest` method of `HurlStack` or `HttpClientStack`, which builds the request headers and parameters and executes the request with `HttpUrlConnection` or `HttpClient` respectively. Further down it should look familiar: the response is parsed just as in `CacheDispatcher`, written to the cache if the request allows caching, and finally delivered via `mDelivery.postResponse(request, response)`. The remaining steps are exactly the same as in `CacheDispatcher`, so we won't belabor them here.
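
The post-request bookkeeping (the 304 guard and the conditional cache write) can also be sketched as a small decision helper (names illustrative, outcomes reduced to labels):

```java
// Sketch of NetworkDispatcher's post-request bookkeeping: a 304 Not Modified
// response that was already delivered is dropped; otherwise the response is
// cached if the request allows it, then delivered.
public class PostNetwork {
    static String handle(boolean notModified, boolean alreadyDelivered,
                         boolean shouldCache, boolean hasCacheEntry) {
        if (notModified && alreadyDelivered) {
            return "finish";               // don't deliver a second identical response
        }
        return (shouldCache && hasCacheEntry)
                ? "cache-then-deliver"     // write response.cacheEntry, then post
                : "deliver";               // post the response directly
    }
}
```

The 304 check is what prevents the soft-expiry refresh from delivering the same payload twice: the cached copy was already handed to the listener, so an unchanged server response has nothing new to add.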

And that's it for this pass through Volley's source. If you look back at the diagram now, doesn't it all feel much clearer?

Summary

Let's summarize how a network request with Volley works:

  • Initialize and build the RequestQueue via newRequestQueue.
  • Call RequestQueue's add method to put the request on the request queue.
  • Cache dispatch: CacheDispatcher runs first and checks whether a cached response exists; if so it parses the response and delivers it directly via postResponse, otherwise the request moves on to the network.
  • Network dispatch: NetworkDispatcher performs the actual request, parses the response, stores the result in the cache if caching is enabled, and finally delivers it via postResponse.

If you found this helpful, you're welcome to follow along for the next analysis.
More posts: my personal blog

