A Complete Walkthrough of the Volley Source Code

We already know how to use Volley, but knowing how is not the same as knowing why, so let's look at how the source code actually implements it. Below is Volley's workflow diagram.

[Volley workflow diagram]
In the diagram, blue represents the main thread, green the cache thread, and orange the network threads. Starting from the top left: a request is first added to the cache queue in order, and the cache thread takes it off that queue. If a corresponding cached result exists, the cached result is retrieved and handed to the main thread for processing. If there is no cache, the request goes to a network thread, which issues the HTTP request, writes the result to the cache, and finally hands the result to the main thread as well. That is the whole cycle. If it isn't entirely clear yet, come back to this diagram after reading the article and it will make much more sense!


An introduction to some of Volley's classes

Request: the abstract class representing a request. StringRequest, JsonRequest, and ImageRequest are all subclasses of it, each representing a particular kind of request.

RequestQueue: the request queue. It contains a CacheDispatcher (the dispatcher thread that handles requests served from the cache), an array of NetworkDispatchers (the dispatcher threads that handle requests going to the network), and a ResponseDelivery (the interface for delivering results). Calling start() starts the CacheDispatcher and the NetworkDispatchers.

CacheDispatcher: a thread that dispatches requests served from the cache. Once started, it keeps taking requests from the cache queue and processing them, waiting when the queue is empty; when a request finishes, the result is handed to ResponseDelivery for further processing. If the result was never cached, the cache entry has expired, or the cache needs refreshing, the request is put back onto the network queue to be handled by a NetworkDispatcher.

NetworkDispatcher: a thread that dispatches requests going to the network. Once started, it keeps taking requests from the network queue and processing them, waiting when the queue is empty; when a request finishes, the result is handed to ResponseDelivery for further processing, and the dispatcher decides whether the result should be cached.

ResponseDelivery: the interface for delivering results. Currently the only implementation is ExecutorDelivery, which delivers results on the thread of the Handler passed to its constructor.

HttpStack: performs the HTTP request and returns the result. Volley currently ships HurlStack, based on HttpURLConnection, and HttpClientStack, based on Apache HttpClient.

Network: calls HttpStack to perform the request and converts the result into a NetworkResponse that ResponseDelivery can handle.

Cache: caches request results. By default Volley uses DiskBasedCache, which stores entries on the sdcard. NetworkDispatcher decides whether a result it obtains should be stored in the Cache, and CacheDispatcher reads cached results from the Cache. A small usage sketch showing how these pieces look from the caller's side follows this list.
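
Before diving into the source, here is a minimal usage sketch of the public Volley API described above (the URL and the class name are placeholders of mine):

import android.content.Context;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;

public class VolleyUsageSketch {
    public static void fetch(Context context) {
        // newRequestQueue() wires up Cache, Network, CacheDispatcher,
        // NetworkDispatchers and ResponseDelivery for us and starts them.
        RequestQueue queue = Volley.newRequestQueue(context);

        StringRequest request = new StringRequest(Request.Method.GET,
                "http://example.com/api",          // placeholder URL
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) {
                        // Delivered on the main thread via ExecutorDelivery.
                    }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        // Errors are delivered on the main thread too.
                    }
                });

        // add() hands the request to the CacheDispatcher/NetworkDispatcher pipeline.
        queue.add(request);
    }
}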


Using Volley starts with Volley.newRequestQueue(this), so let's look at that function first.

public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }
This simply forwards to another overload:

 public static RequestQueue newRequestQueue(Context context, HttpStack stack)
    {
    	return newRequestQueue(context, stack, -1);
    }
which in turn forwards to:

public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);
        
        RequestQueue queue;
        if (maxDiskCacheBytes <= -1)
        {
        	// No maximum size specified
        	queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        }
        else
        {
        	// Disk cache size specified
        	queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start();

        return queue;
    }
The second parameter here is the HttpStack used to perform HTTP requests; passing null means the default is used. The third parameter is the maximum disk cache size, and -1 means the default size is used.

If the HttpStack passed in is null, the code checks the device's API level: on API 9 (Gingerbread) and above it uses HurlStack, otherwise HttpClientStack. Internally, HurlStack communicates via HttpURLConnection while HttpClientStack uses HttpClient, because below API 9 HttpClient was the more stable, less buggy choice, whereas from API 9 onward HttpURLConnection is the better option.
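
If you want to override these defaults, you can pass an explicit HttpStack and cache size yourself. A small sketch against the three-argument overload shown above (this overload exists in the Volley variant quoted here; the 10 MB figure and class name are just illustrative):

import android.content.Context;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.HurlStack;
import com.android.volley.toolbox.Volley;

public class CustomQueueFactory {
    public static RequestQueue create(Context context) {
        // Explicitly choose HurlStack (HttpURLConnection) and cap the disk cache at 10 MB.
        return Volley.newRequestQueue(context, new HurlStack(), 10 * 1024 * 1024);
    }
}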

The HttpStack is then passed into a Network. Network is an interface, and BasicNetwork is its implementation; it implements the performRequest method, which performs the network request through the HttpStack it was given.
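
For reference, the Network interface itself is tiny; it declares only the method BasicNetwork implements, roughly as follows:

public interface Network {
    /**
     * Performs the specified request.
     * @param request the request to process
     * @return a NetworkResponse with data and caching metadata
     * @throws VolleyError on errors
     */
    NetworkResponse performRequest(Request<?> request) throws VolleyError;
}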

Next a RequestQueue is created, passing in the Network and the cache-related class (DiskBasedCache). As mentioned earlier, the RequestQueue is the request queue.

Finally, the RequestQueue's start method is called. Let's look at start next.


 public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

This code mainly creates a CacheDispatcher and calls its start method, then creates NetworkDispatchers in a for loop. The number created equals the thread pool size passed in, which defaults to four, so four NetworkDispatchers are created here and start is called on each of them. CacheDispatcher and NetworkDispatcher were introduced earlier: the former dispatches cache requests, the latter dispatches and executes network requests.

So, overall, newRequestQueue does just a few things: initialize the cache object, create the RequestQueue, create and start the cache thread CacheDispatcher, initialize the HttpStack used for network requests, and create and start four network threads (NetworkDispatchers).
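
If four network threads are not what you want, you can also build the RequestQueue yourself instead of going through Volley.newRequestQueue, using the public RequestQueue(Cache, Network, int) constructor. A sketch (the pool size of 8, the cache directory name, and the class name are just example values of mine):

import java.io.File;

import android.content.Context;

import com.android.volley.Network;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;

public class LargerPoolQueueFactory {
    public static RequestQueue create(Context context) {
        File cacheDir = new File(context.getCacheDir(), "volley");
        Network network = new BasicNetwork(new HurlStack());
        // The third argument is the NetworkDispatcher pool size (the default is 4).
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8);
        queue.start();
        return queue;
    }
}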


Once we have a RequestQueue, we call its add method to add a Request to it:

public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }
First, request.setRequestQueue(this) associates the Request with this RequestQueue, and the Request is then added, under synchronization, to mCurrentRequests, the set of requests currently being processed.

Next, the code checks whether the current request can be cached. If it cannot, mNetworkQueue.add(request) puts it straight onto the network queue and returns; otherwise the synchronized block below runs. Requests are cacheable by default, and you can call the Request's setShouldCache(false) method to change that behavior (see the sketch after this paragraph). The synchronized block then does the cache bookkeeping: if a request with the same cache key is already in flight, this one is staged in mWaitingRequests; otherwise it is added to the cache queue.
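
A minimal sketch of opting a request out of the cache (the URL and class name are placeholders of mine):

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.toolbox.StringRequest;

public class NoCacheExample {
    // Opt a request out of the cache so add() sends it straight to the network queue.
    public static void addUncached(RequestQueue queue, Response.Listener<String> listener,
                                   Response.ErrorListener errorListener) {
        StringRequest request = new StringRequest(Request.Method.GET,
                "http://example.com/api", listener, errorListener);  // placeholder URL
        request.setShouldCache(false);  // shouldCache() now returns false
        queue.add(request);             // skips mCacheQueue, goes to mNetworkQueue
    }
}

Once a request has landed in the cache queue, the cache thread that has been quietly waiting in the background can start working on it, so let's continue with CacheDispatcher.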


public class CacheDispatcher extends Thread {

    private static final boolean DEBUG = VolleyLog.DEBUG;

    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request<?>> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request<?>> mNetworkQueue;

    /** The cache to read from. */
    private final Cache mCache;

    /** For posting responses. */
    private final ResponseDelivery mDelivery;

    /** Used for telling us to die. */
    private volatile boolean mQuit = false;

    /**
     * Creates a new cache triage dispatcher thread.  You must call {@link #start()}
     * in order to begin processing.
     *
     * @param cacheQueue Queue of incoming requests for triage
     * @param networkQueue Queue to post requests that require network to
     * @param cache Cache interface to use for resolution
     * @param delivery Delivery interface to use for posting responses
     */
    public CacheDispatcher(
            BlockingQueue<Request<?>> cacheQueue, BlockingQueue<Request<?>> networkQueue,
            Cache cache, ResponseDelivery delivery) {
        mCacheQueue = cacheQueue;
        mNetworkQueue = networkQueue;
        mCache = cache;
        mDelivery = delivery;
    }

    /**
     * Forces this dispatcher to quit immediately.  If any requests are still in
     * the queue, they are not guaranteed to be processed.
     */
    public void quit() {
        mQuit = true;
        interrupt();
    }

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        Request<?> request;
        while (true) {
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mCacheQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    final Request<?> finalRequest = request;
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(finalRequest);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
            }
        }
    }
}

In run(), the thread priority is set first, then the cache is initialized, and then the code enters an infinite while loop that keeps this thread running.

mCacheQueue.take() takes a Request from the blocking queue, blocking if there is none; mQuit is the flag used to exit the thread.

Next, the code checks whether the Request has been canceled; if it has, the request is finished and the loop continues with the next iteration.
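
A request usually ends up in the canceled state because the caller canceled it, typically by tag when the hosting Activity goes away. A small sketch of that pattern (the tag value and class name are just examples of mine):

import com.android.volley.Request;
import com.android.volley.RequestQueue;

public class CancelByTagExample {
    private static final String TAG = "MyActivity"; // any Object can serve as a tag

    public static void addTagged(RequestQueue queue, Request<?> request) {
        request.setTag(TAG);  // mark the request so it can be canceled as a group later
        queue.add(request);
    }

    public static void cancelAll(RequestQueue queue) {
        // Typically called from onStop()/onDestroy(); the dispatchers then discard the
        // canceled requests ("cache-discard-canceled" / "network-discard-cancelled").
        queue.cancelAll(TAG);
    }
}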

mCache.get(request.getCacheKey()) then tries to fetch the corresponding result from the cache. If the entry is null or has expired, the Request is put onto the network queue; otherwise no re-fetch is needed and the cached result can be used directly.
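
The checks entry.isExpired() and entry.refreshNeeded() (used a little further down) are driven by two timestamps stored on Cache.Entry: ttl (hard expiry) and softTtl (soft expiry). A paraphrased excerpt of the relevant part of Cache.Entry:

// Paraphrased excerpt of Cache.Entry; only the expiry-related fields and methods are shown.
public static class Entry {
    /** The data returned from the cache. */
    public byte[] data;

    /** Hard expiry time (epoch millis); after this the entry must not be served as-is. */
    public long ttl;

    /** Soft expiry time (epoch millis); after this the entry may be served but should be refreshed. */
    public long softTtl;

    /** True if this entry is fully expired ("cache-hit-expired"). */
    public boolean isExpired() {
        return this.ttl < System.currentTimeMillis();
    }

    /** True if a refresh is needed ("cache-hit-refresh-needed"). */
    public boolean refreshNeeded() {
        return this.softTtl < System.currentTimeMillis();
    }
}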

On a cache hit, request.parseNetworkResponse is called to parse the response data, and the parsed result is then delivered back.

Request's parseNetworkResponse is an abstract method; does it look familiar? The previous post mentioned that a custom Request needs to implement it, so this method must always be implemented by subclasses.
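
As a reminder, a custom Request subclass implements parseNetworkResponse (and deliverResponse, which we will meet again below). A minimal sketch modeled on StringRequest (the class name is mine):

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

// parseNetworkResponse runs on the dispatcher thread; deliverResponse runs on the
// ResponseDelivery (main) thread.
public class SimpleStringRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public SimpleStringRequest(String url, Response.Listener<String> listener,
                               Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        // The cache entry returned here is what NetworkDispatcher may write into the Cache.
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }
}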

Once the parsed response is obtained, mDelivery.postResponse(request, response) is called to deliver it. The implementation of ResponseDelivery is ExecutorDelivery, which is instantiated when the RequestQueue is constructed, in RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery). Let's see how postResponse is implemented inside ExecutorDelivery:

 @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }
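
The mResponsePoster used here is an Executor that ExecutorDelivery builds around the Handler passed to its constructor; in the default RequestQueue this is a Handler attached to the main Looper, which is why responses come back on the main thread. A paraphrased sketch of that constructor:

// Paraphrased: every Runnable given to mResponsePoster is simply posted to the Handler,
// so ResponseDeliveryRunnable ends up running on the Handler's (main) thread.
public ExecutorDelivery(final Handler handler) {
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}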
In other words, a ResponseDeliveryRunnable gets executed on that Executor. Let's step into its run method:

public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }
The key line here is the call to the Request's deliverResponse — another familiar method, and exactly the one we override when defining a custom Request. In that override the result is passed on to the Listener, which is how it finally gets back to our own code for handling. That completes this entire side of the flow. Now let's look at the case where there is no cached result and the Request has to be handled on a network thread, i.e. NetworkDispatcher. This is also a thread, and again the part to look at is its run method:

public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }
As you can see, much of this mirrors CacheDispatcher. The main difference is that the response is obtained over the network, and the result is written to the cache if the request wants caching. The final step is the same: mDelivery.postResponse(request, response) delivers the result, and everything after that is identical, so there is no need to repeat it.

The call to mNetwork.performRequest(request) is what actually fetches the response. mNetwork was created when the RequestQueue was built; its implementation BasicNetwork provides performRequest, which obtains the response through the HttpStack it was given. We won't go through its implementation in detail here (a rough sketch of the idea follows), and with that the whole flow is complete!
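
For the curious, here is a deliberately simplified Network implementation illustrating what BasicNetwork does at its core: delegate the HTTP call to an HttpStack and wrap the result in a NetworkResponse. This is a sketch of mine, not the real BasicNetwork, which also handles 304s, cache headers, retries, and detailed error mapping:

import java.io.IOException;
import java.util.Collections;

import org.apache.http.HttpResponse;
import org.apache.http.util.EntityUtils;

import com.android.volley.Network;
import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.HttpStack;

public class MinimalNetwork implements Network {
    private final HttpStack mHttpStack;

    public MinimalNetwork(HttpStack httpStack) {
        mHttpStack = httpStack;
    }

    @Override
    public NetworkResponse performRequest(Request<?> request) throws VolleyError {
        try {
            // The HttpStack (HurlStack or HttpClientStack) performs the real HTTP call.
            HttpResponse httpResponse =
                    mHttpStack.performRequest(request, Collections.<String, String>emptyMap());
            int statusCode = httpResponse.getStatusLine().getStatusCode();
            byte[] body = httpResponse.getEntity() != null
                    ? EntityUtils.toByteArray(httpResponse.getEntity())
                    : new byte[0];
            // Wrap everything in the NetworkResponse the dispatchers and Request understand.
            return new NetworkResponse(statusCode, body,
                    Collections.<String, String>emptyMap(), false);
        } catch (IOException e) {
            throw new VolleyError(e);
        }
    }
}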



Summary

Looking back at the workflow diagram from the beginning, it should be much clearer now. Let's recap:

1. When the RequestQueue is created, it creates one cache thread (CacheDispatcher) and, by default, four network threads (NetworkDispatchers), along with the HttpStack and BasicNetwork used to perform network requests.

2. CacheDispatcher is the cache thread. It keeps taking Requests from the cache queue, blocking when there are none. After taking one, if fresh data must be fetched from the network it hands the Request straight to a NetworkDispatcher; otherwise it reads the data from the cache and finally delivers the result back.

3. NetworkDispatcher is a network thread, the place where network requests are actually performed; several can be specified. Once data is obtained it is cached if needed, and the result is then delivered back.

4. ResponseDelivery delivers results; its implementation is ExecutorDelivery, which ultimately triggers the Listener we know.

5. When a Request is created and added to the RequestQueue via add, the CacheDispatcher described above can pick it up and the whole process starts running.

 
