Android Source Code Analysis: A Deep Dive into the Message Mechanism

Overview

The message mechanism is a very important part of Android, and here I want to work through it systematically, as a summary.
Several pieces are involved. By layer, it splits into a Java layer and a native layer; by class, it splits into Handler, Looper, Message, and MessageQueue.

  • Handler: a helper class that provides the interface for sending all kinds of message events into the message queue, plus the mechanism for responding to them.

  • Looper: the message pump; it loops continuously over the message queue, making sure each message is eventually dispatched to its handler.

  • Message: the message body, carrying the message payload.

  • MessageQueue: the message queue, providing queuing and buffering of pending messages.

Structural Relationship

Their relationship: Looper drives the mechanism that MessageQueue provides, Handler supplies the calling interface and callback handling, and Message is the carrier that moves the data.

Let's analyze them one by one.

1. Looper

  • prepare --- preparation and initialization

Preparation has two entry points, prepare and prepareMainLooper. The former is for ordinary threads; the latter is called for the main UI thread. Let's confirm the difference in the code:

public static void prepare() {
        prepare(true);
    }
    
    public static void prepareMainLooper() {
        prepare(false);
        synchronized (Looper.class) {
            if (sMainLooper != null) {
                throw new IllegalStateException("The main Looper has already been prepared.");
            }
            sMainLooper = myLooper();
        }
    }

Both call the internal parameterized static prepare, but with different arguments: true means this looper may quit, false means quitting is not allowed. prepareMainLooper also stores the looper in the private static field sMainLooper. Now the parameterized prepare:

private static void prepare(boolean quitAllowed) {
        if (sThreadLocal.get() != null) {
            throw new RuntimeException("Only one Looper may be created per thread");
        }
        sThreadLocal.set(new Looper(quitAllowed));
    }

First comes a TLS (thread-local storage) get-and-set: the check guarantees that a thread can own only one Looper, and the newly created Looper is stored in the thread-local slot. Next, the constructor:

private Looper(boolean quitAllowed) {
        mQueue = new MessageQueue(quitAllowed);
        mThread = Thread.currentThread();
    }

It creates and keeps a MessageQueue, and records the current thread.
That completes preparation. To sum up: two entry points for different scenarios, and thread-local storage guarantees exactly one Looper per thread.
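
To make the two entry points concrete, here is a minimal sketch of a worker thread owning its own Looper (essentially the canonical pattern from the Looper class documentation; LooperThread is our own name):

class LooperThread extends Thread {
    public Handler mHandler;

    @Override
    public void run() {
        Looper.prepare();              // bind a Looper to this thread (quitAllowed = true)
        mHandler = new Handler() {     // implicitly binds to this thread's new Looper
            @Override
            public void handleMessage(Message msg) {
                // handle messages here, on the worker thread
            }
        };
        Looper.loop();                 // enter the pump; blocks until quit() is called
    }
}

Other threads can then hand work to this one through mHandler, and mHandler.getLooper().quit() ends the loop.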

  • loop --- the message loop

This is the core: a loop that fetches and dispatches messages. Straight to the code:

public static void loop() {
        // Fetch this thread's unique Looper from TLS
        final Looper me = myLooper();
        if (me == null) {
            throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
        }
        // Fetch the Looper's message queue
        final MessageQueue queue = me.mQueue;

        // Make sure the identity of this thread is that of the local process,
        // and keep track of what that identity token actually is.
        Binder.clearCallingIdentity();
        final long ident = Binder.clearCallingIdentity();

        // Enter the message pump loop
        for (;;) {
            // Fetch a pending message; this may block (explained later with MessageQueue)
            Message msg = queue.next(); // might block
            if (msg == null) {
                // No message indicates that the message queue is quitting.
                return;
            }

            // This must be in a local variable, in case a UI event sets the logger
            final Printer logging = me.mLogging;
            if (logging != null) {
                logging.println(">>>>> Dispatching to " + msg.target + " " +
                        msg.callback + ": " + msg.what);
            }

            // Trace the message
            final long traceTag = me.mTraceTag;
            if (traceTag != 0 && Trace.isTagEnabled(traceTag)) {
                Trace.traceBegin(traceTag, msg.target.getTraceName(msg));
            }
            try {
                // Dispatch the message
                msg.target.dispatchMessage(msg);
            } finally {
                // End tracing
                if (traceTag != 0) {
                    Trace.traceEnd(traceTag);
                }
            }

            if (logging != null) {
                logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
            }

            // Make sure that during the course of dispatching the
            // identity of the thread wasn't corrupted.
            final long newIdent = Binder.clearCallingIdentity();
            if (ident != newIdent) {
                Log.wtf(TAG, "Thread identity changed from 0x"
                        + Long.toHexString(ident) + " to 0x"
                        + Long.toHexString(newIdent) + " while dispatching to "
                        + msg.target.getClass().getName() + " "
                        + msg.callback + " what=" + msg.what);
            }

            // Recycle the handled message back into the pool for reuse
            msg.recycleUnchecked();
        }
    }

The process:
1. Get the thread's TLS object, that unique looper, then its MessageQueue.
2. Enter the pump loop and fetch the next pending message, blocking if necessary.
3. Dispatch the message.
4. Recycle the handled message into the pool for reuse.
5. Go back to step 2 for the next one.
Notice that almost everything is delegated to MessageQueue, Message, or Handler; loop itself is just control flow, and a simple one. The message mechanism on virtually any platform follows roughly this shape. The part worth highlighting is the dispatch step, msg.target.dispatchMessage(msg): target is the Handler stored inside the Message, so this invokes that handler's dispatchMessage. We'll expand on it when we get to Handler.

  • quit --- quitting the message pump

Quitting is fairly straightforward:

public void quit() {
        mQueue.quit(false);
    }
    
    public void quitSafely() {
        mQueue.quit(true);
    }

There are two calls: a plain quit and a safe quit. quit() discards all pending messages immediately, while quitSafely() still delivers messages that are already due and removes only those scheduled for the future. The details live in MessageQueue.
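
A quick sketch of the difference, using HandlerThread (whose quit/quitSafely delegate to these same Looper methods); the "demo" log tag is just for illustration:

HandlerThread worker = new HandlerThread("worker");
worker.start();
Handler h = new Handler(worker.getLooper());

h.post(() -> Log.d("demo", "due now"));             // already due
h.postDelayed(() -> Log.d("demo", "in 5s"), 5000);  // scheduled in the future

// quit() would discard both messages immediately;
// quitSafely() still delivers the first, and removes only the future one.
worker.quitSafely();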

2. Handler

The most common Handler usage: new one up, override handleMessage to receive messages and write the handling there, then call sendMessage at the appropriate moment.

  • Handler --- construction

There are quite a few constructors, but they all funnel into two:

public Handler(Callback callback, boolean async) {
        if (FIND_POTENTIAL_LEAKS) {
            final Class<? extends Handler> klass = getClass();
            if ((klass.isAnonymousClass() || klass.isMemberClass() || klass.isLocalClass()) &&
                    (klass.getModifiers() & Modifier.STATIC) == 0) {
                Log.w(TAG, "The following Handler class should be static or leaks might occur: " +
                    klass.getCanonicalName());
            }
        }

        mLooper = Looper.myLooper();
        if (mLooper == null) {
            throw new RuntimeException(
                "Can't create handler inside thread that has not called Looper.prepare()");
        }
        mQueue = mLooper.mQueue;
        mCallback = callback;
        mAsynchronous = async;
    }

    public Handler(Looper looper, Callback callback, boolean async) {
        mLooper = looper;
        mQueue = looper.mQueue;
        mCallback = callback;
        mAsynchronous = async;
    }

Look closely: both just set up state from three parameters: looper, callback, async. looper is the Looper to bind to, i.e. which thread's looper this handler will run on; normally we don't specify it, so it defaults to the current thread, but binding to any other thread's looper is allowed. callback is a callback for responding to messages, discussed below. async marks whether messages sent through this handler are asynchronous, which affects how the callback or message body gets processed. In the constructor without a looper parameter, Looper.myLooper() binds to the current thread's looper.
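
A sketch of the two binding modes; the no-argument constructor binds to the caller's looper, while passing a Looper lets the handler run on any thread:

// On the main thread: binds to the main looper implicitly.
Handler uiHandler = new Handler();

// Explicit binding: handleMessage runs on "worker" regardless of who sends.
HandlerThread worker = new HandlerThread("worker");
worker.start();
Handler workerHandler = new Handler(worker.getLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // executes on the worker thread
    }
};
workerHandler.sendEmptyMessage(0);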

  • dispatchMessage --- message dispatch

Recall that in Looper's loop, dispatch of a concrete message goes through msg.target.dispatchMessage(msg), and that target is a Handler. So let's look at dispatchMessage directly; the relevant code:

public interface Callback {
        public boolean handleMessage(Message msg);
    }
    
    public void dispatchMessage(Message msg) {
        if (msg.callback != null) {
            handleCallback(msg);
        } else {
            if (mCallback != null) {
                if (mCallback.handleMessage(msg)) {
                    return;
                }
            }
            handleMessage(msg);
        }
    }
    
    private static void handleCallback(Message message) {
        message.callback.run();
    }

1. First check whether the message's own callback exists; if it does, invoke it. Note from the Message source that callback is a Runnable.
2. Otherwise, if the Callback saved at construction time exists, call its handleMessage.
3. If neither condition holds, fall back to the handler's own handleMessage, which subclasses can override.
What does this tell us?
Three callbacks: the message's Runnable, the constructor's Callback, and the handler's own handleMessage. They are mutually exclusive: as soon as one of them handles the message, dispatch returns without trying the rest. A sketch follows.
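
A sketch demonstrating the priority; the mutual exclusion is exactly the if/else chain above:

Handler h = new Handler(Looper.getMainLooper(), new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        // Priority 2: the constructor Callback. Returning true consumes the
        // message; returning false falls through to Handler.handleMessage().
        return false;
    }
}) {
    @Override
    public void handleMessage(Message msg) {
        // Priority 3: reached only when msg.callback is null
        // and the Callback above returned false.
    }
};

h.post(() -> { /* Priority 1: post() sets msg.callback, so this Runnable wins outright */ });
h.sendEmptyMessage(1); // no msg.callback: the Callback runs first, then handleMessage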

  • sendMessage --- sending messages
    Sending a message can go through two families of calls: sendMessageXXX and post. The former all end up in sendMessageAtTime:

public final boolean sendMessageDelayed(Message msg, long delayMillis)
    {
        if (delayMillis < 0) {
            delayMillis = 0;
        }
        return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
    }
    
    public boolean sendMessageAtTime(Message msg, long uptimeMillis) {
        MessageQueue queue = mQueue;
        if (queue == null) {
            RuntimeException e = new RuntimeException(
                    this + " sendMessageAtTime() called with no mQueue");
            Log.w("Looper", e.getMessage(), e);
            return false;
        }
        return enqueueMessage(queue, msg, uptimeMillis);
    }
    
    private boolean enqueueMessage(MessageQueue queue, Message msg, long uptimeMillis) {
        msg.target = this;
        if (mAsynchronous) {
            msg.setAsynchronous(true);
        }
        return queue.enqueueMessage(msg, uptimeMillis);
    }

Ultimately everything goes through MessageQueue's enqueueMessage; the details come later in the MessageQueue section. Note mAsynchronous, the asynchronous flag, which gets stamped onto the message here. Also note that a delayed send is really just the current uptime clock plus the delay.
Now let's look at post:

public final boolean post(Runnable r)
    {
       return  sendMessageDelayed(getPostMessage(r), 0);
    }
    
    private static Message getPostMessage(Runnable r) {
        Message m = Message.obtain();
        m.callback = r;
        return m;
    }

So in the end post also goes through sendMessage; the difference is that a post call stores the supplied Runnable in the message's callback field.
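
In other words, these two are equivalent (assuming handler is any Handler and task any Runnable; Message.obtain(Handler, Runnable) is an existing overload that pre-fills target and callback):

handler.post(task);

// ...is the same as building the Message by hand:
Message m = Message.obtain(handler, task); // m.target = handler, m.callback = task
handler.sendMessageDelayed(m, 0);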

3. Message

Since Message is the carrier, let's first look at its data:

// The message's identifying key
    public int what;
    
    // Two int parameters carried by the message
    public int arg1; 
    public int arg2;
    
    // Message payload
    public Object obj;
    
    // A reply Messenger, related to the messenger service; not covered here
    public Messenger replyTo;
    
    // When the message should fire
    /*package*/ long when;
    
    // The handler that responds to this message
    /*package*/ Handler target;
    
    // Message callback
    /*package*/ Runnable callback;
    
    // The next message in the list
    /*package*/ Message next;
    
    // The message pool: really just the head of a linked list
    private static Message sPool;
    
    // Current pool size
    private static int sPoolSize = 0;

  • obtain --- obtaining a message

public static Message obtain() {
        synchronized (sPoolSync) {
            if (sPool != null) {
                Message m = sPool;
                sPool = m.next;
                m.next = null;
                m.flags = 0; // clear in-use flag
                sPoolSize--;
                return m;
            }
        }
        return new Message();
    }

sPool is a private static field holding the head of a singly linked list of Messages. obtain takes the head element and advances the head to the next node. So the list rooted at sPool is the set of recycled, reusable messages, and since it is private static, it is effectively a process-wide pool.
Looking at recycling next makes it clear how discarded messages end up stored there.

  • recycle --- recycling a message

public void recycle() {
        if (isInUse()) {
            if (gCheckRecycle) {
                throw new IllegalStateException("This message cannot be recycled because it "
                        + "is still in use.");
            }
            return;
        }
        recycleUnchecked();
    }
    
    void recycleUnchecked() {
        // Mark the message as in use while it remains in the recycled object pool.
        // Clear out all other details.
        flags = FLAG_IN_USE;
        what = 0;
        arg1 = 0;
        arg2 = 0;
        obj = null;
        replyTo = null;
        sendingUid = -1;
        when = 0;
        target = null;
        callback = null;
        data = null;

        synchronized (sPoolSync) {
            if (sPoolSize < MAX_POOL_SIZE) {
                next = sPool;
                sPool = this;
                sPoolSize++;
            }
        }
    }

recycleUnchecked clears out the message's own fields, and then, if the pool hasn't reached its size cap, links this message in as the new head (sPool) and increments the pool size.

From the above we can see that Message's reuse mechanism is self-contained: it has no direct tie to the message queue, so the coupling is low.
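
This is also why the recommended way to create messages is obtain rather than new Message(). A sketch, where WHAT_UPDATE and payload are placeholder names:

// Pull a Message from the global pool (allocates only if the pool is empty).
Message msg = Message.obtain(handler, WHAT_UPDATE, 0, 0, payload);
msg.sendToTarget(); // enqueues on handler's queue; after dispatch, Looper.loop() recycles it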

4. MessageQueue

MessageQueue reaches into the C++ (native) layer; in fact most of its core work is done there, with the Java side acting as glue.

  • Construction

MessageQueue is constructed inside Looper's constructor, which means each thread has exactly one Looper and one MessageQueue.

MessageQueue(boolean quitAllowed) {
        mQuitAllowed = quitAllowed;
        mPtr = nativeInit();
    }

The constructor goes straight to nativeInit and stores the return value in mPtr. Let's follow it into the C++ side, in frameworks/base/core/jni/android_os_MessageQueue.cpp:

static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }

    nativeMessageQueue->incStrong(env);
    return reinterpret_cast<jlong>(nativeMessageQueue);
}

This plainly creates a new NativeMessageQueue and returns its address; NativeMessageQueue is the native-side queue object.

  • Fetching from the queue --- next

Back in the Java layer, let's look at next, the crucial function called at the top of every iteration of Looper's loop:

Message next() {
        // The pointer saved at init time: the address of the native NativeMessageQueue
        final long ptr = mPtr;
        if (ptr == 0) {
            return null;
        }

        int pendingIdleHandlerCount = -1; // -1 only during first iteration
        int nextPollTimeoutMillis = 0;
        // Loop until we get a message
        for (;;) {
            if (nextPollTimeoutMillis != 0) {
                Binder.flushPendingCommands();
            }

            // Block, with a timeout
            nativePollOnce(ptr, nextPollTimeoutMillis);

            synchronized (this) {
                // Try to retrieve the next message.  Return if found.
                final long now = SystemClock.uptimeMillis();
                Message prevMsg = null;
                Message msg = mMessages;
                // A message with a null target marks a sync barrier: find the next asynchronous message
                if (msg != null && msg.target == null) {
                    // Stalled by a barrier.  Find the next asynchronous message in the queue.
                    do {
                        prevMsg = msg;
                        msg = msg.next;
                    } while (msg != null && !msg.isAsynchronous());
                }
                if (msg != null) {
                    if (now < msg.when) {
                        // Next message is not ready.  Set a timeout to wake up when it is ready.
                        // The message isn't due yet: make the next blocking wait time out at the difference
                        nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                    } else {
                        // Got a due message: unlink it from the list and return it
                        mBlocked = false;
                        if (prevMsg != null) {
                            prevMsg.next = msg.next;
                        } else {
                            mMessages = msg.next;
                        }
                        msg.next = null;
                        if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                        msg.markInUse();
                        return msg;
                    }
                } else {
                    // No more messages.
                    nextPollTimeoutMillis = -1;
                }

                // Check for quit
                if (mQuitting) {
                    dispose();
                    return null;
                }

                // Idle-time IdleHandler processing
                if (pendingIdleHandlerCount < 0
                        && (mMessages == null || now < mMessages.when)) {
                    pendingIdleHandlerCount = mIdleHandlers.size();
                }
                if (pendingIdleHandlerCount <= 0) {
                    // No idle handlers to run.  Loop and wait some more.
                    mBlocked = true;
                    continue;
                }

                if (mPendingIdleHandlers == null) {
                    mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
                }
                mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
            }

            // Run the idle handlers.
            // We only ever reach this code block during the first iteration.
            for (int i = 0; i < pendingIdleHandlerCount; i++) {
                final IdleHandler idler = mPendingIdleHandlers[i];
                mPendingIdleHandlers[i] = null; // release the reference to the handler

                boolean keep = false;
                try {
                    keep = idler.queueIdle();
                } catch (Throwable t) {
                    Log.wtf(TAG, "IdleHandler threw exception", t);
                }

                if (!keep) {
                    synchronized (this) {
                        mIdleHandlers.remove(idler);
                    }
                }
            }

            // Reset the idle handler count to 0 so we do not run them again.
            pendingIdleHandlerCount = 0;

            // While calling an idle handler, a new message could have been delivered
            // so go back and look again for a pending message without waiting.
            nextPollTimeoutMillis = 0;
        }
    }

1. Block for a message via the NativeMessageQueue pointer saved at initialization;
2. If the head message is not yet due, set the next blocking timeout to the gap between its trigger time and now;
3. If a message is due, unlink it from the list and return it;
4. If there is nothing to process, run all previously registered IdleHandlers (see the sketch below).
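
Step 4 is the hook exposed publicly as MessageQueue.IdleHandler; a minimal usage sketch:

// Runs on the calling thread's queue once it goes idle (nothing due to dispatch).
Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
    @Override
    public boolean queueIdle() {
        // deferred, low-priority work goes here
        return false; // false = one-shot; matches the "!keep -> remove" branch above
    }
});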

Going further down, the next stop is nativePollOnce, the blocking fetch. Onward.

That function takes us into the C++ layer:

static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jobject obj,
        jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, obj, timeoutMillis);
}

This lands us in NativeMessageQueue. To understand the native side properly, we need to look at its initialization first.

5. The native layer

  • Initialization

First, the initialization of the NativeMessageQueue class:

NativeMessageQueue::NativeMessageQueue() :
        mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}

Here is another Looper; note that this is no longer the Java-layer one but the native Looper, living in /system/core/libutils/Looper.cpp. Same routine: look at its initialization:

Looper::Looper(bool allowNonCallbacks) :
        mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
        mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
        mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd: %s",
                        strerror(errno));

    AutoMutex _l(mLock);
    rebuildEpollLocked();
}

void Looper::rebuildEpollLocked() {
    // Close old epoll instance if we have one.
    if (mEpollFd >= 0) {
#if DEBUG_CALLBACKS
        ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
        close(mEpollFd);
    }

    // Allocate the new epoll instance and register the wake pipe.
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance: %s", strerror(errno));

    struct epoll_event eventItem;
    memset(& eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
    eventItem.events = EPOLLIN;
    eventItem.data.fd = mWakeEventFd;
    int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, & eventItem);
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance: %s",
                        strerror(errno));

    for (size_t i = 0; i < mRequests.size(); i++) {
        const Request& request = mRequests.valueAt(i);
        struct epoll_event eventItem;
        request.initEventItem(&eventItem);

        int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, & eventItem);
        if (epollResult < 0) {
            ALOGE("Error adding epoll events for fd %d while rebuilding epoll set: %s",
                  request.fd, strerror(errno));
        }
    }
}

See it? epoll is used to monitor multiple fds: first the wake event fd, then one fd per entry in the request list, each with its own event mask. What is a request? We'll set that aside for now and come back to it.
To summarize initialization: NativeMessageQueue creates a native Looper; the native Looper creates a wake eventfd, builds an epoll instance, and registers the wake fd (plus the fd of every outstanding Request) with it.

  • Reading messages --- nativePollOnce

static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jobject obj,
        jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, obj, timeoutMillis);
}

void NativeMessageQueue::pollOnce(JNIEnv* env, jobject pollObj, int timeoutMillis) {
    mPollEnv = env;
    mPollObj = pollObj;
    mLooper->pollOnce(timeoutMillis);
    mPollObj = NULL;
    mPollEnv = NULL;

    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}

As you can see here, the real work is done by the native Looper's pollOnce.

int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        // Drain the responses: for a request with no callback (ident >= 0), return its ident directly
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p",
                        this, ident, fd, events, data);
#endif
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}

There is really just one key call: pollInner:

......
    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mPolling = true;

    // Handle at most EPOLL_MAX_EVENTS (16) fds at a time
    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    // Wait for an event or a timeout
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
    // No longer idling.
    mPolling = false;

    // Acquire lock.
    mLock.lock();

    // Rebuild the epoll set if a rebuild was requested
    if (mEpollRebuildRequired) {
        mEpollRebuildRequired = false;
        rebuildEpollLocked();
        goto Done;
    }

    // eventCount < 0 means an error: jump straight to Done
    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        ALOGW("Poll failed with an unexpected error: %s", strerror(errno));
        result = POLL_ERROR;
        goto Done;
    }
    
    // Timed out: jump to Done
    if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - timeout", this);
#endif
        result = POLL_TIMEOUT;
        goto Done;
    }
    
    ......
    
    // Process each event we received
    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeEventFd) {
        // The wake fd: perform the wake-up handling
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
            }
        } else {
            // Otherwise, look up the request registered for this fd
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                // Build the new events mask and push a new response via pushResponse
                int events = 0;
                if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
    
Done: ;

    // Invoke pending message callbacks.
    mNextMessageUptime = LLONG_MAX;
    // Work through the backlog of pending native messages
    while (mMessageEnvelopes.size() != 0) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
        if (messageEnvelope.uptime <= now) {
            // Remove the envelope from the list.
            // We keep a strong reference to the handler until the call to handleMessage
            // finishes.  Then we drop it so that the handler can be deleted *before*
            // we reacquire our lock.
            { // obtain handler
                sp<MessageHandler> handler = messageEnvelope.handler;
                Message message = messageEnvelope.message;
                mMessageEnvelopes.removeAt(0);
                mSendingMessage = true;
                mLock.unlock();

#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
                ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
                        this, handler.get(), message.what);
#endif
                handler->handleMessage(message);
            } // release handler

            mLock.lock();
            mSendingMessage = false;
            result = POLL_CALLBACK;
        } else {
            // The last message left at the head of the queue determines the next wakeup time.
            mNextMessageUptime = messageEnvelope.uptime;
            break;
        }
    }

    // Release lock.
    mLock.unlock();

    // Process each response
    for (size_t i = 0; i < mResponses.size(); i++) {
        Response& response = mResponses.editItemAt(i);
        if (response.request.ident == POLL_CALLBACK) {
            int fd = response.request.fd;
            int events = response.events;
            void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
                    this, response.request.callback.get(), fd, events, data);
#endif
            // Invoke the callback.  Note that the file descriptor may be closed by
            // the callback (and potentially even reused) before the function returns so
            // we need to be a little careful when removing the file descriptor afterwards.
            int callbackResult = response.request.callback->handleEvent(fd, events, data);
            if (callbackResult == 0) {
                removeFd(fd, response.request.seq);
            }

            // Clear the callback reference in the response structure promptly because we
            // will not clear the response vector itself until the next poll.
            response.request.callback.clear();
            result = POLL_CALLBACK;
        }
    }
    return result;

To summarize:
1. Wait for events via epoll_wait;
2. Match each received event against the request table, then build and push a Response;
3. Loop through the backlog of mMessageEnvelopes and handle the ones that are due;
4. Process all Responses.

This brings up three things: Request, Response, and mMessageEnvelopes. Let's take them one at a time.
First, Requests are added in addFd, so a Request binds an fd to the events it cares about. One fd can watch several events combined with the | operator; whenever an event arrives, & tests whether it is one we care about, and if so pushResponse is executed.
Response is even simpler: it just maintains the pairing between the fired events and the Request:

struct Request {
        int fd;
        int ident;
        int events;
        int seq;
        sp<LooperCallback> callback;
        void* data;

        void initEventItem(struct epoll_event* eventItem) const;
    };

    struct Response {
        int events;
        Request request;
    };

mRequests is a vector indexed by fd; mResponses is simply a plain vector.

mMessageEnvelopes is a vector storing MessageEnvelope objects:

struct MessageEnvelope {
        MessageEnvelope() : uptime(0) { }

        MessageEnvelope(nsecs_t u, const sp<MessageHandler> h,
                const Message& m) : uptime(u), handler(h), message(m) {
        }

        nsecs_t uptime;
        sp<MessageHandler> handler;
        Message message;
    };

A MessageEnvelope is created and inserted (insertAt) in sendMessageAtTime. So a MessageEnvelope is just a cached pending native message, recording the time it should run (uptime), the handler, and the message body itself.

  • Handling messages --- invoking the handler

The handling itself already appeared above: the call response.request.callback->handleEvent(fd, events, data). So what is this callback?

class LooperCallback : public virtual RefBase {
protected:
    virtual ~LooperCallback();

public:
    /**
     * Handles a poll event for the given file descriptor.
     * It is given the file descriptor it is associated with,
     * a bitmask of the poll events that were triggered (typically EVENT_INPUT),
     * and the data pointer that was originally supplied.
     *
     * Implementations should return 1 to continue receiving callbacks, or 0
     * to have this file descriptor and callback unregistered from the looper.
     */
    virtual int handleEvent(int fd, int events, void* data) = 0;
};

It is just a callback object, passed in via addFd. setFileDescriptorEvents calls addFd with this, meaning NativeMessageQueue intercepts the callback itself. Incidentally, setFileDescriptorEvents is ultimately exposed to the Java layer, as the nativeSetFileDescriptorEvents function.
Back on track: since what gets invoked is handleEvent, let's look at it:

int NativeMessageQueue::handleEvent(int fd, int looperEvents, void* data) {
    int events = 0;
    if (looperEvents & Looper::EVENT_INPUT) {
        events |= CALLBACK_EVENT_INPUT;
    }
    if (looperEvents & Looper::EVENT_OUTPUT) {
        events |= CALLBACK_EVENT_OUTPUT;
    }
    if (looperEvents & (Looper::EVENT_ERROR | Looper::EVENT_HANGUP | Looper::EVENT_INVALID)) {
        events |= CALLBACK_EVENT_ERROR;
    }
    int oldWatchedEvents = reinterpret_cast<intptr_t>(data);
    int newWatchedEvents = mPollEnv->CallIntMethod(mPollObj,
            gMessageQueueClassInfo.dispatchEvents, fd, events);
    if (!newWatchedEvents) {
        return 0; // unregister the fd
    }
    if (newWatchedEvents != oldWatchedEvents) {
        setFileDescriptorEvents(fd, newWatchedEvents);
    }
    return 1;
}

After assembling the events mask, it calls mPollEnv->CallIntMethod(mPollObj, gMessageQueueClassInfo.dispatchEvents, fd, events). mPollEnv is a JNIEnv, so this is plainly a call up into the Java layer. The target? MessageQueue.dispatchEvents:

private int dispatchEvents(int fd, int events) {
        // Get the file descriptor record and any state that might change.
        final FileDescriptorRecord record;
        final int oldWatchedEvents;
        final OnFileDescriptorEventListener listener;
        final int seq;
        synchronized (this) {
            record = mFileDescriptorRecords.get(fd);
            if (record == null) {
                return 0; // spurious, no listener registered
            }

            oldWatchedEvents = record.mEvents;
            events &= oldWatchedEvents; // filter events based on current watched set
            if (events == 0) {
                return oldWatchedEvents; // spurious, watched events changed
            }

            listener = record.mListener;
            seq = record.mSeq;
        }

        // Invoke the listener outside of the lock.
        int newWatchedEvents = listener.onFileDescriptorEvents(
                record.mDescriptor, events);
        if (newWatchedEvents != 0) {
            newWatchedEvents |= OnFileDescriptorEventListener.EVENT_ERROR;
        }

        // Update the file descriptor record if the listener changed the set of
        // events to watch and the listener itself hasn't been updated since.
        if (newWatchedEvents != oldWatchedEvents) {
            synchronized (this) {
                int index = mFileDescriptorRecords.indexOfKey(fd);
                if (index >= 0 && mFileDescriptorRecords.valueAt(index) == record
                        && record.mSeq == seq) {
                    record.mEvents = newWatchedEvents;
                    if (newWatchedEvents == 0) {
                        mFileDescriptorRecords.removeAt(index);
                    }
                }
            }
        }

        // Return the new set of events to watch for native code to take care of.
        return newWatchedEvents;
    }

The essential step is listener.onFileDescriptorEvents(record.mDescriptor, events): invoke the listener that was registered earlier, selected by fd. From a quick search, the only Java-layer caller of addOnFileDescriptorEventListener is ParcelFileDescriptor.fromFd, and digging further leads to MountService.mountAppFuse, so the listener appears to be registered when an app FUSE mount happens. In short, when fd events arrive (EVENT_INPUT/EVENT_OUTPUT/EVENT_ERROR/EVENT_HANGUP/EVENT_INVALID), the matching listener is notified to respond; by the look of it, this path has little to do with ordinary Message handling.
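
Although the framework is the main user, the entry point is public (API 23+). A minimal sketch, assuming pipeReadFd is a java.io.FileDescriptor we own (for example the read end of a pipe):

MessageQueue queue = Looper.getMainLooper().getQueue();
queue.addOnFileDescriptorEventListener(pipeReadFd,
        MessageQueue.OnFileDescriptorEventListener.EVENT_INPUT,
        new MessageQueue.OnFileDescriptorEventListener() {
            @Override
            public int onFileDescriptorEvents(FileDescriptor fd, int events) {
                // consume the data available on fd here...
                return EVENT_INPUT; // keep watching; returning 0 unregisters the fd
            }
        });

If the callback returns a different event set from the one registered, dispatchEvents updates the record and hands the new mask back to the native layer, which is exactly the newWatchedEvents path above.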
