Interprocess communication in one sentence: passing data between processes.
Binder is the most important IPC mechanism in the Android system. Many articles dive straight into code and internals, which can be painfully obscure for readers without deep Android or Linux experience; you read one part and forget the last. The core thread to follow when exploring Binder is simply: how does a Client find a Server, send it a request, and get the result back?
Binder is based on OpenBinder, with many Android-specific features added after it was brought in. For example, ServiceManager logic was added at the driver level, building the ServiceManager-Client-Server framework model. Android is based on the Linux kernel and fully inherits Linux's process/thread model: a process is divided into user space and kernel space, and processes cannot communicate directly in user space; data can only be passed through kernel space. Binder itself is just one way of doing IPC, but in Android it has been elevated to a core role: Android can essentially be viewed as an RPC model (Remote Procedure Call protocol) built on Binder, i.e. a C/S architecture.
Binder only defines Android's communication model; the business logic inside is still implemented by each Server itself. Don't confuse data transport with business processing: Android merely builds a C/S communication framework, or protocol, on top of Binder. Since Android is based on the Linux kernel, Binder appears in Linux as a character device. For every process that opens the Binder device, the driver allocates a block of address space in the kernel. When a Client sends data to a Server, the data is actually written into the Server's address space inside the kernel, after which the Server is notified to fetch it. The principle is simple, but Google layered many abstractions and optimizations on top of it to use Binder more sensibly, which makes the code dizzying to read. The hard parts are the suspension and wake-up of processes and threads, and the Android C/S framework.
How does ServiceManager manage Servers?
When each Server process registers, it first inserts a binder_node (the Binder entity) into the red-black tree of Binder nodes in its own process's kernel space; then a ref structure referencing that node is added in ServiceManager's kernel space. The ref holds the relevant information such as the name and the ptr address.
How does a Client find a Server and send it a request?
When a Client calls getService, ServiceManager finds the Server's node and a binder_ref to the Server is created in the Client. The Client can then find that ref in its own process's kernel space, ultimately reach the Server's binder_node, access the Server directly, transfer the data, and wake it up.
On the Client side, where is the binder_ref to the service entity stored, and how does it relate to the handle?
For every process that opens the Binder device (Client and Server alike), the driver creates a binder_proc structure in kernel space. binder_proc contains four red-black trees: threads, nodes, refs_by_desc, and refs_by_node, which record the process's threads, its local Binder entities, and its Binder references (indexed by handle and by node) for fast local lookup. The handle is simply a scheme for letting the client locate its binder_ref conveniently; it only identifies the target.
How is the target process or thread woken up?
Every Binder process and thread sets up its own wait queue in the kernel. The Client tells the Binder driver the target process or thread, and the driver wakes up the thread or process sleeping on that wait queue.
How does the Server find the process or thread to return the result to? When the Client sends a request, it records the requester's information in the from field of the binder_transaction.
When are binder_node and ref nodes added?
The driver performs a conversion between TYPE_BINDER and TYPE_HANDLE. A binder_node is added when the Binder Server process (usually a native process) registers with ServiceManager, while the ref is added when the Client calls getService, and it is added through ServiceManager.
How does Binder achieve a single copy?
When data is copied from user space into the kernel, it is copied directly into the target process's kernel space. This happens on the requesting thread; only the object being operated on is the target process's kernel space. In fact, the binder_transaction_data in the kernel is allocated directly inside the target process. Because the Binder memory region of a Binder process differs between kernel space and user space only by a fixed offset, the target's user space can complete the access without copying the data a second time.
Binder receive-thread management: requests carry no special mark when sent, so how does the driver decide which packets go into the global to-do queue and which go into a specific thread's to-do queue? There are two rules:【1】
Rule 1: request packets from a Client to a Server are submitted to the Server process's global to-do queue, with one exception. When thread T1 of process P1 sends a request to process P2, the driver first checks whether T1 is itself still handling a request from some thread of P2 (still in progress, no reply sent). This usually happens when two processes both hold Binder entities and send requests to each other. If such a thread is found in P2, say T2, the driver asks T2 to handle T1's request: since T2 sent T1 a request and has not yet received its reply, T2 is certainly (or will be) blocked reading the reply. T2 might as well do something useful instead of sitting idle; and if T2 is not a thread-pool thread, it also offloads some work from the pool. With this optimization, T1's request is not submitted to P2's global to-do queue but delivered into T2's private to-do queue.
Rule 2: reply packets for synchronous requests (sent with BC_REPLY) are always delivered to the private to-do queue of the thread that issued the request. In the example above, if thread T1 of P1 sends a synchronous request to thread T2 of P2, T2's reply packet goes into T1's private to-do queue rather than P1's global to-do queue.
Do all Binder Servers register with ServiceManager?
Java-layer Binder entities do not necessarily go to ServiceManager. In particular, for services obtained through bindService, it is actually ActivityManagerService that plays the role of ServiceManager.
What is the real purpose of IPCThreadState::joinThreadPool?
It can be understood as joining the process's kernel-side thread pool and looping there. Multiple threads can be started; one is actually enough, but in case it cannot keep up, several threads can be opened to handle requests, much like a thread pool.
Why doesn't ServiceManager use joinThreadPool at startup, but instead drives its own loop with a for loop?
Because the Binder environment is not ready yet, so it drives the loop itself; it therefore doesn't use the talkWithDriver machinery and doesn't implement onTransact. As noted earlier, Binder changed Android at a deep level, and ServiceManager is treated specially even inside the driver: in binder_transaction, requests whose target is ServiceManager get special handling.
At the application layer, ServiceManager is typically used as follows (based on the Android 4.3 source):
public abstract Object getSystemService(@ServiceName @NonNull String name);
So when is ServiceManager started? Its code lives in /frameworks/native/cmds/servicemanager/, and in init.rc you can see:
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
So ServiceManager is started by the init process. On Linux, init is the ancestor of all user-space processes, so ServiceManager does not depend on any Android service process; it is loaded entirely by the system's init process.
The entry point of the servicemanager started by init is the main function in service_manager.c:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
It mainly does three things:
Step 1: call binder_open to open the Binder device and map its buffer (128 KB here).
Step 2: call binder_become_context_manager to register itself with the driver as the context manager.
Step 3: call binder_loop to enter a loop and listen for communication requests from Client processes.
Let's analyze the core functions in detail:
int binder_become_context_manager(struct binder_state *bs)
{
return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
Entering the Binder driver:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    ......
    case BINDER_SET_CONTEXT_MGR:
        ......
        binder_context_mgr_uid = current->cred->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;
    ......
}
The Binder driver creates a binder_node for ServiceManager and records it in the static variable binder_context_mgr_node. Normally, every application-layer Binder entity corresponds to a binder_node in the driver; binder_context_mgr_node is special in that it has no application-layer Binder entity behind it. The system mandates that every application access it across processes through handle 0: since ServiceManager is unique and has only one service entity, there is no need to distinguish between Binder entities. Android specifies that almost any user process can reach ServiceManager through handle 0, whereas the remote-interface handles of all other Servers are values greater than 0 assigned by the Binder driver. The special handling for requests targeting ServiceManager can be seen inside the driver.
Now for the loop opened by binder_loop:
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        ....
    }
}
As you can see, ServiceManager reads the Binder character device directly via ioctl in a for loop; if no request has arrived, it suspends its own process and waits for a request to wake it. When a Client's request targets ServiceManager, the driver distinguishes it as follows:
if (tr->target.handle) {    // not ServiceManager, i.e. tr->target.handle != 0
    struct binder_ref *ref;
    ref = binder_get_ref(proc, tr->target.handle);
    if (ref == NULL) {
        binder_user_error("binder: %d:%d got "
            "transaction to invalid handle\n",
            proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_invalid_target_handle;
    }
    target_node = ref->node;
} else {                    // ServiceManager itself, i.e. tr->target.handle == 0
    target_node = binder_context_mgr_node;
    if (target_node == NULL) {
        return_error = BR_DEAD_REPLY;
        goto err_no_context_mgr_node;
    }
}
At this point we know how ServiceManager starts, what it actually does, and how a client finds it. Next, let's look at how a Server registers itself with ServiceManager, and how a Client finds a target Server through ServiceManager.
When the system boots, init starts the ServiceManager process first; afterwards, init starts the mediaserver process. Note that mediaserver is started by init, not by SystemServer. mediaserver is configured in the init.rc file, and the Service entries in init.rc are started in order.
service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc mediadrm
    ioprio rt 4
The entry point of mediaserver is the main function in /frameworks/av/media/mediaserver/main_mediaserver.cpp:
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    AudioFlinger::instantiate();        // registers itself with ServiceManager
    MediaPlayerService::instantiate();  // registers itself with ServiceManager
    CameraService::instantiate();       // registers itself with ServiceManager
    AudioPolicyService::instantiate();  // registers itself with ServiceManager
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
From the code above you can see that one process can register multiple services at once. So how exactly are they registered with ServiceManager? Intuitively: obtain ServiceManager's remote proxy through defaultServiceManager(), then send a request through the proxy to register the Server's information with ServiceManager.
defaultServiceManager is declared in IServiceManager.h; it does not belong to any class and works like a global function, and its implementation shows traces of the singleton pattern:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
gDefaultServiceManager is defined in namespace android as a global variable; if you want to find it in the source, it is in Static.h:
namespace android {

// For ProcessState.cpp
extern Mutex gProcessMutex;
extern sp<ProcessState> gProcess;

// For ServiceManager.cpp
extern Mutex gDefaultServiceManagerLock;
extern sp<IServiceManager> gDefaultServiceManager;
extern sp<IPermissionController> gPermissionController;

}   // namespace android
In IServiceManager / ProcessState.cpp we find the relevant #include; the BpInterface template looks like this:
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
    BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder* onAsBinder();
};

template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote) {}

template<typename INTERFACE>
inline IBinder* BpInterface<INTERFACE>::onAsBinder()
{
    return remote();
}

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}
The BpBinder used for communication ends up assigned to mRemote. This touches on Android's smart pointers, which deserve their own study as a separate topic. At this point, the creation of BpServiceManager is fully analyzed. Now that it exists, let's use it to register the current system Server with ServiceManager:
defaultServiceManager()->addService(String16("media.player"), new MediaPlayerService());
This actually calls BpServiceManager's addService; look at the source:
virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
The key is remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); remote() ultimately returns mRemote, i.e. BpBinder(0):
inline IBinder* remote() { return mRemote; }
Now look at BpBinder's transact function:
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
which ultimately calls:
status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
Here is the key code:
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    if (err == NO_ERROR) {
        // write into the out buffer (mOut)
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);   // talk to the Binder driver
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
waitForResponse sends the request to ServiceManager and blocks waiting for the reply. How is it sent? Through talkWithDriver:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        // this is where the data is actually sent to the driver
        if ((err = talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
talkWithDriver() involves ioctl, which goes into the Binder driver.
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
        // This log shows that talkWithDriver both writes and reads in one
        // call, and that the requesting side blocks waiting for the reply.
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {    // data was read: hand it to mIn
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}
Look at ioctl, a system call; the corresponding binder_ioctl lives in the Binder driver. It first writes the data; for an asynchronous transfer there is no need to wait for a reply, while a synchronous request must block until the data comes back. This is reflected in the flags field of binder_transaction_data: if the TF_ONE_WAY bit is 1, the transfer is asynchronous.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    // binder_proc of the current process
    struct binder_proc *proc = filp->private_data;
    // the representation of each of the process's threads inside the driver
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    binder_lock(__func__);
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        // > 0 means this ioctl has data to send
        if (bwr.write_size > 0) {
            // the write path
            ret = binder_thread_write(proc, thread,
                (void __user *)bwr.write_buffer,
                bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            // returns 0 on success, < 0 on error
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        // > 0 means this ioctl wants to receive data
        if (bwr.read_size > 0) {
            // the read path: this blocks until woken up
            ret = binder_thread_read(proc, thread,
                (void __user *)bwr.read_buffer,
                bwr.read_size, &bwr.read_consumed,
                filp->f_flags & O_NONBLOCK);
            trace_binder_read_done(ret);
            /* If, on return from the read, the todo queue still holds
               pending work, wake the next idle thread waiting on
               binder_proc.wait. */
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
While writing the request and its data, binder_transaction is called. This is a key function that touches the core of the Binder driver's bookkeeping:
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    // the in-kernel transfer structure used when sending a request
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    size_t *offp, *off_end;
    // binder_proc of the target process
    struct binder_proc *target_proc;
    // binder_thread of the target thread
    struct binder_thread *target_thread = NULL;
    // in-kernel node of the target binder entity
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;

    e = binder_transaction_log_add(&binder_transaction_log);
    e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
    e->from_proc = proc->pid;
    e->from_thread = thread->pid;
    e->target_handle = tr->target.handle;
    e->data_size = tr->data_size;
    e->offsets_size = tr->offsets_size;

    if (reply) {
        // this is a reply: in_reply_to was set up on the Server side
        in_reply_to = thread->transaction_stack;
        // ... (error handling elided)
        binder_set_nice(in_reply_to->saved_priority);
        // verify this really is the thread expected to reply
        if (in_reply_to->to_thread != thread) {
            // ... (error handling elided)
        }
        thread->transaction_stack = in_reply_to->to_parent;
        // the target thread, recorded when the request was sent
        target_thread = in_reply_to->from;
        if (target_thread->transaction_stack != in_reply_to) {
            // ... (error handling elided)
        }
        // derive the target process from the target thread
        target_proc = target_thread->proc;
    } else {
        // this is a request
        if (tr->target.handle) {
            // the target is an ordinary service, not ServiceManager
            struct binder_ref *ref;
            /* Look up the binder_ref in the sending process itself: for an
               ordinary client it was added there during getService, via
               ServiceManager. Note the handle is recorded locally; it only
               indexes the local ref. */
            ref = binder_get_ref(proc, tr->target.handle);
            // ... (error handling elided)
            target_node = ref->node;
        } else {
            // the target is ServiceManager: an ioctl that wants to reach
            // its binder entity must set tr->target.handle = 0
            target_node = binder_context_mgr_node;
        }
        e->to_node = target_node->debug_id;
        // derive the target process
        target_proc = target_node->proc;
As you can see, for a request the driver uses target.handle to find the target process's proc node from within the sender's own process. The first time, it usually finds binder_context_mgr_node; ServiceManager then causes a ref to the target Service's node to be set up in the Client process, after which the Client can find target_proc directly.
/* When a binder transaction (struct binder_transaction) needs memory,
   binder_alloc_buf allocates it -- note: FROM THE TARGET PROCESS's binder
   memory area. This is the single copy that makes the communication work:
   the data lands directly in the target's kernel space, and since the
   target's user-space mapping differs from it only by a fixed offset, it
   is effectively in the target's user space as well. */
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
// ... (error handling elided)
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
// the transaction this binder_buffer belongs to
t->buffer->transaction = t;
// the target binder entity of this transaction -- the target process may
// hold more than one binder entity
t->buffer->target_node = target_node;
trace_binder_transaction_alloc_buf(t->buffer);
if (target_node)
    binder_inc_node(target_node, 1, 0, NULL);

// Start address of the flat_binder_object offsets array, pointer-aligned.
offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

// struct flat_binder_object is how a binder is represented in transit
// between processes. This is the one copy per leg between the user process
// and the kernel buffer -- and the copy goes straight into the target
// process, because t itself was allocated in the target's kernel space.
if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
    binder_user_error("binder: %d:%d got transaction with invalid "
        "data ptr\n", proc->pid, thread->pid);
    return_error = BR_FAILED_REPLY;
    goto err_copy_data_failed;
}
// Copy the flat_binder_object offsets array embedded in the payload.
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
    // ... (error handling elided)
}
if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {
    // ... (error handling elided)
}

// The loop below only runs when the payload actually carries binder
// objects. For example, when MediaPlayer does getMediaPlayerService the
// payload contains no objects, off_end equals offp, and the loop is skipped.
off_end = (void *)offp + tr->offsets_size;   // end of the offsets array
for (; offp < off_end; offp++) {
    struct flat_binder_object *fp;
    /* *offp is the offset of one flat_binder_object within t->buffer->data.
       Validate that it lies within range, that the payload is at least one
       flat_binder_object in size, and that it is pointer-aligned
       (a pointer is 4 bytes on a 32-bit platform). */
    if (*offp > t->buffer->data_size - sizeof(*fp) ||
        t->buffer->data_size < sizeof(*fp) ||
        !IS_ALIGNED(*offp, sizeof(void *))) {
        // ... (error handling elided)
    }
    // the flat_binder_object at this offset
    fp = (struct flat_binder_object *)(t->buffer->data + *offp);
    switch (fp->type) {
    // Only a process that owns the binder entity may send these types.
    case BINDER_TYPE_BINDER:
    case BINDER_TYPE_WEAK_BINDER: {
        struct binder_ref *ref;
        // Search the sending process's binder_proc->nodes tree by the
        // entity's user-space address to see whether a binder_node already
        // exists; if not, create the node in the sender's own process.
        struct binder_node *node = binder_get_node(proc, fp->binder);
        if (node == NULL) {
            node = binder_new_node(proc, fp->binder, fp->cookie);
            if (node == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_new_node_failed;
            }
            node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
            node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
        }
        // validation
        if (fp->cookie != node->cookie) {
            // ... (error handling elided)
        }
        if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_get_ref_for_node_failed;
        }
        // Create a reference to the node in the TARGET process. When the
        // target is ServiceManager this is how it comes to hold a ref to
        // every registered Service.
        ref = binder_get_ref_for_node(target_proc, node);
        // ... (error handling elided)
        // Rewrite the type and handle fields before handing the
        // flat_binder_object to the receiver.
        if (fp->type == BINDER_TYPE_BINDER)
            fp->type = BINDER_TYPE_HANDLE;
        else
            fp->type = BINDER_TYPE_WEAK_HANDLE;
        fp->handle = ref->desc;
        binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
        trace_binder_transaction_node_to_ref(t, node, ref);
        binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        node %d u%p -> ref %d desc %d\n",
                     node->debug_id, node->ptr, ref->debug_id, ref->desc);
    } break;
    case BINDER_TYPE_HANDLE:
    case BINDER_TYPE_WEAK_HANDLE: {
        // Look up the sender's binder_ref by the handle value.
        struct binder_ref *ref = binder_get_ref(proc, fp->handle);
        // ... (error handling elided)
        if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_get_ref_failed;
        }
        if (ref->node->proc == target_proc) {
            // The target happens to be the process that owns the entity
            // behind this handle: convert it back into a local binder by
            // rewriting type, binder, and cookie.
            if (fp->type == BINDER_TYPE_HANDLE)
                fp->type = BINDER_TYPE_BINDER;
            else
                fp->type = BINDER_TYPE_WEAK_BINDER;
            fp->binder = ref->node->ptr;
            fp->cookie = ref->node->cookie;
            binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
            trace_binder_transaction_ref_to_node(t, ref);
        } else {
            // Otherwise search the target's refs_by_node tree for an
            // existing binder_ref for ref->node; if none exists, create a
            // new one in the target process and hang it in the tree.
            struct binder_ref *new_ref;
            new_ref = binder_get_ref_for_node(target_proc, ref->node);
            if (new_ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            // pass on the new ref's handle value
            fp->handle = new_ref->desc;
            binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
            trace_binder_transaction_ref_to_ref(t, ref, new_ref);
        }
    } break;

// How the target thread (if any) is found: the ref's node records the
// target process, and for a synchronous request the driver checks whether
// this thread is itself mid-conversation with the target.
if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
    struct binder_transaction *tmp;
    tmp = thread->transaction_stack;
    /* This is the work-thread optimization described earlier (see
       http://blog.csdn.net/universus/article/details/6211589): if thread T1
       of P1 is still serving a not-yet-answered request from some thread T2
       of P2 and now sends P2 a synchronous request, the request goes into
       T2's private to-do queue instead of P2's global queue, since T2 is
       (or will be) blocked waiting for T1's reply anyway. Likewise, a
       BC_REPLY always goes to the private to-do queue of the requesting
       thread; this is also why a requester waits on its private queue
       rather than the global one. Only mutually nested requests ever
       target a specific thread at send time. */
    while (tmp) {
        if (tmp->from && tmp->from->proc == target_proc)
            target_thread = tmp->from;
        tmp = tmp->from_parent;
    }
}

// If a target thread was found, wake that thread's wait queue; otherwise
// wake the target process's wait queue.
if (target_thread) {
    e->to_thread = target_thread->pid;
    target_list = &target_thread->todo;
    target_wait = &target_thread->wait;
} else {
    target_list = &target_proc->todo;
    target_wait = &target_proc->wait;
}
e->to_proc = target_proc->pid;

/* TODO: reuse incoming transaction for reply */
t = kzalloc(sizeof(*t), GFP_KERNEL);
// ... (error handling elided)
binder_stats_created(BINDER_STAT_TRANSACTION);

// Allocate the binder_work structure used for this one leg of the transfer.
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
// ... (error handling elided)
binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

t->debug_id = ++binder_last_id;
e->debug_id = t->debug_id;

if (!reply && !(tr->flags & TF_ONE_WAY))
    // Sending leg of a synchronous transfer: record the current
    // binder_thread in t->from so that the reply leg can find its way
    // back to the requesting task.
    t->from = thread;
else
    // BC_REPLY or asynchronous transfer: no return path is needed.
    t->from = NULL;
t->sender_euid = proc->tsk->cred->euid;
t->to_proc = target_proc;           // target process of the transaction
t->to_thread = target_thread;       // target thread of the transaction
t->code = tr->code;                 // passed through untouched by the driver
t->flags = tr->flags;
t->priority = task_nice(current);   // priority migration
trace_binder_transaction(reply, t, target_node);
....
if (reply) {
    BUG_ON(t->buffer->async_transaction != 0);
    // pop the transaction stack
    binder_pop_transaction(target_thread, in_reply_to);
}
// Synchronous transfer: the requester expects a reply.
else if (!(t->flags & TF_ONE_WAY)) {
    BUG_ON(t->buffer->async_transaction != 0);
    // mark that return data is expected
    t->need_reply = 1;
    // Push onto the thread's transaction stack. It is a stack because
    // further transactions can be nested while this one awaits its reply.
    t->from_parent = thread->transaction_stack;
    thread->transaction_stack = t;
    /* If this task was not already mid-communication, transaction_stack is
       NULL here (as on any first send). It is non-NULL if an earlier
       asynchronous transfer is still unreleased, or if this task is itself
       currently serving another process's request. transaction_stack thus
       manages a linked list whose head pointer always points at the newest
       member, with the oldest at the bottom -- hence the name "stack". */
}
// Asynchronous transfer: no reply expected.
else {
    BUG_ON(target_node == NULL);
    BUG_ON(t->buffer->async_transaction != 1);
    // has_async_transaction is 1 if the target still has an unfinished
    // asynchronous transaction.
    if (target_node->has_async_transaction) {
        // Queue this async transfer on the node's async_todo list instead,
        // and do not wake anyone: something is already running.
        target_list = &target_node->async_todo;
        target_wait = NULL;
    } else
        // Mark the node as having an async transaction in flight.
        target_node->has_async_transaction = 1;
}

t->work.type = BINDER_WORK_TRANSACTION;
// queue the transaction on the target's to-do list
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
// queue the completion notice on the requesting thread's own to-do list
list_add_tail(&tcomplete->entry, &thread->todo);
// Wake the target: the thread's wait queue if a thread was chosen, else
// the process's -- but only when target_wait != NULL.
if (target_wait)
    wake_up_interruptible(target_wait);
return;
// err: ... (error handling elided)
Through the steps above, the binder driver wakes the blocked ServiceManager, which parses and handles the addService request. Earlier, after ServiceManager became the context manager, we left it blocked in binder_loop(bs, svcmgr_handler); woken by the binder driver, it continues:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;

    if (txn->target != svcmgr_handle)
        strict_policy = bio_get_uint32(msg);

    s = bio_get_string16(msg, &len);

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = bio_get_ref(msg);
        if (do_add_service(bs, s, len, ptr, txn->sender_euid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        LOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
As you can see, addService ultimately calls do_add_service:
int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid, int allow_isolated)
{
    struct svcinfo *si;

    if (!ptr || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(uid, s)) {
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->ptr) {
            svcinfo_death(bs, si);
        }
        si->ptr = ptr;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                  str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, ptr);
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
This adds the Service to ServiceManager's svclist. From then on, a Client can look up the corresponding Service in ServiceManager via getService.
At the native layer, a Binder entity generally implements the BBinder interface, the key being the onTransact function; this is Android's abstraction of the Binder communication framework. When discussing addService above we only mentioned BpServiceManager, not the Server-side MediaPlayerService entity. MediaPlayerService actually inherits from BnMediaPlayerService, and BnMediaPlayerService in turn inherits from BnInterface. This Bn/Bp pair is the best illustration of the Binder framework: combining templates with inheritance, the upper layers use dependency inversion to capture the framework neatly. To implement some business logic, you only need to define an interface and let Bn and Bp implement it. Android's Binder framework abstraction is elegant: business logic and the underlying transport are cleanly separated yet unified, serving both uniform communication and application extensibility. The business logic corresponds to the BBinder entity, while starting the listening loop belongs to the process and its threads. A native process only needs to call the following to enter the Binder listening loop; the core is still talkWithDriver:
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
Actually, a single call to IPCThreadState::self()->joinThreadPool() would suffice. On Linux, the program exits when the main thread exits, so the main thread must enter a loop anyway; rather than letting it sit idle, it is reused. Of the two Binder threads, one is started deliberately and one comes for free, and the core driver interaction still goes through talkWithDriver:
void IPCThreadState::joinThreadPool(bool isMain)
{
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        ...
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            int32_t cmd = mIn.readInt32();
            result = executeCommand(cmd);
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    // (log output elided)

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
For a Java-layer app, every process is a child of Zygote, and there is nothing to worry about because ActivityThread keeps the UI thread in its loop. startThreadPool passes BC_ENTER_LOOPER down to the driver. What for? Simply to notify the Binder driver that this thread will serve as a Binder listening main thread, since some Clients may send requests before the Server has entered its loop. Once the current process implements a BBinder service and has registered it, the thread started by ProcessState::self()->startThreadPool() can access that BBinder; the lower layer is uniform. As soon as the loop is entered, Binder is up. With two threads running, which one does a request end up waking? That is discussed later. At this point, the Service's loop is effectively up and running.
The Java layer works much like the native layer: implement the service, then register it with ServiceManager. These Java-layer system services provide no UI and should not be confused with Applications. Take ActivityManagerService as an example:
public static final Context main(int factoryTest) {
    AThread thr = new AThread();
    thr.start();

    synchronized (thr) {
        while (thr.mService == null) {
            try {
                thr.wait();
            } catch (InterruptedException e) {
            }
        }
    }

    ActivityManagerService m = thr.mService;
    mSelf = m;
    ActivityThread at = ActivityThread.systemMain();
    mSystemThread = at;
    Context context = at.getSystemContext();
    context.setTheme(android.R.style.Theme_Holo);
    m.mContext = context;
    m.mFactoryTest = factoryTest;
    m.mMainStack = new ActivityStack(m, context, true, thr.mLooper);
    m.mIntentFirewall = new IntentFirewall(m.new IntentFirewallInterface());
public static void setSystemProcess() {
    try {
        ActivityManagerService m = mSelf;
        ServiceManager.addService("activity", m, true);
        ServiceManager.addService("meminfo", new MemBinder(m));
        ServiceManager.addService("gfxinfo", new GraphicsBinder(m));
        ServiceManager.addService("dbinfo", new DbBinder(m));
        if (MONITOR_CPU_USAGE) {
            ServiceManager.addService("cpuinfo", new CpuBinder(m));
        }
        ServiceManager.addService("permission", new PermissionController(m));
A Client request generally happens on the current thread. Look at the usage of MediaPlayerService: in Java code, MediaPlayerService shows up as the MediaPlayer class, which developers typically use like this:
MediaPlayer mMediaPlayer = new MediaPlayer();
mMediaPlayer.setDataSource(path);
mMediaPlayer.prepareAsync();
Java is an interpreted-style language; the concrete implementation must be found in the native code.
public MediaPlayer() {
    /* Native setup requires a weak reference to our object.
     * It's easier to create it here than in C++. */
    native_setup(new WeakReference<MediaPlayer>(this));  // the key line: JNI
}
native_setup is implemented in android_media_MediaPlayer.cpp as follows:
static void android_media_MediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
    // create the C++ MediaPlayer peer of the Java object, and hand it back to Java
    sp<MediaPlayer> mp = new MediaPlayer();
    if (mp == NULL) {
        jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
        return;
    }

    // create new listener and give it to MediaPlayer
    sp<JNIMediaPlayerListener> listener = new JNIMediaPlayerListener(env, thiz, weak_this);
    mp->setListener(listener);

    setMediaPlayer(env, thiz, mp);
}
The code above wires up the mutual references between the Java object and its C++ counterpart, essentially an object wrapper; it does not yet touch Binder. Continue with MediaPlayer's setDataSource:
status_t MediaPlayer::setDataSource(const sp<IStreamSource> &source)
{
    ALOGV("setDataSource");
    status_t err = UNKNOWN_ERROR;
    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != 0) {
        sp<IMediaPlayer> player(service->create(this, mAudioSessionId));
        if ((NO_ERROR != doSetRetransmitEndpoint(player)) ||
            (NO_ERROR != player->setDataSource(source))) {
            player.clear();
        }
        err = attachNewPlayer(player);
    }
    return err;
}
This code creates the MediaPlayer proxy, which in effect means creating a BpMediaPlayerService. getMediaPlayerService() obtains the handle-based reference; it lives in IMediaDeathNotifier.cpp.
// establish binder interface to MediaPlayerService
/*static*/ const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    ALOGV("getMediaPlayerService");
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.player")); // fetch the MediaPlayer service here
            if (binder != 0) {
                break;
            }
            ALOGW("Media player service not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    return sMediaPlayerService;
}
Binder finally enters the picture here. Three statements matter: defaultServiceManager(), which obtains the ServiceManager proxy; sm->getService(String16("media.player")), which fetches the target service's IBinder; and interface_cast<IMediaPlayerService>(binder), which wraps it into the business interface.
With these, the path for the Client to obtain a Service proxy is complete. Start with the first part, defaultServiceManager(), which lives in IServiceManager.cpp; addService was analyzed earlier, so here we go one level deeper:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
Following this path to the end, the code above reduces to:
gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));

android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
BpServiceManager is a proxy (BpRefBase-derived) object, so it must hold a reference to a BpBinder — and indeed mRemote is that reference. BpServiceManager itself models the abstract ServiceManager service interface, while BpBinder is the client's actual tool for Binder IPC. BpBinder has nothing to do with business logic: MediaPlayer's play and stop are entirely unrelated to it; it only ships requests into the Binder driver. Here you can see the separation of business from transport: BpServiceManager(BpBinder). That concludes part one.
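The separation described here can be sketched in plain Java: the business proxy owns the domain vocabulary (hypothetical FakeBpMediaPlayerService), while the transport object only ships requests (hypothetical FakeBpBinder, standing in for the C++ BpBinder):

```java
// Transport layer: knows only how to ship a request, nothing about media.
class FakeBpBinder {
    // In the real implementation this would be an ioctl on /dev/binder; here we echo.
    String transact(int code, String data) {
        return "reply(code=" + code + ", data=" + data + ")";
    }
}

// Business layer: translates domain calls into transact() codes.
class FakeBpMediaPlayerService {
    private static final int CREATE = 1;
    private final FakeBpBinder mRemote; // same role as BpRefBase::mRemote

    FakeBpMediaPlayerService(FakeBpBinder remote) {
        mRemote = remote;
    }

    String create(String clientName) {
        // the business method delegates the transport entirely to mRemote
        return mRemote.transact(CREATE, clientName);
    }
}
```

Swapping the transport (e.g. for a test double) would not touch the business class, which is exactly the property the BpServiceManager/BpBinder split buys.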
Part two: obtaining the remote proxy of the target Service. The code:
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++) {
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        sleep(1);
    }
    return NULL;
}

virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
As you can see, checkService ultimately sends the request to ServiceManager and returns an IBinder that serves as the Client's interface to the target. remote() returns the BpBinder reference, so remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply) ends up in BpBinder's transact function:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a Binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
IPCThreadState then calls its own transact function, which hands the request to the Binder driver. The key code:
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    if (err == NO_ERROR) {
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    if ((flags & TF_ONE_WAY) == 0) {
        ...
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ...
IPCThreadState first calls writeTransactionData to assemble the data — note, it only assembles it — and then calls waitForResponse to send it to ServiceManager:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    ...
    tr.data_size = data.ipcDataSize();
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount() * sizeof(size_t);
    tr.data.ptr.offsets = data.ipcObjects();
    ...
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
The code above does not interact with Binder directly; it only assembles the data. The interaction begins in waitForResponse:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    while (1) {
        // send the request
        if ((err = talkWithDriver()) < NO_ERROR) break;
        // mIn now holds the driver's reply; dispatch on it below
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
        ...
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size / sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size / sizeof(size_t), this);
                    }
                }
                ...
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
        ...
Note the role of the while(1) loop in waitForResponse: it keeps talking to the driver until a meaningful reply arrives. In effect there are several handshakes here.
while (1) {
    if ((err = talkWithDriver()) < NO_ERROR) break;
    err = mIn.errorCheck();
    if (err < NO_ERROR) break;
    if (mIn.dataAvail() == 0) continue;
    ...

Inside talkWithDriver, a do/while loop retries until the data has actually reached the kernel:

do {
#if defined(HAVE_ANDROID_OS)
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
#else
    err = INVALID_OPERATION;
#endif
    IF_LOG_COMMANDS() {
        alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
    }
} while (err == -EINTR);
ioctl writes the command, writes back an acknowledgement, then reads the reply. Why write the acknowledgement first? Much as in TCP, the sender is first told "delivered" before it is put to wait — command delivery is handled in stages.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    ...
    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        ...
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread,
                    (void __user *)bwr.write_buffer,
                    bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        ...
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread,
                    (void __user *)bwr.read_buffer,
                    bwr.read_size, &bwr.read_consumed,
                    filp->f_flags & O_NONBLOCK);
            trace_binder_read_done(ret);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        ...
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    case BINDER_SET_CONTEXT_MGR:
        ...
        else
            binder_context_mgr_uid = current->cred->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        break;
    ...
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n",
               proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
    trace_binder_ioctl_done(ret);
    return ret;
}
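The staged write-then-read exchange that binder_ioctl performs can be modeled as a single round trip (a toy Java model, not driver code; the command names mirror the real BC_/BR_ protocol):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of one BINDER_WRITE_READ round trip: the caller's write buffer
// is consumed first, an acknowledgement (BR_TRANSACTION_COMPLETE) is queued
// for each command, and only afterwards does the actual reply appear.
class FakeBinderDriver {
    Deque<String> ioctlWriteRead(Deque<String> writeBuf) {
        Deque<String> readBuf = new ArrayDeque<>();
        while (!writeBuf.isEmpty()) {
            writeBuf.poll();                        // consume e.g. BC_TRANSACTION
            readBuf.add("BR_TRANSACTION_COMPLETE"); // ack: "sent", not yet "answered"
        }
        readBuf.add("BR_REPLY");                    // the real reply arrives last
        return readBuf;
    }
}
```

This mirrors why waitForResponse loops: the first thing it reads back is the completion acknowledgement, and only a later iteration yields BR_REPLY.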
Note in particular ServiceManager's handling of the reply below: there, a ref to the Server's binder_node is added on behalf of the client, and a reference handle is created in the client. The handle is valid only within that client — the client uses it, in its own kernel space, to look up the Server's binder_proc and proceed.
void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len, unsigned uid)
{
    struct svcinfo *si;
    si = find_svc(s, len);
    // ALOGI("check_service('%s') ptr = %p\n", str8(s), si ? si->ptr : 0);
    if (si && si->ptr) {
        if (!si->allow_isolated) {
            // If this service doesn't allow access from isolated processes,
            // then check the uid to see if it is isolated.
            unsigned appid = uid % AID_USER;
            if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
                return 0;
            }
        }
        return si->ptr;
    } else {
        return 0;
    }
}
do_find_service returns the Service's ref pointer held inside the Binder driver; when the reply is written back, the object is flattened as BINDER_TYPE_HANDLE:
void bio_put_ref(struct binder_io *bio, void *ptr)
{
    struct binder_object *obj;
    if (ptr)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));
    if (!obj)
        return;
    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->pointer = ptr;
    obj->cookie = 0;
}
Inside the Binder driver, a conversion is performed when needed: if the target process happens to be the one hosting the Binder entity, the object is converted to BINDER_TYPE_BINDER; otherwise it stays BINDER_TYPE_HANDLE, and a ref to the entity's node is created in the Client's kernel space. This is what establishes the Client's communication path.
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
    // look up the binder_ref in the current process by handle
    struct binder_ref *ref = binder_get_ref(proc, fp->handle);
    if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_get_ref_failed;
    }
    if (ref->node->proc == target_proc) {
        /* The target process is exactly the one hosting the binder entity
           behind this handle, so rewrite the flat_binder_object in place:
           type, binder and cookie. */
        if (fp->type == BINDER_TYPE_HANDLE)
            fp->type = BINDER_TYPE_BINDER;
        else
            fp->type = BINDER_TYPE_WEAK_BINDER;
        fp->binder = ref->node->ptr;
        fp->cookie = ref->node->cookie;
        binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
        trace_binder_transaction_ref_to_node(t, ref);
    } else {
        /* Otherwise search the target process's refs_by_node tree for an
           existing binder_ref for ref->node; if none is found, create a
           new binder_ref for the target process and insert it into
           refs_by_node. */
        struct binder_ref *new_ref;
        new_ref = binder_get_ref_for_node(target_proc, ref->node);
        if (new_ref == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_get_ref_for_node_failed;
        }
        // only the handle needs updating: the new ref's descriptor
        fp->handle = new_ref->desc;
        binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
        trace_binder_transaction_ref_to_ref(t, ref, new_ref);
    }
}
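The branch above — hand over the real entity when the target hosts it, otherwise mint a per-process handle — reduces to the following toy model (hypothetical Java types; the real tables are kernel red-black trees keyed by binder_node):

```java
import java.util.HashMap;
import java.util.Map;

// Toy flat_binder_object: either a direct reference or a per-process handle.
class FlatObject {
    String type;    // "BINDER" or "HANDLE"
    Object binder;  // direct pointer when local to the target
    Integer handle; // per-process handle when remote
}

class FakeDriver {
    // per-target-process handle table, keyed by the node object
    private final Map<String, Map<Object, Integer>> refs = new HashMap<>();

    FlatObject translate(Object node, String owningProc, String targetProc) {
        FlatObject fp = new FlatObject();
        if (owningProc.equals(targetProc)) {
            // the target owns the entity: pass the object itself (TYPE_BINDER)
            fp.type = "BINDER";
            fp.binder = node;
        } else {
            // cross-process: reuse or create a handle valid only in the target
            Map<Object, Integer> table =
                refs.computeIfAbsent(targetProc, k -> new HashMap<>());
            Integer h = table.get(node);
            if (h == null) {
                h = table.size() + 1; // next free descriptor, like new_ref->desc
                table.put(node, h);
            }
            fp.type = "HANDLE";
            fp.handle = h;
        }
        return fp;
    }
}
```

Translating the same node to the same target twice yields the same handle, matching binder_get_ref_for_node's reuse of an existing binder_ref.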
The reply data is then written into the Client process's kernel space; thanks to the same single-copy principle, the Client's user space can use the returned data directly. The Client then calls readStrongBinder() to obtain the BpBinder:
status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    if (flat) {
        switch (flat->type) {
        case BINDER_TYPE_BINDER:
            *out = static_cast<IBinder*>(flat->cookie);
            return finish_unflatten_binder(NULL, *flat, in);
        case BINDER_TYPE_HANDLE:
            *out = proc->getStrongProxyForHandle(flat->handle);
            return finish_unflatten_binder(
                static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}
The Binder framework then layers the business logic on top — attaching the request and service interface to the transport:
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
sMediaPlayerService here is in fact a BpMediaPlayerService, the local proxy of MediaPlayerService. At this point MediaPlayer holds a local proxy of MediaPlayerService.
A Client's request to a Service follows the same flow as addService, except the Server is a specific XXXService rather than ServiceManager, so it is not analyzed separately.
Binder support is set up when the Zygote process starts; every Application process is forked from Zygote, so it supports Binder IPC from birth. This mainly matters for the Server side — a Client simply issues requests on its current thread, whenever needed. When Zygote starts an Android application it calls zygoteInit to initialize the runtime environment: VM heap and stack sizes, Binder thread registration, and so on.
public static final void zygoteInit(int targetSdkVersion, String[] argv)
        throws ZygoteInit.MethodAndArgsCaller {
    redirectLogStreams();
    commonInit();
    // start the Binder thread pool so this process supports Binder IPC
    nativeZygoteInit();
    applicationInit(targetSdkVersion, argv);
}
nativeZygoteInit creates the thread pool. It is a native function; its JNI implementation lives in frameworks/base/core/jni/AndroidRuntime.cpp:
static void com_android_internal_os_RuntimeInit_nativeZygoteInit(JNIEnv* env, jobject clazz)
{
    gCurRuntime->onZygoteInit();
}
The variable gCurRuntime is of type AndroidRuntime; onZygoteInit() is a virtual function implemented by the subclass AppRuntime in frameworks/base/cmds/app_process/App_main.cpp:
virtual void onZygoteInit()
{
    sp<ProcessState> proc = ProcessState::self();
    ALOGV("App process: starting thread pool.\n");
    proc->startThreadPool();
}
The function first obtains the ProcessState object, then calls its startThreadPool() to start the thread pool:
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
This mirrors how a Native Service starts its own loop: a thread is spawned to listen on the Binder device, blocking until Client requests arrive:
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
When an Android application needs to talk to another process, it only has to define a Binder object and hand that object's remote proxy to the other process by some means; the other process can then call back into the application through the Binder object's remote interface. Unlike implementing a Binder Server in C++, there is no need to call IPCThreadState::joinThreadPool manually to spin in an endless loop talking to the Binder driver for Client requests.
In fact not every process needs Binder IPC. The BC_ENTER_LOOPER command registers the current thread with the driver — the "thread pool" is really a driver-side notion, realized as binder_thread structures in the kernel. The handling of BC_ENTER_LOOPER is very simple: it merely sets the thread's binder_thread state flag to BINDER_LOOPER_STATE_ENTERED. After binder_thread_write returns, bwr.read_size > 0, so binder_ioctl proceeds to the driver's read path. The thread is now registered with the Binder driver and sleeps awaiting client requests; when a request arrives, the Binder thread is woken to receive and handle it. This is how an application process supports Binder IPC through registered Binder threads — joinThreadPool entering its loop is what opens the Server's Binder path. When a thread exits, it also tells the Binder driver, so the driver stops dispatching Client-side calls to it:
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
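The driver's looper bookkeeping can be sketched as a tiny state machine (a toy Java model of the kernel's binder_thread state flags; the flag names are illustrative):

```java
// Toy binder_thread state machine: BC_ENTER_LOOPER marks the thread as a
// pool thread; BC_EXIT_LOOPER removes it from dispatch.
class FakeBinderThread {
    static final int STATE_ENTERED = 1 << 0;
    static final int STATE_EXITED  = 1 << 1;
    int looper = 0;

    void write(String cmd) {
        switch (cmd) {
            case "BC_ENTER_LOOPER": looper |= STATE_ENTERED; break;
            case "BC_EXIT_LOOPER":  looper |= STATE_EXITED;  break;
        }
    }

    // The driver only dispatches to threads that entered and have not exited.
    boolean dispatchable() {
        return (looper & STATE_ENTERED) != 0 && (looper & STATE_EXITED) == 0;
    }
}
```

A thread becomes dispatchable only after entering the loop, and stops receiving client calls as soon as it announces its exit.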
Java-layer Applications use Binder through the AIDL language. Why is defining an interface and a service with AIDL enough? What transformation does the build system perform to make the interface Binder-capable? AIDL (Android Interface Definition Language) is an implementation language layered on the Binder framework: at compile time it wraps all the Binder plumbing, and the generated IXXXService.Stub and IXXXService.Proxy classes embody that wrapping. The net effect of AIDL is to make IPC look like an ordinary function call — the system serializes the request data and parses the reply for you. All the developer does is write an interface file, run the aidl tool to produce a Java file, and place a copy of that file in both the service and the client program; the service extends IXXXService.Stub and implements the interface methods. A clear Server/Client architecture is visible here. The Service component itself is just a container — the AIDL-based IXXXService is the real service; the Service mainly handles initialization, bind and unbind operations, and teardown.
// Server side
public interface ICatService extends android.os.IInterface {
    /** Local-side IPC implementation stub class. */
    public static abstract class Stub extends android.os.Binder
            implements org.crazyit.service.ICatService {
        private static final java.lang.String DESCRIPTOR = "org.crazyit.service.ICatService";
        /** Construct the stub and attach it to the interface. */
        public Stub() {
            // the superclass constructor is always invoked here, even though it is not written out
            this.attachInterface(this, DESCRIPTOR);
        }
// Client side
private static class Proxy implements org.crazyit.service.ICatService {
    // mRemote is the Binder proxy; it is instantiated in native code, so no
    // Java-level construction is visible — which also makes for better encapsulation.
    private android.os.IBinder mRemote;
    ...

final class BinderProxy implements IBinder {
    public native boolean pingBinder();
    public native boolean isBinderAlive();
    public IInterface queryLocalInterface(String descriptor) {
        return null;
    }
    ...
After bindService, the client receives a Binder reference — a raw Binder, not an IXXXService.Proxy or an IXXXService.Stub instance — from which an IXXXService.Proxy or Stub must be obtained. If server and client live in the same process, the IXXXService can simply be used as an ordinary object. IXXXService.Stub.asInterface unifies both cases: same process or not, once you have the Binder reference, calling IXXXService.Stub.asInterface(IBinder obj) yields an IXXXService instance whose methods you can call directly.
/**
 * Cast an IBinder object into an org.crazyit.service.ICatService
 * interface, generating a proxy if needed.
 */
public static org.crazyit.service.ICatService asInterface(android.os.IBinder obj) {
    if ((obj == null)) {
        return null;
    }
    android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
    if (((iin != null) && (iin instanceof org.crazyit.service.ICatService))) {
        return ((org.crazyit.service.ICatService) iin);
    }
    return new org.crazyit.service.ICatService.Stub.Proxy(obj);
}
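The unified dispatch that asInterface performs — local object in the same process, proxy across processes — can be modeled in a few lines (hypothetical names, plain Java):

```java
// Minimal model of Stub.asInterface: same process -> return the local object,
// cross process -> wrap the IBinder in a Proxy doing IPC.
interface ICat {
    String meow();
}

class FakeIBinder {
    private final ICat local; // non-null only in the service's own process
    FakeIBinder(ICat local) { this.local = local; }
    ICat queryLocalInterface() { return local; }
}

class CatStub {
    static ICat asInterface(FakeIBinder obj) {
        ICat iin = obj.queryLocalInterface();
        if (iin != null) return iin;     // same process: direct call, no IPC
        return () -> "meow-over-binder"; // remote: a proxy that would transact()
    }
}
```

Callers never need to know which case they hit — exactly the convenience the generated Stub.asInterface provides.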
bindService and onServiceConnected run asynchronously. First, the bindService source:
@Override
public boolean bindService(Intent service, ServiceConnection conn, ...
    int res = ActivityManagerNative.getDefault().bindService(
            mMainThread.getApplicationThread(), getActivityToken(), service,
            service.resolveTypeIfNeeded(getContentResolver()), sd, flags);
    if (res < 0) {
        throw new SecurityException("Not allowed to bind to service " + service);
    }
    ...
}
IServiceConnection.Stub is itself a Binder hierarchy: the InnerConnection object is a Binder object that will shortly be handed to ActivityManagerService, and ActivityManagerService later uses this Binder object to talk back to the ServiceConnection.
private static class InnerConnection extends IServiceConnection.Stub {
    final WeakReference<LoadedApk.ServiceDispatcher> mDispatcher;
    ...
    InnerConnection(LoadedApk.ServiceDispatcher sd) {
        mDispatcher = new WeakReference<LoadedApk.ServiceDispatcher>(sd);
    }
    ...
}

Now look at the bind function:

public int bindService(IApplicationThread caller, IBinder token, Intent service,
        String resolvedType, IServiceConnection connection, int flags
    ...
    data.writeStrongBinder(connection.asBinder());
    ...
Next, retrieveServiceLocked yields a ServiceRecord. This ServiceRecord describes a Service object — here, CounterService — built from the contents of the incoming service parameter. Recall the binding call in MainActivity.onCreate:
Intent bindIntent = new Intent(MainActivity.this, CounterService.class);
bindService(bindIntent, serviceConnection, Context.BIND_AUTO_CREATE);
The service parameter is the bindIntent above; it carries the Service class information, which is extracted here and stored in the ServiceRecord object s. Next, the incoming connection parameter is wrapped into a ConnectionRecord. Note that connection is a Binder object of type LoadedApk.ServiceDispatcher.InnerConnection, created earlier; ActivityManagerService will later use it to tell MainActivity that CounterService has started. The ConnectionRecord variable c is therefore saved — in several places, all so it can be conveniently retrieved later; the details are not worth chasing here, only that ActivityManagerService can fetch it whenever it needs it. We first follow the app.thread.scheduleCreateService path, then come back to the requestServiceBindingsLocked call. app.thread is the remote interface of a Binder object, of type ApplicationThreadProxy. Every Android application process holds an ActivityThread object and an ApplicationThread object — the latter a member of the former — used for IPC between ActivityThread and ActivityManagerService.
IBinder binder = s.onBind(data.intent);
ActivityManagerNative.getDefault().publishService(data.token, data.intent, binder);
How IServiceConnection is converted on the AMS side:
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    if (flat) {
        switch (flat->type) {
        case BINDER_TYPE_BINDER:
            *out = static_cast<IBinder*>(flat->cookie);
            return finish_unflatten_binder(NULL, *flat, in);
        case BINDER_TYPE_HANDLE:
            // we are the Client, so this is the branch taken
            *out = proc->getStrongProxyForHandle(flat->handle);
            return finish_unflatten_binder(
                static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}
What comes back is a BpBinder whose handle is the one passed in. So reply.readStrongBinder() returns a BpBinder, and interface_cast(reply.readStrongBinder()) receives that BpBinder as its argument.
case BIND_SERVICE_TRANSACTION: {
    data.enforceInterface(IActivityManager.descriptor);
    IBinder b = data.readStrongBinder();
    IApplicationThread app = ApplicationThreadNative.asInterface(b);
    IBinder token = data.readStrongBinder();
    Intent service = Intent.CREATOR.createFromParcel(data);
    String resolvedType = data.readString();
    // this conversion turns b into a proxy
    b = data.readStrongBinder();
    int fl = data.readInt();
    IServiceConnection conn = IServiceConnection.Stub.asInterface(b);
    int res = bindService(app, token, service, resolvedType, conn, fl);
    reply.writeNoException();
    reply.writeInt(res);
    return true;
}
Next, the proxy side of publishService:
class ActivityManagerProxy implements IActivityManager {
    ...
    public void publishService(IBinder token, Intent intent, IBinder service)
            throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IActivityManager.descriptor);
        data.writeStrongBinder(token);
        intent.writeToParcel(data, 0);
        data.writeStrongBinder(service);
        mRemote.transact(PUBLISH_SERVICE_TRANSACTION, data, reply, 0);
        reply.readException();
        data.recycle();
        reply.recycle();
    }
    ...
}
private void handleBindService(BindServiceData data) {
    Service s = mServices.get(data.token);
    if (DEBUG_SERVICE) Slog.v(TAG, "handleBindService s=" + s + " rebind=" + data.rebind);
    if (s != null) {
        try {
            data.intent.setExtrasClassLoader(s.getClassLoader());
            try {
                if (!data.rebind) {
                    IBinder binder = s.onBind(data.intent);
                    ActivityManagerNative.getDefault().publishService(
                            data.token, data.intent, binder);
                } else {
                    s.onRebind(data.intent);
                    ActivityManagerNative.getDefault().serviceDoneExecuting(
                            data.token, 0, 0, 0);
                }
                ensureJitEnabled();
            } catch (RemoteException ex) {
            }
        } catch (Exception e) {
            if (!mInstrumentation.onException(s, e)) {
                throw new RuntimeException(
                        "Unable to bind to service " + s
                        + " with " + data.intent + ": " + e.toString(), e);
            }
        }
    }
}
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
    struct binder_ref *ref = binder_get_ref(proc, fp->handle);
    if (ref == NULL) {
        binder_user_error("binder: %d:%d got transaction with invalid handle, %ld\n",
                          proc->pid, thread->pid, fp->handle);
        return_error = BR_FAILED_REPLY;
        goto err_binder_get_ref_failed;
    }
    if (ref->node->proc == target_proc) {
        if (fp->type == BINDER_TYPE_HANDLE)
            fp->type = BINDER_TYPE_BINDER;
        else
            fp->type = BINDER_TYPE_WEAK_BINDER;
        fp->binder = ref->node->ptr;
        fp->cookie = ref->node->cookie;
        binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
        if (binder_debug_mask & BINDER_DEBUG_TRANSACTION)
            printk(KERN_INFO "        ref %d desc %d -> node %d u%p\n",
                   ref->debug_id, ref->desc, ref->node->debug_id, ref->node->ptr);
    } else {
        struct binder_ref *new_ref;
        new_ref = binder_get_ref_for_node(target_proc, ref->node);
        if (new_ref == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_get_ref_for_node_failed;
        }
        fp->handle = new_ref->desc;
        binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
        if (binder_debug_mask & BINDER_DEBUG_TRANSACTION)
            printk(KERN_INFO "        ref %d desc %d -> ref %d desc %d (node %d)\n",
                   ref->debug_id, ref->desc, new_ref->debug_id,
                   new_ref->desc, ref->node->debug_id);
    }
} break;
Java-layer communication goes through wrappers; the AIDL in/out direction tags are one example.
On the Java side, every client-side Binder proxy is a BinderProxy, and all of them are created in native code — which is why you never see BinderProxy instantiated at the Java level. BinderProxy lives in Binder.java:
final class BinderProxy implements IBinder {
    public native boolean pingBinder();
    public native boolean isBinderAlive();
    ...
It is created in native code, in /frameworks/base/core/jni/android_util_Binder.cpp:
const char* const kBinderProxyPathName = "android/os/BinderProxy";
clazz = env->FindClass(kBinderProxyPathName);
gBinderProxyOffsets.mClass = (jclass) env->NewGlobalRef(clazz);

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) {
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        return object;
    }

    // For the rest of the function we will hold this lock, to serialize
    // looking/creation of Java proxies for native Binder proxies.
    AutoMutex _l(mProxyLock);

    // Someone else's... do we know about it?
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
        jobject res = env->CallObjectMethod(object, gWeakReferenceOffsets.mGet);
        if (res != NULL) {
            LOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            return res;
        }
        LOGV("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }

    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        LOGV("objectForBinder %p: created new %p!\n", val.get(), object);
        // The proxy holds a reference to the native object.
        env->SetIntField(object, gBinderProxyOffsets.mObject, (int)val.get());
        val->incStrong(object);
        // The native object needs to hold a weak reference back to the
        // proxy, so we can retrieve the same proxy if it is still active.
        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);
        // Note that a new object reference has been created.
        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }
    return object;
}
Next, AMS's bindService calls bindServiceLocked in ActiveServices.java, which stores the IServiceConnection instance in a ConnectionRecord and runs bringUpServiceLocked:
int bindServiceLocked(IApplicationThread caller, IBinder token, Intent service,
        String resolvedType, IServiceConnection connection, int flags, int userId) {
    ...
    ConnectionRecord c = new ConnectionRecord(b, activity, connection, flags,
            clientLabel, clientIntent);
    IBinder binder = connection.asBinder();
    ...
    if (bringUpServiceLocked(s, service.getFlags(), callerFg, false) != null) {
        return 0;
    }
    ...
bringUpServiceLocked calls realStartServiceLocked: scheduleCreateService creates the service and runs its onCreate(); requestServiceBindingsLocked handles the bind-related processing; and finally sendServiceArgsLocked handles starting the service.
private final void realStartServiceLocked(ServiceRecord r, ProcessRecord app,
        boolean execInFg) throws RemoteException {
    ...
    // create and start the Service
    app.thread.scheduleCreateService(r, r.serviceInfo,
            mAm.compatibilityInfoForPackageLocked(r.serviceInfo.applicationInfo),
            app.repProcState);
    requestServiceBindingsLocked(r, execInFg);
    sendServiceArgsLocked(r, execInFg, true);
    ...
}
Continuing down, requestServiceBindingsLocked calls ActivityThread's scheduleBindService, in ActivityThread.java, which posts a BIND_SERVICE message handled by handleBindService:
private void handleBindService(BindServiceData data) {
    ...
    if (!data.rebind) {
        // first bind
        IBinder binder = s.onBind(data.intent);
        ActivityManagerNative.getDefault().publishService(
                data.token, data.intent, binder);
    } else {
        s.onRebind(data.intent);
        ActivityManagerNative.getDefault().serviceDoneExecuting(
                data.token, 0, 0, 0);
    }
    ...
Here the Service's onBind method is called first — since it is overridden, the concrete service class's implementation runs — and the Binder instance inside the service is returned, converted, and sent back to AMS. AMS then calls publishService, which leads to ActiveServices.java's publishServiceLocked:
void publishServiceLocked(ServiceRecord r, Intent intent, IBinder service) {
    ...
    for (int conni = r.connections.size() - 1; conni >= 0; conni--) {
        ArrayList<ConnectionRecord> clist = r.connections.valueAt(conni);
        for (int i = 0; i < clist.size(); i++) {
            ConnectionRecord c = clist.get(i);
            try {
                c.conn.connected(r.name, service);
            ...
    serviceDoneExecutingLocked(r, mDestroyingServices.contains(r), false);
    ...
The key call here is c.conn.connected: c is the ConnectionRecord, its conn member is an IServiceConnection instance, and connected is a method of its implementation class. This is again a Binder proxy/stub pair: IServiceConnection is an AIDL-defined interface located at core/java/android/app/IServiceConnection.aidl, with a single method, connected:
oneway interface IServiceConnection {
    void connected(in ComponentName name, IBinder service);
}
其服务端的实如今LoadedApk.java,InnerConnection类是在ServiceDispatcher的内部类,并在ServiceDispatcher的构造函数里面实例化的,其方法connected也是调用的ServiceDispatcher的方法connected,
private static class InnerConnection extends IServiceConnection.Stub {
    final WeakReference<LoadedApk.ServiceDispatcher> mDispatcher;

    InnerConnection(LoadedApk.ServiceDispatcher sd) {
        mDispatcher = new WeakReference<LoadedApk.ServiceDispatcher>(sd);
    }

    public void connected(ComponentName name, IBinder service) throws RemoteException {
        LoadedApk.ServiceDispatcher sd = mDispatcher.get();
        if (sd != null) {
            sd.connected(name, service);
        }
    }
}

ServiceDispatcher(ServiceConnection conn, Context context,
        Handler activityThread, int flags) {
    mIServiceConnection = new InnerConnection(this);
    mConnection = conn;
    mContext = context;
    mActivityThread = activityThread;
    mLocation = new ServiceConnectionLeaked(null);
    mLocation.fillInStackTrace();
    mFlags = flags;
}
This brings us back to the bindServiceCommon method in ContextImpl mentioned earlier: the conversion of ServiceConnection into IServiceConnection goes through mPackageInfo.getServiceDispatcher, where mPackageInfo is a LoadedApk instance:
/*package*/ LoadedApk mPackageInfo;

private boolean bindServiceCommon(Intent service, ServiceConnection conn,
        int flags, UserHandle user) {
    IServiceConnection sd;
    ...
    sd = mPackageInfo.getServiceDispatcher(conn, getOuterContext(),
            mMainThread.getHandler(), flags);
    ...
}
So getServiceDispatcher creates a ServiceDispatcher instance, stores it keyed by the ServiceConnection instance, and in the ServiceDispatcher constructor assigns the ServiceConnection instance c to the member variable mConnection:
public final IServiceConnection getServiceDispatcher(ServiceConnection c,
        Context context, Handler handler, int flags) {
    synchronized (mServices) {
        LoadedApk.ServiceDispatcher sd = null;
        ArrayMap<ServiceConnection, LoadedApk.ServiceDispatcher> map = mServices.get(context);
        if (map != null) {
            sd = map.get(c);
        }
        if (sd == null) {
            sd = new ServiceDispatcher(c, context, handler, flags);
            if (map == null) {
                map = new ArrayMap<ServiceConnection, LoadedApk.ServiceDispatcher>();
                mServices.put(context, map);
            }
            map.put(c, sd);
        }
    ...
When ServiceDispatcher's connected method runs, it calls ServiceConnection's onServiceConnected, completing the bound-ServiceConnection callback:
public void doConnected(ComponentName name, IBinder service) {
    ...
    if (old != null) {
        mConnection.onServiceDisconnected(name);
    }
    // If there is a new service, it is now connected.
    if (service != null) {
        mConnection.onServiceConnected(name, service);
    }
}
bindService is thus relayed through AMS: if the Service is not running, it is started first; afterwards, publish hands the new process's Binder proxy to whichever clients need it. The overall flow is shown in the figure below:
Guiding principle — from point to surface, and surface to core: Java AIDL internals, the Binder driver, suspension and wake-up, and so on.
[1] Receiving-thread management: this is where you truly understand how the target thread or process is located.