This is the second article in the series. It covers how Binder wakes up threads and processes, and how the transmitted data is packaged and unpacked along the way.
Part 1: which wait queue does the sending thread sleep on?
The sending thread always sleeps on the wait queue of its own binder_thread, and that queue holds exactly one sleeping thread: itself.
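A minimal, abridged look at the relevant branch of binder_thread_read (a sketch based on the legacy binder.c; names and details vary between kernel versions) shows why: a thread that has just written a BC_TRANSACTION has a non-empty transaction_stack, so wait_for_proc_work is false and it sleeps on its own thread->wait rather than on the process-wide proc->wait.

```c
/* Abridged sketch based on the legacy binder.c; details vary by version. */
static int binder_thread_read(struct binder_proc *proc,
                              struct binder_thread *thread, ...)
{
    ...
    wait_for_proc_work = thread->transaction_stack == NULL &&
                         list_empty(&thread->todo);
    ...
    if (wait_for_proc_work)
        /* Idle Binder threads park on the shared process queue... */
        ret = wait_event_interruptible_exclusive(proc->wait,
                  binder_has_proc_work(proc, thread));
    else
        /* ...but a sender with a pending transaction sleeps on its own queue. */
        ret = wait_event_interruptible(thread->wait,
                  binder_has_thread_work(thread));
    ...
}
```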
Part 2: when the Binder driver wakes a thread, which wait queue does it pick the thread from?
Answering this requires understanding the `struct binder_transaction *transaction_stack` stack inside binder_thread. This stack dictates the execution order of transactions: the one on top must always run before the ones beneath it.
If the local operation is BC_REPLY, the driver always wakes the thread that previously sent the request and is waiting for the reply, no exceptions. If it is BC_TRANSACTION, things are not as clear-cut, especially when two processes act as services for each other and issue requests in both directions. Consider the scenario described below.
This raises a question: which thread is the better one to wake, a thread sleeping on the process-wide queue, or the thread BT1 that went to sleep earlier? The answer is the previously sleeping thread BT1; the step-by-step analysis below shows why.
Step 1: an ordinary thread of process A requests the B1 service of process B. Thread AT1 in process A pushes binder_transaction1 onto its own transaction_stack, and B's Binder thread, after reading the binder_work, likewise pushes binder_transaction1 onto its own stack.
Step 2: when B's Binder thread is woken and starts executing the service in the Binder entity, it finds that the service routine needs to call back into A's A1 service. It therefore sends a request to process A through Binder, creating binder_transaction2 and pushing it onto its own transaction stack; A's Binder thread, once woken, also pushes binder_transaction2 onto its stack.
So far there is still no problem. But suppose that while A1 is executing it in turn needs B's B2 service. The thread running A1 repeats the push described above, creating binder_transaction3 and pushing it onto its own stack. When writing to the target side B, however, it faces a choice: which queue should it write to, the queue on B's binder_proc, or the queue of thread BT1, which is currently waiting for A to return?
As already stated, the answer is BT1's queue. Why? binder_transaction2, sitting on BT1's stack, is waiting for process A to finish; but on A's side binder_transaction2 likewise has to wait for binder_transaction3 to finish in process B. In other words, binder_transaction3 is guaranteed to execute in B before binder_transaction2 can complete. So the driver wakes BT1 and pushes binder_transaction3 onto BT1's stack; once binder_transaction3 finishes and is popped, binder_transaction2 can proceed. This neither delays binder_transaction2 nor leaves the sleeping thread BT1 idle, so the thread is reused efficiently. The final stack layout ends up like this:
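A rough textual sketch of the three transaction stacks at this point (my reconstruction from the description above, not the original figure; T1, T2, T3 stand for binder_transaction1/2/3):

```
 Process A                                 Process B
 AT1 (original caller)   A Binder thread   BT1 (Binder thread)
 transaction_stack:      transaction_stack: transaction_stack:
   T1                      T3  <- top         T3  <- top
                           T2                 T2
                                              T1

 T3 must finish in B before the reply to T2 can be produced in A,
 and T2 must finish before T1, so pushing T3 onto BT1 is safe.
```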
When binder_transaction3 completes, popping everything back off the stacks is straightforward.
As you can see, the design is quite clever: threads are reused, efficiency improves, and no unnecessary Binder threads are spawned. In the Binder driver, the implementation simply walks the stack recorded in binder_transaction to find the right thread:
```c
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    ...
    while (tmp) {
        /* Look for a thread in target_proc that is waiting on the current
         * process; if no thread over there is blocked waiting for our reply,
         * stop looking and fall back to the process queue.
         * Walking thread->transaction_stack only ever yields Binder threads,
         * not ordinary threads: B requested a service of A, and while serving
         * it A requested B again, so A's service cannot return to B until B
         * finishes this nested request. It is therefore safe to reuse that
         * waiting B thread. */
        if (tmp->from && tmp->from->proc == target_proc)
            target_thread = tmp->from;
        tmp = tmp->from_parent;
        ...
    }
}
```
BC and BR codes mainly mark the direction in which data and transactions flow: BC commands travel from user space into the kernel, while BR commands travel from the kernel back to user space. For example, when a Client sends a request to a Server it uses BC_TRANSACTION; after the data has been written into the target process and the process owning target_proc is woken up, the kernel converts the BC into a BR and hands the data and the operation up to user space.
In the kernel, structure objects mirroring the user-space ones all have to be allocated anew, but the transferred payload itself is copied only once, and this is exactly where the famous "one copy" happens.
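To make the "one copy" concrete, here is an abridged sketch of the copy step inside the driver's binder_transaction (based on the legacy binder.c; surrounding code and error handling are simplified): the buffer is allocated from the target process's mmap'ed Binder area, so a single copy_from_user moves the payload across address spaces exactly once.

```c
/* Abridged sketch from binder_transaction() in the legacy binder.c. */
t->buffer = binder_alloc_buf(target_proc, tr->data_size, tr->offsets_size,
                             !reply && (t->flags & TF_ONE_WAY));
...
offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

/* The single user-to-kernel copy of the payload and its object offsets;
 * the destination already lives in memory mapped by the target process. */
if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size))
    return_error = BR_FAILED_REPLY;   /* error path simplified */
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size))
    return_error = BR_FAILED_REPLY;
```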
Let's trace it from the Client side, ignoring the Java layer and looking only at Native code, using ServiceManager's addService as the example.
```cpp
MediaPlayerService::instantiate();
```
MediaPlayerService creates a Binder entity and registers it with ServiceManager:
```cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
```
Here defaultServiceManager simply obtains a remote proxy for ServiceManager:
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
```
Simplified, the code boils down to:
```cpp
return gDefaultServiceManager = new BpServiceManager(new BpBinder(0));
```
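For reference, interface_cast here is just a thin template from IInterface.h, and the IMPLEMENT_META_INTERFACE macro generates the asInterface that wraps the handle in a proxy (a simplified sketch of the AOSP pattern):

```cpp
// Sketch of the AOSP IInterface template (simplified).
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

// For a remote binder, the generated IServiceManager::asInterface()
// essentially does:
//   intr = new BpServiceManager(obj);   // obj is BpBinder(0) here
```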
addService here is therefore a call to BpServiceManager::addService:
```cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
```
This is the first layer of packaging: the payload is written into a Parcel object, paired with the concrete operation code ADD_SERVICE_TRANSACTION. The part worth special attention is data.writeStrongBinder, which flattens the Binder entity:
```cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
```
Concretely, the Binder is converted into a flat_binder_object that carries its type, pointer, and related information:
```cpp
status_t flatten_binder(const sp<ProcessState>& proc,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.handle = handle;
            obj.cookie = NULL;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = NULL;
        obj.cookie = NULL;
    }
    return finish_flatten_binder(binder, obj, out);
}
```
Next, look at remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply). In the context above, remote() returns the BpBinder(0):
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
```
It then delegates to IPCThreadState::self()->transact(mHandle, code, data, reply, flags) for the next layer of packaging:
```cpp
status_t IPCThreadState::transact(int32_t handle, uint32_t code,
                                  const Parcel& data, Parcel* reply,
                                  uint32_t flags)
{
    if ((flags & TF_ONE_WAY) == 0) {
        if (err == NO_ERROR) {
            err = writeTransactionData(BC_TRANSACTION, flags, handle, code,
                                       data, NULL);
        }
        if (reply) {
            err = waitForResponse(reply);
        }
        ...
        return err;
    }
}
```
writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL) is the entry point for the next layer of packaging. In this function the Parcel data, the handle, and the code are wrapped into a binder_transaction_data object and copied into mOut's data, together with the BC_TRANSACTION command. So the CMD paired with binder_transaction_data at this level is BC_TRANSACTION, and binder_transaction_data stores the pointers and sizes that describe where the payload lives:
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    }
    ...
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
```
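For reference, the binder_transaction_data that writeTransactionData fills in looks roughly like this (sketch of the older kernel layout; current UAPI headers use binder_uintptr_t/binder_size_t instead):

```c
struct binder_transaction_data {
    union {
        size_t handle;   /* target handle, used by the sender (BC_TRANSACTION) */
        void   *ptr;     /* target node pointer, used by the receiver side */
    } target;
    void         *cookie;       /* local BBinder, filled in by the driver */
    unsigned int code;          /* e.g. ADD_SERVICE_TRANSACTION */
    unsigned int flags;
    pid_t        sender_pid;
    uid_t        sender_euid;
    size_t       data_size;     /* bytes of payload */
    size_t       offsets_size;  /* bytes of flat_binder_object offsets */
    union {
        struct {
            const void *buffer;   /* Parcel::ipcData() */
            const void *offsets;  /* Parcel::ipcObjects() */
        } ptr;
        uint8_t buf[8];
    } data;
};
```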
Once mOut has been filled in, waitForResponse calls talkWithDriver to continue the packaging:
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;

    // Is the read buffer empty? Note that the driver may return two commands
    // at once here, e.g. BR_NOOP plus BR_TRANSACTION_COMPLETE.
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        ...
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}
```
talkWithDriver wraps mOut's data and commands one level further into a binder_write_read object: bwr.write_buffer points at mOut's data (binder_transaction_data + BC_TRANSACTION). It then calls ioctl to enter the kernel and talk to the Binder driver. The CMD paired with the binder_write_read object is BINDER_WRITE_READ; inside the driver the write is processed before the read, hence the name. The command codes at the BINDER_WRITE_READ level generally deal with threads, processes, and bulk data transfer rather than any specific business logic: BINDER_SET_CONTEXT_MGR turns the calling thread into the ServiceManager thread and creates the binder_node behind handle 0, BINDER_SET_MAX_THREADS sets the maximum number of non-main Binder threads, and BINDER_WRITE_READ simply says "this is a read/write operation":
```c
#define BINDER_CURRENT_PROTOCOL_VERSION 7

#define BINDER_WRITE_READ        _IOWR('b', 1, struct binder_write_read)
#define BINDER_SET_IDLE_TIMEOUT  _IOW('b', 3, int64_t)
#define BINDER_SET_MAX_THREADS   _IOW('b', 5, size_t)
#define BINDER_SET_IDLE_PRIORITY _IOW('b', 6, int)
#define BINDER_SET_CONTEXT_MGR   _IOW('b', 7, int)
#define BINDER_THREAD_EXIT       _IOW('b', 8, int)
#define BINDER_VERSION           _IOWR('b', 9, struct binder_version)
```
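The binder_write_read structure carried by that ioctl is small; it merely describes the two buffers (sketched here in its older form; newer UAPI headers use binder_size_t/binder_uintptr_t):

```c
struct binder_write_read {
    signed long   write_size;     /* bytes of mOut data available to write */
    signed long   write_consumed; /* bytes actually consumed by the driver */
    unsigned long write_buffer;   /* points at mOut.data() */
    signed long   read_size;      /* capacity of mIn */
    signed long   read_consumed;  /* bytes the driver wrote back into mIn */
    unsigned long read_buffer;    /* points at mIn.data() */
};
```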
Let's look in detail at how binder_ioctl handles BINDER_WRITE_READ:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        ...
        /* Copy the binder_write_read object into kernel space */
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        /* If there is data to write, push it towards the target process */
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread,
                                      (void __user *)bwr.write_buffer,
                                      bwr.write_size, &bwr.write_consumed);
        }
        /* If there is room to read, pull data into our own process */
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread,
                                     (void __user *)bwr.read_buffer,
                                     bwr.read_size, &bwr.read_consumed,
                                     filp->f_flags & O_NONBLOCK);
            /* If the process todo list is not empty, also wake up
             * whoever is blocked on the process-wide queue */
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
        }
        break;
    }
    case BINDER_SET_MAX_THREADS:
        if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
            ...
        }
        break;
    case BINDER_SET_CONTEXT_MGR:
        ...
        break;
    case BINDER_THREAD_EXIT:
        binder_free_thread(proc, thread);
        thread = NULL;
        break;
    case BINDER_VERSION:
        ...
    }
}
```
binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed) peels the binder_write_read object apart again: bwr.write_buffer is exactly the (BC_TRANSACTION + binder_transaction_data) written above.
```c
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        /* The buffer holds BC_XXX + binder_transaction_data pairs;
         * in our example cmd is BC_TRANSACTION. */
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        ...
        case BC_FREE_BUFFER: {
            ...
        }
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        case BC_REGISTER_LOOPER:
            ...
        case BC_ENTER_LOOPER:
            ...
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case BC_EXIT_LOOPER:
            ...
        }
        /* Record how much of the write buffer has been consumed */
        *consumed = ptr - buffer;
    }
    return 0;
}
```
binder_thread_write then strips out binder_transaction_data tr according to the CMD and hands it to binder_transaction. By the time we reach binder_transaction the data has been unwrapped about as far as it can be; what remains is business-specific. But two things still happen here: Binder entities are converted to handles (and vice versa), and the two interacting processes need to share some data in kernel space. So there is one more round of packing and unpacking, and two new objects are created: binder_transaction and binder_work. The difference is that binder_work can be regarded as private to a process, whereas binder_transaction is shared by the two interacting processes. A binder_work is what gets inserted into a thread's or process's todo work queue:
```c
struct binder_thread {
    struct binder_proc *proc;
    struct rb_node rb_node;
    int pid;
    int looper;
    struct binder_transaction *transaction_stack;
    struct list_head todo;
    uint32_t return_error;  /* Write failed, return error code in read buf */
    uint32_t return_error2; /* Write failed, return error code in read */
    wait_queue_head_t wait;
    struct binder_stats stats;
};
```
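binder_work itself is tiny, essentially a list node plus a type tag (sketched from the legacy binder.c; the exact set of types varies by kernel version):

```c
struct binder_work {
    struct list_head entry;   /* linked into thread->todo or proc->todo */
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        BINDER_WORK_NODE,
        BINDER_WORK_DEAD_BINDER,
        BINDER_WORK_DEAD_BINDER_AND_CLEAR,
        BINDER_WORK_CLEAR_DEATH_NOTIFICATION,
    } type;
};
```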
The one to focus on is binder_transaction: it records where the current transaction came from and where it is going, and also sets things up for the eventual reply. Its buffer field points to where the one-copy payload lives in Binder's memory.
```c
struct binder_transaction {
    int debug_id;
    struct binder_work work;
    struct binder_thread *from;
    struct binder_transaction *from_parent;
    struct binder_proc *to_proc;
    struct binder_thread *to_thread;
    struct binder_transaction *to_parent;
    unsigned need_reply:1;
    /* unsigned is_dead:1; */ /* not used at the moment */

    struct binder_buffer *buffer;
    unsigned int code;
    unsigned int flags;
    long priority;
    long saved_priority;
    uid_t sender_euid;
};
```
The binder_transaction function is chiefly responsible for the work below, including the conversion between Binder entities and handles (flat_binder_object):
```c
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    size_t *offp, *off_end;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;

    /* Key point 1: locate the target process (and possibly target thread) */
    if (reply) {
        in_reply_to = thread->transaction_stack;
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        target_proc = target_thread->proc;
    } else {
        if (tr->target.handle) {
            struct binder_ref *ref;
            ref = binder_get_ref(proc, tr->target.handle);
            target_node = ref->node;
        } else {
            target_node = binder_context_mgr_node;
        }
    }
    ...
    /* Key point 2: create the binder_transaction and binder_work objects */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    ...
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    /* Key point 3: convert between Binder entities and handles */
    off_end = (void *)offp + tr->offsets_size;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);
            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
            }
            ...
            ref = binder_get_ref_for_node(target_proc, node);
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->handle = ref->desc;
        } break;
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            struct binder_ref *ref = binder_get_ref(proc, fp->handle);
            if (ref->node->proc == target_proc) {
                if (fp->type == BINDER_TYPE_HANDLE)
                    fp->type = BINDER_TYPE_BINDER;
                else
                    fp->type = BINDER_TYPE_WEAK_BINDER;
                fp->binder = ref->node->ptr;
                fp->cookie = ref->node->cookie;
            } else {
                struct binder_ref *new_ref;
                new_ref = binder_get_ref_for_node(target_proc, ref->node);
                fp->handle = new_ref->desc;
            }
        } break;
        }
    }

    /* Key point 4: insert the binder_work items into the target queues
     * and wake up the target's wait queue */
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
}
```
Key point 1 locates the target process, key point 2 creates the binder_transaction and binder_work objects, key point 3 handles the conversion between Binder entities and handles, and key point 4 inserts the binder_work into the target queue and wakes the corresponding wait queue. The conversion in key point 3 is worth a note: as the two switch branches above show, a BINDER_TYPE_BINDER sent out of its owning process is rewritten into a BINDER_TYPE_HANDLE for the receiver, while a handle that happens to refer back into the target process is converted back into the real Binder entity.
That completes the chain of data structures a write passes through. Now a quick look at the reading side, which starts from the thread blocked in kernel space in binder_thread_read. Taking a delivered BC_TRANSACTION as the example, binder_thread_read attaches the appropriate BR_XXX code to label the data the driver hands up to user space:
```c
enum BinderDriverReturnProtocol {
    BR_ERROR = _IOR_BAD('r', 0, int),
    BR_OK = _IO('r', 1),
    BR_TRANSACTION = _IOR_BAD('r', 2, struct binder_transaction_data),
    BR_REPLY = _IOR_BAD('r', 3, struct binder_transaction_data),
    BR_ACQUIRE_RESULT = _IOR_BAD('r', 4, int),
    BR_DEAD_REPLY = _IO('r', 5),
    BR_TRANSACTION_COMPLETE = _IO('r', 6),
    BR_INCREFS = _IOR_BAD('r', 7, struct binder_ptr_cookie),
    BR_ACQUIRE = _IOR_BAD('r', 8, struct binder_ptr_cookie),
    BR_RELEASE = _IOR_BAD('r', 9, struct binder_ptr_cookie),
    BR_DECREFS = _IOR_BAD('r', 10, struct binder_ptr_cookie),
    BR_ATTEMPT_ACQUIRE = _IOR_BAD('r', 11, struct binder_pri_ptr_cookie),
    BR_NOOP = _IO('r', 12),
    BR_SPAWN_LOOPER = _IO('r', 13),
    BR_FINISHED = _IO('r', 14),
    BR_DEAD_BINDER = _IOR_BAD('r', 15, void *),
    BR_CLEAR_DEATH_NOTIFICATION_DONE = _IOR_BAD('r', 16, void *),
    BR_FAILED_REPLY = _IO('r', 17),
};
```
The reading thread then builds a fresh binder_transaction_data object from the binder_transaction and copies it to user space with copy_to_user:
```c
static int binder_thread_read(struct binder_proc *proc,
                              struct binder_thread *thread,
                              void __user *buffer, int size,
                              signed long *consumed, int non_block)
{
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        if (!list_empty(&thread->todo))
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        else if (!list_empty(&proc->todo) && wait_for_proc_work)
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        else {
            if (ptr - buffer == 4 &&
                !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
                /* no data added */
                goto retry;
            break;
        }
        ...
        /* Payload sizes */
        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
        /* The kernel buffer address plus the user/kernel mapping offset
         * gives the user-space address of the same pages */
        tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                              ALIGN(t->buffer->data_size, sizeof(void *));
        /* Write the command... */
        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        /* ...then copy the descriptor structure out to user space */
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);
    }
}
```
The user-space thread waiting in ioctl is then woken. Suppose the woken side is the server: it usually has a request to execute. It first maps the received data into a Parcel object via Parcel::ipcSetDataReference, then handles the actual request through BBinder::transact:
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    ...
    // A BR_TRANSACTION means a request was read and must be handled here.
    case BR_TRANSACTION: {
        binder_transaction_data tr;
        ...
        Parcel buffer;
        buffer.ipcSetDataReference(
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
            tr.offsets_size/sizeof(size_t),
            freeBuffer, this);
        ...
        // A non-null target.ptr means the data targets a local Binder entity;
        // tr.cookie is the BBinder pointer the driver stored for that node.
        if (tr.target.ptr) {
            sp<BBinder> b((BBinder*)tr.cookie);
            const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
            if (error < NO_ERROR) reply.setError(error);
        }
        ...
    }
```
The call b->transact(tr.code, buffer, &reply, tr.flags) mirrors the transact(mHandle, code, data, reply, flags) the Client made at the very beginning, and from there execution enters the corresponding business logic.
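For completeness, here is roughly what BBinder::transact looks like in AOSP (a simplified, version-dependent sketch; ServiceManager itself is actually a plain C program with its own loop, but native services built on BnInterface follow this path): the virtual onTransact is where the generated Bn-side code unpacks the Parcel and calls the real service method.

```cpp
// Sketch based on AOSP Binder.cpp (simplified).
status_t BBinder::transact(uint32_t code, const Parcel& data,
                           Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            // Dispatch into the service's business logic, e.g. a generated
            // onTransact handling ADD_SERVICE_TRANSACTION.
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}
```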
During Binder communication the data is written from the sending process's user space directly into the target process's kernel space, and that region is mapped into the target's user space. It cannot be released until user space is done with it; in other words, the release of this kernel memory is controlled by user space. The kernel function that frees it is binder_free_buf (the other bookkeeping structures can be freed right away), and the command that triggers it is BC_FREE_BUFFER. The usual user-space entry point is IPCThreadState::freeBuffer:
```cpp
void IPCThreadState::freeBuffer(Parcel* parcel, const uint8_t* data,
                                size_t dataSize, const size_t* objects,
                                size_t objectsSize, void* cookie)
{
    if (parcel != NULL) parcel->closeFileDescriptors();
    IPCThreadState* state = self();
    state->mOut.writeInt32(BC_FREE_BUFFER);
    state->mOut.writeInt32((int32_t)data);
}
```
So when is this function called? Recall that one step of the data delivery above maps the binder_transaction_data payload into a Parcel; that step is the key:
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        ...
        case BR_REPLY: {
            binder_transaction_data tr;
            // Note that no payload is copied here, only the small descriptor
            // structure holding pointers and sizes.
            err = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
            if (err != NO_ERROR) goto finish;

            if (reply) {
                if ((tr.flags & TF_STATUS_CODE) == 0) {
                    // This is where buffer reuse and eventual release are
                    // wired up: freeBuffer is registered as the Parcel's
                    // release function.
                    reply->ipcSetDataReference(
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t),
                        freeBuffer, this);
```
Parcel::ipcSetDataReference not only maps the data into the Parcel object, it also registers the function that will clean the data up:
```cpp
void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
                                 const size_t* objects, size_t objectsCount,
                                 release_func relFunc, void* relCookie)
```
Look at the release_func relFunc parameter in the signature: it designates the memory release function, and here IPCThreadState::freeBuffer is what gets registered. On the Native side, once a Parcel reaches the end of its life cycle its destructor runs, which calls freeDataNoInit(), which in turn indirectly invokes the release function registered above:
```cpp
Parcel::~Parcel()
{
    freeDataNoInit();
}
```
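A sketch of how the destructor reaches the registered release function (based on the AOSP Parcel implementation; member names may differ across versions): if a release_func owner was set by ipcSetDataReference, the data is handed back to it instead of being freed locally.

```cpp
// Sketch based on the AOSP Parcel implementation (simplified).
void Parcel::freeDataNoInit()
{
    if (mOwner) {
        // mOwner is the release_func registered via ipcSetDataReference(),
        // i.e. IPCThreadState::freeBuffer, which queues BC_FREE_BUFFER.
        mOwner(this, mData, mDataSize, mObjects, mObjectsSize, mOwnerCookie);
    } else {
        releaseObjects();
        if (mData) free(mData);
        if (mObjects) free(mObjects);
    }
}
```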
That is the entry point of the release path. Once the request reaches kernel space, binder_free_buf runs: it frees the memory allocated for this transaction and updates the binder_proc's binder_buffer bookkeeping, re-marking which memory blocks are in use and which are free.
```c
static void binder_free_buf(struct binder_proc *proc,
                            struct binder_buffer *buffer)
{
    size_t size, buffer_size;

    buffer_size = binder_buffer_size(proc, buffer);
    size = ALIGN(buffer->data_size, sizeof(void *)) +
           ALIGN(buffer->offsets_size, sizeof(void *));
    binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
                 "binder: %d: binder_free_buf %p size %zd buffer"
                 "_size %zd\n", proc->pid, buffer, size, buffer_size);

    if (buffer->async_transaction) {
        proc->free_async_space += size + sizeof(struct binder_buffer);
        binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
                     "binder: %d: binder_free_buf size %zd "
                     "async free %zd\n", proc->pid, size,
                     proc->free_async_space);
    }
    binder_update_page_range(proc, 0,
        (void *)PAGE_ALIGN((uintptr_t)buffer->data),
        (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
        NULL);
    rb_erase(&buffer->rb_node, &proc->allocated_buffers);
    buffer->free = 1;
    if (!list_is_last(&buffer->entry, &proc->buffers)) {
        struct binder_buffer *next = list_entry(buffer->entry.next,
                                                struct binder_buffer, entry);
        if (next->free) {
            rb_erase(&next->rb_node, &proc->free_buffers);
            binder_delete_free_buffer(proc, next);
        }
    }
    if (proc->buffers.next != &buffer->entry) {
        struct binder_buffer *prev = list_entry(buffer->entry.prev,
                                                struct binder_buffer, entry);
        if (prev->free) {
            binder_delete_free_buffer(proc, buffer);
            rb_erase(&prev->rb_node, &proc->free_buffers);
            buffer = prev;
        }
    }
    binder_insert_free_buffer(proc, buffer);
}
```
The Java layer works the same way: it releases the memory through JNI by calling Parcel's freeData(). In user space, every time a BR_TRANSACTION or BR_REPLY is processed, freeBuffer is eventually used to send the request that releases the corresponding kernel memory.
Other posts in this series:
So you think you know Binder? Try these questions (Part 1)
So you think you know Binder? Try these questions (Part 2)
So you think you know Binder? Try these questions (Part 3)
For reference only; corrections are welcome.