I was tied up with a project for a while and didn't post anything; starting now I'll write up some analysis of Android source internals, using the 6.0.1_r10 Android source as the reference.
servicemanager is Android's service manager and one of its most fundamental components; analyzing it is a good way to see how binder handles things in depth. Before starting, a word on how I read source code (or any very complex codebase): I go layer by layer. At each layer I first get the overall structure, drill into specific points only if they interest me, and summarize the layer before moving on. I find this easier to understand, and it avoids the trap of following one thread all the way down and getting lost in convoluted code. Of course, this is only my personal experience. These notes are written for myself; if they help anyone else, I'll be very happy.
I'd also recommend Luo Shengyang's (罗升阳) blog posts here; they are excellent and make a good companion reference.
The servicemanager source lives in /frameworks/native/cmds/servicemanager/service_manager.c:
int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
            abort();
        }

        if (getcon(&service_manager_context) != 0) {
            ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
            abort();
        }
    }

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    binder_loop(bs, svcmgr_handler);

    return 0;
}
1. binder_open opens the binder driver device;
2. binder_become_context_manager(bs) registers the process as binder's context manager;
3. binder_loop(bs, svcmgr_handler) enters a loop, waiting as a server for client requests.
binder_open is defined in /frameworks/native/cmds/servicemanager/binder.c:
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
First a binder_state struct is allocated, and the rest of the function just fills in its members: bs->fd gets the file descriptor of the opened driver device, and bs->mapped gets the memory-mapped address.
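For reference, the binder_state struct being filled in is tiny; as far as I recall, binder.c defines it as nothing more than:

struct binder_state
{
    int fd;         /* fd of the opened /dev/binder device */
    void *mapped;   /* start address returned by mmap */
    size_t mapsize; /* size of the mapped region (128KB here) */
};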
As an aside, the use of goto here is quite disciplined. It shows that no statement is inherently good or bad; what matters is how you use it.
At this point you can already guess that binder's mechanism is built on memory mapping, or rather file mapping, since on Linux any device can be treated as a file.
Don't dig deeper for now. Going back to main in service_manager.c, the next step is binder_become_context_manager, which registers the process as binder's manager:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
This does exactly one thing: it issues an ioctl telling the driver to set the context manager, passing 0. Again we can guess that this 0 carries a specific meaning, presumably servicemanager itself; we'll come back to this question later.
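To preview how that 0 is used, here is a hedged sketch of how a client of this same binder.c layer would reach servicemanager, modeled on svcmgr_lookup in bctest.c from the same directory. The binder_call/bio_*/binder_done helpers and the SVC_MGR_* constants are quoted from memory, so treat the exact signatures as assumptions:

#include "binder.h"

/* Sketch: ask the context manager to look up a service by name.
 * Handle 0 is the "magic" target: the driver routes any transaction
 * addressed to handle 0 to whoever called BINDER_SET_CONTEXT_MGR. */
uint32_t lookup_via_handle0(struct binder_state *bs, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);                 /* strict-mode policy header */
    bio_put_string16_x(&msg, SVC_MGR_NAME);  /* interface token */
    bio_put_string16_x(&msg, name);          /* service name to look up */

    /* target handle 0 == servicemanager */
    if (binder_call(bs, &msg, &reply, 0, SVC_MGR_CHECK_SERVICE))
        return 0;

    handle = bio_get_ref(&reply);
    binder_done(bs, &msg, &reply);  /* let the driver free the reply buffer */

    return handle;
}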
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
1. First it writes a BC_ENTER_LOOPER control code via binder_write, telling the driver device to enter looper state (internally binder_write also goes through the BINDER_WRITE_READ ioctl to write to the driver; its body is quoted after this list);
2. It enters an infinite loop, repeatedly reading data from the device; on a successful read it calls binder_parse;
3. binder_parse, literally "parse binder"; what exactly it does is not yet clear, but we can guess it processes the data just read.
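For completeness, binder_write from the same binder.c (quoted from memory, so minor details may differ) shows how a pure write is expressed: the same BINDER_WRITE_READ ioctl, just with the read side zeroed out:

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;                 /* bytes of commands to hand to the driver */
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;                    /* read side disabled: write-only call */
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}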
binder_parse belongs to the same binder.c layer, so let's look at it:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr,"  %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
                /* todo FREE BUFFER */
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
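Before walking through the cases, one thing worth pinning down is the type of func. binder_handler is just a function-pointer typedef from servicemanager's binder.h; from memory it reads roughly:

/* Callback invoked for every incoming BR_TRANSACTION; the handler
 * reads its arguments from msg and writes its response into reply. */
typedef int (*binder_handler)(struct binder_state *bs,
                              struct binder_transaction_data *txn,
                              struct binder_io *msg,
                              struct binder_io *reply);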
The parse loop walks the buffer just read from the driver, taking each 32-bit word as a cmd and switching on it. The BR_ prefix marks commands fed back by the device driver. BR_TRANSACTION literally reads as a transaction, so we can guess it handles content received from the sending side (the client). Following the BR_TRANSACTION path: the received data is first cast to a binder_transaction_data struct, then binder_dump_txn is called, which basically just prints some information and isn't of much interest. Then comes the key part: func is invoked. It is a binder_handler, which, as the typedef above shows, is a callback. Going back to main in servicemanager, the callback registered there is svcmgr_handler, whose body is also in service_manager.c:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%p code=%d pid=%d uid=%d\n",
    //  (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
At a glance, this is where the transmitted data actually gets handled, including the concrete flows for add service and friends. We won't dig into it just yet.
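One detail worth noting before moving on: the svclist that SVC_MGR_LIST_SERVICES walks is a plain singly linked list of registered services. From memory, the node type in service_manager.c is roughly:

struct svcinfo
{
    struct svcinfo *next;       /* next node in the global svclist */
    uint32_t handle;            /* binder handle of the registered service */
    struct binder_death death;  /* death-notification bookkeeping */
    int allow_isolated;         /* may isolated (sandboxed) processes use it? */
    size_t len;                 /* name length in uint16_t units */
    uint16_t name[0];           /* UTF-16 service name, allocated inline */
};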
At this point we can see that in the servicemanager -> binder.c stack, servicemanager implements the system's service management while binder.c provides the API for operating the driver device. To recap the whole flow:
1. Open the binder driver device;
2. Register itself as the manager of the binder context by passing 0 to the device driver through binder.c (via ioctl);
3. Enter the binder_loop loop: keep reading from the binder device driver, parse what comes back, and after switching on the cmd hand it over to servicemanager for the real processing;
4. Inside servicemanager, the content of the data read decides which cmd action is performed, add service and so on.
With that, the skeleton of this layer is fairly clear. Structuring it this way factors binder out into a standalone API layer that any caller can build on: binder.c deals only with communicating with the binder device driver and hands everything else to its caller, a clean and clever piece of decoupling.
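As a footnote to step 3 of the recap: the single BINDER_WRITE_READ ioctl carries both directions at once. The struct comes from the kernel's binder UAPI header (quoted from memory, field types per include/uapi/linux/android/binder.h):

struct binder_write_read {
    binder_size_t    write_size;     /* bytes of commands to send */
    binder_size_t    write_consumed; /* filled by driver: bytes it took */
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;      /* capacity of the read buffer */
    binder_size_t    read_consumed;  /* filled by driver: bytes returned */
    binder_uintptr_t read_buffer;
};

This also explains binder_loop's behavior: its write side is zeroed and its read side points at readbuf, so the ioctl acts as a blocking read until the driver has something to deliver.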