Understanding CPU-Related Logs from a Source-Code Perspective

Introduction

(This article was originally published on my personal blog, CheapTalks. Feel free to check it out.)

On Android, the issue ordinary developers run into most often is ANR (Application Not Responding), meaning the application's main thread has stopped responding. The root cause lies in a deliberate design choice of the Android framework: UI-related work, the kind of work users are most sensitive to, is placed on a single dedicated thread, the main thread. Once that thread fails to complete a task within the allotted time, such as a broadcast's onReceive, an Activity transition, or bindApplication, the framework raises the so-called ANR and shows a dialog asking the user to either keep waiting or kill the process that failed to finish.

The familiar explanation is that the developer performed a time-consuming operation on the main thread, making the task run too long. Problems like that are usually easy to locate and fix: a bugreport is often enough to expose the root cause of the ANR, and failing that we can still pinpoint the slow spot from the logs.

Today we will look at a different but also common situation, one that is usually caused by weak CPU hardware or a poorly tuned low-level CPU governor policy. These problems are maddening: code paths that by any ordinary reasoning should never be slow can still trigger ANRs and jank, giving users an extremely poor experience.

In this article I will first walk through an example, using the ANR logs to show the state of the system at the time, and then look at the Android framework source to see how these logs are produced.

Example

Below is a log excerpt I captured. The system keeps printing timing logs for BIND_APPLICATION; they appear very frequently, roughly once every two seconds on average, and a single bindApplication call can sometimes take more than 10 seconds.

This is a serious situation. When the system dispatches a foreground broadcast, the receiver has to finish within 10 seconds, otherwise an ANR is raised. If that foreground broadcast targets a process that is not yet running, the system must first start the process and call bindApplication to trigger Application.onCreate. During this window, BIND_APPLICATION and RECEIVER are enqueued, in that order, onto the ActivityThread$H main-thread queue; if BIND_APPLICATION takes too long to process, the RECEIVER task is indirectly starved and the broadcast ends in an ANR. By the same principle, input events may also go unhandled for too long, producing jank the user can clearly perceive. (A hedged sketch of the timeout constants involved follows the log excerpt below.)

08-28 20:35:58.737  4635  4635 I tag_activity_manager: [0,com.android.providers.calendar,110,3120]
08-28 20:35:58.757  4653  4653 I tag_activity_manager: [0,com.xiaomi.metoknlp,110,3073]
08-28 20:35:58.863  4601  4601 I tag_activity_manager: [0,android.process.acore,110,3392]
08-28 20:36:00.320  5040  5040 I tag_activity_manager: [0,com.lbe.security.miui,110,3045]
08-28 20:36:00.911  4233  4233 I tag_activity_manager: [0,com.miui.securitycenter.remote,110,8653]
08-28 20:36:03.254  4808  4808 I tag_activity_manager: [0,com.android.phone,110,7059]
08-28 20:36:05.538  5246  5246 I tag_activity_manager: [0,com.xiaomi.market,110,3406]
08-28 20:36:09.006  5153  5153 I tag_activity_manager: [0,com.miui.klo.bugreport,110,10166]
08-28 20:36:09.070  5118  5118 I tag_activity_manager: [0,com.android.settings,110,10680]
08-28 20:36:11.259  5570  5570 I tag_activity_manager: [0,com.miui.core,110,4895]
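For context, here is where the 10-second budget mentioned above comes from. The constants and names below are recalled from AOSP ActivityManagerService and BroadcastQueue; treat them as a sketch, since exact names and values can differ across Android versions and vendor builds.

// Sketch of the AOSP broadcast timeout constants (ActivityManagerService); values may vary by version.
static final int BROADCAST_FG_TIMEOUT = 10 * 1000;  // foreground broadcasts: 10 seconds
static final int BROADCAST_BG_TIMEOUT = 60 * 1000;  // background broadcasts: 60 seconds

// When a receiver is dispatched, BroadcastQueue posts a BROADCAST_TIMEOUT_MSG scheduled at
// (dispatch time + timeout). If the receiver has not finished when the message fires, for
// example because BIND_APPLICATION is still occupying the main thread, the queue's
// broadcastTimeoutLocked() path ends up calling appNotResponding() for the stuck process.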

ActivityManagerService reaches the application's ActivityThread through a Binder call, and the task is then enqueued onto the main-thread message queue:

// The binder call from AMS lands in this method
        public final void bindApplication(String processName, ApplicationInfo appInfo, List<ProviderInfo> providers, ComponentName instrumentationName, ProfilerInfo profilerInfo, Bundle instrumentationArgs, IInstrumentationWatcher instrumentationWatcher, IUiAutomationConnection instrumentationUiConnection, int debugMode, boolean enableOpenGlTrace, boolean isRestrictedBackupMode, boolean persistent, Configuration config, CompatibilityInfo compatInfo, Map<String, IBinder> services, Bundle coreSettings) {

...

                // Wrap the arguments into an AppBindData object
            AppBindData data = new AppBindData();
...
            sendMessage(H.BIND_APPLICATION, data);
        }
...

    private class H extends Handler {
...
        public static final int BIND_APPLICATION = 110;
...

        public void handleMessage(Message msg) {
            if (DEBUG_MESSAGES) Slog.v(TAG, ">>> handling: " + codeToString(msg.what));
            switch (msg.what) {
            ...
            case BIND_APPLICATION:
                  // Executed when the message is dequeued on the main thread
                Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "bindApplication");
                AppBindData data = (AppBindData)msg.obj;
                handleBindApplication(data);
                Trace.traceEnd(Trace.TRACE_TAG_ACTIVITY_MANAGER);
                break;
            ...
            }
...
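sendMessage itself does nothing more than wrap the payload in a Message and post it to mH, the Handler bound to the main looper. A sketch paraphrased from AOSP ActivityThread (debug logging omitted; details may differ by version):

    private void sendMessage(int what, Object obj) {
        sendMessage(what, obj, 0, 0, false);
    }

    private void sendMessage(int what, Object obj, int arg1, int arg2, boolean async) {
        Message msg = Message.obtain();
        msg.what = what;
        msg.obj = obj;
        msg.arg1 = arg1;
        msg.arg2 = arg2;
        if (async) {
            msg.setAsynchronous(true);
        }
        mH.sendMessage(msg);   // mH is the ActivityThread$H handler running on the main thread
    }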

In the example above the system soon produced an ANR, just as expected, and the problem was not caused by time-consuming work inside the app but by excessive system CPU load. The CPU load log is shown below:

// bindApplication has been blocking the main thread for more than 57 seconds
Running message is { when=-57s68ms what=110 obj=AppBindData{appInfo=ApplicationInfo{be09873 com.android.settings}} target=android.app.ActivityThread$H planTime=1504652580856 dispatchTime=1504652580897 finishTime=0 }
Message 0: { when=-57s35ms what=140 arg1=5 target=android.app.ActivityThread$H planTime=1504652580890 dispatchTime=0 finishTime=0 }

// A look at the system state at the time of the problem
08-28 20:36:13.000 2692 2759 E ActivityManager: ANR in com.android.settings 
08-28 20:36:13.000 2692 2759 E ActivityManager: PID: 5118 
// From left to right: the CPU load averaged over the last 1, 5, and 15 minutes; values above 11 here indicate overload
08-28 20:36:13.000 2692 2759 E ActivityManager: Load: 20.12 / 13.05 / 6.96 
// CPU usage when the ANR occurred
08-28 20:36:13.000 2692 2759 E ActivityManager: CPU usage from 2967ms to -4440ms ago:  
// system_server is far too busy
08-28 20:36:13.000 2692 2759 E ActivityManager: 73% 2692/system_server: 57% user + 15% kernel / faults: 16217 minor 11 major 

08-28 20:36:13.000 2692 2759 E ActivityManager: 61% 4840/com.miui.home: 55% user + 5.4% kernel / faults: 26648 minor 17 major 
08-28 20:36:13.000 2692 2759 E ActivityManager: 19% 330/mediaserver: 17% user + 2.1% kernel / faults: 5180 minor 18 major 
08-28 20:36:13.000 2692 2759 E ActivityManager: 18% 4096/com.android.systemui: 14% user + 4% kernel / faults: 12965 minor 30 major 
...

Of course, proving that the system caused the ANR takes more than the CPU load log; we also need to rule out time-consuming work in the app itself, slow binder calls, waiting on locks, and so on. Once those are excluded, we can conclude that the problem is scarce CPU resources: the app executing bindApplication did not get enough time slices, the task was not finished in time, and the broadcast or service queued behind it indirectly ended up with an ANR.

Diving into the Source

AMS.appNotResponding

private static final String TAG = TAG_WITH_CLASS_NAME ? "ActivityManagerService" : TAG_AM;

    final void appNotResponding(ProcessRecord app, ActivityRecord activity, ActivityRecord parent, boolean aboveSystem, final String annotation) {
...

        long anrTime = SystemClock.uptimeMillis();
        // If CPU monitoring is enabled, refresh the CPU usage data first
        if (MONITOR_CPU_USAGE) {
            updateCpuStatsNow();
        }

...
         // Create a new object used to track per-process CPU usage
         // It will also print the CPU usage of every thread
        final ProcessCpuTracker processCpuTracker = new ProcessCpuTracker(true);

...

        String cpuInfo = null;
        if (MONITOR_CPU_USAGE) {
            updateCpuStatsNow();
            synchronized (mProcessCpuTracker) {
                cpuInfo = mProcessCpuTracker.printCurrentState(anrTime);
            }
            info.append(processCpuTracker.printCurrentLoad());
            info.append(cpuInfo);
        }

        info.append(processCpuTracker.printCurrentState(anrTime));

         // Print the log directly
        Slog.e(TAG, info.toString());
...
    }
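The "Load: 20.12 / 13.05 / 6.96" line in the ANR log comes from processCpuTracker.printCurrentLoad(), which simply formats the three load averages read from /proc/loadavg. A minimal sketch of the output it produces (not the exact AOSP implementation):

    // Sketch only: formats the 1/5/15-minute load averages the way they appear in the ANR log.
    String printCurrentLoad() {
        return "Load: " + mLoad1 + " / " + mLoad5 + " / " + mLoad15 + "\n";
    }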

AMS.updateCpuStatsNow

void updateCpuStatsNow() {
        synchronized (mProcessCpuTracker) {
            mProcessCpuMutexFree.set(false);
            final long now = SystemClock.uptimeMillis();
            boolean haveNewCpuStats = false;

              // Refresh the CPU stats no more than once every MONITOR_CPU_MIN_TIME (5 seconds)
            if (MONITOR_CPU_USAGE &&
                    mLastCpuTime.get() < (now - MONITOR_CPU_MIN_TIME)) {
                mLastCpuTime.set(now);
                mProcessCpuTracker.update();
                if (mProcessCpuTracker.hasGoodLastStats()) {
                    haveNewCpuStats = true;
                    //Slog.i(TAG, mProcessCpu.printCurrentState());
                    //Slog.i(TAG, "Total CPU usage: "
                    // + mProcessCpu.getTotalCpuPercent() + "%");

                    // Slog the cpu usage if the property is set.
                    if ("true".equals(SystemProperties.get("events.cpu"))) {
                         // Time spent in user mode
                        int user = mProcessCpuTracker.getLastUserTime();
                        // Time spent in kernel (system) mode
                        int system = mProcessCpuTracker.getLastSystemTime();
                        // Time spent waiting on I/O
                        int iowait = mProcessCpuTracker.getLastIoWaitTime();
                        // Time spent servicing hardware interrupts
                        int irq = mProcessCpuTracker.getLastIrqTime();
                        // Time spent servicing soft interrupts
                        int softIrq = mProcessCpuTracker.getLastSoftIrqTime();
                        // Idle time
                        int idle = mProcessCpuTracker.getLastIdleTime();

                        int total = user + system + iowait + irq + softIrq + idle;
                        if (total == 0) total = 1;

                             // Write the percentages to the event log
                        EventLog.writeEvent(EventLogTags.CPU,
                                ((user + system + iowait + irq + softIrq) * 100) / total,
                                (user * 100) / total,
                                (system * 100) / total,
                                (iowait * 100) / total,
                                (irq * 100) / total,
                                (softIrq * 100) / total);
                    }
                }
            }

              // Categorize the CPU times and push the updates into battery stats
            final BatteryStatsImpl bstats = mBatteryStatsService.getActiveStatistics();
            synchronized (bstats) {
                synchronized (mPidsSelfLocked) {
                    if (haveNewCpuStats) {
                        if (bstats.startAddingCpuLocked()) {
                            int totalUTime = 0;
                            int totalSTime = 0;
                            // Iterate over all stats collected by the ProcessCpuTracker
                            final int N = mProcessCpuTracker.countStats();
                            for (int i = 0; i < N; i++) {
                                ProcessCpuTracker.Stats st = mProcessCpuTracker.getStats(i);
                                if (!st.working) {
                                    continue;
                                }
                                // Update the CPU time recorded on the ProcessRecord
                                ProcessRecord pr = mPidsSelfLocked.get(st.pid);
                                totalUTime += st.rel_utime;
                                totalSTime += st.rel_stime;
                                if (pr != null) {
                                    BatteryStatsImpl.Uid.Proc ps = pr.curProcBatteryStats;
                                    if (ps == null || !ps.isActive()) {
                                        pr.curProcBatteryStats = ps = bstats.getProcessStatsLocked(
                                                pr.info.uid, pr.processName);
                                    }
                                    ps.addCpuTimeLocked(st.rel_utime, st.rel_stime);
                                    pr.curCpuTime += st.rel_utime + st.rel_stime;
                                } else {
                                    BatteryStatsImpl.Uid.Proc ps = st.batteryStats;
                                    if (ps == null || !ps.isActive()) {
                                        st.batteryStats = ps = bstats.getProcessStatsLocked(
                                                bstats.mapUid(st.uid), st.name);
                                    }
                                    ps.addCpuTimeLocked(st.rel_utime, st.rel_stime);
                                }
                            }
                            // Push the data into BatteryStatsImpl
                            final int userTime = mProcessCpuTracker.getLastUserTime();
                            final int systemTime = mProcessCpuTracker.getLastSystemTime();
                            final int iowaitTime = mProcessCpuTracker.getLastIoWaitTime();
                            final int irqTime = mProcessCpuTracker.getLastIrqTime();
                            final int softIrqTime = mProcessCpuTracker.getLastSoftIrqTime();
                            final int idleTime = mProcessCpuTracker.getLastIdleTime();
                            bstats.finishAddingCpuLocked(totalUTime, totalSTime, userTime,
                                    systemTime, iowaitTime, irqTime, softIrqTime, idleTime);
                        }
                    }
                }

                    // Write battery stats to disk every 30 minutes
                if (mLastWriteTime < (now - BATTERY_STATS_TIME)) {
                    mLastWriteTime = now;
                    mBatteryStatsService.scheduleWriteToDisk();
                }
            }
        }
    }
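For reference, the constants used above are defined in ActivityManagerService roughly as follows. The values are recalled from AOSP and may differ between versions, so treat this as a sketch.

    static final boolean MONITOR_CPU_USAGE = true;          // enables the CPU sampling shown above
    static final long MONITOR_CPU_MIN_TIME = 5 * 1000;      // sample at most once every 5 seconds
    static final long BATTERY_STATS_TIME = 30 * 60 * 1000;  // flush battery stats every 30 minutes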

BatteryStatsImpl.finishAddingCpuLocked

public void finishAddingCpuLocked(int totalUTime, int totalSTime, int statUserTime, int statSystemTime, int statIOWaitTime, int statIrqTime, int statSoftIrqTime, int statIdleTime) {
        if (DEBUG) Slog.d(TAG, "Adding cpu: tuser=" + totalUTime + " tsys=" + totalSTime
                + " user=" + statUserTime + " sys=" + statSystemTime
                + " io=" + statIOWaitTime + " irq=" + statIrqTime
                + " sirq=" + statSoftIrqTime + " idle=" + statIdleTime);
        mCurStepCpuUserTime += totalUTime;
        mCurStepCpuSystemTime += totalSTime;
        mCurStepStatUserTime += statUserTime;
        mCurStepStatSystemTime += statSystemTime;
        mCurStepStatIOWaitTime += statIOWaitTime;
        mCurStepStatIrqTime += statIrqTime;
        mCurStepStatSoftIrqTime += statSoftIrqTime;
        mCurStepStatIdleTime += statIdleTime;
    }

ProcessCpuTracker.update

The update method mainly reads /proc/stat and /proc/loadavg to refresh the current CPU times. The load callback onLoadChanged is used by LoadAverageService, which displays a live view on screen so the real-time CPU data can be inspected.

Sample contents of these two proc nodes, together with a brief explanation, are listed at the end of this article.

The /proc directory is a virtual one: its subdirectories and files are virtual as well and take up no actual storage space; it allows real-time system information to be read dynamically.
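Before walking through the framework's update(), here is a minimal, self-contained sketch of reading /proc/loadavg by hand. This is plain Java, not framework code, and the class name is just for illustration; it only assumes the file layout shown in the sample data at the end of this article.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LoadAvgReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/loadavg"))) {
            // Example line: "10.55 19.87 25.93 2/2082 7475"
            String[] f = r.readLine().trim().split("\\s+");
            float load1 = Float.parseFloat(f[0]);   // load over the last 1 minute
            float load5 = Float.parseFloat(f[1]);   // load over the last 5 minutes
            float load15 = Float.parseFloat(f[2]);  // load over the last 15 minutes
            System.out.println("Load: " + load1 + " / " + load5 + " / " + load15);
        }
    }
}

With that in mind, here is the framework's update() itself: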

public void update() {
        if (DEBUG) Slog.v(TAG, "Update: " + this);

        final long nowUptime = SystemClock.uptimeMillis();
        final long nowRealtime = SystemClock.elapsedRealtime();

          // Reuse a scratch long[7] array
        final long[] sysCpu = mSystemCpuData;
        // Read the /proc/stat file
        if (Process.readProcFile("/proc/stat", SYSTEM_CPU_FORMAT,
                null, sysCpu, null)) {
            // Total user time is user + nice time.
            final long usertime = (sysCpu[0]+sysCpu[1]) * mJiffyMillis;
            // Total system time is simply system time.
            final long systemtime = sysCpu[2] * mJiffyMillis;
            // Total idle time is simply idle time.
            final long idletime = sysCpu[3] * mJiffyMillis;
            // Total irq time is iowait + irq + softirq time.
            final long iowaittime = sysCpu[4] * mJiffyMillis;
            final long irqtime = sysCpu[5] * mJiffyMillis;
            final long softirqtime = sysCpu[6] * mJiffyMillis;

            // This code is trying to avoid issues with idle time going backwards,
            // but currently it gets into situations where it triggers most of the time. :(
            if (true || (usertime >= mBaseUserTime && systemtime >= mBaseSystemTime
                    && iowaittime >= mBaseIoWaitTime && irqtime >= mBaseIrqTime
                    && softirqtime >= mBaseSoftIrqTime && idletime >= mBaseIdleTime)) {
                mRelUserTime = (int)(usertime - mBaseUserTime);
                mRelSystemTime = (int)(systemtime - mBaseSystemTime);
                mRelIoWaitTime = (int)(iowaittime - mBaseIoWaitTime);
                mRelIrqTime = (int)(irqtime - mBaseIrqTime);
                mRelSoftIrqTime = (int)(softirqtime - mBaseSoftIrqTime);
                mRelIdleTime = (int)(idletime - mBaseIdleTime);
                mRelStatsAreGood = true;
...

                mBaseUserTime = usertime;
                mBaseSystemTime = systemtime;
                mBaseIoWaitTime = iowaittime;
                mBaseIrqTime = irqtime;
                mBaseSoftIrqTime = softirqtime;
                mBaseIdleTime = idletime;

            } else {
                mRelUserTime = 0;
                mRelSystemTime = 0;
                mRelIoWaitTime = 0;
                mRelIrqTime = 0;
                mRelSoftIrqTime = 0;
                mRelIdleTime = 0;
                mRelStatsAreGood = false;
                Slog.w(TAG, "/proc/stats has gone backwards; skipping CPU update");
                return;
            }
        }

        mLastSampleTime = mCurrentSampleTime;
        mCurrentSampleTime = nowUptime;
        mLastSampleRealTime = mCurrentSampleRealTime;
        mCurrentSampleRealTime = nowRealtime;

        final StrictMode.ThreadPolicy savedPolicy = StrictMode.allowThreadDiskReads();
        try {
               // Collect per-process stats from the /proc nodes
            mCurPids = collectStats("/proc", -1, mFirst, mCurPids, mProcStats);
        } finally {
            StrictMode.setThreadPolicy(savedPolicy);
        }

        final float[] loadAverages = mLoadAverageData;
        // Read /proc/loadavg,
        // i.e. the CPU load over the last 1, 5, and 15 minutes
        if (Process.readProcFile("/proc/loadavg", LOAD_AVERAGE_FORMAT,
                null, null, loadAverages)) {
            float load1 = loadAverages[0];
            float load5 = loadAverages[1];
            float load15 = loadAverages[2];
            if (load1 != mLoad1 || load5 != mLoad5 || load15 != mLoad15) {
                mLoad1 = load1;
                mLoad5 = load5;
                mLoad15 = load15;
                // onLoadChanged is a no-op here; an inner class in LoadAverageService overrides it to refresh the displayed load data
                onLoadChanged(load1, load5, load15);
            }
        }
...
    }
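The SYSTEM_CPU_FORMAT and LOAD_AVERAGE_FORMAT arrays passed to Process.readProcFile above are bitmask descriptors that tell the native parser how each field is terminated and where to store it. In AOSP ProcessCpuTracker they look roughly like this (the PROC_* flags come from android.os.Process; treat this as a sketch, since the exact definitions may vary by version):

    private static final int[] SYSTEM_CPU_FORMAT = new int[] {
        PROC_SPACE_TERM|PROC_COMBINE,   // skip the leading "cpu" token and the repeated spaces
        PROC_SPACE_TERM|PROC_OUT_LONG,  // 1: user time
        PROC_SPACE_TERM|PROC_OUT_LONG,  // 2: nice time
        PROC_SPACE_TERM|PROC_OUT_LONG,  // 3: sys time
        PROC_SPACE_TERM|PROC_OUT_LONG,  // 4: idle time
        PROC_SPACE_TERM|PROC_OUT_LONG,  // 5: iowait time
        PROC_SPACE_TERM|PROC_OUT_LONG,  // 6: irq time
        PROC_SPACE_TERM|PROC_OUT_LONG   // 7: softirq time
    };

    private static final int[] LOAD_AVERAGE_FORMAT = new int[] {
        PROC_SPACE_TERM|PROC_OUT_FLOAT, // 0: load over the last 1 minute
        PROC_SPACE_TERM|PROC_OUT_FLOAT, // 1: load over the last 5 minutes
        PROC_SPACE_TERM|PROC_OUT_FLOAT  // 2: load over the last 15 minutes
    };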

ProcessCpuTracker.collectStats

private int[] collectStats(String statsFile, int parentPid, boolean first,
            int[] curPids, ArrayList<Stats> allProcs) {
         // Get the pids we are interested in
        int[] pids = Process.getPids(statsFile, curPids);
        int NP = (pids == null) ? 0 : pids.length;
        int NS = allProcs.size();
        int curStatsIndex = 0;
        for (int i=0; i<NP; i++) {
            int pid = pids[i];
            if (pid < 0) {
                NP = pid;
                break;
            }
            Stats st = curStatsIndex < NS ? allProcs.get(curStatsIndex) : null;

            if (st != null && st.pid == pid) {
                // Update an existing process...
                st.added = false;
                st.working = false;
                curStatsIndex++;
                if (DEBUG) Slog.v(TAG, "Existing "
                        + (parentPid < 0 ? "process" : "thread")
                        + " pid " + pid + ": " + st);

                if (st.interesting) {
                    final long uptime = SystemClock.uptimeMillis();

                        // Scratch buffer for the process stats
                    final long[] procStats = mProcessStatsData;
                    if (!Process.readProcFile(st.statFile.toString(),
                            PROCESS_STATS_FORMAT, null, procStats, null)) {
                        continue;
                    }

                    final long minfaults = procStats[PROCESS_STAT_MINOR_FAULTS];
                    final long majfaults = procStats[PROCESS_STAT_MAJOR_FAULTS];
                    final long utime = procStats[PROCESS_STAT_UTIME] * mJiffyMillis;
                    final long stime = procStats[PROCESS_STAT_STIME] * mJiffyMillis;

                    if (utime == st.base_utime && stime == st.base_stime) {
                        st.rel_utime = 0;
                        st.rel_stime = 0;
                        st.rel_minfaults = 0;
                        st.rel_majfaults = 0;
                        if (st.active) {
                            st.active = false;
                        }
                        continue;
                    }

                    if (!st.active) {
                        st.active = true;
                    }
...

                    st.rel_uptime = uptime - st.base_uptime;
                    st.base_uptime = uptime;
                    st.rel_utime = (int)(utime - st.base_utime);
                    st.rel_stime = (int)(stime - st.base_stime);
                    st.base_utime = utime;
                    st.base_stime = stime;
                    st.rel_minfaults = (int)(minfaults - st.base_minfaults);
                    st.rel_majfaults = (int)(majfaults - st.base_majfaults);
                    st.base_minfaults = minfaults;
                    st.base_majfaults = majfaults;
                    st.working = true;
                }

                continue;
            }

            if (st == null || st.pid > pid) {
                // We have a new process!
                st = new Stats(pid, parentPid, mIncludeThreads);
                allProcs.add(curStatsIndex, st);
                curStatsIndex++;
                NS++;
...

                final String[] procStatsString = mProcessFullStatsStringData;
                final long[] procStats = mProcessFullStatsData;
                st.base_uptime = SystemClock.uptimeMillis();
                String path = st.statFile.toString();
                //Slog.d(TAG, "Reading proc file: " + path);
                if (Process.readProcFile(path, PROCESS_FULL_STATS_FORMAT, procStatsString,
                        procStats, null)) {
                    // This is a possible way to filter out processes that
                    // are actually kernel threads... do we want to? Some
                    // of them do use CPU, but there can be a *lot* that are
                    // not doing anything.
                    st.vsize = procStats[PROCESS_FULL_STAT_VSIZE];
                    if (true || procStats[PROCESS_FULL_STAT_VSIZE] != 0) {
                        st.interesting = true;
                        st.baseName = procStatsString[0];
                        st.base_minfaults = procStats[PROCESS_FULL_STAT_MINOR_FAULTS];
                        st.base_majfaults = procStats[PROCESS_FULL_STAT_MAJOR_FAULTS];
                        st.base_utime = procStats[PROCESS_FULL_STAT_UTIME] * mJiffyMillis;
                        st.base_stime = procStats[PROCESS_FULL_STAT_STIME] * mJiffyMillis;
                    } else {
                        Slog.i(TAG, "Skipping kernel process pid " + pid
                                + " name " + procStatsString[0]);
                        st.baseName = procStatsString[0];
                    }
                } else {
                    Slog.w(TAG, "Skipping unknown process pid " + pid);
                    st.baseName = "<unknown>";
                    st.base_utime = st.base_stime = 0;
                    st.base_minfaults = st.base_majfaults = 0;
                }

                if (parentPid < 0) {
                    getName(st, st.cmdlineFile);
                    if (st.threadStats != null) {
                        mCurThreadPids = collectStats(st.threadsDir, pid, true,
                                mCurThreadPids, st.threadStats);
                    }
                } else if (st.interesting) {
                    st.name = st.baseName;
                    st.nameWidth = onMeasureProcessName(st.name);
                }

                if (DEBUG) Slog.v("Load", "Stats added " + st.name + " pid=" + st.pid
                        + " utime=" + st.base_utime + " stime=" + st.base_stime
                        + " minfaults=" + st.base_minfaults + " majfaults=" + st.base_majfaults);

                st.rel_utime = 0;
                st.rel_stime = 0;
                st.rel_minfaults = 0;
                st.rel_majfaults = 0;
                st.added = true;
                if (!first && st.interesting) {
                    st.working = true;
                }
                continue;
            }

            // This process has gone away!
            st.rel_utime = 0;
            st.rel_stime = 0;
            st.rel_minfaults = 0;
            st.rel_majfaults = 0;
            st.removed = true;
            st.working = true;
            allProcs.remove(curStatsIndex);
            NS--;
            if (DEBUG) Slog.v(TAG, "Removed "
                    + (parentPid < 0 ? "process" : "thread")
                    + " pid " + pid + ": " + st);
            // Decrement the loop counter so that we process the current pid
            // again the next time through the loop.
            i--;
            continue;
        }

        while (curStatsIndex < NS) {
            // This process has gone away!
            final Stats st = allProcs.get(curStatsIndex);
            st.rel_utime = 0;
            st.rel_stime = 0;
            st.rel_minfaults = 0;
            st.rel_majfaults = 0;
            st.removed = true;
            st.working = true;
            allProcs.remove(curStatsIndex);
            NS--;
            if (localLOGV) Slog.v(TAG, "Removed pid " + st.pid + ": " + st);
        }

        return pids;
    }

Process.readProcFile

jboolean android_os_Process_readProcFile(JNIEnv* env, jobject clazz, jstring file, jintArray format, jobjectArray outStrings, jlongArray outLongs, jfloatArray outFloats) {
...
    int fd = open(file8, O_RDONLY);
...
    env->ReleaseStringUTFChars(file, file8);

    // Read the file contents into a local buffer (at most 255 bytes plus a terminator)
    char buffer[256];
    const int len = read(fd, buffer, sizeof(buffer)-1);
    close(fd);
...
    buffer[len] = 0;

    return android_os_Process_parseProcLineArray(env, clazz, buffer, 0, len,
            format, outStrings, outLongs, outFloats);

}

Process.cpp parseProcLineArray

jboolean android_os_Process_parseProcLineArray(JNIEnv* env, jobject clazz, char* buffer, jint startIndex, jint endIndex, jintArray format, jobjectArray outStrings, jlongArray outLongs, jfloatArray outFloats) {
    // First get the lengths of the format array and of each output array
    const jsize NF = env->GetArrayLength(format);
    const jsize NS = outStrings ? env->GetArrayLength(outStrings) : 0;
    const jsize NL = outLongs ? env->GetArrayLength(outLongs) : 0;
    const jsize NR = outFloats ? env->GetArrayLength(outFloats) : 0;

    jint* formatData = env->GetIntArrayElements(format, 0);
    jlong* longsData = outLongs ?
        env->GetLongArrayElements(outLongs, 0) : NULL;
    jfloat* floatsData = outFloats ?
        env->GetFloatArrayElements(outFloats, 0) : NULL;
...

    jsize i = startIndex;
    jsize di = 0;

    jboolean res = JNI_TRUE;

    // Loop over the format entries, parsing fields from the buffer into the *Data arrays
    for (jsize fi=0; fi<NF; fi++) {
        jint mode = formatData[fi];
        if ((mode&PROC_PARENS) != 0) {
            i++;
        } else if ((mode&PROC_QUOTES) != 0) {
            if (buffer[i] == '"') {
                i++;
            } else {
                mode &= ~PROC_QUOTES;
            }
        }
        const char term = (char)(mode&PROC_TERM_MASK);
        const jsize start = i;
...

        jsize end = -1;
        if ((mode&PROC_PARENS) != 0) {
            while (i < endIndex && buffer[i] != ')') {
                i++;
            }
            end = i;
            i++;
        } else if ((mode&PROC_QUOTES) != 0) {
            while (buffer[i] != '"' && i < endIndex) {
                i++;
            }
            end = i;
            i++;
        }
        while (i < endIndex && buffer[i] != term) {
            i++;
        }
        if (end < 0) {
            end = i;
        }

        if (i < endIndex) {
            i++;
            if ((mode&PROC_COMBINE) != 0) {
                while (i < endIndex && buffer[i] == term) {
                    i++;
                }
            }
        }

        if ((mode&(PROC_OUT_FLOAT|PROC_OUT_LONG|PROC_OUT_STRING)) != 0) {
            char c = buffer[end];
            buffer[end] = 0;
            if ((mode&PROC_OUT_FLOAT) != 0 && di < NR) {
                char* end;
                floatsData[di] = strtof(buffer+start, &end);
            }
            if ((mode&PROC_OUT_LONG) != 0 && di < NL) {
                char* end;
                longsData[di] = strtoll(buffer+start, &end, 10);
            }
            if ((mode&PROC_OUT_STRING) != 0 && di < NS) {
                jstring str = env->NewStringUTF(buffer+start);
                env->SetObjectArrayElement(outStrings, di, str);
            }
            buffer[end] = c;
            di++;
        }
    }

    // Release the JNI array elements so the parsed values are committed back to the out* Java arrays
    env->ReleaseIntArrayElements(format, formatData, 0);
    if (longsData != NULL) {
        env->ReleaseLongArrayElements(outLongs, longsData, 0);
    }
    if (floatsData != NULL) {
        env->ReleaseFloatArrayElements(outFloats, floatsData, 0);
    }

    return res;
}

Sample Data

/proc/stat

// [1]user, [2]nice, [3]system, [4]idle, [5]iowait, [6]irq, [7]softirq
// 1. CPU time spent in user mode, accumulated from boot up to now
// 2. CPU time spent in user mode by niced (lowered-priority) processes
// 3. CPU time spent in kernel mode
// 4. Idle time (not counting I/O wait)
// 5. Time spent waiting for disk I/O to complete
// 6. Time spent servicing hardware interrupts
// 7. Time spent servicing soft interrupts
cpu  76704 76700 81879 262824 17071 10 15879 0 0 0
cpu0 19778 22586 34375 106542 7682 7 10185 0 0 0
cpu1 11460 6197 7973 18043 2151 0 1884 0 0 0
cpu2 17438 20917 13339 24945 2845 1 1822 0 0 0
cpu3 28028 27000 26192 113294 4393 2 1988 0 0 0
intr 4942220 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 602630 0 0 0 0 0 0 0 0 0 0 0 0 0 15460 0 0 0 0 0 0 67118 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1854 8 5 0 10 0 0 0 6328 0 0 0 0 0 0 0 0 0 0 892 0 0 0 0 2 106 2 0 2 0 0 0 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 7949 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10256 3838 0 0 0 0 0 0 0 499 69081 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 725052 0 14911 0 0 0 0 0 1054 0 0 0 0 0 0 2073 0 0 0 1371 5 0 659329 654662 0 0 0 0 0 0 0 0 0 6874 0 7 0 0 0 0 913 312 0 0 0 245372 0 0 2637 0 0 0 0 0 0 0 0 0 0 0 0 96 0 0 0 0 0 13906 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8804 0 0 0 0 0 0 0 0 0 0 0 0 2294 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 13860 0 0 5 5 0 0 0 0 1380 362 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7069 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 // 中断信息
ctxt 11866606 // Number of context switches since boot
btime 1507554066 // Time the system booted, in seconds since the Unix epoch
processes 38582    // Number of tasks (forks) created since boot
procs_running 1 // Number of tasks currently in the run queue
procs_blocked 0 // Number of tasks currently blocked
softirq 2359224 2436 298396 2839 517350 2436 2436 496108 329805 2067 705351
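As an illustration of how these counters turn into the percentages seen in the logs, here is a small, self-contained sketch (plain Java, not framework code; the class name is just for illustration) that takes two samples of the aggregate cpu line and computes an overall busy percentage, in the same spirit as the EventLog percentages computed in updateCpuStatsNow above.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ProcStat {
    // Returns {user+nice, system, idle, iowait, irq, softirq} in jiffies.
    static long[] sample() throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/stat"))) {
            String[] f = r.readLine().split("\\s+"); // "cpu  user nice system idle iowait irq softirq ..."
            return new long[] {
                Long.parseLong(f[1]) + Long.parseLong(f[2]), // user + nice
                Long.parseLong(f[3]),                        // system
                Long.parseLong(f[4]),                        // idle
                Long.parseLong(f[5]),                        // iowait
                Long.parseLong(f[6]),                        // irq
                Long.parseLong(f[7])                         // softirq
            };
        }
    }

    public static void main(String[] args) throws Exception {
        long[] a = sample();
        Thread.sleep(1000);
        long[] b = sample();
        long busy = 0, total = 0;
        for (int i = 0; i < a.length; i++) {
            long rel = b[i] - a[i];
            total += rel;
            if (i != 2) busy += rel; // everything except idle counts as busy
        }
        if (total == 0) total = 1;
        System.out.println("CPU busy: " + (busy * 100 / total) + "%");
    }
}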

/proc/loadavg

// [1-3] load averages over the last 1, 5, and 15 minutes; [4] runnable/total scheduling entities; [5] pid of the most recently created task
10.55 19.87 25.93 2/2082 7475

/proc/1/stat

// Fields used by ProcessCpuTracker include (2) comm, (10) minflt, (12) majflt, (14) utime and (15) stime, matching the PROCESS_FULL_STATS_FORMAT used above
1 (init) S 0 0 0 0 -1 4194560 2206 161131 0 62 175 635 175 244 20 0 1 0 0 2547712 313 4294967295 32768 669624 3196243632 3196242928 464108 0 0 0 65536 3224056068 0 0 17 3 0 0 0 0 0 676368 693804 712704
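Similarly, the per-process percentages such as "73% 2692/system_server" in the ANR log are derived from deltas of utime and stime in /proc/<pid>/stat over the sampling window. A self-contained sketch of the idea (plain Java, not framework code; it assumes HZ=100 for the jiffy length, whereas the framework queries _SC_CLK_TCK):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PidCpu {
    // Sum of utime + stime for the given pid, in jiffies.
    static long cpuJiffies(int pid) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/" + pid + "/stat"))) {
            String line = r.readLine();
            // comm may contain spaces, so split after the closing parenthesis
            String[] f = line.substring(line.indexOf(')') + 2).split(" ");
            // f[0] is field 3 (state), so utime (field 14) is f[11] and stime (field 15) is f[12]
            return Long.parseLong(f[11]) + Long.parseLong(f[12]);
        }
    }

    public static void main(String[] args) throws Exception {
        int pid = Integer.parseInt(args[0]);
        long jiffyMillis = 1000 / 100; // assumes HZ=100; adjust for the platform's _SC_CLK_TCK
        long before = cpuJiffies(pid);
        long t0 = System.currentTimeMillis();
        Thread.sleep(3000);
        long usedMs = (cpuJiffies(pid) - before) * jiffyMillis;
        long elapsedMs = System.currentTimeMillis() - t0;
        System.out.println("~" + (usedMs * 100 / elapsedMs) + "% of one core");
    }
}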