(0). Two important strategies in Android/Linux memory allocation.
To save memory, Linux allocates memory on demand, using two strategies: deferred (lazy) allocation and Copy-On-Write.

Deferred allocation means that when user space requests memory, only the virtual address range is reserved on paper at first; physical memory is allocated only when the memory is actually touched. This relies on the MMU turning the data abort into a page fault. It largely avoids the waste caused by user space over-requesting memory, or requesting it by mistake.

Copy-On-Write means that at fork time the child and parent share the same memory, and a page is copied out only when one side modifies it. The effect is very visible on Android: apps and even system_server are forked from zygote without exec'ing a new binary, so the ART VM and library memory stay shared, which saves a great deal of memory.
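Both strategies are easy to observe from user space. Below is a minimal C sketch (my own illustration, assuming Linux/Android; it samples the VmRSS line of /proc/self/status as the metric): VSS grows at mmap time, RSS grows only on first touch, and after fork() the child pays for a page only once it writes to it.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Print our current VmRSS line from /proc/self/status. */
static void print_rss(const char *tag) {
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return;
    while (fgets(line, sizeof(line), f))
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%-34s %s", tag, line);
    fclose(f);
}

int main(void) {
    size_t len = 64 << 20;  /* 64 MB anonymous mapping */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    print_rss("after mmap (VSS up, RSS flat):");
    memset(p, 1, len);          /* first touch faults pages in */
    print_rss("after touching every page:");

    pid_t pid = fork();         /* child shares all pages, COW */
    if (pid == 0) {
        memset(p, 2, len / 2);  /* writes force private copies */
        print_rss("child after writing half:");
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return 0;
}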
Correspondingly, when evaluating a process's memory usage, we usually want to see its virtual address-space usage, the physical memory it actually uses, and how much it shares with other processes, i.e.:

VSS - Virtual Set Size: virtual memory consumed (including memory for shared libraries)
RSS - Resident Set Size: physical memory actually used (including memory for shared libraries)
PSS - Proportional Set Size: physical memory actually used (shared-library memory divided proportionally among the sharers)
USS - Unique Set Size: physical memory owned exclusively by the process (excluding shared libraries)
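A quick worked example of the proportional accounting: if a process owns 2 MB of private pages and maps a 6 MB library that two other processes also map, then USS = 2 MB, RSS = 2 + 6 = 8 MB, and PSS = 2 + 6/3 = 4 MB. In general VSS >= RSS >= PSS >= USS, and PSS summed over all processes adds up to the total memory in use without double counting shared pages.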
(1). Overall memory usage.
To analyze memory leaks, you need to know the overall memory usage and how it is divided, in order to judge whether the leak is in memory used by user space, kernel space, multi-media, etc., and from there narrow down the specific leak.

Memory used by user space usually includes memory requested directly by processes, e.g. via malloc (which first mmap/sbrk's large chunks and then subdivides them) or stack memory, requested from the system directly through mmap; plus the page cache backing files the process has opened, and the memory ZRAM occupies to store compressed user-space memory.

Memory used by kernel space usually includes kernel stacks, slub, page tables, vmalloc, shmem, etc.

Memory used by multi-media typically goes through ion, gpu, etc.

Other memory usage is generally allocated page by page straight from the buddy system; a common example on Android is ashmem.

From the process point of view, memory a process uses is normally mmap'ed into the process address space before being accessed (note: a few very unusual flows do not mmap into the process space), so the process's memory maps are crucial. In an AEE DB the corresponding file is PROCESS_MAPS.
Some key kinds of segments are enumerated below:
b1100000-b1180000 rw-p 00000000 00:00 0 [anon:libc_malloc]
This is the space jemalloc manages on behalf of malloc; with typical malloc leaks you will see such libc_malloc regions grow markedly.
address perms offset dev inode pathname
aefe5000-af9fc000 r-xp 00000000 103:0a 25039 /data/app/in.startv.hotstar-c_zk-AatlkkDg2B_FSQFuQ==/lib/arm/libAVEAndroid.so
af9fc000-afa3e000 r--p 00a16000 103:0a 25039 /data/app/in.startv.hotstar-c_zk-AatlkkDg2B_FSQFuQ==/lib/arm/libAVEAndroid.so
afa3e000-afad2000 rw-p 00a58000 103:0a 25039 /data/app/in.startv.hotstar-c_zk-AatlkkDg2B_FSQFuQ==/lib/arm/libAVEAndroid.so
The first segment, "r-xp", is the library's read-only, executable code. The second, "r--p", is its read-only data. The third, "rw-p", is its writable data segment.
7110f000-71110000 rw-p 00000000 00:00 0 [anon:.bss]
71712000-71713000 rw-p 00000000 00:00 0 [anon:.bss]
71a49000-71a4a000 rw-p 00000000 00:00 0 [anon:.bss]
The BSS (Block Started by Symbol) segment holds the process's uninitialized static and global variables, which default to all zeroes. It rarely leaks: its size is essentially fixed when the program starts.
// java thread
6f5b0b2000-6f5b0b3000 ---p 00000000 00:00 0 [anon:thread stack guard]
6f5b0b3000-6f5b0b4000 ---p 00000000 00:00 0
6f5b0b4000-6f5b1b0000 rw-p 00000000 00:00 0
// native thread
74d0d0e000-74d0d0f000 ---p 00000000 00:00 0 [anon:thread stack guard]
74d0d0f000-74d0e0c000 rw-p 00000000 00:00 0
Memory used by pthread stacks. Note that pthread_create currently only labels the "thread stack guard" at the bottom of the stack; the default pthread stack size is 1 MB - 16 KB, and the guard is 4 KB. Also note that for java threads, ART protects one more page, used to decide whether a received SIGSEGV is a StackOverflowError.
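These numbers can be verified at runtime. A minimal sketch, assuming the pthread_getattr_np extension that both glibc and bionic provide (compile with -pthread); on bionic the reported size should be close to the 1 MB - 16 KB default and the guard close to 4 KB:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    pthread_attr_t attr;
    void *stack_base = NULL;
    size_t stack_size = 0, guard_size = 0;

    /* Read back the attributes of the running thread. */
    if (pthread_getattr_np(pthread_self(), &attr) != 0)
        return NULL;
    pthread_attr_getstack(&attr, &stack_base, &stack_size);
    pthread_attr_getguardsize(&attr, &guard_size);
    printf("stack base=%p size=%zu KB guard=%zu KB\n",
           stack_base, stack_size / 1024, guard_size / 1024);
    pthread_attr_destroy(&attr);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);  /* default attributes */
    pthread_join(t, NULL);
    return 0;
}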
7e9cf16000-7e9cf17000 ---p 00000000 00:00 0 [anon:thread signal stack guard]
7e9cf17000-7e9cf1b000 rw-p 00000000 00:00 0 [anon:thread signal stack]
The corresponding pthread signal stack, 16 KB, likewise with a guard page at the bottom.
7f31245000-7f31246000 ---p 00000000 00:00 0 [anon:bionic TLS guard]
7f31246000-7f31249000 rw-p 00000000 00:00 0 [anon:bionic TLS]
The corresponding pthread TLS, 12 KB, likewise with a guard page at the bottom.
edce5000-edce6000 rw-s 00000000 00:05 1510969 /dev/ashmem/shared_memory/443BA81EE7976CA437BCBFF7935200B2 (deleted)
This is ashmem, memory requested through /dev/ashmem. The key thing is usually to identify its name: the name generally tells you exactly where the memory was allocated. As for the "(deleted)" marker, it means the mapping was created with the MAP_FILE flag and the backing path has since been unlinked or no longer exists.
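For illustration, here is a minimal sketch of creating a named ashmem region through the NDK ASharedMemory API (available since API level 26; the name "demo_buffer" is made up for this example). On older platforms the same was done by opening /dev/ashmem directly and using its ioctls.

#include <android/sharedmem.h>  /* NDK, API level 26+ */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 1 << 20;
    /* The name given here is what later shows up in maps as
       /dev/ashmem/demo_buffer -- which is why a descriptive name
       pinpoints the allocation site. */
    int fd = ASharedMemory_create("demo_buffer", len);
    if (fd < 0) return 1;

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    printf("mapped %zu bytes, pid %d; check /proc/%d/maps\n",
           len, getpid(), getpid());
    pause();  /* keep the mapping alive for inspection */
    return 0;
}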
7e8d008000-7e8d306000 rw-s 00000000 00:0a 7438 anon_inode:dmabuf
7e8d306000-7e8d604000 rw-s 00000000 00:0a 7438 anon_inode:dmabuf
7e8d604000-7e8d902000 rw-s 00000000 00:0a 7438 anon_inode:dmabuf
7e8d902000-7e8dc00000 rw-s 00000000 00:0a 7438 anon_inode:dmabuf
ion memory segments. The vma of an ion buffer is named dmabuf, so the ion memory that has been mmap'ed can be totaled directly from these entries.
Note that maps only shows address-space information, i.e. virtual address-space occupation; how much memory is actually used must be examined in /proc/pid/smaps. For example:
7e8ea00000-7e8ee00000 rw-p 00000000 00:00 0 [anon:libc_malloc]
Name:           [anon:libc_malloc]
Size:               4096 kB
Rss:                 888 kB
Pss:                 888 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:       888 kB
Referenced:          888 kB
Anonymous:           888 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: rd wr mr mw me nr
This jemalloc region, for instance, is 4 MB of virtual space, but what is actually in use is RSS = PSS = 888 KB; most of it has not been populated with physical memory yet.
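Tools simply aggregate these per-region fields. A minimal sketch of the idea, summing Rss and Pss across the calling process's own smaps (field layout as in the dump above):

#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/smaps", "r");
    char line[512];
    long kb, rss = 0, pss = 0;

    if (!f) return 1;
    while (fgets(line, sizeof(line), f)) {
        /* Field lines look exactly like the dump above: "Rss:  888 kB". */
        if (sscanf(line, "Rss: %ld kB", &kb) == 1)
            rss += kb;
        else if (sscanf(line, "Pss: %ld kB", &kb) == 1)
            pss += kb;
    }
    fclose(f);
    printf("total Rss: %ld kB, total Pss: %ld kB\n", rss, pss);
    return 0;
}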
Reading maps by hand is likewise time-consuming; Android currently provides commands such as procrank, showmap, and pmap. procrank ranks the processes in the system by memory usage; it generally does not account for ion and similar memory. Note that this command is only built into debug builds by default.
k71v1_64_bsp:/ # procrank -h
Usage: procrank [ -W ] [ -v | -r | -p | -u | -s | -h ]
    -v  Sort by VSS.
    -r  Sort by RSS.
    -p  Sort by PSS.
    -u  Sort by USS.
    -s  Sort by swap.
        (Default sort order is PSS.)
    -R  Reverse sort order (default is descending).
    -c  Only show cached (storage backed) pages
    -C  Only show non-cached (ram/swap backed) pages
    -k  Only show pages collapsed by KSM
    -w  Display statistics for working set only.
    -W  Reset working set of all processes.
    -o  Show and sort by oom score against lowmemorykiller thresholds.
    -h  Display this help screen.
showmap aggregates and sorts a process's maps/smaps; note that this command is also only built into debug builds by default.
k71v1_64_bsp:/ # showmap
showmap [-t] [-v] [-c] [-q]
    -t = terse (show only items with private pages)
    -v = verbose (don't coalesce maps with the same name)
    -a = addresses (show virtual memory map)
    -q = quiet (don't show error if map could not be read)
pmap prints each segment of maps; with -x it matches in the data from smaps and reports PSS, SWAP, etc.
OP46E7:/ # pmap --help
usage: pmap [-xq] [pids...]
Reports the memory map of a process or processes.
-x Show the extended format
-q Do not display some header/footer lines
Looking at memory usage from the system side, people habitually start with a quick look at /proc/meminfo. Here is what the fields mean.
k71v1_64_bsp:/ # cat proc/meminfo
MemTotal:        3849612 kB
MemFree:          206920 kB
MemAvailable:    1836292 kB
Buffers:           73472 kB
Cached:          1571552 kB
SwapCached:        14740 kB
Active:          1165488 kB
Inactive:         865688 kB
Active(anon):     202140 kB
Inactive(anon):   195580 kB
Active(file):     963348 kB
Inactive(file):   670108 kB
Unevictable:        5772 kB
Mlocked:            5772 kB
SwapTotal:       1048572 kB
SwapFree:         787780 kB
Dirty:                32 kB
Writeback:             0 kB
AnonPages:        383924 kB
Mapped:           248488 kB
Shmem:              6488 kB
Slab:             391060 kB
SReclaimable:     199712 kB
SUnreclaim:       191348 kB
KernelStack:       22640 kB
PageTables:        28056 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2973376 kB
Committed_AS:   42758232 kB
VmallocTotal:  258867136 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
CmaTotal:        2093056 kB
CmaFree:           78916 kB
I will reuse the notes from the kernel documentation, /kernel/Documentation/filesystems/proc.txt:
MemTotal: Total usable ram (i.e. physical ram minus a few reserved bits and the kernel binary code)
MemFree: The sum of LowFree+HighFree
MemAvailable: An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.
Buffers: Relatively temporary storage for raw disk blocks shouldn't get tremendously large (20MB or so)
Cached: in-memory cache for files read from the disk (the pagecache). Doesn't include SwapCached
SwapCached: Memory that once was swapped out, is swapped back in but still also is in the swapfile (if memory is needed it doesn't need to be swapped out AGAIN because it is already in the swapfile. This saves I/O)
Active: Memory that has been used more recently and usually not reclaimed unless absolutely necessary.
Inactive: Memory which has been less recently used. It is more eligible to be reclaimed for other purposes
HighTotal:
HighFree: Highmem is all memory above ~860MB of physical memory. Highmem areas are for use by userspace programs, or for the pagecache. The kernel must use tricks to access this memory, making it slower to access than lowmem.
LowTotal:
LowFree: Lowmem is memory which can be used for everything that highmem can be used for, but it is also available for the kernel's use for its own data structures. Among many other things, it is where everything from the Slab is allocated. Bad things happen when you're out of lowmem.
SwapTotal: total amount of swap space available
SwapFree: Memory which has been evicted from RAM, and is temporarily on the disk
Dirty: Memory which is waiting to get written back to the disk
Writeback: Memory which is actively being written back to the disk
AnonPages: Non-file backed pages mapped into userspace page tables
AnonHugePages: Non-file backed huge pages mapped into userspace page tables
Mapped: files which have been mmaped, such as libraries
Slab: in-kernel data structures cache
SReclaimable: Part of Slab, that might be reclaimed, such as caches
SUnreclaim: Part of Slab, that cannot be reclaimed on memory pressure
PageTables: amount of memory dedicated to the lowest level of page tables.
NFS_Unstable: NFS pages sent to the server, but not yet committed to stable storage
Bounce: Memory used for block device "bounce buffers"
WritebackTmp: Memory used by FUSE for temporary writeback buffers
CommitLimit: Based on the overcommit ratio (vm.overcommit_ratio), this is the total amount of memory currently available to be allocated on the system. This limit is only adhered to if strict overcommit accounting is enabled (mode 2 in vm.overcommit_memory). The CommitLimit is calculated with the following formula: CommitLimit = ([total RAM pages] - [total huge TLB pages]) * overcommit_ratio / 100 + [total swap pages]. For example, on a system with 1G of physical RAM and 7G of swap with a `vm.overcommit_ratio` of 30 it would yield a CommitLimit of 7.3G. For more details, see the memory overcommit documentation in vm/overcommit-accounting.
Committed_AS: The amount of memory presently allocated on the system. The committed memory is a sum of all of the memory which has been allocated by processes, even if it has not been "used" by them as of yet. A process which malloc()'s 1G of memory, but only touches 300M of it will show up as using 1G. This 1G is memory which has been "committed" to by the VM and can be used at any time by the allocating application. With strict overcommit enabled on the system (mode 2 in vm.overcommit_memory), allocations which would exceed the CommitLimit (detailed above) will not be permitted. This is useful if one needs to guarantee that processes will not fail due to lack of memory once that memory has been successfully allocated.
VmallocTotal: total size of vmalloc memory area
VmallocUsed: amount of vmalloc area which is used
VmallocChunk: largest contiguous block of vmalloc area which is free
From these we can derive some approximate "identities":
MemAvailable = MemFree - kernel reserved memory + Active(file) + Inactive(file) + SReclaimable - 2 * zone low watermarks
Cached = all file pages - Buffers - swap-backed pages = Active(file) + Inactive(file) + Unevictable file pages - Buffers
Slab = SReclaimable + SUnreclaim
Active = Active(anon) + Active(file)
Inactive = Inactive(anon) + Inactive(file)
AnonPages + Buffers + Cached = Active + Inactive
Buffers + Cached = Active(file) + Inactive(file)
SwapTotal = SwapFree + SwapUsed (not SwapCached)
KernelStack = number of kernel tasks * stack size (16 KB)
Kernel memory usage = KernelStack + Slab + PageTables + Shmem + Vmalloc
Native memory usage = Mapped + AnonPages + others
(2). Parsing Android dumpsys meminfo.
From the Android side, Google provides the dumpsys meminfo command to obtain global and per-process memory information. Android exposes a meminfo service in ActivityManagerService that captures a summary of a process's memory usage; this has gradually become the mainstream way of judging memory at the Android layer.

adb shell dumpsys meminfo ==> dump the global memory usage.
adb shell dumpsys meminfo pid ==> dump a single process's memory usage.
One advantage: on a user build without root permission, the dump can take the detour sh ==> system_server ==> binder ==> process, sidestepping the permission problem.
The complete set of options:
OP46E7:/ # dumpsys meminfo -h
meminfo dump options: [-a] [-d] [-c] [-s] [--oom] [process]
  -a: include all available information for each process.
  -d: include dalvik details.
  -c: dump in a compact machine-parseable representation.
  -s: dump only summary of application memory usage.
  -S: dump also SwapPss.
  --oom: only show processes organized by oom adj.
  --local: only collect details locally, don't call process.
  --package: interpret process arg as package, dumping all
    processes that have loaded that package.
  --checkin: dump data for a checkin
  --proto: dump data to proto
If [process] is specified it can be the name or
pid of a specific process to dump.
Below we trace where the dumpsys meminfo numbers come from, so they are easier to read.

(2.1) Where the system-wide numbers come from.
Total RAM: 3,849,612K (status moderate)
 Free RAM: 1,870,085K (   74,389K cached pss + 1,599,904K cached kernel +   195,792K free)
 Used RAM: 1,496,457K (  969,513K used pss +   526,944K kernel)
 Lost RAM:   686,331K
     ZRAM:    48,332K physical used for   260,604K in swap (1,048,572K total swap)
   Tuning: 384 (large 512), oom   322,560K, restore limit   107,520K (high-end-gfx)

Total RAM: /proc/meminfo.MemTotal
Free RAM:
  cached pss = sum of the PSS of every process with oom_score_adj >= 900
  cached kernel = /proc/meminfo.Buffers + /proc/meminfo.Cached + /proc/meminfo.SReclaimable - /proc/meminfo.Mapped
  free = /proc/meminfo.MemFree
Used RAM:
  used pss = total pss - cached pss
  kernel = /proc/meminfo.Shmem + /proc/meminfo.SUnreclaim + VmallocUsed + /proc/meminfo.PageTables + /proc/meminfo.KernelStack
Lost RAM: /proc/meminfo.MemTotal - (totalPss - totalSwapPss) - /proc/meminfo.MemFree - /proc/meminfo.Cached - kernel used - zram used
(2.2) Where the per-process numbers come from.
A single process is reached over binder into the app; ActivityThread's dumpMeminfo does the accounting.

Native Heap comes from jemalloc, via android_os_Debug_getNativeHeapSize() ==> mallinfo() ==> jemalloc (see the mallinfo sketch after this list).

Dalvik Heap is taken from the java heap through Runtime.

In addition, Pss, Private Dirty, Private Clean and SwapPss are parsed from the process's smaps.
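For the native side, the numbers bottom out in libc's mallinfo(); the sketch below prints the classic mallinfo fields. Exactly how the Heap Size/Alloc/Free columns map onto these fields is an assumption here, so treat it as orientation only:

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *p = malloc(1 << 20);  /* give the heap something to track */
    struct mallinfo mi = mallinfo();

    /* arena: bytes the allocator obtained from the system;
       uordblks: bytes in in-use allocations;
       fordblks: free bytes held inside the heap. */
    printf("arena=%d uordblks=%d fordblks=%d\n",
           mi.arena, mi.uordblks, mi.fordblks);
    free(p);
    return 0;
}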
** MEMINFO in pid 1138 [system] **
                   Pss  Private  Private  SwapPss     Heap     Heap     Heap
                 Total    Dirty    Clean    Dirty     Size    Alloc     Free
                ------   ------   ------   ------   ------   ------   ------
  Native Heap    62318    62256        0        0   137216    62748    74467
  Dalvik Heap    21549    21512        0        0    28644    16356    12288
 Dalvik Other     4387     4384        0        0
        Stack       84       84        0        0
       Ashmem      914      884        0        0
    Other dev      105        0       56        0
     .so mmap    10995     1112     4576        0
    .apk mmap     3912        0     2776        0
    .ttf mmap       20        0        0        0
    .dex mmap    60297       76    57824        0
    .oat mmap     2257        0       88        0
    .art mmap     3220     2788       12        0
   Other mmap     1944        4      672        0
    GL mtrack     5338     5338        0        0
      Unknown     3606     3604        0        0
        TOTAL   180946   102042    66004        0   165860    79104    86755

 App Summary
                       Pss(KB)
                        ------
           Java Heap:    24312
         Native Heap:    62256
                Code:    66452
               Stack:       84
            Graphics:     5338
       Private Other:     9604
              System:    12900
               TOTAL:   180946       TOTAL SWAP PSS:        0

 Objects
               Views:       11         ViewRootImpl:        2
         AppContexts:       20           Activities:        0
              Assets:       15        AssetManagers:        0
       Local Binders:      528        Proxy Binders:     1134
       Parcel memory:      351         Parcel count:      370
    Death Recipients:      627      OpenSSL Sockets:        0
            WebViews:        0

 SQL
         MEMORY_USED:      384
  PAGECACHE_OVERFLOW:       86          MALLOC_SIZE:      117

 DATABASES
      pgsz     dbsz   Lookaside(b)          cache  Dbname
         4       64             85        12/29/8  /data/system_de/0/accounts_de.db
         4       40                         0/0/0    (attached) ceDb: /data/system_ce/0/accounts_ce.db
         4       20             27        54/17/3  /data/system/notification_log.db
A few notes on how the App Summary rows are computed:
Java Heap: 24312 = Dalvik Heap + .art mmap
Native Heap: 62256
Code: 66452 = .so mmap + .jar mmap + .apk mmap + .ttf mmap + .dex mmap + .oat mmap
Stack: 84
Graphics: 5338 = Gfx dev + EGL mtrack + GL mtrack
Private Other: 9604 = TotalPrivateClean + TotalPrivateDirty - java - native - code - stack - graphics
System: 12900 = TotalPss - TotalPrivateClean - TotalPrivateDirty
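Checking against the dump above (the app-owned rows count private pages only): Java Heap = 21512 + (2788 + 12) = 24312; Code = (1112 + 4576) + (0 + 2776) + (0 + 0) + (76 + 57824) + (0 + 88) = 66452; Private Other = (102042 + 66004) - 24312 - 62256 - 66452 - 84 - 5338 = 9604; System = 180946 - 102042 - 66004 = 12900.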
The explanations below come from https://developer.android.com/studio/profile/investigate-ram?hl=zh-cn
Dalvik Heap: the RAM used by Dalvik allocations in your app. Pss Total includes all Zygote allocations (weighted by the amount shared across processes, as the PSS definition above describes). The Private Dirty number is the actual RAM committed to only your app's heap, composed of your own allocations and any Zygote allocation pages that have been modified since your app's process was forked from Zygote.

Heap Alloc is the amount of memory that the Dalvik and native heap allocators keep track of for your app. This value is larger than Pss Total and Private Dirty because your process was forked from Zygote and it includes allocations that your process shares with all the others.

.so mmap and .dex mmap: the RAM used by mapped .so (native) and .dex (Dalvik or ART) code. The Pss Total number includes platform code shared across apps; Private Clean is your app's own code. Generally, the actual mapped size is larger; the RAM here is only what currently needs to be resident for code the app has executed. However, .so mmap has a large private dirty, due to fix-ups applied to the native code when it was loaded into its final address.

.oat mmap: the amount of RAM used by the code image, computed from the preloaded classes that are commonly used by multiple apps. This image is shared across all apps and is unaffected by particular apps.

.art mmap: the amount of RAM used by the heap image, computed from the preloaded classes that are commonly used by multiple apps. This image is shared across all apps and is unaffected by particular apps. Even though the ART image contains Object instances, it does not count toward your heap size.
(3). Monitoring memory usage.
There are generally two monitoring mechanisms. The first is polling: check memory usage periodically, typically from a script or daemon (a minimal sketch follows this list). The data usually monitored includes:
/proc/meminfo - system-wide memory usage
/proc/zoneinfo - per-zone memory usage
/proc/buddyinfo - buddy system status
/proc/slabinfo - slub usage distribution
/proc/vmallocinfo - vmalloc usage
/proc/zraminfo - zram usage and the memory it occupies
/proc/mtk_memcfg/slabtrace - detailed slab memory distribution
/proc/vmstat - system memory distribution by usage type
/sys/kernel/debug/ion/ion_mm_heap - MTK multi-media ion memory usage
/sys/kernel/debug/ion/client_history - rough per-client ion usage statistics
/proc/mali/memory_usage - ARM Mali GPU memory, per process
/sys/kernel/debug/mali0/gpu_memory - ARM Mali GPU memory, per process
ps -A -T - all processes/threads in the system: per-process thread counts, VSS/RSS
dumpsys meminfo - system memory usage from the Android point of view
/sys/kernel/debug/mlog - MTK log of system memory usage over a period (about 60 s), covering kernel, user space, ion, gpu, etc.
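As a minimal illustration of the polling approach (the chosen files and the log path /data/local/tmp/memlog.txt are arbitrary choices for this sketch), a small C daemon that appends a few of the files above to a log once a minute:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Append one proc/sys file to the log with a timestamp header. */
static void dump_file(FILE *log, const char *path) {
    char buf[4096];
    size_t n;
    FILE *f = fopen(path, "r");
    if (!f) return;
    fprintf(log, "==== %s @ %ld ====\n", path, (long)time(NULL));
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        fwrite(buf, 1, n, log);
    fclose(f);
}

int main(void) {
    FILE *log = fopen("/data/local/tmp/memlog.txt", "a");
    if (!log) return 1;
    for (;;) {
        dump_file(log, "/proc/meminfo");
        dump_file(log, "/proc/vmstat");
        dump_file(log, "/proc/buddyinfo");
        fflush(log);
        sleep(60);  /* one sample per minute */
    }
}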
You can write a script to capture these periodically. mlog deserves a separate mention here: it is a lightweight memory log developed by MTK that captures the common memory statistics in one go, covering kernel (vmalloc, slub, ...), user space (per-process VSS/RSS, ...), ion, gpu and so on over a period of time, and it comes with a graphical tool to visualize the memory distribution and usage. It is very convenient, so please reach for it first (tool_for_memory_analysis). Some screenshots:
[Screenshot] How each class of memory changes over a period of time.
[Screenshot] Kernel/User/HW memory statistics over time.
[Screenshot] Per-process memory changes over the sampled period.
You are welcome to use it by hand as well.
The other mechanism is a circuit breaker: limit memory usage and, when a threshold is reached, deliberately raise an exception and report the error. Normally a system-wide memory leak ends in an OOM, or in severe cases directly in a KE (kernel exception). For a single leaking process with oom adj < 0, i.e. a daemon service or a persist app, the leak usually also drives the system to OOM, because LMK can hardly kill it. If an ordinary app leaks, it is usually just killed by LMK and rarely harms the system directly; of course, the process may also fail to allocate memory and hit a JE or NE (Java or native exception).

For total system memory, we can cap the overall size by configuration, for example limiting the system to 2 GB:
(1). ProjectConfig.mk
CUSTOM_CONFIG_MAX_DRAM_SIZE = 0x80000000
Note: CUSTOM_CONFIG_MAX_DRAM_SIZE must be included by AUTO_ADD_GLOBAL_DEFINE_BY_NAME_VALUE

(2). preloader project config file
vendor/mediatek/proprietary/bootable/bootloader/preloader/custom/{project}/{project}.mk
CUSTOM_CONFIG_MAX_DRAM_SIZE = 0x80000000
Note: CUSTOM_CONFIG_MAX_DRAM_SIZE must be exported

For the memory used by a single process, we can impose limits with setrlimit; for example, camerahalserver is limited through init's rlimit support:
service camerahalserver /vendor/bin/hw/camerahalserver
    class main
    user cameraserver
    group audio camera input drmrpc sdcard_rw system media graphics
    ioprio rt 4
    capabilities SYS_NICE
    writepid /dev/cpuset/camera-daemon/tasks /dev/stune/top-app/tasks
    # limit VSS to 4GB
    rlimit as 0x100000000 0x100000000
    # limit malloc to 1GB
    rlimit data 0x40000000 0x40000000
This caps camerahalserver's VSS at 4 GB and the size malloc can reach at 1 GB. Once a limit is exceeded, allocations return ENOMEM, which normally produces an NE automatically, so that more information about camerahalserver can be captured.

Note that because services under /vendor are started by vendor_init, vendor_init needs the corresponding sepolicy, or the setting will not take effect:
/device/mediatek/sepolicy/basic/non_plat/vendor_init.te
allow vendor_init self:global_capability_class_set sys_resource;
The limits can also be hard-coded in the program itself; see /frameworks/av/media/libmedia/MediaUtils.cpp for a reference.
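As a sketch of the in-code variant (my own minimal example, not the MediaUtils.cpp implementation itself), the same two caps as the rc snippet above, applied via setrlimit():

#include <stdio.h>
#include <sys/resource.h>

/* Cap the address space (VSS) and the data segment (what malloc can
   grow to) of the calling process; allocations beyond the caps fail
   with ENOMEM, just like the rc-file rlimit entries above. */
static int limit_memory(rlim_t vss_bytes, rlim_t data_bytes) {
    struct rlimit r;

    r.rlim_cur = r.rlim_max = vss_bytes;
    if (setrlimit(RLIMIT_AS, &r) != 0) {
        perror("setrlimit(RLIMIT_AS)");
        return -1;
    }
    r.rlim_cur = r.rlim_max = data_bytes;
    if (setrlimit(RLIMIT_DATA, &r) != 0) {
        perror("setrlimit(RLIMIT_DATA)");
        return -1;
    }
    return 0;
}

int main(void) {
    /* 4 GB VSS and 1 GB data, assuming a 64-bit rlim_t. */
    return limit_memory(0x100000000ULL, 0x40000000ULL);
}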
For java-heap memory leaks in an app, we can constrain the dalvik heap size through system properties. Note that with the current approach this affects every java process:
[dalvik.vm.heapgrowthlimit]: [384m]
[dalvik.vm.heapsize]: [512m]