Preface: this article is a record of my own process of working through this problem. By the time you read it there may not yet be a conclusion, so if you are hoping to take away a final solution, please think twice before reading on; if you are here for the debugging approach, then read on:
Recently I ran into a rather hard problem:
After device aging, the eMMC read/write performance degrades by more than 30%.
The concrete reproduction steps are:
1. Flash the firmware and boot;
2. Leave the phone idle for a while until the processes stabilize;
3. Run an I/O benchmark with AndroBench;
4. Use an internal aging tool to fill the phone's storage to 90%, then delete down to about 60%;
5. Fill it back up to 90%;
6. Run AndroBench again: the write speed has degraded badly (sequential write and random write drop by 24% and 36% respectively).
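The fill/delete cycle in steps 4 and 5 can be sketched roughly as below (a minimal sketch only: the internal aging tool is not public, so the function names, the `/data` path, and the occupancy arithmetic are my own illustration):

```python
import os

def bytes_to_write(total_bytes: int, free_bytes: int, target_used_frac: float) -> int:
    """How many more bytes must be written for used space to reach the target fraction."""
    used = total_bytes - free_bytes
    target_used = int(total_bytes * target_used_frac)
    return max(0, target_used - used)

def current_usage(path="/data"):
    """(total_bytes, free_bytes) of the filesystem containing `path`."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks, st.f_frsize * st.f_bavail

# Example: a 32 GB partition with 20 GB free needs ~16.8 GB more data to
# reach 90% occupancy; deleting back to 60% and refilling to 90% repeats
# the same computation with a different target fraction.
total, free = 32 * 1024**3, 20 * 1024**3
print(bytes_to_write(total, free, 0.90))
```

The point of the 90% → 60% → 90% cycle is to fragment the free space, which is exactly what a single large fill file would not do.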
Interpretation:
As everyone knows, eMMC storage certainly loses some I/O performance at high occupancy, but what makes this problem strange is:
1. If the occupied space is one large file, the degradation is not this severe;
2. If the occupied space is one large file, even a severe degradation only occurs probabilistically;
3. With the environment filled as above, however, the problem persists across a reboot;
4. It even persists after the fill data has been completely deleted.
The configuration of the machine showing this problem is as follows:
SoC:MT6761D ([email protected] x 4)
Memory:2GB + 32GB
OS:Android Q
File System:F2FS
Data Encryption:FBE
Some time-consuming attempts that produced no result:
1. Switching the encryption to FDE: the problem persists;
2. Switching the file system back to EXT4: the problem persists.
Analysis:
1. Based on experience, the first thing to rule out is a hardware limitation, because if the root cause is the storage device itself, all further analysis is wasted effort.
So start by looking at the blockio information closest to the device (at /d/blockio, /d/blocktag/mmc/blockio, /sys/kernel/debug/blockio, or /sys/kernel/debug/blocktag/mmc/blockio):
When the speed is normal, the write information recorded by blockio looks like this:
[ 3771.970446]mmc.q:0.wt:80203,14536704,177.wl:17%,177573164,1000334925,3549.vm:995664,40,2992780,0,2990396,13222....
[ 3772.970512]mmc.q:0.wt:85263,15192064,174.wl:17%,174295678,1000066310,3709.vm:995664,40,2992780,0,2990396,13222....
[ 3773.970946]mmc.q:0.wt:86204,15536128,176.wl:17%,176434802,1000432849,3787.vm:995664,24,2992796,4,2990424,13222....
[ 3774.971025]mmc.q:0.wt:85943,15577088,177.wl:17%,177265870,1000079156,3803.vm:995664,24,2992796,0,2990428,13222....
When the performance degradation occurs, the information looks like this:
[ 739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,118875025,1014868695,2459.vm:960640,56,1105952,0,1367124,12449....
[ 740.390092]mmc.q:0.wt:89297,10424320,114.wl:11%,114144052,1032285848,2544.vm:960640,28,1105960,36,1367124,12449....
[ 741.391601]mmc.q:0.wt:88104,10285056,114.wl:11%,114994275,1001509541,2486.vm:960648,0,1106056,0,1367284,12449....
[ 742.422869]mmc.q:0.wt:83245,10399744,122.wl:11%,122012438,1031268926,2539.vm:960640,44,1106104,0,1367288,12449....
The meaning of each field is documented in FAQ21831 on MTK Online, excerpted here:
| Type | Format | Examples | Description |
| --- | --- | --- | --- |
| Storage Type and Request Queue | (ufs\|mmc).(0~9) | mmc.q:0. mmc.q:1. ufs.q:0. | mmc.q:0 => eMMC, mmc.q:1 => T-Card, ufs.q:0 => UFS |
| Workload | wl:(0~99)% | wl:49% | Percentage of time that the UFS/MMC driver is executing I/O. wl:49% => ~490 ms out of 1000 ms (49%) is executing I/O. |
| Write Throughput | wt:speed,size,time | wt:2442,6004736,2400 | speed: KB/s; size: written size in bytes |
| Read Throughput | rt:speed,size,time | rt:38805,27418624,690 | speed: KB/s; size: read size in bytes |
| Virtual Memory Status | vm:fp,fd,nd,wb,nw | vm:0,178336,22776,0,59272 | Storage-related virtual memory statistics in KB. FilePages (fp): number of pages used as file cache. FileDirty (fd): number of delta dirty file pages that need to be written to disk. NumDirtied (nd): accumulated number of dirtied pages. WriteBack (wb): number of pages being written back to disk. NumWritten (nw): accumulated number of written pages. |
| Page PID logger | {pid:write_count,write_length,read_count,read_length} | {06643:00000:00000000:00522:02138112} {06740:00000:00000000:00174:00712704} | I/O statistics of each process. pid: process id. write_count: number of pages the process has written. write_size: written size in bytes. read_count: number of pages the process has read. read_size: read size in bytes. |
| CPU Status | cpu:user,nice,system,idle,iowait,irq,softirq | cpu:48146707,7679908,61820079,114165927,4175379,0,125657 | Currently not used. |
Each line describes the read/write state of cmdq over one second. I captured this during random writes, so take this line as an example:
[ 739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,118875025,1014868695,2459.vm:960640,56,1105952,0,1367124,12449....
It reads as follows:
Within this one second, 10072064 bytes were written; the writes took 118 ms, occupying 11% of the second, for a computed speed of 83355 KB/s.
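This interpretation can be mechanized with a small parser (a sketch assuming only the wt/wl field layout from the FAQ excerpt; the regex and the dictionary keys are my own):

```python
import re

# Matches the "wt" (write throughput) and "wl" (workload) fields of a
# blockio line, e.g. "[ 739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,..."
BLOCKIO_RE = re.compile(r"wt:(\d+),(\d+),(\d+)\.wl:(\d+)%")

def parse_blockio(line: str) -> dict:
    m = BLOCKIO_RE.search(line)
    if not m:
        raise ValueError("no wt/wl fields found")
    speed_kbps, size_bytes, time_ms, workload = map(int, m.groups())
    return {
        "speed_kbps": speed_kbps,   # reported write speed, KB/s
        "size_bytes": size_bytes,   # bytes written in this one-second window
        "time_ms": time_ms,         # time spent writing within the window
        "workload_pct": workload,   # time_ms as a percentage of the second
    }

slow = parse_blockio("[ 739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,...")
# Cross-check: speed == size / time, i.e. (size_bytes / 1024) / (time_ms / 1000).
derived = slow["size_bytes"] / 1024 / (slow["time_ms"] / 1000)
print(slow["speed_kbps"], round(derived))  # the two agree to within 1 KB/s
```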
Compare this with the normal capture: in the degraded case, clearly less data is written per second, but because the write time is shorter as well, the computed write speed comes out about the same.
Since wl (workload) is defined as the fraction of each second spent writing, a hardware limitation would show up here as a very high wl: the device would be writing for the whole second, and because it is slow, every write would take a long time.
But in fact, when the problem occurs, wl is lower than in the normal case, which means the pressure is not in the block layer but somewhere above it.
In other words, the disk is writing without strain; cmdq simply receives and dispatches only this small amount of work each second, which is why the upper layers see a slowdown.
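The argument can be checked against the numbers from the two captures above (only the wt/wl values already shown are used; the helper function is my own):

```python
def describe(name, sample):
    speed, size, time_ms, wl = sample
    busy_frac = time_ms / 1000          # fraction of the second spent writing
    print(f"{name}: wrote {size} B in {time_ms} ms "
          f"({busy_frac:.1%} busy, reported wl {wl}%), {speed} KB/s")

# (speed KB/s, bytes written, write time ms, workload %) per blockio second.
# If the eMMC itself were the bottleneck, the degraded case would be busy
# close to 100% of the second. Instead it is *less* busy than the normal case:
normal = (80203, 14536704, 177, 17)    # from the healthy capture
degraded = (83355, 10072064, 118, 11)  # from the degraded capture
describe("normal  ", normal)
describe("degraded", degraded)
# The per-write speed is about the same; the degraded case simply receives
# less work per second, so the bottleneck sits above the block layer.
```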
Summary: the problem lies in the layers above cmdq, not in the hardware device.
The next chapter will continue pinpointing the problem with ftrace.