Determining the source of memory cache usage
To find out what is using the "memory cache", use slabtop. Its -s option sorts the output, and c sorts by cache size, so use:
sudo slabtop -s c
For me, most of the cache is related to inode_cache.
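If you prefer a one-shot snapshot over the interactive display, slabtop also has -o (--once), which prints the list once and exits; a minimal sketch (the head count is arbitrary):
# Print a single snapshot of slab caches, sorted by cache size
sudo slabtop -o -s c | head -n 15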
As for "swap", you can use the status file in each process directory under /proc to find out which processes are using it. For a specific program:
cd /proc/$(pgrep -x programname)
grep -i swap status
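The same check also works as a single command; "cron" below is just an example name, and note that pgrep may print more than one PID if several instances are running:
# Show the VmSwap line for one named process (the name is an example)
grep -i swap "/proc/$(pgrep -x cron)/status"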
To get a list of every process's swap usage (a ranked variant follows the sample output below):
cd /proc
find -maxdepth 2 -iname status -exec grep -i -e name -e swap {} \; -exec echo "---" \;
The output will be similar to:
---
Name: atd
VmSwap: 0 kB
---
Name: rsyslogd
VmSwap: 0 kB
---
Name: cron
VmSwap: 0 kB
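If you want that list ranked, here is a small sketch that collects the VmSwap value of every process and sorts descending (field positions assumed from the status format shown above; kernel threads have no VmSwap line and are skipped):
# Print "kB name" pairs for all processes, largest swap users first
for f in /proc/[0-9]*/status; do
  awk '/^Name:/{n=$2} /^VmSwap:/{print $2, n}' "$f" 2>/dev/null
done | sort -rn | head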
Comments
-
pelu over 1 year: I'm troubleshooting a machine that slows down due to heavy swap usage after running for several days. The system has 16 GB of RAM and should generally be fine, except that a large volume of the RAM is being used by cache and not freed when needed. Continued use grinds the system to a halt while as much as 12 GB are tied up in cache.
Before you mention it, I'm well aware of Linux Ate My Ram.
A typical display of free after 3-4 days of running is:
              total        used        free      shared  buff/cache   available
Mem:            15G        4.4G        184M        280M         10G        116M
Swap:           15G        7.8G        8.1G
To troubleshoot, I've dropped swappiness to zero.
$ cat /proc/sys/vm/swappiness
0
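(For reference, this value can be changed at runtime with sysctl and persisted in /etc/sysctl.conf; a minimal sketch:)
# Set swappiness for the running system
sudo sysctl vm.swappiness=0
# Persist the setting across reboots
echo 'vm.swappiness=0' | sudo tee -a /etc/sysctl.conf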
Moreover, I'm unable to manually call a cache flush with any meaningful effect.
$ sudo su -c "free -h && sync && echo 3 > /proc/sys/vm/drop_caches && free -h"
              total        used        free      shared  buff/cache   available
Mem:            15G        4.4G        166M        280M         10G        104M
Swap:           15G        7.8G        8.1G
              total        used        free      shared  buff/cache   available
Mem:            15G        4.4G        186M        280M         10G        115M
Swap:           15G        7.8G        8.1G
I'm wondering if it may have to do with the on-board video on Skylake. Regardless, I'm unsure how to continue profiling the issue; most internet resources say that cache usage is normal and will be freed as needed, when clearly it is not. Where should I look next?
-
pelu almost 7 years: slabtop seems to have identified the culprit. I have ~12 GB tied up in a kernel-space memory leak. Another useful tool here to highlight the issue was smem -tw.
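For anyone else following along: smem's -w switch reports system-wide memory areas (kernel, userspace, free) instead of per-process use, and -t appends a totals row, so a kernel-space leak like the one above should show up as a large noncache figure for kernel dynamic memory; a minimal usage sketch:
# System-wide memory areas; a leak appears under "kernel dynamic memory"
sudo smem -tw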
-
Ravexina almost 7 years: I was going to edit and mention smem too, good that you already know about it :-)