what are pagecache, dentries, inodes?

20,962

Solution 1

At the risk of some oversimplification, let me try to explain in what appears to be the context of your question, because there are multiple possible answers.

It appears you are working with memory caching of directory structures. In this context, an inode is a data structure that represents a file, and a dentry is a data structure that represents a directory entry (a name within a directory). These structures are used to build a memory cache that represents the file structure on disk. To get a directory listing, the OS can go to the dentry cache: if the directory is there, list its contents (a series of inodes); if not, go to the disk and read it into memory so that it can be used again.

The page cache could contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory mapped files, paged areas of executables--anything that the OS could hold in memory from a file.

Your commands flush these buffers.
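To see the current size of these three caches on a live Linux system, you can read them straight from procfs. A quick sketch (the files below are standard Linux interfaces):

```shell
#!/bin/sh
# Page cache: amount of file data currently kept in RAM.
grep '^Cached:' /proc/meminfo

# Dentry cache: total dentries allocated and how many are currently unused.
cat /proc/sys/fs/dentry-state

# Inode cache: total inodes allocated and how many of those are free.
cat /proc/sys/fs/inode-nr
```

Watching these values before and after an `echo 3 > /proc/sys/vm/drop_caches` shows exactly what that command discards.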

Solution 2

I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?

user3344003 already gave an exact answer to that specific question, but it's still important to note those memory structures are dynamically allocated.

When there is no better use for "free" memory, the kernel uses it for those caches, but it automatically purges and frees them when some other, "more important" application wants to allocate memory.

And no, those kernel caches don't affect any caches maintained by applications themselves (including redis and memcached), which live in the applications' own process memory.

My Amazon EC2 server RAM was getting filled up over the days - from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these caches. Then memory usage drops to 6% again.

Probably you're misinterpreting the situation: your system may just be making efficient use of its resources.

To simplify things a little bit: "free" memory can also be seen as "unused", or, even more dramatically, as a waste of resources: you paid for it, but don't make use of it. That's a very uneconomical situation, and the Linux kernel tries to make some "more useful" use of your "free" memory.

Part of its strategy involves using it to avoid various kinds of disk I/O through various dynamically sized memory caches. A quick cache access saves a "slow" disk access, so that's often a useful trade.

As soon as a "more important" process wants to allocate memory, the Linux kernel voluntarily frees those caches and makes the memory available to the requesting process. So there's usually no need to "manually free" those caches.
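The kernel reports this distinction itself: in /proc/meminfo, MemFree counts only completely idle pages, while MemAvailable estimates how much memory could be handed out without swapping, including the reclaimable caches. A quick way to compare the two (a sketch):

```shell
#!/bin/sh
# MemFree = completely idle RAM; MemAvailable = idle RAM plus caches the
# kernel would reclaim on demand - usually much larger on a busy system.
awk '/^MemFree:|^MemAvailable:/ {print $1, $2, $3}' /proc/meminfo
```

On a long-running server, a low MemFree with a high MemAvailable is the normal, healthy state - not a leak.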

The Linux kernel may even decide to swap out memory of an otherwise idle process to disk (swap space), freeing RAM to be used for "more important" tasks, probably also including to be used as some cache.

So as long as your system is not actively swapping in/out, there's little reason to manually flush caches.
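One quick way to check whether the system is actively swapping right now is to sample the kernel's cumulative swap counters twice and look at the delta (a sketch using /proc/vmstat; `vmstat 1` shows the same information in its si/so columns):

```shell
#!/bin/sh
# pswpin/pswpout are cumulative counts of pages swapped in/out since boot;
# if the one-second delta stays (near) zero, no active swapping is going on.
s1=$(awk '/^pswpin|^pswpout/ {sum += $2} END {print sum}' /proc/vmstat)
sleep 1
s2=$(awk '/^pswpin|^pswpout/ {sum += $2} END {print sum}' /proc/vmstat)
echo "pages swapped in/out during the last second: $((s2 - s1))"
```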

A common reason to "manually flush" those caches is benchmark comparison: your first benchmark run may execute with "empty" caches and give poor results, while a second run shows much "better" results (thanks to the pre-warmed caches). By flushing the caches before every benchmark run, you remove the "warming" effect, so your runs can be compared more fairly with each other.
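A minimal sketch of that benchmark pattern follows. Assumptions: the drop_caches write needs root, and BIGFILE is a hypothetical placeholder - point it at whatever file your benchmark actually reads.

```shell
#!/bin/sh
# Flush caches so every benchmark run starts "cold". Assumption: run as root
# for the drop_caches write; BIGFILE is a placeholder for your real test file.
BIGFILE="${BIGFILE:-/etc/services}"

sync                                  # write dirty pages back to disk first
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches # drop pagecache + dentries + inodes
fi

t0=$(date +%s%N); cat "$BIGFILE" > /dev/null; t1=$(date +%s%N)
echo "cold read: $(( (t1 - t0) / 1000000 )) ms"

t0=$(date +%s%N); cat "$BIGFILE" > /dev/null; t1=$(date +%s%N)
echo "warm read: $(( (t1 - t0) / 1000000 )) ms (served from page cache)"
```

Note the `sync` first: drop_caches only discards clean (already written) pages, so flushing dirty pages beforehand makes the drop effective.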

Solution 3

A common misconception is that "free memory" is important. Memory is meant to be used.

So let's clear that up:

  • There's used memory, which is where important data is stored; if that reaches 100%, you're dead.
  • Then there's cache/buffer memory, which is used as long as there is space to do so. It is optional memory, used mostly to access disk files faster. If you run out of free memory, it will simply free itself and let you access the disk directly.
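You can see that split in free(1)'s "used" and "buff/cache" columns, or compute it from /proc/meminfo. A sketch:

```shell
#!/bin/sh
# Split RAM into the two categories above: memory the kernel could reclaim
# (buffers + page cache) versus memory that is genuinely unavailable.
awk '
  /^MemTotal:/     { total = $2 }
  /^MemAvailable:/ { avail = $2 }
  /^Buffers:/      { buf   = $2 }
  /^Cached:/       { cache = $2 }
  END {
    printf "total:       %d kB\n", total
    printf "buff/cache:  %d kB (reclaimable - not a problem)\n", buf + cache
    printf "unavailable: %d kB (worry only if this approaches total)\n", total - avail
  }' /proc/meminfo
```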

Clearing cached memory as you suggest is in most cases useless and means you're deactivating an optimization, so you'll get a slowdown.

If you really run out of memory, that is if your "used memory" is high, and you begin to see swap usage, then you must do something.

HOWEVER: there's a known bug on AWS instances, with the dentry cache eating memory for no apparent reason. It's clearly described and solved in this blog.

My own experience with this bug is that the "dentry" cache consumes both "used" and "cached" memory and does not seem to release it in time, eventually causing swapping. The bug itself can consume resources anyway, so you need to look into it.

Author: Rakib

DevOps engineer, web & mobile app backend engineer, cloud computing solutions architect, tech trainer for multiple tech startups & enterprises specializing in CloudInfra, RESTful APIs, Microservices, ETL.

Updated on July 16, 2022

Comments

  • Rakib, almost 2 years ago

    Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system:


    To free pagecache:

    # echo 1 > /proc/sys/vm/drop_caches
    

    To free dentries and inodes:

    # echo 2 > /proc/sys/vm/drop_caches
    

    To free pagecache, dentries and inodes:

    # echo 3 > /proc/sys/vm/drop_caches
    

    I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?

    Do freeing them up also remove the useful memcached and/or redis cache?

    --

    Why I am asking this question: my Amazon EC2 server RAM was getting filled up over the days - from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these caches. Then memory usage drops to 6% again.