Do tmpfs and devtmpfs share the same memory region?


Solution 1

For all the tmpfs mounts, "Avail" is an artificial limit. The default size for a tmpfs mount is half your RAM, and it can be adjusted at mount time (man mount, scroll to the tmpfs section).
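For example (a minimal sketch; the mount point and sizes here are arbitrary), the limit can be set with the size= mount option and changed later with a remount:

mkdir -p /mnt/mytmp
# cap this instance at 256 MB instead of the 50%-of-RAM default
mount -t tmpfs -o size=256M tmpfs /mnt/mytmp
# the limit can be changed on a live mount without unmounting
mount -o remount,size=512M /mnt/mytmp
df -h /mnt/mytmp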

The mounts don't share the same space, in the sense that if you filled the /dev/shm mount, /dev would not show any more "Used", and filling /dev/shm would not necessarily stop you from writing data to /dev.

(Someone could contrive tmpfs mounts that share space by bind-mounting from a single tmpfs. But that's not how any of these mounts are set up by default).
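As a rough illustration of that contrived setup (hypothetical mount points, not how any distribution configures these mounts), bind mounts carved out of a single tmpfs all draw from that one instance's limit:

mkdir -p /mnt/shared /mnt/a /mnt/b
# one 256 MB tmpfs; everything written under either bind mount
# counts against the same 256 MB
mount -t tmpfs -o size=256M tmpfs /mnt/shared
mkdir /mnt/shared/a /mnt/shared/b
mount --bind /mnt/shared/a /mnt/a
mount --bind /mnt/shared/b /mnt/b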

They do share the same space, in that they're both backed by system memory. If you tried to fill both /dev/shm and /dev, you would be allocating space equal to your physical RAM. Assuming you have swap space, this is entirely possible, but it's generally not a good idea and would end poorly.


This doesn't fit well with the idea of having multiple user-accessible tmpfs mounts, e.g. /dev/shm plus /tmp on many systems. It would arguably be better if the two large mounts shared the same space. (POSIX SHM is literally an interface for opening files on a user-accessible tmpfs.)

/dev, /run and /sys/fs/cgroup are system directories. They should stay tiny and not be used for sizeable data, so they shouldn't cause a problem. Debian (8) seems to be a bit better at setting limits for them: on a system with 500 MB of RAM I see them limited to 10, 100 and 250 MB respectively, with another 5 MB for /run/lock.

/run has about 2 MB used on my systems. systemd-journal is a substantial part of that, and by default it may grow to 10% of "Avail" (the RuntimeMaxUse option), which doesn't fit my model.

I would bet that's why you've got 50 MB there. Allowing the equivalent of 5% of physical RAM for log files... personally it's not a big problem in itself, but it's not pretty and I'd call it a mistake / oversight. It would be better if the cap were set on the same order as that 2 MB mark.

At the moment this suggests the size of /run should be set manually on every system, if you want to prevent death by a thousand bloats. Even 2% (from my Debian example) seems presumptuous.
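If you do want to set caps yourself, two knobs are relevant on systemd-based systems (the sizes below are only example values): journald's RuntimeMaxUse= limits the volatile journal kept under /run, and the /run tmpfs itself can be shrunk with a remount:

# cap the in-RAM journal via a journald drop-in (16M is an arbitrary example)
mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nRuntimeMaxUse=16M\n' > /etc/systemd/journald.conf.d/size.conf
systemctl restart systemd-journald

# shrink the /run tmpfs itself; effective immediately, lasts until reboot
mount -o remount,size=64M /run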

Solution 2

Each tmpfs instance is independent, so it's possible to overallocate memory. If you fill up the entire memory with large files on tmpfs, the system will eventually halt because no more memory is available and none can be freed (short of deleting tmpfs files or unmounting the file system).

tmpfs can use swap partitions to swap out data, but even that does not help if you're actively reading and writing those files, at which point they have to be swapped back in.

Basically, systems that have lots of tmpfs instances mounted usually operate under the assumption that, although tmpfs is there, it won't actually be filled to the limit.


If you want to try this - preferably on a Live CD with nothing mounted - then it works like this:

mkdir a b c
mount -t tmpfs tmpfs a
mount -t tmpfs tmpfs b
mount -t tmpfs tmpfs c
truncate -s 1T a/a b/b c/c
shred -v -n 1 a/a b/b c/c

That creates three tmpfs instances; by default each has a 50%-of-RAM limit, so 150% in total (not counting swap; if you do have swap, feel free to add d e f ...).

Output of shred will look something like this:

shred: a/a: pass 1/1 (random)...
shred: a/a: error writing at offset 1049104384: No space left on device
shred: b/b: pass 1/1 (random)...
# system hangs indefinitely at this point, without swap it never reaches c/c #

Solution 3

(1) All tmpfs-based file systems share the OS's available virtual memory as their backing store. devtmpfs happens to use the same space, but unlike the former it doesn't contain regular file data, so it shouldn't grow.

(2) The /run/user subdirectories are created by systemd as personal, transient /tmp directories. They also share the same virtual memory space with all other tmpfs-based file systems. The fact that they appear smaller is due to a cap put in place to prevent a single user from affecting all other users by filling this directory.
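On systemd systems that cap typically comes from logind's RuntimeDirectorySize= setting (10% of physical RAM by default, which matches the 1.2G mounts in the question). A sketch of inspecting and overriding it, with an arbitrary example value:

# show the current per-user limits
df -h /run/user/*

# override the default via a logind drop-in; applies to new sessions
mkdir -p /etc/systemd/logind.conf.d
printf '[Login]\nRuntimeDirectorySize=20%%\n' > /etc/systemd/logind.conf.d/runtime-size.conf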

Comments

  • Nan Xiao (over 1 year)

    My system disk usage is like this:

    # df -h
    Filesystem             Size  Used Avail Use% Mounted on
    /dev/mapper/rhel-root   50G   39G   12G  77% /
    devtmpfs               5.8G     0  5.8G   0% /dev
    tmpfs                  5.8G  240K  5.8G   1% /dev/shm
    tmpfs                  5.8G   50M  5.8G   1% /run
    tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
    /dev/mapper/rhel-home  1.3T  5.4G  1.3T   1% /home
    /dev/sda2              497M  212M  285M  43% /boot
    /dev/sda1              200M  9.5M  191M   5% /boot/efi
    tmpfs                  1.2G   16K  1.2G   1% /run/user/1200
    tmpfs                  1.2G   16K  1.2G   1% /run/user/1000
    tmpfs                  1.2G     0  1.2G   0% /run/user/0
    

    I have 2 questions about devtmpfs and tmpfs:
    (1)

    devtmpfs               5.8G     0  5.8G   0% /dev
    tmpfs                  5.8G  240K  5.8G   1% /dev/shm
    tmpfs                  5.8G   50M  5.8G   1% /run
    tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
    

    All of the above sizes are 5.8G. Do they share the same memory space?

    (2)

    tmpfs                  1.2G   16K  1.2G   1% /run/user/1200
    tmpfs                  1.2G   16K  1.2G   1% /run/user/1000
    tmpfs                  1.2G     0  1.2G   0% /run/user/0
    

    Does each user have his own dedicated memory space, rather than shared space, under /run/user?

  • user2948306 (about 8 years)
    Yes, sorry! If you think the edit is better removed, please do.
  • CMCDragonkai (over 4 years)
    Is there a way to recover from the deadlock without restarting the computer?