operation in /dev/shm causes overflow
Curious: as you're running this application, what does df -h /dev/shm show your RAM usage to be?
tmpfs
By default it's typically set up with 50% of whatever amount of RAM the system physically has. This is documented on kernel.org, under the filesystem documentation for tmpfs, and it's also mentioned in the mount man page.
excerpt from mount man page
The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is the lower.
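If the 50% default isn't appropriate for a workload, the cap can be overridden with the size= mount option. A hedged sketch as a config fragment; the 2g value is illustrative, not from the original post:

```shell
# /etc/fstab fragment: override the default 50%-of-RAM cap for /dev/shm.
# The size value (2g) is illustrative only.
tmpfs  /dev/shm  tmpfs  defaults,size=2g  0  0

# After editing fstab, the new size takes effect on reboot, or
# immediately with:  mount -o remount /dev/shm   (run as root)
```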
confirmation
On my laptop with 8GB of RAM I have the following setup for /dev/shm:
$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.4M 3.9G 1% /dev/shm
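The 50% relationship can be checked directly against /proc/meminfo. A minimal sketch, assuming a Linux system where /dev/shm is mounted as tmpfs:

```shell
#!/bin/sh
# Compare /dev/shm's total size to physical RAM (both in kB).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shm_kb=$(df -k /dev/shm | awk 'NR==2 {print $2}')
echo "MemTotal: ${mem_kb} kB"
echo "/dev/shm: ${shm_kb} kB"
# With the default mount options, shm_kb should come out to
# roughly mem_kb / 2.
```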
What's going on?
I think what's happening is that, in addition to being allocated 50% of your RAM to start with, you're consuming the entire 50% over time and pushing your /dev/shm space into swap, along with the other 50% of RAM.
Note that one other characteristic of tmpfs vs. ramfs is that tmpfs can be pushed into swap if needed:
excerpt from geekstuff.com
Table: Comparison of ramfs and tmpfs
Experimentation                            tmpfs               ramfs
---------------                            -----               -----
Fill maximum space and continue writing    Will display error  Will continue writing
Fixed size                                 Yes                 No
Uses swap                                  Yes                 No
Volatile storage                           Yes                 Yes
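Whether tmpfs pages are being pushed toward swap can be observed in /proc/meminfo: on Linux, the Shmem field counts shared-memory/tmpfs pages currently resident in RAM, while swapped-out pages show up as reduced SwapFree. A rough check:

```shell
#!/bin/sh
# Rough look at tmpfs memory/swap pressure via /proc/meminfo.
# Shmem = shared memory and tmpfs pages currently resident in RAM.
grep -E '^(MemFree|Shmem|SwapTotal|SwapFree):' /proc/meminfo
```

Watching Shmem while a /dev/shm-heavy workload runs should show it grow with the files and shrink as they are deleted or swapped out.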
At the end of the day it's a filesystem implemented in RAM, so I would expect it to act a little like both. What I mean by this is that some of the physical pages of memory are used for the inode table, and some for the actual space consumed by the files and directories. Typically when you free space on a HDD, you don't actually clear the physical space, just the entries in the inode table, which say that the space consumed by a specific file is now available. So from the RAM's perspective, the space consumed by the files is just dirty pages in memory, and it will dutifully swap them out over time.
It's unclear whether tmpfs does anything special to clean up the actual RAM used by the filesystem it provides. I saw mention in several forums that it was taking upwards of 15 minutes for people's systems to "reclaim" space for files they had deleted in /dev/shm.
Perhaps this paper I found on tmpfs, titled tmpfs: A Virtual Memory File System, will shed more light on how it is implemented at the lower level and how it functions with respect to the VMM. The paper was written specifically for SunOS but might hold some clues.
experimentation
The following contrived tests seem to indicate that /dev/shm is able to clean itself up.
experiment #1
Create a directory with a single file inside it, and then delete the directory 1000 times.
initial state of /dev/shm
$ df -k /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3993744 5500 3988244 1% /dev/shm
fill it with files
$ for i in `seq 1 1000`;do mkdir /dev/shm/sam; echo "$i" \
> /dev/shm/sam/file$i; rm -fr /dev/shm/sam;done
final state of /dev/shm
$ df -k /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3993744 5528 3988216 1% /dev/shm
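The loop above can be wrapped into a small re-runnable script that records df usage before and after the churn; a sketch (the sam directory name is arbitrary, carried over from the experiment above):

```shell
#!/bin/sh
# Repeat experiment #1: create and delete 1000 single-file
# directories in /dev/shm, then report the change in used space.
target=/dev/shm
before=$(df -k "$target" | awk 'NR==2 {print $3}')
i=1
while [ "$i" -le 1000 ]; do
    mkdir "$target/sam"
    echo "$i" > "$target/sam/file$i"
    rm -rf "$target/sam"
    i=$((i + 1))
done
after=$(df -k "$target" | awk 'NR==2 {print $3}')
echo "used before: ${before} kB, used after: ${after} kB"
```

If tmpfs is cleaning up after itself, the before and after figures should stay within a few kB of each other.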
experiment #2
Create a directory with a single 50MB file inside it, and then delete the directory 300 times.
fill it with 50MB files of random garbage
$ start_time=`date +%s`
$ for i in `seq 1 300`;do mkdir /dev/shm/sam; \
dd if=/dev/random of=/dev/shm/sam/file$i bs=52428800 count=1 > \
/dev/shm/sam/file$i.log; rm -fr /dev/shm/sam;done \
&& echo run time is $(expr `date +%s` - $start_time) s
...
8 bytes (8 B) copied, 0.247272 s, 0.0 kB/s
0+1 records in
0+1 records out
9 bytes (9 B) copied, 1.49836 s, 0.0 kB/s
run time is 213 s
final state of /dev/shm
Again, there was no noticeable increase in the space consumed by /dev/shm.
$ df -k /dev/shm
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3993744 5500 3988244 1% /dev/shm
conclusion
I didn't notice any discernible effects from adding files and directories to my /dev/shm. Running the above multiple times didn't seem to have any effect on it either, so I don't see any issue with using /dev/shm in the manner you've described.
Daniel
Updated on September 18, 2022

Comments
-
Daniel over 1 year
I am repeating tens of thousands of similar operations in /dev/shm, each with a directory created, files written, and then removed. My assumption used to be that I was creating directories and removing them in place, so the memory consumption had to be quite low. However, it turned out the usage was rather high and finally caused a memory overflow. So my question is: with operations like

mkdir /dev/shm/foo
touch /dev/shm/foo/bar
[edit] /dev/shm/foo/bar
....
rm -rf /dev/shm/foo

will it finally cause a memory overflow? And if it does, why, since it seems to be removing them in place?
Note: this is tens of thousands of similar operations.
-
Admin over 2 years
Kernel bug perhaps related? unix.stackexchange.com/questions/309898/…
-
Daniel almost 11 years
Thanks @sim, that's an impressive test, though I think I now have a sense of where it goes wild. It may be due to the writing of one file, which grabbed all the available memory. I will double-check it and possibly come back to you later.