cat /dev/null emptied my log file but size did not change

Solution 1

Assuming you meant to say

cat /dev/null > file_log.txt

or

cp /dev/null file_log.txt

which has the same effect, the answer is that the process that has the file open for writing opened it without O_APPEND, or it sets the offset into the file arbitrarily; in either case a sparse file is created.

The manual page for write(2) explains that pretty clearly:

For a seekable file (i.e., one to which lseek(2) may be applied, for example, a regular file) writing takes place at the file offset, and the file offset is incremented by the number of bytes actually written. If the file was open(2)ed with O_APPEND, the file offset is first set to the end of the file before writing. The adjustment of the file offset and the write operation are performed as an atomic step.

The said offset is a property of the corresponding file descriptor of the writing process: if another process truncates the file, or itself writes to the file, this will not have any effect on that offset. (Moreover, if the same process opens the file again for writing without O_APPEND, it will receive a different file descriptor, and writing through the new file descriptor will behave the same way.)

Suppose that process P opens a file for writing without appending, yielding file descriptor fd. Then the effect on the file size (as stat() reports it) of truncating the file (e.g. by copying /dev/null to it) will be undone as soon as P writes to fd. Specifically, on write() to fd the system will move ("seek") to the offset associated with fd, filling the space from the current end of file (possibly the beginning, if it was entirely truncated) up to that offset with zeros. However, if the file has grown larger in the meantime, writing to fd will overwrite the content of the file, beginning at the offset.
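
This can be reproduced entirely in a shell (a sketch; the file name is illustrative). The shell plays the role of process P by holding fd 3 open without O_APPEND, while `: > file` stands in for the truncating process:

```shell
exec 3> demo_log.txt    # open fd 3 for writing WITHOUT O_APPEND
echo 12345 >&3          # writes 6 bytes; fd 3's offset is now 6
: > demo_log.txt        # truncate via a *different* open(): size drops to 0
echo abcde >&3          # writes at offset 6, leaving a 6-byte hole
ls -l demo_log.txt      # stat() reports 12 bytes again
exec 3>&-               # close fd 3
```

The truncation never touches fd 3's offset, so the very next write re-extends the file.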

A sparse file is a file that contains "holes", i.e. the system "knows" that there are large regions of zeroes which are not actually written to disk. This is why du and ls disagree: du looks at the actual disk usage, while ls simply uses stat() to read the file size attribute.
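
You can create a sparse file directly and watch the two tools disagree (a sketch; the file name is illustrative, and the du figure depends on the filesystem supporting holes):

```shell
# Write a single byte at offset 1 MiB; everything before it is a hole.
dd if=/dev/zero of=sparse.bin bs=1 count=1 seek=1048575 2>/dev/null
ls -l sparse.bin    # stat() size: 1048576 bytes
du -k sparse.bin    # actual disk usage: typically just a few KiB
```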

Remedy: restart the process. If possible, rewrite the part of the program that opens the file to use O_APPEND (or mode "a" when using fopen()).
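
In shell terms, `>>` opens with O_APPEND, so a writer using it is immune to the problem (same sketch as above, illustrative file name):

```shell
exec 3>> append_log.txt   # open fd 3 WITH O_APPEND
echo 12345 >&3            # 6 bytes written
: > append_log.txt        # truncate: size drops to 0
echo abcde >&3            # O_APPEND seeks to EOF (offset 0) first
ls -l append_log.txt      # reports 6 bytes, no hole
exec 3>&-
```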

Solution 2

cat /dev/null is a no-op, as it outputs exactly nothing, so cat /dev/null file_log.txt does not empty the file. (cp /dev/null file_log.txt, on the other hand, does truncate it, as the first solution notes.)

A simpler way to blank a file's content is to redirect the output of the null command to it:

: > file

or even, with most shells, to use a redirection alone, without specifying any command:

> file
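
A quick sketch of the blanking itself (illustrative file name):

```shell
printf 'some log data\n' > file.txt   # file has content
: > file.txt                          # blank it with the null command
wc -c file.txt                        # reports 0 bytes
```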

The fact that ls still reports a large size is just due to the writing process seeking to what it expects the end of the file to be before writing. Because there is "nothing" before that seek point, this shouldn't hurt. The only risk is if you make backups or copies of the affected file with a tool that is not aware of sparse files.

Note that restarting the writing process won't shrink the reported size, as the file will stay "holey".

If you really want the reported file size to be zeroed, you need to stop (kill) the writing process before blanking the file.

Solution 3

cat /dev/null file_log.txt

This only makes cat read /dev/null, then immediately read file_log.txt, and output the result to stdout, your screen. It won't delete anything at all.
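
You can see this for yourself: cat concatenates its arguments to stdout and leaves the files untouched (a sketch; the file name is illustrative):

```shell
printf 'hello\n' > file_log_demo.txt
cat /dev/null file_log_demo.txt   # prints "hello"; nothing is deleted
wc -c file_log_demo.txt           # still 6 bytes
```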

If you want to test this, run cat /dev/null non_existent_file and you will see that it errors out.

The correct way to truncate a file is to use shell redirection (or any kind of editor to remove the lines). What you intended to do was:

cat /dev/null > file_log.txt

which corresponds to the first solution.


Author: user78960

Updated on September 18, 2022
Comments

  • user78960, over 1 year

    I'm quite new to Unix. Using Solaris 10 I faced the below issue.

    There is a large log file with size 9.5G. I tried to empty the file using the below command.

    # cat /dev/null file_log.txt
    

    By doing this I regained space on the file system, but the size of the file still shows the same and is increasing. I figured a process is still writing to the log file.

    Is there a way to correct the file size? Is this going to affect my file system?