rsync with --hard-links freezes
Solution 1
My answer, which I give from hard-earned experience, is: don't do this. Don't try to copy a directory hierarchy that makes heavy use of hard links, such as one created using `rsnapshot`, `rsync --link-dest`, or similar. It won't work on anything but small datasets. At least, not reliably. (Your mileage may vary, of course; perhaps your backup datasets are much smaller than mine were.)
The problem with using `rsync --hard-links` to recreate the hard-linked structure of files on the destination side is that discovering the hard links on the source side is hard. `rsync` has to build a map of inodes in memory to find the hard links, and unless your source has relatively few files, this can and will blow up. In my case, when I learned of this problem and was looking around for alternate solutions, I tried `cp -a`, which is also supposed to preserve the hard-link structure of files in the destination. It churned away for a long time and then finally died (with a segfault, or something like that).
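To get a feel for how big that inode map would be for your tree, you can count the multiply-linked inodes yourself. This is a rough sketch using GNU findutils over a tiny demo tree; the paths are illustrative, not from the original question:

```shell
# Build a small demo tree containing one hard-linked pair (illustrative paths).
mkdir -p demo/daily.0 demo/daily.1
echo data > demo/daily.0/file1
ln demo/daily.0/file1 demo/daily.1/file1   # second link, same inode

# Count distinct multiply-linked inodes -- roughly the number of entries
# rsync's hard-link map must hold in memory for this tree.
find demo -type f -links +1 -printf '%i\n' | sort -u | wc -l
# prints 1 for this demo tree
```

Run over a real `rsnapshot` archive, that count (times the per-entry bookkeeping) is what has to fit in memory.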
My recommendation is to set aside an entire partition for your `rsnapshot` backup. When it fills up, bring another partition online. It is much easier to move hard-link-heavy datasets around as entire partitions than as individual files.
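Moving the whole partition can be sketched like this; the device path, mount point, and host name below are all hypothetical, and both ends must be unmounted while the copy runs:

```shell
# Ship the snapshot filesystem as one block-level stream instead of
# walking millions of hard-linked files. Names are hypothetical.
umount /mnt/snapshots
dd if=/dev/vg0/snapshots bs=64M status=progress | gzip -1 |
    ssh backuphost 'gzip -d > /srv/images/snapshots.img'
```

The hard-link structure travels for free, because the filesystem's own inode tables are copied verbatim.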
Solution 2
At the point where rsync seems to hang, is it actually hung or just busy? Check for CPU activity with `top` and for disk activity with `iotop -o`.

It could be busy copying over a large file. You would see this in `iotop` or similar, or in rsync's own display if you ran it with the `--progress` option.
It could also be busy scanning through lists of inodes to check for linked files. If incremental recursion is being used (the default for recursive transfers in most cases when both client and server have rsync v3.0.0 or later), it could have just hit a directory with many files and be running the link check between all the files in it and all those found previously. The `--hard-links` option can be very CPU-intensive over large sets of files (this is why it is not included in the list of options implied by the general `--archive` option). This will manifest itself as high CPU use at the time rsync seems paused/hung.
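Since `--archive` does not imply `--hard-links`, the difference is easy to see on a toy tree. A minimal sketch, assuming rsync is installed and using illustrative paths:

```shell
# One file with two hard links in the source.
mkdir -p src && echo data > src/a && ln src/a src/b

rsync -a  src/ dst_plain/   # archive mode only
rsync -aH src/ dst_links/   # archive mode plus --hard-links

stat -c %i dst_plain/a dst_plain/b   # two different inode numbers
stat -c %i dst_links/a dst_links/b   # the same inode number twice
```

Without `-H`, each link becomes an independent copy on the destination, silently multiplying the space used.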
Updated on September 17, 2022

Comments
-
Adam Matan almost 2 years
I have a large directory called `servers`, which contains many hard links made by `rsnapshot`. That means that the structure is more or less like:

```
./servers
./servers/daily.0
./servers/daily.0/file1
./servers/daily.0/file2
./servers/daily.0/file3
./servers/daily.1
./servers/daily.1/file1
./servers/daily.1/file2
./servers/daily.1/file3
...
```

The snapshots were created with `rsnapshot` in a space-saving way: if `/servers/daily.0/file1` is the same as `/servers/daily.1/file1`, they both point to the same inode via a hard link, instead of a complete snapshot being copied every cycle. I've tried to copy it with the hard-link structure intact, in order to save space on the destination drive, using:

```
nohup time rsync -avr --remove-source-files --hard-links servers /old_backups
```

After some time, the rsync freezes: no new lines are added to the `nohup.out` file, and no files seem to move from one drive to another. Removing the `nohup` didn't solve the problem. Any idea what's wrong?

Adam
-
Robbie almost 12 years

The `hardlink` program can search for identical files and hard-link them, but it requires them all to have exactly the same attributes (size, contents, permissions, owner, group, etc.) to work properly. I was able to use it to relink several tens of gigabytes of a music backup in about half an hour.
Steve Pitchers almost 10 years

What do you mean by "blow up"? Exponential slowness? Or does an error occur to let you know that it's not going to work?
-
user1133275 over 4 years

Even COW file systems (LVM/ZFS/Btrfs) get slow with many copies, but they are more robust than hard links.
-
Yaroslav Nikitenko over 3 years

Many people report that it's actually hung. It makes a lot of `stat()` system calls (bugzilla.samba.org/show_bug.cgi?id=10678#c1); for the end user, the difference between very slow and completely stopped is probably non-existent.
Axel over 3 years

@YaroslavNikitenko Many people incorrectly report that it is actually hung. It has to make all those `stat()` calls and keep track of the extra information (consuming memory and CPU) to do the job it is being asked to do when run over a large tree with those options enabled. Perhaps there could be more progress information while it churns through the required processing, though that wouldn't help in many scripted circumstances, where rsync is told to stay quiet unless there are actual errors to report.
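As a footnote to Robbie's comment above, here is a minimal sketch of what a `hardlink`-style tool does: group candidate files by checksum, then replace duplicates with hard links. The filenames are illustrative, and real tools also check size, ownership, mode, and mtime before linking (and cope with spaces in names), which this sketch skips:

```shell
# Two files with identical bytes (illustrative names).
mkdir -p music
echo same-bytes > music/a.flac
echo same-bytes > music/b.flac

# For each checksum, keep the first file seen and relink later duplicates.
md5sum music/* | sort |
    awk 'seen[$1]++  { print first[$1], $2 }
         !first[$1]  { first[$1] = $2 }' |
while read -r keep dup; do
    ln -f "$keep" "$dup"    # replace the duplicate with a hard link
done

stat -c %i music/a.flac music/b.flac   # now the same inode number twice
```

Unlike `rsync --hard-links`, this works after the fact on the destination, so the copy itself can run without `-H` and the deduplication can be done in a separate, restartable pass.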