Transfer 15TB of tiny files
Solution 1
I have had very good results using tar, pigz (parallel gzip) and nc.
Source machine:
tar -cf - -C /path/of/small/files . | pigz | nc -l 9876
Destination machine:
To extract:
nc source_machine_ip 9876 | pigz -d | tar -xf - -C /put/stuff/here
To keep archive:
nc source_machine_ip 9876 > smallstuff.tar.gz
If you want to see the transfer rate, just pipe through pv after pigz -d!
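Assembled end to end, with pv added on the receiving side as suggested (using the same port number 9876 as the example above), the destination pipeline might look like:

```shell
# Receive, decompress, meter throughput, then extract.
# pv sits after pigz -d, so it reports the uncompressed data rate.
nc source_machine_ip 9876 | pigz -d | pv | tar -xf - -C /put/stuff/here
```

Placing pv before pigz -d instead would show the on-the-wire (compressed) rate.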
Solution 2
I'd stick with the rsync solution. Modern (3.0.0+) rsync uses an incremental file list, so it does not have to build the full list before the transfer starts, and restarting it after trouble won't force you to redo the whole transfer. Splitting the transfer per top- or second-level directory will optimize this even further. (I'd use rsync -a -P and add --compress if your network is slower than your drives.)
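A rough sketch of that per-directory split, with a hypothetical destination host name (newserver) and the paths from Solution 1; each top-level directory gets its own independently restartable rsync job:

```shell
# One rsync per top-level directory: each job keeps its own incremental
# file list, and a failed job can be rerun without redoing the rest.
for d in /path/of/small/files/*/; do
    rsync -a -P --compress "$d" "newserver:/put/stuff/here/$(basename "$d")/"
done
```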
Solution 3
Set up a VPN (if it's over the internet), create a virtual drive of some format on the remote server (make it ext4), mount it on the remote server, then mount that on the local server (using a block-level protocol like iSCSI), and use dd or another block-level tool to do the transfer. You can then copy the files off the virtual drive to the real (XFS) drive at your own convenience.
Two reasons:
- No filesystem overhead, which is the main performance culprit
- No seeking, you're looking at sequential read/write on both sides
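A rough sketch with hypothetical device names (the iSCSI LUN is assumed to appear locally as /dev/sdx); note that the source filesystem must be unmounted, or snapshotted, before a block-level copy, or the resulting image will be inconsistent:

```shell
# Stream the whole source block device into the remote LUN sequentially.
umount /path/of/small/files
dd if=/dev/mapper/source_vol of=/dev/sdx bs=64M status=progress
```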
Solution 4
If the old server is being decommissioned and the files can be offline for a few minutes, it is often fastest to just pull the drives out of the old box, cable them into the new server, mount them (back online now) and copy the files to the new server's native disks.
Solution 5
Use mbuffer and, if it is on a secure network, you can avoid the encryption step.
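A minimal sketch, with an arbitrary port and buffer size; mbuffer's -I and -O options handle the TCP transport themselves, so no nc (and no ssh encryption) is needed:

```shell
# Source: stream the archive through a 1 GB RAM buffer, then over TCP
tar -cf - -C /path/of/small/files . | mbuffer -m 1G -O destserver:9876
# Destination: the buffer smooths out bursty reads and writes
mbuffer -m 1G -I 9876 | tar -xf - -C /put/stuff/here
```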
lbanz
Updated on September 18, 2022
Comments
-
lbanz almost 2 years
I'm archiving data from one server to another. Initially I started an rsync job. It took 2 weeks to build the file list for just 5 TB of data and another week to transfer 1 TB of data. Then I had to kill the job as we needed some downtime on the new server.
It's been agreed that we will tar it up, since we probably won't need to access it again. I was thinking of breaking it into 500 GB chunks. After I tar it, I was going to copy it across through ssh. I was using tar and pigz but it is still too slow. Is there a better way to do it? I think both servers are on Redhat. The old server is ext4 and the new one is XFS.
File sizes range from a few KB to a few MB, and there are 24 million jpegs in 5 TB. So I'm guessing around 60-80 million files for 15 TB.
edit: After playing with rsync, nc, tar, mbuffer and pigz for a couple of days, it's clear the bottleneck is going to be disk IO, as the data is striped across 500 SAS disks holding around 250 million jpegs. However, I have now learnt about all these nice tools that I can use in future.
-
D34DM347 almost 9 years
possible duplicate of linux to linux, 10TB transfer?
-
TessellatingHeckler almost 9 years
Is there a better way to do it? - Yeah, Windows Server 2012 R2 DFS replication would prepare that in about 10 hours. And it would sync changes, and pick up where it left off after reboots.
-
Thomas Weller almost 9 years
@TessellatingHeckler: so you suggest OP migrates from Redhat to Windows before archiving?
-
TessellatingHeckler almost 9 years
@ThomasWeller They asked "is there a better way?", and there is. I make no recommendation that they use the better way. They're free to use commands in a pipe which can't recover from interruption, won't verify the file content, can't report copy status, can't use previously copied blocks to avoid copying parts of files, has no implicit support for low-priority copying, can't be paused, makes no mention of copying ACLs, and needs someone to stay logged in to run it. Anyone else following along, however, might be interested - or prompted to say "x does that on Linux".
-
Shiva Saurabh almost 9 years
@TessellatingHeckler: That sounds a bit like BTRFS send/receive. en.wikipedia.org/wiki/Btrfs#Send.2Freceive. I think that can work as a dump/restore but with incremental capability. Some other Linux filesystems also have dump/restore tools that read the data in disk order, not logical directory order (e.g. xfsdump). The problem here is that the OP is going from ext4 to XFS, so this isn't an option. (BTW, OP, I'd suggest evaluating BTRFS for use on your server. XFS can handle being used as an object store for zillions of small files, but BTRFS may be better at it.)
-
Fox almost 9 years
It's a little offtopic, but: @PeterCordes I'd be very careful recommending btrfs for production use just yet. Lately I've had some data corruption issues related to btrfs and bcache on Ubuntu 14.04.
-
lbanz almost 9 years
@TessellatingHeckler It is true that these commands are free and don't report anything about corruption. Now that you mention it, I think I'm going back to rsync, because I know there might be corruption in our old system from when the temperature threshold was breached.
-
Shiva Saurabh almost 9 years
@lbanz: ssh encryption, or rsync's gzip compression, might be bottlenecking you. Discussion in comments on unix.stackexchange.com/a/228048/79808 has some numbers for compression.
-
Shiva Saurabh almost 9 years
@Fox: From what I've read, if you use BTRFS, it's a good idea to use the latest kernel. They usually fix more bugs than they introduce, and it's still new and improving, so a years-old stable-distro kernel version of BTRFS is not ideal.
-
Fox almost 9 years
@PeterCordes that is why I recommend being careful. Myself being rather a fan of the bleeding edge, I quite understand why some people like long-term-support distros, which tend to stick to an older kernel. So sure, btrfs is maturing at a pretty good pace, but it's not a universal answer, and certainly not without caveats.
-
Shiva Saurabh almost 9 years
Speaking of FS-as-object-store, I did some digging when this came up recently, since I was curious. unix.stackexchange.com/a/222640/79808 has most of what I found. Traditional-filesystem on RAID5 is a bad choice. One object-store system I looked at did redundancy at an object level, and wanted a separate XFS filesystem on each disk. The difference is subtle but huge. Metadata ops improve, because each CPU can be searching a separate small free-inode map, instead of one giant one, for example. Taking RAID5 out of the picture for small object writes is also huge.
-
Admin almost 9 years
Sounds like a great little use case for BitTorrent Sync to me. getsync.com
-
Aloha almost 9 years
If you're not going to access it again, what if you simply removed the drive itself and stored it in an airtight container (Lock&Lock) together with a packet of desiccant and maybe a bit of bubble wrap or padding? If you needed to transfer it, use snail mail or another physical method. It's usually faster than 17 weeks. I am assuming that the files are on a different drive than the OS.
-
SnakeDoc almost 9 years
@TessellatingHeckler LOL, the OP asked for a better way, not Windows... nobody genuinely wants Windows.
-
corsiKa almost 9 years
-
alexw almost 9 years
You might want to stick ice packs on the drives during the transfer as well, to help prevent heat degradation.
-
shodanshok almost 9 years
@TessellatingHeckler The example (and the link) you reported clearly state that the preseed phase (read: file upload to the new server) is done by DFSR via robocopy. While robocopy is very useful, rsync is a better alternative from almost any point of view.
-
Rahul Patil almost 9 years
@lbanz so how much time did it take?
-
lbanz almost 9 years
@RahulPatil small files transfer at around 6 MB/s and large files at 150 MB/s. I'm expecting 1-2 months to transfer 15TB of small files.
-
h0tw1r3 almost 9 years
FYI, you can replace pigz with gzip or remove it altogether, but the speed will be significantly slower.
-
Thomas Weller almost 9 years
How can this be accepted if OP has already tried tar and pigz? I don't understand...
-
Doktor J almost 9 years
@ThomasWeller where did you get that he's tried pigz? From the question it looks like he's only tried rsync so far, and was considering using tar to split and bundle the data. Especially if he hasn't used the -z/--compress option on rsync, pigz could theoretically help significantly.
-
Shiva Saurabh almost 9 years
lzma (xz) decompresses faster than bzip2, and does well on most input. Unfortunately, xz's multithread option isn't implemented yet.
neutrinus almost 9 years
Usually the compression stage needs more horsepower than decompression, so if the CPU is the limiting factor, pbzip2 would result in better overall performance. Decompression shouldn't affect the process, if both machines are similar.
-
lbanz almost 9 years
It's about 1PB of 2TB drives so it is way too much.
-
lbanz almost 9 years
I'm using rsync 2.6.8 on the old server. It's one of those boxes where, per the vendor, we're not allowed to install or update anything or it voids the warranty. I might update it and see if it is any quicker.
-
lbanz almost 9 years
@ThomasWeller yes indeed, I already tried tar and pigz, but not nc. I was using ssh, so it added a lot more overhead.
-
lbanz almost 9 years
intermediatesql.com/linux/… Using nc/pigz seems to score the highest on benchmarks too. I was piping it through ssh so it was incredibly slow.
-
Fox almost 9 years
Find (or build) a statically-linked rsync binary and just run it from your home directory. Hopefully that won't void the warranty.
-
Shiva Saurabh almost 9 years
Yes, my point was it's a shame that there isn't a single-stream multi-threaded lzma. Although for this use-case of transferring whole filesystems of data, pigz would probably be the slowest compressor you'd want to use. Or even lz4. (There's a lz4mt multi-threaded-for-a-single-stream available. It doesn't thread very efficiently (it spawns new threads extremely often), but it does get a solid speedup.)
-
lbanz almost 9 years
@h0tw1r3 Just to let you know that it is insanely fast. After pressing enter and doing ls, it has already done 1GB. With rsync or piping it over ssh it usually takes 20-30 mins just for 1GB. The bit I'm worried about is how to verify the data once the transfer has completed.
-
Axel almost 9 years
To verify the data, run the compression step on both sides and compare the result on one side or the other. You'd need to make sure that the files are all in the archive in the same order, which might not be possible. In that case you could (assuming enough space is available) repeat the transfer in reverse to a different location and compare the result using a standard file compare utility. Or, if space is short, transfer to a third (spacious) location from both source and target servers, and do the compare there.
-
the-wabbit almost 9 years
I would propose using mbuffer instead of nc. The advantage is the ability to define a local buffer for the transfer. Plus, you get some additional stats. It has been widely used in zfs dataset transfers for years.
-
lbanz almost 9 years
@h0tw1r3 looks like this doesn't help either when there are so many tiny jpegs. There are around 24 million jpegs in the folder and pigz is just using 1 core, whereas if the files are larger, it uses the default 8 cores and is insanely fast.
-
JB. almost 9 years
Bypassing the filesystem is good. Copying a read-write mounted filesystem at block level is a really bad idea. Unmount or mount read-only first.
-
Arthur Kay almost 9 years
Having a 15TB copy sucks, too. It means the new server needs a minimum of 30.
-
David Balažic almost 9 years
For checking, you could use tee to divert a copy of the tar stream to sha256sum (or another checksum/CRC tool) on both source and destination, and then compare the resulting checksum values.
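A sketch of that idea against the pipeline from Solution 1 (this relies on bash process substitution; the checksum file paths are arbitrary):

```shell
# Source: hash the tar stream while sending it
tar -cf - -C /path/of/small/files . | tee >(sha256sum > /tmp/src.sha256) | pigz | nc -l 9876
# Destination: hash the decompressed stream before extracting it
nc source_machine_ip 9876 | pigz -d | tee >(sha256sum > /tmp/dst.sha256) | tar -xf - -C /put/stuff/here
# Afterwards, copy one checksum file over and compare:
#   diff /tmp/src.sha256 /tmp/dst.sha256
```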
-
h0tw1r3 almost 9 years
@lbanz the speed at which tar is able to collect small files is likely a disk or filesystem IO problem. The size of the files should not make a difference to pigz, because the data it receives is a tar stream, not the individual files.
-
liori almost 9 years
If the server is using LVM, one could take a read-only snapshot of the filesystem and copy that instead. Space overhead only for the changes in the filesystem that happen while the snapshot exists.
-
EEAA almost 9 years
There has been a non-FUSE native Linux port of ZFS for quite a while now: zfsonlinux.org
-
answer42 almost 9 years
@lbanz that simply means that tar isn't producing data fast enough for pigz to use much CPU for compression. Reading lots of small files involves many more syscalls, many more disk seeks, and a lot more kernel overhead than reading the same number of bytes of larger files, and it looks like you're simply bottlenecked at a fundamental level.
-
neutrinus almost 9 years
It will not deduplicate, there is no way to resume, and it compresses using only one CPU.
-
Gwyneth Llewelyn over 5 years
How about unison? How does it compare to rsync?