How to create multiple tar archives for a huge folder
Solution 1
I wrote this bash script to do it.
It basically forms an array containing the names of the files to go into each tar, then starts tar in parallel on all of them.
It might not be the most efficient way, but it will get the job done as you want.
I expect it to consume large amounts of memory, though.
You will need to adjust the options at the start of the script.
You might also want to change the tar options cjvf in the last line (like removing the verbose output v for performance, or changing the compression j to z, etc.).
Script
#!/bin/bash
# User configuration
#===================
files=(*.log) # Set the file pattern to be used, e.g. (*.txt) or (*)
num_files_per_tar=5 # Number of files per tar
num_procs=4 # Number of tar processes to start
tar_file_dir='/tmp' # Tar files dir
tar_file_name_prefix='tar' # prefix for tar file names
tar_file_name="$tar_file_dir/$tar_file_name_prefix"
# Main algorithm
#===============
num_tars=$(( (${#files[@]} + num_files_per_tar - 1) / num_files_per_tar )) # the number of tar files to create, rounded up so leftover files are not dropped
tar_files=() # will hold the names of files for each tar
tar_start=0 # gets updated to where each tar's slice starts
# Loop over the files, adding their names to be tarred
for i in $(seq 0 $((num_tars-1)))
do
    tar_files[$i]="$tar_file_name$i.tar.bz2 ${files[@]:tar_start:num_files_per_tar}"
    tar_start=$((tar_start+num_files_per_tar))
done
# Start tar in parallel for each of the strings we just constructed
printf '%s\n' "${tar_files[@]}" | xargs -n$((num_files_per_tar+1)) -P$num_procs tar cjvf
Explanation
First, all the file names that match the selected pattern are stored in the array files. Next, the for loop slices this array and forms strings from the slices. The number of slices is equal to the number of desired tarballs. The resulting strings are stored in the array tar_files. The for loop also adds the name of the resulting tarball to the beginning of each string. The elements of tar_files take the following form (assuming 5 files/tarball):
tar_files[0]="tar0.tar.bz2 file1 file2 file3 file4 file5"
tar_files[1]="tar1.tar.bz2 file6 file7 file8 file9 file10"
...
In the last line of the script, xargs is used to start multiple tar processes (up to the maximum specified number), each of which processes one element of the tar_files array in parallel.
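To see how xargs carves those strings into per-process command lines, here is a tiny stand-in: echo replaces tar, the file names are made up, and -n3 mimics num_files_per_tar+1 for a 2-files-per-tar setup.

```shell
# Each line fed to xargs is split into words; -n3 groups every 3 words into
# one invocation, and -P would run several invocations concurrently.
# echo stands in for the real `tar cjvf` so the command lines become visible.
printf '%s\n' tar0.tar.bz2 file1 file2 tar1.tar.bz2 file3 file4 \
| xargs -n3 echo would-run: tar cjvf
```

This prints one "would-run: tar cjvf …" line per tarball, showing that the archive name from the front of each string becomes tar's output file and the rest become its inputs.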
Test
List of files:
$ ls
a c e g i k m n p r t
b d f h j l o q s
Generated tarballs:
$ ls /tmp/tar*
tar0.tar.bz2 tar1.tar.bz2 tar2.tar.bz2 tar3.tar.bz2
Solution 2
Here's another script. You can choose whether you want precisely one million files per segment, or precisely 30 segments. I've gone with the former in this script, but the split command allows either choice.
#!/bin/bash
#
DIR="$1" # The source of the millions of files
TARDEST="$2" # Where the tarballs should be placed
# Create the million-file segments
rm -f /tmp/chunk.*
find "$DIR" -type f | split -l 1000000 - /tmp/chunk.
# Create corresponding tarballs
for CHUNK in $(cd /tmp && echo chunk.*)
do
    test -f "/tmp/$CHUNK" || continue
    echo "Creating tarball for chunk '$CHUNK'" >&2
    tar cTf "/tmp/$CHUNK" "$TARDEST/$CHUNK.tar"
    rm -f "/tmp/$CHUNK"
done
There are a number of niceties that could be applied to this script. The use of /tmp/chunk.
as the file list prefix should probably be pushed out into a constant declaration, and the code shouldn't really assume it can delete anything matching /tmp/chunk.*
, but I've left it this way as a proof of concept rather than a polished utility. If I were using this I would use mktemp
to create a temporary directory for holding the file lists.
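As a sketch of that mktemp idea (the function name make_chunked_tars and its arguments are invented for the demo; the real thing would pass 1000000 as the files-per-tarball count):

```shell
# Same find | split | tar-with-file-list pipeline as above, but the chunk
# lists live in a private mktemp directory instead of a fixed /tmp/chunk.*
# pattern, and that directory is removed when the work is done.
make_chunked_tars() {
    local src="$1" dest="$2" per="$3" listdir list
    listdir=$(mktemp -d) || return 1
    find "$src" -type f | split -l "$per" - "$listdir/chunk."
    for list in "$listdir"/chunk.*; do
        [ -f "$list" ] || continue                       # no files found at all
        tar -c -T "$list" -f "$dest/$(basename "$list").tar"
    done
    rm -rf "$listdir"
}
```

Called as make_chunked_tars /millions/of/files /tarball/dir 1000000, it mirrors the script above without ever assuming it may delete things under /tmp.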
Solution 3
This one does precisely what was requested:
#!/bin/bash
ctr=0
# Read up to 1M lines at a time, strip newline chars, put the results into an
# array named "asdf"; readarray returns success even at end of input, so also
# check that something was actually read
while readarray -n 1000000 -t asdf && (( ${#asdf[@]} )); do
    ctr=$((ctr+1))
    # "${asdf[@]}" expands each entry in the array such that any special characters in
    # the filename won't cause problems
    tar czf /destination/path/asdf.${ctr}.tgz "${asdf[@]}"
    # If you don't want compression, use this instead:
    #tar cf /destination/path/asdf.${ctr}.tar "${asdf[@]}"
    # Process substitution (< <(...)) is the canonical way to generate output
    # for consumption by read/readarray in bash without a subshell
done < <(find /source/path -not -type d)
readarray (in bash) can also be used to execute a callback function, so that could potentially be re-written to resemble:
function something() {...}
find /source/path -not -type d \
| readarray -c 1000000 -C something -t asdf
(Note that -C invokes the callback every -c lines read, and that piping into readarray runs it in a subshell, so the array won't survive the pipeline.)
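As a minimal, runnable illustration of the callback mechanics (bash only; the progress function and the quantum of 2 are invented for the demo, and process substitution is used because a pipe would confine readarray to a subshell):

```shell
# -C names the callback, -c sets how many lines are read between calls.
# Bash supplies the callback with the index of the next array element to be
# assigned and the line just read; here it only announces progress.
progress() { echo "reached index $1"; }
readarray -t -c 2 -C progress lines < <(printf '%s\n' one two three four five)
echo "total lines read: ${#lines[@]}"
```

In the real 1M-file scenario the callback would be where each batch gets handed to tar.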
GNU parallel could be leveraged to do something similar (untested; I don't have parallel installed where I'm at so I'm winging it):
find /source/path -not -type d -print0 \
| parallel -j4 -d '\0' -N1000000 tar czf '/destination/path/thing_backup.{#}.tgz'
Since that's untested you could add the --dry-run arg to see what it'll actually do. I like this one the best, but not everyone has parallel installed. -j4 makes it use 4 jobs at a time, and -d '\0' combined with find's -print0 makes it handle special characters in the filenames (whitespace, etc.) safely. The rest should be self-explanatory.
Something similar could be done with parallel, but I don't like it because it generates random filenames:
find /source/path -not -type d -print0 \
| parallel -j4 -d '\0' -N1000000 --tmpdir /destination/path --files tar cz
I don't [yet?] know of a way to make it generate sequential filenames.
xargs could also be used, but unlike parallel there's no straightforward way to generate the output filename, so you'd end up doing something stupid/hacky like this:
find /source/path -not -type d -print0 \
| xargs -P 4 -0 -n 1000000 bash -euc 'tar czf "$(mktemp --suffix=".tgz" /destination/path/backup_XXX)" "$@"' --
The OP said they didn't want to use split ... I thought that seemed weird, as cat will re-join the pieces just fine; this produces a tar and splits it into 3GB chunks:
tar c /source/path | split -b $((3*1024*1024*1024)) - /destination/path/thing.tar.
... and this un-tars them into the current directory:
cat $(\ls -1 /destination/path/thing.tar.* | sort) | tar x
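A quick sanity check of that split/cat round trip on a throwaway tree (1 MiB chunks and mktemp paths stand in for the real 3GB chunks and /destination/path):

```shell
# Pack a small tree, split the tar stream, then cat the pieces back into tar.
src=$(mktemp -d); dest=$(mktemp -d); out=$(mktemp -d)
printf 'hello\n' > "$src/a"
printf 'world\n' > "$src/b"
tar -C "$src" -c . | split -b $((1*1024*1024)) - "$dest/thing.tar."
cat "$dest"/thing.tar.* | tar -C "$out" -x   # glob expansion is already sorted
diff -r "$src" "$out" && echo "round trip OK"
```

Since a plain glob expands in sorted order, the explicit `\ls | sort` above is belt-and-braces rather than strictly necessary.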
Yan Zhu
Updated on September 18, 2022
Comments
-
Yan Zhu almost 2 years
I have a large folder with 30M small files. I hope to back up the folder into 30 archives, where each tar.gz file will have 1M files. The reason to split it into multiple archives is that untarring one single large archive will take months... Piping tar to split also won't work, because when untarring I would have to cat all the archives back together first.
Also, I hope not to mv each file to a new dir, because even ls is very painful for this huge folder.
-
Ulrich Schwarz about 9 years: If your question is how to split your list of 30000 files into 30 lists of 1000 files each, xargs -L may be helpful.
-
roaima about 9 years: tar isn't zip. Please don't confuse them.
-
Bichoy about 9 years: I like the split idea: a neat solution to the large memory consumption I expect for my script ...
-
roaima about 9 years: @Bichoy. Thank you. But you could reduce your memory consumption by serialising your tar commands :-)
-
Bichoy about 9 years: I am assuming the main flaw in my script is that I have to keep ALL the filenames in memory and create individual arrays out of them. Parallel execution of tar will be very beneficial here: since the files are small, the work will probably be I/O-limited, and it is good to have multiple tar processes hanging around waiting for I/O operations to finish ... Just my opinion, what do you think?