compressing dd backup on the fly


Solution 1

Do you have access to the sda2-backup...gz file? sudo only applies to the command immediately after it, not to the redirection. If you want it to apply to the redirection too, run the shell as root so that all its child processes are root as well:

sudo bash -c "dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz"

Alternatively, you could mount the disk with the uid / gid mount options (assuming ext3) so you have write permissions as whatever user you are. Or, use root to create a folder in /media/disk which you have permissions for.
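A further variant (also mentioned in the comments) is to keep only the privileged steps under sudo and let `tee` perform the root write. A sketch using the question's paths:

```shell
# Sketch using the question's paths: sudo covers the privileged read (dd)
# and the privileged write (tee); gzip runs as the ordinary user in between.
sudo dd if=/dev/sda2 | gzip | sudo tee /media/disk/sda2-backup-10august09.gz > /dev/null
```

The `> /dev/null` discards tee's copy to stdout, which would otherwise spray compressed data onto your terminal.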

Other Information that might help you:

  • The block size mostly matters for speed. The default is 512 bytes, which you should keep for the MBR and for floppy disks. Larger sizes, up to a point, speed up the operation; think of it as analogous to a buffer. Here is a link to someone who ran speed benchmarks with different block sizes, but do your own testing, since performance depends on many factors. Also take a look at the other answer by andreas.
  • If you want to accomplish this over the network with ssh and netcat, so that space is less of an issue, see this serverfault question.
  • Do you really need an image of the partition? There might be better backup strategies.
  • dd is a very dangerous command: write of instead of if and you end up overwriting what you are trying to back up! Notice how the o and i keys are next to each other? So be very, very careful.
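To see the block-size effect for yourself, here is a rough benchmark sketch. A scratch file stands in for a real device, and the sizes and paths are arbitrary (note that the page cache will flatter repeated reads, so treat the numbers as relative only):

```shell
# Rough throughput comparison across block sizes, using a scratch file
# as a stand-in for a real device (sizes and paths are arbitrary).
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 2>/dev/null
for bs in 512 4k 64k 1M; do
    printf 'bs=%s: ' "$bs"
    # dd reports its statistics on stderr; the last line is the throughput summary
    dd if="$testfile" of=/dev/null bs="$bs" 2>&1 | tail -n 1
done
rm -f "$testfile"
```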

Solution 2

In the first case, dd is running as root. In the second case, dd is running as root but gzip is running as you.

Change the permissions on /media/disk, give yourself a root shell, or run the gzip as root too.

Solution 3

In addition, you can replace gzip with bzip2 --best for much better compression:

sudo dd if=/dev/sda2 | bzip2 --best > /media/disk/$(date +%Y%m%d_%H%M%S)_sda2-backup.bz2
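For completeness, restoring such an image is the same pipeline in reverse. A sketch, where the filename is illustrative and the target partition must not be mounted:

```shell
# Restore sketch: decompress and stream the image back to the partition.
# The filename is illustrative; double-check the of= target before running!
bzip2 -dc /media/disk/20090810_120000_sda2-backup.bz2 | sudo dd of=/dev/sda2
```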

Solution 4

sudo dd if=/dev/sda1 bs=32M | 7z a -si  /data/$(date +%Y%m%d_%H%M%S)_sda1-backup.tar.7z

7z utilizes all CPU cores. Also, adding bs=32M (or some other non-default value) may significantly speed up the process.

Test results:

root@pentagon:~# dd if=/dev/sda1 | bzip2 -c > /data/$(date +%Y%m%d_%H%M%S)_pentagon-backup-sda1.bz2
12288000+0 records in
12288000+0 records out
6291456000 bytes (6.3 GB) copied, 2033.77 s, 3.1 MB/s
root@pentagon:~# dd if=/dev/sda1 bs=32M | 7z a -si  /data/$(date +%Y%m%d_%H%M%S)_pentagon-backup-sda1.tar.7z

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=C,Utf16=off,HugeFiles=on,64 bits,4 CPUs x64)

Creating archive: /data/20210818_104748_pentagon-backup-sda1.tar.7z

Items to compress: 1

5917M + [Content]
187+1 records in
187+1 records out
6291456000 bytes (6.3 GB) copied, 1393.34 s, 4.5 MB/s
Files read from disk: 1
Archive size: 818956969 bytes (782 MiB)
Everything is Ok

Almost 2 times faster.

root@pentagon:~# ls -Alh /data
....
-rw-r--r-- 1 root root            1.2G Aug 18 10:40 20210818_100651_pentagon-backup-sda1.bz2
-rw-r--r-- 1 root root            782M Aug 18 11:11 20210818_104748_pentagon-backup-sda1.tar.7z
....

And, almost 2 times smaller.

Credits to Igor Pavlov for that.



Author: Phil, Linux user, coding a little bit ;)

Updated on September 17, 2022

Comments

  • Phil
    Phil almost 2 years

    Maybe this will sound like a dumb question, but the way I'm trying to do it doesn't work.

    I'm on a live CD, the drive is unmounted, etc.

    When I do the backup this way

    sudo dd if=/dev/sda2 of=/media/disk/sda2-backup-10august09.ext3 bs=64k
    

    ...normally it would work, but I don't have enough space on the external HD I'm copying to (it ALMOST fits). So I wanted to compress it this way

     sudo dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz
    

    ...but I got permission denied. I don't understand.

  • Phil
    Phil almost 15 years
    I'll try this. How do I also make it bs=64k? (And do I have to?)
  • chris
    chris almost 15 years
    The bs=64k only makes the transfer go faster, because dd will read blocks of 64k each instead of the default block size (I don't remember what it is).
  • Kyle Brandt
    Kyle Brandt almost 15 years
    What chris said; and if you want to include it, put it after dd and before the pipe symbol (|), as it is an argument to dd.
  • Bill Weiss
    Bill Weiss almost 15 years
    At a cost of lots of time. See changelog.complete.org/archives/… "How to think about compression" for more details.
  • andreas
    andreas over 10 years
    @BillWeiss: Thanks for your comment, very interesting read!
  • Rik Schneider
    Rik Schneider over 8 years
    I also occasionally use "sudo tee $file > /dev/null" in a pipeline to allow writing to a file that my user account doesn't have access to.
  • Admin
    Admin over 8 years
    Compression: lzma > bzip2 > gzip. Speed: gzip > bzip2 > lzma. Unless you are publishing the disk image on the internet, there is not much benefit for the time, CPU power, and memory you spend on better compression.
  • Smar
    Smar over 3 years
    And nowadays zstd is pretty good option.
  • William Desportes
    William Desportes almost 3 years
    You can use status=progress after "if=" to add some progress tracking