Insert a header into a file

Solution 1

header="/name/of/file/containing/header"
for file in "$@"
do
    cat "$header" "$file" > /tmp/xx.$$
    mv /tmp/xx.$$ "$file"
done

You might prefer to locate the temporary file on the same file system as the file you are editing, but anything that requires inserting data at the front of the file is going to end up working very close to this. If you are going to be doing this all day, every day, you might assemble something a little slicker, but the chances are the savings will be minuscule (fractions of a second per file).
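
For example, here is a minimal sketch of the same loop with the temporary file created next to the target; the mktemp template and the error handling are my own additions, not part of the loop above:

header="/name/of/file/containing/header"
for file in "$@"
do
    # Create the temporary file in the same directory as the target, so the
    # final mv is a rename on the same file system rather than a full copy.
    tmp=$(mktemp "$(dirname "$file")/hdr.XXXXXX") || exit 1
    cat "$header" "$file" > "$tmp" &&
    mv "$tmp" "$file" ||
    rm -f "$tmp"
done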

If you really, really must use sed, then I suppose you could use:

header="/name/of/file/containing/header"
for file in "$@"
do
    sed -e "0r $header" "$file" > /tmp/xx.$$
    mv /tmp/xx.$$ "$file"
done

The command reads the content of header 'after' line 0 (before line 1), and then everything else is passed through unchanged. This isn't as swift as cat though.
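
For what it's worth, if your sed supports in-place editing (the -i option in GNU sed, for instance), you can let sed manage the temporary file itself; this is a sketch under that assumption, reusing the same 0r trick:

header="/name/of/file/containing/header"
for file in "$@"
do
    # GNU sed; BSD/macOS sed would want -i '' instead of plain -i
    sed -i -e "0r $header" "$file"
done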

An analogous construct using awk is:

header="/name/of/file/containing/header"
for file in "$@"
do
    awk '{print}' "$header" "$file" > /tmp/xx.$$
    mv /tmp/xx.$$ "$file"
done

This simply prints each input line on the output; again, not as swift as cat.
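
As an aside, the usual shorthand for that pass-through program is the bare pattern 1 (any true pattern fires the default action, print), so the awk line above could equally be written as:

awk '1' "$header" "$file" > /tmp/xx.$$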

One more advantage of cat over sed or awk: cat will work even if the big files are mainly binary data (it is oblivious to the content of the files). Both sed and awk are designed to handle data split into lines; while modern versions will probably handle even binary data fairly well, it is not what they are designed for.

Solution 2

I did it all with a Perl script, because I had to traverse a directory tree and handle various file types differently. The basic script was

#!perl -w
process_directory(".");

sub process_directory {
    my $dir = shift;
    opendir DIR, $dir or die "$dir: not a directory\n";
    my @files = readdir DIR;
    closedir DIR;
    foreach(@files) {
        next if(/^\./ or /bin/ or /obj/);  # skip dot entries and names containing bin or obj
        if(-d "$dir/$_") {
            process_directory("$dir/$_");
        } else {
            fix_file("$dir/$_");
        }
    }
}

sub fix_file {
    my $file = shift;
    open SRC, $file or die "Can't open $file\n";
    my $fix = "$file-f";
    open FIX, ">$fix" or die "Can't open $fix\n";
    print FIX <<EOT;
        -- Text to insert
EOT
    while(<SRC>) {
        print FIX;
    }
    close SRC;
    close FIX;
    my $oldFile = $file;
    $oldFile =~ s/(.*)\.(\w+)$/$1-old.$2/;
    if(rename $file, $oldFile) {
        rename $fix, $file;
    }
}

Share and enjoy! Or not -- I'm no Perl hacker, so this is probably double-plus-unoptimal Perl code. Still, it worked for me!
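
For comparison, a rough shell equivalent of that traversal, built on the cat approach from Solution 1; the header path, the pruned names and the temporary file handling here are illustrative guesses rather than a faithful transcription of the Perl:

header="/name/of/file/containing/header"
# Prune dot entries and anything with bin or obj in the name, as the Perl does.
# File names containing newlines would need find -exec or -print0 instead.
find . \( -name '.?*' -o -name '*bin*' -o -name '*obj*' \) -prune -o -type f -print |
while IFS= read -r file
do
    cat "$header" "$file" > /tmp/xx.$$ &&
    mv /tmp/xx.$$ "$file"
done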

Comments

  • jianfeng.mao almost 2 years

    I would like to hear your advice on how to insert the lines of a header (all the lines in a file) into another, much bigger file (several GB). I prefer the Unix/awk/sed way of doing that job.

    # header I need to insert into another file; the lines are stored in a file named "header".
    
    
    ##fileformat=VCFv4.0
    ##fileDate=20090805
    ##source=myImputationProgramV3.1
    ##reference=1000GenomesPilot-NCBI36
    ##phasing=partial
    ##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data">
    ##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth">
    ##INFO=<ID=AF,Number=.,Type=Float,Description="Allele Frequency">
    ##INFO=<ID=AA,Number=1,Type=String,Description="Ancestral Allele">
    ##INFO=<ID=DB,Number=0,Type=Flag,Description="dbSNP membership, build 129">
    ##INFO=<ID=H2,Number=0,Type=Flag,Description="HapMap2 membership">
    ##FILTER=<ID=q10,Description="Quality below 10">
    ##FILTER=<ID=s50,Description="Less than 50% of samples have data">
    ##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
    ##FORMAT=<ID=GQ,Number=1,Type=Integer,Description="Genotype Quality">
    ##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Read Depth">
    ##FORMAT=<ID=HQ,Number=2,Type=Integer,Description="Haplotype Quality">
    #CHROM POS     ID        REF ALT    QUAL FILTER INFO 
    
  • jianfeng.mao about 13 years
    That is such a cool answer. I have learned a lot from your directions. Thanks a lot.
  • jianfeng.mao about 13 years
    I have just begun to learn Unix shell/sed/awk/Perl to do bioinformatics. I have not tested your scripts yet, but many thanks for your kindness.
  • Javier about 13 years
    @Jonathan, I also need to add header lines to an existing large file. In my case, the header lines correspond to numbers that are stored in bash variables. How could I use awk/sed to do it? I did something like: awk -v param1="$param1" -v param2="$param2" 'BEGIN{print "description1"; print param1 " " param2; print "description2"}{print}' test.data > ${tmpFile}. On files larger than 2GB, I get the following error: awk: cannot open test.data (Value too large for defined data type). Do you have any suggestions on how to deal with large files?
  • Jonathan Leffler about 13 years
    @Javier: if your awk has problems with big files (>2GB), then it is probably time to upgrade something - probably awk. To substitute parameters as described, you would do a little more work. See the answer to SO 6025342 for one way of handling variables in files. (A formatted sketch of this -v approach is shown after these comments.)
  • Javier about 13 years
    @Jonathan, thanks! I recently found a website concerning Large File Support in Linux (suse.de/~aj/linux_lfs.html). I was using Ubuntu 10.04 64-bit and recently installed 11.04 32-bit. While using awk with large files (>2GB) on the 10.04 version, I didn't have any problems, so this seems to be the reason why I'm having such issues now. I'm also trying to find out how stable the 64-bit 11.04 version is.
  • Jonathan Leffler about 13 years
    @Javier: The fact that you're using a 32-bit system might account for it, though often 32-bit software is compiled to work with big files these days. When the software is configured, the configuration usually looks for large file support and ensures it is used. 64-bit Linux is as rock solid as 32-bit Linux.
  • Javier about 13 years
    @Jonathan, thanks for the clarifications. I installed the 64-bit Ubuntu version and now everything works fine with files larger than 2GB. It also seems to be about as stable as the 32-bit version.
  • johnsyweb almost 13 years
    @jianfeng.mao: "Thanks a lot" is best expressed by clicking on the check box outline to the left of the answer.
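
As a footnote to the exchange above, here is a formatted sketch of the awk -v approach from Javier's comment; the parameter values and the file names (param1, param2, test.data) come straight from that comment and are purely illustrative:

param1="first header value"     # in the comment these are existing bash variables
param2="second header value"
tmpFile=$(mktemp) || exit 1
awk -v param1="$param1" -v param2="$param2" '
    BEGIN {
        print "description1"    # header lines built from the shell variables
        print param1 " " param2
        print "description2"
    }
    { print }                   # then pass the original data through unchanged
' test.data > "$tmpFile" &&
mv "$tmpFile" test.data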