Best way to skip a header when reading in from a text file in Perl?

Solution 1

Let's get some data on this. I benchmarked everybody's techniques...

#!/usr/bin/env perl

use strict;
use warnings;

sub flag_in_loop {
    my $file = shift;

    open my $fh, '<', $file or die "Can't open $file: $!";

    my $first = 1;
    while(<$fh>) {
        if( $first ) {
            $first = 0;
        }
        else {
            my $line = $_;
        }
    }

    return;
}

sub strip_before_loop {
    my $file = shift;

    open my $fh, '<', $file or die "Can't open $file: $!";

    my $header = <$fh>;
    while(<$fh>) {
        my $line = $_;
    }

    return;
}

sub line_number_in_loop {
    my $file = shift;

    open my $fh, '<', $file or die "Can't open $file: $!";

    while(<$fh>) {
        next if $. < 2;

        my $line = $_;
    }

    return;
}

sub inc_in_loop {
    my $file = shift;

    open my $fh, '<', $file or die "Can't open $file: $!";

    my $first;
    while(<$fh>) {
        $first++ or next;

        my $line = $_;
    }

    return;
}

sub slurp_to_array {
    my $file = shift;

    open my $fh, '<', $file or die "Can't open $file: $!";

    my @array = <$fh>;
    shift @array;

    return;
}


my $Test_File = "/usr/share/dict/words";
print `wc $Test_File`;

use Benchmark;

timethese shift || -10, {
    flag_in_loop        => sub { flag_in_loop($Test_File); },
    strip_before_loop   => sub { strip_before_loop($Test_File); },
    line_number_in_loop => sub { line_number_in_loop($Test_File); },
    inc_in_loop         => sub { inc_in_loop($Test_File); },
    slurp_to_array      => sub { slurp_to_array($Test_File); },
};

Since this is I/O, which can be affected by forces beyond Benchmark.pm's ability to adjust for, I ran the benchmarks several times and checked that I got the same results.

/usr/share/dict/words is a 2.4 meg file with about 240k very short lines. Since we're not processing the lines, line length shouldn't matter.

I did only a tiny amount of work in each routine, to emphasize the difference between the techniques, but enough work to give a realistic upper limit on how much performance you're going to gain or lose by changing how you read files.

I did this on a laptop with an SSD, but it's still a laptop. As I/O speed increases, CPU time becomes more significant. Technique is even more important on a machine with fast I/O.

Here's how many times each routine read the file per second.

slurp_to_array:       4.5/s
line_number_in_loop: 13.0/s
inc_in_loop:         15.5/s
flag_in_loop:        15.8/s
strip_before_loop:   19.9/s

I'm shocked to find that my @array = <$fh> is slowest by a huge margin. I would have thought it would be the fastest, given that all the work happens inside the perl interpreter. However, it's the only one which allocates memory to hold all the lines, and that probably accounts for the performance lag.

Using $. is another surprise. Perhaps that's the cost of accessing a magic global, or perhaps it's doing a numeric comparison.

And, as predicted by algorithmic analysis, putting the header check code outside the loop is the fastest. But not by much. Probably not enough to worry about if you're using the next two fastest.
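
For completeness, here is a minimal sketch of the winning pattern applied to the task from the question: read and discard the header before the loop, then split each remaining line on tabs. The file names input.tsv and output.tsv are placeholders, and the column indices are the ones from the question (joined here with single tabs).

#!/usr/bin/perl
use strict;
use warnings;

# Sketch only: input.tsv and output.tsv are placeholder names.
my $in_file  = 'input.tsv';
my $out_file = 'output.tsv';

open my $in,  '<', $in_file  or die "Can't read $in_file: $!";
open my $out, '>', $out_file or die "Can't write $out_file: $!";

my $header = <$in>;    # read and discard the header before the loop

while (my $line = <$in>) {
    chomp $line;
    my @columns = split /\t+/, $line;

    # Columns taken from the question, joined with single tabs.
    print {$out} join("\t", @columns[0, 1, 2, 3, 11, 12, 15, 20, 21]), "\n";
}

close $out;
close $in;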

Solution 2

You can just read the first line into a dummy variable before the loop:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'a.txt' or die $!;

my $dummy = <$fh>;   # First line is read here
while (<$fh>) {
    print;
}
close($fh);
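
If the first line is actually useful (for example, to recover column names), the same pattern lets you keep it instead of throwing it away. A minimal sketch, assuming a tab-separated header and the same placeholder file name:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'a.txt' or die $!;

my $header = <$fh>;                      # first line read here
chomp $header;
my @column_names = split /\t/, $header;  # keep the names if you need them

while (<$fh>) {
    print;                               # process the data lines as usual
}
close($fh);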

Solution 3

I always use $. (current line number) to achieve this:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'myfile.txt' or die "$!\n";

while (<$fh>) {
    next if $. < 2; # Skip first line

    # Do stuff with subsequent lines
}
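
One caveat worth adding to this approach: when several files are read through the diamond operator, $. keeps counting across files unless ARGV is closed at each end-of-file. A minimal sketch of the standard idiom, assuming the file names arrive in @ARGV:

#!/usr/bin/perl
use strict;
use warnings;

# $. keeps counting across files read with <>, so close ARGV at each
# end-of-file to reset it when every input file has its own header.
while (<>) {
    next if $. == 1;    # skip each file's header line

    print;              # process data lines here
}
continue {
    close ARGV if eof;  # not eof() -- this resets $. for the next file
}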

Solution 4

You can open the file on a file handle and then iterate over its lines with either a while loop or an array. For the while loop, @Guru has the solution for you; for an array, it would be as below:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'a.txt' or die "Can't open the file: $!\n";
my @array = <$fh>;

my $dummy = shift @array;   # the header line ends up here

foreach (@array)
{
   print $_;                # lines already end with "\n"
}
close ($fh);
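
A small variation, if the header is never needed at all: discard it during the list assignment itself, so there is no dummy variable. A minimal sketch with the same placeholder file name; as the comments below point out, this still reads the whole file into memory.

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'a.txt' or die "Can't open the file: $!\n";

# undef in the first slot throws the header line away during the assignment.
my (undef, @lines) = <$fh>;
close($fh);

print for @lines;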

Comments

  • New2Perl
    New2Perl almost 2 years

    I'm grabbing a few columns from a tab-delimited file in Perl. The first line of the file is completely different from the other lines, so I'd like to skip that line as fast and efficiently as possible.

    This is what I have so far.

    my $firstLine = 1;
    
    while (<INFILE>){
        if($firstLine){
            $firstLine = 0;
        }
        else{
            my @columns = split (/\t+/);
            print OUTFILE "$columns[0]\t\t$columns[1]\t$columns[2]\t$columns[3]\t$columns[11]\t$columns[12]\t$columns[15]\t$columns[20]\t$columns[21]\n";
        }
    }
    

    Is there a better way to do this, perhaps without $firstLine? Or is there a way to start reading INFILE from line 2 directly?

    Thanks in advance!

  • New2Perl
    New2Perl over 11 years
    fh shouldn't have a $, as it's a file handle. But this looks like the most efficient solution. Thanks!
  • Jim Davis
    Jim Davis over 11 years
    It's a lexical file handle; it's actually preferred these days.
  • Schwern
    Schwern over 11 years
    As a general technique this is less performant because your loop must now, on every iteration, make an extra check. It also clutters up the loop.
  • Schwern
    Schwern over 11 years
    By storing the entire file in an array, this potentially consumes a lot of memory.
  • flesk
    flesk over 11 years
    The performance loss is a given, but it's negligible enough to be worth it since it looks tidier. If you feel it clutters up the loop, it must be a matter of taste.
  • Schwern
    Schwern over 11 years
    "Clutters up the loop" meaning it adds to the amount of code which you have to understand to know what's going on inside the loop, yet it only applies to the first iteration. I'm comparing it to Guru's best case which is to put that code outside the loop, not the OP's.
  • Michał Leon
    Michał Leon over 10 years
    This is way more efficient than sequentially reading the file from disk. And "lots of memory" was important 15 years ago.
  • Ömer An
    Ömer An almost 4 years
    @JimDavis Those days are old days by now.