How to find duplicate files with same name but in different case that exist in same directory in Linux?


Solution 1

The other answer is great, but instead of the "rather monstrous" Perl script, I suggest:

perl -pe 's!([^/]+)$!lc $1!e'

Which will lowercase just the filename part of the path.

Edit 1: In fact the entire problem can be solved with:

find . | perl -ne 's!([^/]+)$!lc $1!e; print if 1 == $seen{$_}++'

Edit 3: I found a solution using sed, sort and uniq that will also print out the duplicates, but it only works if there is no whitespace in the filenames:

find . |sed 's,\(.*\)/\(.*\)$,\1/\2\t\1/\L\2,'|sort|uniq -D -f 1|cut -f 1
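For example, in a scratch directory (the path is made up; GNU sed is assumed for the \L and \t escapes):

```shell
# create a case-duplicate pair and run the sed pipeline over it
d=/tmp/seddemo; mkdir -p "$d"
touch "$d/Taxi.jpg" "$d/taxi.jpg"
find "$d" | sed 's,\(.*\)/\(.*\)$,\1/\2\t\1/\L\2,' | sort | uniq -D -f 1 | cut -f 1
# lists both Taxi.jpg and taxi.jpg with their full paths
```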

Edit 2: And here is a longer script that will print out the names. It takes a list of paths on stdin, as given by find. Not so elegant, but still:

#!/usr/bin/perl -w

use strict;
use warnings;

my %dup_series_per_dir;
while (<>) {
    my ($dir, $file) = m!(.*/)?([^/]+?)$!;
    push @{$dup_series_per_dir{$dir||'./'}{lc $file}}, $file;
}

for my $dir (sort keys %dup_series_per_dir) {
    my @all_dup_series_in_dir = grep { @{$_} > 1 } values %{$dup_series_per_dir{$dir}};
    for my $one_dup_series (@all_dup_series_in_dir) {
        print "$dir\{" . join(',', sort @{$one_dup_series}) . "}\n";
    }
}

Solution 2

Try:

ls -1 | tr '[A-Z]' '[a-z]' | sort | uniq -c | grep -v " 1 "

Simple, really :-) Aren't pipelines wonderful beasts?

The ls -1 gives you the files one per line; the tr '[A-Z]' '[a-z]' converts all uppercase to lowercase; the sort sorts them (surprisingly enough); uniq -c collapses runs of identical lines while also giving you a count; and, finally, the grep -v " 1 " strips out those lines where the count was one.
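You can watch the stages at work on a hand-made list (no real files needed):

```shell
# qq/qQ are a duplicate pair, ab is unique
printf 'qq\nqQ\nab\n' | tr '[A-Z]' '[a-z]' | sort | uniq -c | grep -v " 1 "
# leaves only the counted duplicate line: "2 qq" (with uniq's leading padding)
```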

When I run this in a directory with one "duplicate" (I copied qq to qQ), I get:

2 qq

For the "this directory and every subdirectory" version, just replace ls -1 with find . or find DIRNAME if you want a specific directory starting point (DIRNAME is the directory name you want to use).

This returns (for me):

2 ./.gconf/system/gstreamer/0.10/audio/profiles/mp3
2 ./.gconf/system/gstreamer/0.10/audio/profiles/mp3/%gconf.xml
2 ./.gnome2/accels/blackjack
2 ./qq

which are caused by:

pax> ls -1d .gnome2/accels/[bB]* .gconf/system/gstreamer/0.10/audio/profiles/[mM]* [qQ]?
.gconf/system/gstreamer/0.10/audio/profiles/mp3
.gconf/system/gstreamer/0.10/audio/profiles/MP3
.gnome2/accels/blackjack
.gnome2/accels/Blackjack
qq
qQ

Update:

Actually, on further reflection, the tr will lowercase all components of the path so that both of

/a/b/c
/a/B/c

will be considered duplicates even though they're in different directories.
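A quick check with a hand-made list shows the collision:

```shell
# after tr, both paths become /a/b/c, so they are counted as duplicates
printf '/a/b/c\n/a/B/c\n' | tr '[A-Z]' '[a-z]' | sort | uniq -c | grep -v " 1 "
# prints a count of 2 for /a/b/c
```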

If you only want duplicates within a single directory to show as a match, you can use the (rather monstrous):

perl -ne '
    chomp;
    @flds = split (/\//);
    $lstf = $flds[-1];
    $lstf =~ tr/A-Z/a-z/;
    for ($i = 0; $i < $#flds; $i++) {
        print "$flds[$i]/";
    };
    print "$lstf\n";'

in place of:

tr '[A-Z]' '[a-z]'

What it does is lowercase only the final portion of the pathname rather than the whole thing. In addition, if you only want regular files (no directories, FIFOs and so forth), use find -type f to restrict what's returned.
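Using the short one-liner from Solution 1 as that replacement shows the difference on the same two paths:

```shell
# lowercasing only the filename keeps /a/b/c and /a/B/c distinct
printf '/a/b/c\n/a/B/c\n' | perl -pe 's!([^/]+)$!lc $1!e' | sort | uniq -c | grep -v " 1 "
# prints nothing: the b/B difference is in a directory component, not the filename
```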

Solution 3

I believe

ls | sort -f | uniq -i -d

is simpler, faster, and will give the same result.
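A quick check in a scratch directory (names are made up):

```shell
mkdir -p /tmp/casedemo
cd /tmp/casedemo
touch taxi.jpg Taxi.jpg unique.txt
ls | sort -f | uniq -i -d
# prints one representative per duplicate set (Taxi.jpg or taxi.jpg, depending on locale)
```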

Solution 4

Following up on the answer from mpez0: to detect duplicates recursively, just replace "ls" with "find .". The only problem I see with this is that if a duplicated name is a directory, you also get one entry for each file inside it, so some human judgment is needed to interpret the output.

But anyway, you're not automatically deleting these files, are you?

find . | sort -f | uniq -i -d
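For example (made-up scratch paths), a directory pair that differs only in case flags every file inside it as well:

```shell
# images/ and Images/ collide, and so does each file inside them
d=/tmp/recdemo
mkdir -p "$d/images" "$d/Images"
touch "$d/images/pic.png" "$d/Images/pic.png"
find "$d" | sort -f | uniq -i -d
# prints one line for the directory pair and one per file pair inside
```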

Solution 5

findsn is a nice little command-line app that you get if you compile fslint; the deb package does not include it.

It will find any files with the same name, it's lightning fast, and it can handle differences in case.

/findsn --help
find (files) with duplicate or conflicting names.
Usage: findsn [-A -c -C] [[-r] [-f] paths(s) ...]

If no arguments are supplied the $PATH is searched for any redundant or conflicting files.

-A  reports all aliases (soft and hard links) to files.
    If no path(s) specified then the $PATH is searched.

If only path(s) are specified, they are checked for duplicate named files. You can qualify this with -C to ignore case in this search. Qualifying with -c is more restrictive, as only files (or directories) in the same directory whose names differ only in case are reported, i.e. -c will flag files and directories that would conflict if transferred to a case-insensitive file system. Note that if -c or -C is specified and no path(s) are given, the current directory is assumed.

Author: Camsoft

Updated on February 02, 2020

Comments

  • Camsoft (over 4 years ago)

How can I return a list of files that are named duplicates, i.e. have the same name but in different case, and that exist in the same directory?

    I don't care about the contents of the files. I just need to know the location and name of any files that have a duplicate of the same name.

    Example duplicates:

    /www/images/taxi.jpg
    /www/images/Taxi.jpg
    

Ideally I need to search all files recursively from a base directory. In the above example it was /www/.