Get the unique lines of second file in result of comparing two files

Solution 1

You could accomplish this with grep.

Here is an example:

$ echo localhost > local_hosts

$ grep -v -f local_hosts /etc/hosts
127.0.1.1       ubuntu

# The following lines are desirable for IPv6 capable hosts
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
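Note that by default grep treats each line of the pattern file as a basic regular expression, so metacharacters such as a dot in a hostname can match unintended lines. Adding -F (fixed strings) and -x (whole-line match) avoids that. A minimal sketch, with hypothetical file contents:

```shell
# Hypothetical sample files for illustration
printf '%s\n' alpha beta gamma > file1
printf '%s\n' beta delta gamma epsilon > file2

# Keep only the lines of file2 that do not appear in file1:
# -F  treat patterns as fixed strings, not regexes
# -x  match whole lines only
# -v  invert the match: print non-matching lines
# -f  read the patterns from file1
grep -Fxvf file1 file2
# -> delta, epsilon
```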

Solution 2

Generally, what you actually want is to keep the lines of file2 that are not in file1.

There are several ways to do this, for example via comm:

comm -13 <(sort file1) <(sort file2)

via join

join -v 2 <(sort file1) <(sort file2)

or via awk, which doesn't need the files to be sorted and preserves the original order of file2:

awk 'NR==FNR{lines[$0];next} !($0 in lines)' file1 file2
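On a small pair of sample files (hypothetical contents, for illustration), both approaches yield the lines unique to file2; the sketch below uses temporary sorted copies instead of bash process substitution so it also runs under plain sh:

```shell
# Hypothetical sample files for illustration
printf '%s\n' one two three > file1
printf '%s\n' two four three five > file2

# comm needs sorted input; column 2 holds the lines unique to the
# second file, so -1 and -3 suppress the other two columns
sort file1 > file1.sorted
sort file2 > file2.sorted
comm -13 file1.sorted file2.sorted
# -> five, four (in sorted order)

# awk loads file1 into an array, then prints the lines of file2
# that are not in it, keeping file2's original line order
awk 'NR==FNR{lines[$0];next} !($0 in lines)' file1 file2
# -> four, five
```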
Author: anlarye
Updated on September 18, 2022

Comments

  • anlarye
    anlarye almost 2 years

    I have two text files, and I want to read file1 line by line, searching for the same line in file2 and removing it from file2.

    I have the pseudocode of:

    for line in file1.txt
    do
      sed search line and delete in file2.txt
    done
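
    A literal, working version of that pseudocode might look like the sketch below (sample file contents are hypothetical). It does run, but it is slow (one full sed pass over file2.txt per input line) and breaks if file1.txt contains regex metacharacters or slashes, which is why the grep-based answer above is preferable:

    ```shell
    # Hypothetical sample data for illustration
    printf '%s\n' alpha beta > file1.txt
    printf '%s\n' alpha gamma beta delta > file2.txt

    # Delete every line of file2.txt that exactly matches a line of
    # file1.txt. WARNING: unsafe if lines contain regex metacharacters
    # or '/', and rewrites file2.txt once per line (GNU sed -i).
    while IFS= read -r line; do
      sed -i "/^${line}\$/d" file2.txt
    done < file1.txt

    cat file2.txt
    # -> gamma, delta
    ```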
    
    • j0h
      j0h over 7 years
      Not a single answer for the diff command yet.
  • muru
    muru over 7 years
    More to the point, you should accomplish this with grep. Regex special characters in a line from file1 would cause a lot of problems with sed, but grep has -F for that.
  • anlarye
    anlarye over 7 years
    In this case I do want to remove the lines from file2 that are in file1. Sort won't work, as that would disrupt the order of the lines in file2.
  • αғsнιη
    αғsнιη over 7 years
    αғsнιη over 7 years
    So the awk option is exactly what you expected, and I already included it in my answer :)