Get the unique lines of the second file when comparing two files
Solution 1
You could accomplish this with grep, using -v to invert the match and -f to read the patterns from a file.
Here is an example:
$ echo localhost > local_hosts
$ grep -v -f local_hosts /etc/hosts
127.0.1.1 ubuntu
# The following lines are desirable for IPv6 capable hosts
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
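If lines in file1 can contain regex metacharacters (dots, brackets, and so on), adding -F (treat patterns as fixed strings) and -x (match whole lines only) makes the comparison literal. A minimal sketch with made-up sample files:

```shell
# Hypothetical sample files: file2 has one extra line, "gamma".
printf 'alpha\nbeta\n'        > file1
printf 'alpha\nbeta\ngamma\n' > file2

# -v invert the match, -x whole-line match, -F fixed strings,
# -f read the patterns from file1
grep -vxFf file1 file2
# → gamma
```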
Solution 2
Generally, what you actually want is to keep the lines in file2 that are not in file1.
There are several ways to do this:
comm -13 <(sort file1) <(sort file2)
via join:
join -v 2 <(sort file1) <(sort file2)
or via awk, which does not require the files to be sorted (the first file's lines are stored as array keys, then only lines of the second file that are absent from that set are printed):
awk 'NR==FNR{lines[$0];next} !($0 in lines)' file1 file2
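As a quick check (filenames and contents here are made-up samples), the awk version also preserves file2's original line order, which comm and join cannot do since they work on sorted input:

```shell
# Hypothetical sample files; file2 has one line not present in file1.
printf 'alpha\nbeta\n'        > file1
printf 'gamma\nalpha\nbeta\n' > file2

# First pass (NR==FNR) stores file1's lines as array keys;
# second pass prints each file2 line that is not a stored key.
awk 'NR==FNR{lines[$0]; next} !($0 in lines)' file1 file2
# → gamma
```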
Author: anlarye (updated on September 18, 2022)
Comments
-
anlarye, almost 2 years ago: I have two text files, and I want to read file1 line by line, searching for the same line in file2 and removing it from file2. My pseudocode is:
for line in file1.txt do sed search line and delete in file2.txt done
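That pseudocode could be rendered literally as a shell loop (a sketch with hypothetical filenames only; it rereads file2.txt once per input line, and it breaks if a line contains regex metacharacters, which is why the grep approach above is preferable):

```shell
# Hypothetical input files for the loop below.
printf 'alpha\n'       > file1.txt
printf 'alpha\nbeta\n' > file2.txt

# Delete every line of file2.txt that exactly matches a line of
# file1.txt (GNU sed's -i edits the file in place).
while IFS= read -r line; do
  sed -i "/^${line}\$/d" file2.txt
done < file1.txt

cat file2.txt
# → beta
```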
-
j0h, over 7 years ago: Not a single answer for the diff command yet.
-
muru, over 7 years ago: More to the point, you should accomplish this with grep. Regex special characters in a line from file1 would cause a lot of problems with sed, but grep has -F for that.
-
anlarye, over 7 years ago: In this case I do want to remove the lines from file2 that are in file1. Sorting won't work, as that would disrupt the order of the lines in file2.
-
αғsнιη, over 7 years ago: So the only option matching your expectation would be awk, which I already included in my answer :)