Filter a list to only leave objects that occur once
Solution 1
Here's another dictionary-oriented way:
l = [0, 1, 1, 2, 2]
d = {}
for i in l: d[i] = i in d
[k for k in d if not d[k]] # unordered, loop over the dictionary
[k for k in l if not d[k]] # ordered, loop over the original list
Solution 2
You'll need two loops (or equivalently a loop and a listcomp, like below), but not nested ones:
import collections
d = collections.defaultdict(int)
for x in L: d[x] += 1
L[:] = [x for x in L if d[x] == 1]
This solution assumes that the list items are hashable, that is, that they're usable as indices into dicts, members of sets, etc.
The OP indicates they care about object IDENTITY and not VALUE (so, for example, two sublists both equal to [1, 2, 3] but not identical would not be considered duplicates of each other). If that's indeed the case, then this code is usable: just replace d[x] with d[id(x)] in both occurrences and it will work for ANY types of objects in list L.
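A quick sketch of that identity-based variant (same counting approach, with the dict keyed by id(x); the sample data here is illustrative):

```python
import collections

# Two equal but distinct sublists: duplicates by VALUE, unique by IDENTITY.
a = [1, 2, 3]
b = [1, 2, 3]
L = [a, a, b]

d = collections.defaultdict(int)
for x in L:
    d[id(x)] += 1          # count by object identity, not value

unique = [x for x in L if d[id(x)] == 1]
print(unique)              # [[1, 2, 3]] -- only b occurs exactly once
```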
Mutable objects (lists, dicts, sets, ...) are typically not hashable and therefore cannot be used in such ways. User-defined objects are by default hashable (with hash(x) == id(x)) unless their class defines comparison special methods (__eq__, __cmp__, ...), in which case they're hashable if and only if their class also defines a __hash__ method.
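A quick illustration of those hashability rules (the Point class is a made-up example):

```python
# Mutable built-ins are not hashable:
try:
    d = {[1, 2]: 'x'}      # raises TypeError: unhashable type: 'list'
except TypeError as e:
    print(e)

class Point:
    """No __eq__ defined, so instances hash (and compare) by identity."""
    def __init__(self, x):
        self.x = x

d = {Point(1): 'ok'}       # works: default identity-based hashing
print(d)
```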
If list L's items are not hashable, but are comparable for inequality (and therefore sortable), and you don't care about their order within the list, you can perform the task in time O(N log N) by first sorting the list and then applying itertools.groupby (almost but not quite in the way another answer suggested).
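A minimal sketch of that sort-then-groupby approach, assuming items are sortable and order doesn't matter (the sample list is illustrative):

```python
import itertools

L = [[1, 2], [0], [1, 2], [3, 4], [0]]   # unhashable (list) items

# Sorting puts equal items next to each other, so each groupby group
# contains all occurrences of one value; keep the groups of size 1.
result = [key for key, group in itertools.groupby(sorted(L))
          if len(list(group)) == 1]
print(result)   # [[3, 4]]
```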
Other approaches, of gradually decreasing performance and increasing generality, can deal with unhashable sortables when you DO care about the list's original order (make a sorted copy and, in a second loop, check for repetitions in it with the help of bisect -- also O(N log N) but a tad slower), and with objects whose only applicable property is that they're comparable for equality (there is no way to avoid the dreaded O(N**2) performance in that maximally general case).
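A sketch of the bisect-based, order-preserving variant described above (my own illustration of the idea, not code from the answer):

```python
import bisect

L = [[3, 4], [0], [1, 2], [1, 2], [0], [5]]
s = sorted(L)                      # O(N log N) sorted copy

def count_in_sorted(sorted_list, x):
    """Count occurrences of x in a sorted list via two bisections."""
    lo = bisect.bisect_left(sorted_list, x)
    hi = bisect.bisect_right(sorted_list, x)
    return hi - lo

# Each membership count is O(log N); the original order is preserved.
result = [x for x in L if count_in_sorted(s, x) == 1]
print(result)   # [[3, 4], [5]]
```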
If the OP can clarify which case applies to his specific problem I'll be glad to help (and in particular, if the objects in his list ARE hashable, the code I've already given above should suffice ;-).
Solution 3
[x for x in the_list if the_list.count(x)==1]
Though that's still a nested loop behind the scenes.
Solution 4
In the same spirit as Alex's solution, you can use a Counter/multiset (built into the collections module in 2.7, available as a recipe for 2.5 and above) to do the same thing:
In [1]: from collections import Counter
In [2]: L = [0, 1, 1, 2, 2]
In [3]: multiset = Counter(L)
In [4]: [x for x in L if multiset[x] == 1]
Out[4]: [0]
Solution 5
>>> l = [0,1,1,2,2]
>>> [x for x in l if l.count(x) is 1]
[0]
Comments
-
Daniel Farrell almost 2 years
I would like to filter this list,
[0, 1, 1, 2, 2]
to only leave
[0]
I'm struggling to do it in a 'pythonic' way. Is it possible without nested loops?
-
Douglas Leeder over 14 years
I think I prefer Alex's solution as it only iterates through the list twice; your solution is n^2.
-
Daniel Farrell over 14 years
WOW. I really need to go away and read about list comprehensions! I'm still trying to figure out exactly what is happening above. But thanks very much sepp and Alex.
-
sepp2k over 14 years
@boyfarrell: you can read that as "Go through all x in the_list and select those where the_list.count(x)==1, i.e. those that appear only once"
-
Daniel Farrell over 14 years
No, I don't need to hash; I think it's just the duplication of objects that I wanted to remove. (I'm still thinking in C, but what I wanted to say above was that the pointers to the objects will be the same, so there is no need to hash -- is that valid in Python land?)
-
samtregar over 14 years
Why did you write "L[:] = list(set(L))" instead of the more obvious (to me) "L = list(set(L))"? They seem to do the same thing when I try them in the interpreter. Is there some nuance I'm missing? Thanks!
-
yantrab over 14 years
The second solution doesn't seem to do the right thing: it removes duplicates, but the problem was to remove all items that are duplicated.
-
Alex Martelli over 14 years
@samtregar, rebinding just the name sometimes works just as well as rebinding the contents, and sometimes it doesn't (because there are other outstanding references to the original list object beyond its original name -- e.g., that's the case for function arguments), so why risk it?
-
Alex Martelli over 14 years
Yep, and my approach boils down to exactly the same, except that I do a single pass beforehand to compute how many times each object appears (so the overall approach is O(N)) instead of a counting pass per item (which makes this approach overall O(N**2)).
-
hughdbrown over 14 years
Love collections.defaultdict. I need to write a bot that will answer all Python questions with defaultdict.
-
nilamo over 14 years
Converting back to a list would be nice.
-
Markus over 14 years
Is there any advantage to using is over ==? I know 1 is a small enough number for this to work, but is "is" actually faster when comparing integers?
-
Markus over 14 years
Alex's solution may be faster, but this is more elegant, I think. ;-)
-
Jochen Ritzel over 14 years
You shouldn't use is with numbers; it only works because CPython optimizes some often-used constants (like small (<255) ints, 1.0, 0.0, empty tuples/sets, etc.) and treats them as singletons ... but that is not part of the Python language.
-
hughdbrown over 14 years
I'm not going to vote sepp2k's solution down, but it is not the best solution. Alex's use of defaultdict and a list comprehension to filter is exactly right. It is elegant.
-
Jochen Ritzel over 14 years
+1 Really nice, it collects not a bit more information than you need to solve the task. You can get rid of the .keys() though.
-
mhawke over 14 years
Yes, .keys() is entirely optional, but slightly more readable IMO.
-
Maria Eugenia D'Amato over 14 years
Nice thinking, 'cause it doesn't have unnecessary info like Alex's does, though it's essentially the same concept, and it won't work on items which aren't hashable.
-
Martijn Pieters almost 8 years
@mhawke: no it isn't. It is much slower, because d.keys() in Python 2 produces a new list object each time you call it. In Python 3 it creates a new dictionary view each time. Use k in d, always. And dict.has_key has been deprecated in favour of key in dict. And if you looped over l instead of d you would get to maintain order just like Alex's version does. Together, that makes for i in l: d[i] = i in d and [k for k in l if not d[k]].
-
Martijn Pieters almost 8 years
This could be updated to use collections.Counter() instead: d = Counter(L), then L[:] = [x for x in L if d[x] == 1].
-
Martijn Pieters almost 8 years
@Markus: quadratic behaviour is not really all that elegant.
-
Martijn Pieters almost 8 years
Do not use is when testing for value equality. This solution is an O(N**2) complexity approach, as list.count() does a full scan of the list each time you call it, and you call it N times.
-
Martijn Pieters almost 8 years
This doesn't preserve order like Alex's does.
-
mhawke almost 8 years
@MartijnPieters: thanks for the suggestions that would improve this ancient answer.