How to remove duplicates based on a key in MongoDB?


Solution 1

This answer is obsolete: the dropDups option was removed in MongoDB 3.0, so a different approach is required in most cases. For example, you could use aggregation as suggested in: MongoDB duplicate documents even after adding unique key.

If you are certain that the source_references.key identifies duplicate records, you can ensure a unique index with the dropDups:true index creation option in MongoDB 2.6 or older:

db.things.ensureIndex({'source_references.key' : 1}, {unique : true, dropDups : true})

This will keep the first unique document for each source_references.key value, and drop any subsequent documents that would otherwise cause a duplicate key violation.

Important Note: Any documents missing the source_references.key field will be treated as having a null value, so subsequent documents missing the key field will be deleted. You can add the sparse:true index creation option so the index only applies to documents with a source_references.key field.

Obvious caution: Take a backup of your database, and try this in a staging environment first if you are concerned about unintended data loss.
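
For MongoDB 3.0 and newer, here is a minimal sketch of the aggregation approach mentioned above (assuming, as in the question, that source_references.key identifies the duplicates and the collection is db.things):

// Group documents by the duplicate key, collect their _ids, and
// delete every _id after the first in each group of duplicates.
db.things.aggregate([
  { $group: {
      _id: "$source_references.key",
      dups: { $push: "$_id" },
      count: { $sum: 1 }
  }},
  { $match: { count: { $gt: 1 } } }  // only keys that actually repeat
], { allowDiskUse: true }).forEach(function (group) {
  group.dups.shift();  // keep one document per key
  db.things.remove({ _id: { $in: group.dups } });
});

Note that which document survives is arbitrary here, since $push order is not guaranteed without a preceding $sort stage.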

Solution 2

This is the simplest query I used, on MongoDB 3.2:

// Walk the collection in _id order; for each document, delete any later
// document (greater _id) that shares the same myCustomKey value.
db.myCollection.find({}, {myCustomKey: 1}).sort({_id: 1}).forEach(function (doc) {
    db.myCollection.remove({_id: {$gt: doc._id}, myCustomKey: doc.myCustomKey});
});

Index your custom key before running this to speed it up.
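
For example, assuming the field is called myCustomKey as in the snippet above:

db.myCollection.createIndex({ myCustomKey: 1 });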

Solution 3

While @Stennie's answer is valid, it is not the only way. In fact, the MongoDB manual asks you to be very cautious when doing that. There are two other options:

  1. Let MongoDB do it for you using map-reduce (a sketch follows this list).
  2. Do it programmatically, which is less efficient; Solutions 4 and 5 below take this route.
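
A minimal sketch of the map-reduce option (this assumes, as in the question's sample record, that source_references is an array whose first element carries the key, and it writes intermediate results to a scratch collection named dup_keys):

var mapFn = function () {
  // Emit the duplicate key together with this document's _id.
  emit(this.source_references[0].key, { ids: [this._id], count: 1 });
};
var reduceFn = function (key, values) {
  // Merge the _id lists and counts of documents sharing the same key.
  var result = { ids: [], count: 0 };
  values.forEach(function (v) {
    result.ids = result.ids.concat(v.ids);
    result.count += v.count;
  });
  return result;
};
db.things.mapReduce(mapFn, reduceFn, { out: "dup_keys" });

// Keep one _id per duplicated key and remove the rest.
db.dup_keys.find({ "value.count": { $gt: 1 } }).forEach(function (res) {
  res.value.ids.slice(1).forEach(function (id) {
    db.things.remove({ _id: id });
  });
});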

Solution 4

Here is a slightly more 'manual' way of doing it:

Essentially, first get a list of all the unique keys you are interested in.

Then search for each of those keys and delete all but the first document returned.

    // For each distinct key value, skip the first matching document and
    // delete one document per extra match; justOne removes a single
    // (arbitrary) document matching the key on each call.
    db.collection.distinct("key").forEach((keyValue) => {
      var i = 0;
      db.collection.find({key: keyValue}).forEach((doc) => {
        if (i) db.collection.remove({key: keyValue}, {justOne: true});
        i++;
      });
    });

Solution 5

Expanding on Fernando's answer, I found that it was taking too long, so I modified it.

var x = 0; // total documents processed, for progress reporting
db.collection.distinct("field").forEach(fieldValue => {
  var i = 0;
  db.collection.find({ "field": fieldValue }).forEach(doc => {
    if (i) {
      // Remove the duplicate directly by its _id.
      db.collection.remove({ _id: doc._id });
    }
    i++;
    x += 1;
    if (x % 100 === 0) {
      print(x); // Every time we process 100 docs.
    }
  });
});

The improvement is basically removing by document id, which should be faster, and printing the progress of the operation; you can change the interval (every 100 documents) to whatever you prefer.

Also, indexing the field before the operation helps.
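
If the one-remove-per-document round trips are still too slow, here is a further sketch (assuming a shell where bulkWrite is available, i.e. MongoDB 3.2+) that batches the deletes:

var ops = [];
db.collection.distinct("field").forEach(fieldValue => {
  var first = true;
  db.collection.find({ field: fieldValue }, { _id: 1 }).forEach(doc => {
    if (!first) ops.push({ deleteOne: { filter: { _id: doc._id } } });
    first = false;
    // Flush in batches of 1000 to bound memory use.
    if (ops.length === 1000) {
      db.collection.bulkWrite(ops);
      ops = [];
    }
  });
});
if (ops.length) db.collection.bulkWrite(ops); // flush the remainder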


Comments

  • user1518659 (almost 2 years ago)

    I have a collection in MongoDB with around 3 million records. A sample record looks like this:

     { "_id" = ObjectId("50731xxxxxxxxxxxxxxxxxxxx"),
       "source_references" : [
                               "_id" : ObjectId("5045xxxxxxxxxxxxxx"),
                               "name" : "xxx",
                               "key" : 123
                              ]
     }
    

    The collection contains a lot of duplicate records with the same source_references.key. (By duplicate I mean the same source_references.key, not the same _id.)

    I want to remove the duplicate records based on source_references.key. I was thinking of writing some PHP code to traverse each record and remove duplicates when found.

    Is there a way to remove the duplicates from within the Mongo shell itself?