Finding MongoDB records in batches (using the Mongoid Ruby adapter)
Solution 1
With Mongoid, you don't need to manually batch the query.
In Mongoid, Model.all returns a Mongoid::Criteria instance. Upon calling #each on this Criteria, a Mongo driver cursor is instantiated and used to iterate over the records. This underlying Mongo driver cursor already batches all records. By default the batch_size is 100.
For more information on this topic, read this comment from the Mongoid author and maintainer.
In summary, you can just do this:
Model.all.each do |r|
  Sunspot.index(r)
end
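If you need the driver to fetch more documents per round trip than the default 100, the batch size can be set on the criteria itself, the same call that Solution 3 and one of the comments below rely on. A minimal sketch, assuming a Mongoid version that exposes batch_size on a criteria:

# Ask the driver to pull 500 documents per round trip instead of the
# default 100; the cursor still streams, so memory stays bounded.
Model.all.batch_size(500).each do |r|
  Sunspot.index(r)
end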
Solution 2
If you are iterating over a collection where each record requires a lot of processing (e.g. querying an external API for each item), it is possible for the cursor to time out. In this case you need to perform multiple queries so you don't leave the cursor open.
require 'mongoid'

module Mongoid
  class Criteria
    # Lazily yields every record, issuing a fresh limit/skip query per
    # batch so no single cursor stays open while items are processed.
    def in_batches_of(count = 100)
      Enumerator.new do |y|
        total = 0
        loop do
          batch = 0
          self.limit(count).skip(total).each do |item|
            total += 1
            batch += 1
            y << item
          end
          break if batch == 0
        end
      end
    end
  end
end
The helper method above adds the batching functionality to any criteria. It can be used like so:
Post.all.order_by(:id => 1).in_batches_of(7).each_with_index do |post, index|
  # call external slow API
end
Just make sure you ALWAYS have an order_by on your query, otherwise the paging might not do what you want it to. Also, I would stick with batches of 100 or less. As mentioned in the accepted answer, Mongoid queries in batches of 100, so you never want to leave the cursor open while doing the processing.
Solution 3
It is faster to send batches to Sunspot as well. This is how I do it:
records = []
Model.batch_size(1000).no_timeout.only(:your_text_field, :_id).all.each do |r|
  records << r
  if records.size > 1000
    Sunspot.index! records
    records.clear
  end
end
Sunspot.index! records
- no_timeout: prevents the cursor from disconnecting (after 10 minutes, by default)
- only: selects only the id and the fields that are actually indexed
- batch_size: fetches 1000 entries at a time instead of 100
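If you would rather not manage the buffer array by hand, the same idea can be sketched with each_slice, which works because a Mongoid criteria is enumerable (a comment below uses Model.all.each_slice(100) in the same way). This assumes, as in the code above, that Sunspot.index! accepts an array:

# Stream the cursor in driver batches of 1000 and hand Sunspot arrays of
# up to 1000 records at a time; the final partial slice is indexed too,
# so nothing is left over after the loop.
Model.batch_size(1000).no_timeout.only(:your_text_field, :_id).all.each_slice(1000) do |slice|
  Sunspot.index! slice
end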
Solution 4
I am not sure about the batch processing, but you can do it this way:
current_page = 0
item_count = Model.count
while item_count > 0
  Model.all.skip(current_page * 1000).limit(1000).each do |item|
    Sunspot.index(item)
  end
  item_count -= 1000
  current_page += 1
end
But if you are looking for a proper long-term solution, I wouldn't recommend this. Let me explain how I handled the same scenario in my app. Instead of doing batch jobs:
-
I have created a Resque job which updates the Solr index
class SolrUpdator
  @queue = :solr_updator

  def self.perform(item_id)
    item = Model.find(item_id)
    # I have used RSolr, you can change the code below to handle Sunspot
    solr = RSolr.connect :url => Rails.application.config.solr_path
    js = JSON.parse(item.to_json)
    solr.add js
  end
end
-
After adding the item, I just put an entry on the Resque queue (one way to hook this into a model callback is sketched after this list)
Resque.enqueue(SolrUpdator, item.id.to_s)
- That's all. Start the Resque workers and they will take care of everything.
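Purely as an illustration of where that Resque.enqueue call could live, here is a minimal sketch of hooking it into a Mongoid callback. The after_save hook and the enqueue_solr_update helper are assumptions for the example, not part of the original answer:

class Model
  include Mongoid::Document

  # Hypothetical wiring: enqueue a reindex job whenever a document is saved.
  after_save :enqueue_solr_update

  private

  def enqueue_solr_update
    Resque.enqueue(SolrUpdator, id.to_s)
  end
end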
Dan L
Updated on July 09, 2022
Comments
-
Dan L almost 2 years
Using Rails 3 and MongoDB with the Mongoid adapter, how can I batch finds to MongoDB? I need to grab all the records in a particular MongoDB collection and index them in Solr (initial index of data for searching).
The problem I'm having is that doing Model.all grabs all the records and stores them in memory. Then when I process them and index them in Solr, my memory gets eaten up and the process dies.
What I'm trying to do is batch the find in Mongo so that I can iterate over 1,000 records at a time, pass them to Solr to index, and then process the next 1,000, etc...
The code I currently have does this:
Model.all.each do |r|
  Sunspot.index(r)
end
For a collection that has about 1.5 million records, this eats up 8+ GB of memory and kills the process. In ActiveRecord, there is a find_in_batches method that allows me to chunk up the queries into manageable batches that keep the memory from getting out of control. However, I can't seem to find anything like this for MongoDB/Mongoid.
I would LIKE to be able to do something like this:
Model.all.in_batches_of(1000) do |batch|
  Sunspot.index(batch)
end
That would alleviate my memory problems and query difficulties by handling only a manageable set of records each time. The documentation on doing batch finds in MongoDB is sparse, however. I see lots of documentation on doing batch inserts but not batch finds.
-
Dan L over 12 years: Ramesh, the first block of code you provided works very well for my use case. It's just a one-time load and index of the data using a script file, so using Resque may be overkill for my particular case. But the batching ability works perfectly!
-
Ryan McGeary over 12 years: This isn't necessary. Mongoid and the underlying Mongo driver already batch queries with a cursor. This keeps the memory footprint small.
-
RameshVel over 12 years: Thanks for the info @RyanMcGeary, how have I missed the cursor thing... In the link, durran talked about batch_size; how can we specify that externally?
-
Ryan McGeary over 12 years: @RameshVel, I'm not sure if Mongoid exposes the ability to change the batch_size per query. That might be a worthy patch if it isn't already an option.
-
Mindey I. over 11 years: Model.all.to_a would load the entire collection into memory.
-
Bogdan Gusiev over 10 years: Nice. And what about other Enumerable methods like map or collect?
-
Adit Saxena over 10 years: That's right, please don't do this. When we're talking about large datasets, avoid converting the entire collection to an array at once: use Model.find_each or batch in any way, but never Model.all.to_a.
-
matt walters almost 10 years: Remember to 'Sunspot.index! records' after the loop or you won't index the last group of < 1000, I believe.
-
Mic92 almost 10 years: Correct. I forgot to copy this part.
-
Paul McClean about 9 years: Model.find_each is not a Mongoid method. You would use Model.all.each instead.
-
rewritten about 9 years: The .no_timeout method on a criteria saves you from having to manually reconnect: Post.all.order_by(:id => 1).batch_size(7).no_timeout.each_with_index do ...
-
rewritten about 9 years: Loading the whole database into memory... duh. The whole point of this is to be able to query documents in batches; if you have 4 million documents you will kill your server by first loading them into a single array, and then another array of groups.
-
ratnakar about 9 years: @rewritten please check the above solution, he explained the same thing I gave. Thanks for the explanation, Ryan McGeary.
-
bigpotato almost 9 years: So does that mean that, by default, the database is hit ~ n / 100 times every time?
-
Ryan McGeary almost 9 years: @Edmund "Hit" probably isn't the best word to use here, because it implies re-running the query each time. It's a database cursor. Think of it more like streaming the data across in batches of 100.
-
p.matsinopoulos almost 8 years: @RyanMcGeary The link inside your answer is broken. Can you edit/correct it?
-
Ryan McGeary almost 8 years: @p.matsinopoulos Took me a while to find the same comment. It's been almost 5 years, and Mongoid has since switched from GitHub Issues to JIRA. I think I found the appropriate comment.
-
Adrien Jarthon about 4 years: For the record, in recent versions the batch size internally usually starts at 100 but then increases to reduce the number of calls to the database. What's also great about this is that it works with all Enumerable methods, so if you want to get your records in actual Ruby batches (like arrays of 100), you can do: Model.all.each_slice(100) { |array| ... }
-
A moskal escaping from Russia about 4 years: in_groups_of is a Rails Array method; to use it you would have to convert Model.all to an array, which is not recommended at all. The -1 is to warn people not to do that.
-
mltsy about 4 years: One MAJOR catch with this behavior that gets me over and over is that it doesn't work on relations, because the relation will store an "IdentityMap" of all the loaded records. For instance: person.purchases.each { ... } will load all "purchases" into memory, attached to the person instance. Instead, you have to call Purchase.where(person: person).each to avoid storing all the returned records in memory.
-
Curious Sam almost 2 years: @rewritten In some cases, this doesn't work; even with no_timeout, it will time out regardless. I don't know what the limit was, but from what I observed, it will time out if you iterate over the collection for around 2-3 hours.