mongodump error "Failed: error reading from db: EOF" (no entries in server log)


Solution 1

The error "Failed: error reading from db: EOF" is caused by running out of memory while writing out the oplog.

You can reduce mongodump's memory usage by adding the --quiet option when you run it:

mongodump --quiet
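For example, a fuller invocation might look like the following. The host, database name, and output directory are placeholders, not values from the question; --quiet, --host, --port, --db, and --out are standard mongodump options. This cannot be run without a reachable mongod, so treat it as a sketch:

```shell
# --quiet suppresses progress reporting during the dump.
# Host, port, database, and output directory below are illustrative.
mongodump --quiet \
  --host mongo.example.com --port 27017 \
  --db Tetzi005 \
  --out /db/backup/dump
```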

Solution 3

For containerized setups, remember to set storage.wiredTiger.engineConfig.cacheSizeGB.

From https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGB :

The storage.wiredTiger.engineConfig.cacheSizeGB limits the size of the WiredTiger internal cache. The operating system will use the available free memory for filesystem cache, which allows the compressed MongoDB data files to stay in memory. In addition, the operating system will use any free RAM to buffer file system blocks and file system cache.

To accommodate the additional consumers of RAM, you may have to decrease WiredTiger internal cache size.
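As a sketch, in the YAML configuration format used by MongoDB 3.x and later (the 1 GB value is illustrative, not a recommendation; size it to fit well inside the container's memory limit):

```yaml
# mongod.conf (YAML format) -- example value only; mongod uses memory
# beyond the WiredTiger cache, so keep this below the container limit.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```

If the container runs with a hard memory limit (e.g. docker run --memory=4g), the cache should be sized against that limit rather than the host's total RAM, since mongod only sees the host RAM figure.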

Author: Sybil

Updated on July 15, 2022

Comments

  • Sybil
    Sybil almost 2 years

Our mongod.conf (MongoDB version 3.0.6); the data files are only 240 MB in size. The network was reliable at these timestamps.

    # mongod.conf
    
    # Where to store the data.
    
    # Note: if you run mongodb as a non-root user (recommended) you may
    # need to create and set permissions for this directory manually,
    # e.g., if the parent directory isn't mutable by the mongodb user.
    dbpath=/db/db32/mongodb/data/
    
    # path to logfile
    logpath=/db/db32/mongodb/logs/mongod.log
    
    # add new entries to the end of the logfile
    logappend=true
    
    # Listen to local interface only. Comment out to listen on all interfaces.
    #bind_ip = 127.0.0.1
    
    # enable operation journaling
    #journal = true
    #smallfiles = true
    nojournal = true
    
    # Enables periodic logging of CPU utilization and I/O wait
    cpu = true
    
    # enable database authentication for users connecting from remote hosts
    auth = true
    
    # Verbose logging output.
    #verbose = true
    
    # Enable db quota management
    #quota = true
    
    # Set oplogging level where n is
    #   0=off (default)
    #   1=W
    #   2=R
    #   3=both
    #   7=W+some reads
    #diaglog = 0
    
    # Ignore query hints
    #nohints = true
    
    # Turns off server-side scripting.  This will result in greatly limited
    # functionality
    noscripting = true
    
    # Turns off table scans.  Any query that would do a table scan fails.
    #notablescan = true
    
    # Disable data file preallocation.
    #noprealloc = true
    
    # Specify .ns file size for new databases.
    # nssize = <size>
    
    # Replication Options
    
    # in replicated mongo databases, specify the replica set name here
    #replSet=setname
    # maximum size in megabytes for replication operation log
    #oplogSize=1024
    # path to a key file storing authentication info for connections
    # between replica set members
    #keyFile=/path/to/keyfile
    
    # Forces the mongod to validate all requests from clients
    objcheck = true
    
    # Disable HTTP status interface
    nohttpinterface = true
    
    # disable REST interface
    rest = false
    
    # database profiling 1 = only includes slow operations
    profile = 1
    
    # logs slow queries to the log
    slowms = 100
    
    # maximum number of simultaneous connections
    maxConns = 25
    

Output of mongodump with the verbose flag. The server log has no entries at this timestamp.

    (...)
    2016-03-09T16:51:17.378+0100    enqueued collection 'Tetzi005.xxx'
    2016-03-09T16:51:17.384+0100    enqueued collection 'Tetzi005.xxxxxx'
    2016-03-09T16:51:17.391+0100    enqueued collection 'Tetzi005.system.indexes'
    2016-03-09T16:51:17.391+0100    finalizing intent manager with longest task first prioritizer
    2016-03-09T16:51:17.391+0100    dumping with 8 job threads
    2016-03-09T16:51:17.391+0100    starting dump routine with id=0
    2016-03-09T16:51:17.391+0100    starting dump routine with id=4
    2016-03-09T16:51:17.391+0100    starting dump routine with id=1
    2016-03-09T16:51:17.391+0100    writing Tetzi005.DailyEmailUser to dbbackup/dump/Tetzi005/xxxxxxx.bson
    2016-03-09T16:51:17.391+0100    starting dump routine with id=3
    2016-03-09T16:51:17.391+0100    starting dump routine with id=6
    2016-03-09T16:51:17.391+0100    writing Tetzi005.Prototype to dbbackup/dump/Tetzi005/xxxxxxx.bson
    2016-03-09T16:51:17.392+0100    starting dump routine with id=7
    2016-03-09T16:51:17.392+0100    writing Tetzi005.ProfileUser to dbbackup/dump/Tetzi005/xxxxxxx.bson
    2016-03-09T16:51:17.392+0100    starting dump routine with id=2
    2016-03-09T16:51:17.392+0100    writing Tetzi005.OrganizationDataSet to dbbackup/dump/Tetzi005/xxxxxxxx.bson
    2016-03-09T16:51:17.392+0100    writing Tetzi005.DailyUserCount to dbbackup/dump/Tetzi005/xxxxxxxxxx.bson
    2016-03-09T16:51:17.392+0100    writing Tetzi005.DailyEmailOrganization to dbbackup/dump/Tetzi005/xxxxxxxxxxxxx.bson
    2016-03-09T16:51:17.392+0100    starting dump routine with id=5
    2016-03-09T16:51:17.392+0100    writing Tetzi005.OrganizationStatistics to dbbackup/dump/Tetzi005/xxxxxxxxxxx.bson
    2016-03-09T16:51:17.392+0100    writing Tetzi005.Organization to dbbackup/dump/Tetzi005/xxxx.bson
    2016-03-09T16:51:17.398+0100        counted 112 documents in Tetzi005.xxxxxxxxxxxxx
    2016-03-09T16:51:17.398+0100        counted 475 documents in Tetzi005.xxxxxxxx
    2016-03-09T16:51:17.405+0100    Failed: error reading from db: EOF
    

Googling for "Failed: error reading from db: EOF" turned up no solutions.

We have this problem only with the large plan. Technically the configurations don't differ (except for memory, disk, and maxConns). All MongoDB servers run in Docker containers; Docker runs on OpenStack VMs with RHEL 7.

    small          Maximum 10 concurrent connections, 1GB storage, 256MB memory   paid   
    medium         Maximum 15 concurrent connections, 8GB storage, 1GB memory     paid   
    large          Maximum 25 concurrent connections, 16GB storage, 4GB memory    paid   
    
  • iosdev33
    iosdev33 almost 8 years
Sorry for the delay in responding. In the MongoDB logs it was complaining about memory; adding more memory to my virtual machine fixed the issue.
  • Robert
    Robert about 7 years
    Will that reduce the memory usage of mongodump or of the mongo database itself? If it only affects mongodump then it will be of no use if mongodump uses a remote database...
  • Flask
    Flask about 7 years
Funny stuff, it's the memory usage of mongodump!