TensorFlow CPU memory problem (allocation exceeds 10% of system memory)


In my experience, a common cause of this type of issue is using a reasonable batch size during training but then trying to use a much larger batch size (often the whole dataset at once) when evaluating.

I have found myself doing this sort of thing in error:

model.fit(x_train, y_train, epochs=5, batch_size=10)
model.evaluate(x_test, y_test)

whereas we really need to do this:

model.fit(x_train, y_train, epochs=5, batch_size=10)
model.evaluate(x_test, y_test, batch_size=10)
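
For reference, here is a minimal, self-contained sketch of the same fix. The synthetic data, the toy Dense model, and the use of tf.keras are assumptions of mine so the snippet runs end to end; the only point that matters is passing an explicit batch_size to both fit and evaluate:

import numpy as np
from tensorflow import keras

# Synthetic data standing in for the real dataset (assumed shapes).
x_train, y_train = np.random.rand(1000, 20), np.random.randint(2, size=1000)
x_test, y_test = np.random.rand(200, 20), np.random.randint(2, size=200)

# Toy model; the real model can be anything.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# An explicit batch_size in both calls keeps evaluation from trying to
# push a very large batch through memory at once.
model.fit(x_train, y_train, epochs=5, batch_size=10)
loss, acc = model.evaluate(x_test, y_test, batch_size=10)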

Comments

  • Juan almost 2 years

    I created a program in Python using Keras/TensorFlow. I don't have any problems creating my data or training my model. However, I get the following error when I try to evaluate the model:

    Using TensorFlow backend.
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4213: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
    2018-12-05 19:20:44.932780: W tensorflow/core/framework/allocator.cc:122] Allocation of 3359939800 exceeds 10% of system memory.
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    Abandon (core dumped)
    

    It seems to be a memory allocation problem. I reduced the size of my model and made all of the parameters smaller, but nothing changed. I don't know how to solve this issue.