Why does Keras LSTM batch size used for prediction have to be the same as fitting batch size?


Solution 1

Unfortunately, what you want to do is impossible with Keras ... I also struggled for a long time with this problem, and the only way is to dive down the rabbit hole and work with TensorFlow directly to do rolling LSTM prediction.

First, to be clear on terminology: batch_size usually means the number of sequences that are trained together, and num_steps means how many time steps are trained together. When you say batch_size=1 and "just predicting the next value", I think you mean predicting with num_steps=1.

Otherwise, it should be possible to train and predict with batch_size=50, meaning you train on 50 sequences and make 50 predictions every time step, one for each sequence (with training/prediction num_steps=1).
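In shape terms (a minimal illustration; the sizes 50, 1 and 2 are placeholders, not values from your model):

import numpy as np

# an RNN input batch is a 3D tensor: (batch_size, num_steps, num_features)
batch = np.zeros((50, 1, 2))  # 50 sequences, 1 time step each, 2 features per step
print(batch.shape)  # (50, 1, 2)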

However, I think what you mean is that you want to use a stateful LSTM to train with num_steps=50 and do prediction with num_steps=1. Theoretically this makes sense and should be possible, and it is possible with TensorFlow, just not with Keras.

The problem: Keras requires an explicit batch size for stateful RNNs. You must specify batch_input_shape=(batch_size, num_steps, features).

The reason: Keras must allocate a fixed-size hidden-state vector in the computation graph with shape (batch_size, num_units) in order to persist the values between training batches. On the other hand, when stateful=False, the hidden-state vector can be initialized dynamically with zeros at the beginning of each batch, so it does not need to be a fixed size. More details here: http://philipperemy.github.io/keras-stateful-lstm/
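To make the contrast concrete, here is a minimal sketch (the sizes 10, 50, 20 and 2 are placeholders):

from keras.models import Sequential
from keras.layers import LSTM

# stateful: the hidden state persists across batches, so Keras must know the
# batch size up front to allocate a fixed (batch_size, num_units) state tensor
stateful_model = Sequential()
stateful_model.add(LSTM(10, batch_input_shape=(50, 20, 2), stateful=True))

# stateless: the state is re-initialized with zeros for every batch, so the
# batch dimension can stay unspecified
stateless_model = Sequential()
stateless_model.add(LSTM(10, input_shape=(20, 2)))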

Possible workaround: train and predict with num_steps=1. Example: https://github.com/keras-team/keras/blob/master/examples/lstm_stateful.py. This might or might not work at all for your problem, as the gradient for backpropagation will be computed on only one time step. See: https://github.com/fchollet/keras/issues/3669
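A minimal sketch of that workaround, assuming hypothetical data X shaped (n_samples, 1, n_features) and targets y (the layer sizes and loop bounds are placeholders):

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(10, batch_input_shape=(1, 1, n_features), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

for _ in range(n_epochs):
    # shuffle=False preserves the time order the stateful LSTM depends on
    model.fit(X, y, epochs=1, batch_size=1, shuffle=False, verbose=0)
    model.reset_states()  # clear the carried hidden state between passes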

My solution: use TensorFlow. In TensorFlow you can train with batch_size=50, num_steps=100, then do predictions with batch_size=1, num_steps=1. This is possible by creating different model graphs for training and prediction that share the same RNN weight matrices. See this example for next-character prediction: https://github.com/sherjilozair/char-rnn-tensorflow/blob/master/model.py#L11 and the blog post http://karpathy.github.io/2015/05/21/rnn-effectiveness/. Note that one graph can still only work with one specified batch_size, but you can set up multiple model graphs sharing weights in TensorFlow.
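A rough sketch of that weight-sharing pattern with TF1-era APIs (the sizes are placeholders, and this is not the linked example's exact code):

import tensorflow as tf

def build_graph(batch_size, num_steps, num_features, num_units, reuse):
    # both graphs create their variables inside the same scope, so
    # reuse=True makes the second graph share the first one's LSTM weights
    with tf.variable_scope('lstm_model', reuse=reuse):
        inputs = tf.placeholder(tf.float32, [batch_size, num_steps, num_features])
        cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
        initial_state = cell.zero_state(batch_size, tf.float32)
        outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
    return inputs, outputs, initial_state, final_state

train_graph = build_graph(50, 100, 2, 10, reuse=None)  # training: 50 sequences x 100 steps
predict_graph = build_graph(1, 1, 2, 10, reuse=True)   # prediction: 1 sequence x 1 step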

Solution 2

Sadly, what you wish for is impossible, because you specify the batch_size when you define the model... However, I found a simple way around this problem: create two models! The first is used for training and the second for predictions, and they share weights:

train_model = Sequential([Input(batch_input_shape=(batch_size, ...)),
                          <continue specifying your model>])

predict_model = Sequential([Input(batch_input_shape=(1, ...)),
                            <continue specifying the exact same model>])

train_model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
predict_model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())

Now you can use any batch size you want. After you fit your train_model, just save its weights and load them with the predict_model:

train_model.save_weights('lstm_model.h5')
predict_model.load_weights('lstm_model.h5')

Notice that you only want to save and load the weights, not the whole model (which includes the architecture, optimizer, etc.). This way you get the trained weights, but you can feed in one batch at a time. More on Keras save/load models: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model

Notice that you need to install h5py to use save_weights.

Solution 3

Another easy workaround is:

from keras.models import Sequential
from keras.layers import LSTM, Dense

def create_model(batch_size):
    # identical architecture; only the fixed batch dimension changes
    model = Sequential()
    model.add(LSTM(1, batch_input_shape=(batch_size, 1, sl), stateful=True))
    model.add(Dense(1))
    return model

model_train = create_model(batch_size=50)

model_train.compile(loss='mean_squared_error', optimizer='adam')
model_train.fit(trainX, trainY, epochs=epochs, batch_size=batch_size)

model_predict = create_model(batch_size=1)

# copy the trained weights into the batch-size-1 model
weights = model_train.get_weights()
model_predict.set_weights(weights)
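With the weights copied over, a single next-step forecast might look like this (a sketch; next_x stands in for whatever latest window you feed the model):

# one-step-ahead prediction with the batch-size-1 model
next_x = trainX[-1].reshape(1, 1, sl)  # hypothetical most recent window
next_y = model_predict.predict(next_x, batch_size=1)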

Solution 4

The best solution to this problem is to copy the weights. This can be really helpful if you want to train and predict with your LSTM model using different batch sizes.

For example, suppose you have trained your model with a batch size of 'n', as shown below:

# configure network
n_batch = len(X)
n_epoch = 1000
n_neurons = 10
# design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
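The training call itself is not shown here; following the stateful pattern in the tutorial linked below, it would plausibly look like this (X and y are assumed to be prepared elsewhere):

# fit the network: keep the sample order and reset state between epochs
for _ in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=0, shuffle=False)
    model.reset_states()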

Now suppose you want to predict on fewer values than your batch size, e.g. n=1.

What you can do is copy the weights of your fitted model, reinitialize a new LSTM model with the same architecture, and set its batch size to 1.

# re-define the batch size
n_batch = 1
# re-define model
new_model = Sequential()
new_model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
new_model.add(Dense(1))
# copy weights
old_weights = model.get_weights()
new_model.set_weights(old_weights)

Now you can easily predict and train LSTMs with different batch sizes.
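For example, a one-step forecast with the copied model could look like this (x_input is a hypothetical latest window, not code from the original answer):

# predict a single sample with the batch-size-1 model
x_input = X[-1].reshape(1, X.shape[1], X.shape[2])
yhat = new_model.predict(x_input, batch_size=n_batch)  # n_batch is 1 here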

For more information please read: https://machinelearningmastery.com/use-different-batch-sizes-training-predicting-python-keras/

Solution 5

I had the same problem and resolved it.

Alternatively, you can save your weights during training; when you test your results, you can reload the model with the same architecture and set batch_size=1, as below:

n_batch = 1
n_neurons = 10
# design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.load_weights("w.h5")
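Note that this assumes the weights were written out after training; on the training side that is just:

# after fitting the training-side model (whatever its batch size), save only the weights
model.save_weights("w.h5")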

It should work well. I hope it helps.


Comments

  • DanielSon over 2 years

    When using a Keras LSTM to predict on time series data I've been getting errors when I'm trying to train the model using a batch size of 50, while then trying to predict on the same model using a batch size of 1 (ie just predicting the next value).

    Why am I not able to train and fit the model with multiple batches at once, and then use that model to predict for anything other than the same batch size? It doesn't seem to make sense, but then I could easily be missing something about this.

    Edit: this is the model. batch_size is 50; sl is the sequence length, currently set to 20.

        model = Sequential()
        model.add(LSTM(1, batch_input_shape=(batch_size, 1, sl), stateful=True))
        model.add(Dense(1))
        model.compile(loss='mean_squared_error', optimizer='adam')
        model.fit(trainX, trainY, epochs=epochs, batch_size=batch_size, verbose=2)
    

    here is the line for predicting on the training set for RMSE

        # make predictions
        trainPredict = model.predict(trainX, batch_size=batch_size)
    

    here is the actual prediction of unseen time steps

    for i in range(test_len):
        print('Prediction %s: ' % str(pred_count))
    
        next_pred_res = np.reshape(next_pred, (next_pred.shape[1], 1, next_pred.shape[0]))
        # make predictions
        forecastPredict = model.predict(next_pred_res, batch_size=1)
        forecastPredictInv = scaler.inverse_transform(forecastPredict)
        forecasts.append(forecastPredictInv)
        next_pred = next_pred[1:]
        next_pred = np.concatenate([next_pred, forecastPredict])
    
        pred_count += 1
    

    The issue is with this line:

    forecastPredict = model.predict(next_pred_res, batch_size=batch_size)

    The error when batch_size here is set to 1 is:

    ValueError: Cannot feed value of shape (1, 1, 2) for Tensor 'lstm_1_input:0', which has shape '(10, 1, 2)'. This is the same error that is thrown when batch_size here is set to 50, or any other batch size.

    The total error is:

        forecastPredict = model.predict(next_pred_res, batch_size=1)
      File "/home/entelechy/tf_keras/lib/python3.5/site-packages/keras/models.py", line 899, in predict
        return self.model.predict(x, batch_size=batch_size, verbose=verbose)
      File "/home/entelechy/tf_keras/lib/python3.5/site-packages/keras/engine/training.py", line 1573, in predict
        batch_size=batch_size, verbose=verbose)
       File "/home/entelechy/tf_keras/lib/python3.5/site-packages/keras/engine/training.py", line 1203, in _predict_loop
        batch_outs = f(ins_batch)
      File "/home/entelechy/tf_keras/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2103, in __call__
        feed_dict=feed_dict)
      File "/home/entelechy/tf_keras/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 767, in run
        run_metadata_ptr)
      File "/home/entelechy/tf_keras/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 944, in _run
        % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
    ValueError: Cannot feed value of shape (1, 1, 2) for Tensor 'lstm_1_input:0', which has shape '(10, 1, 2)'
    

    Edit: once I set the model to stateful=False, I am able to use different batch sizes for fitting/training and prediction. What is the reason for this?

  • DanielSon about 7 years
    Hey, thanks for a really good reply. Can you explain the difference between batch_size and num_steps again? I've never actually used or seen num_steps, and thought batch_size was just how many windows are trained on at the same time. What is the difference between a sequence and a time step?
  • Hai-Anh Trinh about 7 years
    For RNN models, the inputs are usually 3D tensors (batch_size, num_steps, num_features), meaning you train on multiple sequences in the same batch; each sequence has length num_steps, and each time step has num_features.
  • DanielSon almost 7 years
    Oren I'm going to try that out, looks like a great solution!
  • NPE almost 7 years
    Thank you for this.
  • Jeremy almost 7 years
    Didn't work for me. ValueError: Tensor("Placeholder:0", shape=(4, 24), dtype=float32) must be from the same graph as Tensor("l1_1/kernel:0", shape=(4, 24), dtype=float32_ref).
  • Tomasz Sętkowski over 6 years
    Here is a complete short example of how to restore a TensorFlow model with LSTM cells when using a different batch_size and num_steps than it was trained with.
  • Voy over 4 years
    What makes you think the author meant num_steps and not batch_size? Whilst your answer is related and somewhat useful, I think you're making a wrong assumption. If I'm not misreading something, they clearly speak about varying the batch_size, not num_steps (which they refer to as sl). Other answers seem to provide better solutions to the specific problem the author describes. Still, thanks for putting in the effort to write such a detailed answer!
  • Admin over 3 years
    @Hai-AnhTrinh Do you have a link to a blog post with an implementation of an RNN using the TensorFlow computation graph?