expected ndim=3, found ndim=2
An LSTM layer expects its inputs to have shape (batch_size, timesteps, input_dim). In Keras you pass (timesteps, input_dim) as the input_shape argument, but you are passing input_shape=(9,), which does not include the timesteps dimension. The problem can be solved by adding an extra time dimension to the input; e.g. adding a dimension of length 1 is a simple solution. For this you have to reshape the input dataset (X_train) and the targets (y_train) accordingly. But this might be problematic: the time resolution becomes 1 and you are feeding length-one sequences, and with length-one sequences as input, an LSTM does not seem like the right choice.
x_train = x_train.reshape(-1, 1, 9)
x_test = x_test.reshape(-1, 1, 9)
y_train = y_train.reshape(-1, 1, 5)
y_test = y_test.reshape(-1, 1, 5)
model = Sequential()
model.add(LSTM(100, input_shape=(1, 9), return_sequences=True))
model.add(LSTM(5, return_sequences=True))  # input_shape is only needed on the first layer
model.compile(loss="mean_absolute_error", optimizer="adam", metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))
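As a quick sanity check of the reshape step alone, the snippet below uses random arrays with the shapes reported in the question (59010 samples, 9 features, 5 targets) in place of the real dataset, and confirms the arrays end up with the (batch_size, timesteps, features) layout the LSTM expects:

```python
import numpy as np

# Dummy data standing in for the real dataset; only the shapes matter here.
x_train = np.random.rand(59010, 9)
y_train = np.random.rand(59010, 5)

# Insert a length-1 time axis so each sample becomes a one-step sequence.
x_train = x_train.reshape(-1, 1, 9)
y_train = y_train.reshape(-1, 1, 5)

print(x_train.shape)  # (59010, 1, 9)
print(y_train.shape)  # (59010, 1, 5)
```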
mht
Updated on July 12, 2022

Comments
-
mht almost 2 years
I'm new to Keras and I'm trying to implement a sequence-to-sequence LSTM. In particular, I have a dataset with 9 features and I want to predict 5 continuous values.
I split the data into training and test sets, whose shapes are respectively:
X_train: (59010, 9), X_test: (25291, 9), y_train: (59010, 5), y_test: (25291, 5)
The LSTM is extremely simple at the moment:
model = Sequential()
model.add(LSTM(100, input_shape=(9,), return_sequences=True))
model.compile(loss="mean_absolute_error", optimizer="adam", metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))
But I have the following error:
ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2
Can anyone help me?
-
Avv almost 3 years
Thank you very much. I did the exact same thing, but I still get the same issue. Any help, please?
-
Avv almost 3 years
Why does adding this solve my issue? I had data of shape (804, 291) for both X and y, since I want to build an LSTM autoencoder. I just added `X = tf.convert_to_tensor(scaled_features.to_numpy().reshape(-1, 804, 291), np.float32)` and `y = tf.convert_to_tensor(scaled_features.to_numpy().reshape(-1, 804, 291), np.float32)` and now it works. Could you please tell me why?
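A likely explanation, sketched below with a random stand-in for the commenter's scaled feature matrix: reshape(-1, 804, 291) does not change the data at all, it just adds a leading batch axis of size 1, so the LSTM sees a single sequence of 804 timesteps with 291 features each, which satisfies the expected ndim=3:

```python
import numpy as np

# Stand-in for the commenter's (804, 291) scaled feature matrix;
# the values are irrelevant, only the shape matters.
scaled_features = np.random.rand(804, 291)

# The -1 is resolved to 1: one batch containing the whole matrix
# as a single 804-step sequence of 291-dimensional vectors.
X = scaled_features.reshape(-1, 804, 291)
print(X.shape)  # (1, 804, 291)
```

Whether treating the entire dataset as one long sequence is what the autoencoder should actually learn from is a separate modelling question.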
-
Avv almost 3 years
I noticed I get a NaN loss with the adam optimizer during training. Any idea why that happens, please?
-
Mitiku almost 3 years
Which loss are you using?
-
Avv almost 3 years
Thank you for replying. I fixed the NaN and infinity loss by removing 0s and 1s, but that might affect the credibility of my results! I tried regularization, l2, clipnorm and others, but I still get a loss around 9.06. Any suggestions, please?
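A common first step when the loss goes NaN is to verify the inputs themselves are finite before blaming the optimizer. The helper below is a hypothetical sketch (the function name and the sample data are not from the thread) that scans an array for NaN/Inf entries with plain NumPy:

```python
import numpy as np

def check_finite(name, arr):
    """Report NaN/Inf entries, a frequent cause of a NaN training loss."""
    arr = np.asarray(arr, dtype=np.float64)
    n_nan = int(np.isnan(arr).sum())
    n_inf = int(np.isinf(arr).sum())
    print(f"{name}: {n_nan} NaN, {n_inf} Inf values")
    return n_nan == 0 and n_inf == 0

# Hypothetical feature matrix with one bad entry, for illustration.
X = np.array([[0.1, 0.2], [np.nan, 0.4]])
print(check_finite("X", X))  # prints the counts, then False
```

If the data is clean, the next usual suspects are an exploding learning rate or a loss that takes log/division of values near zero.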