How to fix a TensorFlow model not running in Eager mode with .fit()?
It took a while to find a solution that works for me in tensorflow==2.0.0, so I wanted to share it here in case it helps other people too:
model.compile(run_eagerly=True)
If that doesn't work, you can try forcing it after the model has been compiled:
model.compile()
model.run_eagerly = True
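Putting the pieces together, here is a minimal, self-contained sketch of the fix (the toy model, optimizer, loss, and dummy data below are illustrative placeholders, not taken from the question):

import numpy as np
import tensorflow as tf

# A toy subclassed model whose call() reports whether it is executing
# eagerly, mirroring the print(tf.executing_eagerly()) check in the question.
class TinyModel(tf.keras.Model):
    def __init__(self):
        super(TinyModel, self).__init__()
        self.dense = tf.keras.layers.Dense(2)

    def call(self, inputs):
        print(tf.executing_eagerly())  # True once run_eagerly is in effect
        _ = inputs.numpy()             # .numpy() only works on eager tensors
        return tf.nn.softmax(self.dense(inputs))

model = TinyModel()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              run_eagerly=True)        # the fix

# Dummy data, just enough to drive a single training step.
x = np.random.rand(8, 4).astype('float32')
y = np.random.randint(0, 2, size=(8,))
model.fit(x, y, epochs=1, verbose=0)   # call() now prints True

TF 2.0 also ships a global switch, tf.config.experimental_run_functions_eagerly(True) (renamed tf.config.run_functions_eagerly in later releases), which forces code wrapped in tf.function to execute eagerly.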
ryan651
Updated on July 20, 2022

Comments
- ryan651, almost 2 years ago
I'm trying to run a basic CNN Keras model in Eager Execution, but TensorFlow refuses to treat the model as eager. I originally attempted this on the stable 1.13 branch (the latest at the time), making sure to enable eager execution, with no result. I then upgraded to 2.0 (latest), but again nothing.
Model
class CNN2(tf.keras.Model):
    def __init__(self, num_classes=7):
        super(CNN2, self).__init__()
        self.cnn1 = tf.keras.layers.Conv2D(32, (5,5), padding='same', strides=(2, 2), kernel_initializer='he_normal')
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.cnn2 = tf.keras.layers.Conv2D(64, (5,5), padding='same', strides=(2, 2), kernel_initializer='he_normal')
        self.cnn3 = tf.keras.layers.Conv2D(128, (5,5), padding='same', strides=(2, 2), kernel_initializer='he_normal')
        self.bn2 = tf.keras.layers.BatchNormalization()
        self.pool = tf.keras.layers.MaxPooling2D((2,2))
        self.dnn1 = tf.keras.layers.Dense(128)
        self.dropout1 = tf.keras.layers.Dropout(0.45)
        self.flatten = tf.keras.layers.Flatten()
        self.dnn2 = tf.keras.layers.Dense(512)
        self.dnn3 = tf.keras.layers.Dense(256)
        self.classifier = tf.keras.layers.Dense(num_classes)

    def simpleLoop(self, inputs, x):
        #x_Numpy = x.numpy()
        for i, input in inputs:
            print("{0} - {1}".format(i, len(input)))

    def call(self, inputs, training=None, mask=None):
        print(tf.executing_eagerly())
        x = tf.nn.leaky_relu(self.cnn1(inputs))
        x = self.bn1(x)
        x = self.pool(x)
        x = tf.nn.leaky_relu(x)
        x = tf.nn.leaky_relu(self.bn2(self.cnn2(x)))
        x = self.pool(x)
        x = self.dropout1(tf.nn.leaky_relu(self.cnn3(x)))
        x = self.flatten(x)
        self.simpleLoop(inputs, x)
        x = self.dropout1(self.dnn1(x))
        x = self.dropout1(self.dnn2(x))
        x = self.dropout1(self.dnn3(x))
        output = self.classifier(x)
        #with tf.device('/cpu:0'):
        output = tf.nn.softmax(output)
        return output
Parameter Setting
batch_size = 50
epochs = 150
num_classes = 7
Checking Eager Execution and the Version
print(tf.executing_eagerly())
print(tf.__version__)

>>True
>>2.0.0-alpha0
Running the Model
modelE = CNN2(num_classes)
modelE.run_eagerly = True
print(modelE.run_eagerly)
#model = CNN2(num_classes)

modelE.compile(optimizer=tf.optimizers.Adam(0.00008), loss='categorical_crossentropy',
               metrics=['accuracy'], run_eagerly=True)

# TF Keras tries to use entire dataset to determine shape without this step when using .fit()
# Fix = Use exactly one sample from the provided input dataset to determine input/output shape/s for the model
dummy_x = tf.zeros((1, size, size, 1))
modelE._set_inputs(dummy_x)

# Train
hist = modelE.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
                  validation_data=(x_test, y_test), verbose=1)

# Evaluate on test set
scores = modelE.evaluate(x_test, y_test, batch_size, verbose=1)
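As an aside, the same shape-building effect can usually be had without the private _set_inputs() API, by calling the model once on the dummy batch (a sketch, reusing dummy_x from the snippet above):

# Alternative sketch: calling the model once on the dummy batch also
# builds its layers, without the private _set_inputs() helper.
_ = modelE(dummy_x)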
Running this results in the error
AttributeError: 'Tensor' object has no attribute 'numpy'
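For context, a symbolic graph tensor simply has no numpy() method, which is why tracing the model as a graph produces exactly this error. A minimal reproduction (a sketch, assuming TF 2.x):

import tensorflow as tf

# Inside a traced tf.function, tensors are symbolic graph tensors,
# which do not have a .numpy() method.
@tf.function
def traced(x):
    return x.numpy()  # raises the AttributeError shown above

traced(tf.constant(1.0))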
And when I remove the offending line
x.numpy()
I instead get this error: TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.
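For reference, the tf.map_fn alternative that the error message points to looks roughly like this (a sketch; the tensor and the mapped function are placeholders standing in for simpleLoop's inputs):

import tensorflow as tf

# Graph-safe replacement for a Python for-loop over a batch: map a
# function over the first dimension of the tensor instead.
batch = tf.zeros((4, 8))  # placeholder standing in for `inputs`
row_sizes = tf.map_fn(lambda row: tf.size(row), batch, dtype=tf.int32)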
It also prints False for the
print(tf.executing_eagerly())
located within the def call()
method of the model.
How can the model be forced into eager mode rather than into a graph? Again, I tried this on both up-to-date 1.13 and 2.0. Is this a bug?