NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array


Solution 1

I found the solution to this problem:

It was because I mixed symbolic tensors with a non-symbolic type, such as a NumPy array. For example, it is NOT recommended to have something like this:

import numpy as np
from tensorflow.keras import backend as K

def my_mse_loss_b(b):
    def mseb(y_true, y_pred):
        ...
        a = np.ones_like(y_true)  # a NumPy array here is not recommended
        return K.mean(K.square(y_pred - y_true)) + a
    return mseb

Instead, you should convert everything to symbolic tensors, like this:

from tensorflow.keras import backend as K

def my_mse_loss_b(b):
    def mseb(y_true, y_pred):
        ...
        a = K.ones_like(y_true)  # use the Keras backend instead so everything stays symbolic
        return K.mean(K.square(y_pred - y_true)) + a
    return mseb
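
For reference, such a wrapped loss is passed to compile as a callable; a minimal sketch, where the toy Sequential model and the value of b are my assumptions, not part of the original code:

import tensorflow as tf

# Toy regression model, only to show how the custom loss is wired in
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Pass the inner closure returned by my_mse_loss_b directly to compile
model.compile(optimizer="adam", loss=my_mse_loss_b(0.5))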

Hope this helps!

Solution 2

For me, the issue occurred when upgrading from NumPy 1.19 to 1.20 while using Ray's RLlib, which uses TensorFlow 2.2 internally. Simply downgrading with

pip install numpy==1.19.5

solved the problem; the error did not occur anymore.

Update (comment by @codeananda): You can now also update to a newer TensorFlow version (2.6+), which resolves the problem (pip install -U tensorflow).
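
If you are unsure which versions you have installed, a quick check (just a sketch for diagnosis, not part of either fix):

import numpy as np
import tensorflow as tf

# Decide whether to pin numpy==1.19.5 or move to TensorFlow 2.6+
print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)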

Solution 3

I faced the same error when I tried passing my input layer to the data augmentation Sequential layer. The error and my code are shown below.
Error:
NotImplementedError: Cannot convert a symbolic Tensor (data_augmentation/random_rotation_5/rotation_matrix/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.

My code that generated the error:


import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers.experimental import preprocessing

# Create a data augmentation stage as a Sequential model: horizontal flipping, rotations, zoom, etc.
data_augmentation = Sequential([
    preprocessing.RandomFlip("horizontal"),
    preprocessing.RandomRotation(0.2),
    preprocessing.RandomZoom(0.2),
    preprocessing.RandomHeight(0.2),
    preprocessing.RandomWidth(0.2)
    # preprocessing.Rescaling(1./255)
], name="data_augmentation")

# Set up the input shape and base model, and freeze the underlying base model layers.
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False

# Create the input layer
inputs = tf.keras.Input(shape=input_shape, name="input_layer")

# Add the data augmentation Sequential model as a layer
x = data_augmentation(inputs)  # This is the line that generated the error.
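
Once the error is resolved (see the solutions below), this augmentation stage plugs into the rest of a standard functional model. A minimal sketch of such a continuation, where the pooling layer, output layer, and class count are my assumptions rather than part of the original code:

x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D(name="global_average_pooling")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax", name="output_layer")(x)  # 10 classes assumed
model = tf.keras.Model(inputs, outputs)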

My solution to the generated error:
Solution 1:
I was running a lower TensorFlow version, 2.4.0, so I uninstalled it and reinstalled it to get a higher version, 2.6.0. The newer TensorFlow version automatically uninstalls and reinstalls a compatible NumPy version (1.19.5) if NumPy is already installed on your local machine, which automatically solves the bug. Enter the commands below in the terminal of your current conda environment:

pip uninstall tensorflow
pip install tensorflow

Solution 2:
This is the simplest of all the suggested solutions, I guess: run your code in Google Colab instead of on your local machine. Colab will always have the latest packages preinstalled.

Solution 4

I tried to add a SimpleRNN layer to my model and I received a similar error (NotImplementedError: Cannot convert a symbolic Tensor (SimpleRNN-1/strided_slice:0) to a numpy array) with Python 3.9.5.

When I created another environment with Python 3.8.10 and all the other modules I needed, the issue was solved.
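
A minimal sketch of creating such an environment, assuming conda is available (the environment name is arbitrary):

conda create -n tf-py38 python=3.8.10
conda activate tf-py38
pip install tensorflow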


Comments

  • T D Nguyen (over 2 years ago)

    I am trying to pass 2 loss functions to a model, as Keras allows that.

    loss: String (name of objective function) or objective function or Loss instance. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
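
    As an illustration of that documentation, a toy two-output model (made up here, not the model from my code) with one loss per output:

    inp = tf.keras.Input(shape=(16,))
    out_a = tf.keras.layers.Dense(1, name="a")(inp)
    out_b = tf.keras.layers.Dense(1, name="b")(inp)
    toy = tf.keras.Model(inp, [out_a, out_b])

    # One loss per output; the total loss minimized is the sum of the two
    toy.compile(optimizer="adam", loss=["mse", "mae"])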

    The two loss functions:

    def l_2nd(beta):
        def loss_2nd(y_true, y_pred):
            ...
            return K.mean(t)
    
        return loss_2nd
    

    and

    def l_1st(alpha):
        def loss_1st(y_true, y_pred):
            ...
            return alpha * 2 * tf.linalg.trace(tf.matmul(tf.matmul(Y, L, transpose_a=True), Y)) / batch_size
    
        return loss_1st
    

    Then I build the model:

    l2 = K.eval(l_2nd(self.beta))
    l1 = K.eval(l_1st(self.alpha))
    self.model.compile(opt, [l2, l1])
    

    When I train, it produces the error:

    1.15.0-rc3 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630:
    calling BaseResourceVariable.__init__ (from
    tensorflow.python.ops.resource_variable_ops) with constraint is
    deprecated and will be removed in a future version. Instructions for
    updating: If using Keras pass *_constraint arguments to layers.
    --------------------------------------------------------------------------- 
    NotImplementedError                       Traceback (most recent call
    last) <ipython-input-20-298384dd95ab> in <module>()
         47                          create_using=nx.DiGraph(), nodetype=None, data=[('weight', int)])
         48 
    ---> 49     model = SDNE(G, hidden_size=[256, 128],)
         50     model.train(batch_size=100, epochs=40, verbose=2)
         51     embeddings = model.get_embeddings()
    
    10 frames <ipython-input-19-df29e9865105> in __init__(self, graph,
    hidden_size, alpha, beta, nu1, nu2)
         72         self.A, self.L = self._create_A_L(
         73             self.graph, self.node2idx)  # Adj Matrix,L Matrix
    ---> 74         self.reset_model()
         75         self.inputs = [self.A, self.L]
         76         self._embeddings = {}
    
    <ipython-input-19-df29e9865105> in reset_model(self, opt)
    
    ---> 84         self.model.compile(opt, loss=[l2, l1])
         85         self.get_embeddings()
         86 
    
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py
    in _method_wrapper(self, *args, **kwargs)
        455     self._self_setattr_tracking = False  # pylint: disable=protected-access
        456     try:
    --> 457       result = method(self, *args, **kwargs)
        458     finally:
        459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access
    
    NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0)
    to a numpy array.
    

    Please help, thanks!