Converting Tensor to np.array using K.eval() in Keras returns InvalidArgumentError


The loss function is compiled with the model. At compile time, y_true and y_pred are only placeholder tensors; they do not yet hold any values and therefore cannot be evaluated. This is why you get the error message.
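To illustrate, evaluating a bare placeholder reproduces the same failure. This is a minimal sketch assuming the TensorFlow backend; the placeholder here only stands in for the y_true and y_pred tensors that Keras creates at compile time:

import keras.backend as K

x = K.placeholder(shape=(None, 1))  # plays the role of y_true / y_pred at compile time
K.eval(x)  # raises InvalidArgumentError: the placeholder has not been fed a value yet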

Your loss function should use Keras tensors, not the numpy arrays they evaluate to. If you need to use additional numpy arrays, convert them to tensors via the variable method of keras.backend (Keras Backend Documentation).
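For example, a constant numpy array can be wrapped with K.variable and then used freely inside backend operations. This is only a sketch; the penalty value and the loss itself are made up for illustration:

import numpy as np
import keras.backend as K

penalty = np.array([0.5], dtype='float32')  # hypothetical constant you want in the loss
penalty_t = K.variable(penalty)             # now a Keras tensor, usable by backend ops

def example_loss(y_true, y_pred):
    # stays entirely in Keras tensor space; no K.eval needed
    return penalty_t * K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)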

Edit:

You will still need to stay inside the Keras function space to make your loss work. If this is the concrete loss function that you want to implement, and assuming that your values are in {0,1}, you can try something like this:

import keras.backend as K

def custom_loss_function(y_true, y_pred):
    y_true = y_true * 2 - K.ones_like(y_true)  # re-codes values of y_true from {0,1} to {-1,+1}
    y_true = y_true * y_pred                   # zeroes out the values you are not interested in
    classification_score = K.abs(K.sum(y_true))
    return classification_score
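
This matches the counting logic from the edited question when the predictions are hard 0/1 values: entries where y_pred is 0 drop out, and the remaining +1/-1 values cancel out exactly like count(1) - count(0). As a quick sanity check with concrete values (assuming hard 0/1 predictions; K.constant is used only for this demo):

y_true_demo = K.constant([[1.], [0.], [1.], [0.]])
y_pred_demo = K.constant([[1.], [1.], [0.], [0.]])
print(K.eval(custom_loss_function(y_true_demo, y_pred_demo)))  # 0.0: one actual 0 and one actual 1 were classified as 1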

Comments

  • Milind Dalvi, almost 2 years

    I am trying to define a custom loss function in Keras. The code is as follows:

    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.callbacks import EarlyStopping
    from keras.optimizers import Adam
    
    def custom_loss_function(y_true, y_pred):
        a_numpy_y_true_array = K.eval(y_true)
        a_numpy_y_pred_array = K.eval(y_pred)
    
        # some million dollar worth custom loss that needs numpy arrays to be added here...
    
        return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
    
    
    def build_model():
        model= Sequential()
        model.add(Dense(16, input_shape=(701, ), activation='relu'))
        model.add(Dense(16, activation='relu'))
        model.add(Dense(1, activation='sigmoid'))
        model.compile(loss=custom_loss_function, optimizer=Adam(lr=0.005), metrics=['accuracy'])  
        return model
    
    model = build_model()
    early_stop = EarlyStopping(monitor="val_loss", patience=1) 
    model.fit(kpca_X, y, epochs=50, validation_split=0.2, callbacks=[early_stop], verbose=False)
    

    The above code returns the following error:

    ---------------------------------------------------------------------------
    InvalidArgumentError                      Traceback (most recent call last)
    D:\milind.dalvi\personal\_python\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
       1326     try:
    -> 1327       return fn(*args)
       1328     except errors.OpError as e:
    
    D:\milind.dalvi\personal\_python\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
       1305                                    feed_dict, fetch_list, target_list,
    -> 1306                                    status, run_metadata)
       1307 
    
    D:\milind.dalvi\personal\_python\Anaconda3\lib\contextlib.py in __exit__(self, type, value, traceback)
         88             try:
    ---> 89                 next(self.gen)
         90             except StopIteration:
    
    D:\milind.dalvi\personal\_python\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
        465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
    --> 466           pywrap_tensorflow.TF_GetCode(status))
        467   finally:
    
    InvalidArgumentError: You must feed a value for placeholder tensor 'dense_84_target' with dtype float and shape [?,?]
         [[Node: dense_84_target = Placeholder[dtype=DT_FLOAT, shape=[?,?], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    

    So does anybody know how we could convert y_true and y_pred, which are Tensor("dense_84_target:0", shape=(?, ?), dtype=float32), into numpy arrays?

    EDIT: --------------------------------------------------------

    Basically, what I expect to write in the loss function is something like the following:

    def custom_loss_function(y_true, y_pred):
    
        classifieds = []
        for actual, predicted in zip(y_true, y_pred):
            if predicted == 1:
                classifieds.append(actual)
        classification_score = abs(classifieds.count(0) - classifieds.count(1))
    
        return SOME_MAGIC_FUNCTION_TO_CONVERT_INT_TO_TENSOR(classification_score)
    
  • Milind Dalvi, about 6 years
    Your answer is definitely helpful and hence I would upvote it, but I am not looking for external numpy array to K.variable conversion. I have updated the EDIT to clarify what I am seeking!