tensorflow InvalidArgumentError: You must feed a value for placeholder tensor with dtype float


Solution 1

From your error message, the name of the missing placeholder—'Placeholder_54'—is suspicious, because that suggests that at least 54 placeholders have been created in the current interpreter session.

There aren't enough details to say for sure, but I have some suspicions. Are you running the same code multiple times in the same interpreter session (e.g. using IPython/Jupyter or the Python shell)? Assuming that is the case, I suspect that your cost tensor depends on placeholders that were created in a previous execution of that code.

Indeed, your code creates two tf.placeholder() tensors x and y after building the rest of the model, so it seems likely that either:

  1. The missing placeholder was created in a previous execution of this code, or

  2. The input() function calls tf.placeholder() internally, and it is these placeholders (perhaps the tensors X and Y?) that you should be feeding; a sketch addressing both cases follows below.
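
In either case the remedy is the same: build the graph and its placeholders once, and feed exactly the placeholders the cost depends on. Below is a minimal sketch of what that might look like, assuming input() just returns NumPy arrays, using dummy data as stand-ins for the CSV, and an explicit softmax in place of the question's model() helper:

    import numpy as np
    import tensorflow as tf

    # Start from a clean graph so placeholders left over from earlier runs of the
    # same cell (e.g. 'Placeholder_54') cannot be pulled in by the cost computation.
    tf.reset_default_graph()

    # Create the placeholders *before* the model, so the cost actually depends on them.
    X = tf.placeholder(tf.float32, [None, 30], name="features")
    Y = tf.placeholder(tf.float32, [None, 16], name="labels")

    W = tf.Variable(tf.zeros([30, 16]))
    b = tf.Variable(tf.zeros([16]))
    pred = tf.nn.softmax(tf.matmul(X, W) + b)  # stand-in for model(X, W, b)
    cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(pred), reduction_indices=1))
    optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

    # Dummy data standing in for the output of input('train.csv').
    train_X = np.random.rand(300, 30).astype(np.float32)
    train_Y = np.eye(16, dtype=np.float32)[np.random.randint(0, 16, 300)]

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Feed the placeholders the graph was built with, not ones created
        # afterwards under different names.
        _, c = sess.run([optimizer, cost], feed_dict={X: train_X, Y: train_Y})

If input() really does create its own placeholders (case 2), then the X and Y it returns are the tensors to put in feed_dict, and the extra x and y placeholders created afterwards are unnecessary.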

Solution 2

I think I ran into a similar error. It seems your graph does not actually contain those tensors x and y: you created placeholders with the same variable names, but that does not mean the graph contains tensors with those names.

Here is the link to my question (which I answered myself): link

Use this for getting all the tensors in your graph (pretty useful):

[n.name for n in tf.get_default_graph().as_graph_def().node]
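
For example, here is one way to use that list to check whether the placeholder from the error message really exists in the current graph (the names below are taken from the question and are only illustrative):

    import tensorflow as tf

    graph = tf.get_default_graph()
    node_names = [n.name for n in graph.as_graph_def().node]

    # Is the node the error complains about actually part of this graph?
    print('Placeholder_54' in node_names)

    # If it is, you can recover the live Tensor by name and feed it directly;
    # "<node_name>:0" refers to the node's first output.
    placeholder_54 = graph.get_tensor_by_name('Placeholder_54:0')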
Author by printemp

Updated on June 30, 2022

Comments

  • printemp almost 2 years

    I am new to TensorFlow and want to train a logistic regression model for classification.

    # Set model weights
    W = tf.Variable(tf.zeros([30, 16]))
    b = tf.Variable(tf.zeros([16]))
    train_X, train_Y, X, Y = input('train.csv')
    
    #construct model
    pred = model(X, W, b)
    # Minimize error using cross entropy
    cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(pred), reduction_indices=1))
    # Gradient Descent
    learning_rate = 0.1
    #optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    # Initializing the variables
    init = tf.initialize_all_variables()
    
    get_ipython().magic(u'matplotlib inline')
    import collections
    import matplotlib.pyplot as plt
    
    training_epochs = 200
    batch_size = 300
    train_X, train_Y, X, Y = input('train.csv')
    acc = []
    x = tf.placeholder(tf.float32, [None, 30]) 
    y = tf.placeholder(tf.float32, [None, 16])
    with tf.Session() as sess:
         sess.run(init)
         # Training cycle
         for epoch in range(training_epochs):
             avg_cost = 0.0
             #print(type(y_train[0][0]))
             print(type(train_X))
             print(type(train_X[0][0]))
             print X
             _, c = sess.run([optimizer, cost], feed_dict = {x: train_X, y: train_Y})
    

    The feed_dict method does not work; it fails with the complaint:

    InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_54' with dtype float [[Node: Placeholder_54 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op u'Placeholder_54':

    I checked the data types for the training feature data X:

      train_X type: <type 'numpy.ndarray'>
      train_X[0][0]: <type 'numpy.float32'>
      train_X size: (300, 30)
      place_holder info : Tensor("Placeholder_56:0", shape=(?, 30), dtype=float32)
    

    I do not know why it complains. I hope somebody can help, thanks.