ValueError: Tensor must be from the same graph as Tensor with Bidirectional RNN in TensorFlow


Solution 1

TensorFlow records every operation on a computation graph. The graph defines which operations feed which, linking everything together so that TensorFlow can follow the steps you set up to produce the final output. If you feed a Tensor or operation from one graph into a Tensor or operation on another graph, it will fail: everything must live on the same execution graph.
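As a quick illustration (a minimal sketch, not taken from the question's code, using the TF 1.x-style API via tf.compat.v1), mixing tensors from two graphs raises exactly this error:

    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    g1 = tf.Graph()
    with g1.as_default():
        a = tf.constant(1.0)   # created on g1

    g2 = tf.Graph()
    with g2.as_default():
        b = tf.constant(2.0)   # created on g2
        c = a + b              # ValueError: ... must be from the same graph ...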

Try removing the with tf.Graph().as_default(): context manager.

TensorFlow provides a default graph that is used whenever you do not specify one. You are probably using the default graph in one spot and a different graph in your training block.

There does not seem to be a reason to set a graph as default here, so most likely you are using separate graphs by accident. If you really do want a specific graph, pass it around as a variable rather than setting it as the default like this; see the sketch below.
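A minimal sketch of the explicit-graph route (the build_model helper and its shapes are made up for illustration): create one tf.Graph, build every op inside it, and hand the same graph to the Session.

    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    def build_model(graph):
        # Register every op on the graph that was passed in.
        with graph.as_default():
            x_place = tf.placeholder(tf.float32, shape=[None, 10], name='x')
            logits = tf.layers.dense(x_place, 2)
            init = tf.global_variables_initializer()
        return x_place, logits, init

    graph = tf.Graph()
    x_place, logits, init = build_model(graph)

    # The Session must target the same graph the ops were built on.
    with tf.Session(graph=graph) as sess:
        sess.run(init)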

Solution 2

If you are using TF 2.x with Keras, then disabling eager execution before building the model graph may help. To disable eager execution, add the following line before defining the model:

tf.compat.v1.disable_eager_execution()
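Placement matters: the call has to run before any part of the model is built. A minimal sketch (the toy Keras model here is just an example):

    import tensorflow as tf

    # Must come before any model/graph construction.
    tf.compat.v1.disable_eager_execution()

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')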

Comments

  • Admin
    Admin over 3 years

    I'm building a text tagger using a bidirectional dynamic RNN in TensorFlow. After matching the input's dimensions, I tried to run a Session. This is the BLSTM setup part:

    fw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
    bw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
    
    (fw_outputs, bw_outputs), _ = bidirectional_dynamic_rnn(fw_lstm_cell,
                                                            bw_lstm_cell,
                                                            x_place,
                                                            sequence_length=SEQLEN,
                                                            dtype='float32')
    

    and this is the running part:

      with tf.Graph().as_default():
        # Placeholder Settings
        x_place, y_place = set_placeholder(BATCH_SIZE, EM_DIMS, MAXLEN)
    
        # BLSTM Model Building
        hlogits = tf_kcpt.build_blstm(x_place)
    
        # Compute loss
        loss = tf_kcpt.get_loss(log_likelihood)
    
        # Training
        train_op = tf_kcpt.training(loss)
    
        # load Eval method
        eval_correct = tf_kcpt.evaluation(logits, y_place)
    
        # Session Setting & Init
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
    
        # tensor summary setting
        summary = tf.summary.merge_all()
        summary_writer = tf.summary.FileWriter(LOG_DIR, sess.graph)
    
        # Save
        saver = tf.train.Saver()
    
        # Run epoch
        for step in range(EPOCH):
            start_time = time.time()
    
            feed_dict = fill_feed_dict(KCPT_SET['train'], x_place, y_place)
            _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
    

    But it gives me this error:

    ValueError: Tensor("Shape:0", shape=(1,), dtype=int32) must be from the same graph as Tensor("bidirectional_rnn/fw/fw/stack_2:0", shape=(1,), dtype=int32).

    Help me, please

  • Agile Bean
    Agile Bean over 5 years
    Alternatively, you can reset the graph with tf.reset_default_graph() and then execute all code blocks again; this often removes the problem (see the sketch after these comments).
  • mrgloom
    mrgloom over 4 years
    Why doesn't with tf.Graph().as_default(): reuse the already created graph?
  • Sowmya Ganesan
    Sowmya Ganesan almost 4 years
    I assigned it to a variable and reused it later, but it still didn't work: ValueError: Tensor("optimizations:0", shape=(3,), dtype=string) must be from the same graph as Tensor("PaddedBatchDatasetV2:0", shape=(), dtype=variant).
  • Christabella Irwanto
    Christabella Irwanto almost 4 years
    @mrgloom tf.Graph() creates a new graph
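
Following up on the reset suggestion in the comments above, a minimal sketch of that approach (the rebuild steps are indicated with comments, not real code):

    import tensorflow.compat.v1 as tf

    tf.reset_default_graph()   # discard the stale default graph
    # ...rebuild placeholders, model, loss and train_op from scratch...
    # ...then create a fresh tf.Session() and run as usual...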