How to turn off dropout for testing in Tensorflow?


Solution 1

The easiest way is to change the keep_prob parameter using tf.placeholder_with_default:

prob = tf.placeholder_with_default(1.0, shape=())  # keep probability, defaults to 1.0
layer = tf.nn.dropout(layer, prob)                 # second argument of tf.nn.dropout is keep_prob

This way, when you train you can set the parameter like this:

sess.run(train_step, feed_dict={prob: 0.5})

and when you evaluate, the default value of 1.0 is used, disabling dropout.
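
Putting it together, a minimal end-to-end sketch (the layer sizes, x, and train_step here are illustrative, not part of the original answer):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 784))
# keep_prob defaults to 1.0, so evaluation needs no extra feed
prob = tf.placeholder_with_default(1.0, shape=())

hidden = tf.nn.dropout(tf.layers.dense(x, 128, activation=tf.nn.relu), prob)
logits = tf.layers.dense(hidden, 10)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # training: keep only 50% of activations
    # sess.run(train_step, feed_dict={x: batch_x, prob: 0.5})
    # evaluation: prob silently falls back to its default of 1.0
    # sess.run(logits, feed_dict={x: test_x})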

Solution 2

With the new tf.estimator API, you specify a model function that returns different models based on whether you are training or testing, while still allowing you to reuse your model code. In your model function you would do something similar to:

def model_fn(features, labels, mode):
    training = (mode == tf.estimator.ModeKeys.TRAIN)
    ...
    # rate is the fraction of units to drop; dropout is only applied when training is True
    t = tf.layers.dropout(t, rate=0.25, training=training, name='dropout_1')
    ...

The mode argument is passed automatically depending on whether you call estimator.train(...), estimator.evaluate(...), or estimator.predict(...).
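
For illustration, here is roughly how that model function is wired up (train_input_fn and predict_input_fn are assumed to be defined elsewhere):

estimator = tf.estimator.Estimator(model_fn=model_fn)

# mode == ModeKeys.TRAIN inside model_fn, so dropout is active
estimator.train(input_fn=train_input_fn, steps=1000)

# mode == ModeKeys.PREDICT, so the dropout layer becomes a no-op
predictions = estimator.predict(input_fn=predict_input_fn)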

Solution 3

You should set keep_prob in the TensorFlow dropout layer: it is the probability of keeping a unit's activation. Typical training values are between 0.5 and 0.8. When testing the network, simply feed keep_prob with 1.0.

You should define something like that:

keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# DropoutWrapper expects an RNN cell as its first argument, not a finished layer
drop = tf.contrib.rnn.DropoutWrapper(layer1, output_keep_prob=keep_prob)

Then change the values in the session:

# note: string keys in feed_dict must be full tensor names, such as 'keep_prob:0'
_ = sess.run(cost, feed_dict={'input:0': training_set, 'output:0': training_labels, 'keep_prob:0': 0.8})  # during training
_ = sess.run(cost, feed_dict={'input:0': testing_set, 'output:0': testing_labels, 'keep_prob:0': 1.0})    # during testing
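
Since DropoutWrapper wraps an RNN cell rather than a finished layer, here is a minimal sketch of the full pattern (the input shape and cell size are illustrative):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=(None, None, 32))  # [batch, time, features]
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)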

Solution 4

If you don't want to use the Estimator API, you can create the dropout this way:

tf_is_training_pl = tf.placeholder_with_default(True, shape=())
# note: rate is the fraction of units to drop, so 0.8 drops 80% of activations
tf_drop_out = tf.layers.dropout(last_output, rate=0.8, training=tf_is_training_pl)

So, when doing evaluation you feed the session with {tf_is_training_pl: False} instead of changing the dropout rate.
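
For example (input_ph, labels_ph, and cost are assumed to be defined elsewhere):

# training: the placeholder's default of True keeps dropout active
_ = sess.run(cost, feed_dict={input_ph: training_set, labels_ph: training_labels})

# evaluation: override the default to disable dropout
_ = sess.run(cost, feed_dict={input_ph: testing_set, labels_ph: testing_labels,
                              tf_is_training_pl: False})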

Solution 5

With the update of TensorFlow, the function tf.layers.dropout should be used instead of tf.nn.dropout.

It supports a training parameter. Using it allows your model to define its dropout behavior once, instead of relying on the feed_dict to manage an external keep_prob parameter, which makes for better-refactored code.

More info: https://www.tensorflow.org/api_docs/python/tf/layers/dropout
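
As a small sketch, the class form tf.layers.Dropout achieves the same thing (the tensor t and the rate here are illustrative):

import tensorflow as tf

t = tf.placeholder(tf.float32, shape=(None, 128))  # some activations
is_training = tf.placeholder_with_default(False, shape=())

dropout = tf.layers.Dropout(rate=0.25)  # rate is the fraction of units to drop
t = dropout(t, training=is_training)    # identity unless is_training is fed as True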


Comments

  • Admin, about 4 years ago

    I am fairly new to Tensorflow and ML in general, so I hereby apologize for a (likely) trivial question.

    I use the dropout technique to improve how my network learns, and it seems to work just fine. Then I would like to test the network on some data, like this:

    def Ask(self, image):
        return self.session.run(self.model, feed_dict={self.inputPh: image})

    Obviously, it yields different results each time, as the dropout is still in place. One solution I can think of is to create two separate models, one for training and the other for actual later use of the network; however, such a solution seems impractical to me.

    What's the common approach to solving this problem?

  • Suleka_28, over 5 years ago
    I tried this approach but it is giving me NaN errors when assigning prob to tf.layers.dropout(<layer>, rate=prob, training=True). I used the placeholder that you have suggested, with a default value. Question: stackoverflow.com/questions/54069395/…
  • nessuno, over 5 years ago
    Because tf.layers.dropout takes the drop probability (rate), not the keep probability, so a default of 1.0 drops every unit.
  • leonard, over 4 years ago
    What if we want to work with the frozen graph of an estimator and not the saved_model? How do we specify which mode we are in?