TensorFlow batch size in input placeholder

Here you are specifying the model input. You want to leave the batch size as None, which means you can run the model with a variable number of inputs (one or more). Batching is important for using your computing resources efficiently.

x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])

The next important line in your code is:

batch = mnist.train.next_batch(50)

Here you are sending 50 elements as input, but you can also change that to just one:

batch = mnist.train.next_batch(1)

without modifying the graph. If you specify the batch size (some number instead of None in the first snippet), then you would have to change the graph every time the batch size changes, which is not ideal, especially in production.
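
For contrast, here is a sketch of what happens with a hard-coded batch dimension; feeding any other number of rows is rejected (again assuming the TF 1.x API, with made-up NumPy inputs):

import numpy as np
import tensorflow as tf

x_fixed = tf.placeholder("float", shape=[50, 784])  # batch size baked into the graph
row_sums = tf.reduce_sum(x_fixed, axis=1)

with tf.Session() as sess:
    batch_50 = np.random.rand(50, 784).astype(np.float32)
    sess.run(row_sums, feed_dict={x_fixed: batch_50})  # works

    single = np.random.rand(1, 784).astype(np.float32)
    # Raises a ValueError: a value of shape (1, 784) cannot be fed
    # into a placeholder declared with shape (50, 784)
    sess.run(row_sums, feed_dict={x_fixed: single})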

Comments

  • Andrea Sindico over 1 year

    I am new to TensorFlow and I don't understand why the input placeholder is often dimensioned with the size of the batches used for training.

    In this example, which I found here, and in the official MNIST tutorial, it is not:

    from get_mnist_data_tf import read_data_sets
    mnist = read_data_sets("MNIST_data/", one_hot=True)
    import tensorflow as tf
    sess = tf.InteractiveSession()
    # Input placeholders: the batch dimension is left as None
    x = tf.placeholder("float", shape=[None, 784])
    y_ = tf.placeholder("float", shape=[None, 10])
    # Weights and bias of a single softmax layer
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    sess.run(tf.initialize_all_variables())
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
    # Train on mini-batches of 50 examples
    for i in range(1000):
      batch = mnist.train.next_batch(50)
      train_step.run(feed_dict={x: batch[0], y_: batch[1]})
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    
    # Evaluate on the whole test set through the same placeholders
    print(accuracy.eval(feed_dict={x: mnist.test.images,
                                   y_: mnist.test.labels}))
    

    So what is the right way to dimension and create the model input, and how should it be trained?