How to implement mini-batch gradient descent in Python?

Solution 1

This function yields mini-batches given the inputs and targets:

import numpy as np

def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    assert inputs.shape[0] == targets.shape[0]
    if shuffle:
        # shuffle a shared index array so inputs and targets stay aligned
        indices = np.arange(inputs.shape[0])
        np.random.shuffle(indices)
    # note: any leftover samples that do not fill a complete batch are dropped
    for start_idx in range(0, inputs.shape[0] - batchsize + 1, batchsize):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batchsize]
        else:
            excerpt = slice(start_idx, start_idx + batchsize)
        yield inputs[excerpt], targets[excerpt]

and this shows how to use it for training:

for n in range(n_epochs):  # xrange in the original Python 2 code
    for x_batch, y_batch in iterate_minibatches(X, Y, batch_size, shuffle=True):
        l_train, acc_train = f_train(x_batch, y_batch)

    # evaluate on the validation set once per epoch
    l_val, acc_val = f_val(Xt, Yt)
    logging.info('epoch %d, train_loss %s, acc %s, val_loss %s, acc %s',
                 n, l_train, acc_train, l_val, acc_val)

Obviously, you need to define f_train, f_val and the other functions yourself, depending on the optimisation library (e.g. Lasagne, Keras) you are using.
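
For concreteness, a minimal plain-NumPy sketch of what f_train and f_val might look like is given below. The single-layer logistic-regression model and the names W, b and learning_rate are illustrative assumptions, not something defined in the answer above:

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(3, 1))   # weights for 3 input features
b = np.zeros(1)
learning_rate = 0.5

def _forward(x):
    # sigmoid of the linear score
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

def f_train(x_batch, y_batch):
    # one mini-batch gradient step; returns (loss, accuracy) for that batch
    global W, b
    p = _forward(x_batch)
    grad = p - y_batch                      # d(cross-entropy)/d(score)
    W -= learning_rate * x_batch.T @ grad / len(x_batch)
    b -= learning_rate * grad.mean(axis=0)
    loss = -np.mean(y_batch*np.log(p + 1e-9) + (1 - y_batch)*np.log(1 - p + 1e-9))
    acc = np.mean((p > 0.5) == y_batch)
    return loss, acc

def f_val(x, y):
    # loss/accuracy on held-out data, with no parameter update
    p = _forward(x)
    loss = -np.mean(y*np.log(p + 1e-9) + (1 - y)*np.log(1 - p + 1e-9))
    acc = np.mean((p > 0.5) == y)
    return loss, acc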

Solution 2

The following function returns (yields) mini-batches. It is based on the function provided by Ash, but correctly handles the final mini-batch even when the dataset size is not a multiple of the batch size.

def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    assert inputs.shape[0] == targets.shape[0]
    if shuffle:
        indices = np.arange(inputs.shape[0])
        np.random.shuffle(indices)
    for start_idx in range(0, inputs.shape[0], batchsize):
        # clamp the end index so the final, possibly smaller batch is still yielded
        end_idx = min(start_idx + batchsize, inputs.shape[0])
        if shuffle:
            excerpt = indices[start_idx:end_idx]
        else:
            excerpt = slice(start_idx, end_idx)
        yield inputs[excerpt], targets[excerpt]
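
For example, with 10 samples and a batch size of 4, this version yields batches of size 4, 4 and 2, whereas the function from Solution 1 would silently drop the last two samples:

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

for x_batch, y_batch in iterate_minibatches(X, y, batchsize=4, shuffle=False):
    print(x_batch.shape[0])   # prints 4, then 4, then 2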

Comments

  • savan77, almost 2 years ago

    I have just started to learn deep learning and found myself stuck when it came to gradient descent. I know how to implement batch gradient descent, and I understand how mini-batch and stochastic gradient descent work in theory, but I really can't figure out how to implement them in code.

    import numpy as np
    X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
    y = np.array([[0,1,1,0]]).T
    alpha,hidden_dim = (0.5,4)
    synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
    synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
    for j in range(60000):  # xrange in the original Python 2 code
        # forward pass through two sigmoid layers over the *whole* dataset
        layer_1 = 1/(1+np.exp(-(np.dot(X,synapse_0))))
        layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1))))
        # backpropagate the error
        layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
        layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
        # one update per full pass = batch gradient descent
        synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
        synapse_0 -= (alpha * X.T.dot(layer_1_delta))
    

    This is the sample code from Andrew Trask's blog. It's small and easy to understand. It implements batch gradient descent, but I would like to implement mini-batch and stochastic gradient descent with this sample. How could I do that? What do I have to add or modify in this code to implement mini-batch and stochastic gradient descent, respectively? Thanks in advance. (I know this sample has only a few examples, whereas I would need a large dataset to split into mini-batches, but I would still like to know how it can be implemented.)
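
    A minimal sketch of one way to adapt this code to mini-batch updates, reusing the iterate_minibatches generator from the answers above and updating the weights once per batch; setting batch_size = 1 turns this into stochastic gradient descent. The batch_size value below is an arbitrary, illustrative choice:

    import numpy as np
    X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
    y = np.array([[0,1,1,0]]).T
    alpha, hidden_dim, batch_size = 0.5, 4, 2
    synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
    synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
    for epoch in range(60000):
        for x_batch, y_batch in iterate_minibatches(X, y, batch_size, shuffle=True):
            # forward pass on the current mini-batch only
            layer_1 = 1/(1+np.exp(-x_batch.dot(synapse_0)))
            layer_2 = 1/(1+np.exp(-layer_1.dot(synapse_1)))
            # backpropagate and update the weights after every mini-batch
            layer_2_delta = (layer_2 - y_batch)*(layer_2*(1-layer_2))
            layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1*(1-layer_1))
            synapse_1 -= alpha * layer_1.T.dot(layer_2_delta)
            synapse_0 -= alpha * x_batch.T.dot(layer_1_delta)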