Simple multi layer neural network implementation

41,841

Solution 1

A simple implementation

Here is a readable implementation using classes in Python. This implementation trades efficiency for understandability:

    import math
    import random

    BIAS = -1

    """
    To view the structure of the Neural Network, type
    print network_name
    """

    class Neuron:
        def __init__(self, n_inputs ):
            self.n_inputs = n_inputs
            self.set_weights( [random.uniform(0,1) for x in range(0,n_inputs+1)] ) # +1 for bias weight

        def sum(self, inputs ):
            # Does not include the bias
            return sum(val*self.weights[i] for i,val in enumerate(inputs))

        def set_weights(self, weights ):
            self.weights = weights

        def __str__(self):
            return 'Weights: %s, Bias: %s' % ( str(self.weights[:-1]),str(self.weights[-1]) )

    class NeuronLayer:
        def __init__(self, n_neurons, n_inputs):
            self.n_neurons = n_neurons
            self.neurons = [Neuron( n_inputs ) for _ in range(0,self.n_neurons)]

        def __str__(self):
            return 'Layer:\n\t'+'\n\t'.join([str(neuron) for neuron in self.neurons])+''

    class NeuralNetwork:
        def __init__(self, n_inputs, n_outputs, n_neurons_to_hl, n_hidden_layers):
            self.n_inputs = n_inputs
            self.n_outputs = n_outputs
            self.n_hidden_layers = n_hidden_layers
            self.n_neurons_to_hl = n_neurons_to_hl
    
            # Do not touch
            self._create_network()
            self._n_weights = None
            # end

        def _create_network(self):
            if self.n_hidden_layers>0:
                # create the first layer
                self.layers = [NeuronLayer( self.n_neurons_to_hl,self.n_inputs )]
        
                # create hidden layers
                self.layers += [NeuronLayer( self.n_neurons_to_hl,self.n_neurons_to_hl ) for _ in range(0,self.n_hidden_layers)]
        
                # hidden-to-output layer
                self.layers += [NeuronLayer( self.n_outputs,self.n_neurons_to_hl )]
            else:
                # If we don't require hidden layers
                self.layers = [NeuronLayer( self.n_outputs,self.n_inputs )]

        def get_weights(self):
            weights = []
    
            for layer in self.layers:
                for neuron in layer.neurons:
                    weights += neuron.weights
    
            return weights

        @property
        def n_weights(self):
            if not self._n_weights:
                self._n_weights = 0
                for layer in self.layers:
                    for neuron in layer.neurons:
                        self._n_weights += neuron.n_inputs+1 # +1 for bias weight
            return self._n_weights

        def set_weights(self, weights ):
            assert len(weights)==self.n_weights, "Incorrect amount of weights."
    
            stop = 0
            for layer in self.layers:
                for neuron in layer.neurons:
                    start, stop = stop, stop+(neuron.n_inputs+1)
                    neuron.set_weights( weights[start:stop] )
            return self

        def update(self, inputs ):
            assert len(inputs)==self.n_inputs, "Incorrect amount of inputs."
    
            for layer in self.layers:
                outputs = []
                for neuron in layer.neurons:
                    tot = neuron.sum(inputs) + neuron.weights[-1]*BIAS
                    outputs.append( self.sigmoid(tot) )
                inputs = outputs   
            return outputs

        def sigmoid(self, activation,response=1 ):
            # the activation function
            try:
                return 1/(1+math.e**(-activation/response))
            except OverflowError:
                return float("inf")

        def __str__(self):
            return '\n'.join([str(i+1)+' '+str(layer) for i,layer in enumerate(self.layers)])

A more efficient implementation (with learning)

If you are looking for a more efficient example of a neural network with learning (backpropagation), take a look at my neural network Github repository here.

Solution 2

Hmm this is tricky. I had the same problem before and I couldn't find anything between good but heavily math loaded explanation and ready to use implementations.

The problem with ready to use implementations like PyBrain is that they hide the details, so people interested in learning how to implement ANNs are in need of something else. Reading the code of such solutions can be challenging too because they often use heuristics to improve performance and that makes the code harder to follow for a starter.

However, there are a few of resources you could use:

http://msdn.microsoft.com/en-us/magazine/jj658979.aspx

http://itee.uq.edu.au/~cogs2010/cmc/chapters/BackProp/

http://www.codeproject.com/Articles/19323/Image-Recognition-with-Neural-Networks

http://freedelta.free.fr/r/php-code-samples/artificial-intelligence-neural-network-backpropagation/

Solution 3

Here is an example of how you can implement a feedforward neural network using numpy. First import numpy and specify the dimensions of your inputs and your targets.

import numpy as np

input_dim = 1000
target_dim = 10

We will build the network structure now. As suggested in Bishop's great "Pattern Recognition and Machine Learning", you can simply consider the last row of your numpy matrices as bias weights and the last column of your activations as bias neurons. Input/output dimensions of the first/last weight matrix needs to be 1 greater, then.

dimensions = [input_dim+1, 500, 500, target_dim+1]

weight_matrices = []
for i in range(len(dimensions)-1):
  weight_matrix = np.ones((dimensions[i], dimensions[i]))
  weight_matrices.append(weight_matrix)

If your inputs are stored in a 2d numpy matrix, where each row corresponds to one sample and the columns correspond to the attributes of your samples, you can propagate through the network like this: (assuming logistic sigmoid function as activation function)

def activate_network(inputs):
  activations = [] # we store the activations for each layer here
  a = np.ones((inputs.shape[0], inputs.shape[1]+1)) #add the bias to the inputs
  a[:,:-1] = inputs

  for w in weight_matrices:
    x = a.dot(w) # sum of weighted inputs
    a = 1. / (1. - np.exp(-x)) # apply logistic sigmoid activation
    a[:,-1] = 1. # bias for the next layer.
    activations.append(a)

  return activations

The last element in activations will be the output of your network, but be careful, you need to omit the additional column for the biases, so your output will be activations[-1][:,:-1].

To train a network, you need to implement backpropagation which takes a few additional lines of code. You need to loop from the last element of activations to the first, basically. Make sure to set the bias column in the error signal to zero for each layer before each backpropagation step.

Solution 4

Here is a backprop algorithm in native python.

Share:
41,841

Related videos on Youtube

Animattronic
Author by

Animattronic

C#, Java and Ruby developer Lawyer (Legal Logic specialization) Sci Fi books/movies enthusiast Krav Maga and Aikido coach

Updated on June 12, 2020

Comments

  • Animattronic
    Animattronic almost 4 years

    some time ago I have started my adventure with machine learning (during last 2 years of my studies). I have read a lot of books and written a lot of code with machine learning algorithms EXCEPT neural networks, which were out of my scope. I'm very interested in this topic, but I have a huge problem: All the books I have read have two main issues:

    1. Contain tones of maths equations. After lecture I'm quite familiar with them and by hand, on the paper I can do the calculations.
    2. Contain big examples embedded in some complicated context (for example investigating internet shop sales rates) and to get inside neural networks implementation, I have to write lot of code to reproduce the context. What is missing - SIMPLE straightforward implementation without a lot of context and equations.

    Could you please advise me, where I can find SIMPLE implementation of multi layer perception (neural network) ? I don't need theoretical knowledge, and don want also context-embedded examples. I prefer some scripting languages to save time and effort - 99% of my previous works were done in Python.

    Here is the list of books I have read before (and not found what I wanted):

    1. Machine learning in action
    2. Programming Collective Intelligence
    3. Machine Learning: An Algorithmic Perspective
    4. Introduction to neural networks in Java
    5. Introduction to neural networks in C#
  • Animattronic
    Animattronic about 11 years
    Hi! Thanks for reply. No I haven't tried PyBrain because of the fact that I want to do all the things alone :P I'm looking for a tutorial rather that others ready-to-go solutions even if it's well documented. But it sound lika a plan, if nothing better appears :)
  • Coding man
    Coding man about 10 years
    what should be the argument of update method? An example would be great.
  • jorgenkg
    jorgenkg about 10 years
    A list of input values. eg: input_values = [1, 0, 0, 1, 0.5]
  • Llamageddon
    Llamageddon almost 10 years
    What about training?
  • Ryder Brooks
    Ryder Brooks over 6 years
    An example written by someone who actually knows how to write code and NAME variables!! I found the Github repository very helpful.