Export a neural network trained with MATLAB for use in other programming languages


Solution 1

I solved the problems described above, and I think it is useful to share what I've learned.

Premises

First of all, we need some definitions. Let's consider the following image, taken from [1]:

A scheme of Neural Network

In the above figure, IW stands for input weights: these are the weights of the neurons in Layer 1, each of which is connected to every input, as the following image shows [1]:

All neurons are connected with all inputs

All the other weights are called layer weights (LW in the first figure); each of them connects a neuron to every output of the previous layer. In our case study, the network has only two layers, so we need only one LW matrix.
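
To fix ideas, these are the sizes of the arrays involved for the two-layer network discussed here (a sketch assuming 200 inputs, 20 hidden neurons and 2 outputs, as in the question, and assuming the preprocessing functions have been removed as described below):

% Expected array sizes for a 2-layer network with 200 inputs, 20 hidden neurons, 2 outputs
size(net.IW{1})   % [20 200] - input weights: one row per hidden neuron
size(net.LW{2})   % [2 20]   - layer weights from Layer 1 to Layer 2
size(net.b{1})    % [20 1]   - biases of the hidden layer
size(net.b{2})    % [2 1]    - biases of the output layer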

Solution of the problem

After the above introduction, we can proceed by dividing the issue into two steps:

  • Force the number of initial weights to match with the input array length
  • Use the weights to implement and use the neural network just trained in other programming languages

A - Force the number of initial weights to match with the input array length

Using nprtool, we can train our network and, at the end of the process, export some information about the entire training process to the workspace. In particular, we need to export:

  • a MATLAB network object that represents the neural network created
  • the input array used to train the network
  • the target array used to train the network

We also need to generate an M-file that contains the code used by MATLAB to create the neural network, because we need to modify it and change some training options.

The following image shows how to perform these operations:

The nprtool GUI to export data and generate the M-code

The generated M-code will be similar to the following:

function net = create_pr_net(inputs,targets)
%CREATE_PR_NET Creates and trains a pattern recognition neural network.
%
%  NET = CREATE_PR_NET(INPUTS,TARGETS) takes these arguments:
%    INPUTS - RxQ matrix of Q R-element input samples
%    TARGETS - SxQ matrix of Q S-element associated target samples, where
%      each column contains a single 1, with all other elements set to 0.
%  and returns these results:
%    NET - The trained neural network
%
%  For example, to solve the Iris dataset problem with this function:
%
%    load iris_dataset
%    net = create_pr_net(irisInputs,irisTargets);
%    irisOutputs = sim(net,irisInputs);
%
%  To reproduce the results you obtained in NPRTOOL:
%
%    net = create_pr_net(trainingSetInput,trainingSetOutput);

% Create Network
numHiddenNeurons = 20;  % Adjust as desired
net = newpr(inputs,targets,numHiddenNeurons);
net.divideParam.trainRatio = 75/100;  % Adjust as desired
net.divideParam.valRatio = 15/100;  % Adjust as desired
net.divideParam.testRatio = 10/100;  % Adjust as desired

% Train and Apply Network
[net,tr] = train(net,inputs,targets);
outputs = sim(net,inputs);

% Plot
plotperf(tr)
plotconfusion(targets,outputs)

Before starting the training process, we need to remove all the preprocessing and postprocessing functions that MATLAB applies to inputs and outputs. This can be done by adding the following lines just before the % Train and Apply Network comment:

net.inputs{1}.processFcns = {};
net.outputs{2}.processFcns = {};
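
If you are curious about what is being cleared, you can inspect the processing functions attached to the network before removing them (the exact defaults depend on the MATLAB version, but they typically include 'removeconstantrows' and 'mapminmax', which is also why the weight matrix can end up narrower than the raw input):

% Inspect the processing functions before clearing them
net.inputs{1}.processFcns    % e.g. {'fixunknowns','removeconstantrows','mapminmax'}
net.outputs{2}.processFcns   % e.g. {'removeconstantrows','mapminmax'}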

After these changes to the create_pr_net() function, we can simply use it to create our final neural network:

net = create_pr_net(input, target);

where input and target are the values we exported through nprtool.

In this way, we are sure that the number of weights equals the length of the input array. This also simplifies porting the network to other programming languages.
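
A quick sanity check (the [20 200] size assumes 20 hidden neurons and 200 inputs, as in the question):

% The number of columns of IW{1} should now equal the number of input rows
size(net.IW{1})   % e.g. [20 200]
size(input, 1)    % e.g. 200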

B - Implement and use the neural network just trained in other programming languages

With these changes, we can define a function like this:

function [ Results ] = classify( net, input )
    % Hidden layer: input weights, bias and tansig activation
    y1 = tansig(net.IW{1} * input + net.b{1});

    % Output layer: layer weights, bias and tansig activation
    Results = tansig(net.LW{2} * y1 + net.b{2});
end

In this code, we use the IW and LW matrices mentioned above, together with the biases b that appear in the network diagram produced by nprtool. Their role is not important in this context; we simply need to include them because nprtool does.

Now we can use either the classify() function defined above or MATLAB's sim() function, obtaining the same results, as shown in the following example:

>> sim(net, input(:, 1))

ans =

    0.9759
   -0.1867
   -0.1891

>> classify(net, input(:, 1))

ans =

   0.9759   
  -0.1867
  -0.1891

The classify() function can be read as pseudocode and implemented in any programming language in which it is possible to define the MATLAB tansig() function [2] and the basic operations between arrays.
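
As a starting point for such a port, here is the same computation written without any toolbox function, using the definition of tansig given in [2], i.e. tansig(n) = 2/(1+exp(-2*n)) - 1. This is only a sketch: it still uses MATLAB's matrix operators, which would be replaced by loops or a linear algebra library in the target language, and classify_raw is a hypothetical name.

function [ Results ] = classify_raw( IW1, LW2, b1, b2, input )
    % Forward pass using only basic operations, as a porting reference.
    % IW1, LW2, b1, b2 are the arrays extracted from the trained net object.
    y1 = 2 ./ (1 + exp(-2 * (IW1 * input + b1))) - 1;     % tansig of the hidden layer
    Results = 2 ./ (1 + exp(-2 * (LW2 * y1 + b2))) - 1;   % tansig of the output layer
end

It can be called as classify_raw(net.IW{1}, net.LW{2}, net.b{1}, net.b{2}, input(:, 1)) and should return the same values as classify() and sim().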

References

[1] Howard Demuth, Mark Beale, Martin Hagan: Neural Network Toolbox 6 - User Guide, MATLAB

[2] MathWorks, tansig - Hyperbolic tangent sigmoid transfer function, MATLAB Documentation Center

Additional notes

Take a look at robott's answer and Sangeun Chi's answer for more details.

Solution 2

Thanks to VitoShadow's and robott's answers, I can export MATLAB neural network values to other applications.

I really appreciate them, but I found some trivial errors in their code and want to correct them.

1) In VitoShadow's code, the output layer should be linear rather than tansig (a quick way to check whether this correction applies to your network is shown after these corrections):

Results = tansig(net.LW{2} * y1 + net.b{2});
-> Results = net.LW{2} * y1 + net.b{2};

2) In robott's preprocessing code, it is easier to extract xmax and xmin from the net variable than to calculate them:

xmax = net.inputs{1}.processSettings{1}.xmax
xmin = net.inputs{1}.processSettings{1}.xmin

3) In robott's postprocessing code:

xmax = net.outputs{2}.processSettings{1}.xmax
xmin = net.outputs{2}.processSettings{1}.xmin

Results = (ymax-ymin)*(Results-xmin)/(xmax-xmin) + ymin;
-> Results = (Results-ymin)*(xmax-xmin)/(ymax-ymin) + xmin;
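
Whether correction 1) applies depends on how the network was created: a quick way to check is to look at the transfer function of the output layer (assuming a two-layer network, as in the answers above):

net.layers{2}.transferFcn   % 'tansig' -> keep the outer tansig; 'purelin' -> drop it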

You can manually check and confirm the values as follows:

p2 = mapminmax('apply', input(:, 1), net.inputs{1}.processSettings{1})

-> preprocessed data

y1 = purelin(net.LW{2} * tansig(net.IW{1} * p2 + net.b{1}) + net.b{2})

-> Neural Network processed data

y2 = mapminmax( 'reverse' , y1, net.outputs{2}.processSettings{1})

-> postprocessed data
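
The manual check above can be wrapped into a single function (a sketch assuming, as in the lines above, that mapminmax is the first processing function attached to both inputs and outputs, a tansig hidden layer and a purelin output layer; classify_full is a hypothetical name):

function Results = classify_full( net, input )
    inSettings  = net.inputs{1}.processSettings{1};
    outSettings = net.outputs{2}.processSettings{1};
    p  = mapminmax('apply', input, inSettings);       % preprocessing
    y1 = tansig(net.IW{1} * p + net.b{1});            % hidden layer
    y2 = net.LW{2} * y1 + net.b{2};                   % linear output layer
    Results = mapminmax('reverse', y2, outSettings);  % postprocessing
end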

Reference: http://www.mathworks.com/matlabcentral/answers/14517-processing-of-i-p-data

Solution 3

This is a small improvement to Vito Gentile's great answer.

If you want to use the preprocessing and postprocessing 'mapminmax' functions, you have to pay attention, because 'mapminmax' in MATLAB normalizes by ROW and not by column!

This is what you need to add at the beginning of the classify function shown above, to keep the pre/post-processing coherent:

% Normalize each ROW of the input to [ymin, ymax] = [-1, 1],
% mirroring what mapminmax does by default
[m, n] = size(input);
ymax = 1;
ymin = -1;
for i = 1:m
    xmax = max(input(i,:));
    xmin = min(input(i,:));
    for j = 1:n
        input(i,j) = (ymax-ymin)*(input(i,j)-xmin)/(xmax-xmin) + ymin;
    end
end
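
For reference, this loop performs the same row-wise normalization that mapminmax applies by default, so inside MATLAB the same result can be obtained in one line (the explicit loop remains the version to port to other languages):

% Equivalent row-wise normalization to [-1, 1] using the toolbox function
input = mapminmax(input, -1, 1);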

And this at the end of the function:

% Map the tansig output range [xmin, xmax] = [-1, 1] to [ymin, ymax] = [0, 1]
ymax = 1;
ymin = 0;
xmax = 1;
xmin = -1;
Results = (ymax-ymin)*(Results-xmin)/(xmax-xmin) + ymin;
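
With these constants the formula reduces to a simple rescaling from the tansig output range [-1, 1] to [0, 1]:

% Equivalent to the line above with ymin = 0, ymax = 1, xmin = -1, xmax = 1
Results = (Results + 1) / 2;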

This is MATLAB code, but it can easily be read as pseudocode. I hope this will be helpful!

Author: Vito Gentile

I’m a researcher in human-computer interaction and adjunct professor of computer science, working at both University of Palermo (Italy) and University of Hertfordshire (UK). At the same time, I’m currently working for Codemotion Magazine and HTML.it as technical writer and content manager. Finally, I currently collaborate with synbrAIn S.r.l. as senior data scientist, working on AI and machine learning solutions in several application domains. Above all else, I’m a passionate developer. I had prior experiences as mobile and web developer (full-stack), working with several technologies and programming languages including Python, JavaScript, C#, Java and PHP. I also love music, technology, and any other new curious facts about computers and programming.

Updated on March 08, 2020

Comments

  • Vito Gentile (about 4 years ago)

    I trained a neural network using the MATLAB Neural Network Toolbox, and in particular the nprtool command, which provides a simple GUI to use the toolbox features and to export a net object containing the information about the generated NN.

    In this way, I created a working neural network that I can use as a classifier; the following diagram represents it:

    Diagram representing the Neural Network

    There are 200 inputs, 20 neurons in the first hidden layer, and 2 neurons in the last layer that provide a two-dimensional output.

    What I want to do is to use the network in some other programming language (C#, Java, ...).

    In order to solve this problem, I tried to use the following code in MATLAB:

    y1 = tansig(net.IW{1} * input + net.b{1});
    Results = tansig(net.LW{2} * y1 + net.b{2});
    

    Assuming that input is a one-dimensional array of 200 elements, the previous code would work if net.IW{1} were a 20x200 matrix (20 neurons, 200 weights).

    The problem is that I noticed that size(net.IW{1}) returns unexpected values:

    >> size(net.IW{1})
    
        ans =
    
        20   199
    

    I got the same problem with a network with 10000 inputs. In this case, the result wasn't 20x10000, but something like 20x9384 (I don't remember the exact value).

    So, the question is: how can I obtain the weights of each neuron? And after that, can someone explain how I can use them to produce the same output as MATLAB?

  • Lukas Ignatavičius (over 10 years ago)
    Thank you for this answer. It helped me a lot, but I needed the processFcns. I only used [inputs,PSi] = mapminmax(inputs); on my input data and saved the settings with save('minmaxParams','PSi'). Then, in my classify function, I added load minmaxParams.mat; input = mapminmax('apply', input, PSi);
  • fermat4214 (about 7 years ago)
    This exporting function is designed for a simple case: an ANN with 1 input, 1 output, and 1 hidden layer. One can easily extend it to ANNs with more layers (a sketch of such an extension is given after these comments).
  • fermat4214 (about 7 years ago)
    I have merged all comments, corrected some typos, and provided an example implementation that works fine for a single-hidden-layer ANN. It includes Sangeun Chi's improvements and corrections.
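
Following up on the last comment, here is a sketch of how the forward pass generalizes to any number of layers (a hypothetical helper, not taken from the answers above; it assumes a plain feedforward network in which each layer is connected only to the previous one, and reads each layer's transfer function from the net object, so it also covers purelin output layers):

function y = classify_multilayer( net, input )
    % First layer: input weights and bias
    y = feval(net.layers{1}.transferFcn, net.IW{1} * input + net.b{1});
    % Remaining layers: layer weights and biases
    for i = 2:net.numLayers
        y = feval(net.layers{i}.transferFcn, net.LW{i, i-1} * y + net.b{i});
    end
end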