Unable to approximate the sine function using a neural network


Solution 1

Use a linear output unit.

Here is a simple example using R:

set.seed(1405)
x <- sort(10*runif(50))                  # 50 random inputs in [0, 10]
y <- sin(x) + 0.2*rnorm(x)               # noisy sine targets

library(nnet)
nn <- nnet(x, y, size=6, maxit=40, linout=TRUE)   # 6 hidden units, linear output
plot(x, y)                                        # noisy training data
plot(sin, 0, 10, add=TRUE)                        # true sine curve
x1 <- seq(0, 10, by=0.1)
lines(x1, predict(nn, data.frame(x=x1)), col="green")   # network prediction

[plot: neural net prediction]
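
For contrast, here is a minimal sketch of what happens if you keep nnet's default sigmoid output instead (it reuses x, y and x1 from the example above; the name nn_sig is just for illustration): the output unit can only produce values in (0, 1), so the negative half of the sine wave is lost.

# Same data and hidden-layer size, but with the default sigmoid (logistic) output.
# Predictions stay inside (0, 1) and cannot follow the negative lobes of sin(x).
nn_sig <- nnet(x, y, size=6, maxit=40, linout=FALSE)
lines(x1, predict(nn_sig, data.frame(x=x1)), col="red")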

Solution 2

When you train the network, normalize the target (the sin function) to the range [0, 1]; then you can keep the sigmoid transfer function.

sin(x) in [-1,1]  =>  0.5*(sin(x)+1) in [0,1]

Training data:
    input    target     target_normalized
    -------------------------------------
    0        0.00000    0.50000
    pi/4     0.70711    0.85355
    pi/2     1.00000    1.00000
    ...

Note that we map the target before training. Once the network is trained and simulated, you can map its output back to the original range (here, 2*output - 1).


The following MATLAB code illustrates this:

%% input and target
input = linspace(0,4*pi,200);
target = sin(input) + 0.2*randn(size(input));

%% map targets from [-1,1] to [0,1]
[targetMinMax,mapping] = mapminmax(target,0,1);

%% create network (one hidden layer with 6 nodes)
net = newfit(input, targetMinMax, [6], {'tansig' 'tansig'});
net.trainParam.epochs = 50;
view(net)

%% training
net = init(net);                            % init
[net,tr] = train(net, input, targetMinMax); % train
output = sim(net, input);                   % predict

%% view prediction
plot(input, mapminmax('reverse', output, mapping), 'r', 'linewidth',2), hold on
plot(input, target, 'o')
plot(input, sin(input), 'g')
hold off
legend({'predicted' 'target' 'sin()'})

[plot: network output]
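
For comparison with Solution 1's R example, the same normalize-then-map-back idea can be sketched with nnet by keeping its default sigmoid output (this reuses x, y and x1 from Solution 1; the names y01, nn2 and pred are just for illustration):

library(nnet)
y01 <- 0.5*(y + 1)                        # map targets from [-1, 1] to [0, 1]
nn2 <- nnet(x, y01, size=6, maxit=100)    # default sigmoid output (linout=FALSE)
pred01 <- predict(nn2, data.frame(x=x1))  # predictions lie in (0, 1)
pred <- 2*pred01 - 1                      # map back to [-1, 1]
plot(x, y)
lines(x1, pred, col="blue")               # de-normalized prediction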

Comments

  • Vincent Floriot, over 4 years ago:

    I am trying to approximate the sine() function using a neural network I wrote myself. I have tested my neural network on a simple OCR problem already and it worked, but I am having trouble applying it to approximate sine(). My problem is that during training my error converges on exactly 50%, so I'm guessing it's completely random.

    I am using one input neuron for the input (0 to PI) and one output neuron for the result. I have a single hidden layer in which I can vary the number of neurons; I'm currently trying around 6-10.

    I have a feeling the problem is that I am using the sigmoid transfer function (a requirement in my application), which only outputs values between 0 and 1, while sine() outputs values between -1 and 1. To correct this I tried multiplying the output by 2 and then subtracting 1, but this didn't fix the problem. I'm thinking I have to do some kind of conversion somewhere to make this work.

    Any ideas?