What is the role of the bias in neural networks?


Solution 1

I think that biases are almost always helpful. In effect, a bias value allows you to shift the activation function to the left or right, which may be critical for successful learning.

It might help to look at a simple example. Consider this 1-input, 1-output network that has no bias:

[figure: a simple 1-input, 1-output network with no bias]

The output of the network is computed by multiplying the input (x) by the weight (w0) and passing the result through some kind of activation function (e.g. a sigmoid function.)

Here is the function that this network computes, for various values of w0:

[figure: network output for various values of w0]

Changing the weight w0 essentially changes the "steepness" of the sigmoid. That's useful, but what if you wanted the network to output 0 when x is 2? Just changing the steepness of the sigmoid won't really work -- you want to be able to shift the entire curve to the right.

That's exactly what the bias allows you to do. If we add a bias to that network, like so:

[figure: the same network with a bias input added]

...then the output of the network becomes sig(w0*x + w1*1.0). Here is what the output of the network looks like for various values of w1:

[figure: network output for various values of w1]

Having a weight of -5 for w1 shifts the curve to the right, which allows us to have a network that outputs 0 when x is 2.
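
If you want to reproduce curves like those yourself, here is a minimal NumPy/matplotlib sketch (the specific weight values are just illustrative, not taken from the figures above):

    import numpy as np
    import matplotlib.pyplot as plt

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.linspace(-6, 6, 200)

    # Without a bias, changing w0 only changes the steepness of the sigmoid;
    # every curve still passes through (0, 0.5).
    for w0 in [0.5, 1.0, 2.0]:
        plt.plot(x, sigmoid(w0 * x), label=f"no bias, w0={w0}")

    # With a bias weight w1 on a constant input of 1.0, the curve shifts.
    # w0=1, w1=-5 shifts it right so the output is close to 0 at x=2.
    for w1 in [0.0, -5.0]:
        plt.plot(x, sigmoid(1.0 * x + w1 * 1.0), "--", label=f"with bias, w1={w1}")

    plt.legend()
    plt.show()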

Solution 2

A simpler way to understand the bias: it is analogous to the constant b of a linear function

y = ax + b

It allows you to move the line up and down to fit the prediction with the data better.

Without b, the line always goes through the origin (0, 0) and you may get a poorer fit.
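
You can see the same effect numerically. Here is a small NumPy sketch with made-up data points (y = 2x + 3), fitting a line with and without the intercept b:

    import numpy as np

    # Made-up data that clearly does not pass through the origin: y = 2x + 3
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = 2.0 * x + 3.0

    # Least-squares fit without a bias: y = a*x (forced through the origin)
    a_no_bias, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

    # Least-squares fit with a bias: y = a*x + b (append a column of ones)
    X = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

    print("without bias:", a_no_bias)  # slope distorted to 3 to compensate
    print("with bias:   ", a, b)       # recovers slope 2 and intercept 3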

Solution 3

Here are some further illustrations showing the result of a simple 2-layer feed-forward neural network, with and without bias units, on a two-variable regression problem. Weights are initialized randomly and the standard ReLU activation is used. As the answers before me concluded, without the bias the ReLU network is not able to deviate from zero at (0,0).

[figures: learned regression surfaces, with and without bias units]
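
For anyone who wants to experiment, here is a rough sketch of such a forward pass (random weights, my own layer sizes; not the exact network used for the figures):

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(z):
        return np.maximum(0.0, z)

    # A tiny 2-layer network: 2 inputs -> 8 hidden ReLU units -> 1 output.
    W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

    def forward(x, use_bias=True):
        h = relu(W1 @ x + (b1 if use_bias else 0.0))
        return W2 @ h + (b2 if use_bias else 0.0)

    x0 = np.zeros(2)
    print(forward(x0, use_bias=False))  # always exactly 0 at the origin
    print(forward(x0, use_bias=True))   # can be any value at the origin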

Solution 4

Two different kinds of parameters can be adjusted during the training of an ANN: the weights and the value in the activation functions. This is impractical, and it would be easier if only one of the parameters had to be adjusted. To cope with this problem, the bias neuron was invented. The bias neuron lies in one layer, is connected to all the neurons in the next layer but to none in the previous layer, and it always emits 1. Since the bias neuron emits 1, the weights connected to the bias neuron are added directly to the combined sum of the other weights (equation 2.1), just like the t value in the activation functions.

The reason it's impractical is that you would be adjusting the weight and the value simultaneously, so any change to the weight can neutralize a change to the value that was useful for a previous data instance. Adding a bias neuron whose value never changes lets you control the behavior of the layer.
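
In code, the "bias neuron that always emits 1" view and the usual "bias vector" view compute exactly the same thing (a small NumPy sketch; the shapes are my own choice):

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 4))   # weights from 4 inputs to 3 neurons
    b = rng.normal(size=3)        # weights from the always-on bias neuron

    x = rng.normal(size=4)

    # View 1: a separate bias vector added to the weighted sum.
    z1 = W @ x + b

    # View 2: a "bias neuron" that always emits 1, with its own weight column.
    W_aug = np.column_stack([W, b])  # append the bias weights as an extra column
    x_aug = np.append(x, 1.0)        # append the constant 1 to the input
    z2 = W_aug @ x_aug

    print(np.allclose(z1, z2))  # True: both views give the same pre-activation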

Furthermore the bias allows you to use a single neural net to represent similar cases. Consider the AND boolean function represented by the following neural network:

[figure: perceptron network for the AND function]
(source: aihorizon.com)

  • w0 corresponds to b.
  • w1 corresponds to x1.
  • w2 corresponds to x2.

A single perceptron can be used to represent many boolean functions.

For example, if we assume boolean values of 1 (true) and -1 (false), then one way to use a two-input perceptron to implement the AND function is to set the weights w0 = -.8, and w1 = w2 = .5. This perceptron can be made to represent the OR function instead by altering the threshold to w0 = -.3. In fact, AND and OR can be viewed as special cases of m-of-n functions: that is, functions where at least m of the n inputs to the perceptron must be true. The OR function corresponds to m = 1 and the AND function to m = n. Any m-of-n function is easily represented using a perceptron by setting all input weights to the same value (e.g., 0.5) and then setting the threshold w0 accordingly.

Perceptrons can represent all of the primitive boolean functions AND, OR, NAND (¬AND), and NOR (¬OR). (Machine Learning, Tom Mitchell)

The threshold is the bias and w0 is the weight associated with the bias/threshold neuron.
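
Here is a small sketch of that idea. The exact bias values below are my own (chosen so the truth tables check out with the 1/-1 encoding); the point is that only the bias weight w0 changes between AND and OR:

    def perceptron(x1, x2, w0, w1=0.5, w2=0.5):
        """Outputs 1 (true) or -1 (false); w0 is the weight on the constant bias input."""
        return 1 if w0 * 1.0 + w1 * x1 + w2 * x2 > 0 else -1

    inputs = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    # Only the bias weight w0 differs between the two functions.
    print("AND:", [perceptron(x1, x2, w0=-0.8) for x1, x2 in inputs])  # [-1, -1, -1, 1]
    print("OR: ", [perceptron(x1, x2, w0=+0.8) for x1, x2 in inputs])  # [-1, 1, 1, 1]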

Solution 5

The bias is not a neural-network-specific term. It's the familiar constant term from algebra:

Y = M*X + C (straight line equation)

Now if C (the bias) is 0, the line will always pass through the origin (0,0) and depends on only one parameter, the slope M, so we have fewer things to play with.

C, the bias, can take any value and shifts the graph up or down, and hence the model is able to represent more complex situations.

In a logistic regression, the expected value of the target is transformed by a link function to restrict its value to the unit interval. In this way, model predictions can be viewed as primary outcome probabilities as shown:

[figure: the sigmoid (logistic) function, from Wikipedia]

This is the activation in the final layer of the network that turns the neuron on and off. The bias has a role to play here as well: it shifts the curve, which gives us flexibility in fitting the model.
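
As a quick sketch of that last point (made-up numbers): the bias b moves the input value at which the sigmoid crosses 0.5.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([-2.0, 0.0, 2.0])
    w = 1.5

    # Without a bias, the predicted probability is always 0.5 at x = 0.
    print(sigmoid(w * x))        # decision boundary fixed at x = 0

    # With a bias b = -3, the 0.5 boundary moves to x = -b/w = 2.
    b = -3.0
    print(sigmoid(w * x + b))    # decision boundary shifted to x = 2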


Comments

  • Karan
    Karan almost 2 years

    I'm aware of the gradient descent and the back-propagation algorithm. What I don't get is: when is using a bias important and how do you use it?

    For example, when mapping the AND function, when I use two inputs and one output, it does not give the correct weights. However, when I use three inputs (one of which is a bias), it gives the correct weights.

  • Kiril
    Kiril about 14 years
    Think of it as a general rule of thumb: add bias! Neural networks are "unpredictable" to a certain extent, so if you add a bias neuron you're more likely to find solutions faster than if you didn't use a bias. Of course this is not mathematically proven, but it's what I've observed in the literature and in general use.
  • bayer
    bayer about 14 years
    If you use a sigmoid transfer function, you introduce non-linearity. Stating that this is a linear function is wrong and also somehow dangerous, as the non-linearity of the sigmoid is key to the solution of several problems. Also, sigmoid(0) = 0.5, and there is no x for which sigmoid(x) = 0.
  • Debilski
    Debilski about 14 years
    Yeah, but it is 0.5 for any input of 0 without a bias, regardless of what the linear function before looks like. And that’s the point. You don’t normally train your sigmoid function, you just live with it. The linearity problem happens well before the sigmoid function.
  • bayer
    bayer about 14 years
    I get your point: the layer is not able to learn a different output for 0 than the one it started out with. That's correct and important. However, the "linear function argument" just does not apply in my opinion. Even with a bias, the function is still linear. The linearity property is misleading here. (Yes, I might be nitpicking.)
  • Debilski
    Debilski about 14 years
    I’d say, that with a bias it’s affine. ( en.wikipedia.org/wiki/Affine_transformation#Representation )
  • bayer
    bayer about 14 years
    Yes, you're correct. Thanks for pointing out that difference to me. (Why do we call it linear regression then, btw, although it's affine?)
  • Debilski
    Debilski about 14 years
    The regression model itself is linear. It just operates on the augmented input space, where every vector has an element ‘1’ added as the last component. This would probably be different if the bias was fixed, but I’m not sure about that. People might still call it linear even then.
  • Nate Kohl
    Nate Kohl about 11 years
    @user1621769: The simplest approach is a network with a single bias node that connects to all non-input nodes in the network.
  • Gabriel
    Gabriel about 11 years
    would that be the most effective though? it seems to me that a bias unit at the output layer, or 1 per layer would make the prediction better.
  • Nate Kohl
    Nate Kohl about 11 years
    @user1621769: The main function of a bias is to provide every node with a trainable constant value (in addition to the normal inputs that the node receives). You can achieve that with a single bias node with connections to N nodes, or with N bias nodes each with a single connection; the result should be the same.
  • Dimpl
    Dimpl over 8 years
    @user1621769: You might be failing to take weightings into account? Each connection has a trainable weighting, and the bias node has a fixed value.
  • ashwani
    ashwani about 8 years
    In case I don't need to shift the curve to either left or right, is it still necessary to add the bias unit or can I skip it?
  • Nate Kohl
    Nate Kohl about 8 years
    @user132458 It's a little unusual to know such things about your desired solution, but I suppose you could skip it if you know the problem doesn't require it.
  • blue-sky
    blue-sky about 8 years
    nice analogy, but if we set the bias to 1 then why does it make a difference to the fit, now that every line will go through (0,1) instead of (0,0)? As all lines are now biased to y=1 instead of y=0, why is this helpful?
  • Carcigenicate
    Carcigenicate almost 8 years
    @blue-sky Because by multiplying a bias by a weight, you can shift it by an arbitrary amount.
  • Ben
    Ben almost 8 years
    Is it correct to call b a "coefficient"? Isn't a "coefficient" a number used to multiply a variable?
  • jorgenkg
    jorgenkg almost 8 years
    @user132458, if the training algorithm figures out that you don't need the bias shift, the bias weights will probably approach 0. Thus eliminating the bias signal.
  • Espanta
    Espanta over 7 years
    b is not a "coefficient"; rather, it is the intercept.
  • AwokeKnowing
    AwokeKnowing over 7 years
    you're probably better off with normalization. What's an example of a modern network that uses "lack of bias" to produce magnitude invariance?
  • AwokeKnowing
    AwokeKnowing over 7 years
    I seem to remember from Andrew Ng's class that the bias was left out in part of the training process. could you update your answer to explain that considering your conclusion that it's "just another input"?
  • RobMcZag
    RobMcZag over 7 years
    @AwokeKnowing I do not remember that from Andrew Ng's class, but that was a few years ago. Also Bias can be on or off depending what you are trying to learn. I read that in image processing they do not use it to allow scaling. To me if you use it, you use it also in training. The effect is to stabilize coefficients when all or part of the inputs is null or almost null. Why would you not use bias during training and then use it when using the NN to predict outputs for new inputs? How could that be useful?
  • AwokeKnowing
    AwokeKnowing over 7 years
    No, it was more like, use it in the forward pass, but don't use it when calculating the gradient for backprop, or something like that.
  • RobMcZag
    RobMcZag over 7 years
    @AwokeKnowing I suppose that is a way to save some memory and time. You can decide you do not care to learn coefficients for the bias units. That can be fine if you have at least one hidden layer as the bias will provide some input to that layer and the output can be learned by the coefficients from the first to the second layer. I am not sure if the convergence speed will change. In my one layer example you are forced to learn also the bias coefficient as it is applied to the output.
  • Admin
    Admin about 7 years
    b is the coefficient of $x^0$. a is the coefficient of $x^1$
  • Íhor Mé
    Íhor Mé about 7 years
    @AwokeKnowing, I believe the usual ResNet utilizes that, as it's a part of its "initialization", but I'm not exactly sure they did this strictly for this purpose, or maybe for considerations of model size/efficiency, and I'm not sure this concept is published anywhere. But I think it's completely understandable on a theory level. If you don't have a bias that doesn't scale, then when you scale values, all of the outputs simply scale accordingly. Aware of this concept or not, a large part of modern architectures don't have biases in at least a large part of their structures.
  • MANU
    MANU about 7 years
    Then what is the bias value for each node in a neural network? Is it the same thing as the threshold value for each neuron? Because some people use the bias keyword for each node as well, with the number of bias nodes equal to the number of nodes in a layer. Since the BIAS NODE (for the intercept) is only one per layer as per the above explanation... hence this doubt!
  • maikovich
    maikovich about 7 years
    @NateKohl: I really like your answer, but you say something that sounds paradoxical to me: "a trainable constant value"... For me a constant value is a value that does not change. However you write trainable, which means this value is meant to change. So, is a bias constant, or trainable?
  • Nate Kohl
    Nate Kohl about 7 years
    @maikovich: Sorry -- I used 'constant' to differentiate the bias node's input (which doesn't change) from other input nodes (which receive input values that can vary). The weights coming out of the input/bias nodes are the trainable bits. Hope this helps.
  • Juan Zapata
    Juan Zapata about 7 years
    If you see a neural network as a bunch of linear functions connected to non-linear neurons, then you can see that the bias helps each linear function fit the data better by shifting the function along the y-axis.
  • George_E -old
    George_E -old over 6 years
    what would be a bias range? Is it also -1 to 1?
  • endolith
    endolith over 6 years
    In order for the layer to be a universal approximator, a bias with independent weights for each neuron is necessary, correct?
  • endolith
    endolith over 6 years
    @JuanCamiloZapata Doesn't the bias shift the function along the x-axis?
  • Admin
    Admin over 6 years
    @Gabriel: There should be one bias per hidden neuron.
  • Carsten
    Carsten almost 6 years
    Yes! In particular, without a bias, 0 would be a fixed point for many neural nets (as 0 is also a fixed point for many activation functions). I.e. for input 0 the NN would always output 0 with no chance of learning to output a different value.
  • Ajay
    Ajay over 5 years
    @Gabriel There should be one bias per layer in the network
  • jcm69
    jcm69 over 5 years
    @Ajay we need 1 bias per neuron in order to have each sigmoid (for each neuron) shifted left or right. One bias for a whole layer would only shift all the sigmoids left or right at the same time during the learning. And that's not what we want. We want each neuron of a given layer to compete with the others in order to represent its own "knowledge".
  • Hayat
    Hayat almost 5 years
    The explanation is nice. But how is the output 2 here? I can't figure it out. Please help
  • Libin Wen
    Libin Wen over 4 years
    Nice interpretation. But I am not clear how the bias is learn-able (or trainable?). In a simple case where loss = ReLU(omega * x + a), the weight omega can be trained through the chain-rule, but how can the bias a be trained when the gradient to a is always a constant?
  • Komal-SkyNET
    Komal-SkyNET over 4 years
    Can you please add the function that's plotted with axis labels?
  • IRTFM
    IRTFM about 4 years
    That sounds completely analogous to the process of modeling systems "by hand" with linear regression. The simplest model would be Y_bar=mean(Y). Then you add complexity by including various X terms, stopping when there is no significant information gain.
  • IRTFM
    IRTFM about 4 years
    In many simple problems, the target data has been demeaned and scaled, so no bias is needed, and the potential for excessive outlier influence in variables with large ranges is reduced.
  • Py_Student
    Py_Student about 4 years
    Thanks for your answer - the top answer says it shifts it left and right? Can you please clarify?
  • JP K.
    JP K. over 3 years
    Sorry, the points are just randomly chosen. There is no real function behind them.
  • Burak Kaymakci
    Burak Kaymakci over 3 years
    The link is dead.
  • Íhor Mé
    Íhor Mé over 3 years
    Take this with a grain of salt, though, as I don't currently remember whether, back when I tested this, I had accounted for batchNorm introducing its own bias that removes the need for a bias. It's kind of a nice idea in theory, but please don't trust it blindly; test.
  • mon
    mon about 3 years
    Can you share the code doing the animation?
  • JP K.
    JP K. about 3 years
    Are you still interested in this? What would be the best way to share code on stackoverflow?
  • Syntax Hacker
    Syntax Hacker over 2 years
    @JPK. share it as github link please
  • JP K.
    JP K. over 2 years
  • Holdsworth
    Holdsworth over 2 years
    You could've formulated your answer in a more visually appealing manner.
  • Jason
    Jason about 2 years
    This is correct. Batch normalization, which is commonly used in modern architectures subsumes bias. See section 3.1 arxiv.org/pdf/1502.03167.pdf