What is the meaning of the word logits in TensorFlow?


Solution 1

Logits is an overloaded term which can mean many different things:


In math, logit is a function that maps probabilities (0, 1) to the real line (-inf, +inf): logit(p) = log(p / (1 - p)).

[plot of the logit function, mapping probabilities to log-odds]

A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities less than 0.5, positive logits to probabilities greater than 0.5.

In ML, it can be

the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become an input to the softmax function. The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.

Logits also sometimes refer to the element-wise inverse of the sigmoid function.
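
For illustration, a minimal NumPy sketch (the logit values are made up) of how raw logits relate to softmax probabilities:

import numpy as np

# Hypothetical raw (non-normalized) predictions from a model's last layer.
logits = np.array([2.0, 1.0, 0.1])

def softmax(z):
    z = z - z.max()              # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
print(probs, probs.sum())        # ~[0.659 0.242 0.099], sums to 1.0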

Solution 2

Just adding this clarification so that anyone who scrolls down this far can at least get it right, since there are so many wrong answers upvoted.

Diansheng's answer and JakeJ's answer get it right.
A newer answer posted by Shital Shah is even better and more complete.


Yes, logit is a mathematical function in statistics, but the logit used in the context of neural networks is different. The statistical logit doesn't even make sense here.


I couldn't find a formal definition anywhere, but logit basically means:

The raw predictions which come out of the last layer of the neural network.
1. This is the very tensor on which you apply the argmax function to get the predicted class.
2. This is the very tensor which you feed into the softmax function to get the probabilities for the predicted classes.


Also, from a tutorial on the official TensorFlow website:

Logits Layer

The final layer in our neural network is the logits layer, which will return the raw values for our predictions. We create a dense layer with 10 neurons (one for each target class 0–9), with linear activation (the default):

logits = tf.layers.dense(inputs=dropout, units=10)

If you are still confused, the situation is like this:

raw_predictions = neural_net(input_layer)
predicted_class_index_by_raw = argmax(raw_predictions)
probabilities = softmax(raw_predictions)
predicted_class_index_by_prob = argmax(probabilities)

where predicted_class_index_by_raw and predicted_class_index_by_prob will be equal.

Another name for raw_predictions in the above code is logit.
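
To make the sketch above concrete, here is a small runnable NumPy version (the raw prediction values are hypothetical); argmax over the logits and argmax over the softmax probabilities agree, because softmax is monotonic:

import numpy as np

raw_predictions = np.array([1.2, -0.5, 3.1, 0.7])   # logits from the last layer

probabilities = np.exp(raw_predictions) / np.exp(raw_predictions).sum()

predicted_class_index_by_raw = np.argmax(raw_predictions)    # 2
predicted_class_index_by_prob = np.argmax(probabilities)     # 2
assert predicted_class_index_by_raw == predicted_class_index_by_prob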


As for why it is called logit... I have no idea. Sorry.
[Edit: See this answer for the historical motivations behind the term.]


Trivia

That said, if you want to, you can apply the statistical logit to the probabilities that come out of the softmax function.

If the probability of a certain class is p,
then the log-odds of that class is L = logit(p).

Also, the probability of that class can be recovered as p = sigmoid(L), using the sigmoid function.

Not very useful to calculate log-odds though.
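
A tiny sketch of this round trip (with a made-up probability), showing that logit and sigmoid are inverses:

import numpy as np

def logit(p):
    return np.log(p / (1 - p))      # log-odds

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

p = 0.8
L = logit(p)        # ~1.386
print(sigmoid(L))   # recovers ~0.8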

Solution 3

Summary

In the context of deep learning, the logits layer is the layer that feeds into softmax (or another such normalization). The output of the softmax is the probabilities for the classification task, and its input is the logits layer. The logits layer typically produces values from -infinity to +infinity, and the softmax layer transforms them to values between 0 and 1.

Historical Context

Where does this term come from? In the 1930s and '40s, several people were trying to adapt linear regression to the problem of predicting probabilities. However, linear regression produces output from -infinity to +infinity, while for probabilities our desired output is 0 to 1. One way to do this is to somehow map the probabilities from 0 to 1 onto -infinity to +infinity and then use linear regression as usual. One such mapping is the cumulative normal distribution, which was used by Chester Ittner Bliss in 1934; he called this the "probit" model, short for "probability unit". However, this function is computationally expensive and lacks some of the desirable properties for multi-class classification. In 1944 Joseph Berkson used the function log(p/(1-p)) to do this mapping and called it logit, short for "logistic unit". The term logistic regression derives from this as well.

The Confusion

Unfortunately, the term logits is abused in deep learning. From a purely mathematical perspective, logit is a function that performs the above mapping. In deep learning, people started calling the layer that feeds into the softmax function the "logits layer" (softmax being the multi-class counterpart of the inverse logit, i.e. the logistic function). Then people started calling the output values of this layer "logits", creating confusion with logit the function.

TensorFlow Code

Unfortunately, TensorFlow code further adds to the confusion with names like tf.nn.softmax_cross_entropy_with_logits. What does logits mean here? It just means that the input of the function is supposed to be the output of the last neuron layer, as described above. The _with_logits suffix is redundant, confusing and pointless. Functions should be named without regard to such very specific contexts, because they are simply mathematical operations that can be performed on values derived from many other domains. In fact, TensorFlow has another similar function, sparse_softmax_cross_entropy, where they fortunately forgot to add the _with_logits suffix, creating inconsistency and adding to the confusion. PyTorch, on the other hand, simply names its functions without these kinds of suffixes.
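
A hedged sketch of what "with logits" means in practice (the values below are made up): the function expects the raw last-layer outputs and applies the softmax internally, so you should not apply softmax yourself first.

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])   # raw last-layer outputs
labels = tf.constant([[1.0, 0.0, 0.0]])   # one-hot target

# Softmax is applied inside the function; passing softmax(logits) here
# would silently apply it twice and give a wrong, less stable loss.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss.numpy())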

Reference

The Logit/Probit lecture slides are one of the best resources for understanding logit. I have also updated the Wikipedia article with some of the above information.

Solution 4

Logit is a function that maps probabilities (0, 1) to (-inf, +inf).

Softmax is a function that maps (-inf, +inf) to (0, 1), similar to the sigmoid. But softmax also normalizes the sum of the values (the output vector) to be 1.

TensorFlow "with_logits": it means that you are applying a softmax function to the logit numbers to normalize them. The input vector (the logits) is not normalized and can range over (-inf, +inf).

This normalization is used for multi-class classification problems. For multi-label classification problems, sigmoid normalization is used instead, i.e. tf.nn.sigmoid_cross_entropy_with_logits.
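
A small sketch (with made-up values) contrasting the two normalizations: softmax cross-entropy for multi-class problems (exactly one correct class) versus element-wise sigmoid cross-entropy for multi-label problems (each class decided independently).

import tensorflow as tf

logits = tf.constant([[1.5, -0.5, 2.0]])

# Multi-class: one-hot label, softmax normalization across classes.
onehot = tf.constant([[0.0, 0.0, 1.0]])
mc_loss = tf.nn.softmax_cross_entropy_with_logits(labels=onehot, logits=logits)

# Multi-label: each label is an independent 0/1, sigmoid applied element-wise.
multilabel = tf.constant([[1.0, 0.0, 1.0]])
ml_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=multilabel, logits=logits)

print(mc_loss.numpy(), ml_loss.numpy())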

Solution 5

My personal understanding is that, in the TensorFlow domain, logits are the values to be used as input to softmax. I came to this understanding based on this TensorFlow tutorial.

https://www.tensorflow.org/tutorials/layers


Although it is true that logit is a function in maths (especially in statistics), I don't think that's the same "logit" you are looking at. In the book Deep Learning by Ian Goodfellow, he mentions:

The function σ⁻¹(x) is called the logit in statistics, but this term is more rarely used in machine learning. σ⁻¹(x) stands for the inverse of the logistic sigmoid function.

In TensorFlow, it is frequently seen as the name of the last layer. In Chapter 10 of the book Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron, I came across this paragraph, which states the logits layer clearly:

note that logits is the output of the neural network before going through the softmax activation function: for optimization reasons, we will handle the softmax computation later.

That is to say, although we use softmax as the activation function of the last layer in our design, for ease of computation we take the logits out separately. This is because it is more efficient and numerically more stable to calculate softmax and the cross-entropy loss together. Remember that cross-entropy is a cost function, not used in forward propagation.
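
A sketch of "handling the softmax computation later" in Keras terms (this is a made-up example, not the book's exact network): the last layer stays linear and emits logits, and the loss applies the softmax internally via from_logits=True.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10)                    # logits layer: no activation
])

# The loss combines softmax and cross-entropy internally for stability.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])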

Author: Milad P.

I'm a UC Irvine graduate student in the field of cosmology.

Updated on July 08, 2022

Comments

  • Milad P.
    Milad P. almost 2 years

    In the following TensorFlow function, we must feed the activation of the artificial neurons in the final layer. That I understand. But I don't understand why it is called logits. Isn't that a mathematical function?

    loss_function = tf.nn.softmax_cross_entropy_with_logits(
         logits = last_layer,
         labels = target_output
    )
    
  • thertweck
    thertweck almost 7 years
    For TensorFlow: it's a name that is thought to imply that this tensor is the quantity being mapped to probabilities by the softmax.
  • Charlie Parker
    Charlie Parker over 6 years
    Is this just the same as the thing that gets exponentiated before the softmax? I.e. softmax(logit) = exp(logit)/Z(logit), with logit = h_NN(x)? So is logit the same as the "score"?
  • Charlie Parker
    Charlie Parker over 6 years
    so logit is the same as the "score"
  • dleal
    dleal over 6 years
    I am not sure whether this answers the question. Maybe that is why it was never accepted. I understand what the logit function is, but it also puzzles me why TensorFlow calls these arguments logits. It is also the same designation for several of the parameters in TensorFlow's functions.
  • AneesAhmed777
    AneesAhmed777 almost 6 years
    I suggest adding a line in your answer explicitly differentiating the logit function (statistics) and the logits layer (TensorFlow).
  • AneesAhmed777
    AneesAhmed777 almost 6 years
    That's in statistics/maths. We are talking machine learning here, where logit has a different meaning. See this, this, this.
  • Tina Liu
    Tina Liu almost 5 years
    Great! Can you make a simple example? Is this right? [1, 0.5, 0.5] through normalization becomes [0.5, 0.25, 0.25], and then softmax becomes [0,] if one-hot is [1, 0, 0]? Or just output [1, 0, 0], because the output should be a vector?
  • LotiLotiLoti
    LotiLotiLoti over 3 years
    In TensorFlow, logit refers to the unscaled output of a layer, which can be input into any activation function, not just Softmax. This is illustrated by tf.nn.sigmoid_cross_entropy_with_logits. The with_logits suffix simply denotes that unscaled logit output should be passed, not the prediction output from the final layer activation function that is typically used by error functions such as tf.keras.losses.MSE or tf.keras.losses.CategoricalCrossentropy
  • iacob
    iacob about 3 years
    "From pure mathematical perspective logit is a function that performs above mapping." This section is wrong. It's common in statistics to call the logit of a probability itself the "logits". Also, regarding "the layer that feeds in to logit function": the softmax function isn't the logit function, but its inverse, the (multinomial) logistic function.
  • Giancarlo Sportelli
    Giancarlo Sportelli about 3 years
    Example: the last layer of a NN returns a tensor k = [1, 2, 3, 4, 1, 2, 3] of logits, which classifies an input as belonging to one of 7 possible categories. Note that the fourth logit is the biggest, i.e., the fourth category is the most probable, and that it is four times the first one. k is fed to a softmax function that returns the probabilities [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175] that the input belongs to each category (softmax is a normalized exponential function). Note that the fourth probability is about 20 times the first one (e^(4-1), due to the exponential in softmax) and that they all sum to 1. A short NumPy check of these numbers follows below.
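
A quick NumPy verification (sketch) of the numbers in the example above:

import numpy as np

k = np.array([1, 2, 3, 4, 1, 2, 3], dtype=float)   # logits from the last layer
p = np.exp(k) / np.exp(k).sum()                     # softmax

print(np.round(p, 3))   # [0.024 0.064 0.175 0.475 0.024 0.064 0.175]
print(p.sum())          # 1.0
print(p[3] / p[0])      # ~20.09, i.e. e**3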