What is num_units in tensorflow BasicLSTMCell?

Solution 1

The number of hidden units is a direct measure of the learning capacity of a neural network -- it reflects the number of learned parameters. The value 128 was likely selected arbitrarily or empirically. You can change that value experimentally and rerun the program to see how it affects the training accuracy (you can get better than 90% test accuracy with far fewer hidden units). Using more units makes it more likely that the network will perfectly memorize the complete training set (although it will take longer, and you run the risk of overfitting).
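
To see how num_units translates into learned parameters, here is a minimal back-of-the-envelope sketch. It assumes the standard LSTM parameterization that BasicLSTMCell uses: one combined kernel of shape [input_size + num_units, 4 * num_units] plus a bias of length 4 * num_units.

    # Rough parameter count for a standard LSTM layer.
    def lstm_param_count(input_size, num_units):
        # kernel: [input_size + num_units, 4 * num_units], bias: [4 * num_units]
        return (input_size + num_units) * 4 * num_units + 4 * num_units

    # MNIST rows as inputs: 28 features per timestep.
    for n in (32, 64, 128, 256):
        print(n, lstm_param_count(28, n))

Note that the count grows roughly quadratically with num_units, which is why more units both increase capacity and slow down training.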

The key thing to understand, which is somewhat subtle in Colah's famous blog post (search for "each line carries an entire vector"), is that X is an array of data (nowadays often called a tensor) -- it is not meant to be a scalar value. Where, for example, the tanh function is shown, it is meant to imply that the function is broadcast across the entire array (an implicit for loop) -- and not simply performed once per time-step.
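
Here is a small numpy illustration of that implicit broadcasting (the values are arbitrary):

    import numpy as np

    # In the diagrams, tanh is applied elementwise to a whole batch of
    # vectors at once -- the for loop over elements is implicit.
    x = np.array([[0.5, -1.2, 3.0],
                  [0.0,  2.0, -0.7]])  # shape [batch, features]
    print(np.tanh(x))                  # same shape as x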

As such, the hidden units represent tangible storage within the network, which is manifest primarily in the size of the weights array. And because an LSTM actually does have a bit of its own internal storage separate from the learned model parameters, it has to know how many units there are -- which ultimately needs to agree with the size of the weights. In the simplest case, an RNN has no internal storage -- so it doesn't even need to know in advance how many "hidden units" it is being applied to.
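
A quick TF 1.x-style sketch of that internal storage (the exact module path varies across 1.x versions, so treat this as illustrative):

    import tensorflow as tf

    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128)
    state = cell.zero_state(batch_size=32, dtype=tf.float32)

    # Both pieces of the LSTM's internal state are sized by num_units:
    print(state.c.shape)  # (32, 128) -- the cell state
    print(state.h.shape)  # (32, 128) -- the hidden state / output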


  • A good answer to a similar question here.
  • You can look at the source for BasicLSTMCell in TensorFlow to see exactly how this is used.

Side note: This notation is very common in statistics, machine learning, and other fields that process large batches of data with a common formula (3D graphics is another example). It takes a bit of getting used to for people who expect to see their for loops written out explicitly.

Solution 2

From this brilliant article

num_units can be interpreted as the analogue of the hidden layer in a feed-forward neural network. The number of nodes in the hidden layer of a feed-forward network is equivalent to the num_units LSTM units in an LSTM cell at every time step of the network.

See the image there too!
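
As a hedged sketch of the analogy in TF 1.x-style code (names and sizes are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 28])  # one timestep of features

    # A feed-forward hidden layer with 128 nodes...
    hidden = tf.layers.dense(x, units=128, activation=tf.tanh)

    # ...plays the same role as num_units=128 in an LSTM cell:
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128)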

Solution 3

The argument n_hidden of BasicLSTMCell is the number of hidden units of the LSTM.

As you said, you should really read Colah's blog post to understand LSTMs, but here is a little heads-up.


If you have an input x of shape [T, 10], you will feed the LSTM with the sequence of values from t=0 to t=T-1, each of size 10.

At each timestep, you multiply the input with a matrix of shape [10, n_hidden], and get an n_hidden-dimensional vector.

Your LSTM gets at each timestep t:

  • the previous hidden state h_{t-1}, of size n_hidden (at t=0, the previous state is [0., 0., ...])
  • the input, transformed to size n_hidden
  • it will sum these inputs (inside each gate) and produce the next hidden state h_t, of size n_hidden

[Figure: the LSTM cell, from Colah's blog post]
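
To make the shapes concrete, here is a minimal numpy sketch of one timestep's bookkeeping (simplified to a plain RNN update; a real LSTM routes these sums through its gates, and all names here are illustrative):

    import numpy as np

    T, input_size, n_hidden = 5, 10, 128

    x = np.random.randn(T, input_size)         # the input sequence
    W = np.random.randn(input_size, n_hidden)  # input-to-hidden weights
    U = np.random.randn(n_hidden, n_hidden)    # hidden-to-hidden weights

    h = np.zeros(n_hidden)                     # at t=0: [0., 0., ...]
    for t in range(T):
        # transformed input + previous hidden state, squashed elementwise
        h = np.tanh(x[t] @ W + h @ U)
    print(h.shape)                             # (128,) -- one h_t per step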


If you just want to get the code working, just keep n_hidden = 128 and you will be fine.

Solution 4

Since I had some problems combining the information from the different sources, I created the graphic below, which combines the blog post (http://colah.github.io/posts/2015-08-Understanding-LSTMs/) and (https://jasdeep06.github.io/posts/Understanding-LSTM-in-Tensorflow-MNIST/). I think those graphics are very helpful, but there is an error in how the latter explains number_units.

Several LSTM cells form one LSTM layer. This is shown in the figure below. Since you are mostly dealing with data that is very extensive, it is not possible to feed everything into the model in one piece. Therefore, the data is divided into small pieces, called batches, which are processed one after the other until the batch containing the last part is read in. In the lower part of the figure you can see the input (dark grey), where the batches are read in one after the other, from batch 1 to batch batch_size.

The cells LSTM cell 1 to LSTM cell time_step above represent the described cells of the LSTM model (http://colah.github.io/posts/2015-08-Understanding-LSTMs/). The number of cells is equal to the number of fixed time steps. For example, if you take a text sequence with a total of 150 characters, you could divide it into 3 pieces (batch_size) and have a sequence of length 50 per batch (the number of time_steps, and thus the number of LSTM cells). If you then encoded each character one-hot, each element (the dark grey boxes of the input) would represent a vector that has the length of the vocabulary (the number of features).

These vectors flow into the neural networks (green elements in the cells) in the respective cells and change their dimension to the number of hidden units (number_units). So the input has dimension (batch_size x time_step x features). The long-term memory (cell state) and the short-term memory (hidden state) have the same dimensions (batch_size x number_units). The light grey blocks that arise from the cells have a different dimension, because the transformations in the neural networks (green elements) took place with the help of the hidden units: (batch_size x time_step x number_units).

The output can be returned from any cell, but mostly only the information from the last block (black border) is relevant (not in all problems), because it contains all the information from the previous time steps.

[Figure: LSTM architecture diagram combining both sources]
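
A hedged TF 1.x-style sketch of those dimensions (sizes are illustrative, loosely matching the text's example):

    import tensorflow as tf

    batch_size, time_step, features, number_units = 3, 50, 84, 128

    # Input: (batch_size x time_step x features)
    inputs = tf.placeholder(tf.float32, [batch_size, time_step, features])

    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=number_units)
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

    print(outputs.shape)  # (3, 50, 128) -- batch_size x time_step x number_units
    print(state.c.shape)  # (3, 128) -- cell state:   batch_size x number_units
    print(state.h.shape)  # (3, 128) -- hidden state: batch_size x number_units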

Solution 5

An LSTM keeps two pieces of information as it propagates through time:

  • a hidden state, which is the memory the LSTM accumulates using its (forget, input, and output) gates through time, and
  • the previous time-step output.

TensorFlow's num_units is the size of the LSTM's hidden state (which is also the size of the output if no projection is used).

To make the name num_units more intuitive, you can think of it as the number of hidden units in the LSTM cell, or the number of memory units in the cell.
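
A short TF 1.x-style sketch of the projection caveat (num_proj is the optional output projection on the full LSTMCell; sizes are illustrative):

    import tensorflow as tf

    # Without a projection, the output size equals num_units:
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=128)
    print(cell.output_size)  # 128

    # With a projection, the output is projected down to num_proj:
    proj = tf.nn.rnn_cell.LSTMCell(num_units=128, num_proj=64)
    print(proj.output_size)  # 64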

Look at this awesome post for more clarity.

Comments

  • Subrat
    Subrat over 2 years

    In MNIST LSTM examples, I don't understand what "hidden layer" means. Is it the imaginary layer formed when you represent an unrolled RNN over time?

    Why is num_units = 128 in most cases?

    • nbro
      nbro over 6 years
      I'd like to note that the authors of that tutorial (that is, the one the OP is linking to) have changed the name of the variables, including num_units to num_hidden. There's now a comment in front of that variable saying hidden layer num of features.
    • Subrat
      Subrat over 6 years
      Sure, I've modified it accordingly.
  • Subrat
    Subrat almost 8 years
    "the input, transformed to size n_hidden" is totally cool when done like you say, with matrix multiplication. But in the mnist code example i mentioned, he is seems to be juggling all the vectors values in the batch at : x = tf.transpose(x, [1, 0, 2]) ... , to get 28 x 128 x 28 shape. I dont get that .
  • Olivier Moindrot
    Olivier Moindrot almost 8 years
    The RNN iterates over each row of the image. In the code of the RNN function, they want to get a list of length 128 (the number of steps, or number of rows of the image), with each element of shape [batch_size, row_size], where row_size = 28 (the size of a row of the image). (See the shape sketch at the end of this page.)
  • Subrat
    Subrat almost 8 years
    Is there an upper limit to the input layer size in tf? I get a segfault when increasing the dimension to a thousand plus, and it's fine with less. Also, shouldn't it be "...they want to get a list of length 28..." there ^
  • Olivier Moindrot
    Olivier Moindrot almost 8 years
    Yes, you are right, it should be 28. The only limit to the size of the input is the memory of your GPU. If you want to use a higher input dimension, you should adapt your batch size so that it fits into memory.
  • Brent Bradburn
    Brent Bradburn over 7 years
    Further questions: How much total memory is involved? How are the weights connected to the LSTM units? Note: See TensorBoard graph visualizations.
  • Brent Bradburn
    Brent Bradburn over 7 years
    I recommend LSTM: A Search Space Odyssey sections 1-3.
  • Brent Bradburn
    Brent Bradburn over 7 years
    Looks like there was a followup in the comments here: RNNs in TensorFlow, a Practical Guide and Undocumented Features
  • Brent Bradburn
    Brent Bradburn over 7 years
    Did I get it right: "a simple RNN doesn't need to know in advance how many hidden units"? Doesn't it need to know that to construct the weights that map between the units -- which grow in count exponentially based on the number of units (even in the simplest RNN). I think that I didn't understand that aspect of the architecture when I wrote this answer (see my first comment). But note that graph visualizations don't tend to help due to the array-based notation.
  • Brent Bradburn
    Brent Bradburn over 7 years
    ...Kind of funny that, using an array-based notation, a data path with an exponential signal count can be represented by a single dark line.
  • Ramesh-X
    Ramesh-X over 6 years
    And tf.nn.dynamic_rnn will feed the RNN with data for each time step.
  • nbro
    nbro over 6 years
    This answer contains unnecessary details and is a bit confusing. The answer is, in my opinion, more concise and gives the idea more rapidly.
  • Subrat
    Subrat over 6 years
    It's definitely a good answer, but the part "num_units number of LSTM units in a LSTM cell" might not help if someone is really confused, like I was. I think a figure showing the scaling up of the vector dimension in a seq-to-seq single-layered LSTM training session, with a note about the weight updates using the gate equations, would be awesome.
  • Subrat
    Subrat over 6 years
    This answer starts by solving the '128' problem, then the next paragraphs help with how the sequence is processed.
  • Biranchi
    Biranchi almost 6 years
    Excellent block diagram for the LSTM. Can you explain with a diagram what exactly is inside the units in num_units of each LSTM cell, as each LSTM cell contains an input gate, an output gate, and a forget gate?
  • Arraval
    Arraval almost 6 years
    @Biranchi, inside the LSTM cell are LSTM units. In the article cited, each of the num_units in each LSTM cell receives one pixel of a certain row of an image. The size of the image is 28x28 pixels. In the example, they used 28 num_units and 28 LSTM cells. Basically, each cell works on a given row of the image.
  • Spandyie
    Spandyie almost 6 years
    This figure perfectly summarizes everything
  • Vedanshu
    Vedanshu over 5 years
    You say "the input to the LSTM cell is a vector of dimension nhid". But the input is generally of shape [batch, T, input] where the input can be of any shape. So, when input is dynamically unrolled we would have an input of [b,t, input]. RNN would transform it as [b,t, nhid]. So, the output would be shape nhid not the input.
  • HARSH NILESH PATHAK
    HARSH NILESH PATHAK over 4 years
    Good answer. You usually have embeddings for your input data, so assume one per word for simplicity. Let's say each word has a distributed representation of 150 dimensions, which are the features in the above diagram. Then num_units acts as the dimensionality of the RNN/LSTM cell (say 128). So 150 -> 128, and hence the output dimension will be 128. Batch size and time_steps remain as they are.
  • Brent Bradburn
    Brent Bradburn almost 4 years
    At the time of my original answer, the question specifically referenced an example here: github.com/aymericdamien/TensorFlow-Examples/blob/master/…
  • Alexis
    Alexis over 3 years
    Amazing illustrations, thanks for sharing. It finally explains what these units that confuse everybody are. I never understood why RNNs are not explained like this.
  • Mr.O
    Mr.O over 3 years
    This answer contradicts the other answers in this post.
  • Matt
    Matt over 3 years
    If you had the sentence "the dog ate the food" and each word corresponds to a single input, is the full sentence being input at an individual timestep (t = 0, for example), as opposed to each word being input into a unit at the next timestep, i.e. "the" (t = 0), "dog" (t = 1), etc.? I'm really confused, to be honest.
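
To address the shape juggling discussed in the comments above, here is a minimal numpy sketch of the transpose in the MNIST example (batch size 128, 28x28 images; shapes only, no learning):

    import numpy as np

    batch_size, n_steps, n_input = 128, 28, 28

    x = np.random.randn(batch_size, n_steps, n_input)  # [batch, rows, row_size]

    # tf.transpose(x, [1, 0, 2]) makes the data time-major:
    x = x.transpose(1, 0, 2)                           # [n_steps, batch, row_size]

    # The static RNN wants a list of n_steps arrays, each [batch, row_size]:
    steps = list(x)
    print(len(steps), steps[0].shape)                  # 28 (128, 28)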