TensorFlow Embedding Lookup

There is already an answer here on what tf.nn.embedding_lookup does.


When tried with the same params and a 2-D array of ids, tf.nn.embedding_lookup returns a 3-D array instead of a 2-D one, and I do not understand why.

When you had a 1-D list of ids [0, 1], the function would return a list of embeddings [embedding_0, embedding_1], where embedding_0 is an array of shape [embedding_size]. For instance, the list of ids could be a batch of words.
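
A minimal sketch of this 1-D case (TF 1.x graph style, matching the era of the scripts discussed; the values in params are made up for illustration):

import tensorflow as tf

embedding_size = 4
params = tf.constant([[0.0] * embedding_size,   # embedding_0
                      [1.0] * embedding_size,   # embedding_1
                      [2.0] * embedding_size,   # embedding_2
                      [3.0] * embedding_size])  # embedding_3

ids = tf.constant([0, 1])                     # a 1-D batch of word ids
lookup = tf.nn.embedding_lookup(params, ids)  # shape [2, embedding_size]

with tf.Session() as sess:
    print(sess.run(lookup))  # [[0. 0. 0. 0.]
                             #  [1. 1. 1. 1.]]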

Now you have a matrix of ids, or a list of lists of ids. For instance, you now have a batch of sentences, i.e. a batch of lists of words, i.e. a list of lists of words.

If your list of sentences is [[0, 1], [0, 3]] (sentence 1 is [0, 1], sentence 2 is [0, 3]), the function will compute a matrix of embeddings, which will be of shape [2, 2, embedding_size] and will look like:

[[embedding_0, embedding_1],
 [embedding_0, embedding_3]]
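
A matching sketch for the 2-D case (again TF 1.x style; params here is an illustrative 5-row matrix so that id 3 exists):

import tensorflow as tf

embedding_size = 4
params = tf.constant([[float(i)] * embedding_size for i in range(5)])  # rows 0..4

ids = tf.constant([[0, 1],
                   [0, 3]])                   # a batch of two "sentences"
lookup = tf.nn.embedding_lookup(params, ids)  # shape [2, 2, embedding_size]

with tf.Session() as sess:
    print(sess.run(lookup).shape)  # (2, 2, 4): one embedding per id, ids kept in place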

Concerning the partition_strategy argument, you don't have to worry about it. Basically, it allows you to give a list of embedding matrices as params instead of a single matrix, if you run into computational limitations.

So, you could split your embedding matrix of shape [1000, embedding_size] into ten matrices of shape [100, embedding_size] and pass this list of Variables as params. The argument partition_strategy handles the distribution of the vocabulary (the 1000 words) among the 10 matrices.
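
As a rough sketch of that setup (TF 1.x API, where partition_strategy is an explicit argument and 'mod' is the default; the shard variable names are made up for illustration):

import tensorflow as tf

embedding_size = 8
# Ten shards of 100 rows each instead of one [1000, embedding_size] matrix.
shards = [tf.get_variable("emb_shard_%d" % i, [100, embedding_size])
          for i in range(10)]

ids = tf.constant([3, 515, 999])
# With the 'mod' strategy, id 3 maps to shard 3 % 10 = 3 (row 0),
# id 515 to shard 5 (row 51), and id 999 to shard 9 (row 99).
lookup = tf.nn.embedding_lookup(shards, ids, partition_strategy="mod")  # [3, embedding_size]

The lookup result is the same as with a single big matrix; only the storage is split across variables, which is what makes this useful for very large vocabularies.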

Comments

  • VM_AI almost 2 years

    I am trying to learn how to build an RNN for Speech Recognition using TensorFlow. As a start, I wanted to try out some example models put up on the TensorFlow page TF-RNN.

    As per what was advised, I had taken some time to understand how word IDs are embedded into a dense representation (Vector Representation) by working through the basic version of the word2vec model code. I had an understanding of what tf.nn.embedding_lookup actually does, until I encountered the same function being used with a two-dimensional array in TF-RNN ptb_word_lm.py, at which point it no longer made sense.

    What I thought tf.nn.embedding_lookup does:

    Given a 2-D array params and a 1-D array ids, the function tf.nn.embedding_lookup fetches the rows of params corresponding to the indices given in ids, which matches the dimension of the output it returns.

    What I am confused about:

    When tried with the same params and a 2-D array of ids, tf.nn.embedding_lookup returns a 3-D array instead of a 2-D one, and I do not understand why.

    I looked up the manual for Embedding Lookup, but I still find it difficult to understand how the partitioning works and what result is returned. I recently tried a simple example with tf.nn.embedding_lookup and it appears to return different values each time. Is this behaviour due to the randomness involved in partitioning?

    Please help me understand how tf.nn.embedding_lookup works, and why it is used in both word2vec_basic.py and ptb_word_lm.py, i.e., what is the purpose of using it?

  • vgoklani over 7 years
    Where does tf learn the embedding? Is that done in this function too?
  • Olivier Moindrot over 7 years
    All the embeddings are stored in the embedding matrix `params`, which is learned by gradient descent. With the embedding lookup, only a small portion of the embeddings is updated each time (only the words in the sentences in the batch); see the sketch after these comments.
  • Shamane Siriwardhana about 7 years
    So this function is only for making its inner processes easier?
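
Regarding the gradient-descent point in Olivier Moindrot's comment, here is a small sketch of why only the looked-up rows get updated, again in TF 1.x style with illustrative names:

import tensorflow as tf

params = tf.get_variable("embeddings", [1000, 16])
ids = tf.constant([0, 1])                           # only these two rows are used
loss = tf.reduce_sum(tf.nn.embedding_lookup(params, ids))

grad = tf.gradients(loss, params)[0]
# grad is a tf.IndexedSlices covering only rows 0 and 1, so an optimizer
# step touches just the embeddings of the words appearing in the batch.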