Stopword removal when using word2vec


Solution 1

Personally, I think removing stop words will give better results; check the link.

Also, for topic modeling you should perform preprocessing on the text. The following steps are a must (a minimal sketch follows the list):

  1. Stop word removal.
  2. Tokenization.
  3. Stemming and lemmatization.
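A minimal sketch of these three steps using NLTK (the choice of library and the example sentence are my own assumptions, not part of the answer; any equivalent tooling works):

```python
# Preprocessing sketch: tokenize, remove stop words, then stem and lemmatize.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads of the required NLTK data packages.
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

text = "The cats were sitting on the mats when the dog arrived."

tokens = word_tokenize(text.lower())                                  # 2. tokenization
stop_words = set(stopwords.words('english'))
tokens = [t for t in tokens if t.isalpha() and t not in stop_words]   # 1. stop word removal

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
stems = [stemmer.stem(t) for t in tokens]                             # 3a. stemming
lemmas = [lemmatizer.lemmatize(t) for t in tokens]                    # 3b. lemmatization

print(stems)   # e.g. ['cat', 'sit', 'mat', 'dog', 'arriv']
print(lemmas)  # e.g. ['cat', 'sitting', 'mat', 'dog', 'arrived']
```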

Solution 2

Gensim's implementation is based on Tomas Mikolov's original word2vec model, and it downsamples frequent words automatically based on their frequency.

As stated in the paper:

We show that subsampling of frequent words during training results in a significant speedup (around 2x - 10x), and improves accuracy of the representations of less frequent words.

What this means is that frequent words are sometimes left out of the context window of the word being predicted. The sample parameter, which defaults to 0.001, controls how aggressively those words are pruned. If you want to remove specific stopwords that would not be pruned based on their frequency, you can do that yourself.
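A rough illustration of both options (the tiny corpus and stop word list are made up for the example; parameter names follow gensim 4.x, where the older size argument became vector_size):

```python
from gensim.models import Word2Vec

# Illustrative toy corpus: one list of tokens per sentence.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "chased", "the", "cat"],
]

# Default behaviour: frequent words are downsampled via `sample`, not removed.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sample=1e-3)

# Optional explicit stopword removal before training, for stopwords that
# are not frequent enough in your corpus to be downsampled automatically.
stop_words = {"the", "on"}
filtered = [[w for w in s if w not in stop_words] for s in sentences]
model_filtered = Word2Vec(filtered, vector_size=100, window=5, min_count=1)
```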

Summary: the results will not differ significantly whether or not you remove stop words.


Comments

  • samsamara almost 2 years

    I have been trying word2vec for a while now using gensim's word2vec library. My question is: do I have to remove stopwords from my input text? Based on my initial experimental results, I can see words like 'of', 'when', ... (stopwords) popping up when I do model.most_similar('someword').

    But I haven't seen anything stating that stop word removal is necessary with word2vec. Is word2vec supposed to handle stop words even if you don't remove them?

    What are the must-do preprocessing steps (for example, with topic modeling, stopword removal is almost a must)?