Lemmatization of a list of words

Solution 1

The method WordNetLemmatizer.lemmatize expects a string, but you are passing it a list of strings. That is what raises the TypeError: unhashable type: 'list' exception.

The result of line.strip().split() is a list of strings, and results.append adds each such list to results as a single element, so results ends up as a list of lists.
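
You can reproduce the error in isolation: internally, lemmatize looks the word up in WordNet's exception dictionaries, and a list cannot be used as a dictionary key.

from nltk.stem.wordnet import WordNetLemmatizer

lemma = WordNetLemmatizer()
lemma.lemmatize(['try', 'tried'])  # TypeError: unhashable type: 'list'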

You want to use results.extend(line.strip().split()) instead.
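
To see the difference, here is a quick illustration of append versus extend on a token list:

tokens = 'try tried trying'.split()
results = []
results.append(tokens)  # results == [['try', 'tried', 'trying']] -- a list of lists
results = []
results.extend(tokens)  # results == ['try', 'tried', 'trying'] -- a flat list of strings

Applied to your code: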

from nltk.stem.wordnet import WordNetLemmatizer

results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        # extend flattens the token list, so results holds individual words
        results.extend(line.strip().split())

lemma = WordNetLemmatizer()

# every element of results is now a single string, as lemmatize expects
lem = map(lemma.lemmatize, results)

with open("lem.txt", "w") as t:
    for item in lem:
        print >> t, item  # Python 2 print-to-file syntax
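
If you are on Python 3, the print-to-file statement must become the print function; a minimal equivalent of the last loop:

with open("lem.txt", "w") as t:
    for item in lem:
        print(item, file=t)  # Python 3 equivalent of `print >> t, item`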

Or, refactored as a generator so the intermediate results list is not needed:

from nltk.stem.wordnet import WordNetLemmatizer

def words(fname):
    # yield one word at a time instead of building the whole list in memory
    with open(fname, 'r') as document:
        for line in document:
            for word in line.strip().split():
                yield word

lemma = WordNetLemmatizer()
lem = map(lemma.lemmatize, words('/Users/xyz/Documents/something5.txt'))
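
Note that lemmatize treats every word as a noun unless you pass a part-of-speech tag, so tense variations such as try/tried are left alone by default. A quick illustration:

lemma.lemmatize('walked')       # 'walked' -- default part of speech is noun
lemma.lemmatize('walked', 'v')  # 'walk'   -- pass 'v' to lemmatize verb forms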

Solution 2

Open the text file and read its lines into a list, as shown below:
fo = open(filename)  # filename is the path to your text file
results1 = fo.readlines()

results1
['I have a list of words in a text file', ' \n I want to perform lemmatization on them to remove words which have the same meaning but are in different tenses', '']

# Tokenize each line into a list of words

results2 = [line.split() for line in results1]
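
Given the results1 shown above, results2 now holds one token list per element, plus an empty list produced by the trailing empty string:

results2
[['I', 'have', 'a', 'list', 'of', 'words', 'in', 'a', 'text', 'file'],
 ['I', 'want', 'to', 'perform', 'lemmatization', 'on', 'them', 'to', 'remove',
  'words', 'which', 'have', 'the', 'same', 'meaning', 'but', 'are', 'in',
  'different', 'tenses'],
 []]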

# Remove empty lists

results2 = [x for x in results2 if x != []]

# Lemmatize each word from a list using WordNetLemmatizer

from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemma_list_of_words = []
for l1 in results2:
    # lemmatize every word in the list, then rebuild the line as one string
    l2 = ' '.join([lemmatizer.lemmatize(word) for word in l1])
    lemma_list_of_words.append(l2)
lemma_list_of_words
['I have a list of word in a text file', 'I want to perform lemmatization on them to remove word which have the same meaning but are in different tense']

Compare lemma_list_of_words with results1: the lemmatizer has turned words into word and tenses into tense.

Comments

  • minks, almost 2 years ago

    So I have a list of words in a text file. I want to perform lemmatization on them to remove words which have the same meaning but are in different tenses, like try and tried. When I do this, I keep getting the error TypeError: unhashable type: 'list'

        results = []
        with open('/Users/xyz/Documents/something5.txt', 'r') as f:
            for line in f:
                results.append(line.strip().split())

        lemma = WordNetLemmatizer()

        lem = []

        for r in results:
            lem.append(lemma.lemmatize(r))

        with open("lem.txt", "w") as t:
            for item in lem:
                print >> t, item
    

    How do I lemmatize words which are already tokens?