Python Untokenize a sentence

Solution 1

You can use the "treebank detokenizer", TreebankWordDetokenizer:

from nltk.tokenize.treebank import TreebankWordDetokenizer
TreebankWordDetokenizer().detokenize(['the', 'quick', 'brown'])
# 'the quick brown'
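
Applied to the question's sentence, the round trip through word_tokenize and the detokenizer should give the original string back (a quick sanity check, not a guarantee for every input):

import nltk
from nltk.tokenize.treebank import TreebankWordDetokenizer

tokens = nltk.word_tokenize("I've found a medicine for my disease.")
TreebankWordDetokenizer().detokenize(tokens)
# "I've found a medicine for my disease."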

There is also MosesDetokenizer, which used to be in nltk but was removed because of licensing issues; it is now available as the standalone Sacremoses package.
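
If you go the Sacremoses route, the usage is similar. A minimal sketch, assuming pip install sacremoses and English text:

from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang='en')
detok.detokenize(['I', "'ve", 'found', 'a', 'medicine', 'for', 'my', 'disease', '.'])
# expected: "I've found a medicine for my disease."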

Solution 2

To reverse word_tokenize from nltk, I suggest looking at http://www.nltk.org/_modules/nltk/tokenize/punkt.html#PunktLanguageVars.word_tokenize and doing some reverse engineering.

Short of doing crazy hacks on nltk, you can try this:

>>> import nltk
>>> import string
>>> nltk.word_tokenize("I've found a medicine for my disease.")
['I', "'ve", 'found', 'a', 'medicine', 'for', 'my', 'disease', '.']
>>> tokens = nltk.word_tokenize("I've found a medicine for my disease.")
>>> "".join([" "+i if not i.startswith("'") and i not in string.punctuation else i for i in tokens]).strip()
"I've found a medicine for my disease."

Solution 3

Use token_utils.untokenize from here:

import re

def untokenize(words):
    """
    Untokenizing a text undoes the tokenizing operation, restoring
    punctuation and spaces to the places that people expect them to be.
    Ideally, `untokenize(tokenize(text))` should be identical to `text`,
    except for line breaks.
    """
    text = ' '.join(words)
    # Restore double quotes (Treebank's `` and '') and ellipses.
    step1 = text.replace("`` ", '"').replace(" ''", '"').replace('. . .', '...')
    # Drop the space after an opening parenthesis and before a closing one.
    step2 = step1.replace(" ( ", " (").replace(" ) ", ") ")
    # Attach punctuation to the preceding word when followed by a space or quote...
    step3 = re.sub(r' ([.,:;?!%]+)([ \'"`])', r"\1\2", step2)
    # ...and at the very end of the text.
    step4 = re.sub(r' ([.,:;?!%]+)$', r"\1", step3)
    # Reattach apostrophes and "n't" contractions; normalize "can not".
    step5 = step4.replace(" '", "'").replace(" n't", "n't").replace(
        "can not", "cannot")
    # Replace a leftover opening single quote (`) with a straight quote.
    step6 = step5.replace(" ` ", " '")
    return step6.strip()

tokenized = ['I', "'ve", 'found', 'a', 'medicine', 'for', 'my', 'disease', '.']
untokenize(tokenized)
# "I've found a medicine for my disease."

Solution 4

For me, it worked when I installed Python nltk 3.2.5:

pip install -U nltk

then,

import nltk
nltk.download('perluniprops')

from nltk.tokenize.moses import MosesDetokenizer

If you are using it inside a pandas DataFrame, then:

detokenizer = MosesDetokenizer()
df['detoken'] = df['token_column'].apply(lambda x: detokenizer.detokenize(x, return_str=True))
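
On newer nltk releases where nltk.tokenize.moses has been removed (see Solution 1), the same pattern works with the Sacremoses package. A sketch, assuming token_column holds lists of tokens:

from sacremoses import MosesDetokenizer

detokenizer = MosesDetokenizer(lang='en')
df['detoken'] = df['token_column'].apply(detokenizer.detokenize)  # returns a string per row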

Comments

  • Brana, almost 2 years ago

    There are so many guides on how to tokenize a sentence, but I didn't find any on how to do the opposite.

     import nltk
     words = nltk.word_tokenize("I've found a medicine for my disease.")
     # result: ['I', "'ve", 'found', 'a', 'medicine', 'for', 'my', 'disease', '.']
    

    Is there any function that reverts the tokenized sentence to the original state? The function tokenize.untokenize() for some reason doesn't work.

    Edit:

    I know that I can do, for example, the following, and this probably solves the problem, but I am curious whether there is an integrated function for this:

    result = ' '.join(sentence).replace(' , ',',').replace(' .','.').replace(' !','!')
    result = result.replace(' ?','?').replace(' : ',': ').replace(' \'', '\'')