Python NLTK: SyntaxError: Non-ASCII character '\xc3' in file (Sentiment Analysis -NLP)
Add the following to the top of your file:

```python
# coding=utf-8
```
If you go to the link in the error you can see the reason why:
> **Defining the Encoding**
>
> Python will default to ASCII as standard encoding if no other encoding hints are given. To define a source code encoding, a magic comment must be placed into the source files either as first or second line in the file, such as: `# coding=<encoding name>`
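As a quick illustration (a minimal sketch, not the asker's code): under Python 2, a source file containing a non-ASCII string literal only parses when the magic comment is present; the same declaration is harmless under Python 3, where UTF-8 is already the default source encoding.

```python
# coding=utf-8
# With the declaration above, Python 2 decodes this source file as UTF-8,
# so the non-ASCII literal below no longer triggers the SyntaxError.
# (Python 3 assumes UTF-8 by default, so there the comment is simply redundant.)
greeting = u"héllo wörld"
print(greeting)
```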
Author: rkbom9
Updated on February 13, 2021

Comments:
- rkbom9 (over 3 years ago): I am playing around with NLTK to do an assignment on sentiment analysis. I am using Python 2.7, NLTK 3.0, and NumPy 1.9.1.
This is the code:
```python
__author__ = 'karan'
import nltk
import re
import sys

def main():
    print("Start")
    # getting the stop words
    stopWords = open("english.txt", "r")
    stop_word = stopWords.read().split()
    AllStopWrd = []
    for wd in stop_word:
        AllStopWrd.append(wd)
    print("stop words-> ", AllStopWrd)
    # sample tweet and also cleaning it
    tweet1 = 'Love, my new toyí ½í¸í ½í¸#iPhone6. Its good https://twitter.com/Sandra_Ortega/status/513807261769424897/photo/1'
    print("old tweet-> ", tweet1)
    tweet1 = tweet1.lower()
    tweet1 = ' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ", tweet1).split())
    print(tweet1)
    tw = tweet1.split()
    print(tw)
    # tokenize
    sentences = nltk.word_tokenize(tweet1)
    print("tokenized ->", sentences)
    # remove stop words
    Otweet = []
    for w in tw:
        if w not in AllStopWrd:
            Otweet.append(w)
    print("sans stop word-> ", Otweet)
    # get taggers for neg/pos/inc/dec/inv words
    taggers = {}
    negWords = open("neg.txt", "r")
    neg_word = negWords.read().split()
    print("neg words-> ", neg_word)
    posWords = open("pos.txt", "r")
    pos_word = posWords.read().split()
    print("pos words-> ", pos_word)
    incrWords = open("incr.txt", "r")
    inc_word = incrWords.read().split()
    print("incr words-> ", inc_word)
    decrWords = open("decr.txt", "r")
    dec_word = decrWords.read().split()
    print("dec words-> ", dec_word)
    invWords = open("inverse.txt", "r")
    inv_word = invWords.read().split()
    print("inverse words-> ", inv_word)
    for nw in neg_word:
        taggers.update({nw: 'negative'})
    for pw in pos_word:
        taggers.update({pw: 'positive'})
    for iw in inc_word:
        taggers.update({iw: 'inc'})
    for dw in dec_word:
        taggers.update({dw: 'dec'})
    for ivw in inv_word:
        taggers.update({ivw: 'inv'})
    print("tagger-> ", taggers)
    print(taggers.get('little'))
    # get parts of speech
    posTagger = [nltk.pos_tag(tw)]
    print("posTagger-> ", posTagger)

main()
```
This is the error that I am getting when running my code:
```
SyntaxError: Non-ASCII character '\xc3' in file C:/Users/karan/PycharmProjects/mainProject/sentiment.py on line 19, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
```
How do I fix this error?
I also tried the code using Python 3.4.2 with NLTK 3.0 and NumPy 1.9.1, but then I get this error:
```
Traceback (most recent call last):
  File "C:/Users/karan/PycharmProjects/mainProject/sentiment.py", line 80, in <module>
    main();
  File "C:/Users/karan/PycharmProjects/mainProject/sentiment.py", line 72, in main
    posTagger = [nltk.pos_tag(tw)]
  File "C:\Python34\lib\site-packages\nltk\tag\__init__.py", line 100, in pos_tag
    tagger = load(_POS_TAGGER)
  File "C:\Python34\lib\site-packages\nltk\data.py", line 779, in load
    resource_val = pickle.load(opened_resource)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xcb in position 0: ordinal not in range(128)
```
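For context, the Python 3 traceback is the same class of failure as the original SyntaxError: bytes outside the ASCII range being decoded with the ASCII codec (here it is NLTK's data loader unpickling its tagger model with the wrong encoding). A minimal sketch of that failure mode, using made-up bytes rather than NLTK's pickle:

```python
# Two UTF-8 bytes for 'ã': b'\xc3\xa3', both outside the ASCII range.
data = u"ã".encode("utf-8")
try:
    data.decode("ascii")  # same kind of decode that failed inside nltk.data.load
except UnicodeDecodeError as err:
    print(err)  # 'ascii' codec can't decode byte 0xc3 in position 0: ...
```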
- Iulian Onofrei (about 9 years ago): Ok, I'm a newbie at Python and I had `u"a"` on the same line with `u"ã"`.
- Padraic Cunningham (about 9 years ago): @IulianOnofrei, for `u"ã"` you would need to declare the encoding. Did you get an error?
- Iulian Onofrei (about 9 years ago): @PadraicCunningham, I do declare it using `codecs.encode(u"ã", "utf-8")`; the error came from `u"a"` (after adding the magic comment, ofc), so all is well now, thanks.
- J-Dizzle (over 8 years ago): *spends an hour with this issue* solution: a magic comment. *facepalms*
- user324747 (about 4 years ago): I added the "magic comment" and don't get that error, but `os.path.isfile()` is saying a filename with `é` doesn't exist. Ironic that the character `é` is in `Marc-André Lemburg`, the author of the PEP the error links to.
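(A side note on that last comment, not from the original thread: a common cause of `os.path.isfile()` missing a filename containing `é` is Unicode normalization. `é` can be stored as one code point (NFC) or as `e` plus a combining accent (NFD), and the two strings compare unequal even though they render identically, so a path typed one way can fail to match a filename stored the other way. A minimal sketch:)

```python
import unicodedata

nfc = unicodedata.normalize("NFC", u"é")  # single code point U+00E9
nfd = unicodedata.normalize("NFD", u"é")  # 'e' followed by combining U+0301
print(nfc == nfd)          # False: they look identical but compare unequal
print(len(nfc), len(nfd))  # 1 2
```

Normalizing both sides (e.g. to NFC) before comparing or building the path usually resolves this.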