What is the foolproof way to convert a string (UTF-8 or otherwise) to a simple ASCII string in Python?


Solution 1

If you want an ASCII string that unambiguously represents what you have got, without losing any information, the answer is simple:

Don't muck about with encode/decode, use the repr() function (Python 2.X) or the ascii() function (Python 3.x).
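
For example, a quick sketch in a Python 2 session (in Python 3, calling ascii() on a str gives the same kind of result):

>>> repr(u'caf\xe9')  # every non-ASCII character becomes an ASCII escape
"u'caf\\xe9'"

The output is pure ASCII and loses nothing: applying eval() to it reproduces the original string exactly.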

Solution 2

You say "the encoding of it varies". I guess that by "it" you mean a Python 2.x "string", which is really a sequence of bytes.

Answer part one: if you do not know the encoding of that encoded string, then no, there is no way at all to do anything meaningful with it*. If you do know the encoding, then step one is to convert your str into a unicode:

encoded_string = i_have_no_control()
the_encoding = 'utf-8' # for the sake of example
text = unicode(encoded_string, the_encoding)

Then you can re-encode your unicode object as ASCII, if you like.

ascii_garbage = text.encode('ascii', 'replace')

* There are heuristic methods for guessing encodings (the chardet library is one well-known attempt in Python), but they are slow and unreliable.
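
In Python 3 the same round trip looks like this (a minimal sketch; i_have_no_control() is still the asker's hypothetical source, assumed here to return bytes):

encoded_bytes = i_have_no_control()               # hypothetical source of raw bytes
text = encoded_bytes.decode('utf-8')              # step one: bytes -> str
ascii_garbage = text.encode('ascii', 'replace')   # bytes again; '?' replaces anything non-ASCII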

Solution 3

I'd try to normalize the string, then encode it. What about:

import unicodedata
s = u"éèêàùçÇ"
# NFKD splits each accented character into a base character plus a
# combining mark; encoding to ASCII with 'ignore' then drops the marks.
print unicodedata.normalize('NFKD', s).encode('ascii', 'ignore')  # -> eeeaucC

This works only if you have unicode as input. Therefore, you must know what kind of encoding the function outputs, and decode it accordingly. If you don't, there are encoding-detection heuristics, but on short strings they are not reliable.

Of course, you could get lucky: the function's output may rely on various unknown encodings that nonetheless use ASCII as a base, so they assign the same values to bytes 0 to 127 (as UTF-8 does).
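
A one-line check of that property (a sketch; for pure-ASCII text, both encodings produce identical bytes):

>>> u'abc'.encode('utf-8') == u'abc'.encode('ascii')
True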

In that case, you can just get rid of the unwanted chars by filtering them, in order, against string.printable:

import string  # string.printable lists the printable ASCII characters
print "".join(char for char in s if char in string.printable)

Or if you want blanks instead:

print "".join(char if char in string.printable else " " for char in s)

"translate" can help you to do the same.

The only way to know if you are this lucky is to try it out... Sometimes, a big fat lucky day is all a dev needs :-)

Solution 4

What's meant by "foolproof" is that the function does not fail even on the most obscure, impossible input -- meaning, you could feed the function random binary data and IT WOULD NEVER FAIL, NO MATTER WHAT. That's what "foolproof" means.

The function should then proceed to do its best to convert to the destination encoding. If it has to throw away all the trash it does not understand, then that is perfectly fine and is in fact the most desirable result. Why try to salvage all the junk? Just discard the junk. Tell the user he's not merely a moron for using Microsoft anything, but a non-standard moron for using non-standard Microsoft anything... or for attempting to send in binary data!

I have precisely this same need (though mine is in PHP), and I also have users who are at least as moronic as I am, sometimes more so; however, they are definitely nicer and no doubt more patient.

The best, bottom-line thing I've found so far is (in PHP 5.3):

$fixed_string = iconv('ISO-8859-1', 'UTF-8//TRANSLIT//IGNORE', $in_string);

This attempts to translate whatever it can and simply throws away all the junk, resulting in a legal UTF-8 string output. I've also not been able to break it or cause it to fail or reject any incoming text or data, even by feeding it gobs of binary junk data.
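
For a rough Python parallel (a sketch, not the author's PHP; data is assumed to hold arbitrary raw bytes):

text = data.decode('utf-8', 'ignore')  # never raises: invalid byte sequences are silently dropped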

Finding the iconv() and getting it to work is easy; what's so maddening and wasteful is reading through all the total garbage and bend-over-backwards idiocy that so many programmers seem to espouse when dealing with this encoding fiasco. What's become of the enviable (and respectable) "Flail and Burn The Idiots" mentality of old school programming? Let's get back to basics. Use iconv() and throw away their garbage, and don't be bashful when telling them you threw away their garbage -- in short, don't fail to flail the morons who feed you garbage. And you can tell them I told you so.

Solution 5

If all you want to do is preserve ASCII-compatible characters and throw away the rest, then in most encodings that boils down to removing all characters that have the high bit set -- i.e., characters with value over 127. This works because nearly all character sets are extensions of 7-bit ASCII.
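
A minimal Python 2 sketch of that byte-level idea (the input bytes are hypothetical; '\xc3\xa4' is ä encoded as UTF-8):

>>> ''.join(c for c in '1\xc3\xa42' if ord(c) < 128)
'12'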

If it's a normal string (i.e., not unicode), you need to decode it in an arbitrary character set (such as iso-8859-1 because it accepts any byte values) and then encode in ascii, using the ignore or replace option for errors:

>>> orig = '1ä2äö3öü4ü'
>>> orig.decode('iso-8859-1').encode('ascii', 'ignore')
'1234'
>>> orig.decode('iso-8859-1').encode('ascii', 'replace')
'1??2????3????4??'

The decode step is necessary because you need a unicode string in order to use encode. If you already have a Unicode string, it's simpler:

>>> orig = u'1ä2äö3öü4ü'
>>> orig.encode('ascii', 'ignore')
'1234'
>>> orig.encode('ascii', 'replace')
'1?2??3??4?'

Comments

  • olamundo
    olamundo almost 2 years

    Inside my Python script, I get a string back from a function which I didn't write. Its encoding varies. I need to convert it to ASCII format. Is there some foolproof way of doing this? I don't mind replacing the non-ASCII chars with blanks or something else...

  • u0b34a0f6ae
    u0b34a0f6ae over 14 years
    Going directly to ascii (as unicode object) is also possible: '1ä2äö3öü4ü'.decode("ascii", "ignore"). Just because you use a simplified character set doesn't make the unicode type a bad choice for textual strings IMO.
  • Jonathan Feinberg
    Jonathan Feinberg over 14 years
    If your default encoding doesn't happen to be iso-8859-1, then your very first line there will explode when you attempt to decode that source string as iso-8859-1.
  • intgr
    intgr over 14 years
    @Jonathan Feinberg: Decoding from iso-8859-1 never fails because any byte sequence has a defined meaning and is legal in ISO-8859-1. What does the default encoding have to do with it? I specify encodings everywhere explicitly.
  • intgr
    intgr over 14 years
    @kaizer.se: It works with 'ignore', but when you use 'replace' it would give you a Unicode string with: u'1\ufffd\ufffd2\ufffd\ufffd\ufffd\ufffd3\ufffd\ufffd\ufffd\ufffd4\ufffd\ufffd'
  • intgr
    intgr over 14 years
    "no, there is no way at all to do anything meaningful with it" -- nearly every character set in use today inherits its lower characters from ASCII. In this case, there is something meaningful you can do: throw away all non-ASCII characters. This is what the asker wants. The exceptions (UTF-16 and UTF-32) would never be confused with any other character sets, so I believe it's safe to ignore those.
  • Jonathan Feinberg
    Jonathan Feinberg over 14 years
    You're seemingly of the opinion that the only character encodings in the world are defined by Unicode, but that isn't so. There are dozens more commonly used ones, such as shift-jis, windows-1252, etc. What's more, "converting to ascii" usually means "normalizing" characters, such as converting ä to a, which you certainly can't do by assuming your encoding is one byte per character, and masking non-ascii bytes, as you suggest!
  • intgr
    intgr over 14 years
    Both Shift-JIS and Windows-1252 inherit the lower ASCII codepoints from ASCII. Thus, stripping all characters with the high bit set (which is what my answer does) works in the common case. This is not ideal, but in many cases it is sufficient. If you simply do not know the encoding, then obviously you cannot normalize it. As for autodetection, some character sets in the ISO-8859-* series have so many overlaps and ambiguities that they are essentially impossible to distinguish.
  • John Machin
    John Machin over 14 years
    Throwing away non-ASCII characters is often like throwing out the baby with the bath water. E.g. On a typical Chinese website (charset=gb2312 but don't believe that, should read charset=some-superset-of-gb2312, try the gbk codec instead), the ASCII-compatible characters are mostly HTML syntax; the content is mostly Chinese and is wrecked by all of your transformations. Likewise Russian. Note there's a designed-in trick with koi8_r (but not cp1251): ucity = u"\u041c\u043e\u0441\u043a\u0432\u0430"; ''.join(chr(ord(c) & 0x7f) for c in ucity.encode('koi8_r')) produces 'mOSKWA'.