UTF-8: how many bytes are used by languages to represent a visible character?


Solution 1

If you want something general, I think you should stick with this:

  • English takes very slightly more than 1 byte per character (the occasional non-ASCII character, often punctuation or a symbol embedded in the text, pushes it just past 1).
  • Most other languages that use the Latin alphabet need somewhat more than 1, but I would be surprised if you exceeded, say, 1.5.
  • Languages using some of the other alphabetic scripts (Greek, Cyrillic, etc.) take around 2 bytes per character.
  • East Asian languages take about 3 bytes per character (spacing, control characters, and embedded ASCII pull it lower; characters outside the BMP push it higher).

That's all very incomplete, approximate, and non-quantitative.
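
As a rough sanity check of those ratios, a short Python snippet like the following prints the bytes-per-character figure for a handful of scripts. The sample strings here are arbitrary illustrations, not real corpora:

    # Rough check of the bytes-per-character ratios listed above.
    samples = {
        "English": "The quick brown fox jumps over the lazy dog.",
        "Greek": "Η γρήγορη καφέ αλεπού πηδά πάνω από τον τεμπέλη σκύλο.",
        "Russian": "Съешь же ещё этих мягких французских булок.",
        "Chinese": "敏捷的棕色狐狸跳过了懒狗。",
    }

    for language, text in samples.items():
        ratio = len(text.encode("utf-8")) / len(text)
        print(f"{language}: {ratio:.2f} bytes per character")

Spaces and embedded punctuation drag the non-Latin ratios below the per-letter figures, which is exactly the effect the bullets above describe.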

If you need something more quantitative, I think you will have to research each language individually. I doubt you will find precomputed results out there that already apply to a host of different languages.

If you have a corpus of text for a language, it's easy to calculate the average number of bytes required. Start with the Text corpus Wikipedia page. It links to at least one good freely available corpus for English and there might be some available for other languages as well (I didn't hunt through the links to find out).
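
Once you have a corpus, the calculation itself is tiny; here is a minimal Python sketch (the filename "corpus.txt" is a placeholder for whatever corpus file you obtain):

    # Minimal sketch: average UTF-8 bytes per character in a corpus.
    from pathlib import Path

    # Placeholder path; substitute your downloaded corpus file.
    text = Path("corpus.txt").read_text(encoding="utf-8")

    byte_count = len(text.encode("utf-8"))
    char_count = len(text)  # counts Unicode code points, not bytes

    print(f"{byte_count / char_count:.3f} bytes per character on average")

One caveat: len(text) counts code points, so a combining sequence (an accent applied to a base letter, say) counts as two "characters" even though it renders as one glyph. For most corpora the effect on the average is small.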

Incidentally, I don't recommend using this information to truncate the length of a database field, as you indicated (in the comments) that you intend to do. First of all, if you derived your expected bytes-per-character figure from a corpus of literature, you might find the corpus is not at all representative of the short text strings that end up in your database, throwing off your estimate. Instead, just size the column to hold the whole string: most values will be much shorter than the maximum length, and when they're not, the optimization isn't worth it to save a hundred bytes or so.

Solution 2

Look at a list of Unicode blocks and their code point ranges, e.g. the browsable http://www.fileformat.info/info/unicode/block/index.htm or the official http://www.unicode.org/Public/UNIDATA/Blocks.txt :

  • Anything up to U+007F takes 1 byte: Basic Latin
  • Then up to U+07FF it takes 2 bytes: Greek, Arabic, Cyrillic, Hebrew, etc.
  • Then up to U+FFFF it takes 3 bytes: Chinese, Japanese, Korean, Devanagari, etc.
  • Beyond that (U+10000 and up) it takes 4 bytes, as the sketch below demonstrates
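
Those four ranges translate directly into code. Here is a small Python sketch that mirrors the list above and cross-checks it against Python's own UTF-8 encoder:

    # The block ranges above, expressed as a byte-count lookup.
    def utf8_bytes(codepoint: int) -> int:
        """Number of bytes UTF-8 needs for a single code point."""
        if codepoint <= 0x7F:
            return 1  # Basic Latin (ASCII)
        if codepoint <= 0x7FF:
            return 2  # Greek, Arabic, Cyrillic, Hebrew, ...
        if codepoint <= 0xFFFF:
            return 3  # Chinese, Japanese, Korean, Devanagari, ...
        return 4      # everything beyond the BMP

    # Cross-check against the built-in encoder.
    for ch in "aλ中😀":
        assert utf8_bytes(ord(ch)) == len(ch.encode("utf-8"))
        print(f"U+{ord(ch):04X}: {utf8_bytes(ord(ch))} byte(s)")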
Comments

  • sid_com, about 2 years ago

    Does there exist a table or something similar which shows how many bytes different languages need on average to represent a visible character (glyph) when the encoding is UTF-8?

  • sid_com, over 11 years ago
    I need/can use at most the length of a terminal row.
  • ClearCrescendo, over 8 years ago
    It's difficult to know whether it is important to support 4-byte UTF-8. Characters at or above U+10000 require four bytes, and hence utf8mb4 rather than utf8 for MySQL storage, for example. There are symbols above U+10000 that fonts on OS X do support, as well as some additional CJK characters. My conclusion at the moment is that if Chinese language support is important to you, you should support 4-byte UTF-8 and allow the fuller range of characters.
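
Following up on that comment: checking whether a given string actually contains such characters is straightforward. A small illustrative Python check (the sample strings are arbitrary, and this is independent of any particular database driver):

    # Does this text contain code points above U+FFFF, i.e. characters that
    # need 4-byte UTF-8 (and, in MySQL, the utf8mb4 column charset)?
    def needs_utf8mb4(text: str) -> bool:
        return any(ord(ch) > 0xFFFF for ch in text)

    print(needs_utf8mb4("普通话"))  # False: these CJK characters are in the BMP
    print(needs_utf8mb4("𠜎"))      # True: U+2070E, CJK Extension B
    print(needs_utf8mb4("😀"))      # True: emoji live above the BMP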