Why is the length of this string longer than the number of characters in it?

Solution 1

Everyone else is giving the surface answer, but there's a deeper rationale too: the number of "characters" is difficult to define and can be surprisingly expensive to compute, whereas a Length property should be fast.

Why is it difficult to define? Well, there are a few options, and none is really more valid than another:

  • The number of code units (bytes or other fixed-size data chunks; C# and Windows typically use UTF-16, so Length returns the number of two-byte units) is certainly relevant, as the computer still needs to deal with the data in that form for many purposes (writing to a file, for example, cares about bytes rather than characters)

  • The number of Unicode codepoints is fairly easy to compute (although O(n), because you have to scan the string for surrogate pairs) and might matter to a text editor... but isn't actually the same thing as the number of characters printed on screen (called graphemes). For example, some accented letters can be represented in two forms: a single codepoint, or two codepoints paired together, one representing the letter and one saying "add an accent to my partner letter". Would the pair be two characters or one? You can normalize strings to help with this, but not all valid letters have a single-codepoint representation.

  • Even the number of graphemes isn't the same as the length of a printed string, which depends on the font among other factors, and since some characters are printed with some overlap in many fonts (kerning), the on-screen length of a string is not necessarily equal to the sum of the widths of its graphemes anyway!

  • Some Unicode codepoints aren't even characters in the traditional sense, but rather some kind of control marker, like a byte order mark or a right-to-left indicator. Do these count?
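To make the distinction concrete, here is a minimal C# sketch (the class name is just for illustration) comparing the counts for an accented letter in its two forms, using System.Globalization.StringInfo and string.Normalize:

```csharp
using System;
using System.Globalization;

class AccentCounts
{
    static void Main()
    {
        string composed = "\u00E9";     // é as one precomposed codepoint
        string decomposed = "e\u0301";  // 'e' followed by COMBINING ACUTE ACCENT

        Console.WriteLine(composed.Length);                                 // 1 UTF-16 code unit
        Console.WriteLine(decomposed.Length);                               // 2 UTF-16 code units
        Console.WriteLine(new StringInfo(decomposed).LengthInTextElements); // 1 grapheme
        Console.WriteLine(decomposed.Normalize() == composed);              // True (NFC normalization)
    }
}
```

Both strings render identically on screen, yet their Length values differ - exactly the ambiguity described above.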

In short, the length of a string is actually a ridiculously complex question and calculating it can take a lot of CPU time as well as data tables.

Moreover, what's the point? Why do these metrics matter? Well, only you can answer that for your case, but personally, I find they are generally irrelevant. Limiting data entry is more logically done by byte limits, as that's what needs to be transferred or stored anyway. Limiting display size is better done by the display-side software - if you have 100 pixels for the message, how many characters fit depends on the font, etc., which isn't known by the data-layer software anyway. Finally, given the complexity of the Unicode standard, you're probably going to have bugs at the edge cases anyway if you try anything else.

So it is a hard question without a lot of general-purpose use. The number of code units is trivial to calculate - it is just the length of the underlying data array - and, as a general rule, the most meaningful and useful, with a simple definition.
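As a sketch of that O(n) codepoint scan, here is one way to count codepoints for the string from the question by stepping over surrogate pairs (the class name is just for illustration):

```csharp
using System;

class CodepointCount
{
    static void Main()
    {
        string b = "A𠈓C";
        Console.WriteLine(b.Length); // 4 code units, read straight off the array

        // Walk the string, stepping two code units at a time over surrogate pairs
        int codepoints = 0;
        for (int i = 0; i < b.Length; i += char.IsSurrogatePair(b, i) ? 2 : 1)
            codepoints++;

        Console.WriteLine(codepoints); // 3 codepoints
    }
}
```

Length is O(1) because it just returns the stored array length; the codepoint count has to visit every code unit.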

That's why b has length 4, beyond the surface explanation of "because the documentation says so".

Solution 2

From the documentation of the String.Length property:

The Length property returns the number of Char objects in this instance, not the number of Unicode characters. The reason is that a Unicode character might be represented by more than one Char. Use the System.Globalization.StringInfo class to work with each Unicode character instead of each Char.

Solution 3

The characters at indices 1 and 2 in "A𠈓C" form a surrogate pair.

The key point to remember is that a surrogate pair uses two 16-bit code units to represent a single codepoint outside the Basic Multilingual Plane.

You can try this code and it will print True:

Console.WriteLine(char.IsSurrogatePair("A𠈓C", 1));

Char.IsSurrogatePair Method (String, Int32)

true if the s parameter includes adjacent characters at positions index and index + 1, and the numeric value of the character at position index ranges from U+D800 through U+DBFF, and the numeric value of the character at position index+1 ranges from U+DC00 through U+DFFF; otherwise, false.
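A short sketch tying those ranges together: the code units at indices 1 and 2 fall in the high- and low-surrogate ranges, and decode to a single codepoint, 131603 decimal (U+20213). The class name is just for illustration:

```csharp
using System;

class SurrogateDemo
{
    static void Main()
    {
        string b = "A𠈓C";
        Console.WriteLine(char.IsSurrogatePair(b, 1)); // True
        Console.WriteLine(char.IsHighSurrogate(b[1])); // True  (U+D800..U+DBFF)
        Console.WriteLine(char.IsLowSurrogate(b[2]));  // True  (U+DC00..U+DFFF)
        Console.WriteLine(char.ConvertToUtf32(b, 1));  // 131603 (U+20213)
    }
}
```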

This is further explained in String.Length property:

The Length property returns the number of Char objects in this instance, not the number of Unicode characters. The reason is that a Unicode character might be represented by more than one Char. Use the System.Globalization.StringInfo class to work with each Unicode character instead of each Char.

Solution 4

As the other answers have pointed out, even though there are 3 visible characters, they are represented by 4 Char objects, which is why the Length is 4 and not 3.

MSDN states that

The Length property returns the number of Char objects in this instance, not the number of Unicode characters.

However, if what you really want to know is the number of "text elements" and not the number of Char objects, you can use the StringInfo class:

var si = new StringInfo("A𠈓C");
Console.WriteLine(si.LengthInTextElements); // 3

You can also enumerate each text element like this:

var enumerator = StringInfo.GetTextElementEnumerator("A𠈓C");
while (enumerator.MoveNext())
{
    Console.WriteLine(enumerator.Current);
}

Using foreach on the string will split the middle "letter" into two Char objects, and the printed result won't correspond to the string.
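A quick sketch of that difference: foreach yields each UTF-16 code unit, so the surrogate pair comes out as two separate halves (class name is just for illustration):

```csharp
using System;

class ForeachSplit
{
    static void Main()
    {
        string b = "A𠈓C";
        // Iterates code units, not text elements
        foreach (char c in b)
            Console.WriteLine("U+{0:X4}", (int)c);
        // U+0041  ('A')
        // U+D840  (high surrogate)
        // U+DE13  (low surrogate)
        // U+0043  ('C')
    }
}
```

The text-element enumerator above keeps the two halves together; the char iteration does not.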

Solution 5

That is because the Length property returns the number of Char objects, not the number of Unicode characters. In your case, one of the Unicode characters is represented by more than one Char object (a surrogate pair).

The Length property returns the number of Char objects in this instance, not the number of Unicode characters. The reason is that a Unicode character might be represented by more than one Char. Use the System.Globalization.StringInfo class to work with each Unicode character instead of each Char.


Author: weini37

Updated on February 04, 2020

Comments

  • weini37
    weini37 over 4 years

    This code:

    string a = "abc";
    string b = "A𠈓C";
    Console.WriteLine("Length a = {0}", a.Length);
    Console.WriteLine("Length b = {0}", b.Length);
    

    outputs:

    Length a = 3
    Length b = 4
    

    Why? The only thing I could imagine is that the Chinese character is 2 bytes long and that the .Length method returns the byte count.

    • Chris Cirefice
      Chris Cirefice over 9 years
      How did I know it was a surrogate pair problem just from looking at the title. Ah, good 'ol System.Globalization is your ally!
    • phuclv
      phuclv over 9 years
      it's 4 bytes long in UTF-16, not 2
    • GMasucci
      GMasucci over 9 years
      the decimal value of the char 𠈓 is 131603, and as chars are unsigned bytes, that means you can achieve that value in 2 characters rather than 4 (unsigned 16 bit value max is 65535 (or 65536 variations) and using 2 chars to represent it allows for a maximum number of variations of not 65536*2(131072) but rather 65536*65536 variations( 4,294,967,296, effectively a 32 bit value)
    • Kaiserludi
      Kaiserludi over 9 years
      @GMAsucci: It's 2 characters in UTF-16, but 4 bytes, because a UTF16 character is 2 bytes in size, otherwise it could not store 65536 variations, but only 256.
    • phuclv
      phuclv over 9 years
      @GMasucci you cannot store 4,294,967,296 different codepoints with UTF-16 as some bits are used to denote surrogate pair
    • Medinoc
      Medinoc over 9 years
      Indeed, surrogate pairs are just enough to store 20 bits of payload, meaning 16*65536 possible codepoints (out of 17*65536 codepoints defined in all of Unicode)
    • GMasucci
      GMasucci over 9 years
      I stand happily corrected:) I was trying to point out the potential possible combinations not the actually available ones though, but still I should have had a clearer comment. Cheers guys:)
    • Khouri Giordano
      Khouri Giordano over 9 years
      As a detail, I would say these strings are probably encoded in the user's preferred multi-byte code page, not UTF-8 as everyone seems to assume.
    • Salman A
      Salman A over 9 years
      Interesting, I get same result in JavaScript: "A𠈓C".length // 4
    • Harry Johnston
      Harry Johnston over 9 years
      @KhouriGiordano: no, the C# "string" type uses UTF-16.
    • ItsMe
      ItsMe over 9 years
      I recommend reading the great article 'The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)' joelonsoftware.com/articles/Unicode.html
  • Michael
    Michael over 9 years
    Java behaves in the same way (also printing 4 for String b), as it uses the UTF-16 representation in char arrays. It's a 4 byte character in UTF-8.
  • Lightness Races in Orbit
    Lightness Races in Orbit over 9 years
    You have an ambiguous use of "character" in this answer. I suggest replacing at least the first one with precise terminology.
  • Yuval Itzchakov
    Yuval Itzchakov over 9 years
    Thank you. Fixed the ambiguity.
  • redcalx
    redcalx over 9 years
    Essentially '.Length' isn't what most coders think it is. Maybe there should be a set of more specific properties (e.g. GlyphCount) and Length marked as Obsolete!
  • Kroltan
    Kroltan over 9 years
    @locster I agree, but don't think Length should be obsolete, to maintain the analogy with arrays.
  • simonzack
    simonzack over 9 years
    @locster It shouldn't be obsolete. The python one makes a lot of sense and nobody questions it.
  • Adam D. Ruppe
    Adam D. Ruppe over 9 years
    I think .Length makes a lot of sense and is a natural property, as long as you understand what it is and why it is that way. Then it works like any other array (in some languages like D, a string literally is an array as far as the language is concerned and it works really well)
  • redcalx
    redcalx over 9 years
    However, the discussion is specifically about C#, where strings are made up of Unicode chars using Windows' standard internal two-byte encoding, and for which the concept of string length is somewhat fuzzy in some corner cases.
  • Adam D. Ruppe
    Adam D. Ruppe over 9 years
    String length is always a fuzzy concept with Unicode - even with UTF-32, where you don't have to think about surrogate pairs, there's still combining characters, etc., that complicate matters.
  • Jodrell
    Jodrell over 9 years
    All C# strings are encoded as UTF-16 LE. However, they are not necessarily normalized in any particular way.
  • Adam D. Ruppe
    Adam D. Ruppe over 9 years
    That's not true (a common misconception) - with UTF-32 , lengthInBytes / 4 would give the number of code points, but that is not the same as the number of "characters" or graphemes. Consider LATIN SMALL LETTER E followed by a COMBINING DIAERESIS... that prints as a single character, it can even be normalized to a single codepoint, but it is still two units long, even in UTF-32.
  • nhahtdh
    nhahtdh over 9 years
    I think your answer is potentially confusing. In this case, 𠈓 is only a single code point, but since its code point exceeds 0xFFFF, it must be represented as 2 code units by using surrogate pair. Grapheme is another concept built on top of code point, where a grapheme can be represented by a single code point or multiple code points, as seen in Korean's Hangul or many Latin-based languages.
  • Jodrell
    Jodrell over 9 years
    @nhahtdh, I agree, my answer was erroneous. I've rewritten it and hopefully it now creates greater clarity.
  • Jodrell
    Jodrell over 9 years
    @AdamD.Ruppe, Agreed, I've clarified my understanding since my previous comment (which is now deleted.)
  • Holger
    Holger over 9 years
    Just a little addendum: a letter can have more than one accent (or, generally speaking, combining character), like in ọ̵̌ or ɘ̧̊̄. It should be clear that you can't have predefined Unicode codepoints for all possible combinations.
  • Erdinc Ay
    Erdinc Ay over 9 years
    Adam, this is really a good answer. There are some other issues that you didn't mention, like letters that melt together so that two characters form one grapheme/glyph.