What is Java's internal representation for String? Modified UTF-8? UTF-16?


Solution 1

Java uses UTF-16 for its internal text representation.

The representation for String, StringBuilder, etc. in Java is UTF-16.

https://docs.oracle.com/javase/8/docs/technotes/guides/intl/overview.html

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.

At the JVM level, if you are using -XX:+UseCompressedStrings (which is the default for some updates of Java 6), the actual in-memory representation can be 8-bit ISO-8859-1, but only for strings which do not need UTF-16 encoding.

http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html

and supports a non-standard modification of UTF-8 for string serialization.

Serialized Strings use this modified UTF-8 by default.

And how many bytes does Java use for a char in memory?

A char is always two bytes, if you ignore the need for padding in an Object.

Note: a code point (which allows characters above 65535) can use one or two chars, i.e. 2 or 4 bytes.
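The 2-vs-4-byte distinction is easy to observe. This sketch uses U+1F600 (an emoji outside the Basic Multilingual Plane) as an example code point that needs a surrogate pair:

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // U+1F600 (GRINNING FACE) lies outside the BMP, so UTF-16
        // encodes it as a surrogate pair: two chars, i.e. 4 bytes.
        String s = "\uD83D\uDE00";
        System.out.println(s.length());                      // 2 (char count)
        System.out.println(s.codePointCount(0, s.length())); // 1 (code point count)
        System.out.println(Character.charCount(0x1F600));    // 2 (chars needed)
    }
}
```

This is why `String.length()` can be larger than the number of code points in a string.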

Solution 2

You can confirm the following by looking at the source code of the relevant version of the java.lang.String class in OpenJDK. (For some really old versions of Java, String was partly implemented in native code. That source code is not publicly available.)

Prior to Java 9, the standard in-memory representation for a Java String is UTF-16 code-units held in a char[].

With Java 6 update 21 and later, there was a non-standard option (-XX:+UseCompressedStrings) to enable compressed strings. This feature was removed in Java 7.

For Java 9 and later, the implementation of String has been changed to use a compact representation by default. The java command documentation now says this:

-XX:-CompactStrings

Disables the Compact Strings feature. By default, this option is enabled. When this option is enabled, Java Strings containing only single-byte characters are internally represented and stored as single-byte-per-character Strings using ISO-8859-1 / Latin-1 encoding. This reduces, by 50%, the amount of space required for Strings containing only single-byte characters. For Java Strings containing at least one multibyte character: these are represented and stored as 2 bytes per character using UTF-16 encoding. Disabling the Compact Strings feature forces the use of UTF-16 encoding as the internal representation for all Java Strings.
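The internal byte array is not directly observable from user code, but the space difference the feature exploits can be illustrated. In the sketch below (class and variable names are illustrative), the byte counts mirror the 1-byte-per-char versus 2-bytes-per-char footprints described above:

```java
import java.nio.charset.StandardCharsets;

public class CompactStringsIllustration {
    public static void main(String[] args) {
        String latinOnly = "hello";        // every char fits in ISO-8859-1
        String mixed = "h\u00E9llo\u4F60"; // contains a non-Latin-1 char (U+4F60)

        // With Compact Strings, the first can be stored internally at
        // 1 byte per char; the second falls back to UTF-16 at 2 bytes per char.
        int latinFootprint = latinOnly.getBytes(StandardCharsets.ISO_8859_1).length;
        int utf16Footprint = mixed.length() * 2;

        System.out.println(latinFootprint); // 5 bytes
        System.out.println(utf16Footprint); // 12 bytes
    }
}
```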


Note that neither classical, "compressed", nor "compact" strings ever used UTF-8 as the String representation. Modified UTF-8 is used in other contexts; e.g. in class files, and the object serialization format.
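The modification is visible through `DataOutputStream.writeUTF`, which writes the same modified UTF-8 format used by serialization: U+0000 becomes the two-byte sequence `C0 80` instead of standard UTF-8's single `00` byte.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ModifiedUtf8Demo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeUTF("\u0000");
        byte[] out = bos.toByteArray();
        // writeUTF prepends a 2-byte length (here 2), then the payload:
        // NUL is encoded as C0 80 in modified UTF-8.
        for (byte b : out) {
            System.out.printf("%02X ", b & 0xFF); // prints: 00 02 C0 80
        }
        System.out.println();
    }
}
```

One consequence: modified UTF-8 strings never contain embedded zero bytes, which matters for C-style string handling inside the JVM.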


To answer your specific questions:

Modified UTF-8? Or UTF-16? Which one is correct?

Either UTF-16 or an adaptive representation that depends on the actual data; see above.

And how many bytes does Java use for a char in memory?

A single char uses 2 bytes. There might be some "wastage" due to possible padding, depending on the context.

A char[] is 2 bytes per character plus the object header (typically 12 bytes including the array length) padded to (typically) a multiple of 8 bytes.
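Using the figures above (12-byte header as quoted, 2 bytes per char, 8-byte alignment; all of these are typical-HotSpot assumptions, not guarantees), a back-of-envelope estimate looks like this:

```java
public class CharArraySizeEstimate {
    // Rough size of a char[] on a typical 64-bit HotSpot JVM.
    // Header and alignment sizes are assumptions; the real layout
    // varies by JVM version and flags.
    static long charArrayBytes(int length) {
        long header = 12;           // object header incl. array length (per above)
        long data = 2L * length;    // 2 bytes per char
        return (header + data + 7) / 8 * 8; // pad up to a multiple of 8
    }

    public static void main(String[] args) {
        System.out.println(charArrayBytes(0));  // 16
        System.out.println(charArrayBytes(10)); // 32
    }
}
```

For exact numbers on a given JVM, a tool such as JOL (Java Object Layout) is the reliable way to measure.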

Please let me know which one is correct and how many bytes it uses.

If we are talking about a String now, it is not possible to give a general answer. It will depend on the Java version and hardware platform, as well as the String length and (in some cases) what the characters are. Indeed, for some versions of Java it even depends on how you created the String.

Solution 3

UTF-16.

From http://java.sun.com/javase/technologies/core/basic/intl/faq.jsp :

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.

Solution 4

The size of a char is 2 bytes.

Therefore, I would say that Java uses UTF-16 for internal String representation.

Author: Johnny Lim

Updated on July 11, 2022

Comments

  • Johnny Lim
    Johnny Lim almost 2 years

    I searched Java's internal representation for String, but I've got two materials which look reliable but inconsistent.

    One is:

    http://www.codeguru.com/cpp/misc/misc/multi-lingualsupport/article.php/c10451

    and it says:

    Java uses UTF-16 for the internal text representation and supports a non-standard modification of UTF-8 for string serialization.

    The other is:

    http://en.wikipedia.org/wiki/UTF-8#Modified_UTF-8

    and it says:

    Tcl also uses the same modified UTF-8[25] as Java for internal representation of Unicode data, but uses strict CESU-8 for external data.

    Modified UTF-8? Or UTF-16? Which one is correct? And how many bytes does Java use for a char in memory?

    Please let me know which one is correct and how many bytes it uses.

  • tchrist
    tchrist about 12 years
    This answer is incorrect. Because Java uses UTF-16, each Unicode character is either 2 bytes or 4 bytes.
  • tchrist
    tchrist about 12 years
    The size of a Unicode character in Java varies between 2 bytes and 4 bytes, depending on whether we’re in plane 0 or not.
  • Deduplicator
    Deduplicator about 9 years
    Java serialization (and class-files) use modified CESU-8 though, which is a modified UTF-8.
  • Vishy
    Vishy over 8 years
    New URL: docs.oracle.com/javase/8/docs/api/java/lang/String.html Note: Java 9 should be out next year. ;)
  • Koray Tugay
    Koray Tugay over 8 years
Can you elaborate on the alignment issues?
  • Koray Tugay
    Koray Tugay over 8 years
    @tchrist How? How can a character in Java be 4 bytes?
  • Vishy
    Vishy over 8 years
    @KorayTugay good question. This was 3 years ago but I think I was referring to padding in an object. Adding one char field could add up to 8 bytes with padding / object alignment.
  • tchrist
    tchrist over 8 years
    @KorayTugay Unicode characters (code points) are values between 0 and 0x10FFFF.
  • Koray Tugay
    Koray Tugay over 8 years
@tchrist How can a UTF-16 encoding end up as 4 bytes? Isn't UTF-16 always 2 bytes?
  • tchrist
    tchrist over 8 years
    @KorayTugay No, UTF-16 is either 2 bytes or 4 bytes. It is a variable-width encoding just like UTF-8. Only the obsolete UCS-2 is 2 bytes, and that's long dead.
  • Koray Tugay
    Koray Tugay almost 8 years
    @tchrist Java will treat a 4 byte Unicode Character as 2 Java Characters. Please see: tugay.biz/2016/07/stringlength-method-may-fool-you.html
  • Praxeolitic
    Praxeolitic over 6 years
    What endianness is used for the UTF-16? Also, you should mention that a Java char only supports BMP code points.
  • Vishy
    Vishy over 6 years
    @Praxeolitic the endianness is whatever is native to the processor. Generally little but it should almost never matter.
  • Ludovic Kuty
    Ludovic Kuty over 5 years
The code unit of UTF-16 is always 2 bytes. But a character itself needs 1 or 2 code units, hence 2 or 4 bytes.
  • Ludovic Kuty
    Ludovic Kuty over 5 years
A char is 2 bytes, but a character (char with no typewriter font) is 2 or 4 bytes, as @tchrist mentioned.
  • matvore
    matvore over 3 years
    @LudovicKuty a "character" is a rendering and language-specific concept - it can take up a large number of codepoints to compose a single character, so a character can take up hundreds of bytes. So it's more like "The codepoint itself - in UTF-16 - needs 2 or 4 bytes" Try an internet search for "unicode composition." You generally only care about "characters" - like at what codepoint a character begins or how many characters are in a string - if you're building a UI framework or implementing rendering logic.
  • Ludovic Kuty
    Ludovic Kuty over 3 years
    Yes, the codepoint, my bad. The notion of a character is quite abstract in the Unicode standard (if I remember correctly).
  • Stephen C
    Stephen C about 3 years
    The FAQ that is linked in this answer no longer exists. The closest I can find is this: docs.oracle.com/javase/8/docs/technotes/guides/intl/…. But note that if you carefully parse both the quoted text and the link I found, neither actually says what the internal String representation is. (They say that a String represents a char sequence, but that isn't the same thing.) In fact ... for recent Java implementations, the default implementation of String uses a byte[] rather than a char[] internally. You can check the OpenJDK source code to see.
  • Stephen C
    Stephen C about 3 years
    In fact, your inference is incorrect. Recent implementations do not (always) use UTF-16 for internal String representations.
  • Maarten Bodewes
    Maarten Bodewes over 2 years
    This answer is outdated. Generally you should not presume to know what the internal representation looks like. If this answer wants to be saved and not report BS, it should be updated with a specific runtime or runtimes for which this is the case.