UTF-16 to ASCII conversion in Java


Solution 1

How about this:

String input = ... // my UTF-16 string
StringBuilder sb = new StringBuilder(input.length());
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    if (ch <= 0xFF) {
        sb.append(ch);
    }
}

byte[] ascii = sb.toString().getBytes("ISO-8859-1"); // aka LATIN-1

This is probably not the most efficient way to do this conversion for large strings since we copy the characters twice. However, it has the advantage of being straightforward.

BTW, strictly speaking there is no such character set as 8-bit ASCII. ASCII is a 7-bit character set. LATIN-1 is the nearest thing there is to an "8-bit ASCII" character set (and block 0 of Unicode is equivalent to LATIN-1) so I'll assume that's what you mean.

EDIT: in the light of the update to the question, the solution is even simpler:

String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    ascii[i] = (byte) input.charAt(i);
}

This solution is more efficient. Since we now know how many bytes to expect, we can preallocate the byte array and copy the (truncated) characters without using a StringBuilder as an intermediate buffer.

However, I'm not convinced that dealing with bad data in this way is sensible.

EDIT 2: there is one more obscure "gotcha" with this. Unicode actually defines code points (characters) to be "roughly 21 bit" values ... 0x000000 to 0x10FFFF ... and uses surrogates to represent codes > 0x00FFFF. In other words, a Unicode code point > 0x00FFFF is actually represented in UTF-16 as two "characters". Neither my answer nor any of the others takes account of this (admittedly esoteric) point. In fact, dealing with code points > 0x00FFFF in Java is rather tricky in general. This stems from the fact that 'char' is a 16-bit type and String is defined in terms of 'char'.
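
To make the point concrete, here is a small check (my own addition, not part of the original answer) showing that a single supplementary code point occupies two chars in a String:

String s = "A\uD83D\uDE00B"; // "A", then U+1F600 (an emoji) as a surrogate pair, then "B"
System.out.println(s.length());                             // 4 chars ...
System.out.println(s.codePointCount(0, s.length()));        // ... but only 3 code points
System.out.println(Integer.toHexString(s.codePointAt(1)));  // 1f600

A naive char-by-char conversion would therefore turn that one emoji into two junk bytes, one per surrogate.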

EDIT 3: maybe a more sensible solution for dealing with unexpected characters that don't convert to ASCII is to replace them with the standard replacement character:

String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    ascii[i] = (ch <= 0xFF) ? (byte) ch : (byte) '?';
}

Solution 2

You can use java.nio for an easy solution:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;

// first encode the UTF-16 string as a ByteBuffer
ByteBuffer bb = Charset.forName("utf-16").encode(CharBuffer.wrap(utf16str));
// then decode those bytes as US-ASCII
CharBuffer ascii = Charset.forName("US-ASCII").decode(bb);
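
If you want explicit control over how undecodable bytes are handled, the last line could instead be written with a configured CharsetDecoder. This is only a sketch of that API on my part, not something from the original answer; note that CharsetDecoder.decode declares the checked CharacterCodingException:

import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

// Replace, rather than throw on, anything the decoder cannot handle;
// for US-ASCII that is every byte above 0x7F, which becomes U+FFFD.
CharsetDecoder decoder = Charset.forName("US-ASCII").newDecoder()
        .onMalformedInput(CodingErrorAction.REPLACE)
        .onUnmappableCharacter(CodingErrorAction.REPLACE);
try {
    CharBuffer ascii = decoder.decode(bb); // bb encoded as in the snippet above
} catch (CharacterCodingException e) {
    throw new AssertionError("unreachable with REPLACE", e); // decode() still declares it
}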

Solution 3

Java internally represents strings in UTF-16. If a String object is what you are starting with, you can encode using String.getBytes(Charset c), where you might specify US-ASCII (which can map code points 0x00-0x7f) or ISO-8859-1 (which can map code points 0x00-0xff, and may be what you mean by "8-bit ASCII").

As for adding "bad data"... ASCII or ISO-8859-1 strings simply can't represent values outside of a certain range. I believe getBytes will simply drop characters it's not able to represent in the destination character set.
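
If you want to see what your own JDK actually does with unmappable characters, a quick sketch like this (my addition, not part of the answer) makes it easy to check; on the JDKs I have tried, US-ASCII substitutes '?' (0x3F) rather than dropping the character:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Encode a string containing a character outside the ASCII range and inspect the bytes.
String mixed = "abc\u2603"; // "abc" followed by U+2603 (snowman)
byte[] ascii = mixed.getBytes(StandardCharsets.US_ASCII);
System.out.println(Arrays.toString(ascii)); // typically [97, 98, 99, 63], where 63 is '?'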

Solution 4

Since this is an exercise, it sounds like you need to implement this manually. You can think of an encoding (e.g. UTF-16 or ASCII) as a lookup table that matches a sequence of bytes to a logical character (a codepoint).

Java uses UTF-16 strings, which means that any given codepoint can be represented in one or two char variables. Whether you want to handle the two-char surrogate pairs depends on how likely you think your application is to encounter them (see the Character class for detecting them). ASCII only uses the first 7 bits of an octet (byte), so the valid range of values is 0 to 127. UTF-16 uses identical values for this range (they're just wider). This can be confirmed with this code:

import java.nio.charset.Charset;

Charset ascii = Charset.forName("US-ASCII");
byte[] buffer = new byte[1];
char[] cbuf = new char[1];
for (int i = 0; i <= 127; i++) {
  buffer[0] = (byte) i;
  cbuf[0] = (char) i;
  String decoded = new String(buffer, ascii);
  String utf16String = new String(cbuf);
  if (!utf16String.equals(decoded)) {
    throw new IllegalStateException();
  }
  System.out.print(utf16String);
}
System.out.println("\nOK");

Therefore, you can convert UTF-16 to ASCII by casting a char to a byte.
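
If you do decide to handle the two-char surrogate pairs mentioned above, a sketch along these lines (my own illustration, not from the original answer) emits one '?' per supplementary character instead of two stray bytes:

import java.io.ByteArrayOutputStream;

// Walk the string by code point: values 0x00-0x7F pass through unchanged,
// everything else (including supplementary characters, which occupy two
// chars in the String) is replaced by a single '?'.
static byte[] toAscii(String input) {
    ByteArrayOutputStream out = new ByteArrayOutputStream(input.length());
    for (int i = 0; i < input.length(); ) {
        int cp = input.codePointAt(i);
        out.write(cp <= 0x7F ? cp : '?');
        i += Character.charCount(cp);
    }
    return out.toByteArray();
}

Whether a helper like this (the name toAscii is just for illustration) is worth the trouble depends, as noted above, on how likely your input is to contain supplementary characters.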

You can read more about Java character encoding here.


Comments

  • His
    His almost 3 years

    Having ignored it all this time, I am currently forcing myself to learn more about Unicode in Java. There is an exercise I need to do about converting a UTF-16 string to 8-bit ASCII. Can someone please enlighten me how to do this in Java? I understand that you can't represent all possible Unicode values in ASCII, so in this case I want a code which exceeds 0xFF to be merely added anyway (bad data should also just be added silently).

    Thanks!

    • Stephen C
      Stephen C over 14 years
      "added away" ??? Do you mean "thrown away"? Discarded?
    • His
      His over 14 years
      Sorry for not being clear in the first place. Actually, I am not too clear myself. The exercise in the book I read only says that "a code which exceeds 0xFF should merely be cast to a byte and added anyway (bad data should be added silently).".
    • Joachim Sauer
      Joachim Sauer over 14 years
      0xFF is not a valid value for an ASCII character. ASCII is 7-bit, so the highest valid value is 0x7F.
  • Stephen C
    Stephen C over 14 years
    "I believe getBytes will simply drop characters it's not able to represent in the destination character set." It depends on the Charset's default replacement byte array ... according to the Javadoc.
  • Phil
    Phil over 14 years
    I happened upon that in the Javadoc as well, but I couldn't find anything about how the default Charset objects are implemented. Do you know what actually happens when you invoke, say, Charset.forName("US-ASCII")?
  • rplankenhorn
    rplankenhorn over 11 years
    In light of "Edit 2" above, could we not mark this as a solution? This is not a solution so it shouldn't be marked as such.
  • Stephen C
    Stephen C over 7 years
    @rplankenhorn - Actually, since the problem is really about "forcing" Unicode into ASCII, either version of the conversion is an adequate solution even in the face of surrogates. In the first version, any code unit > 0xFF is going to be removed. In the second version, any code unit > 0xFF is going to be "added anyway" ... which is what the OP explicitly asked for. (Not that I think that that is a sensible approach.)
  • Manabu Tokunaga
    Manabu Tokunaga over 2 years
    Waking up this really old issue, but this is 2021 with Windows WSL2, and when I get a path from a WSL-mounted drive on the Windows side, I did not get the standard "ASCII" file string in java.nio.Path. Basically it was an ASCII string with every other byte set to 0. The solution was easy (after reading this post): new String(s.getBytes(StandardCharsets.US_ASCII)) brought the string back the way I needed it.