Truncating Strings by Bytes
Solution 1
Why not convert to bytes and walk forward, obeying UTF-8 character boundaries as you go, until you've got the max number of bytes, then convert those bytes back into a string?
Or you could just cut the original string if you keep track of where the cut should occur:
import java.nio.charset.StandardCharsets;

// Assuming that Java will always produce valid UTF-8 from a string, so no error checking!
// (With an explicit UTF-8 charset this holds; unpaired surrogates are encoded as '?'.)
public class UTF8Cutter {
    public static String cut(String s, int n) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8); // explicit charset, not the platform default
        if (utf8.length < n) n = utf8.length;
        int n16 = 0;
        int i = 0;
        while (i < n) {
            int advance = 1;
            if ((utf8[i] & 0x80) == 0) i += 1;          // 1-byte (ASCII)
            else if ((utf8[i] & 0xE0) == 0xC0) i += 2;  // 2-byte sequence
            else if ((utf8[i] & 0xF0) == 0xE0) i += 3;  // 3-byte sequence
            else { i += 4; advance = 2; }               // 4-byte sequence: 2 UTF-16 code units
            if (i <= n) n16 += advance;
        }
        return s.substring(0, n16);
    }
}
Note: edited to fix bugs on 2014-08-25
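For a quick sanity check, the same algorithm is repeated below as a self-contained class (the demo class name and the test inputs are our additions):

```java
import java.nio.charset.StandardCharsets;

public class UTF8CutterDemo {
    // same algorithm as the cut() above
    static String cut(String s, int n) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        if (utf8.length < n) n = utf8.length;
        int n16 = 0;
        int i = 0;
        while (i < n) {
            int advance = 1;
            if ((utf8[i] & 0x80) == 0) i += 1;
            else if ((utf8[i] & 0xE0) == 0xC0) i += 2;
            else if ((utf8[i] & 0xF0) == 0xE0) i += 3;
            else { i += 4; advance = 2; } // supplementary char: 2 UTF-16 units
            if (i <= n) n16 += advance;
        }
        return s.substring(0, n16);
    }

    public static void main(String[] args) {
        // "ä" is 2 bytes in UTF-8, so a 3-byte limit keeps only one of them
        System.out.println(cut("ääää", 3));   // ä
        System.out.println(cut("abcdef", 4)); // abcd
    }
}
```

The 4-byte branch adds 2 to n16 because supplementary characters occupy two UTF-16 code units in a Java String, so the final substring length is counted in code units, not code points.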
Solution 2
A saner solution is to use a CharsetDecoder:
final Charset CHARSET = Charset.forName("UTF-8"); // or any other charset
final byte[] bytes = inputString.getBytes(CHARSET);
final CharsetDecoder decoder = CHARSET.newDecoder();
decoder.onMalformedInput(CodingErrorAction.IGNORE); // drop a trailing, partial sequence
decoder.reset();
// limit is the maximum number of bytes to keep
final CharBuffer decoded = decoder.decode(ByteBuffer.wrap(bytes, 0, limit));
final String outputString = decoded.toString();
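For context, here is that snippet wrapped into a runnable method; the method name and the limit parameter are assumptions, since the original leaves limit undefined, and the checked CharacterCodingException is wrapped because it cannot occur with the IGNORE action:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class DecoderCut {
    static String cut(String inputString, int limit) {
        final Charset charset = StandardCharsets.UTF_8;
        final byte[] bytes = inputString.getBytes(charset);
        if (bytes.length <= limit) {
            return inputString; // already fits
        }
        final CharsetDecoder decoder = charset.newDecoder();
        decoder.onMalformedInput(CodingErrorAction.IGNORE); // silently drop a torn trailing sequence
        decoder.reset();
        try {
            return decoder.decode(ByteBuffer.wrap(bytes, 0, limit)).toString();
        } catch (CharacterCodingException e) {
            throw new IllegalStateException(e); // unreachable: malformed input is IGNOREd
        }
    }

    public static void main(String[] args) {
        // 9 ASCII bytes fit; the 2-byte "á" is torn at byte 10 and dropped
        System.out.println(cut("123456789á1", 10)); // 123456789
        System.out.println(cut("abc", 10));         // abc
    }
}
```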
Solution 3
I think Rex Kerr's solution has 2 bugs.
- First, it will truncate to limit+1 if a non-ASCII character is just before the limit. Truncating "123456789á1" will result in "123456789á", which takes 11 bytes in UTF-8.
- Second, I think he misinterpreted the UTF-8 encoding. https://en.wikipedia.org/wiki/UTF-8#Description shows that a 110xxxxx byte at the beginning of a UTF-8 sequence tells us that the representation is 2 bytes long (as opposed to 3). That's the reason his implementation usually doesn't use up all available space (as Nissim Avitan noted).
Please find my corrected version below:
public String cut(String s, int charLimit) throws UnsupportedEncodingException {
    byte[] utf8 = s.getBytes("UTF-8");
    if (utf8.length <= charLimit) {
        return s;
    }
    int n16 = 0;
    boolean extraLong = false;
    int i = 0;
    while (i < charLimit) {
        // Unicode characters above U+FFFF need 2 words in UTF-16
        extraLong = ((utf8[i] & 0xF0) == 0xF0);
        if ((utf8[i] & 0x80) == 0) {
            i += 1;
        } else {
            int b = utf8[i];
            while ((b & 0x80) > 0) {
                ++i;
                b = b << 1;
            }
        }
        if (i <= charLimit) {
            n16 += extraLong ? 2 : 1;
        }
    }
    return s.substring(0, n16);
}
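To see the method in action, here it is repeated as a self-contained class (the class name is ours, and the charset constant is swapped in for the string literal to avoid the checked exception; the algorithm is unchanged):

```java
import java.nio.charset.StandardCharsets;

public class CutDemo {
    // same algorithm as the cut() above
    static String cut(String s, int charLimit) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        if (utf8.length <= charLimit) return s;
        int n16 = 0;
        boolean extraLong = false;
        int i = 0;
        while (i < charLimit) {
            extraLong = ((utf8[i] & 0xF0) == 0xF0); // above U+FFFF: 2 UTF-16 units
            if ((utf8[i] & 0x80) == 0) {
                i += 1;
            } else {
                // count the leading 1-bits of the lead byte = sequence length
                int b = utf8[i];
                while ((b & 0x80) > 0) { ++i; b = b << 1; }
            }
            if (i <= charLimit) n16 += extraLong ? 2 : 1;
        }
        return s.substring(0, n16);
    }

    public static void main(String[] args) {
        System.out.println(cut("123456789á1", 10)); // 123456789 (the 2-byte "á" no longer fits)
        System.out.println(cut("ääää", 5));         // ää (4 bytes; a third "ä" would need 6)
    }
}
```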
I still thought this was far from efficient. So if you don't really need the String representation of the result and a byte array will do, you can use this:
private byte[] cutToBytes(String s, int charLimit) throws UnsupportedEncodingException {
    byte[] utf8 = s.getBytes("UTF-8");
    if (utf8.length <= charLimit) {
        return utf8;
    }
    if ((utf8[charLimit] & 0xC0) != 0x80) {
        // utf8[charLimit] starts a new character (ASCII or lead byte),
        // so the limit doesn't cut a UTF-8 sequence
        return Arrays.copyOf(utf8, charLimit);
    }
    // the limit falls on a continuation byte: walk back over the
    // continuation bytes of the sequence that got cut
    int i = 0;
    while ((utf8[charLimit-i-1] & 0x80) > 0 && (utf8[charLimit-i-1] & 0x40) == 0) {
        ++i;
    }
    // utf8[charLimit-i-1] is now the lead byte of the cut sequence; drop it too
    return Arrays.copyOf(utf8, charLimit-i-1);
}
The funny thing is that with a realistic 20 to 500 byte limit they perform pretty much the same, if you create a String from the byte array again.
Please note that both methods assume valid UTF-8 input, which is a valid assumption after using Java's getBytes() function.
Solution 4
String s = "FOOBAR";
int limit = 3;
s = new String(s.getBytes(), 0, limit);
Result value of s: FOO
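This one-liner is only safe when the cut lands on a character boundary, which is always true for pure ASCII. A small sketch (with the charset made explicit and a hypothetical helper name) shows what happens otherwise: the String constructor replaces the torn trailing sequence with U+FFFD.

```java
import java.nio.charset.StandardCharsets;

public class NaiveCutDemo {
    // the one-liner above, with an explicit charset instead of the platform default
    static String naiveCut(String s, int limit) {
        return new String(s.getBytes(StandardCharsets.UTF_8), 0, limit, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(naiveCut("FOOBAR", 3)); // FOO — fine for ASCII
        // a 3-byte cut lands in the middle of the second 2-byte "ä";
        // the constructor substitutes the replacement character U+FFFD
        System.out.println(naiveCut("ää", 3));     // ä�
    }
}
```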
Solution 5
Use the UTF-8 CharsetEncoder, and encode until the output ByteBuffer contains as many bytes as you are willing to take, by looking for CoderResult.OVERFLOW.
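A minimal sketch of that idea (class and method names are our assumptions): encode into a ByteBuffer of the desired size; when the encoder reports CoderResult.OVERFLOW it has consumed exactly the characters that fit completely, so the input buffer's position gives the substring length.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class EncoderCut {
    static String cut(String s, int maxBytes) {
        CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
        ByteBuffer out = ByteBuffer.allocate(maxBytes); // hard byte budget
        CharBuffer in = CharBuffer.wrap(s);
        // OVERFLOW means the buffer filled up; UNDERFLOW means the whole string fit
        CoderResult result = enc.encode(in, out, true);
        // in.position() = number of UTF-16 units encoded completely
        return s.substring(0, in.position());
    }

    public static void main(String[] args) {
        System.out.println(cut("123456789á1", 10)); // 123456789
        System.out.println(cut("abc", 10));         // abc
    }
}
```

Because the encoder never consumes half a surrogate pair, the resulting substring can't end in a lone surrogate, which the byte-walking approaches have to handle manually.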
stevebot
Updated on July 09, 2022

Comments
-
stevebot almost 2 years
I created the following for truncating a string in Java to a new string with a given number of bytes.

String truncatedValue = "";
String currentValue = string;
int pivotIndex = (int) Math.round(((double) string.length()) / 2);
// encoding and maxBytesLength are defined elsewhere
while (!truncatedValue.equals(currentValue)) {
    currentValue = string.substring(0, pivotIndex);
    byte[] bytes = null;
    bytes = currentValue.getBytes(encoding);
    if (bytes == null) {
        return string;
    }
    int byteLength = bytes.length;
    int newIndex = (int) Math.round(((double) pivotIndex) / 2);
    if (byteLength > maxBytesLength) {
        pivotIndex = newIndex;
    } else if (byteLength < maxBytesLength) {
        pivotIndex = pivotIndex + 1;
    } else {
        truncatedValue = currentValue;
    }
}
return truncatedValue;
This is the first thing that came to my mind, and I know I could improve on it. I saw another post asking a similar question, but they were truncating Strings using the bytes instead of String.substring. I think I would rather use String.substring in my case.
EDIT: I just removed the UTF-8 reference because I would rather be able to do this for different storage types as well.
-
stevebot over 13 years
I definitely could do that. Is there any reason why using String.substring is any worse? It seems like doing it the way you describe would have to account for all the code points, which isn't a whole lot of fun (depending on your definition of fun :) ).
-
Rex Kerr over 13 years
@stevebot - To be efficient, you need to take advantage of the known structure of the data. If you don't care about efficiency and want it to be easy, or you want to support every possible Java encoding without having to know what it is, your method seems reasonable enough.
-
Vishy over 13 years
@nguyendat, there are lots of reasons this is not very performant. The main one would be the object creation for substring() and getBytes(). However, you would be surprised how much you can do in a millisecond, and that is usually enough.
-
Stefan L about 11 years
Doesn't look like this solution prevents a trailing half surrogate pair? Second, in case getBytes().length would happen to be applied to both halves of a surrogate pair individually (not immediately obvious to me it never will), it'd also underestimate the size of the UTF-8 representation of the pair as a whole, assuming the "replacement byte array" is a single byte. Third, the 4-byte UTF-8 code points all require a two-char surrogate pair in Java, so effectively the max is just 3 bytes per Java character.
-
Stefan L about 11 years
That method doesn't handle surrogate pairs properly, e.g. substring("\uD800\uDF30\uD800\uDF30", 4).getBytes("UTF-8").length will return 8, not 4. Half a surrogate pair is represented as a single-byte "?" by String.getBytes("UTF-8").
-
Raymond Lukanta almost 9 years
You should also catch UnsupportedEncodingException at s.getBytes("UTF-8").
-
Zsolt Taskai over 8 years
I don't see getBytes throwing anything. Although docs.oracle.com/javase/7/docs/api/java/lang/… says "The behavior of this method when this string cannot be encoded in the given charset is unspecified."
-
Raymond Lukanta over 8 years
The page you linked shows that it throws UnsupportedEncodingException: "public byte[] getBytes(String charsetName) throws UnsupportedEncodingException".
-
Zsolt Taskai over 8 years
Thanks! Strange, I don't know what version I used when I posted this solution 2 years ago. Updating the code above.
-
Hans Brende over 7 years
@StefanL I posted a variant of this answer here which should handle surrogate pairs properly.
-
Pikachu about 7 years
Instead of providing the encoding name as a String, you can use the Charset constants from the StandardCharsets class, because the String#getBytes(Charset charset) method does not throw UnsupportedEncodingException.
-
Holger about 4 years
Cutting at an arbitrary byte index may create invalid encoded data, as a single character may use multiple bytes (especially with UTF-8). Worse, with other encodings it might produce wrong valid characters, which are not ignored. You could easily avoid this by first allocating a ByteBuffer with the desired size, then using it with a CharsetEncoder, which will automatically encode only as many valid characters as fit into the buffer, then decoding the buffer to a String. Similar approach, but without the bug, and even more efficient, as it won't encode characters beyond the intended limit.
-
Holger about 4 years
See this answer. It does even eliminate the decoding step.
-
Holger about 4 years
Wouldn't it be even more efficient to iterate over the String's characters and predict their encoded length, instead of encoding the entire string and then iterating over the encoded bytes to reconstitute their character association? Similar to this, just with non-BMP character support and counting before doing substring, like in your answer…
-
Martin Rust over 3 years
When the MAX_LENGTH interrupts the byte array in the middle of a multi-byte sequence, the resulting string ends with a "?". Example: s = "ää"; MAX_LENGTH = 3; result: "ä?". Given the simplicity of this code, however, in some situations this might still be an option.
-
Martin Rust over 3 years
Correcting my comment: MAX_LENGTH = 5. (Why does the solution use MAX_LENGTH - 2?) Also note that as of Java 7, "UTF-8" should be replaced by StandardCharsets.UTF_8.
-
kan almost 3 years
@Holger My solution ignores truncated multibyte chars via CodingErrorAction.IGNORE, so it works fine. I am interested to see an example where it fails. However, I agree, your solution looks neater and could be more performant.
-
Holger almost 3 years
Yes, for UTF-8, using CodingErrorAction.IGNORE will do the right thing. But the OP said "I would rather be able to do this for different storage types aswell", and for other encodings, tearing multibyte sequences apart may result in valid (but wrong) characters.