Serializing an object as UTF-8 XML in .NET

Solution 1

Your code doesn't keep the UTF-8 in memory: by reading the stream back into a string you convert it back to UTF-16, so it's no longer UTF-8 (though ideally it's best to think about strings at a higher level than any particular encoding, except when forced to do otherwise).

To get the actual UTF-8 octets you could use:

var serializer = new XmlSerializer(typeof(SomeSerializableObject));

var memoryStream = new MemoryStream();
var streamWriter = new StreamWriter(memoryStream, System.Text.Encoding.UTF8);

serializer.Serialize(streamWriter, entry);

byte[] utf8EncodedXml = memoryStream.ToArray();

I've left out disposal, just as you did. I slightly favour the following (with normal disposal put back in):

var serializer = new XmlSerializer(typeof(SomeSerializableObject));
using (var memStm = new MemoryStream())
using (var xw = XmlWriter.Create(memStm))
{
    serializer.Serialize(xw, entry);
    var utf8 = memStm.ToArray();
}

That's much the same amount of complexity, but it shows that at every stage there is a reasonable choice to do something else, the most pressing of which is to serialise to somewhere other than memory, such as a file, TCP/IP stream, database, etc. All in all, it's not really that verbose.
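
For instance, a minimal sketch of writing straight to a file rather than to memory (the file name is just illustrative):

var serializer = new XmlSerializer(typeof(SomeSerializableObject));
using (var xw = XmlWriter.Create("entry.xml", new XmlWriterSettings { Encoding = Encoding.UTF8 }))
{
    // The settings make the UTF-8 choice explicit, though it is also the default
    serializer.Serialize(xw, entry);
}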

Solution 2

No, you can use a StringWriter to get rid of the intermediate MemoryStream. However, to force it into UTF-8 you need to use a subclass of StringWriter which overrides the Encoding property:

public class Utf8StringWriter : StringWriter
{
    public override Encoding Encoding => Encoding.UTF8;
}

Or if you're not using C# 6 yet:

public class Utf8StringWriter : StringWriter
{
    public override Encoding Encoding { get { return Encoding.UTF8; } }
}

Then:

var serializer = new XmlSerializer(typeof(SomeSerializableObject));
string utf8;
using (StringWriter writer = new Utf8StringWriter())
{
    serializer.Serialize(writer, entry);
    utf8 = writer.ToString();
}

Obviously you can make Utf8StringWriter into a more general class which accepts any encoding in its constructor - but in my experience UTF-8 is by far the most commonly required "custom" encoding for a StringWriter :)
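
If you do want the more general version, a sketch might look like this (the EncodingStringWriter name is just illustrative):

public class EncodingStringWriter : StringWriter
{
    private readonly Encoding _encoding;

    public EncodingStringWriter(Encoding encoding)
    {
        _encoding = encoding;
    }

    // Reported to callers such as XmlSerializer when they pick the declared encoding
    public override Encoding Encoding => _encoding;
}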

Now as Jon Hanna says, this will still be UTF-16 internally, but presumably you're going to pass it to something else at some point, to convert it into binary data... at that point you can use the above string, convert it into UTF-8 bytes, and all will be well - because the XML declaration will specify "utf-8" as the encoding.
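
For example, a minimal sketch of that final step (variable names are illustrative):

// The string is UTF-16 in memory, but its declaration says "utf-8",
// so encode it as UTF-8 at the point you need actual bytes.
byte[] utf8Bytes = Encoding.UTF8.GetBytes(utf8);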

EDIT: A short but complete example to show this working:

using System;
using System.Text;
using System.IO;
using System.Xml.Serialization;

public class Test
{    
    public int X { get; set; }

    static void Main()
    {
        Test t = new Test();
        var serializer = new XmlSerializer(typeof(Test));
        string utf8;
        using (StringWriter writer = new Utf8StringWriter())
        {
            serializer.Serialize(writer, t);
            utf8 = writer.ToString();
        }
        Console.WriteLine(utf8);
    }


    public class Utf8StringWriter : StringWriter
    {
        public override Encoding Encoding => Encoding.UTF8;
    }
}

Result:

<?xml version="1.0" encoding="utf-8"?>
<Test xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <X>0</X>
</Test>

Note the declared encoding of "utf-8" which is what we wanted, I believe.

Solution 3

Very good answer using inheritance; just remember that constructors aren't inherited, so if you want the StringBuilder-backed overload you have to declare it yourself and pass the builder through to the base class:

public class Utf8StringWriter : StringWriter
{
    public Utf8StringWriter(StringBuilder sb) : base (sb)
    {
    }
    public override Encoding Encoding { get { return Encoding.UTF8; } }
}
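
Usage is then much the same as before; a sketch that assumes the same serializer and entry as in the earlier solutions, reusing an existing StringBuilder:

var sb = new StringBuilder();
using (var writer = new Utf8StringWriter(sb))
{
    serializer.Serialize(writer, entry);
}
string utf8 = sb.ToString();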

Solution 4

I found this blog post which explains the problem very well, and defines a few different solutions:

(dead link removed)

I've settled on the idea that the best way to do it is to omit the XML declaration entirely while the document is in memory. It actually is UTF-16 at that point anyway, and the declaration doesn't seem meaningful until the document has been written somewhere with a particular encoding; even then the declaration isn't required, and omitting it doesn't seem to break deserialization.

As @Jon Hanna mentions, this can be done with an XmlWriter created like this:

XmlWriter writer = XmlWriter.Create(output, new XmlWriterSettings { OmitXmlDeclaration = true });
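
Putting it together, a sketch of serializing with no declaration at all (assuming the same serializer and entry as in the other solutions):

var settings = new XmlWriterSettings { OmitXmlDeclaration = true };
var output = new StringBuilder();
using (var writer = XmlWriter.Create(output, settings))
{
    serializer.Serialize(writer, entry);
}
// No <?xml ... ?> declaration, so there is no encoding claim to contradict
// whatever encoding the text is eventually written out with.
string xml = output.ToString();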

Comments

  • Garry Shutler
    Garry Shutler over 3 years

    Proper object disposal has been removed for brevity, but I'm shocked if this is the simplest way to encode an object as UTF-8 in memory. There has to be an easier way, doesn't there?

    var serializer = new XmlSerializer(typeof(SomeSerializableObject));
    
    var memoryStream = new MemoryStream();
    var streamWriter = new StreamWriter(memoryStream, System.Text.Encoding.UTF8);
    
    serializer.Serialize(streamWriter, entry);
    
    memoryStream.Seek(0, SeekOrigin.Begin);
    var streamReader = new StreamReader(memoryStream, System.Text.Encoding.UTF8);
    var utf8EncodedXml = streamReader.ReadToEnd();
    
  • Jon Hanna
    Jon Hanna over 13 years
    Even when you override the Encoding property on StringWriter it still sends the written data to a StringBuilder, so it's still UTF-16. And a string can only ever be UTF-16.
  • Jon Skeet
    Jon Skeet over 13 years
    @Jon: Have you tried it? I have, and it works. It's the declared encoding which is important here; obviously internally the string is still UTF-16, but that doesn't make any difference until it's converted to binary (which could use any encoding, including UTF-8). The TextWriter.Encoding property is used by the XML serializer to determine which encoding name to specify within the document itself.
  • Jon Hanna
    Jon Hanna over 13 years
    I tried it and I got a string in UTF-16. Maybe that's what the querent wants.
  • Jon Skeet
    Jon Skeet over 13 years
    @Jon: And what was the declared encoding? In my experience, that's what questions like this are really trying to do - create an XML document which declares itself to be in UTF-8. As you say, it's best not to consider the text to be in any encoding until you need to... but as the XML document declares an encoding, that's something you need to consider.
  • Jon Hanna
    Jon Hanna over 13 years
    Yep, I've asked the querent to qualify. I read the question literally, but since the code he gives as an example produces a string, maybe your reading of it is correct (though in that case I'd suggest not having a declaration at all, since it would then be valid across UTF-8/UTF-16 re-encodings).
  • Garry Shutler
    Garry Shutler over 13 years
    @Jon Hanna is there a way to serialize to XML without having a declaration at all?
  • Jon Hanna
    Jon Hanna over 13 years
    @Garry, simplest I can think of right now is to take the second example in my answer, but when you create the XmlWriter do so with the factory method that takes an XmlWriterSettings object, and have the OmitXmlDeclaration property set to true.
  • Adriano Carneiro
    Adriano Carneiro over 11 years
    +1 Your Utf8StringWriter solution is extremely nice and clean
  • ony
    ony over 11 years
    Also, if you want to suppress the BOM you can use XmlWriter.Create(memoryStream, new XmlWriterSettings { Encoding = new UTF8Encoding(false) }).
  • wuhcwdc
    wuhcwdc almost 11 years
    @JonSkeet: I checked the difference between UTF-8 and UTF-16 here: differencebetween.net/technology/… and found that we should use UTF-8 for encoding. Kindly confirm whether that is correct. Can you please tell me in which situations we should use UTF-16?
  • wuhcwdc
    wuhcwdc almost 11 years
    @JonSkeet - UTF-16 represents every character using two bytes, while UTF-8 uses the one-byte ASCII encodings for ASCII characters. Does this mean that if I encode a text file containing 10 characters, then with UTF-8 the file size becomes 10 * 8 = 80, i.e. 8 bits per character, and similarly 10 * 16 = 160 with UTF-16? Am I correct?
  • Jon Skeet
    Jon Skeet almost 11 years
    @PankajGarg: No, if all those characters are ASCII then the file will be 10 bytes in UTF-8 and 20 bytes in UTF-16. Remember bits != bytes.
  • briba
    briba about 10 years
    Wow! That's perfect! Thank you @JonSkeet
  • CRice
    CRice over 9 years
    Strange that StringWriter needs a subclass to use UTF-8; why is there no setter...
  • Jon Skeet
    Jon Skeet over 9 years
    @CRice: I'd have preferred a constructor parameter... but yes, it's a bit annoying.
  • Sudhanshu Mishra
    Sudhanshu Mishra over 8 years
    If someone (like me) needs to read the XML created like Jon shows, remember to reposition the memory stream to 0, otherwise you'll get an exception saying "Root element is missing". So do this: memStm.Position = 0; XmlReader xmlReader = XmlReader.Create(memStm)
  • Auguste Van Nieuwenhuyzen
    Auguste Van Nieuwenhuyzen about 8 years
    Hi @JonSkeet - big fan, but I'm afraid I can't get your Utf8StringWriter to compile in .NET 4.5. I couldn't use => and had to instead create the actual getter with public override Encoding Encoding { get { return Encoding.UTF8; }}. Though then it worked a treat! Thanks!
  • Jon Skeet
    Jon Skeet about 8 years
    @IanGrainger: Indeed, that's C# 6 code (it was updated in November to use C# 6, not by me...)
  • hanzolo
    hanzolo about 8 years
    This should be the answer. The generated XML shows the proper UTF-8 encoding with this solution.
  • Sergei G
    Sergei G over 3 years
    very nice solution