Preferred way of logging in .NET deployed to Azure


Solution 1

We use the built-in diagnostics that write to Azure Table storage. Anytime we need a message written to a log, it's just a "Trace.WriteLine(...)".
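For reference, the piece that makes Trace.WriteLine(...) flow into Azure Diagnostics is the SDK's trace listener. It's normally registered in web.config, but a minimal sketch of wiring it up in code looks like this (the class and method names here are just for illustration):

    using System.Diagnostics;
    using Microsoft.WindowsAzure.Diagnostics;

    public static class TraceSetup {
        public static void Init() {
            // Route Trace output to Azure Diagnostics, which buffers messages
            // locally and transfers them to table storage on a schedule.
            Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
            Trace.AutoFlush = true;
        }
    }

    // Anywhere in the application:
    // Trace.WriteLine("Processed order batch", "Information");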

Since the logs are written to Azure Table Storage, we have a process that will download the log messages, and remove them from the table storage. This works well for us, but I think it probably depends on the application.

http://msdn.microsoft.com/en-us/library/gg433048.aspx

Hope it helps!

[Update]

    public void GetLogs() {
        int cnt = 0;
        var entities = context.LogTable;
        while (true) {
            bool foundRows = false;
            foreach (var en in entities) {
                foundRows = true;
                processLogRow(en);
                context.DeleteObject(en);
                cnt++;
                // Table storage batches are capped at 100 operations, so flush
                // the pending deletes every 100 rows.
                if (cnt % 100 == 0) {
                    try {
                        context.SaveChanges(SaveChangesOptions.Batch);
                    } catch (Exception ex) {
                        Console.WriteLine("Exception deleting batch. {0}", ex.Message);
                    }
                }
            }
            if (!foundRows)
                break;
            // Flush any remaining deletes in the final, partial batch.
            context.SaveChanges(SaveChangesOptions.Batch);
        }
        Console.WriteLine("Done! Total Deleted: {0}", cnt);
    }

Solution 2

Adding a bit to Brosto's answer: It takes only a few lines of code to configure Azure Diagnostics. You decide what level you want to capture (verbose, informational, etc.) and how frequently you want to push locally-cached log messages to Azure storage (I usually go with something like 15-minute intervals). Log messages from all of your instances are then aggregated into the same table, easily queryable (or downloadable), with properties identifying role and instance.
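A rough sketch of that setup in the role's OnStart(), using the classic DiagnosticMonitor API (the connection-string name is the standard Diagnostics plugin setting; the level and interval are just the examples mentioned above):

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint {
        public override bool OnStart() {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Capture Information and above; use Verbose to capture everything.
            config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;

            // Push locally-buffered log messages to table storage every 15 minutes.
            config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(15);

            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

            return base.OnStart();
        }
    }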

There are additional trace methods as well, such as Trace.TraceError(), Trace.TraceWarning(), etc.

You can even create a trace listener and watch your log output in near real time on your local machine. The Azure AppFabric SDK Samples zip contains a sample (under \ServiceBus\Scenarios\CloudTrace) for doing this.

Solution 3

For error logging, the best solution I've seen is ELMAH. It requires a SQL database, but it's an error logging tool that actually helps diagnose problems. It works fine on Azure.

Solution 4

For all my Azure sites I use custom logging to Azure tables. Although it's a bit more work, I find it gives me more control over the information that gets stored. As Brosto commented above, it is best to have a local process that periodically downloads the logs to your local system. If you derive a class from TableServiceEntity, you can define a structure containing all the fields you wish to log, and reuse the same class in the local application that retrieves the logs. I maintain some examples of the code to do this on my logging using Azure table storage page, if it's of any help to anyone.
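To illustrate the approach (the class and property names below are hypothetical, not taken from the linked page), a log entity derived from TableServiceEntity might look like this:

    using System;
    using Microsoft.WindowsAzure.StorageClient;

    public class LogEntry : TableServiceEntity {
        public LogEntry() { } // parameterless ctor required for deserialization

        public LogEntry(string roleInstance, string severity, string message) {
            // Partition by day so a day's logs can be queried and purged together;
            // a descending-ticks row key lists the newest entries first.
            PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd");
            RowKey = string.Format("{0:D19}_{1:N}",
                DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks, Guid.NewGuid());
            RoleInstance = roleInstance;
            Severity = severity;
            Message = message;
        }

        public string RoleInstance { get; set; }
        public string Severity { get; set; }
        public string Message { get; set; }
    }

    // Writing an entry through a TableServiceContext:
    // context.AddObject("Logs", new LogEntry("WebRole_IN_0", "Error", "details..."));
    // context.SaveChanges();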

One of the problems I have experienced with the Trace.WriteLine method is that the logs are stored on the local instance and only periodically transferred to Azure table storage. Given the transient nature of an Azure instance, all local storage must be considered temporary at best, so there is always a window in which log data held on the local drive can be lost.

Given the low cost of Azure table storage transactions, logging directly to Azure storage is extremely cost effective. If performance is a major issue for you, it may be worthwhile dedicating a separate thread (or threads) to servicing an in-memory queue of logging data. Although this obviously has similar issues with transient data if the Azure instance is recycled, the window for that to happen should be much smaller.
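A minimal sketch of that queue-and-thread pattern, assuming a hypothetical wrapper class (the write delegate would wrap whatever table-storage call you use):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Callers enqueue cheaply; a dedicated thread drains the queue and does
    // the actual (slower) table-storage writes off the request path.
    public class BackgroundLogWriter<T> : IDisposable {
        private readonly BlockingCollection<T> _queue = new BlockingCollection<T>();
        private readonly Thread _worker;

        public BackgroundLogWriter(Action<T> write) {
            _worker = new Thread(() => {
                // Blocks until entries arrive; exits after CompleteAdding is called.
                foreach (var entry in _queue.GetConsumingEnumerable()) {
                    try { write(entry); }
                    catch { /* never let logging take the role down */ }
                }
            }) { IsBackground = true };
            _worker.Start();
        }

        public void Enqueue(T entry) { _queue.Add(entry); }

        public void Dispose() {
            // Drain what we can on clean shutdown; entries still queued when the
            // instance is recycled abruptly are lost -- the smaller window above.
            _queue.CompleteAdding();
            _worker.Join(TimeSpan.FromSeconds(5));
        }
    }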

Solution 5

As was already mentioned, using Windows Azure Diagnostics is the way to go. However, all the logging from all your instances ends up in one big list, which can be hard to read through. Therefore I try to send only relatively important messages (Warn level and higher) to the diagnostics tables. Even so, it's a pain to read the table directly. There are a few tools out there; I personally use Cerebrata Diagnostics Manager.

Although using the Trace functions directly works fine, I'd suggest using a logging framework such as NLog or log4net. That gives you a bit more flexibility, letting you send some messages to Trace/Azure Diagnostics and others to local storage.
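As one possible example of that split (the appender choices, threshold, and file path are illustrative, not a canonical setup), log4net can be configured programmatically to send warnings and above through Trace while everything goes to a local rolling file:

    using log4net;
    using log4net.Appender;
    using log4net.Core;
    using log4net.Layout;
    using log4net.Repository.Hierarchy;

    public static class LoggingConfig {
        public static void Configure() {
            var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
            layout.ActivateOptions();

            // Warn and above go through System.Diagnostics.Trace, where Azure
            // Diagnostics picks them up for transfer to table storage.
            var traceAppender = new TraceAppender { Layout = layout, Threshold = Level.Warn };
            traceAppender.ActivateOptions();

            // Everything, including verbose output, goes to a rolling file on
            // the instance (path is illustrative; use a local storage resource).
            var fileAppender = new RollingFileAppender {
                File = @"C:\ServiceLogs\MyLog.txt",
                AppendToFile = true,
                RollingStyle = RollingFileAppender.RollingMode.Size,
                MaximumFileSize = "10MB",
                MaxSizeRollBackups = 5,
                Layout = layout
            };
            fileAppender.ActivateOptions();

            var hierarchy = (Hierarchy)LogManager.GetRepository();
            hierarchy.Root.AddAppender(traceAppender);
            hierarchy.Root.AddAppender(fileAppender);
            hierarchy.Root.Level = Level.All;
            hierarchy.Configured = true;
        }
    }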

For example, I added a ton of trace logging to track down a thread-hanging problem. I found that a root-relative file path such as "\ServiceLogs\MyLog.txt" will output to the F: drive on the instance, so I routed all that logging to the instance filesystem rather than the diagnostics tables. You have to remote into each instance to see those logs, but in this circumstance it's a good trade-off.


Comments

  • Riri
    Riri almost 2 years

    Would you say this is the best way of doing simple, traditional logging in an Azure-deployed application?

    It feels like a lot of work to actually get to the files etc...

    What's worked best for you?

  • Brian Reischl
    Brian Reischl about 13 years
    Any chance that downloader code is posted somewhere? I'm just getting ready to do something similar.
  • Brosto
    Brosto about 13 years
    Full code isn't posted anywhere, but I'll update this post with the main loop that downloads the logs. This should give you an idea of how it works.
  • Sam
    Sam almost 11 years
    You can configure ELMAH to work with Table Storage. See for example wadewegner.com/2011/08/…
  • user3613932
    user3613932 over 7 years
    This answer only mentions one way of attacking the problem. Typically, you would log to disk in a rolling fashion to keep only the latest data, and also log to the Azure storage account, which currently can't roll data. Furthermore, as of now, it is recommended to use EventSource (ETW) instead of System.Diagnostics.Trace, because ETW also lets you control the formatting.
  • Zordid
    Zordid almost 6 years
    This code has a serious "bad code" smell... :/ I would even go so far as to say I see a bug here.
  • Zordid
    Zordid almost 6 years
    For any number of entities < 100, SaveChanges is never going to be called. This happens when code is not clean.
  • Zordid
    Zordid almost 6 years
    Also, a method named "GetLogs" should GET THE LOGS - aka return something. It should never have serious side effects like deleting stuff from an object storage or processing something...