Logback file appender doesn't flush immediately

Solution 1

I decided to share my solution with everybody. Let me clarify first of all that this is neither a logback issue nor a JRE problem. The behaviour is described in the javadoc and generally is not a problem, unless you are faced with some old-school integration that relies on file syncing.
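
To illustrate what the javadoc means, here is a minimal standalone sketch (plain JDK, no logback involved; the class and file name are just examples): flush() only empties the Java-side buffer into the operating system, while FileDescriptor.sync() asks the operating system to push the data all the way to the storage device.

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class FlushVsSync {
    public static void main(String[] args) throws IOException {
        try (FileOutputStream fos = new FileOutputStream("demo.log", true);
             BufferedOutputStream bos = new BufferedOutputStream(fos)) {
            bos.write("one line\n".getBytes());
            bos.flush();        // empties the Java-side buffer into the FileOutputStream;
                                // the OS may still keep the bytes in its page cache
            fos.getFD().sync(); // forces the OS to write the cached data to the device
        }
    }
}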

So this is a logback appender implemented to flush immediately:

import java.io.File;
import java.io.IOException;

import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.util.FileUtil;

public class ImmediateFileAppender<E> extends RollingFileAppender<E> {

    @Override
    public void openFile(String file_name) throws IOException {
        synchronized (lock) {
            File file = new File(file_name);
            if (FileUtil.isParentDirectoryCreationRequired(file)) {
                boolean result = FileUtil.createMissingParentDirectories(file);
                if (!result) {
                    addError("Failed to create parent directories for [" + file.getAbsolutePath() + "]");
                }
            }

            // plug in the syncing stream instead of the default ResilientFileOutputStream
            ImmediateResilientFileOutputStream resilientFos = new ImmediateResilientFileOutputStream(file, append);
            resilientFos.setContext(context);
            setOutputStream(resilientFos);
        }
    }

    @Override
    protected void writeOut(E event) throws IOException {
        // unchanged; events are written out by the superclass as usual
        super.writeOut(event);
    }

}

This is the corresponding output stream utility class. Because some methods and fields of the original ResilientOutputStreamBase, which I initially intended to extend, have package-private access modifiers, I had to extend OutputStream instead and copy the rest of ResilientOutputStreamBase and ResilientFileOutputStream, unchanged, into this new class. Only the changed code is shown here:

public class ImmediateResilientFileOutputStream extends OutputStream {

    // merged code from ResilientOutputStreamBase and ResilientFileOutputStream

    protected FileOutputStream os;

    public FileOutputStream openNewOutputStream() throws IOException {
        return new FileOutputStream(file, true);
    }

    @Override
    public void flush() {
        if (os != null) {
            try {
                os.flush();
                os.getFD().sync(); // this is what makes the difference: force the OS to write to disk
                postSuccessfulWrite();
            } catch (IOException e) {
                postIOFailure(e);
            }
        }
    }

}
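
If you do not need logback's recovery machinery at all, the core idea boils down to something like the following self-contained sketch (this is not the author's merged class, just a simplified illustration of syncing on flush; the class name is made up):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Simplified illustration only: an output stream that forces an OS-level sync
// on every flush. It omits the recovery logic copied from logback's
// ResilientOutputStreamBase in the class above.
public class SyncingFileOutputStream extends OutputStream {

    private final FileOutputStream os;

    public SyncingFileOutputStream(File file, boolean append) throws IOException {
        this.os = new FileOutputStream(file, append);
    }

    @Override
    public void write(int b) throws IOException {
        os.write(b);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        os.write(b, off, len);
    }

    @Override
    public void flush() throws IOException {
        os.flush();
        os.getFD().sync(); // push the bytes from the OS cache to the device
    }

    @Override
    public void close() throws IOException {
        flush();
        os.close();
    }
}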

And finally the config:

<appender name="FOR_INTEGRATION" class="package.ImmediateFileAppender">
    <file>/somepath/for_integration.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <fileNamePattern>for_integration.log.%i</fileNamePattern>
        <minIndex>1</minIndex>
        <maxIndex>3</maxIndex>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>5MB</maxFileSize>
    </triggeringPolicy>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} - %msg%n</pattern>
        <immediateFlush>true</immediateFlush>
    </encoder>
</appender>
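
The appender is then referenced from a logger like any other, for example via <appender-ref ref="FOR_INTEGRATION"/> inside a <root> or <logger> element, just as in the original configuration quoted in the comments below.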

Solution 2

You have done a good job - well done. Here is a proposal to make it more concise:

public class ImmediateFlushPatternLayoutEncoder extends PatternLayoutEncoder {
    public void doEncode(ILoggingEvent event) throws IOException {
        super.doEncode(event);
        if (isImmediateFlush()) {
            if (outputStream.os instanceof FileOutputStream) {
                ((FileOutputStream) outputStream.os).getFD().sync();
            }
        }
    }
}

In configuration, you have to use that specific encoder:

<encoder class="ch.qos.logback.core.recovery.ImmediateFlushPatternLayoutEncoder">

Not tested. There will probably be visibility issues with fields, requiring the class to be placed in logback's ch.qos.logback.core.recovery package itself.

By the way, I invite you to submit a bug report to logback asking for an additional immediateSync option on LayoutWrappingEncoder.

Comments

  • Viktor Stolbin about 1 year

    In some circumstances I need logback's file appender to flush immediately. I've found in the docs that this option is enabled by default. Mysteriously, it doesn't work. As far as I can see in the sources, the underlying process involves BufferedOutputStream correctly. Are there any issues with BufferedOutputStream.flush()? The problem is probably related to flushing.

    Update: I found the issue on Windows XP Pro SP 3 and on Red Hat Enterprise Linux Server release 5.3 (Tikanga). I used these libs:

    jcl-over-slf4j-1.6.6.jar
    logback-classic-1.0.6.jar
    logback-core-1.0.6.jar
    slf4j-api-1.6.6.jar
    

    The logback.xml is:

    <configuration>
        <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>/somepath/file.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
                <fileNamePattern>file.log.%i</fileNamePattern>
                <minIndex>1</minIndex>
                <maxIndex>3</maxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
                <maxFileSize>5MB</maxFileSize>
            </triggeringPolicy>
            <encoder>
                <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
            </encoder>
        </appender>
    
        <root level="debug">
            <appender-ref ref="FILE"/>
        </root>
    </configuration>
    

    Update: I would provide a unit test, but that does not seem so simple. Let me describe the issue more clearly:

    1. A logging event occurs
    2. The event is passed to the file appender
    3. The event is serialized with the defined pattern
    4. The serialized message is passed back to the file appender and is about to be written to the output stream
    5. Writing to the stream finishes and the output stream is flushed (I've checked the implementation). Note that immediateFlush is true by default, so flush() is invoked explicitly
    6. Nothing appears in the file!

    A bit later, when some underlying buffer fills up, the event appears in the file. So the question is: does the output stream guarantee an immediate flush?

    To be honest, I've already solved this by implementing my own ImmediateRollingFileAppender that leverages FileDescriptor's immediate-syncing facility. Anybody interested can follow this approach.

    So this is not a logback issue.

  • Viktor Stolbin about 11 years
    Thanks, Martin. I will do this one of these days. Thanks for supporting us and maintaining the project.
  • BlackJoker over 10 years
    This outputStream is a ResilientFileOutputStream, and it does not extend FileOutputStream, so it does not have the getFD() method.
  • Yves Martin over 10 years
    Right. I updated my answer... but I have not tested it any more than before.
  • Peter Corke over 7 years
    Ah crud. Looks like this would need reflection to pull the protected os field from ResilientOutputStreamBase.
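
For readers who want to pursue the reflection route mentioned in the last comment, here is a rough, untested sketch (the helper class is hypothetical, not part of logback; it assumes the field is named os somewhere in the stream's class hierarchy, as in ResilientOutputStreamBase, and may break across logback versions):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.lang.reflect.Field;

// Hypothetical helper: best-effort fsync through the FileOutputStream
// hidden inside logback's resilient stream, obtained via reflection.
public final class ReflectiveSync {

    public static void syncIfPossible(OutputStream resilientStream) {
        try {
            Field osField = findField(resilientStream.getClass(), "os");
            if (osField == null) {
                return; // field not found: nothing we can do
            }
            osField.setAccessible(true);
            Object underlying = osField.get(resilientStream);
            if (underlying instanceof FileOutputStream) {
                ((FileOutputStream) underlying).getFD().sync();
            }
        } catch (ReflectiveOperationException | IOException e) {
            // best effort only: ignore or log the failure
        }
    }

    private static Field findField(Class<?> type, String name) {
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            try {
                return c.getDeclaredField(name);
            } catch (NoSuchFieldException ignored) {
                // keep walking up the hierarchy
            }
        }
        return null;
    }
}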