[logback-user] Log events delivery guarantee.

Oleksandr Gavenko gavenkoa at gmail.com
Fri Jun 2 20:44:15 CEST 2017


On Fri, Jun 2, 2017 at 9:02 PM, Ralph Goers <rgoers at apache.org> wrote:
> I created the FlumeAppender for Log4j with just this purpose in mind. The FlumeAppender will write the log event to local disk and then returns control to the application.
> At this point eventual delivery is guaranteed.
Can you share how you serialize log events? The
https://logback.qos.ch/apidocs/ch/qos/logback/classic/spi/ILoggingEvent.html
interface has simple fields along with complex ones, like:

* MDC (which I will definitely use)
* StackTraceElement[]

Is that some CSV format? How do you handle control characters and newlines?

My end destination works with the JSON format, so I could settle on that
serialization.
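To make the control-character question concrete, this is roughly the kind of escaping I have in mind before embedding a message in JSON (plain Java, no logback types; the class name is made up for the example):

```java
// Minimal sketch: escape a log message so control characters and
// newlines cannot break a single-line JSON record.
public class JsonEscape {
    static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length() + 16);
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:
                    if (c < 0x20) {
                        // Remaining control characters get the \uXXXX form.
                        sb.append(String.format("\\u%04x", (int) c));
                    } else {
                        sb.append(c);
                    }
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String msg = "line1\nline2\t\"quoted\"";
        System.out.println("{\"message\":\"" + escape(msg) + "\"}");
    }
}
```

With this, multi-line messages and stack traces survive as one JSON line each, so a reader can split records on newlines.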

> A background thread reads the events that have been written to disk and forwards them on to another Apache Flume node.
> When that node confirms it has accepted them the event is deleted from local disk.
> The FlumeAppender has the ability to fail over to alternate Flume nodes.
> If none are available the events will simply stay on disk until it is full.
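If I understand the described flow correctly (persist each event locally first, forward from a background pass, delete only after confirmation), it could be sketched roughly like this; deliver() is a hypothetical stand-in for the real Flume transport, and the file naming is my invention:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of spool-and-forward: durability first, delivery in background.
public class SpoolingForwarder {
    final Path dir;
    final List<String> delivered = new ArrayList<>();
    private final AtomicLong seq = new AtomicLong();

    SpoolingForwarder(Path dir) { this.dir = dir; }

    // Application thread: write the event to disk, then return control.
    void append(String event) throws IOException {
        // Zero-padded sequence number keeps lexicographic order = arrival order.
        Path f = dir.resolve(String.format("%020d.evt", seq.incrementAndGet()));
        Files.write(f, event.getBytes(StandardCharsets.UTF_8));
    }

    // Background pass: forward oldest-first, delete only on confirmation.
    void forwardPass() throws IOException {
        List<Path> files = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, "*.evt")) {
            for (Path p : ds) files.add(p);
        }
        files.sort(Comparator.comparing(p -> p.getFileName().toString()));
        for (Path p : files) {
            String event = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
            if (deliver(event)) Files.delete(p); // otherwise it stays on disk
        }
    }

    // Hypothetical transport; true means the remote node confirmed receipt.
    boolean deliver(String event) {
        delivered.add(event);
        return true;
    }
}
```

The key property is that an event is never only in memory: a crash between append() and forwardPass() loses nothing, and an unconfirmed delivery leaves the file in place for a retry.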

Originally I thought about a complex solution that sends events
asynchronously over the network unless the remote host is down or the
event buffer fills up under load.

In the latter case it would write to disk and later try to deliver the
saved data.

On application shutdown, saving to disk can be much faster than trying
to deliver logging events to an external server.
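The fallback I have in mind could look like this sketch (class and file names are made up): a bounded in-memory queue for the asynchronous path, with a spill file for overflow and shutdown:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.concurrent.*;

// Sketch: events normally go into a bounded queue for asynchronous
// network delivery; when the queue is full (remote host down, or load
// too high) or on shutdown, events are spilled to a local file instead.
public class SpillingBuffer {
    final BlockingQueue<String> queue;
    final Path spillFile;

    SpillingBuffer(int capacity, Path spillFile) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.spillFile = spillFile;
    }

    // Non-blocking append: queue if there is room, otherwise persist locally.
    void append(String event) throws IOException {
        if (!queue.offer(event)) spill(event);
    }

    // On shutdown, draining to disk is much faster than network delivery.
    void shutdown() throws IOException {
        String e;
        while ((e = queue.poll()) != null) spill(e);
    }

    private void spill(String event) throws IOException {
        Files.write(spillFile, (event + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

A separate recovery pass (not shown) would later read the spill file and retry delivery of the saved events.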

What I wonder is how you manage saved events. In a single file or
several? How do you discover files to process? How do you split
logging events? How do you keep pointers to not-yet-processed events?
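To make the pointer question concrete, one scheme I can imagine (my invention, not a claim about how FlumeAppender does it): a single newline-delimited spool file plus a sidecar offset file holding the byte position of the first undelivered event.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

// Sketch: track not-yet-processed events with a byte offset stored in
// a sidecar file next to a newline-delimited spool file.
public class OffsetTracker {
    static long readOffset(Path offsetFile) throws IOException {
        if (!Files.exists(offsetFile)) return 0L;
        return Long.parseLong(new String(Files.readAllBytes(offsetFile),
                StandardCharsets.UTF_8).trim());
    }

    static void writeOffset(Path offsetFile, long offset) throws IOException {
        // Write a temp file, then rename over the old one (atomic on POSIX),
        // so a crash never leaves a torn offset value.
        Path tmp = offsetFile.resolveSibling(offsetFile.getFileName() + ".tmp");
        Files.write(tmp, Long.toString(offset).getBytes(StandardCharsets.UTF_8));
        Files.move(tmp, offsetFile, StandardCopyOption.REPLACE_EXISTING);
    }

    // Return the events not yet delivered, i.e. everything past the offset.
    static List<String> pending(Path spool, Path offsetFile) throws IOException {
        byte[] all = Files.readAllBytes(spool);
        int from = (int) readOffset(offsetFile);
        String rest = new String(all, from, all.length - from,
                StandardCharsets.UTF_8);
        List<String> out = new ArrayList<>();
        for (String line : rest.split("\n"))
            if (!line.isEmpty()) out.add(line);
        return out;
    }
}
```

After each confirmed delivery the forwarder would advance the offset; at startup, everything past the stored offset is redelivered, which gives at-least-once rather than exactly-once semantics.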


More information about the logback-user mailing list