[logback-dev] [JIRA] Commented: (LBCORE-45) introduce FlushableAppender
Ceki Gulcu (JIRA)
noreply-jira at qos.ch
Mon Mar 9 12:32:10 CET 2009
[ http://jira.qos.ch/browse/LBCORE-45?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=11073#action_11073 ]
Ceki Gulcu commented on LBCORE-45:
----------------------------------
There are two details to consider.
1) when streams are closed they are also flushed
2) only a single thread can access a stream at a time
Passing the underlying output stream to the flushing policy would break encapsulation, which can't be allowed. Instead, here is how I currently envision FlushingPolicy:
interface FlushingPolicy {
  boolean shouldFlush(LoggingEvent e);
}
Thus, the flushing policy could be based on any criterion available to the shouldFlush() method, including time.
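For instance, a time-based policy could be sketched as follows. This is only an illustration of the interface above: TimeBasedFlushingPolicy and its intervalMillis parameter are assumptions, not logback API, and the event type is left generic so the sketch stands on its own.

```java
// Illustrative sketch only: FlushingPolicy follows the interface proposed
// above; TimeBasedFlushingPolicy and intervalMillis are assumptions.
interface FlushingPolicy<E> {
    boolean shouldFlush(E event);
}

class TimeBasedFlushingPolicy<E> implements FlushingPolicy<E> {
    private final long intervalMillis;
    private long lastFlush;

    TimeBasedFlushingPolicy(long intervalMillis) {
        this.intervalMillis = intervalMillis;
        this.lastFlush = System.currentTimeMillis();
    }

    // The appender calls this for every event, under its own lock, and
    // flushes its stream whenever the policy answers true. The policy never
    // touches the stream itself, so encapsulation is preserved.
    @Override
    public synchronized boolean shouldFlush(E event) {
        long now = System.currentTimeMillis();
        if (now - lastFlush >= intervalMillis) {
            lastFlush = now;
            return true;
        }
        return false;
    }
}
```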
As for the FLUSH marker, it is just one possible approach for "manually flushing appenders when a request has been fully processed" which you asked for.
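Concretely, a marker-driven policy might look like the sketch below. MarkedEvent and getMarkerName() are stand-ins invented here so the example is self-contained; in logback the marker would come from the logging event itself.

```java
// Sketch of the FLUSH-marker idea: flush whenever an event carries a marker
// named "FLUSH". MarkedEvent and getMarkerName() are stand-ins, not logback API.
class MarkedEvent {
    private final String markerName;
    MarkedEvent(String markerName) { this.markerName = markerName; }
    String getMarkerName() { return markerName; }
}

class MarkerFlushingPolicy {
    static final String FLUSH_MARKER = "FLUSH";

    // Answers true only for events explicitly marked FLUSH, e.g. the last
    // log statement of a fully processed request.
    boolean shouldFlush(MarkedEvent e) {
        return FLUSH_MARKER.equals(e.getMarkerName());
    }
}
```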
> introduce FlushableAppender
> ---------------------------
>
> Key: LBCORE-45
> URL: http://jira.qos.ch/browse/LBCORE-45
> Project: logback-core
> Issue Type: Improvement
> Components: Appender
> Affects Versions: unspecified
> Environment: Operating System: Linux
> Platform: PC
> Reporter: Bruno Navert
> Assignee: Ceki Gulcu
> Priority: Minor
>
> Suggest a new sub-interface of Appender:
> public interface FlushableAppender<E> extends Appender<E>, java.io.Flushable
> {
> }
> Then, WriterAppender could be defined to implement FlushableAppender, with this simple implementation:
> public void flush() throws IOException
> {
>     writer.flush();
> }
> This would allow manual flushing of the appenders, which is particularly useful when buffered IO is in use. It allows, for instance, flushing all appenders manually once a request has been fully processed, retaining the benefits of buffered IO while still having the full logs after request processing.
> Here's sample code I used to get all appenders (run once after Logback configuration):
> public static Set<Appender> getAllAppenders()
> {
>     ContextSelector selector = StaticLoggerBinder.SINGLETON.getContextSelector();
>     LoggerContext loggerContext = selector.getLoggerContext();
>     Map<String, Appender> appenders = newHashMap();
>     // loop through all Loggers
>     for ( Logger logger : loggerContext.getLoggerList() )
>     {
>         // for each logger, loop through all its appenders
>         Iterator iter = logger.iteratorForAppenders();
>         while ( iter.hasNext() )
>         {
>             // appenders are uniquely identified by name, so keying the Map
>             // by name simply overwrites the same entry many times
>             // (with the same reference)
>             Appender appender = ( Appender ) iter.next();
>             appenders.put( appender.getName(), appender );
>         }
>     }
>     return newHashSet( appenders.values() );
> }
> The bean below is used in Spring; calling its flush() forces all appenders to flush:
> public class LogbackFlushBean implements Flushable
> {
>     protected final Logger log = LoggerFactory.getLogger( getClass() );
>     private final Collection<FlushableAppender> flushableAppenders = newLinkedList();
>
>     @PostConstruct
>     public void loadFlushableAppenders()
>     {
>         for ( Appender appender : LogbackConfigurer.getAllAppenders() )
>         {
>             if ( appender instanceof FlushableAppender )
>             {
>                 flushableAppenders.add( ( FlushableAppender ) appender );
>             }
>             else
>             {
>                 log.debug( "appender {} is not Flushable, skipping", appender.getName() );
>             }
>         }
>     }
>
>     public void flush() throws IOException
>     {
>         for ( FlushableAppender appender : flushableAppenders )
>         {
>             log.debug( "flushing appender {}", appender.getName() );
>             appender.flush();
>         }
>     }
> }
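For what it's worth, the flush-after-request pattern described in the report can be sketched without any Spring machinery. The sketch below uses only java.io.Flushable so it stands alone; RequestHandler and its field names are illustrative, not part of logback or the proposal.

```java
// Minimal sketch of the flush-after-request pattern, assuming only that
// something Flushable (e.g. the bean above) gathers the appenders.
// RequestHandler and handle() are illustrative names.
import java.io.Flushable;
import java.io.IOException;

class RequestHandler {
    private final Flushable logFlusher;
    private final StringBuilder out = new StringBuilder();

    RequestHandler(Flushable logFlusher) {
        this.logFlusher = logFlusher;
    }

    String handle(String request) throws IOException {
        try {
            out.append("handled:").append(request);
            return out.toString();
        } finally {
            // flush buffered log output once the request is fully processed
            logFlusher.flush();
        }
    }
}
```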
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://jira.qos.ch/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira