[logback-dev] synchronization style?

Ceki Gulcu ceki at qos.ch
Thu Jul 2 19:07:24 CEST 2009



Joern Huxhorn wrote:
> Ceki Gulcu wrote:
>>
>> Joern Huxhorn wrote:
>>
>>> Nope, a discussion like that is really necessary because this whole
>>> issue is pretty crucial and will essentially decide how well Logback
>>> will perform in environments with >1 CPU-cores.
>>> This will become more and more important with more cores added in every
>>> new CPU generation. The more cores are present, the more relevant the
>>> issue of unfairness will become, since waiting threads will  mean idling
>>> CPUs.
>> While open to further investigation and new data, the evidence
>> collected so far indicates that, under Sun's JDK on multi-core CPUs
>> running Linux as well as other platforms, threads can starve other
>> threads, but that is *not* a logback issue. Moreover, it would be
>> unfair to penalize *all* logback users with fair locking, which is
>> much slower than synchronization or unfair locks.
>>
> I copied your results over from your test for the sake of discussion:
> java.runtime.version = 1.6.0_11-b03
> java.vendor          = Sun Microsystems Inc.
> java.version         = 1.6.0_11
> os.name              = Linux
> os.version           = 2.6.25-gentoo-r6
> Sync:   139698775, or 35 nanos per cycle
> Unfair: 87028802, or 57 nanos per cycle
> Fair:   4106449, or 1217 nanos per cycle
> 
> java.runtime.version = jvmxa6460-20081105_25433
> java.vendor          = IBM Corporation
> java.version         = 1.6.0
> os.name              = Linux
> os.version           = 2.6.25-gentoo-r6
> Sync:   356946629, or 14 nanos per cycle
> Unfair: 91892449, or 54 nanos per cycle
> Fair:   7380308, or 677 nanos per cycle
> 
> So yes, the fair locking itself is significantly slower per operation.
> 
> The following numbers are the maximum amount of wasted time in my example app:
> 2959018000 synchronized
> 2078606000 unfair lock
> 27691000 fair lock
> The maximum wasted time across all threads is ~100 times smaller in the
> case of the fair lock!! This means that the overall system runs *waaay*
> smoother, regardless of the additional nanos spent acquiring the fair lock.
> The main problem is that *every* CPU has to go through the appender,
> degrading a multi-core system to something close to single-core.
> Minimizing the amount of time a thread (and therefore one of the other
> cores) has to wait is crucial.

Joern,


The numbers regarding *wasted* time above do not bring any new
information. Assuming there is thread starvation, the starving threads
will sit by idly; that much is just stating the obvious.
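
For concreteness, the per-thread "wasted" time being discussed can be
measured with something along the following lines. This is only a sketch
of one plausible measurement, not the code Joern actually used:

  import java.util.concurrent.locks.ReentrantLock;

  // Sketch only: measures how long each thread spends blocked while
  // acquiring one shared lock (the appender's lock in the logging case).
  public class WaitTimeSketch {
    static final ReentrantLock lock = new ReentrantLock(true); // true = fair

    public static void main(String[] args) throws InterruptedException {
      final int threadCount = 4;
      final long[] blockedNanos = new long[threadCount];
      Thread[] threads = new Thread[threadCount];
      for (int i = 0; i < threadCount; i++) {
        final int id = i;
        threads[i] = new Thread(new Runnable() {
          public void run() {
            for (int j = 0; j < 100000; j++) {
              long start = System.nanoTime();
              lock.lock(); // time spent blocked here is the "wasted" time
              blockedNanos[id] += System.nanoTime() - start;
              try {
                // simulated work done while holding the lock
              } finally {
                lock.unlock();
              }
            }
          }
        });
        threads[i].start();
      }
      for (int i = 0; i < threadCount; i++) {
        threads[i].join();
      }
      for (int i = 0; i < threadCount; i++) {
        System.out.println("thread " + i + " blocked for "
            + blockedNanos[i] + " nanos in total");
      }
    }
  }

Comparing the largest of these per-thread figures across the three locking
styles would yield numbers of the kind quoted above.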

The question is why there is starvation in the first place. In a
real-world application, threads wait on several different monitors, so
the starvation problem is less likely to occur. In other words,
LockingInJava is not, or at least may not be, representative of reality.
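
For readers who do not have the test at hand: the "Sync", "Unfair" and
"Fair" rows in the figures above presumably correspond to guarding the
same critical section in the following three ways (a sketch of the idea,
not the actual LockingInJava source):

  import java.util.concurrent.locks.ReentrantLock;

  // Sketch of the three locking styles being compared; not the actual test code.
  public class LockStyles {
    private final Object monitor = new Object();
    private final ReentrantLock unfairLock = new ReentrantLock();    // default: unfair
    private final ReentrantLock fairLock = new ReentrantLock(true);  // fair hand-off
    private long counter;

    void syncIncrement() {        // the "Sync" row
      synchronized (monitor) {
        counter++;
      }
    }

    void unfairIncrement() {      // the "Unfair" row
      unfairLock.lock();
      try {
        counter++;
      } finally {
        unfairLock.unlock();
      }
    }

    void fairIncrement() {        // the "Fair" row
      fairLock.lock();
      try {
        counter++;
      } finally {
        fairLock.unlock();
      }
    }
  }

With an unfair lock (and with synchronized), a thread that has just
released the lock may immediately re-acquire it, barging ahead of parked
waiters; a fair lock trades that throughput for FIFO-like hand-off, which
is where the large per-cycle difference comes from.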

>> As a side note, and if my memory serves me well, you gather caller
>> data for every log call, which can have a significant impact on
>> performance. Are you collecting caller data?

> Yes, I do, but performance was absolutely sufficient on a single-core
> system - which is quite ironic. Let's just forget about my appender for
> the moment. The behavior is similar, but less dramatic, in the case of
> FileAppender.

Interesting. Could you post your config file? When you say less dramatic, do
you mean the behavior of your application in production? I guess it may be
infeasible to run your application with IBM's JDK on your 4-core Solaris
machine.

Collecting caller data lengthens the critical section, which might just
cause waiting threads to be parked in the kernel instead of spinning.
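
To make that cost concrete: extracting caller data requires a stack-trace
snapshot per logging event, which is expensive enough to change how long
the appender lock is held. A rough, hypothetical micro-measurement of just
the snapshot cost (not logback code):

  // Hypothetical sketch of the per-event cost of a stack-trace snapshot,
  // which is roughly what caller-data extraction has to pay.
  public class CallerDataCost {
    public static void main(String[] args) {
      final int iterations = 100000;
      long start = System.nanoTime();
      int depth = 0;
      for (int i = 0; i < iterations; i++) {
        StackTraceElement[] trace = new Throwable().getStackTrace();
        depth += trace.length; // keep the result live so it is not optimized away
      }
      long elapsed = System.nanoTime() - start;
      System.out.println(elapsed / iterations
          + " nanos per snapshot (accumulated depth " + depth + ")");
    }
  }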

> Regards,
> Joern.

-- 
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch

