[logback-user] logback performance clarification

Gerweck Andy - agerwe Andy.Gerweck at acxiom.com
Wed Mar 14 20:06:38 CET 2007


I agree that this is a different case from Ryan's, and I don't think there's anything really wrong with your test. As for the "strange" results, I was referring more to Mandeep's original post, where the numbers changed significantly over time; that was the real inspiration for my post. I try to say something when I see microbenchmarks because I think people tend to underestimate how much the JVM can distort performance measurements.

However, I would still urge caution about taking these results too seriously. They do show that you've done a good job with logback: I'm personally appreciative of the work you and Ceki have put into a great project.

With the Log4j bridge and/or SLF4J, it should be pretty easy for people to post performance numbers from real, complex applications. If people have benchmark suites and have time to post numbers, I'm sure we'd all be interested!

Thanks,
 Andy Gerweck

-----Original Message-----
From: logback-user-bounces at qos.ch [mailto:logback-user-bounces at qos.ch] On Behalf Of Sebastien Pennec
Sent: Wednesday, March 14, 2007 3:46 AM
To: logback users list
Subject: Re: [logback-user] logback performance clarification

Hello Andy,

I think that our current example is pretty different from what Ryan Lowe tried to 
measure. Here, there is no Exception or stack trace to navigate, which is what 
affected his measurements so significantly. It all comes down to measuring how 
quickly an int comparison can be done. Logback and log4j are not designed the same 
way: less work needs to be done before comparing the levels in logback than in 
log4j, and that is what makes logback faster.
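
To illustrate what that means (a simplified sketch of the idea, not logback's 
actual code; the constants and field names are made up):

    // A level check that reduces to a single int comparison.
    // Names and values here are illustrative only.
    public class SketchLogger {
        public static final int DEBUG_INT = 10000;
        public static final int INFO_INT = 20000;

        private int effectiveLevelInt = INFO_INT;

        public boolean isDebugEnabled() {
            // DEBUG is enabled only if the effective level is DEBUG or lower.
            return DEBUG_INT >= effectiveLevelInt;
        }
    }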

Also, to be sure that the code is "hot" enough, every method is run twice and only 
measured the second time. Since each run consists of 1,000,000 iterations, I'd guess 
that's enough to warm the code up :)
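
In harness form, the scheme is roughly the following (a minimal sketch, not the 
actual test code):

    // Run the task once untimed so HotSpot can compile the hot paths,
    // then time a second pass with the same iteration count.
    static long timeAfterWarmup(Runnable task, int iterations) {
        for (int i = 0; i < iterations; i++) {
            task.run(); // warm-up pass, result discarded
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task.run(); // measured pass
        }
        return System.nanoTime() - start;
    }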

Anyway, after we studied and modified the code in the Logger class, it turned out 
that the results I first posted were not strange, only logical: the code was simply 
less optimized than it is now. I had even written that we thought our parameterized 
calls could not be faster than log4j's or logback's isDebugEnabled() calls, and now 
they are faster :)
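
For anyone who hasn't seen the two styles side by side, this is the comparison in 
question (standard SLF4J calls; the class name and message are arbitrary):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Example {
        private static final Logger logger =
            LoggerFactory.getLogger(Example.class);

        void report(int count) {
            // Guarded style: an explicit check avoids building the message
            // when DEBUG is disabled.
            if (logger.isDebugEnabled()) {
                logger.debug("processed " + count + " entries");
            }

            // Parameterized style: the message is only formatted when DEBUG
            // is enabled, so the disabled case costs little more than the
            // level check itself.
            logger.debug("processed {} entries", count);
        }
    }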

I've never tried JProfiler before. I'll definitely take a look at it and run the 
tests again when I have some time.

Thanks for the links you provided and the advice,

Sébastien

Gerweck Andy - agerwe wrote:
> Nice work on the optimizations. There are a lot of things that may 
> account for the strange results in the original benchmarks.
> 
> In general, microbenchmarks like this aren't very meaningful in Java. 
> See Sun's Java Tuning White Paper 
> <http://java.sun.com/performance/reference/whitepapers/tuning.html#section3.1> 
> for a specific discussion of the problems with microbenchmarks. Ryan Lowe 
> (I make no general endorsement of him or his blog) blogged a good 
> example of how things can go very wrong 
> <http://www.ryanlowe.ca/blog/archives/000447_java_microbenchmarks_are_evil.php>. 
> 
> Because of HotSpot and other JVM features, you won't execute the same 
> machine code every time you call a method. Depending on your JVM 
> settings, methods may be inlined, optimized, and even regressed back to 
> interpreted mode, all while your application is running.
> 
> The best thing to do is to put the library into your real application and 
> run real benchmarks, or use something like JProfiler to see how much time 
> you're spending in different libraries. Microbenchmarks will never tell 
> you very much about how libraries perform in actual use. You just can't 
> simulate the ways the VM will optimize code in the wild, and those 
> optimizations will often dwarf any differences you find in tight artificial loops.
> 
> If you insist on microbenchmarking, here are a few pointers:
> 
> * Run each test in its own VM. This eliminates the influence of earlier 
> tests (e.g., leftover GC pressure) on later ones. For example, run a 
> separate VM for each of the five configurations previously mentioned (a 
> sketch follows after this list).
> 
> * Run several hundred thousand iterations before starting the timer. 
> This helps make sure your code gets natively compiled before you start 
> timing. Most applications run long enough to make the initial compilation 
> time irrelevant, but in a short benchmark it will skew your numbers.
> 
> * Use a benchmarking framework like Japex 
> <https://japex.dev.java.net/>.
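> 
> As a concrete sketch of the first pointer (BenchmarkMain and the 
> configuration names below are hypothetical), a small driver can launch 
> one JVM per configuration:
> 
>     import java.io.BufferedReader;
>     import java.io.InputStreamReader;
> 
>     public class IsolatedRunner {
>         public static void main(String[] args) throws Exception {
>             String[] configs = { "log4j", "logback", "nop" };
>             for (String config : configs) {
>                 // A fresh JVM per configuration: JIT state and GC history
>                 // from one run cannot influence the next.
>                 Process p = new ProcessBuilder("java", "-cp",
>                         System.getProperty("java.class.path"),
>                         "BenchmarkMain", config)
>                     .redirectErrorStream(true)
>                     .start();
>                 BufferedReader r = new BufferedReader(
>                         new InputStreamReader(p.getInputStream()));
>                 for (String line; (line = r.readLine()) != null; ) {
>                     System.out.println(line); // relay the child's output
>                 }
>                 p.waitFor();
>             }
>         }
>     }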
> 
> Hope this is helpful,
> 
> /  Andy Gerweck /
> 
-- 
Sébastien Pennec
sebastien at qos.ch

Logback: The reliable, generic, fast and flexible logging framework for Java.
http://logback.qos.ch/
_______________________________________________
Logback-user mailing list
Logback-user at qos.ch
http://qos.ch/mailman/listinfo/logback-user


