[logback-user] logback performance clarification
sebastien at qos.ch
Wed Mar 14 11:46:10 CET 2007
I think that our current example is quite different from what Ryan Lowe tried to
measure. Here, there is no Exception or stack trace to navigate, which is what
skewed his measurements so significantly. It all comes down to measuring how
quickly an int comparison can be done. Logback and log4j are not designed the
same way. Less processing needs to be done before comparing the levels in
logback than in log4j, and this is what makes logback faster.
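To give an idea of what I mean, a disabled debug() call essentially reduces to a single int comparison against a cached effective level. Here is a rough sketch (the field and method bodies are made up for illustration, not logback's actual code, though the integer constants match logback's Level values):

```java
// Sketch: an isDebugEnabled() check boiling down to one int comparison
// against an effective level cached on the logger itself, so no parent
// lookup or extra processing is needed on the hot path.
public class SketchLogger {
    static final int DEBUG_INT = 10000;
    static final int INFO_INT = 20000;

    // effective level cached at configuration time (illustrative field)
    int effectiveLevelInt = INFO_INT;

    public boolean isDebugEnabled() {
        return DEBUG_INT >= effectiveLevelInt;
    }

    public static void main(String[] args) {
        SketchLogger logger = new SketchLogger();
        System.out.println(logger.isDebugEnabled()); // prints "false" at INFO level
        logger.effectiveLevelInt = DEBUG_INT;
        System.out.println(logger.isDebugEnabled()); // prints "true"
    }
}
```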
Also, to be sure that the code is "hot" enough, all methods are run twice, and
only the second run is measured. Since each method is run 1'000'000 times per
run, I guess that's enough to warm the code up :)
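For what it's worth, the run-twice scheme looks roughly like this (the timed task is just a placeholder, not the actual logging call we benchmark):

```java
// Sketch of the warm-up approach: each test is executed twice,
// and only the second run is timed.
public class WarmupBench {
    static final int ITERATIONS = 1_000_000;

    static long runOnce(Runnable task) {
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            task.run();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Runnable task = () -> Math.max(1, 2); // stand-in for a logging call
        runOnce(task);              // first run: warm-up only, result discarded
        long nanos = runOnce(task); // second run: this one is measured
        System.out.println("avg ns/call: " + (double) nanos / ITERATIONS);
    }
}
```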
Anyway, after we studied and modified the code in the Logger class, it appears
that the results I first posted were not strange, only logical. The code was
simply less optimized than it is now. I had even written that we thought our
parameterized calls could not be faster than log4j's or logback's
isDebugEnabled() calls, yet they are faster now :)
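To make that point concrete, here is a simplified sketch of why a parameterized call can be as cheap as a guarded one: message formatting is deferred until after the level check. The formatter below is a stand-in, not logback's actual implementation:

```java
// Sketch: a parameterized debug() call only formats the message after the
// level check passes, so a disabled call costs little more than the level
// comparison itself.
public class ParamLogger {
    boolean debugEnabled = false;

    static String format(String fmt, Object arg) {
        // simplified stand-in for the real message formatter
        return fmt.replace("{}", String.valueOf(arg));
    }

    public void debug(String fmt, Object arg) {
        if (!debugEnabled) {
            return; // cheap exit: fmt is never scanned, arg never stringified
        }
        System.out.println(format(fmt, arg));
    }

    public static void main(String[] args) {
        ParamLogger logger = new ParamLogger();
        // With debug disabled, no formatting work happens here; contrast with
        // logger.debug("value: " + someObject), which always concatenates.
        logger.debug("value: {}", 42);
        logger.debugEnabled = true;
        logger.debug("value: {}", 42); // prints "value: 42"
    }
}
```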
Until now, I've never tried JProfiler. I'll definitely take a look at it and run the
tests again when I have some time for it.
Thanks for the links you provided and the advice,
Gerweck Andy - agerwe wrote:
> Nice work on the optimizations. There are a lot of things that may
> account for the strange results in the original benchmarks.
> In general, microbenchmarks like this aren't very meaningful in Java.
> See Sun's Java Tuning White Paper for a specific discussion of the
> problems of microbenchmarks. Ryan Lowe (I make no general endorsement of
> him or his blog) blogged a good example of how things can go very wrong.
> Because of HotSpot and other JVM features, you won't execute the same
> machine code every time you call a method. Depending on your JVM
> settings, methods may be inlined, optimized, and even regressed to
> interpreted mode, all while your application is running.
> The best thing to do is to put a library in your real application and
> run real benchmarks or use something like JProfiler to see how much time
> you're spending in different libraries. Microbenchmarks will never tell
> you very much about how libraries perform in actual use. You just can't
> simulate the ways the VM will optimize code in the wild, which will
> often dwarf any differences you find in tight artificial loops.
> If you insist on microbenchmarking, here are a few pointers:
> · Run each test in its own VM. This eliminates the influence
> of earlier tests (e.g., GC) on later tests. For example, for
> each of the five configurations previously mentioned, run a separate VM.
> · Run several hundred thousand iterations before starting the
> timer. This helps make sure your code gets native compiled before you
> start. Most applications run long enough to make the initial compile
> time irrelevant, but it will skew your benchmarks.
> · Use a benchmarking framework like Japex
> Hope this is helpful,
> / Andy Gerweck /
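The "own VM" pointer above can be sketched in plain Java by forking a fresh JVM per run. "BenchMain" below is a hypothetical benchmark entry point, not a real class; the demo forks "java -version" just to show the mechanics:

```java
import java.io.IOException;

// Sketch: run each benchmark configuration in its own fresh JVM, so JIT
// and GC state from earlier runs cannot leak into later ones.
public class ForkPerRun {
    static int runInFreshJvm(String... mainAndArgs)
            throws IOException, InterruptedException {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        String[] cmd = new String[mainAndArgs.length + 3];
        cmd[0] = javaBin;
        cmd[1] = "-cp";
        cmd[2] = System.getProperty("java.class.path");
        System.arraycopy(mainAndArgs, 0, cmd, 3, mainAndArgs.length);
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        return p.waitFor(); // run sequentially, one fresh JVM per invocation
    }

    public static void main(String[] args) throws Exception {
        // In a real benchmark you would fork something like:
        //   runInFreshJvm("BenchMain", config);  // one fresh JVM per config
        // Here we fork "java -version" just to demonstrate the mechanics.
        int exit = runInFreshJvm("-version");
        System.out.println("forked JVM exited with " + exit);
    }
}
```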
sebastien at qos.ch
Logback: The reliable, generic, fast and flexible logging framework for Java.