[slf4j-dev] svn commit: r1210 - slf4j/trunk/slf4j-api/src/test/java/org/slf4j/helpers

Ralph Goers ralph.goers at dslextreme.com
Sat Oct 25 16:26:31 CEST 2008


What Ceki is doing is imperfect, but it is a better approach than what 
you are suggesting. The current approach adjusts its expectations based 
on the baseline performance of the build machine, so as builds are done 
on slower or faster hardware the expected baseline changes with it.
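
Just to illustrate the idea - this is only a rough sketch with made-up 
names, not the actual helper in slf4j-api - a machine-relative check 
looks something like this:

import java.util.concurrent.TimeUnit;

public class RelativePerfBudget {

    // Tiny reference workload; its duration stands in for "how fast is
    // this machine" and is re-measured on every build host.
    static long baselineNanos() {
        long start = System.nanoTime();
        long acc = 0;
        for (int i = 0; i < 1000000; i++) {
            acc += (i * 31L) ^ (acc >>> 3);
        }
        // Use the result so the JIT cannot discard the loop entirely.
        if (acc == 42L) {
            System.out.println(acc);
        }
        return System.nanoTime() - start;
    }

    // Fail only when the measured time exceeds a multiple of the
    // machine's own baseline, instead of a fixed wall-clock limit.
    static void assertWithinBudget(long measuredNanos, double allowedFactor) {
        long budget = (long) (baselineNanos() * allowedFactor);
        if (measuredNanos > budget) {
            throw new AssertionError("took "
                    + TimeUnit.NANOSECONDS.toMillis(measuredNanos)
                    + " ms, machine-relative budget was "
                    + TimeUnit.NANOSECONDS.toMillis(budget) + " ms");
        }
    }
}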

The problem with hard time limits is exactly what you say - as machines 
get faster they will naturally pass tests they should have failed.  So 
over time the performance tests will become meaningless.

The challenge with the current approach is that the baseline 
measurement might need to use a wider mix of instructions to get a more 
accurate representation of the machine's performance.
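
If it goes that way, the calibration loop could blend several kinds of 
work. Again, only a rough sketch with invented names, not the real test 
code:

public class MixedBaseline {

    // Blend integer, floating-point and allocation work so that no
    // single instruction type dominates the baseline figure.
    static long mixedBaselineNanos() {
        long start = System.nanoTime();
        long ints = 0;
        double floats = 0.0;
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100000; i++) {
            ints += (i * 31L) ^ (ints >>> 3);   // integer ALU work
            floats += Math.sqrt(i + 1.0);       // floating-point work
            if ((i & 1023) == 0) {
                sb.append(i);                   // allocation / memory work
            }
        }
        // Reference the results so the loop cannot be optimized away.
        if (ints == 42L && floats < 0.0 && sb.length() < 0) {
            System.out.println("unreachable");
        }
        return System.nanoTime() - start;
    }
}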

Ralph

Thorbjørn Ravn Andersen wrote:
>
> I think that this approach is too brittle to handle the advances in 
> hardware and JIT technology without eventually breaking.
>
> How about setting an absolute time limit for each test, after which a 
> watchdog kills it to avoid infinite looping? Then simply measure each 
> test, collect the results along with your basic time unit, and do the 
> analysis afterwards. The build should only break if one of the hard 
> limits was reached.
>
> Otherwise you may end up with a situation where people are unable to 
> build from source :-S
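
For reference, the watchdog-plus-analysis scheme described in the 
quoted message could look roughly like the sketch below. The class 
name, method name and time limit are invented for illustration and are 
not part of the slf4j build:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class WatchdogRunner {

    // Run one test body under a hard wall-clock limit. Only a timeout
    // (or an exception from the test itself) breaks the build; the
    // measured duration is returned so it can be collected and
    // analysed later.
    static long runWithWatchdog(Callable<Void> testBody, long limitSeconds)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        long start = System.nanoTime();
        try {
            Future<Void> future = executor.submit(testBody);
            future.get(limitSeconds, TimeUnit.SECONDS);
            return System.nanoTime() - start;
        } catch (TimeoutException e) {
            throw new AssertionError("hard limit of " + limitSeconds
                    + "s exceeded - possible infinite loop");
        } finally {
            executor.shutdownNow();
        }
    }
}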


