[logback-dev] [JIRA] Updates for LOGBACK-1738: Some flaky tests

logback developers list logback-dev at qos.ch
Fri May 12 20:18:00 CEST 2023


logback / LOGBACK-1738 [Open]
Some flaky tests

==============================

Here's what changed in this issue in the last few minutes.

This issue has been created
This issue is now assigned to you.


View or comment on issue using this link
https://jira.qos.ch/browse/LOGBACK-1738

==============================
 Issue created
------------------------------

Agor Malon created this issue on 12/May/23 20:07

Summary:              Some flaky tests
Issue Type:           Bug
Affects Versions:     1.4.7
Assignee:             Logback dev list
Components:           logback-classic, logback-core
Created:              12/May/23 20:07
Labels:               Tests
Priority:             Major
Reporter:             Agor Malon
Description:
  Hello,
  
  We tried running your project and discovered that it contains some flaky tests (i.e., tests that nondeterministically pass and fail). We found these tests to fail more frequently when run on certain machines of ours.
  
  To prevent others from running the tests of this project on machines where they may appear flaky, we suggest adding a note to the README.md file stating the minimum resource configuration required to run the tests without observing flakiness.
  
  When we run this project's tests on a machine with 1 CPU and 1 GB of RAM, we observe flaky tests. On machines with 2 CPUs and 2 GB of RAM, we observed no flakiness.
  
  Here is a list of the tests we identified, along with their likelihood of failure on a system below the recommended 2 CPUs and 2 GB of RAM:
   # org.apache.shardingsphere.elasticjob.error.handler.wechat.WechatJobErrorHandlerTest#assertHandleExceptionWithNotifySuccessful (4 out of 10)
   # org.apache.shardingsphere.elasticjob.cloud.scheduler.mesos.MesosStateServiceTest#assertExecutors (1 out of 10)
   # org.apache.shardingsphere.elasticjob.error.handler.wechat.WechatJobErrorHandlerTest#assertHandleExceptionWithWrongToken (1 out of 10)
   # org.apache.shardingsphere.elasticjob.http.executor.HttpJobExecutorTest#assertProcessWithGet (1 out of 10)
   # org.apache.shardingsphere.elasticjob.lite.spring.namespace.job.OneOffJobSpringNamespaceWithTypeTest#jobScriptWithJobTypeTest (1 out of 10)
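
  The failure rates above can be measured by rerunning a test repeatedly and counting failing runs. The helper below is our own sketch; the count_flaky_failures name and the mvn -Dtest=... invocation are illustrative, not part of the project.

```shell
# count_flaky_failures: run a command N times and report how many runs failed.
# The helper name and the example invocation below are illustrative only.
count_flaky_failures() {
  cmd="$1"; runs="$2"; fails=0
  for i in $(seq 1 "$runs"); do
    # Run the command quietly; count the run as a failure if it exits nonzero.
    sh -c "$cmd" >/dev/null 2>&1 || fails=$((fails + 1))
  done
  echo "$fails"
}

# e.g., rerun one suspected-flaky test class 10 times:
# count_flaky_failures 'mvn -q test -Dtest=WechatJobErrorHandlerTest' 10
```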
  
  Please let us know if you would like us to open a pull request on this matter (possibly against the README of this project).
  
  Thank you for your attention to this matter. We hope these recommendations help improve the quality and performance of your project, and make it easier for others to use.
  h3. Reproducing
  
  Dockerfile:

  {code:none}
  FROM maven:3.5.4-jdk-11
  WORKDIR /home/
  RUN git clone https://github.com/apache/shardingsphere-elasticjob && \
    cd shardingsphere-elasticjob && \
    git checkout 2bfce1ebc39475e1b8eda456b673ab9431c0f270
  WORKDIR /home/shardingsphere-elasticjob
  RUN mvn install -DskipTests
  ENTRYPOINT ["mvn", "test", "-fn"]
  {code}

  Build the image:
  {code:bash}
  $> mkdir tmp
  $> cp Dockerfile tmp
  $> cd tmp
  $> docker build -t elasticjob . # estimated build time: ~3 minutes
  {code}

  Running:
  This configuration appears to prevent flakiness (no flaky failures in 10 runs):
  {code:bash}
  $> docker run --rm --memory=2g --cpus=2 --memory-swap=-1 elasticjob | tee output.txt
  $> grep "Failures:" output.txt # checking results
  {code}

  This other configuration, identical apart from the reduced resources, does not prevent flaky tests (failures observed within 10 runs):
  {code:bash}
  $> docker run --rm --memory=1g --cpus=1 --memory-swap=-1 elasticjob | tee output2.txt
  $> grep "Failures:" output2.txt # checking results
  {code}
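
  The failure counts captured by the grep commands above can be totaled across modules. The sum_failures helper below is our own sketch and assumes the standard Maven Surefire summary format (Tests run: X, Failures: Y, ...).

```shell
# sum_failures: add up the "Failures: N" counts in a captured Maven test log.
# The helper name is illustrative; it assumes standard Surefire summary lines.
sum_failures() {
  # Extract each "Failures: N" occurrence and sum the numbers; print 0 if none.
  grep -o 'Failures: [0-9]*' "$1" | awk '{ total += $2 } END { print total + 0 }'
}

# e.g.: sum_failures output2.txt
```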


==============================
 This message was sent by Atlassian Jira (v9.6.0#960000-sha1:a3ee8af)



More information about the logback-dev mailing list