Help on fingerprints

Each test result is marked with one of the following icons:

- Success: the test shows neither a significant performance regression nor a significant standard error.
- Failure: the test shows a significant performance regression (more than 10%).
- Failure with comment: a failing result (see above) annotated with a comment explaining the degradation.
- Significant standard error: a failing or successful result with a significant standard error (more than 3%).
- Undefined: the deviation for this test is not a number (NaN) or is infinite (this happens when the reference value is equal to 0).

The values are computed as:

- deviation = (build_test_time - baseline_test_time) / baseline_test_time
- standard error = sqrt(build_test_stddev^2 / N + baseline_test_stddev^2 / N) / baseline_test_time

| All 6 scenarios | SLED 10 Sun 1.5.0_10 (2 x 3.00GHz - 3GB RAM) | RHEL 5.0 Sun 6.0_04 (2 x 3.00GHz - 3GB RAM) | Win XP Sun 1.5.0_10 (2 x 3.00GHz - 3GB RAM) |
|---|---|---|---|
| *AnnotationIncrementalBuildTests#testIncrementalAnnot() (vs. N20081218-2000) | -3,113.0% [±37.1] (Performance criteria not met when compared to 'N20081218-2000': Elapsed Process = 0.74s is not within [0%, 110%] of 23ms) | -3,361.1% [±33.5] (Performance criteria not met when compared to 'N20081218-2000': Elapsed Process = 0.62s is not within [0%, 110%] of 18ms) | -5,706.2% [±36.6] (Performance criteria not met when compared to 'N20081218-2000': Elapsed Process = 0.93s is not within [0%, 110%] of 16ms) |
| ApiDescriptionTests#testCleanVisit() | +1.6% [±1.3] | +0.3% [±0.9] | +1.4% [±2.0] |
| EnumIncrementalBuildTests#testIncremantalEnum() | -7.0% [±1.5] | -7.9% [±1.2] | -9.6% [±1.2] |
| FullSourceBuildTests#testCleanFullBuild() | +4.7% [±1.1] | +2.7% [±0.9] | +5.8% [±0.7] |
| FullSourceBuildTests#testFullBuild() | +5.1% [±1.7] | +3.6% [±1.0] | +6.7% [±0.8] |
| IncrementalBuildTests#testIncrementalBuildAll() | -14.5% [±1.5] (Performance criteria not met when compared to 'R-3.4-200806172000_200906121800': Elapsed Process = 0.99s is not within [0%, 110%] of 0.86s) | -8.9% [±1.3] | -2.3% [±0.9] |
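The deviation, standard error, and acceptance criterion described in the legend can be sketched as follows. This is an illustrative reimplementation, not the Eclipse performance tooling itself; all function names here (`deviation`, `standard_error`, `within_criteria`) are made up for the example.

```python
import math

def deviation(build_time, baseline_time):
    """Relative deviation of the build against the baseline.

    Becomes infinite (or NaN) when baseline_time is 0, which the
    legend reports as an 'undefined' result.
    """
    return (build_time - baseline_time) / baseline_time

def standard_error(build_stddev, baseline_stddev, n, baseline_time):
    """Combined standard error of the deviation, relative to the baseline."""
    return math.sqrt(build_stddev ** 2 / n + baseline_stddev ** 2 / n) / baseline_time

def within_criteria(build_time, baseline_time, upper=1.10):
    """Acceptance check matching messages like
    'Elapsed Process = 0.74s is not within [0%, 110%] of 23ms'."""
    return 0.0 <= build_time <= upper * baseline_time

# Example with the first failing scenario's values: 0.74s vs. a 23ms baseline.
dev = deviation(0.74, 0.023)
print(f"deviation: {dev:+.1%}")          # a deviation far beyond the 10% threshold
print(within_criteria(0.74, 0.023))      # False: 0.74s is not within 110% of 23ms
```

Note that a rounded percentage like those in the table can differ slightly from this direct computation, since the report's underlying measurements carry more precision than the displayed values.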