Performance of org.eclipse.compare: M20080911-1700 relative to 3.3 (200806171530)
RHEL 4.0 Sun 1.4.2_10 (3 GHz 2.5 GB)
Win XP Sun 1.4.2_10 (3 GHz 2 GB)
RHEL 3.0 Sun 1.4.2_10 (3 GHz 2 GB)
Win XP Sun 1.4.2_10 (2 GHz 512 MB)
RHEL 3.0 Sun 1.4.2_10 (2 GHz 512 MB)
Scenario Status
The following table gives a complete but compact view of performance results for the component.
Each line of the table shows the results for one scenario on all machines.
A scenario name appears in bold when its results are also displayed in the fingerprints,
and is prefixed with '*' when the scenario has no results in the last baseline run.
The following information is displayed for each test (i.e. in each cell):
an icon showing whether the test passes or fails and whether its result is reliable.
The legend for this icon is:
Green: marks a successful result, meaning the test shows neither a significant performance regression nor a significant standard error
Red: marks a failing result, meaning the test shows a significant performance regression (more than 10%)
Gray: marks a failing result (see above) accompanied by a comment explaining the regression
Yellow: marks a failing or successful result with a significant standard error (more than 3%)
Black: marks an undefined result, meaning the deviation for this test is not a number (NaN) or is infinite (this happens when the reference value equals 0)
"n/a": marks a test with no performance results
the value of the deviation from the baseline, as a percentage (i.e. (build_test_time - baseline_test_time) / baseline_test_time)
the value of the standard error of this deviation, as a percentage (i.e. sqrt(build_test_stddev^2 / N + baseline_test_stddev^2 / N) / baseline_test_time); both formulas and the color thresholds are illustrated in the sketch after this list
When a test has only one measurement, the standard error cannot be computed and is replaced with '[n/a]'.
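The sketch below shows, in plain Java, how a cell's numbers could be derived from raw timings and mapped to the icon colors above, using the formulas and thresholds quoted in this legend. All names (ResultCell, Status, classify, etc.) are hypothetical illustrations, not the actual Eclipse report generator; the two yellow variants and the gray "commented" state are collapsed into single cases for brevity.

public final class ResultCell {

    // Thresholds quoted in the legend above.
    private static final double DEVIATION_THRESHOLD = 0.10; // 10% regression
    private static final double ERROR_THRESHOLD = 0.03;     // 3% standard error

    enum Status { GREEN, RED, YELLOW, BLACK }

    // Deviation from the baseline:
    // (build_test_time - baseline_test_time) / baseline_test_time
    static double deviation(double buildTime, double baselineTime) {
        return (buildTime - baselineTime) / baselineTime;
    }

    // Standard error of the deviation:
    // sqrt(build_test_stddev^2 / N + baseline_test_stddev^2 / N) / baseline_test_time
    // With a single measurement the stddev itself is undefined, which is
    // why the report prints '[n/a]' in that case.
    static double standardError(double buildStddev, double baselineStddev,
                                int n, double baselineTime) {
        return Math.sqrt(buildStddev * buildStddev / n
                + baselineStddev * baselineStddev / n) / baselineTime;
    }

    // Maps a cell's numbers to the icon colors of the legend.
    static Status classify(double deviation, double standardError) {
        if (Double.isNaN(deviation) || Double.isInfinite(deviation)) {
            return Status.BLACK;          // baseline time was 0
        }
        if (standardError > ERROR_THRESHOLD) {
            return Status.YELLOW;         // measurement is unreliable
        }
        // Per the formula above, a positive deviation means the build
        // is slower than the baseline.
        return deviation > DEVIATION_THRESHOLD ? Status.RED : Status.GREEN;
    }

    public static void main(String[] args) {
        double dev = deviation(1.15, 1.00);               // +15%: a regression
        double err = standardError(0.02, 0.02, 10, 1.00); // ~0.9% error
        System.out.printf("deviation=%.1f%%, error=%.1f%% -> %s%n",
                dev * 100, err * 100, classify(dev, err)); // prints RED
    }
}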
Hints:
hover over the image of a failing test to see the complete error message
click on a test's image to see its complete and detailed results