The Benchmarking framework was introduced into the g-Eclipse Service Job Framework to allow users to benchmark Grid resources. Because the framework lets users execute a variety of benchmarks, a communication protocol between the benchmark executable and the g-Eclipse Benchmarking framework was defined, specifying the format of the benchmark input and output. This communication protocol provides a layer of abstraction that allows the framework to interface with any benchmark.
For the communication between the g-Eclipse Benchmarking Framework and the benchmark executable, Grid Benchmark Description Language (GBDL) files are used to define both the input and the output. GBDL is a language created to describe both the configuration and the output of a benchmark. Benchmarks destined to be used with g-Eclipse should therefore accept their input in GBDL format and return their output in GBDL format.
To avoid re-writing and porting existing benchmarks to conform to the g-Eclipse Benchmarking Framework communication protocol, an alternative approach was used: existing benchmarks are wrapped in scripts that provide the required interface between the benchmarks and the framework. These wrapper scripts perform the following operations:
Since most of the tasks performed by the wrapper scripts are similar, a toolset has been created that carries out most of the aforementioned tasks and allows new benchmarks to be created with ease. The toolset includes:
The first step in creating a g-Eclipse benchmark is to download the g-Eclipse benchmark toolset from here. The archive contains a template of a benchmark, including the needed Python libraries, as well as tools to create self-extractable, self-executable benchmark files.
The exact contents of the template are the following:
- installer/payload/handlers.py
- installer/payload/models.py
- installer/payload/run.py
- installer/payload/utils.py
- installer/builder.sh

Uncompress

The next step in creating a g-Eclipse benchmark is to extract the g-Eclipse benchmark toolset. After extracting the file, browse to the toolset/installer/payload/ folder. Put the benchmark's source code and any files required by the benchmark in this folder. Next, edit the run.py file and set the details for the benchmark that is about to be created.
Inside the benchmark template, a run.py file that was used to create the Bonnie benchmark is provided. This file gives a complete layout of what should be included in the run.py file. The step-by-step guide below indicates what needs to be modified in the run.py file.
Change the Bonnie string to the name of the
benchmark in benchmarker = utils.Benchmarker('input.gbdl',
'bonnie'). The name is used to distinguish the benchmark from other
benchmark executables.
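For instance, for a hypothetical benchmark called mybench, the renamed line could look as follows. The Benchmarker stub below only mimics the constructor described above so that the example is self-contained; the real class is provided by the toolset's utils.py:

```python
# Stand-in for the toolset's utils.Benchmarker, sketched here only so the
# example runs on its own; the real class lives in installer/payload/utils.py.
class Benchmarker:
    def __init__(self, gbdl_file, name):
        self.gbdl_file = gbdl_file  # path to the input GBDL file
        self.name = name            # identifies this benchmark executable

# In run.py, change 'bonnie' to your benchmark's name, e.g. for a
# hypothetical benchmark called "mybench":
benchmarker = Benchmarker('input.gbdl', 'mybench')
```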
For any system commands, such as source code extraction and benchmark compilation, the benchmarker.run_command function can be used. This function takes two arguments: the first is the command and the second is a descriptive name for the command.
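The extraction and compilation steps might look like the sketch below. The run_command function here is a minimal stand-in, assuming the toolset's version shells out and returns the command's output; the real implementation is in the toolset's utils.py, and the tar/make commands in the comments are only illustrative:

```python
import subprocess

# Minimal stand-in for benchmarker.run_command: execute a shell command and
# return its output; the descriptive name labels the step.
def run_command(command, description):
    print('step: ' + description)
    return subprocess.check_output(command, shell=True).decode()

# Typical wrapper-script steps would be, for example:
#   run_command('tar xzf bonnie.tar.gz', 'extract sources')
#   run_command('make', 'compile benchmark')
out = run_command('echo done', 'demo step')
```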
You can specify command line arguments for the benchmark that can be manipulated through the input GBDL using the benchmarker.add_supported_arguments function. This function accepts a list of Argument objects. The system supports the following two types of Argument objects:
FlagArgument specifies command line arguments that modify the behaviour of the benchmark by their existence or absence. The constructor of the object accepts three arguments: a name that is used to identify the argument, a true_representation that is a Python format string to be used if the flag is true, and an optional false_representation to be used if the flag is false.

ValueArgument specifies command line arguments that have an associated value. The constructor accepts two arguments: a name which is used to identify the argument, and a representation that is a Python format string with a string replacement position where the provided value will be substituted.

To run the benchmark, use benchmarker.run_command. Append to the command the arguments that you can obtain by using the benchmarker.get_argument_str function. The benchmark's output can be obtained as the return value of the function.
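Putting the two Argument types together, the sketch below shows how an argument string could be assembled from GBDL-supplied values. The classes are simplified stand-ins for the toolset's FlagArgument and ValueArgument (their constructor signatures follow the text above), and the render method and the fast/size arguments are invented for illustration; the real assembly is done by benchmarker.get_argument_str in utils.py:

```python
# Simplified stand-in for the toolset's FlagArgument class.
class FlagArgument:
    def __init__(self, name, true_representation, false_representation=''):
        self.name = name
        self.true_representation = true_representation
        self.false_representation = false_representation

    def render(self, value):
        # emit one representation or the other depending on the flag
        return self.true_representation if value else self.false_representation

# Simplified stand-in for the toolset's ValueArgument class.
class ValueArgument:
    def __init__(self, name, representation):
        self.name = name
        self.representation = representation  # e.g. '-s %s'

    def render(self, value):
        # substitute the GBDL-supplied value into the format string
        return self.representation % value

# A hypothetical benchmark taking a "fast" flag and a "size" value:
args = [FlagArgument('fast', '-f'), ValueArgument('size', '-s %s')]
values = {'fast': True, 'size': '1024'}  # would normally come from input.gbdl
arg_str = ' '.join(a.render(values[a.name]) for a in args)
print(arg_str)  # -f -s 1024
```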
The regex tools provided by Python can be used to parse the benchmark output and extract the needed metrics. Metrics can be added using the benchmarker.add_metric function. The function accepts four arguments: a name for the metric, the metric's unit, the metric's data type and the metric's value.
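As a sketch, a metric could be extracted from a hypothetical output line with Python's re module as follows; the output line, regex and metric name are made up for illustration, while the add_metric call in the comment follows the four-argument form described above:

```python
import re

# Hypothetical benchmark output line to parse:
output = 'Sequential read: 142.7 MB/s'

# Extract the numeric value with a regular expression.
match = re.search(r'Sequential read:\s+([\d.]+)\s+MB/s', output)
value = match.group(1)

# With the toolset, the value would then be registered via
#   benchmarker.add_metric('seq_read', 'MB/s', 'float', value)
# (name, unit, data type, value).
print(value)  # 142.7
```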
Browse to toolset/installer/ and run builder.sh. This will tar the payload folder and create the self-extractable, self-executable selfextract.bsx file. This is the final step of the g-Eclipse benchmark creation.