Introduction

The Benchmarking framework was introduced into the g-Eclipse Service Job Framework to allow users to benchmark grid resources. Since the framework allows users to execute a variety of benchmarks, a communication protocol between the benchmark executable and the g-Eclipse Benchmarking framework was defined, specifying the format of benchmark input and output. This protocol provides a layer of abstraction that allows the framework to interface with any benchmark.

For the communication between the g-Eclipse Benchmarking Framework and the benchmark executable, Grid Benchmark Description Language (GBDL) files define both the input and the output. GBDL is a language created to describe both the configuration and the output of a benchmark. Therefore, benchmarks destined for use with g-Eclipse should accept their input and return their output in GBDL format.

To avoid re-writing and porting existing benchmarks to conform to the g-Eclipse Benchmarking Framework communication protocol, an alternative approach was used: existing benchmarks are wrapped in scripts that provide the required interface between the benchmarks and the framework. These wrapper scripts perform the following operations:

  1. Compile the benchmark's source code (if this is required).
  2. Parse the benchmark parameters from the input GBDL file and translate them to appropriate command line arguments for the benchmark executable.
  3. Execute the benchmark with the command line arguments retrieved in the previous step, and record the output it produces.
  4. Parse the benchmark's output to extract the measured values (metrics).
  5. Use information from the input GBDL file and the metrics to create the output GBDL file.
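The five steps above can be sketched as a small Python skeleton. The function names and the assumed GBDL layout (`<argument name="..." value="..."/>` elements) are illustrative assumptions, not the toolset's actual API, which the real `utils.Benchmarker` class encapsulates:

```python
# Sketch of a wrapper script's control flow (hypothetical helper names;
# the real toolset bundles most of these steps in utils.Benchmarker).
import re
import subprocess
import xml.etree.ElementTree as ET

def parse_gbdl_arguments(gbdl_text):
    # Step 2: read benchmark parameters from the input GBDL file.
    # Assumption: parameters appear as <argument name="..." value="..."/>
    # elements; the actual GBDL schema may differ.
    root = ET.fromstring(gbdl_text)
    return {a.get('name'): a.get('value') for a in root.iter('argument')}

def run_benchmark(executable, params):
    # Step 3: translate parameters into command line arguments and execute.
    args = [executable] + ['%s=%s' % (k, v) for k, v in params.items()]
    return subprocess.run(args, capture_output=True, text=True).stdout

def extract_metrics(output):
    # Step 4: pull "name: value unit" pairs out of the raw output.
    pattern = re.compile(r'(\w+):\s*([\d.]+)\s*(\S+)')
    return [(m.group(1), float(m.group(2)), m.group(3))
            for m in pattern.finditer(output)]
```

Step 5, writing the output GBDL file, would then combine the input parameters with the extracted metrics.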

Since most of the tasks performed by the wrapper scripts are similar, a toolset has been created that carries out most of the aforementioned steps and allows new benchmarks to be created with ease. The toolset includes:

Getting Started

Obtaining the required components

The first step in creating a g-Eclipse benchmark is to download the g-Eclipse benchmark toolset from here. The archive contains a benchmark template, the required Python libraries, and tools to create self-extractable, self-executable benchmark files.

The exact contents of the template are the following:

Setting up the workspace

The next step in creating a g-Eclipse benchmark is to extract the g-Eclipse benchmark toolset. After extracting the archive, browse to the toolset/installer/payload/ folder and put the benchmark's source code, along with any files the benchmark requires, in this folder. Next, edit the run.py file and set the details of the benchmark that is about to be created.

Editing the run.py file

Inside the benchmark template, the run.py file used to create the Bonnie benchmark is provided. This file gives a complete layout of what a run.py file should contain. The step-by-step guide below indicates what needs to be modified in the run.py file.

Changing the Benchmark Type

Change the 'bonnie' string to the name of your benchmark in benchmarker = utils.Benchmarker('input.gbdl', 'bonnie'). This name is used to distinguish the benchmark from other benchmark executables.

Executing system commands

For any system commands, such as extracting source code or compiling the benchmark, the benchmarker.run_command function can be used. This function takes two arguments: the first is the command itself and the second is a descriptive name for it.
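A minimal stand-in illustrates the described behaviour; the signature is inferred from the text above and may differ from the toolset's real implementation:

```python
# Hypothetical stand-in for benchmarker.run_command: runs a shell
# command, labels it with a descriptive name, and returns its output.
import subprocess

def run_command(command, description):
    # Announce the step under its descriptive name, then run it
    # through the shell and hand back whatever it printed.
    print('Running step: %s' % description)
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout

# Typical preparation steps for a wrapped benchmark might look like:
# run_command('tar xzf bonnie.tar.gz', 'extract sources')
# run_command('make -C bonnie', 'compile benchmark')
```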

Specifying Benchmark Parameters

You can specify command line arguments for the benchmark that can be manipulated through the input GBDL using the benchmarker.add_supported_arguments function. This function accepts a list of Argument objects. The system supports the following two types of Argument objects:
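The following self-contained sketch shows how such a mapping from GBDL parameters to command line flags could work. The Argument and Benchmarker classes here are simplified stand-ins for illustration, not the toolset's actual classes:

```python
# Hypothetical stand-in: map GBDL parameter names to command line flags.
class Argument:
    def __init__(self, name, flag, default=None):
        self.name = name        # parameter name expected in the input GBDL
        self.flag = flag        # command line flag it translates to
        self.default = default  # value used when the GBDL omits it

class Benchmarker:
    def __init__(self):
        self.supported = []

    def add_supported_arguments(self, arguments):
        # Accepts a list of Argument objects, as described above.
        self.supported.extend(arguments)

    def get_argument_str(self, gbdl_values):
        # Build the flag string from GBDL-supplied values (or defaults).
        parts = []
        for arg in self.supported:
            value = gbdl_values.get(arg.name, arg.default)
            if value is not None:
                parts.append('%s %s' % (arg.flag, value))
        return ' '.join(parts)

benchmarker = Benchmarker()
benchmarker.add_supported_arguments([
    Argument('file_size', '-s', default='1024'),
    Argument('directory', '-d'),
])
print(benchmarker.get_argument_str({'directory': '/tmp'}))  # -s 1024 -d /tmp
```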

Executing the Benchmark

To run the benchmark, use benchmarker.run_command and append the command line arguments obtained from the benchmarker.get_argument_str function. The benchmark's output is obtained as the function's return value.

Parsing the output and recording metrics

The regular expression tools provided by Python can be used to parse the benchmark output and extract the needed metrics. Metrics are added with the benchmarker.add_metric function, which accepts four arguments: a name for the metric, the metric's unit, the metric's data type, and the metric's value.
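A runnable sketch of this step, using Python's re module on some made-up Bonnie-style output; add_metric here is a hypothetical stand-in that just collects the four described arguments:

```python
# Extract metrics from sample benchmark output with regular expressions
# and record them the way add_metric is described to (name, unit,
# data type, value). Stand-in implementation for illustration only.
import re

metrics = []

def add_metric(name, unit, data_type, value):
    metrics.append({'name': name, 'unit': unit,
                    'type': data_type, 'value': value})

sample_output = 'Sequential write: 84.2 MB/s\nSequential read: 91.7 MB/s'
for match in re.finditer(r'Sequential (\w+):\s*([\d.]+)\s*MB/s',
                         sample_output):
    add_metric('seq_' + match.group(1), 'MB/s', 'float',
               float(match.group(2)))
```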

Creating a g-Eclipse Benchmark self-extractable, self-executable file

Browse to toolset/installer/ and run builder.sh. This will tar the payload folder and create the self-extractable, self-executable selfextract.bsx file. This is the final step of the g-Eclipse benchmark creation.
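For the curious, a .bsx file conventionally consists of a shell extraction stub with a gzipped tarball appended to it. The following Python sketch mimics what builder.sh plausibly does, under that assumption, using a throwaway payload; it is not the shipped builder:

```python
# Hypothetical sketch of building a .bsx: concatenate a shell stub
# with a gzipped tarball of the payload folder, then mark it executable.
import os
import tarfile
import tempfile

work = tempfile.mkdtemp()

# Create a throwaway payload/ folder standing in for the real one.
payload = os.path.join(work, 'payload')
os.makedirs(payload)
with open(os.path.join(payload, 'data.txt'), 'w') as f:
    f.write('hello\n')

# Tar and gzip the payload, as builder.sh is described to do.
archive = os.path.join(work, 'payload.tar.gz')
with tarfile.open(archive, 'w:gz') as tar:
    tar.add(payload, arcname='payload')

# Prepend a shell stub; a real stub would untar the appended archive.
stub = '#!/bin/sh\n# stub that would extract the appended archive\n'
bsx = os.path.join(work, 'selfextract.bsx')
with open(bsx, 'wb') as out:
    out.write(stub.encode())
    with open(archive, 'rb') as tgz:
        out.write(tgz.read())
os.chmod(bsx, 0o755)
```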